diff --git a/published/20190312 When the web grew up- A browser story.md b/published/20190312 When the web grew up- A browser story.md new file mode 100644 index 0000000000..a53a5babfb --- /dev/null +++ b/published/20190312 When the web grew up- A browser story.md @@ -0,0 +1,59 @@ +[#]: collector: (lujun9972) +[#]: translator: (XYenChi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13130-1.html) +[#]: subject: (When the web grew up: A browser story) +[#]: via: (https://opensource.com/article/19/3/when-web-grew) +[#]: author: (Mike Bursell https://opensource.com/users/mikecamel) + +Web 的成长,就是一篇浏览器的故事 +====== + +> 互联网诞生之时我的个人故事。 + +![](https://img.linux.net.cn/data/attachment/album/202102/18/161753qb8tytkc6bnxbavn.jpg) + +最近,我和大家 [分享了][1] 我在 1994 年获得英国文学和神学学位离开大学后,如何在一个人们还不知道 Web 服务器是什么的世界里成功找到一份运维 Web 服务器的工作。我说的“世界”,并不仅仅指的是我工作的机构,而是泛指所有地方。Web 那时当真是全新的 —— 人们还正尝试理出头绪。 + +那并不是说我工作的地方(一家学术出版社)特别“懂” Web。这是个大部分人还在用 28.8K 猫(调制解调器,俗称“猫”)访问网页的世界。我还记得我拿到 33.6K 猫时有多激动。至少上下行速率不对称的日子已经过去了,[^1] 以前 1200/300 的带宽描述特别常见。这意味着(在同一家机构的)印刷人员制作的设计复杂、色彩缤纷、纤毫毕现的文档是完全不可能放在 Web 上的。我不能允许在网站的首页出现大于 40k 的 GIF 图片,这对我们的许多访问者来说是很难接受的。大于大约 60k 图片的会作为独立的图片,以缩略图链接过去。 + +如果说市场部只有这一点不喜欢,那是绝对是轻描淡写了。更糟的是布局问题。“浏览器决定如何布局文档,”我一遍又一遍地解释,“你可以使用标题或者段落,但是文档在页面上如何呈现并不取决于文档,而是取决于渲染器!”他们想控制这些,想要不同颜色的背景。后来明白了那些不能实现。我觉得我就像是参加了第一次讨论层叠样式表(CSS)的 W3C 会议,并进行了激烈地争论。关于文档编写者应控制布局的建议真令人厌恶。[^2] CSS 花了一些时间才被人们采用,与此同时,关心这些问题的人搭上了 PDF 这种到处都是安全问题的列车。 + +如何呈现文档不是唯一的问题。作为一个实体书出版社,对于市场部来说,拥有一个网站的全部意义在于,让客户(或者说潜在的客户)不仅知道一本书的内容,而且知道买这本书需要花多少钱。但这有一个问题,你看,互联网,包括快速发展的万维网,是开放的,是所有都免费的自由之地,没有人会在意钱;事实上,在那里谈钱是要回避和避免的。 + +我和主流“网民”的看法一致,认为没必要把价格信息放在线上。我老板,以及机构里相当多的人都持相反的意见。他们觉得消费者应该能够看到书要花多少钱。他们也觉得我的银行经理也会想看到我的账户里每个月进了多少钱,如果我不认同他们的观点的话,那我的收入就可能堪忧。 + +幸运的是,在我被炒鱿鱼之前,我已经自己认清了一些 —— 可能是在我开始迈入 Web 的几星期之后,Web 已经发生变化,有其他人公布他们的产品价格信息。这些新来者通常被那些从早期就开始运行 Web 服务器的老派人士所看不起,[^3] 但很明显,风向是往那边吹的。然而,这并不意味着我们的网站就赢得了战争。作为一个学术出版社,我们和大学共享一个域名(在 “ac.uk” 下)。大学不太相信发布价格信息是合适的,直到出版社的一些资深人士指出,普林斯顿大学出版社正在这样做,如果我们不做……看起来是不是有点傻? 
+ +有趣的事情还没完。在我担任站点管理员(“webmaster@…”)的短短几个月后,我们和其他很多网站一样开始看到了一种令人担忧的趋势。某些访问者可以轻而易举地让我们的 Web 服务器跪了。这些访问者使用了新的网页浏览器:网景浏览器(Netscape)。网景浏览器实在太恶劣了,它居然是多线程的。 + +这为什么是个问题呢?在网景浏览器之前,所有的浏览器都是单线程。它们一次只进行一个连接,所以即使一个页面有五张 GIF 图,[^4] 也会先请求 HTML 基本文件进行解析,然后下载第一张 GIF,完成,接着第二张,完成,如此类推。事实上,GIF 的顺序经常出错,使得页面加载得非常奇怪,但这也是常规思路。而粗暴的网景公司的人决定,它们可以同时打开多个连接到 Web 服务器,比如说,可以同时请求所有的 GIF!为什么这是个问题呢?好吧,问题就在于大多数 Web 服务器都是单线程的。它们不是设计来一次进行多个连接的。确实,我们运行的 HTTP 服务的软件(MacHTTP)是单线程的。尽管我们花钱购买了它(最初是共享软件),但我们用的这版无法同时处理多个请求。 + +互联网上爆发了大量讨论。这些网景公司的人以为他们是谁,能改变世界的运作方式?它应该如何工作?大家分成了不同阵营,就像所有的技术争论一样,双方都用各种技术热词互丢。问题是,网景浏览器不仅是多线程的,它也比其他的浏览器更好。非常多 Web 服务器代码维护者,包括 MacHTTP 作者 Chuck Shotton 在内,开始坐下来认真地在原有代码基础上更新了多线程测试版。几乎所有人立马转向测试版,它们变得稳定了,最终,浏览器要么采用了这种技术,变成多线程,要么就像所有过时产品一样销声匿迹了。[^6] + +对我来说,这才是 Web 真正成长起来的时候。既不是网页展示的价格,也不是设计者能定义你能在网页上看到什么,[^8] 而是浏览器变得更易用,以及成千上万的浏览者向数百万浏览者转变的网络效应,使天平向消费者而不是生产者倾斜。在我的旅程中,还有更多故事,我将留待下次再谈。但从这时起,我的雇主开始看我们的月报,然后是周报、日报,并意识到这将是一件大事,真的需要关注。 + +[^1]: 它们又是怎么回来的? +[^2]: 你可能不会惊讶,我还是在命令行里最开心。 +[^3]: 大约六个月前。 +[^4]: 莽撞,没错,但它确实发生了 [^5] +[^5]: 噢,不,是 GIF 或 BMP,JPEG 还是个好主意,但还没有用上。 +[^6]: 没有真正的沉寂:总有一些坚持他们的首选解决方案具有技术上的优势,并哀叹互联网的其他人都是邪恶的死硬爱好者。 [^7] +[^7]: 我不是唯一一个说“我还在用 Lynx”的人。 +[^8]: 我会指出,为那些有各种无障碍需求的人制造严重而持续的问题。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/when-web-grew + +作者:[Mike Bursell][a] +选题:[lujun9972][b] +译者:[XYenChi](https://github.com/XYenChi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mikecamel +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/11/how-web-was-won diff --git a/published/20190404 Intel formally launches Optane for data center memory caching.md b/published/20190404 Intel formally launches Optane for data center memory caching.md new file mode 100644 index 0000000000..f07eb56caa --- /dev/null +++ b/published/20190404 Intel formally launches Optane for data 
center memory caching.md @@ -0,0 +1,69 @@ +[#]: collector: (lujun9972) +[#]: translator: (ShuyRoy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13109-1.html) +[#]: subject: (Intel formally launches Optane for data center memory caching) +[#]: via: (https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +英特尔 Optane:用于数据中心内存缓存 +====== + +![](https://img.linux.net.cn/data/attachment/album/202102/12/111720yq1rvxcncjdsjb0g.jpg) + +> 英特尔推出了包含 3D Xpoint 内存技术的 Optane 持久内存产品线。英特尔的这个解决方案介于 DRAM 和 NAND 之间,以此来提升性能。 + +![Intel][1] + +英特尔在 2019 年 4 月的[大规模数据中心活动][2]中正式推出 Optane 持久内存产品线。它已经问世了一段时间,但是目前的 Xeon 服务器处理器还不能充分利用它。而新的 Xeon 8200 和 9200 系列可以充分利用 Optane 持久内存的优势。 + +由于 Optane 是英特尔的产品(与美光合作开发),所以意味着 AMD 和 ARM 的服务器处理器不能够支持它。 + +正如[我之前所说的][3],OptaneDC 持久内存采用与美光合作研发的 3D Xpoint 内存技术。3D Xpoint 是一种比 SSD 更快的非易失性内存,速度几乎与 DRAM 相近,而且它具有 NAND 闪存的持久性。 + +第一个 3D Xpoint 产品是被称为英特尔“尺子”的 SSD,因为它们被设计成细长的样子,很像尺子的形状。它们被设计成这样是为了适合 1U 的服务器机架。在发布的公告中,英特尔推出了新的利用四层单元(QLC)3D NAND 内存的英特尔 SSD D5-P4325 [尺子][7] SSD,可以在 1U 的服务器机架上放 1PB 的存储。 + +OptaneDC 持久内存的可用容量最初可以通过使用 128GB 的 DIMM 达到 512GB。英特尔数据中心集团执行副总裁及总经理 Navin Shenoy 说:“OptaneDC 持久内存可达到的容量是 DRAM 的 2 到 4 倍。” + +他说:“我们希望服务器系统的容量可以扩展到每个插槽 4.5TB,即 8 个插槽共 36TB,这是我们第一代 Xeon 可扩展芯片的 3 倍。” + +### 英特尔 Optane 内存的使用和速度 + +Optane 有两种不同的运行模式:内存模式和应用直连模式。内存模式是将 DRAM 放在 Optane 内存之上,将 DRAM 作为 Optane 内存的缓存。应用直连模式是将 DRAM 和 OptaneDC 持久内存一起作为内存来最大化总容量。并不是每个工作负载都适合这种配置,所以应该在对延迟不敏感的应用程序中使用。正如英特尔推广的那样,Optane 的主要使用场景是内存模式。 + +几年前,当 3D Xpoint 最初发布时,英特尔宣称 Optane 的速度是 NAND 的 1000 倍,耐用性是 NAND 的 1000 倍,密度潜力是 DRAM 的 10 倍。这虽然有点夸张,但这些因素确实很令人着迷。 + +在 256B 的连续 4 个缓存行中使用 Optane 内存可以达到 8.3GB/秒的读速度和 3.0GB/秒的写速度。与 SATA SSD 的 500MB/秒左右的读/写速度相比,可以看到性能有很大提升。请记住,Optane 充当内存,所以它会缓存被频繁访问的 SSD 中的内容。 + +这是了解 OptaneDC 的关键。它能将非常大的数据集存储在离内存非常近的位置,因此 CPU 可以以很低的延迟访问这些数据,从而尽量减少访问较慢的存储子系统(无论是 SSD 还是 HDD)所带来的延迟。现在,它提供了一种可能性,即把多个 TB
的数据放在非常接近 CPU 的地方,以实现更快的访问。 + +### Optane 内存的一个挑战 + +唯一真正的挑战是 Optane 插进内存所在的 DIMM 插槽。现在有些主板的每个 CPU 有多达 16 个 DIMM 插槽,但是这仍然是客户和设备制造商之间需要平衡的电路板空间:Optane 还是内存。有一些 Optane 驱动采用了 PCIe 接口进行连接,可以减轻主板上内存的拥挤。 + +3D Xpoint 由于它写数据的方式,提供了比传统的 NAND 闪存更高的耐用性。英特尔承诺 Optane 提供 5 年保修期,而很多 SSD 只提供 3 年保修期。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[RiaXu](https://github.com/ShuyRoy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/06/intel-optane-persistent-memory-100760427-large.jpg +[2]: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html +[3]: https://www.networkworld.com/article/3279271/intel-launches-optane-the-go-between-for-memory-and-storage.html +[4]: https://www.networkworld.com/article/3290421/why-nvme-users-weigh-benefits-of-nvme-accelerated-flash-storage.html +[5]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb +[6]: https://www.networkworld.com/newsletters/signup.html#nww-fsb +[7]: https://www.theregister.co.uk/2018/02/02/ruler_and_miniruler_ssd_formats_look_to_banish_diskstyle_drives/ +[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 +[9]: https://www.facebook.com/NetworkWorld/ +[10]: https://www.linkedin.com/company/network-world diff --git a/published/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md b/published/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md new file mode 100644 index 
0000000000..c80af774f7 --- /dev/null +++ b/published/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md @@ -0,0 +1,305 @@ +[#]: collector: (lujun9972) +[#]: translator: (chensanle) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13107-1.html) +[#]: subject: (Tmux Command Examples To Manage Multiple Terminal Sessions) +[#]: via: (https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +基于 Tmux 的多会话终端管理示例 +====== + +![](https://img.linux.net.cn/data/attachment/album/202102/11/101058ffso6wzzw94wm2ng.jpg) + +我们已经了解到如何通过 [GNU Screen][2] 进行多会话管理。今天,我们将要领略另一个著名的管理会话的命令行实用工具 **Tmux**。类似 GNU Screen,Tmux 是一个帮助我们在单一终端窗口中创建多个会话、并在同一时间运行多个应用程序或进程的终端复用工具。Tmux 自由、开源并且跨平台,支持 Linux、OpenBSD、FreeBSD、NetBSD 以及 Mac OS X。本文将讨论 Tmux 在 Linux 系统下的高频用法。 + +### Linux 下安装 Tmux + +Tmux 可以从绝大多数 Linux 发行版的官方仓库中获取。 + +在 Arch Linux 或它的变种系统下,执行下列命令来安装: + +``` +$ sudo pacman -S tmux +``` + +Debian、Ubuntu 或 Linux Mint: + +``` +$ sudo apt-get install tmux +``` + +Fedora: + +``` +$ sudo dnf install tmux +``` + +RHEL 和 CentOS: + +``` +$ sudo yum install tmux +``` + +SUSE/openSUSE: + +``` +$ sudo zypper install tmux +``` + +至此,Tmux 已经安装完成。接下来我们来看一些 Tmux 使用示例。 + +### Tmux 命令示例:多会话管理 + +Tmux 所有命令默认的前置命令都是 `Ctrl+b`,使用前牢记这个快捷键即可。 + +> **注意**:**Screen** 的前置命令是 `Ctrl+a`。
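+如果你已经习惯了 GNU Screen 的 `Ctrl+a`,也可以通过 Tmux 的配置文件更换前置命令。下面是一段示例配置(假设配置文件位于默认的 `~/.tmux.conf`,具体按键可按个人喜好调整):
+
+```
+# 将前置命令从 Ctrl+b 换成 Ctrl+a(示例配置)
+unbind C-b
+set -g prefix C-a
+# 连按两次 Ctrl+a 时,把该按键原样发送给窗格内运行的程序
+bind C-a send-prefix
+```
+
+保存后,执行 `tmux source-file ~/.tmux.conf` 即可让正在运行的 Tmux 重新加载该配置。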
+ +#### 创建 Tmux 会话 + +在终端中运行如下命令创建 Tmux 会话并附着进入: + +``` +tmux +``` + +抑或, + +``` +tmux new +``` + +一旦进入 Tmux 会话,你将看到一个 **位于底部的绿色边栏**,如下图所示。 + +![][3] + +*创建 Tmux 会话* + +这个绿色的边栏能让你很容易知道自己当前是否身处 Tmux 会话当中。 + +#### 退出 Tmux 会话 + +退出当前 Tmux 会话仅需要使用 `Ctrl+b` 和 `d`。无需同时触发这两个快捷键,依次按下 `Ctrl+b` 和 `d` 即可。 + +退出当前会话后,你将能看到如下输出: + +``` +[detached (from session 0)] +``` + +#### 创建有名会话 + +如果使用多个会话,你很可能会混淆运行在多个会话中的应用程序。这种情况下,我们可以在创建会话时为其命名。譬如需要 web 相关服务的会话,就创建一个名称为 “webserver”(或任意一个其他名称)的 Tmux 会话。 + +``` +tmux new -s webserver +``` + +这里是新的 Tmux 有名会话: + +![][4] + +*拥有自定义名称的 Tmux 会话* + +如上述截图所示,这个 Tmux 会话的名称已经被标注为 “webserver”。如此,你就可以轻易地区分应用程序运行在哪个会话中。 + +要退出会话,按下 `Ctrl+b` 和 `d`。 + +#### 查看 Tmux 会话清单 + +查看 Tmux 会话清单,执行: + +``` +tmux ls +``` + +示例输出: + +![][5] + +*列出 Tmux 会话* + +如你所见,我们开启了两个 Tmux 会话。 + +#### 创建非附着会话 + +有时候,你可能只想创建会话,但并不想自动进入该会话。 + +要创建一个非附着会话,并赋予名称 “ostechnix”,运行: + +``` +tmux new -s ostechnix -d +``` + +上述命令将会创建一个名为 “ostechnix” 的会话,但是并不会附着进入。 + +你可以通过使用 `tmux ls` 命令验证: + +![][6] + +*创建非附着会话* + +#### 附着进入 Tmux 会话 + +通过如下命令,你可以附着进入最后一个被创建的会话: + +``` +tmux attach +``` + +抑或, + +``` +tmux a +``` + +如果你想附着进入任意一个指定的有名会话,譬如 “ostechnix”,运行: + +``` +tmux attach -t ostechnix +``` + +或者,简写为: + +``` +tmux a -t ostechnix +``` + +#### 关闭 Tmux 会话 + +当你用完某个 Tmux 会话或者不再需要它时,可以通过如下命令关闭: + +``` +tmux kill-session -t ostechnix +``` + +当身处该会话时,使用 `Ctrl+b` 以及 `x`,再按下 `y` 即可关闭会话。 + +可以通过 `tmux ls` 命令验证。 + +要关闭 Tmux 服务器下的所有会话,运行: + +``` +tmux kill-server +``` + +当心!这将在没有任何警告的情况下终止所有 Tmux 会话,即便某些会话中还有运行中的任务。 + +如果不存在活跃的 Tmux 会话,将看到如下输出: + +``` +$ tmux ls +no server running on /tmp/tmux-1000/default +``` + +#### 切割 Tmux 窗口 + +窗口可以被切割成多个小窗口,在 Tmux 中,这些小窗口叫做 “Tmux 窗格”。每个窗格中可以同时运行不同的程序,你也可以同时与所有的窗格进行交互。每个窗格都可以在不影响其他窗格的前提下调整大小、移动位置和关闭。我们可以以水平、垂直或者二者混合的方式切割屏幕。 + +##### 水平切割窗格 + +欲水平切割窗格,使用 `Ctrl+b` 和 `"`(双引号)。 + +![][7] + +*水平切割 Tmux 窗格* + +可以使用该组合键进一步切割窗格。 + +##### 垂直切割窗格 + +要垂直切割窗格,使用 `Ctrl+b` 和 `%`。 + +![][8] + +*垂直切割 Tmux 窗格* + +##### 水平、垂直混合切割窗格 + +我们也可以同时采用水平和垂直的方案切割窗格。看看如下截图: + +![][9] + +*切割 Tmux 窗格* +
+首先,我通过 `Ctrl+b` `"` 水平切割,之后通过 `Ctrl+b` `%` 垂直切割下方的窗格。 + +如你所见,每个窗格下我运行了不同的程序。 + +##### 切换窗格 + +通过 `Ctrl+b` 和方向键(上下左右)切换窗格。 + +##### 发送命令给所有窗格 + +之前的案例中,我们在每个窗格中运行了三个不同命令。其实,也可以发送相同的命令给所有窗格。 + +为此,使用 `Ctrl+b` 然后键入如下命令,之后按下回车: + +``` +:setw synchronize-panes +``` + +现在在任意窗格中键入任何命令。你将看到相同命令影响了所有窗格。 + +##### 交换窗格 + +使用 `Ctrl+b` 和 `o` 交换窗格。 + +##### 展示窗格号 + +使用 `Ctrl+b` 和 `q` 展示窗格号。 + +##### 终止窗格 + +要关闭窗格,直接键入 `exit` 并且按下回车键。或者,按下 `Ctrl+b` 和 `x`。你会看到确认信息。按下 `y` 关闭窗格。 + +![][10] + +*关闭窗格* + +##### 放大和缩小 Tmux 窗格 + +我们可以将 Tmux 窗格放大到当前终端窗口的全尺寸,以获得更好的文本可视性,并查看更多的内容。当你需要更多的空间或专注于某个特定的任务时,这很有用。在完成该任务后,你可以将 Tmux 窗格缩小(取消放大)到其正常位置。更多详情请看以下链接。 + +- [如何缩放 Tmux 窗格以提高文本可见度?](https://ostechnix.com/how-to-zoom-tmux-panes-for-better-text-visibility/) + +#### 自动启动 Tmux 会话 + +当通过 SSH 与远程系统工作时,在 Tmux 会话中运行一个长期运行的进程总是一个好的做法。因为,它可以防止你在网络连接突然中断时失去对运行进程的控制。避免这个问题的一个方法是自动启动 Tmux 会话。更多详情,请参考以下链接。 + +- [通过 SSH 登录远程系统时自动启动 Tmux 会话](https://ostechnix.com/autostart-tmux-session-on-remote-system-when-logging-in-via-ssh/) + +### 总结 + +这个阶段下,你已经获得了基本的 Tmux 技能来进行多会话管理,更多细节,参阅 man 页面。 + +``` +$ man tmux +``` + +GNU Screen 和 Tmux 工具都能透过 SSH 很好的管理远程服务器。学习 Screen 和 Tmux 命令,像个行家一样,彻底通过这些工具管理远程服务器。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[chensanle](https://github.com/chensanle) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-720x340.png +[2]: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/ +[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-session.png +[4]: https://www.ostechnix.com/wp-content/uploads/2019/06/Named-Tmux-session.png +[5]: 
https://www.ostechnix.com/wp-content/uploads/2019/06/List-Tmux-sessions.png +[6]: https://www.ostechnix.com/wp-content/uploads/2019/06/Create-detached-sessions.png +[7]: https://www.ostechnix.com/wp-content/uploads/2019/06/Horizontal-split.png +[8]: https://www.ostechnix.com/wp-content/uploads/2019/06/Vertical-split.png +[9]: https://www.ostechnix.com/wp-content/uploads/2019/06/Split-Panes.png +[10]: https://www.ostechnix.com/wp-content/uploads/2019/06/Kill-panes.png diff --git a/published/20190626 Where are all the IoT experts going to come from.md b/published/20190626 Where are all the IoT experts going to come from.md new file mode 100644 index 0000000000..5b10af4a11 --- /dev/null +++ b/published/20190626 Where are all the IoT experts going to come from.md @@ -0,0 +1,72 @@ +[#]: collector: (lujun9972) +[#]: translator: (scvoet) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13115-1.html) +[#]: subject: (Where are all the IoT experts going to come from?) +[#]: via: (https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html) +[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) + +物联网专家都从何而来? 
+====== + +> 物联网(IoT)的快速发展催生了对跨职能专家进行培养的需求,这些专家可以将传统的网络和基础设施专业知识与数据库和报告技能相结合。 + +![Kevin \(CC0\)][1] + +如果物联网(IoT)要实现其宏伟的诺言,它将需要一支由聪明、熟练、**训练有素**的工人组成的大军来实现这一切。而现在,这些人将从何而来尚不清楚。 + +这就是我为什么有兴趣同资产优化软件公司 [AspenTech][2] 的产品管理、研发高级总监 Keith Flynn 通过电子邮件交流的原因,他说,当处理大量属于物联网范畴的新技术时,你需要能够理解如何配置技术和解释数据的人。Flynn 认为,现有的教育机构对物联网特定课程的需求越来越大,这同时也给了以物联网为重点、提供完善课程的新私立学院以机会。 + +Flynn 跟我说,“在未来,物联网项目将与如今普遍的数据管理和自动化项目有着巨大的不同……未来需要更全面的技能和跨领域的能力,这样我们才会说同一种语言。” + +Flynn 补充说,随着物联网每年增长 30%,它将不再仅仅依赖几种特定的技能,“从传统的部署技能(如网络和基础设施)到数据库和报告技能,坦白说,甚至是基础数据科学,都将需要一起理解和使用。” + +### 召集所有物联网顾问 + +Flynn 预测,“受过物联网教育的人的第一个大机会将会是在咨询领域,随着咨询公司适应行业趋势或被淘汰……拥有受过物联网培训的员工将有助于他们在物联网项目中的定位,并在新的业务线——物联网咨询——中占得先机。” + +对初创企业和小型公司而言,这个问题尤为严重。“组织越大,他们越有可能雇佣到不同技术类别的人,”Flynn 这样说道,“但对于较小的组织和较小的物联网项目来说,你则需要一个能同时兼顾的人。” + +两者兼而有之?还是**一应俱全?**物联网“需要将所有知识和技能组合在一起”,Flynn 说道,“并不是所有技能都是全新的,只是在此之前从来没有被归纳在一起或放在一起教授过。” + +### 未来的物联网专家 + +Flynn 表示,真正的物联网专业技术是从基础的仪器仪表和电气技能开始的,这能帮助工人发明新的无线发射器或改进技术,以延长电池寿命、降低功耗。 + +“IT 技能,如网络、IP 寻址、子网掩码、蜂窝和卫星也是物联网的关键需求”,Flynn 说。他还认为物联网需要数据库管理技能以及云管理和安全专业知识,“特别是当高级过程控制(APC)将传感器数据直接发送到数据库和数据湖等事情成为常态时。” + +### 物联网专家又从何而来?
+ +Flynn 说,标准化的正规教育课程将是确保毕业生或证书持有者掌握一套正确技能的最佳途径。他甚至还列出了一个样本课程。“按时间顺序开始,从基础知识开始,比如 [电气仪表] 和测量。然后讲授网络知识,数据库管理和云计算课程都应在此之后开展。这个学位甚至可以循序渐进至现有的工程课程中,这可能需要两年时间……来完成物联网部分的学业。” + +虽然企业培训也能发挥作用,但实际上却是“说起来容易做起来难”,Flynn 这样警告,“这些培训需要针对组织的具体努力而推动。” + +当然,现在市面上已经有了 [大量的在线物联网培训课程和证书课程][5]。但追根到底,这一工作全都依赖于工人自身的推断。 + +“在这个世界上,随着科技不断改变行业,提升技能是非常重要的”,Flynn 说,“如果这种提升技能的推动力并不是来源于你的雇主,那么在线课程和认证将会是提升你自己很好的一个方式。我们只需要创建这些课程……我甚至可以预见组织将与提供这些课程的高等教育机构合作,让他们的员工更好地开始。当然,物联网课程的挑战在于它需要不断发展以跟上科技的发展。” + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[Percy (@scvoet)](https://github.com/scvoet) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/07/programmer_certification-skills_code_devops_glasses_student_by-kevin-unsplash-100764315-large.jpg +[2]: https://www.aspentech.com/ +[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html +[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career +[5]: https://www.google.com/search?client=firefox-b-1-d&q=iot+training +[6]: https://www.networkworld.com/article/3254185/internet-of-things/tips-for-securing-iot-on-your-network.html#nww-fsb +[7]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb +[8]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb +[9]: https://www.networkworld.com/newsletters/signup.html#nww-fsb +[10]: https://www.facebook.com/NetworkWorld/ 
+[11]: https://www.linkedin.com/company/network-world diff --git a/published/20190810 EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World.md b/published/20190810 EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World.md new file mode 100644 index 0000000000..54c752a913 --- /dev/null +++ b/published/20190810 EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: (Chao-zhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13096-1.html) +[#]: subject: (EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World) +[#]: via: (https://itsfoss.com/endeavouros/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +EndeavourOS:填补 Antergos 在 ArchLinux 世界留下的空白 +====== + +![](https://img.linux.net.cn/data/attachment/album/202102/07/225558rdb85bmm6uumro71.jpg) + +我相信我们的大多数读者都知道 [Antergos 项目的终结][2]。在这一消息宣布之后,Antergos 社区的成员创建了几个发行版来继承 Antergos。今天,我们将着眼于 Antergos 的“精神继承人”之一:[EndeavourOS][3]。 + +### EndeavourOS 不是 Antergos 的分支 + +在我们开始之前,我想非常明确地指出,EndeavourOS 并不是一个 Antergos 的复刻版本。开发者们以 Antergos 为灵感,创建了一个基于 Arch 的轻量级发行版。 + +![Endeavouros First Boot][4] + +根据 [这个项目网站][5] 的说法,EndeavourOS 的诞生是因为 Antergos 社区的人们想要保持 Antergos 的精神。他们的目标很简单:“让 Arch 拥有一个易于使用的安装程序和一个友好、有帮助的社区,在掌握系统的过程中能够有一个社区可以依靠。” + +与许多基于 Arch 的发行版不同,EndeavourOS 打算像 [原生 Arch][5] 那样使用,“所以没有一键式安装你喜欢的应用程序的解决方案,也没有一堆你最终不需要的预装应用程序。”对于大多数人来说,尤其是那些刚接触 Linux 和 Arch 的人,会有一个学习曲线,但 EndeavourOS 的目标是建立一个大型友好的社区,鼓励人们提出问题并了解他们的系统。 + +![Endeavouros Installing][6] + +### 正在进行的工作 + +EndeavourOS 在 [2019 年 5 月 23 日首次宣布成立][8] 随后 [在 7 月 15 日发布第一个版本][7]。不幸的是,这意味着开发人员无法将他们计划的所有功能全部整合进来。(LCTT 译注:本文原文发表于 2019 年,而现在,EndeavourOS 还在持续活跃着。) + +例如,他们想要一个类似于 Antergos 的在线安装,但却遇到了[当前选项的问题][9]。“Cnchi 运行在 Antergos 生态系统之外会造成严重的问题,需要彻底重写才能发挥作用。RebornOS 的 Fenix 安装程序还没有完全成型,需要更多时间才能正常运行。”于是现在,EndeavourOS 将会和 [Calamares 安装程序 ][10] 一起发布。 + +EndeavourOS 会提供 [比 Antergos 少的东西][9]:它的存储库比 Antergos 
小,尽管他们会附带一些 AUR 包。他们的目标是提供一个接近 Arch 却不是原生 Arch 的系统。 + +![Endeavouros Updating With Kalu][12] + +开发者[进一步声明][13]: + +> “Linux,特别是 Arch,核心精神是自由选择,我们提供了一个基本的安装,让你在一个精细的层面上方便地探索各项选择。我们永远不会强行为你作决定,比如为你安装 GUI 应用程序,如 Pamac,甚至采用沙盒解决方案,如 Flatpak 或 Snaps。想安装成什么样子完全取决于你,这是我们与 Antergos 或 Manjaro 的主要区别,但与 Antergos 一样,如果你安装的软件包遇到问题,我们会尽力帮助你。” + +### 体验 EndeavourOS + +我在 [VirtualBox][14] 中安装了 EndeavourOS,并且研究了一番。当我第一次启动时,我看到一个窗口,里面有指向 EndeavourOS 网站安装相关内容的链接。它还有一个安装按钮和一个手动分区工具。Calamares 安装程序的安装过程非常顺利。 + +在我重新启动到新安装的 EndeavourOS 之后,迎接我的是一个彩色主题的 XFCE 桌面。我还收到了一堆通知消息。我使用过的大多数基于 Arch 的发行版都带有一个 GUI 包管理器,比如 [pamac][15] 或 [octopi][16],以进行系统更新。EndeavourOS 配有 [kalu][17](kalu 是 “Keeping Arch Linux Up-to-date” 的缩写)。它可以更新软件包、查看 Arch Linux 新闻、更新 AUR 包等等。一旦它检查到有更新,它就会显示通知消息。 + +我浏览了一下菜单,看看默认安装了什么。默认的安装并不多,连办公套件都没有。他们想让 EndeavourOS 成为一块空白画布,让任何人都可以创建他们想要的系统。他们正朝着正确的方向前进。 + +![Endeavouros Desktop][18] + +### 总结思考 + +EndeavourOS 还很年轻,第一个稳定版本发布还没有多久。它缺少一些东西,最重要的是一个在线安装程序。话虽如此,我们也无法估计它能够走到哪一步。(LCTT 译注:本文发表于 2019 年) + +虽然它不是 Antergos 的精确复刻,但 EndeavourOS 希望复制 Antergos 最重要的部分——热情友好的社区。很多时候,Linux 社区对初学者似乎是不受欢迎甚至是完全敌对的。我看到越来越多的人试图与这种消极情绪作斗争,并将更多的人引入 Linux。随着 EndeavourOS 团队把焦点放在社区建设上,我相信一个伟大的发行版将会诞生。 + +如果你当前正在使用 Antergos,有一种方法可以让你[不用重装系统就切换到 EndeavourOS][20]。 + +如果你想要一个 Antergos 的精确复刻,我建议你去看看 [RebornOS][21]。他们目前正在开发一个名为 Fenix 的安装程序,以替代 Cnchi。 + +你试过 EndeavourOS 了吗?你的感受如何?
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/endeavouros/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[Chao-zhi](https://github.com/Chao-zhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-logo.png?ssl=1 +[2]: https://itsfoss.com/antergos-linux-discontinued/ +[3]: https://endeavouros.com/ +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-first-boot.png?resize=800%2C600&ssl=1 +[5]: https://endeavouros.com/info-2/ +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-installing.png?resize=800%2C600&ssl=1 +[7]: https://endeavouros.com/endeavouros-first-stable-release-has-arrived/ +[8]: https://forum.antergos.com/topic/11780/endeavour-antergos-community-s-next-stage +[9]: https://endeavouros.com/what-to-expect-on-the-first-release/ +[10]: https://calamares.io/ +[11]: https://itsfoss.com/veltos-linux/ +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-updating-with-kalu.png?resize=800%2C600&ssl=1 +[13]: https://endeavouros.com/second-week-after-the-stable-release/ +[14]: https://itsfoss.com/install-virtualbox-ubuntu/ +[15]: https://aur.archlinux.org/packages/pamac-aur/ +[16]: https://octopiproject.wordpress.com/ +[17]: https://github.com/jjk-jacky/kalu +[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-desktop.png?resize=800%2C600&ssl=1 +[19]: https://itsfoss.com/clear-linux/ +[20]: https://forum.endeavouros.com/t/how-to-switch-from-antergos-to-endevouros/105/2 +[21]: https://rebornos.org/ diff --git a/published/20191227 The importance of consistency in your Python code.md b/published/20191227 The importance of consistency in your Python code.md new file mode 100644 index 
0000000000..5ff2d11caf --- /dev/null +++ b/published/20191227 The importance of consistency in your Python code.md @@ -0,0 +1,72 @@ +[#]: collector: (lujun9972) +[#]: translator: (stevenzdg988) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13082-1.html) +[#]: subject: (The importance of consistency in your Python code) +[#]: via: (https://opensource.com/article/19/12/zen-python-consistency) +[#]: author: (Moshe Zadka https://opensource.com/users/moshez) + +Python 代码一致性的重要性 +====== + +> 本文是 Python 之禅特别系列的一部分,重点是第十二、十三和十四原则:模糊性和明确性的作用。 + +![](https://img.linux.net.cn/data/attachment/album/202102/03/231758po1lcicxmxyjxlba.jpg) + +最小惊喜原则是设计用户界面时的一个 [准则][2]。它是说,当用户执行某项操作时,程序执行的事情应该使用户尽量少地感到意外。这和孩子们喜欢一遍又一遍地读同一本书的原因是一样的:没有什么比能够预测并让预测成真更让人欣慰的了。 + +在开发 [ABC 语言][3](Python 的灵感来源)的过程中,一个重要的见解是,编程语言的设计也是用户界面,需要使用与 UI 设计者相同的工具来设计。值得庆幸的是,从那以后,越来越多的语言采用了 UI 设计中的可供性(affordance)和人体工程学(ergonomics)的概念,即使它们的应用并不严格。 + +这就引出了 [Python 之禅][4] 中的三个原则。 + +### 面对歧义,要拒绝猜测的诱惑In the face of ambiguity, refuse the temptation to guess + +`1 + "1"` 的结果应该是什么?
`"11"` 和 `2` 都是猜测。这种表达方式是*歧义的*:无论如何做都会让一些人感到惊讶。 + +一些语言选择猜测。在 JavaScript 中,结果为 `"11"`。在 Perl 中,结果为 `2`。在 C 语言中,结果自然是空字符串。面对歧义,JavaScript、Perl 和 C 都在猜测。 + +在 Python 中,这会引发 `TypeError`:这不是能忽略的错误。捕获 `TypeError` 是非典型的:它通常将终止程序或至少终止当前任务(例如,在大多数 Web 框架中,它将终止对当前请求的处理)。 + +Python 拒绝猜测 `1 + "1"` 的含义。程序员必须以明确的意图编写代码:`1 + int("1")`,即 `2`;或者 `str(1) + "1"`,即 `"11"`;或 `"1"[1:]`,这将是一个空字符串。通过拒绝猜测,Python 使程序更具可预测性。 + +### 尽量找一种,最好是唯一一种明显的解决方案There should be one—and preferably only one—obvious way to do it + +预测也会出现偏差。给定一个任务,你能预知要实现该任务的代码吗?当然,不可能完美地预测。毕竟,编程是一项具有创造性的任务。 + +但是,不必有意提供多种冗余方式来实现同一目标。从某种意义上说,某些解决方案或许 “更好” 或 “更 Python 化(Pythonic)”。 + +对 Python 美学的欣赏,部分是因为可以就哪种解决方案更好进行健康的辩论。甚至可以在保留不同观点的前提下继续编程,“求同存异”也是可以的。但在这一切之下,必须有一种这样的认识,即正确的解决方案终将会出现。我们必须希望,通过商定实现目标的最佳方法,而最终达成真正的一致。 + +### 虽然这种方式一开始可能并不明显(除非你是荷兰人)Although that way may not be obvious at first (unless you're Dutch) + +这是一个重要的警告:首先,实现任务的最佳方法往往*不*明显。观念在不断发展。Python 也在进化。逐块读取文件的最好方法,可能要等到 Python 3.8 的 [walrus 运算符][5](`:=`)出现才确定。 + +逐块读取文件这样常见的任务,在 Python 存在近 *30 年* 的历史中并没有 “唯一的最佳方法”。 + +当我在 1998 年从 Python 1.5.2 开始使用 Python 时,没有一种逐行读取文件的最佳方法。多年来,知道字典中是否有某个键的最佳方法是使用 `has_key()` 方法,直到 `in` 操作符出现才发生改变。 + +要意识到,找到实现目标的那一种(也是唯一一种)方法可能需要 30 年时间来尝试各种替代方案,正因如此,Python 才能不断地寻找这些方法。这种历史观认为,为了做一件事用上 30 年是可以接受的,但对于美国这个存在仅 200 多年的国家来说,人们常常会感到不习惯。 + +从 Python 之禅的这一部分来看,荷兰人,无论是 Python 的创造者 [Guido van Rossum][6] 还是著名的计算机科学家 [Edsger W.
Dijkstra][7],他们的世界观是不同的。要理解这一部分,某种程度的欧洲人对时间的感受是必不可少的。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/12/zen-python-consistency + +作者:[Moshe Zadka][a] +选题:[lujun9972][b] +译者:[stevenzdg988](https://github.com/stevenzdg988) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0 (Two animated computers waving one missing an arm) +[2]: https://www.uxpassion.com/blog/the-principle-of-least-surprise/ +[3]: https://en.wikipedia.org/wiki/ABC_(programming_language) +[4]: https://www.python.org/dev/peps/pep-0020/ +[5]: https://www.python.org/dev/peps/pep-0572/#abstract +[6]: https://en.wikipedia.org/wiki/Guido_van_Rossum +[7]: http://en.wikipedia.org/wiki/Edsger_W._Dijkstra diff --git a/published/20191228 The Zen of Python- Why timing is everything.md b/published/20191228 The Zen of Python- Why timing is everything.md new file mode 100644 index 0000000000..6ebab5320a --- /dev/null +++ b/published/20191228 The Zen of Python- Why timing is everything.md @@ -0,0 +1,45 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13103-1.html) +[#]: subject: (The Zen of Python: Why timing is everything) +[#]: via: (https://opensource.com/article/19/12/zen-python-timeliness) +[#]: author: (Moshe Zadka https://opensource.com/users/moshez) + +Python 之禅:时机最重要 +====== + +> 这是 Python 之禅特别系列的一部分,重点是第十五和第十六条原则:现在与将来。 + +![](https://img.linux.net.cn/data/attachment/album/202102/09/231557dkuzz22ame4ja2jj.jpg) + +Python 一直在不断发展。Python 社区对特性请求的渴求是无止境的,对现状也总是不满意的。随着 Python 越来越流行,这门语言的变化会影响到更多的人。 + +确定什么时候该进行变化往往很难,但 [Python 之禅][2] 给你提供了指导。 + +### 
现在有总比永远没有好Now is better than never + +总有一种诱惑,就是要等到事情完美才去做,虽然,它们永远没有完美的一天。当它们看起来已经“准备”得足够好了,那就大胆采取行动吧,去做出改变吧。无论如何,变化总是发生在*某个*现在:拖延的唯一作用就是把它移到未来的“现在”。 + +### 虽然将来总比现在好Although never is often better than right now + +然而,这并不意味着应该急于求成。从测试、文档、用户反馈等方面决定发布的标准。在变化就绪之前的“现在”,并不是一个好时机。 + +这不仅对 Python 这样的流行语言是个很好的经验,对你个人的小开源项目也是如此。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/12/zen-python-timeliness + +作者:[Moshe Zadka][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6 (Clock, pen, and notepad on a desk) +[2]: https://www.python.org/dev/peps/pep-0020/ diff --git a/published/20191229 How to tell if implementing your Python code is a good idea.md b/published/20191229 How to tell if implementing your Python code is a good idea.md new file mode 100644 index 0000000000..1a2358a77c --- /dev/null +++ b/published/20191229 How to tell if implementing your Python code is a good idea.md @@ -0,0 +1,47 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13116-1.html) +[#]: subject: (How to tell if implementing your Python code is a good idea) +[#]: via: (https://opensource.com/article/19/12/zen-python-implementation) +[#]: author: (Moshe Zadka https://opensource.com/users/moshez) + +如何判断你的 Python 代码实现是否合适? 
+====== + +> 这是 Python 之禅特别系列的一部分,重点介绍第十七和十八条原则:困难和容易。 + +![](https://img.linux.net.cn/data/attachment/album/202102/14/120518rjkwvjs76p9d1911.jpg) + +一门语言并不是抽象存在的。每一个语言功能都必须用代码来实现。承诺一些功能是很容易的,但实现起来就会很麻烦。复杂的实现意味着更多潜在的 bug,甚至更糟糕的是,会带来日复一日的维护负担。 + +对于这个难题,[Python 之禅][2] 中有答案。 + +### 如果一个实现难以解释,那就是个坏思路If the implementation is hard to explain, it's a bad idea + +编程语言最重要的是可预测性。有时我们用抽象的编程模型来解释某个结构的语义,而这些模型与实现并不完全对应。然而,最好的释义就是*解释该实现*。 + +如果该实现很难解释,那就意味着这条路行不通。 + +### 如果一个实现易于解释,那它可能是一个好思路If the implementation is easy to explain, it may be a good idea + +仅仅因为某事容易,并不意味着它值得。然而,一旦解释清楚,判断它是否是一个好思路就容易得多。 + +这也是为什么这个原则的后半部分故意含糊其辞的原因:没有什么可以肯定一定是好的,但总是可以讨论一下。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/12/zen-python-implementation + +作者:[Moshe Zadka][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager) +[2]: https://www.python.org/dev/peps/pep-0020/ diff --git a/published/20191230 Namespaces are the shamash candle of the Zen of Python.md b/published/20191230 Namespaces are the shamash candle of the Zen of Python.md new file mode 100644 index 0000000000..dc98e7afc0 --- /dev/null +++ b/published/20191230 Namespaces are the shamash candle of the Zen of Python.md @@ -0,0 +1,58 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13123-1.html) +[#]: subject: (Namespaces are the shamash candle of the Zen of Python) +[#]: via: (https://opensource.com/article/19/12/zen-python-namespaces) +[#]: author: (Moshe 
Zadka https://opensource.com/users/moshez) + +命名空间是 Python 之禅的精髓 +====== + +> 这是 Python 之禅特别系列的一部分,重点是一个额外的原则:命名空间。 + +![](https://img.linux.net.cn/data/attachment/album/202102/16/105800d64ceaeertt4u4ee.jpg) + +著名的光明节Hanukkah有八个晚上的庆祝活动。然而,光明节的灯台有九根蜡烛:八根普通的蜡烛和总是偏移的第九根蜡烛。它被称为 “shamash” 或 “shamos”,大致可以翻译为“仆人”或“看门人”的意思。 + +shamos 是点燃所有其它蜡烛的蜡烛:它是唯一一支可以用火的蜡烛,而不仅仅是观看。当我们结束 Python 之禅系列时,我看到命名空间提供了类似的作用。 + +### Python 中的命名空间 + +Python 使用命名空间来处理一切。虽然简单,但它们是稀疏的数据结构 —— 这通常是实现目标的最佳方式。 + +> *命名空间* 是一个从名字到对象的映射。 +> +> —— [Python.org][2] + +模块是命名空间。这意味着正确地预测模块语义通常只需要熟悉 Python 命名空间的工作方式。类是命名空间,对象是命名空间。函数可以访问它们的本地命名空间、父命名空间和全局命名空间。 + +这个简单的模型,即用 `.` 操作符访问一个对象,而这个对象又通常(但并不总是)会进行某种字典查找,这使得 Python 很难优化,但很容易解释。 + +事实上,一些第三方模块也采取了这个准则,并以此来运行。例如,[variants][3] 包把函数变成了“相关功能”的命名空间。这是一个很好的例子,说明 [Python 之禅][4] 是如何激发新的抽象的。 + +### 结语 + +感谢你和我一起参加这次以光明节为灵感的 [我最喜欢的语言][5] 的探索。 + +静心参禅,直至悟道。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/12/zen-python-namespaces + +作者:[Moshe Zadka][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building) +[2]: https://docs.python.org/3/tutorial/classes.html +[3]: https://pypi.org/project/variants/ +[4]: https://www.python.org/dev/peps/pep-0020/ +[5]: https://opensource.com/article/19/10/why-love-python diff --git a/published/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md b/published/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md new file mode 100644 index 0000000000..8171a81ede --- /dev/null +++ b/published/20200419 Getting 
Started With Pacman Commands in Arch-based Linux Distributions.md @@ -0,0 +1,244 @@ +[#]: collector: (lujun9972) +[#]: translator: (Chao-zhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13099-1.html) +[#]: subject: (Getting Started With Pacman Commands in Arch-based Linux Distributions) +[#]: via: (https://itsfoss.com/pacman-command/) +[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/) + +Arch Linux 的 pacman 命令入门 +====== + +> 这本初学者指南向你展示了在 Linux 中可以使用 pacman 命令做什么,如何使用它们来查找新的软件包,安装和升级新的软件包,以及清理你的系统。 + +[pacman][1] 包管理器是 [Arch Linux][2] 和其他主要发行版如 Red Hat 和 Ubuntu/Debian 之间的主要区别之一。它结合了简单的二进制包格式和易于使用的 [构建系统][3]。`pacman` 的目标是方便地管理软件包,无论它是来自 [官方库][4] 还是用户自己构建的软件库。 + +如果你曾经使用过 Ubuntu 或基于 debian 的发行版,那么你可能使用过 `apt-get` 或 `apt` 命令。`pacman` 在 Arch Linux 中是同样的命令。如果你 [刚刚安装了 Arch Linux][5],在安装 Arch Linux 后,首先要做的 [几件事][6] 之一就是学习使用 `pacman` 命令。 + +在这个初学者指南中,我将解释一些基本的 `pacman` 命令的用法,你应该知道如何用这些命令来管理你的基于 Archlinux 的系统。 + +### Arch Linux 用户应该知道的几个重要的 pacman 命令 + +![](https://img.linux.net.cn/data/attachment/album/202102/09/111411uqadijqdd8afgk56.jpg) + +与其他包管理器一样,`pacman` 可以将包列表与软件库同步,它能够自动解决所有所需的依赖项,以使得用户可以通过一个简单的命令下载和安装软件。 + +#### 通过 pacman 安装软件 + +你可以用以下形式的代码来安装一个或者多个软件包: + +``` +pacman -S 软件包名1 软件包名2 ... 
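# 例如,一次安装多个软件包(vim 与 git 仅为演示用的包名):
# pacman -S vim git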
+``` + +![安装一个包][8] + +`-S` 选项的意思是同步synchronization,它的意思是 `pacman` 在安装之前先与软件库进行同步。 + +`pacman` 数据库根据安装的原因将安装的包分为两组: + + * **显式安装**:由 `pacman -S` 或 `-U` 命令直接安装的包 + * **依赖安装**:由于被其他显式安装的包所 [依赖][9],而被自动安装的包。 + +#### 卸载已安装的软件包 + +卸载一个包,并且删除它的所有依赖。 + +``` +pacman -R 软件包名 +``` + +![移除一个包][10] + +删除一个包,以及其不被其他包所需要的依赖项: + +``` +pacman -Rs 软件包名 +``` + +如果需要这个依赖的包已经被删除了,这条命令可以删除所有不再需要的依赖项: + +``` +pacman -Qdtq | pacman -Rs - +``` + +#### 升级软件包 + +`pacman` 提供了一个简单的办法来 [升级 Arch Linux][11]。你只需要一条命令就可以升级所有已安装的软件包。这可能需要一段时间,这取决于系统的新旧程度。 + +以下命令可以同步存储库数据库,*并且* 更新系统的所有软件包,但不包括不在软件库中的“本地安装的”包: + +``` +pacman -Syu +``` + + * `S` 代表同步 + * `y` 代表更新本地存储库 + * `u` 代表系统更新 + +也就是说,同步到中央软件库(主程序包数据库),刷新主程序包数据库的本地副本,然后执行系统更新(通过更新所有有更新版本可用的程序包)。 + +![系统更新][12] + +> 注意! +> +> 对于 Arch Linux 用户,在系统升级前,建议你访问 [Arch-Linux 主页][2] 查看最新消息,以了解异常更新的情况。如果系统更新需要人工干预,主页上将发布相关的新闻。你也可以订阅 [RSS 源][13] 或 [Arch 的声明邮件][14]。 +> +> 在升级基础软件(如 kernel、xorg、systemd 或 glibc) 之前,请注意查看相应的 [论坛][15],以了解大家报告的各种问题。 +> +> 在 Arch 和 Manjaro 等滚动发行版中不支持**部分升级**。这意味着,当新的库版本被推送到软件库时,软件库中的所有包都需要根据库版本进行升级。例如,如果两个包依赖于同一个库,则仅升级一个包可能会破坏依赖于该库的旧版本的另一个包。 + +#### 用 Pacman 查找包 + +`pacman` 使用 `-Q` 选项查询本地包数据库,使用 `-S` 选项查询同步数据库,使用 `-F` 选项查询文件数据库。 + +`pacman` 可以在数据库中搜索包,包括包的名称和描述: + +``` +pacman -Ss 字符串1 字符串2 ... +``` + +![查找一个包][16] + +查找已经被安装的包: + +``` +pacman -Qs 字符串1 字符串2 ... +``` + +根据文件名在远程软包中查找它所属的包: + +``` +pacman -F 字符串1 字符串2 ... +``` + +查看一个包的依赖树: + +``` +pactree 软件包名 +``` + +#### 清除包缓存 + +`pacman` 将其下载的包存储在 `/var/cache/Pacman/pkg/` 中,并且不会自动删除旧版本或卸载的版本。这有一些优点: + + 1. 它允许 [降级][17] 一个包,而不需要通过其他来源检索以前的版本。 + 2. 
已卸载的软件包可以轻松地直接从缓存文件夹重新安装。 + +但是,有必要定期清理缓存以防止文件夹增大。 + +[pacman contrib][19] 包中提供的 [paccache(8)][18] 脚本默认情况下会删除已安装和未安装包的所有缓存版本,但最近 3 个版本除外: + +``` +paccache -r +``` + +![清除缓存][20] + +要删除当前未安装的所有缓存包和未使用的同步数据库,请执行: + +``` +pacman -Sc +``` + +要从缓存中删除所有文件,请使用清除选项两次,这是最激进的方法,不会在缓存文件夹中留下任何内容: + +``` +pacman -Scc +``` + +#### 安装本地或者第三方的包 + +安装不是来自远程存储库的“本地”包: + +``` +pacman -U 本地软件包路径.pkg.tar.xz +``` + +安装官方存储库中未包含的“远程”软件包: + +``` +pacman -U http://www.example.com/repo/example.pkg.tar.xz +``` + +### 额外内容:用 pacman 排除常见错误 + +下面是使用 `pacman` 管理包时可能遇到的一些常见错误。 + +#### 提交事务失败(文件冲突) + +如果你看到以下报错: + +``` +error: could not prepare transaction +error: failed to commit transaction (conflicting files) +package: /path/to/file exists in filesystem +Errors occurred, no packages were upgraded. +``` + +这是因为 `pacman` 检测到文件冲突,不会为你覆盖文件。 + +解决这个问题的一个安全方法是首先检查另一个包是否拥有这个文件(`pacman-Qo 文件路径`)。如果该文件属于另一个包,请提交错误报告。如果文件不属于另一个包,请重命名“存在于文件系统中”的文件,然后重新发出更新命令。如果一切顺利,文件可能会被删除。 + +你可以显式地运行 `pacman -S –overwrite 要覆盖的文件模式**,强制 `pacman` 覆盖与 给模式匹配的文件,而不是手动重命名并在以后删除属于该包的所有文件。 + +#### 提交事务失败(包无效或损坏) + +在 `/var/cache/pacman/pkg/` 中查找 `.part` 文件(部分下载的包),并将其删除。这通常是由在 `pacman.conf` 文件中使用自定义 `XferCommand` 引起的。 + +#### 初始化事务失败(无法锁定数据库) + +当 `pacman` 要修改包数据库时,例如安装包时,它会在 `/var/lib/pacman/db.lck` 处创建一个锁文件。这可以防止 `pacman` 的另一个实例同时尝试更改包数据库。 + +如果 `pacman` 在更改数据库时被中断,这个过时的锁文件可能仍然保留。如果你确定没有 `pacman` 实例正在运行,那么请删除锁文件。 + +检查进程是否持有锁定文件: + +``` +lsof /var/lib/pacman/db.lck +``` + +如果上述命令未返回任何内容,则可以删除锁文件: + +``` +rm /var/lib/pacman/db.lck +``` + +如果你发现 `lsof` 命令输出了使用锁文件的进程的 PID,请先杀死这个进程,然后删除锁文件。 + +我希望你喜欢我对 `pacman` 基础命令的介绍。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/pacman-command/ + +作者:[Dimitrios Savvopoulos][a] +选题:[lujun9972][b] +译者:[Chao-zhi](https://github.com/Chao-zhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/dimitrios/ +[b]: 
https://github.com/lujun9972 +[1]: https://www.archlinux.org/pacman/ +[2]: https://www.archlinux.org/ +[3]: https://wiki.archlinux.org/index.php/Arch_Build_System +[4]: https://wiki.archlinux.org/index.php/Official_repositories +[5]: https://itsfoss.com/install-arch-linux/ +[6]: https://itsfoss.com/things-to-do-after-installing-arch-linux/ +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/essential-pacman-commands.jpg?ssl=1 +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-S.png?ssl=1 +[9]: https://wiki.archlinux.org/index.php/Dependency +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-R.png?ssl=1 +[11]: https://itsfoss.com/update-arch-linux/ +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Syu.png?ssl=1 +[13]: https://www.archlinux.org/feeds/news/ +[14]: https://mailman.archlinux.org/mailman/listinfo/arch-announce/ +[15]: https://bbs.archlinux.org/ +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Ss.png?ssl=1 +[17]: https://wiki.archlinux.org/index.php/Downgrade +[18]: https://jlk.fjfi.cvut.cz/arch/manpages/man/paccache.8 +[19]: https://www.archlinux.org/packages/?name=pacman-contrib +[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-paccache-r.png?ssl=1 diff --git a/published/20200529 A new way to build cross-platform UIs for Linux ARM devices.md b/published/20200529 A new way to build cross-platform UIs for Linux ARM devices.md new file mode 100644 index 0000000000..9a742f2eb2 --- /dev/null +++ b/published/20200529 A new way to build cross-platform UIs for Linux ARM devices.md @@ -0,0 +1,170 @@ +[#]: collector: (lujun9972) +[#]: translator: (Chao-zhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13129-1.html) +[#]: subject: (A new way to build cross-platform UIs for Linux ARM devices) +[#]: via: (https://opensource.com/article/20/5/linux-arm-ui) +[#]: author: (Bruno Muniz 
https://opensource.com/users/brunoamuniz) + +一种为 Linux ARM 设备构建跨平台 UI 的新方法 +====== + +> AndroidXML 和 TotalCross 的运用为树莓派和其他设备创建 UI 提供了更简单的方法。 + +![](https://img.linux.net.cn/data/attachment/album/202102/18/123715oomfuuz94ioi41ii.jpg) + +为应用程序创建良好的用户体验(UX)是一项艰巨的任务,尤其是在开发嵌入式应用程序时。今天,有两种图形用户界面(GUI)工具通常用于开发嵌入式软件:它们要么涉及复杂的技术,要么非常昂贵。 + +然而,我们已经创建了一个概念验证(PoC),它提供了一种新的方法来使用现有的、成熟的工具为运行在桌面、移动、嵌入式设备和低功耗 ARM 设备上的应用程序构建用户界面(UI)。我们的方法是使用 Android Studio 绘制 UI;使用 [TotalCross][2] 在设备上呈现 Android XML;采用被称为 [KnowCode][4] 的新 [TotalCross API][3];以及使用 [树莓派 4][5] 来执行应用程序。 + +### 选择 Android Studio + +可以使用 TotalCross API 为应用程序构建一个美观的响应式用户体验,但是在 Android Studio 中创建 UI 缩短了制作原型和实际应用程序之间的时间。 + +有很多工具可以用来为应用程序构建 UI,但是 [Android Studio][6] 是全世界开发者最常使用的工具。除了它被大量采用以外,这个工具的使用也非常直观,而且它对于创建简单和复杂的应用程序都非常强大。在我看来,唯一的缺点是使用该工具所需的计算机性能,它比其他集成开发环境 (IDE) 如 VSCode 或其开源替代方案 [VSCodium][7] 要庞大得多。 + +通过思考这些问题,我们创建了一个概念验证,使用 Android Studio 绘制 UI,并使用 TotalCross 直接在设备上运行 AndroidXML。 + +### 构建 UI + +对于我们的 PoC,我们想创建一个家用电器应用程序来控制温度和其他东西,并在 Linux ARM 设备上运行。 + +![Home appliance application to control thermostat][8] + +我们想为树莓派开发我们的应用程序,所以我们使用 Android 的 [ConstraintLayout][10] 来构建 848x480(树莓派的分辨率)的固定屏幕大小的 UI,不过你可以用其他布局构建响应性 UI。 + +Android XML 为 UI 创建增加了很多灵活性,使得为应用程序构建丰富的用户体验变得容易。在下面的 XML 中,我们使用了两个主要组件:[ImageView][11] 和 [TextView][12]。 + +``` + + +``` + +TextView 元素用于向用户显示一些数据,比如建筑物内的温度。大多数 ImageView 都用作用户与 UI 交互的按钮,但它们也需要实现屏幕上组件提供的事件。 + +### 用 TotalCross 整合 + +这个 PoC 中的第二项技术是 TotalCross。我们不想在设备上使用 Android 的任何东西,因为: + + 1。我们的目标是为 Linux ARM 提供一个出色的 UI。 + 2。我们希望在设备上实现低占用。 + 3。我们希望应用程序在低计算能力的低端硬件设备上运行(例如,没有 GPU、 低 RAM 等)。 + +首先,我们使用 [VSCode 插件][13] 创建了一个空的 TotalCross 项目。接下来,我们保存了 `drawable` 文件夹中的图像副本和 `xml` 文件夹中的 Android XML 文件副本,这两个文件夹都位于 `resources` 文件夹中: + +![Home Appliance file structure][14] + +为了使用 TotalCross 模拟器运行 XML 文件,我们添加了一个名为 KnowCode 的新 TotalCross API 和一个主窗口来加载 XML。下面的代码使用 API 加载和呈现 XML: + +``` +public void initUI() { +    XmlScreenAbstractLayout xmlCont = XmlScreenFactory.create("xml / homeApplianceXML.xml"); +    
swap(xmlCont); +} +``` + +就这样!只需两个命令,我们就可以使用 TotalCross 运行 Android XML 文件。以下是 XML 如何在 TotalCross 的模拟器上执行: + +![TotalCross simulator running temperature application][15] + +完成这个 PoC 还有两件事要做:添加一些事件来提供用户交互,并在树莓派上运行它。 + +### 添加事件 + +KnowCode API 提供了一种通过 ID(`getControlByID`) 获取 XML 元素并更改其行为的方法,如添加事件、更改可见性等。 + +例如,为了使用户能够改变家中或其他建筑物的温度,我们在 UI 底部放置了加号和减号按钮,并在每次单击按钮时都会出现“单击”事件,使温度升高或降低一度: + +``` +Button plus = (Button) xmlCont.getControlByID("@+id/plus"); +Label insideTempLabel = (Label) xmlCont.getControlByID("@+id/insideTempLabel"); +plus.addPressListener(new PressListener() { + @Override + public void controlPressed(ControlEvent e) { + try { + String tempString = insideTempLabel.getText(); + int temp; + temp = Convert.toInt(tempString); + insideTempLabel.setText(Convert.toString(++temp)); + } catch (InvalidNumberException e1) { + e1.printStackTrace(); + } + } +}); +``` + +### 在树莓派 4 上测试 + +最后一步!我们在一台设备上运行了应用程序并检查了结果。我们只需要打包应用程序并在目标设备上部署和运行它。[VNC][19] 也可用于检查设备上的应用程序。 + +整个应用程序,包括资源(图像等)、Android XML、TotalCross 和 Knowcode API,在 Linux ARM 上大约是 8MB。 + +下面是应用程序的演示: + +![Application demo][20] + +在本例中,该应用程序仅为 Linux ARM 打包,但同一应用程序可以作为 Linux 桌面应用程序运行,在Android 设备 、Windows、windows CE 甚至 iOS 上运行。 + +所有示例源代码和项目都可以在 [HomeApplianceXML GitHub][21] 存储库中找到。 + +### 现有工具的新玩法 + +为嵌入式应用程序创建 GUI 并不需要像现在这样困难。这种概念证明为如何轻松地完成这项任务提供了新的视角,不仅适用于嵌入式系统,而且适用于所有主要的操作系统,所有这些系统都使用相同的代码库。 + +我们的目标不是为设计人员或开发人员创建一个新的工具来构建 UI 应用程序;我们的目标是为使用现有的最佳工具提供新的玩法。 + +你对这种新的应用程序开发方式有何看法?在下面的评论中分享你的想法。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/linux-arm-ui + +作者:[Bruno Muniz][a] +选题:[lujun9972][b] +译者:[Chao-zhi](https://github.com/Chao-zhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/brunoamuniz +[b]: https://github.com/lujun9972 +[1]: 
https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop) +[2]: https://totalcross.com/ +[3]: https://yourapp.totalcross.com/knowcode-app +[4]: https://github.com/TotalCross/KnowCodeXML +[5]: https://www.raspberrypi.org/ +[6]: https://developer.android.com/studio +[7]: https://vscodium.com/ +[8]: https://opensource.com/sites/default/files/uploads/homeapplianceapp.png (Home appliance application to control thermostat) +[9]: https://creativecommons.org/licenses/by-sa/4.0/ +[10]: https://codelabs.developers.google.com/codelabs/constraint-layout/index.html#0 +[11]: https://developer.android.com/reference/android/widget/ImageView +[12]: https://developer.android.com/reference/android/widget/TextView +[13]: https://medium.com/totalcross-community/totalcross-plugin-for-vscode-4f45da146a0a +[14]: https://opensource.com/sites/default/files/uploads/homeappliancexml.png (Home Appliance file structure) +[15]: https://opensource.com/sites/default/files/uploads/totalcross-simulator_0.png (TotalCross simulator running temperature application) +[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+button +[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+label +[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string +[19]: https://tigervnc.org/ +[20]: https://opensource.com/sites/default/files/uploads/application.gif (Application demo) +[21]: https://github.com/TotalCross/HomeApplianceXML diff --git a/published/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md b/published/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md new file mode 100644 index 0000000000..11c43fd1f5 --- /dev/null +++ b/published/20200607 Top Arch-based 
User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md @@ -0,0 +1,201 @@ +[#]: collector: (lujun9972) +[#]: translator: (Chao-zhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13104-1.html) +[#]: subject: (Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself) +[#]: via: (https://itsfoss.com/arch-based-linux-distros/) +[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/) + +9 个易用的基于 Arch 的用户友好型 Linux 发行版 +====== + +在 Linux 社区中,[Arch Linux][1] 有一群狂热的追随者。这个轻量级的发行版以 DIY 的态度提供了最前沿的更新。 + +但是,Arch 的目标用户是那些更有经验的用户。因此,它通常被认为是那些技术不够(或耐心不够)的人所无法触及的。 + +事实上,只是最开始的步骤,[安装 Arch Linux 就足以把很多人吓跑][2]。与大多数其他发行版不同,Arch Linux 没有一个易于使用的图形安装程序。安装过程中涉及到的磁盘分区,连接到互联网,挂载驱动器和创建文件系统等只用命令行工具来操作。 + +对于那些不想经历复杂的安装和设置的人来说,有许多用户友好的基于 Arch 的发行版。 + +在本文中,我将向你展示一些 Arch 替代发行版。这些发行版附带了图形安装程序、图形包管理器和其他工具,比它们的命令行版本更容易使用。 + +### 更容易设置和使用的基于 Arch 的 Linux 发行版 + +![](https://img.linux.net.cn/data/attachment/album/202102/10/112812sc42txp4eexco44x.jpg) + +请注意,这不是一个排名列表。这些数字只是为了计数的目的。排第二的发行版不应该被认为比排第七的发行版好。 + +#### 1、Manjaro Linux + +![][4] + +[Manjaro][5] 不需要任何介绍。它是几年来最流行的 Linux 发行版之一,它值得拥有。 + +Manjaro 提供了 Arch Linux 的所有优点,同时注重用户友好性和可访问性。Manjaro 既适合新手,也适合有经验的 Linux 用户。 + +**对于新手**,它提供了一个用户友好的安装程序,系统本身也设计成可以在你[最喜爱的桌面环境 ][6](DE)或窗口管理器中直接“开箱即用”。 + +**对于更有经验的用户**,Manjaro 还提供多种功能,以满足每个个人的口味和喜好。[Manjaro Architect][7] 提供了安装各种 Manjaro 风格的选项,并为那些想要完全自由地塑造系统的人提供了各种桌面环境、文件系统([最近推出的 ZFS][8]) 和引导程序的选择。 + +Manjaro 也是一个滚动发布的前沿发行版。然而,与 Arch 不同的是,Manjaro 首先测试更新,然后将其提供给用户。稳定在这里也很重要。 + +#### 2、ArcoLinux + +![][9] + +[ArcoLinux][10](以前称为 ArchMerge)是一个基于 Arch Linux 的发行版。开发团队提供了三种变体。ArcoLinux、ArcoLinuxD 和 ArcoLinuxB。 + +ArcoLinux 是一个功能齐全的发行版,附带有 [Xfce 桌面][11]、[Openbox][12] 和 [i3 窗口管理器][13]。 + +**ArcoLinuxD** 是一个精简的发行版,它包含了一些脚本,可以让高级用户安装任何桌面和应用程序。 + +**ArcoLinuxB** 是一个让用户能够构建自定义发行版的项目,同时还开发了几个带有预配置桌面的社区版本,如 Awesome、bspwm、Budgie、Cinnamon、Deepin、GNOME、MATE 和 KDE Plasma。 + +ArcoLinux 
还提供了各种视频教程,因为它非常注重学习和获取 Linux 技能。 + +#### 3、Archlabs Linux + +![][14] + +[ArchLabs Linux][15] 是一个轻量级的滚动版 Linux 发行版,基于最精简的 Arch Linux,带有 [Openbox][16] 窗口管理器。[ArchLabs][17] 在观感设计中受到 [BunsenLabs][18] 的影响和启发,主要考虑到中级到高级用户的需求。 + +#### 4、Archman Linux + +![][19] + +[Archman][20] 是一个独立的项目。Arch Linux 发行版对于没有多少 Linux 经验的用户来说通常不是理想的操作系统。要想在最小的挫折感下让事情变得更有意义,必须要有相当的背景知识。Archman Linux 的开发人员正试图改变这种评价。 + +Archman 的开发是基于对开发的理解,包括用户反馈和体验组件。根据团队过去的经验,将用户的反馈和要求融合在一起,确定路线图并完成构建工作。 + +#### 5、EndeavourOS + +![][21] + +当流行的基于 Arch 的发行版 [Antergos 在 2019 结束][22] 时,它留下了一个友好且非常有用的社区。Antergos 项目结束的原因是因为该系统对于开发人员来说太难维护了。 + +在宣布结束后的几天内,一些有经验的用户通过创建一个新的发行版来填补 Antergos 留下的空白,从而维护了以前的社区。这就是 [EndeavourOS][23] 的诞生。 + +[EndeavourOS][24] 是轻量级的,并且附带了最少数量的预装应用程序。一块近乎空白的画布,随时可以个性化。 + +#### 6、RebornOS + +![][25] + +[RebornOS][26] 开发人员的目标是将 Linux 的真正威力带给每个人,一个 ISO 提供了 15 个桌面环境可供选择,并提供无限的定制机会。 + +RebornOS 还声称支持 [Anbox][27],它可以在桌面 Linux 上运行 Android 应用程序。它还提供了一个简单的内核管理器 GUI 工具。 + +再加上 [Pacman][28]、[AUR][29],以及定制版本的 Cnchi 图形安装程序,Arch Linux 终于可以让最没有经验的用户也能够使用了。 + +#### 7、Chakra Linux + +![][30] + +一个社区开发的 GNU/Linux 发行版,它的亮点在 KDE 和 Qt 技术。[Chakra Linux][31] 不在特定日期安排发布,而是使用“半滚动发布”系统。 + +这意味着 Chakra Linux 的核心包被冻结,只在修复安全问题时才会更新。这些软件包是在最新版本经过彻底测试后更新的,然后再转移到永久软件库(大约每六个月更新一次)。 + +除官方软件库外,用户还可以安装 Chakra 社区软件库 (CCR) 的软件包,该库为官方存储库中未包含的软件提供用户制作的 PKGINFOs 和 [PKGBUILD][32] 脚本,其灵感来自于 Arch 用户软件库(AUR)。 + +#### 8、Artix Linux + +![Artix Mate Edition][33] + +[Artix Linux][34] 也是一个基于 Arch Linux 的滚动发行版,它使用 [OpenRC][35]、[runit][36] 或 [s6][37] 作为初始化工具而不是 [systemd][38]。 + +Artix Linux 有自己的软件库,但作为一个基于 `pacman` 的发行版,它可以使用 Arch Linux 软件库或任何其他衍生发行版的软件包,甚至可以使用明确依赖于 systemd 的软件包。也可以使用 [Arch 用户软件库][29](AUR)。 + +#### 9、BlackArch Linux + +![][39] + +BlackArch 是一个基于 Arch Linux 的 [渗透测试发行版][40],它提供了大量的网络安全工具。它是专门为渗透测试人员和安全研究人员创建的。该软件库包含 2400 多个[黑客和渗透测试工具 ][41],可以单独安装,也可以分组安装。BlackArch Linux 兼容现有的 Arch Linux 包。 + +### 想要真正的原版 Arch Linux 吗?可以使用图形化 Arch 安装程序简化安装 + +如果你想使用原版的 Arch Linux,但又被它困难的安装所难倒。幸运的是,你可以下载一个带有图形安装程序的 Arch Linux ISO。 + +Arch 
安装程序基本上是 Arch Linux ISO 的一个相对容易使用的基于文本的安装程序。它比裸奔的 Arch 安装容易得多。 + +#### Anarchy Installer + +![][42] + +[Anarchy installer][43] 打算为新手和有经验的 Linux 用户提供一种简单而无痛苦的方式来安装 ArchLinux。在需要的时候安装,在需要的地方安装,并且以你想要的方式安装。这就是 Anarchy 的哲学。 + +启动安装程序后,将显示一个简单的 [TUI 菜单][44],列出所有可用的安装程序选项。 + +#### Zen Installer + +![][45] + +[Zen Installer][46] 为安装 Arch Linux 提供了一个完整的图形(点击式)环境。它支持安装多个桌面环境 、AUR 以及 Arch Linux 的所有功能和灵活性,并且易于图形化安装。 + +ISO 将引导一个临场环境,然后在你连接到互联网后下载最新稳定版本的安装程序。因此,你将始终获得最新的安装程序和更新的功能。 + +### 总结 + +对于许多用户来说,基于 Arch 的发行版会是一个很好的无忧选择,而像 Anarchy 这样的图形化安装程序至少离原版的 Arch Linux 更近了一步。 + +在我看来,[Arch Linux 的真正魅力在于它的安装过程][2],对于 Linux 爱好者来说,这是一个学习的机会,而不是麻烦。Arch Linux 及其衍生产品有很多东西需要你去折腾,但是在折腾的过程中你就会进入到开源软件的世界,这里是神奇的新世界。下次再见! + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/arch-based-linux-distros/ + +作者:[Dimitrios Savvopoulos][a] +选题:[lujun9972][b] +译者:[Chao-zhi](https://github.com/Chao-zhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/dimitrios/ +[b]: https://github.com/lujun9972 +[1]: https://www.archlinux.org/ +[2]: https://itsfoss.com/install-arch-linux/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/arch-based-linux-distributions.png?ssl=1 +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/manjaro-20.jpg?ssl=1 +[5]: https://manjaro.org/ +[6]: https://itsfoss.com/best-linux-desktop-environments/ +[7]: https://itsfoss.com/manjaro-architect-review/ +[8]: https://itsfoss.com/manjaro-20-release/ +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/arcolinux.png?ssl=1 +[10]: https://arcolinux.com/ +[11]: https://www.xfce.org/ +[12]: http://openbox.org/wiki/Main_Page +[13]: https://i3wm.org/ +[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Archlabs.jpg?ssl=1 +[15]: https://itsfoss.com/archlabs-review/ +[16]: https://en.wikipedia.org/wiki/Openbox +[17]: 
https://archlabslinux.com/ +[18]: https://www.bunsenlabs.org/ +[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Archman.png?ssl=1 +[20]: https://archman.org/en/ +[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/04_endeavouros_slide.jpg?ssl=1 +[22]: https://itsfoss.com/antergos-linux-discontinued/ +[23]: https://itsfoss.com/endeavouros/ +[24]: https://endeavouros.com/ +[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/RebornOS.png?ssl=1 +[26]: https://rebornos.org/ +[27]: https://anbox.io/ +[28]: https://itsfoss.com/pacman-command/ +[29]: https://itsfoss.com/aur-arch-linux/ +[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/Chakra_Goedel_Screenshot.png?ssl=1 +[31]: https://www.chakralinux.org/ +[32]: https://wiki.archlinux.org/index.php/PKGBUILD +[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Artix_MATE_edition.png?ssl=1 +[34]: https://artixlinux.org/ +[35]: https://en.wikipedia.org/wiki/OpenRC +[36]: https://en.wikipedia.org/wiki/Runit +[37]: https://en.wikipedia.org/wiki/S6_(software) +[38]: https://en.wikipedia.org/wiki/Systemd +[39]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/BlackArch.png?ssl=1 +[40]: https://itsfoss.com/linux-hacking-penetration-testing/ +[41]: https://itsfoss.com/best-kali-linux-tools/ +[42]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/anarchy.jpg?ssl=1 +[43]: https://anarchyinstaller.org/ +[44]: https://en.wikipedia.org/wiki/Text-based_user_interface +[45]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/zen.jpg?ssl=1 +[46]: https://sourceforge.net/projects/revenge-installer/ diff --git a/published/20200615 LaTeX Typesetting - Part 1 (Lists).md b/published/20200615 LaTeX Typesetting - Part 1 (Lists).md new file mode 100644 index 0000000000..b69d418b64 --- /dev/null +++ b/published/20200615 LaTeX Typesetting - Part 1 (Lists).md @@ -0,0 +1,260 @@ +[#]: collector: "lujun9972" +[#]: translator: "rakino" +[#]: reviewer: "wxy" +[#]: publisher: 
"wxy" +[#]: url: "https://linux.cn/article-13112-1.html" +[#]: subject: "LaTeX Typesetting – Part 1 (Lists)" +[#]: via: "https://fedoramagazine.org/latex-typesetting-part-1/" +[#]: author: "Earl Ramirez https://fedoramagazine.org/author/earlramirez/" + +LaTeX 排版(1):列表 +====== + +![][1] + +本系列基于前文《[在 Fedora 上用 LaTex 和 TeXstudio 排版你的文档][2]》和《[LaTeX 基础][3]》,本文即系列的第一部分,是关于 LaTeX 列表的。 + +### 列表类型 + +LaTeX 中的列表是封闭的环境,列表中的每个项目可以取一行文字到一个完整的段落。在 LaTeX 中有三种列表类型: + + * `itemize`:无序列表unordered list/项目符号列表bullet list + * `enumerate`:有序列表ordered list + * `description`:描述列表descriptive list + +### 创建列表 + +要创建一个列表,需要在每个项目前加上控制序列 `\item`,并在项目清单前后分别加上控制序列 `\begin{<类型>}` 和 `\end`{<类型>}`(将其中的 `<类型>` 替换为将要使用的列表类型),如下例: + +#### itemize(无序列表) + +``` +\begin{itemize} + \item Fedora + \item Fedora Spin + \item Fedora Silverblue +\end{itemize} +``` + +![][4] + +#### enumerate(有序列表) + +``` +\begin{enumerate} + \item Fedora CoreOS + \item Fedora Silverblue + \item Fedora Spin +\end{enumerate} +``` + +![][5] + +#### description(描述列表) + +``` +\begin{description} + \item[Fedora 6] Code name Zod + \item[Fedora 8] Code name Werewolf +\end{description} +``` + +![][6] + +### 列表项目间距 + +可以通过在导言区加入 `\usepackage{enumitem}` 来自定义默认的间距,宏包 `enumitem` 启用了选项 `noitemsep` 和控制序列 `\itemsep`,可以在列表中使用它们,如下例所示: + +#### 使用选项 noitemsep + +将选项 `noitemsep` 封闭在方括号内,并同下文所示放在控制序列 `\begin` 之后,该选项将移除默认的间距。 + +``` +\begin{itemize}[noitemsep] + \item Fedora + \item Fedora Spin + \item Fedora Silverblue +\end{itemize} +``` + +![][7] + +#### 使用控制序列 \itemsep + +控制序列 `\itemsep` 必须以一个数字作为后缀,用以表示列表项目之间应该有多少空间。 + +``` +\begin{itemize} \itemsep0.75pt + \item Fedora Silverblue + \item Fedora CoreOS +\end{itemize} +``` + +![][8] + +### 嵌套列表 + +LaTeX 最多最多支持四层嵌套列表,如下例: + +#### 嵌套无序列表 + +``` +\begin{itemize}[noitemsep] + \item Fedora Versions + \begin{itemize} + \item Fedora 8 + \item Fedora 9 + \begin{itemize} + \item Werewolf + \item Sulphur + \begin{itemize} + \item 2007-05-31 + \item 2008-05-13 + \end{itemize} + \end{itemize} + 
\end{itemize} + \item Fedora Spin + \item Fedora Silverblue +\end{itemize} +``` + +![][9] + +#### 嵌套有序列表 + +``` +\begin{enumerate}[noitemsep] + \item Fedora Versions + \begin{enumerate} + \item Fedora 8 + \item Fedora 9 + \begin{enumerate} + \item Werewolf + \item Sulphur + \begin{enumerate} + \item 2007-05-31 + \item 2008-05-13 + \end{enumerate} + \end{enumerate} + \end{enumerate} + \item Fedora Spin + \item Fedora Silverblue +\end{enumerate} +``` + +![][10] + +### 每种列表类型的列表样式名称 + +**enumerate(有序列表)** | **itemize(无序列表)** +---|--- +`\alph*` (小写字母) | `$\bullet$` (●) +`\Alph*` (大写字母) | `$\cdot$` (•) +`\arabic*` (阿拉伯数字) | `$\diamond$` (◇) +`\roman*` (小写罗马数字) | `$\ast$` (✲) +`\Roman*` (大写罗马数字) | `$\circ$` (○) +  | `$-$` (-) + +### 按嵌套深度划分的默认样式 + +**嵌套深度** | **enumerate(有序列表)** | **itemize(无序列表)** +---|---|--- +1 | 阿拉伯数字 | (●) +2 | 小写字母 | (-) +3 | 小写罗马数字 | (✲) +4 | 大写字母 | (•) + +### 设置列表样式 + +下面的例子列举了无序列表的不同样式。 + +``` +% 无序列表样式 +\begin{itemize} + \item[$\ast$] Asterisk + \item[$\diamond$] Diamond + \item[$\circ$] Circle + \item[$\cdot$] Period + \item[$\bullet$] Bullet (default) + \item[--] Dash + \item[$-$] Another dash +\end{itemize} +``` + +![][11] + +有三种设置列表样式的方式,下面将按照优先级从高到低的顺序分别举例。 + +#### 方式一:为各项目单独设置 + +将需要的样式名称封闭在方括号内,并放在控制序列 `\item` 之后,如下例: + +``` +% 方式一 +\begin{itemize} + \item[$\ast$] Asterisk + \item[$\diamond$] Diamond + \item[$\circ$] Circle + \item[$\cdot$] period + \item[$\bullet$] Bullet (default) + \item[--] Dash + \item[$-$] Another dash +\end{itemize} +``` + +#### 方式二:为整个列表设置 + +将需要的样式名称以 `label=` 前缀并封闭在方括号内,放在控制序列 `\begin` 之后,如下例: + +``` +% 方式二 +\begin{enumerate}[label=\Alph*.] 
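  % label=\Alph*. 表示使用大写字母编号并以句点结尾(A.、B.、C.)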
+ \item Fedora 32 + \item Fedora 31 + \item Fedora 30 +\end{enumerate} +``` + +#### 方式三:为整个文档设置 + +该方式将改变整个文档的默认样式。使用 `\renewcommand` 来设置项目标签的值,下例分别为四个嵌套深度的项目标签设置了不同的样式。 + +``` +% 方式三 +\renewcommand{\labelitemi}{$\ast$} +\renewcommand{\labelitemii}{$\diamond$} +\renewcommand{\labelitemiii}{$\bullet$} +\renewcommand{\labelitemiv}{$-$} +``` + +### 总结 + +LaTeX 支持三种列表,而每种列表的风格和间距都是可以自定义的。在以后的文章中,我们将解释更多的 LaTeX 元素。 + +关于 LaTeX 列表的延伸阅读可以在这里找到:[LaTeX List Structures][12] + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/latex-typesetting-part-1/ + +作者:[Earl Ramirez][a] +选题:[lujun9972][b] +译者:[rakino](https://github.com/rakino) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/earlramirez/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/latex-series-816x345.png +[2]: https://fedoramagazine.org/typeset-latex-texstudio-fedora +[3]: https://fedoramagazine.org/fedora-classroom-latex-101-beginners +[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-1.png +[5]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-2.png +[6]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-3.png +[7]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-4.png +[8]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-5.png +[9]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-7.png +[10]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-8.png +[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-9.png +[12]: https://en.wikibooks.org/wiki/LaTeX/List_Structures diff --git a/published/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md b/published/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md new file 
mode 100644 index 0000000000..a645f460fb --- /dev/null +++ b/published/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md @@ -0,0 +1,126 @@ +[#]: collector: (lujun9972) +[#]: translator: (Chao-zhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13110-1.html) +[#]: subject: (Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension) +[#]: via: (https://itsfoss.com/material-shell/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +使用 Material Shell 扩展将你的 GNOME 桌面打造成平铺式风格 +====== + +平铺式窗口的特性吸引了很多人的追捧。也许是因为它很好看,也许是因为它能提高 [Linux 快捷键][1] 玩家的效率。又或者是因为使用不同寻常的平铺式窗口是一种新奇的挑战。 + +![Tiling Windows in Linux | Image Source][2] + +从 i3 到 [Sway][3],Linux 桌面拥有各种各样的平铺式窗口管理器。配置一个平铺式窗口管理器需要一个陡峭的学习曲线。 + +这就是为什么像 [Regolith 桌面][4] 这样的项目会存在,给你预先配置好的平铺桌面,让你可以更轻松地开始使用平铺窗口。 + +让我给你介绍一个类似的项目 —— Material Shell。它可以让你用上平铺式桌面,甚至比 [Regolith][5] 还简单。 + +### Material Shell 扩展:将 GNOME 桌面转变成平铺式窗口管理器 + +[Material Shell][6] 是一个 GNOME 扩展,这就是它最好的地方。这意味着你不需要注销并登录其他桌面环境。你只需要启用或关闭这个扩展就可以自如的切换你的工作环境。 + +我会列出 Material Shell 的各种特性,但是也许视频更容易让你理解: + +- [video](https://youtu.be/Wc5mbuKrGDE) + +这个项目叫做 Material Shell 是因为它遵循 [Material Design][8] 原则。因此这个应用拥有一个美观的界面。这就是它最重要的一个特性。 + +#### 直观的界面 + +Material Shell 添加了一个左侧面板,以便快速访问。在此面板上,你可以在底部找到系统托盘,在顶部找到搜索和工作区。 + +所有新打开的应用都会添加到当前工作区中。你也可以创建新的工作区并切换到该工作区,以将正在运行的应用分类。其实这就是工作区最初的意义。 + +在 Material Shell 中,每个工作区都可以显示为具有多个应用程序的行列,而不是包含多个应用程序的程序框。 + +#### 平铺式窗口 + +在工作区中,你可以一直在顶部看到所有打开的应用程序。默认情况下,应用程序会像在 GNOME 桌面中那样铺满整个屏幕。你可以使用右上角的布局改变器来改变布局,将其分成两半、多列或多个应用网格。 + +这段视频一目了然的显示了以上所有功能: + +- [video](https://player.vimeo.com/video/460050750?dnt=1&app_id=122963) + +#### 固定布局和工作区 + +Material Shell 会记住你打开的工作区和窗口,这样你就不必重新组织你的布局。这是一个很好的特性,因为如果你对应用程序的位置有要求的话,它可以节省时间。 + +#### 热建/快捷键 + +像任何平铺窗口管理器一样,你可以使用键盘快捷键在应用程序和工作区之间切换。 + + * `Super+W` 切换到上个工作区; + * `Super+S` 切换到下个工作区; + * `Super+A` 切换到左边的窗口; + * `Super+D` 切换到右边的窗口; + * `Super+1`、`Super+2` … `Super+0` 切换到某个指定的工作区; + * 
`Super+Q` 关闭当前窗口; + * `Super+[鼠标拖动]` 移动窗口; + * `Super+Shift+A` 将当前窗口左移; + * `Super+Shift+D` 将当前窗口右移; + * `Super+Shift+W` 将当前窗口移到上个工作区; + * `Super+Shift+S` 将当前窗口移到下个工作区。 + +### 安装 Material Shell + +> 警告! +> +> 对于大多数用户来说,平铺式窗口可能会导致混乱。你最好先熟悉如何使用 GNOME 扩展。如果你是 Linux 新手或者你害怕你的系统发生翻天覆地的变化,你应当避免使用这个扩展。 + +Material Shell 是一个 GNOME 扩展。所以,请 [检查你的桌面环境][9],确保你运行的是 GNOME 3.34 或者更高的版本。 + +除此之外,我注意到在禁用 Material Shell 之后,它会导致 Firefox 的顶栏和 Ubuntu 的坞站消失。你可以在 GNOME 的“扩展”应用程序中禁用/启用 Ubuntu 的坞站扩展来使其变回原来的样子。我想这些问题也应该在系统重启后消失,虽然我没试过。 + +我希望你知道 [如何使用 GNOME 扩展][10]。最简单的办法就是 [在浏览器中打开这个链接][11],安装 GNOME 扩展浏览器插件,然后启用 Material Shell 扩展即可。 + +![][12] + +如果你不喜欢这个扩展,你也可以在同样的链接中禁用它。或者在 GNOME 的“扩展”应用程序中禁用它。 + +![][13] + +### 用不用平铺式? + +我使用多个电脑屏幕,我发现 Material Shell 不适用于多个屏幕的情况。这是开发者将来可以改进的地方。 + +除了这个毛病以外,Material Shell 是个让你开始使用平铺式窗口的好东西。如果你尝试了 Material Shell 并且喜欢它,请 [在 GitHub 上给它一个星标或赞助它][14] 来鼓励这个项目。 + +由于某些原因,平铺窗户越来越受欢迎。最近发布的 [Pop OS 20.04][15] 也增加了平铺窗口的功能。有一个类似的项目叫 PaperWM,也是这样做的。 + +但正如我前面提到的,平铺布局并不适合所有人,它可能会让很多人感到困惑。 + +你呢?你是喜欢平铺窗口还是喜欢经典的桌面布局? 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/material-shell/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[Chao-zhi](https://github.com/Chao-zhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/ubuntu-shortcuts/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-ricing-example-800x450.jpg?resize=800%2C450&ssl=1 +[3]: https://itsfoss.com/sway-window-manager/ +[4]: https://itsfoss.com/regolith-linux-desktop/ +[5]: https://regolith-linux.org/ +[6]: https://material-shell.com +[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[8]: https://material.io/ +[9]: https://itsfoss.com/find-desktop-environment/ +[10]: https://itsfoss.com/gnome-shell-extensions/ +[11]: https://extensions.gnome.org/extension/3357/material-shell/ +[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/install-material-shell.png?resize=800%2C307&ssl=1 +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/material-shell-gnome-extension.png?resize=799%2C497&ssl=1 +[14]: https://github.com/material-shell/material-shell +[15]: https://itsfoss.com/pop-os-20-04-review/ diff --git a/published/20201013 My first day using Ansible.md b/published/20201013 My first day using Ansible.md new file mode 100644 index 0000000000..990ddc1f24 --- /dev/null +++ b/published/20201013 My first day using Ansible.md @@ -0,0 +1,297 @@ +[#]: collector: "lujun9972" +[#]: translator: "MjSeven" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-13079-1.html" +[#]: subject: "My first day using Ansible" +[#]: via: "https://opensource.com/article/20/10/first-day-ansible" +[#]: author: "David Both https://opensource.com/users/dboth" + +使用 Ansible 的第一天 +====== + +> 一名系统管理员分享了如何使用 Ansible 
在网络中配置计算机并把其带入实际工作的信息和建议。 + +![](https://img.linux.net.cn/data/attachment/album/202102/03/105826sn41jj0i8evu19nn.jpg) + +无论是第一次还是第五十次,启动并运行一台新的物理或虚拟计算机都非常耗时,而且需要大量的工作。多年来,我一直使用我创建的一系列脚本和 RPM 来安装所需的软件包,并为我喜欢的工具配置各种选项。这种方法效果很好,简化了我的工作,而且还减少了在键盘上输入命令的时间。 + +我一直在寻找更好的工作方式。近几年来,我一直在听到并且读到有关 [Ansible][2] 的信息,它是一个自动配置和管理系统的强大工具。Ansible 允许系统管理员在一个或多个剧本playbook中为每个主机指定一个特定状态,然后执行各种必要的任务,使主机进入该状态。这包括安装或删除各种资源,例如 RPM 或 Apt 软件包、配置文件和其它文件、用户、组等等。 + +因为一些琐事,我推迟了很长一段时间学习如何使用它。直到最近,我遇到了一个我认为 Ansible 可以轻松解决的问题。 + +这篇文章并不会完整地告诉你如何入门 Ansible,相反,它只是对我遇到的问题和我在一些隐秘的地方发现的信息的做了一些记录。我在各种在线讨论和问答小组中找到的有关 Ansible 的许多信息都是错误的。错误范围包括明显的老旧信息(没有任何日期或来源的迹象),还有一些是完全错误的信息。 + +本文所介绍的内容是有用的,尽管可能还有其它方法可以完成相同的事情,但我使用的是 Ansible 2.9.13 和 [Python][3] 3.8.5。 + +### 我的问题 + +我所有的最佳学习经历都始于我需要解决的问题,这次也不例外。 + +我一直在做一个小项目,修改 [Midnight Commander][4] 文件管理器的配置文件,并将它们推送到我网络上的各种系统中进行测试。尽管我有一个脚本可以自动执行这个操作,但它仍然需要使用命令行循环来提供我想要推送新代码的系统名称。我对配置文件进行了大量的更改,这使我必须频繁推送新的配置文件。但是,就在我以为我的新配置刚刚好时,我发现了一个问题,所以我需要在修复后再进行一次推送。 + +这种环境使得很难跟踪哪些系统有新文件,哪些没有。我还有几个主机需要区别对待。我对 Ansible 的一点了解表明,它可能能够满足我的全部或大部分工作。 + +### 开始 + +我读过许多有关 Ansible 的好文章和书籍,但从来没有在“我必须现在就把这个做好!”的情况下读过。而现在 —— 好吧,就是现在! 
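
顺带一提,在 Fedora 上从官方仓库安装并确认 Ansible 大致只需两条命令(下面的 `#` 表示 root 提示符;软件包名以你所用发行版的仓库为准,其他发行版请换用对应的包管理器):

```shell
# dnf install ansible
# ansible --version
```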
+
+在重读这些文档时,我发现它们主要是在讨论如何从 GitHub 开始安装并使用 Ansible,这很酷。但是我真的只是想尽快开始,所以我使用 DNF 和 Fedora 仓库中的版本在我的 Fedora 工作站上安装了它,非常简单。
+
+但是后来我开始寻找文件位置,并尝试确定需要修改哪些配置文件、将我的剧本保存在什么位置,甚至一个剧本该怎么写以及它有什么作用,我脑海中有一大堆(到目前为止)悬而未决的问题。
+
+因此,这里不再进一步描述我遇到的困难,以下是我发现的东西,以及促使我继续前进的东西。
+
+### 配置
+
+Ansible 的配置文件保存在 `/etc/ansible` 中,这很有道理,因为 `/etc/` 本来就是系统程序保存配置文件的地方。我需要使用的两个文件是 `ansible.cfg` 和 `hosts`。
+
+#### ansible.cfg
+
+在进行了从文档和线上找到的一些实践练习之后,我遇到了一些有关弃用某些较旧的 Python 文件的警告信息。因此,我在 `ansible.cfg` 中将 `deprecation_warnings` 设置为 `false`,这样那些愤怒的红色警告消息就不会出现了:
+
+```
+deprecation_warnings = False
+```
+
+这些警告很重要,所以我稍后将重新回顾它们,并弄清楚我需要做什么。但是现在,它们不会再扰乱屏幕,也不会让我混淆实际上需要关注的错误。
+
+#### hosts 文件
+
+与 `/etc/hosts` 文件不同,`hosts` 文件也被称为清单(inventory)文件,它列出了网络上的主机。此文件允许将主机分组到相关集合中,例如 “servers”、“workstations” 和任何你所需的名称。这个文件本身包含说明和大量示例,所以我在这里就不详细介绍了。但是,有些事情你必须知道。
+
+主机也可以列在组之外,但是组对于识别具有一个或多个共同特征的主机很有帮助。组使用 INI 格式,所以服务器组看起来像这样:
+
+```
+[servers]
+server1
+server2
+......
+```
+
+主机名必须出现在这个文件中,Ansible 才能对它进行操作。即使有些子命令允许指定主机名,但除非主机名在 `hosts` 文件中,否则命令会失败。一个主机也可以放在多个组中。因此,除了 `[servers]` 组之外,`server1` 也可能是 `[webservers]` 组的成员,还可以是 `[ubuntu]` 组的成员,以区别于 Fedora 服务器。
+
+Ansible 很智能。如果 `all` 参数用作主机名,Ansible 会扫描 `hosts` 文件并在它列出的所有主机上执行定义的任务。Ansible 只会尝试在每个主机上工作一次,不管它出现在多少个组中。这也意味着不需要定义 `all` 组,因为 Ansible 可以确定文件中的所有主机名,并创建自己唯一的主机名列表。
+
+另一件需要注意的事情是单个主机的多个条目。我在 DNS 文件中使用 `CNAME` 记录来创建别名,这些别名指向某些主机的 [A 记录][5],这样,我可以将一个主机称为 `host1` 或 `h1` 或 `myhost`。如果你在 `hosts` 文件中为同一主机指定多个主机名,则 Ansible 将尝试在所有这些主机名上执行其任务,它无法知道它们指向同一主机。好消息是,这并不会影响整体结果;它只是多花了一点时间,因为 Ansible 会在次要主机名上再工作一遍,以确定所有操作均已执行。
+
+### Ansible 实情
+
+我阅读过的 Ansible 的大多数材料都谈到了 Ansible [实情][6](facts),它是与远程系统相关的数据,包括操作系统、IP 地址、文件系统等等。这些信息可以通过其它方式获得,如 `lshw`、`dmidecode` 或 `/proc` 文件系统等。但是 Ansible 会生成一个包含此信息的 JSON 文件。每次 Ansible 运行时,它都会生成这些实情数据。在这个数据流中,有大量的信息,都是以键值对形式出现的:`<"variable-name": "value">`。所有这些变量都可以在 Ansible 剧本中使用,理解大量可用信息的最好方法是实际显示一下:
+
+```
+# ansible -m setup | less
+```
+
+明白了吗?你想知道的有关主机硬件和 Linux 发行版的所有内容都在这里,它们都可以在剧本中使用。我还没有达到需要使用这些变量的地步,但是我相信在接下来的几天中会用到。
+
+### 模块
+
+上面的 `ansible`
命令使用 `-m` 选项来指定 `setup` 模块。Ansible 已经内置了许多模块,所以你对这些模块不需要使用 `-m`。也可以安装许多下载的模块,但是内置模块可以完成我目前项目所需的一切。 + +### 剧本 + +剧本playbook几乎可以放在任何地方。因为我需要以 root 身份运行,所以我将它放在了 `/root/ansible` 下。当我运行 Ansible 时,只要这个目录是当前的工作目录(PWD),它就可以找到我的剧本。Ansible 还有一个选项,用于在运行时指定不同的剧本和位置。 + +剧本可以包含注释,但是我看到的文章或书籍很少提及此。但作为一个相信记录一切的系统管理员,我发现使用注释很有帮助。这并不是说在注释中做和任务名称同样的事情,而是要确定任务组的目的,并确保我以某种方式或顺序记录我做这些事情的原因。当我可能忘记最初的想法时,这可以帮助以后解决调试问题。 + +剧本只是定义主机所需状态的任务集合。在剧本的开头指定主机名或清单组,并定义 Ansible 将在其上运行剧本的主机。 + +以下是我的一个剧本示例: + +``` +################################################################################ +# This Ansible playbook updates Midnight commander configuration files.        # +################################################################################ +- name: Update midnight commander configuration files +  hosts: all +  +  tasks: +  - name: ensure midnight commander is the latest version +    dnf: +      name: mc +      state: present + +  - name: create ~/.config/mc directory for root +    file: +      path: /root/.config/mc +      state: directory +      mode: 0755 +      owner: root +      group: root + +  - name: create ~/.config/mc directory for dboth +    file: +      path: /home/dboth/.config/mc +      state: directory +      mode: 0755 +      owner: dboth +      group: dboth + +  - name: copy latest personal skin +    copy: +      src: /root/ansible/UpdateMC/files/MidnightCommander/DavidsGoTar.ini +      dest: /usr/share/mc/skins/DavidsGoTar.ini +      mode: 0644 +      owner: root +      group: root + +  - name: copy latest mc ini file +    copy: +      src: /root/ansible/UpdateMC/files/MidnightCommander/ini +      dest: /root/.config/mc/ini +      mode: 0644 +      owner: root +      group: root + +  - name: copy latest mc panels.ini file +    copy: +      src: /root/ansible/UpdateMC/files/MidnightCommander/panels.ini +      dest: /root/.config/mc/panels.ini +      mode: 0644 +      owner: root +      group: root +<截断> +``` + +剧本从它自己的名字和它将要操作的主机开始,在本文中,所有主机都在我的 `hosts` 文件中。`tasks` 
部分列出了使主机达到所需状态的特定任务。这个剧本从使用 DNF 更新 Midnight Commander 开始(如果它不是最新的版本的话)。下一个任务确保创建所需的目录(如果它们不存在),其余任务将文件复制到合适的位置,这些 `file` 和 `copy` 任务还可以为目录和文件设置所有权和文件模式。 + +剧本细节超出了本文的范围,但是我对这个问题使用了一点蛮力。还有其它方法可以确定哪些用户需要更新文件,而不是对每个用户的每个文件使用一个任务。我的下一个目标是简化这个剧本,使用一些更先进的技术。 + +运行剧本很容易,只需要使用 `ansible-playbook` 命令。`.yml` 扩展名代表 YAML,我看到过它的几种不同含义,但我认为它是“另一种标记语言Yet Another Markup Language”,尽管有些人声称 YAML 不是这种语言。 + +这个命令将会运行剧本,它会更新 Midnight Commander 文件: + +``` +# ansible-playbook -f 10 UpdateMC.yml +``` + +`-f` 选项指定 Ansible 使用 10 个线程来执行操作。这可以大大加快整个任务的完成速度,特别是在多台主机上工作时。 + +### 输出 + +剧本运行时会列出每个任务和执行结果。`ok` 代表任务管理的机器状态已经完成,因为在任务中定义的状态已经为真,所以 Ansible 不需要执行任何操作。 + +`changed` 表示 Ansible 已经执行了指定的任务。在这种情况下,任务中定义的机器状态不为真,所以执行指定的操作使其为真。在彩色终端上,`TASK` 行会以彩色显示。我的终端配色为“amber-on-black”,`TASK` 行显示为琥珀色,`changed` 是棕色,`ok` 为绿色,错误是红色。 + +下面的输出是我最终用于在新主机执行安装后配置的剧本: + +``` +PLAY [Post-installation updates, package installation, and configuration] + +TASK [Gathering Facts] +ok: [testvm2] + +TASK [Ensure we have connectivity] +ok: [testvm2] + +TASK [Install all current updates] +changed: [testvm2] + +TASK [Install a few command line tools] +changed: [testvm2] + +TASK [copy latest personal Midnight Commander skin to /usr/share] +changed: [testvm2] + +TASK [create ~/.config/mc directory for root] +changed: [testvm2] + +TASK [Copy the most current Midnight Commander configuration files to /root/.config/mc] +changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/DavidsGoTar.ini) +changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/ini) +changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/panels.ini) + +TASK [create ~/.config/mc directory in /etc/skel] +changed: [testvm2] + +<截断> +``` + +### cowsay + +如果你的计算机上安装了 [cowsay][7] 程序,你会发现 `TASK` 的名字出现在奶牛的语音泡泡中: + +``` + ____________________________________ +< TASK [Ensure we have connectivity] > + ------------------------------------ +        \   ^__^ +         \  (oo)\\_______ +            
(__)\       )\/\ +                ||----w | +                ||     || +``` + +如果你没有这个有趣的程序,你可以使用发行版的包管理器安装 Cowsay 程序。如果你有这个程序但不想要它,可以通过在 `/etc/ansible/ansible.cfg` 文件中设置 `nocows=1` 将其禁用。 + +我喜欢这头奶牛,它很有趣,但是它会占用我的一部分屏幕。因此,在它开始妨碍我使用时,我就把它禁用了。 + +### 目录 + +与我的 Midnight Commander 任务一样,经常需要安装和维护各种类型的文件。创建用于存储剧本的目录树的“最佳实践”和系统管理员一样多,至少与编写有关 Ansible 书和文章的作者数量一样多。 + +我选择了一个对我有意义的简单结构: + +```bash +/root/ansible +└── UpdateMC +    ├── files +    │   └── MidnightCommander +    │       ├── DavidsGoTar.ini +    │       ├── ini +    │       └── panels.ini +    └── UpdateMC.yml +``` + +你可以使用任何结构。但是请注意,其它系统管理员可能需要使用你设置的剧本来工作,所以目录应该具有一定程度的逻辑。当我使用 RPM 和 Bash 脚本执行安装任务后,我的文件仓库有点分散,绝对没有任何逻辑结构。当我为许多管理任务创建剧本时,我将介绍一个更有逻辑的结构来管理我的目录。 + +### 多次运行剧本 + +根据需要或期望多次运行剧本是安全的。只有当主机状态与任务中指定的状态不匹配时,才会执行每个任务。这使得很容易从先前的剧本运行中遇到的错误中恢复。因为当剧本遇到错误时,它将停止运行。 + +在测试我的第一个剧本时,我犯了很多错误并改正了它们。假设我的修正正确,那么剧本每次运行,都会跳过那些状态已与指定状态匹配的任务,执行不匹配状态的任务。当我的修复程序起作用时,之前失败的任务就会成功完成,并且会执行此任务之后的任务 —— 直到遇到另一个错误。 + +这使得测试变得容易。我可以添加新任务,并且在运行剧本时,只有新任务会被执行,因为它们是唯一与测试主机期望状态不匹配的任务。 + +### 一些思考 + +有些任务不适合用 Ansible,因为有更好的方法可以实现特定的计算机状态。我想到的场景是使 VM 返回到初始状态,以便可以多次使用它来执行从已知状态开始的测试。让 VM 进入特定状态,然后对此时的计算机状态进行快照要容易得多。还原到该快照与 Ansible 将主机返回到之前状态相比,通常还原到快照通常会更容易且更快。在研究文章或测试新代码时,我每天都会做几次这样的事情。 + +在完成用于更新 Midnight Commander 的剧本之后,我创建了一个新的剧本,用于在新安装的 Fedora 主机上执行安装任务。我已经取得了不错的进步,剧本比我第一个的更加复杂,但没有那么粗暴。 + +在使用 Ansible 的第一天,我创建了一个解决问题的剧本,我还开始编写第二个剧本,它将解决安装后配置的大问题,在这个过程中我学到了很多东西。 + +尽管我真的很喜欢使用 [Bash][8] 脚本来管理任务,但是我发现 Ansible 可以完成我想要的一切,并且可以使系统保持在我所需的状态。只用了一天,我就成为了 Ansible 的粉丝。 + +### 资源 + +我找到的最完整、最有用的参考文档是 Ansible 网站上的[用户指南][9]。本文仅供参考,不作为入门文档。 + +多年来,我们已经发布了许多[有关 Ansible 的文章][10],我发现其中大多数对我的需求很有帮助。Enable Sysadmin 网站上也有很多对我有帮助 [Ansible 文章][11]。你可以通过查看本周(2020 年 10 月 13 日至 14 日)的 [AnsibleFest][12] 了解更多信息。该活动完全是线上的,可以免费注册。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/10/first-day-ansible + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[MjSeven](https://github.com/MjSeven) 
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dboth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR "People work on a computer server with devices"
+[2]: https://www.ansible.com/
+[3]: https://www.python.org/
+[4]: https://midnight-commander.org/
+[5]: https://en.wikipedia.org/wiki/List_of_DNS_record_types
+[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html#ansible-facts
+[7]: https://en.wikipedia.org/wiki/Cowsay
+[8]: https://opensource.com/downloads/bash-cheat-sheet
+[9]: https://docs.ansible.com/ansible/latest/user_guide/index.html
+[10]: https://opensource.com/tags/ansible
+[11]: https://www.redhat.com/sysadmin/topics/ansible
+[12]: https://www.ansible.com/ansiblefest
diff --git a/published/20160302 Go channels are bad and you should feel bad.md b/published/202101/20160302 Go channels are bad and you should feel bad.md
similarity index 100%
rename from published/20160302 Go channels are bad and you should feel bad.md
rename to published/202101/20160302 Go channels are bad and you should feel bad.md
diff --git a/published/202101/20181009 GCC- Optimizing Linux, the Internet, and Everything.md b/published/202101/20181009 GCC- Optimizing Linux, the Internet, and Everything.md
new file mode 100644
index 0000000000..641a77b740
--- /dev/null
+++ b/published/202101/20181009 GCC- Optimizing Linux, the Internet, and Everything.md
@@ -0,0 +1,93 @@
+GCC:优化 Linux、互联网和一切
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202101/22/122155ujfd62u6zbx3i4b3.jpg)
+
+软件如果不能被电脑运行,那么它就是无用的。而在处理运行时(run-time)性能的问题上,即使是最有才华的开发人员也会受编译器的支配 —— 因为如果没有可靠的编译器工具链,就无法构建任何重要的东西。GNU 编译器集合(GNU Compiler Collection,GCC)提供了一个健壮、成熟和高性能的工具,以帮助你充分发挥你代码的潜能。经过数十年成千上万人的开发,GCC 成为了世界上最受尊敬的编译器之一。如果你在构建应用程序时没有使用 GCC,那么你可能错过了最佳解决方案。
+
+根据 LLVM.org 的说法,GCC 是“如今事实上的标准开源编译器” [^1],也是用来构建完整系统的基础 —— 从内核开始。GCC 支持超过 60 种硬件平台,包括 ARM、Intel、AMD、IBM POWER、SPARC、HP PA-RISC 和 IBM Z,以及各种操作环境,包括 GNU、Linux、Windows、macOS、FreeBSD、NetBSD、OpenBSD、DragonFly BSD、Solaris、AIX、HP-UX 和 RTEMS。它提供了高度兼容的 C/C++ 编译器,并支持流行的 C 库,如 GNU C Library(glibc)、Newlib、musl 和各种 BSD 操作系统中包含的 C 库,以及 Fortran、Ada 和 GO 语言的前端。GCC 还可以作为一个交叉编译器,可以为运行编译器的平台以外的其他平台创建可执行代码。GCC 是紧密集成的 GNU 工具链的核心组件,由 GNU 项目产生,它包括 glibc、Binutils 和 GNU 调试器(GDB)。 + +“一直以来我最喜欢的 GNU 工具是 GCC,即GNU 编译器集合GNU Compiler Collection。在开发工具非常昂贵的时候,GCC 是第二个 GNU 工具,也是使社区能够编写和构建所有其他工具的工具。这个工具一手改变了这个行业,导致了自由软件运动的诞生,因为一个好的、自由的编译器是一个社区软件的先决条件。”—— Red Hat 开源和标准团队的 Dave Neary。[^2] + +### 优化 Linux + +作为 Linux 内核源代码的默认编译器,GCC 提供了可靠、稳定的性能以及正确构建内核所需的额外扩展。GCC 是流行的 Linux 发行版的标准组件,如 ArchLinux、CentOS、Debian、Fedora、openSUSE 和 Ubuntu 这些发行版中,GCC 通常用来编译支持系统的组件。这包括 Linux 使用的默认库(如 libc、libm、libintl、libssh、libssl、libcrypto、libexpat、libpthread 和 ncurses),这些库依赖于 GCC 来提供可靠性和高性能,并且使应用程序和系统程序可以访问 Linux 内核功能。发行版中包含的许多应用程序包也是用 GCC 构建的,例如 Python、Perl、Ruby、nginx、Apache HTTP 服务器、OpenStack、Docker 和 OpenShift。各个 Linux 发行版使用 GCC 构建的大量代码组成了内核、库和应用程序软件。对于 openSUSE 发行版,几乎 100% 的原生代码都是由 GCC 构建的,包括 6135 个源程序包、5705 个共享库和 38927 个可执行文件。这相当于每周编译 24540 个源代码包。[^3] + +Linux 发行版中包含的 GCC 的基本版本用于创建定义系统应用程序二进制接口Application Binary Interface(ABI)的内核和库。用户空间User space开发者可以选择下载 GCC 的最新稳定版本,以获得高级功能、性能优化和可用性改进。Linux 发行版提供安装说明或预构建的工具链,用于部署最新版本的 GCC 以及其他 GNU 工具,这些工具有助于提高开发人员的工作效率和缩短部署时间。 + +### 优化互联网 + +GCC 是嵌入式系统中被广泛采用的核心编译器之一,支持为日益增长的物联网设备开发软件。GCC 提供了许多扩展功能,使其非常适合嵌入式系统软件开发,包括使用编译器的内建函数、#语法、内联汇编和以应用程序为中心的命令行选项进行精细控制。GCC 支持广泛的嵌入式体系结构,包括 ARM、AMCC、AVR、Blackfin、MIPS、RISC-V、Renesas Electronics V850、NXP 和 Freescale Power 处理器,可以生成高效、高质量的代码。GCC提供的交叉编译能力对这个社区至关重要,而预制的交叉编译工具链 [^4] 是一个主要需求。例如,GNU ARM 嵌入式工具链是经过集成和验证的软件包,其中包含 ARM 嵌入式 GCC 编译器、库和其它裸机软件开发所需的工具。这些工具链可用于在 Windows、Linux 和 macOS 主机操作系统上对流行的 ARM Cortex-R 和 Cortex-M 处理器进行交叉编译,这些处理器已装载于数百亿台支持互联网的设备中。[^5] + +GCC 为云计算赋能,为需要直接管理计算资源的软件提供了可靠的开发平台,如数据库和 Web 服务引擎以及备份和安全软件。GCC 完全兼容 C++ 11 和 C++ 14,为 C++ 17 
和 C++ 2a 提供实验支持 [^6](LCTT 译注:本文原文发布于 2018 年),可以创建性能优异的对象代码,并提供可靠的调试信息。使用 GCC 的应用程序的一些例子包括:MySQL 数据库管理系统,它需要 Linux 的 GCC [^7];Apache HTTP 服务器,它建议使用 GCC [^8];Bacula,一个企业级网络备份工具,它需要 GCC。[^9] + +### 优化一切 + +对于高性能计算High Performance Computing(HPC)中使用的科学代码的研究和开发,GCC 提供了成熟的 C、C++ 和 Fortran 前端,以及对 OpenMP 和 OpenACC API的支持,用于基于指令的并行编程。因为 GCC 提供了跨计算环境的可移植性,它使得代码能够更容易地在各种新的和传统的客户机和服务器平台上进行测试。GCC 为 C、C++ 和 Fortran 编译器提供了 OpenMP 4.0 的完整支持,为 C 和 C++ 编译器提供了 OpenMP 4.5 完整支持。对于 OpenACC、 GCC 支持大部分 2.5 规范和性能优化,并且是唯一提供 [OpenACC][1] 支持的非商业、非学术编译器。 + +代码性能是这个社区的一个重要参数,GCC 提供了一个坚实的性能基础。Colfax Research 于 2017 年 11 月发表的一篇论文评估了 C++ 编译器在使用 OpenMP 4.x 指令并行化编译代码的速度和编译后代码的运行速度。图 1 描绘了不同编译器编译并使用单个线程运行时计算内核的相对性能。性能值经过了归一化处理,以 G++ 的性能为 1.0。 + +![performance][3] + +*图 1 为由不同编译器编译的每个内核的相对性能。(单线程,越高越好)。* + +他的论文总结道:“GNU 编译器在我们的测试中也做得很好。G++ 在六种情况中的三种情况下生成的代码速度是第二快的,并且在编译时间方面是最快的编译器之一。”[^10] + +### 谁在用 GCC? + +在 JetBrains 2018 年的开发者生态状况调查中,在接受调查的 6000 名开发者中,66% 的 C++ 程序员和 73% 的 C 程序员经常使用 GCC。[^11] 以下简要介绍 GCC 的优点,正是这些优点使它在开发人员社区中如此受欢迎。 + + * 对于需要为各种新的和遗留的计算平台和操作环境编写代码的开发人员,GCC 提供了对最广泛的硬件和操作环境的支持。硬件供应商提供的编译器主要侧重于对其产品的支持,而其他开源编译器在所支持的硬件和操作系统方面则受到很大限制。[^12] + * 有各种各样的基于 GCC 的预构建工具链,这对嵌入式系统开发人员特别有吸引力。这包括 GNU ARM 嵌入式工具链和 Bootlin 网站上提供的 138 个预编译交叉编译器工具链。[^13] 虽然其他开源编译器(如 Clang/LLVM)可以取代现有交叉编译工具链中的 GCC,但这些工具集需要开发者完全重新构建。[^14] + * GCC 通过成熟的编译器平台向应用程序开发人员提供可靠、稳定的性能。《在 AMD EPYC 平台上用 GCC 8/9 与 LLVM Clang 6/7 编译器基准测试》这篇文章提供了 49 个基准测试的结果,这些测试的编译器在三个优化级别上运行。使用 `-O3 -march=native` 级别的 GCC 8.2 RC1 在 34% 的时间里排在第一位,而在相同的优化级别 LLVM Clang 6.0 在 20% 的时间里赢得了第二位。[^15] + * GCC 为编译调试 [^16] 提供了改进的诊断方法,并为运行时调试提供了准确而有用的信息。GCC 与 GDB 紧密集成,GDB 是一个成熟且功能齐全的工具,它提供“不间断”调试,可以在断点处停止单个线程。 + * GCC 是一个得到良好支持的平台,它有一个活跃的、有责任感的社区,支持当前版本和以前的两个版本。由于每年都有发布计划,这为一个版本提供了两年的支持。 + +### GCC:仍然在继续优化 + +GCC 作为一个世界级的编译器继续向前发展。GCC 的最新版本是 8.2,于 2018 年 7 月发布(LCTT 译注:本文原文发表于 2018 年),增加了对即将推出的 Intel CPU、更多 ARM CPU 的硬件支持,并提高了 AMD 的 ZEN CPU 的性能。增加了对 C17 的初步支持,同时也对 C++2A 进行了初步工作。诊断功能继续得到增强,包括更好的发射诊断,改进了定位、定位范围和修复提示,特别是在 C++ 前端。Red Hat 的 David Malcolm 在 2018 年 3 
月撰写的博客概述了 GCC 8 中的可用性改进。[^17] + +新的硬件平台继续依赖 GCC 工具链进行软件开发,例如 RISC-V,这是一种自由开放的 ISA,机器学习、人工智能(AI)和物联网细分市场都对其感兴趣。GCC 仍然是 Linux 系统持续开发的关键组件。针对 Intel 架构的 Clear Linux 项目是一个为云、客户端和物联网用例构建的新兴发行版,它提供了一个很好的示例,说明如何使用和改进 GCC 编译器技术来提高基于 Linux 的系统的性能和安全性。GCC 还被用于微软 Azure Sphere 的应用程序开发,这是一个基于 Linux 的物联网应用程序操作系统,最初支持基于 ARM 的联发科 MT3620 处理器。在培养下一代程序员方面,GCC 也是树莓派的 Windows 工具链的核心组件,树莓派是一种运行基于 Debian 的 GNU/Linux 的低成本嵌入式板,用于促进学校和发展中国家的基础计算机科学教学。 + +GCC 由 GNU 项目的创始人理查德•斯托曼Richard Stallman首次发布 于 1987 年 3 月 22 日,由于它是第一个作为自由软件发布的可移植的 ANSI C 优化编译器,因此它被认为是一个重大突破。GCC 由来自世界各地的程序员组成的社区在指导委员会的指导下维护,以确保对项目进行广泛的、有代表性的监督。GCC 的社区方法是它的优势之一,它形成了一个由开发人员和用户组成的庞大而多样化的社区,他们为项目做出了贡献并提供支持。根据 Open Hub 的说法,“GCC 是世界上最大的开源团队之一,在 Open Hub 上的所有项目团队中排名前 2%。”[^18] + +关于 GCC 的许可问题,人们进行了大量的讨论,其中大多数是混淆而不是启发。GCC 在 GNU 通用公共许可证(GPL)版本 3 或更高版本下发布,但运行时库例外。这是一个左版许可,这意味着衍生作品只能在相同的许可条款下分发。GPLv3 旨在保护 GCC,防止其成为专有软件,并要求对 GCC 代码的更改可以自由公开地进行。对于“最终用户”来说,这个编译器与其他编译器完全相同;使用 GCC 对你为自己的代码所选择的任何许可都没有区别。[^19] + + [^1]: http://clang.llvm.org/features.html#gcccompat + [^2]: https://opensource.com/article/18/9/happy-birthday-gnu + [^3]: 由 SUSE 基于最近的构建统计提供的信息。在 openSUSE 中还有其他不生成可执行镜像的源码包,这些不包括在统计中。 + [^4]: https://community.arm.com/tools/b/blog/posts/gnu-toolchain-performance-in-2018 + [^5]: https://www.arm.com/products/processors/cortex-m + [^6]: https://gcc.gnu.org/projects/cxx-status.html#cxx17 + [^7]: https://mysqlserverteam.com/mysql-8-0-source-code-improvements/ + [^8]: http://httpd.apache.org/docs/2.4/install.html + [^9]: https://blog.bacula.org/what-is-bacula/system-requirements/ +[^10]: https://colfaxresearch.com/compiler-comparison/ +[^11]: https://www.jetbrains.com/research/devecosystem-2018/ +[^12]: http://releases.llvm.org/6.0.0/tools/clang/docs/UsersManual.html +[^13]: https://bootlin.com/blog/free-and-ready-to-use-cross-compilation-toolchains/ +[^14]: https://clang.llvm.org/docs/Toolchain.html +[^15]: https://www.phoronix.com/scan.php?page=article&item=gcclang-epyc-summer18&num=1 +[^16]: 
https://gcc.gnu.org/wiki/ClangDiagnosticsComparison +[^17]: https://developers.redhat.com/blog/2018/03/15/gcc-8-usability-improvements/ +[^18]: https://www.openhub.net/p/gcc/factoids#FactoidTeamSizeVeryLarge +[^19]: https://www.gnu.org/licenses/gcc-exception-3.1-faq.en.html + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/10/gcc-optimizing-linux-internet-and-everything + +作者:[Margaret Lewis][a] +选题:[lujun9972][b] +译者:[Chao-zhi](https://github.com/Chao-zhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/margaret-lewis +[b]: https://github.com/lujun9972 +[1]: https://www.openacc.org/tools +[2]: /files/images/gccjpg-0 +[3]: https://lcom.static.linuxfound.org/sites/lcom/files/gcc_0.jpg?itok=HbGnRqWX "performance" +[4]: https://www.linux.com/licenses/category/used-permission diff --git a/published/20181123 Three SSH GUI Tools for Linux.md b/published/202101/20181123 Three SSH GUI Tools for Linux.md similarity index 100% rename from published/20181123 Three SSH GUI Tools for Linux.md rename to published/202101/20181123 Three SSH GUI Tools for Linux.md diff --git a/published/20190104 Search, Study And Practice Linux Commands On The Fly.md b/published/202101/20190104 Search, Study And Practice Linux Commands On The Fly.md similarity index 100% rename from published/20190104 Search, Study And Practice Linux Commands On The Fly.md rename to published/202101/20190104 Search, Study And Practice Linux Commands On The Fly.md diff --git a/published/20190204 Getting started with Git- Terminology 101.md b/published/202101/20190204 Getting started with Git- Terminology 101.md similarity index 100% rename from published/20190204 Getting started with Git- Terminology 101.md rename to published/202101/20190204 Getting started with Git- Terminology 101.md diff --git a/published/20190205 5 
Streaming Audio Players for Linux.md b/published/202101/20190205 5 Streaming Audio Players for Linux.md similarity index 100% rename from published/20190205 5 Streaming Audio Players for Linux.md rename to published/202101/20190205 5 Streaming Audio Players for Linux.md diff --git a/published/202101/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md b/published/202101/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md new file mode 100644 index 0000000000..4f74e02e9e --- /dev/null +++ b/published/202101/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md @@ -0,0 +1,436 @@ +[#]: collector: (lujun9972) +[#]: translator: (stevenzdg988) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13041-1.html) +[#]: subject: (Install Apache, MySQL, PHP \(LAMP\) Stack On Ubuntu 18.04 LTS) +[#]: via: (https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +在 Ubuntu 中安装 Apache、MySQL、PHP(LAMP)套件 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/02/lamp-720x340.jpg) + +**LAMP** 套件是一种流行的开源 Web 开发平台,可用于运行和部署动态网站和基于 Web 的应用程序。通常,LAMP 套件由 Apache Web 服务器、MariaDB/MySQL 数据库、PHP/Python/Perl 程序设计(脚本)语言组成。 LAMP 是 **L**inux,**M**ariaDB/**M**YSQL,**P**HP/**P**ython/**P**erl 的缩写。 本教程描述了如何在 Ubuntu 18.04 LTS 服务器中安装 Apache、MySQL、PHP(LAMP 套件)。 + +就本教程而言,我们将使用以下 Ubuntu 测试。 + + * **操作系统**:Ubuntu 18.04.1 LTS Server Edition + * **IP 地址** :192.168.225.22/24 + +### 1. 
安装 Apache Web 服务器 + +首先,利用下面命令更新 Ubuntu 服务器: + +``` +$ sudo apt update +$ sudo apt upgrade +``` + +然后,安装 Apache Web 服务器(命令如下): + +``` +$ sudo apt install apache2 +``` + +检查 Apache Web 服务器是否已经运行: + +``` +$ sudo systemctl status apache2 +``` + +输出结果大概是这样的: + +``` +● apache2.service - The Apache HTTP Server + Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: en + Drop-In: /lib/systemd/system/apache2.service.d + └─apache2-systemd.conf + Active: active (running) since Tue 2019-02-05 10:48:03 UTC; 1min 5s ago + Main PID: 2025 (apache2) + Tasks: 55 (limit: 2320) + CGroup: /system.slice/apache2.service + ├─2025 /usr/sbin/apache2 -k start + ├─2027 /usr/sbin/apache2 -k start + └─2028 /usr/sbin/apache2 -k start + +Feb 05 10:48:02 ubuntuserver systemd[1]: Starting The Apache HTTP Server... +Feb 05 10:48:03 ubuntuserver apachectl[2003]: AH00558: apache2: Could not reliably +Feb 05 10:48:03 ubuntuserver systemd[1]: Started The Apache HTTP Server. +``` + +祝贺你! Apache 服务已经启动并运行了!! + +#### 1.1 调整防火墙允许 Apache Web 服务器 + +默认情况下,如果你已在 Ubuntu 中启用 UFW 防火墙,则无法从远程系统访问 Apache Web 服务器。 必须按照以下步骤开启 `http` 和 `https` 端口。 + +首先,使用以下命令列出 Ubuntu 系统上可用的应用程序配置文件: + +``` +$ sudo ufw app list +``` + +输出结果: + +``` +Available applications: +Apache +Apache Full +Apache Secure +OpenSSH +``` + +如你所见,Apache 和 OpenSSH 应用程序已安装 UFW 配置文件。你可以使用 `ufw app info "Profile Name"` 命令列出有关每个配置文件及其包含的规则的信息。 + +让我们研究一下 “Apache Full” 配置文件。 为此,请运行: + +``` +$ sudo ufw app info "Apache Full" +``` + +输出结果: + +``` +Profile: Apache Full +Title: Web Server (HTTP,HTTPS) +Description: Apache v2 is the next generation of the omnipresent Apache web +server. 
+ +Ports: +80,443/tcp +``` + +如你所见,“Apache Full” 配置文件包含了启用经由端口 **80** 和 **443** 的传输规则: + +现在,运行以下命令配置允许 HTTP 和 HTTPS 传入通信: + +``` +$ sudo ufw allow in "Apache Full" +Rules updated +Rules updated (v6) +``` + +如果你不想允许 HTTP 通信,而只允许 HTTP(80) 通信,请运行: + +``` +$ sudo ufw app info "Apache" +``` + +#### 1.2 测试 Apache Web 服务器 + +现在,打开 Web 浏览器并导航到 来访问 Apache 测试页。 + +![](https://www.ostechnix.com/wp-content/uploads/2016/06/apache-2.png) + +如果看到上面类似的显示内容,那就成功了。 Apache 服务器正在工作! + +### 2. 安装 MySQL + +在 Ubuntu 安装 MySQL 请运行: + +``` +$ sudo apt install mysql-server +``` + +使用以下命令验证 MySQL 服务是否正在运行: + +``` +$ sudo systemctl status mysql +``` + +输出结果: + +``` +● mysql.service - MySQL Community Server +Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enab +Active: active (running) since Tue 2019-02-05 11:07:50 UTC; 17s ago +Main PID: 3423 (mysqld) +Tasks: 27 (limit: 2320) +CGroup: /system.slice/mysql.service +└─3423 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid + +Feb 05 11:07:49 ubuntuserver systemd[1]: Starting MySQL Community Server... +Feb 05 11:07:50 ubuntuserver systemd[1]: Started MySQL Community Server. +``` + +MySQL 正在运行! + +#### 2.1 配置数据库管理用户(root)密码 + +默认情况下,MySQL root 用户密码为空。你需要通过运行以下脚本使你的 MySQL 服务器安全: + +``` +$ sudo mysql_secure_installation +``` + +系统将询问你是否要安装 “VALIDATE PASSWORD plugin(密码验证插件)”。该插件允许用户为数据库配置强密码凭据。如果启用,它将自动检查密码的强度并强制用户设置足够安全的密码。**禁用此插件是安全的**。但是,必须为数据库使用唯一的强密码凭据。如果不想启用此插件,只需按任意键即可跳过密码验证部分,然后继续其余步骤。 + +如果回答是 `y`,则会要求你选择密码验证级别。 + +``` +Securing the MySQL server deployment. + +Connecting to MySQL using a blank password. + +VALIDATE PASSWORD PLUGIN can be used to test passwords +and improve security. It checks the strength of password +and allows the users to set only those passwords which are +secure enough. Would you like to setup VALIDATE PASSWORD plugin? 
+ +Press y|Y for Yes, any other key for No y +``` + +可用的密码验证有 “low(低)”、 “medium(中)” 和 “strong(强)”。只需输入适当的数字(0 表示低,1 表示中,2 表示强密码)并按回车键。 + +``` +There are three levels of password validation policy: + +LOW Length >= 8 +MEDIUM Length >= 8, numeric, mixed case, and special characters +STRONG Length >= 8, numeric, mixed case, special characters and dictionary file + +Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: +``` + +现在,输入 MySQL root 用户的密码。请注意,必须根据上一步中选择的密码策略,为 MySQL root 用户使用密码。如果你未启用该插件,则只需使用你选择的任意强度且唯一的密码即可。 + +``` +Please set the password for root here. + +New password: + +Re-enter new password: + +Estimated strength of the password: 50 +Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y +``` + +两次输入密码后,你将看到密码强度(在此示例情况下为 50)。如果你确定可以,请按 `y` 继续提供的密码。如果对密码长度不满意,请按其他任意键并设置一个强密码。我现在的密码可以,所以我选择了`y`。 + +对于其余的问题,只需键入 `y` 并按回车键。这将删除匿名用户、禁止 root 用户远程登录并删除 `test`(测试)数据库。 + +``` +Remove anonymous users? (Press y|Y for Yes, any other key for No) : y +Success. + +Normally, root should only be allowed to connect from +'localhost'. This ensures that someone cannot guess at +the root password from the network. + +Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y +Success. + +By default, MySQL comes with a database named 'test' that +anyone can access. This is also intended only for testing, +and should be removed before moving into a production +environment. + +Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y +- Dropping test database... +Success. + +- Removing privileges on test database... +Success. + +Reloading the privilege tables will ensure that all changes +made so far will take effect immediately. + +Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y +Success. + +All done! 
+``` + +以上就是为 MySQL root 用户设置密码。 + +#### 2.2 更改 MySQL 超级用户的身份验证方法 + +默认情况下,Ubuntu 系统的 MySQL root 用户为 MySQL 5.7 版本及更新的版本使用插件 `auth_socket` 设置身份验证。尽管它增强了安全性,但是当你使用任何外部程序(例如 phpMyAdmin)访问数据库服务器时,也会变得更困难。要解决此问题,你需要将身份验证方法从 `auth_socket` 更改为 `mysql_native_password`。为此,请使用以下命令登录到你的 MySQL 提示符下: + +``` +$ sudo mysql +``` + +在 MySQL 提示符下运行以下命令,找到所有 MySQL 当前用户帐户的身份验证方法: + +``` +SELECT user,authentication_string,plugin,host FROM mysql.user; +``` + +输出结果: + +``` ++------------------|-------------------------------------------|-----------------------|-----------+ +| user | authentication_string | plugin | host | ++------------------|-------------------------------------------|-----------------------|-----------+ +| root | | auth_socket | localhost | +| mysql.session | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost | +| mysql.sys | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost | +| debian-sys-maint | *F126737722832701DD3979741508F05FA71E5BA0 | mysql_native_password | localhost | ++------------------|-------------------------------------------|-----------------------|-----------+ +4 rows in set (0.00 sec) +``` + +![][2] + +如你所见,Mysql root 用户使用 `auth_socket` 插件进行身份验证。 + +要将此身份验证更改为 `mysql_native_password` 方法,请在 MySQL 提示符下运行以下命令。 别忘了用你选择的强大唯一的密码替换 `password`。 如果已启用 VALIDATION 插件,请确保已根据当前策略要求使用了强密码。 + +``` +ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password'; +``` + +使用以下命令更新数据库: + +``` +FLUSH PRIVILEGES; +``` + +使用命令再次检查身份验证方法是否已更改: + +``` +SELECT user,authentication_string,plugin,host FROM mysql.user; +``` + +输出结果: + +![][3] + +好!MySQL root 用户就可以使用密码进行身份验证来访问 `mysql shell`。 + +从 MySQL 提示符下退出: + +``` +exit +``` + +### 3. 
安装 PHP + +安装 PHP 请运行: + +``` +$ sudo apt install php libapache2-mod-php php-mysql +``` + +安装 PHP 后,在 Apache 文档根目录中创建 `info.php` 文件。通常,在大多数基于 Debian 的 Linux 发行版中,Apache 文档根目录为 `/var/www/html/` 或 `/var/www/`。Ubuntu 18.04 LTS 系统下,文档根目录是 `/var/www/html/`。 + +在 Apache 根目录中创建 `info.php` 文件: + +``` +$ sudo vi /var/www/html/info.php +``` + +在此文件中编辑如下内容: + +``` + +``` + +然后按下 `ESC` 键并且输入 `:wq` 保存并退出此文件。重新启动 Apache 服务使更改生效。 + +``` +$ sudo systemctl restart apache2 +``` + +#### 3.1 测试 PHP + +打开 Web 浏览器,然后导航到 URL 。 + +你就将看到 PHP 测试页面。 + +![](https://www.ostechnix.com/wp-content/uploads/2019/02/php-test-page.png) + +通常,当用户向 Web 服务器发出请求时,Apache 首先会在文档根目录中查找名为 `index.html` 的文件。如果你想将 Apache 更改为 `php` 文件提供服务而不是其他文件,请将 `dir.conf` 配置文件中的 `index.php` 移至第一个位置,如下所示: + +``` +$ sudo vi /etc/apache2/mods-enabled/dir.conf +``` + +上面的配置文件(`dir.conf`) 内容如下: + +``` + +DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm + + +# vim: syntax=apache ts=4 sw=4 sts=4 sr noet +``` + +将 `index.php` 移动到最前面。更改后,`dir.conf` 文件内容看起来如下所示。 + +``` + +DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm + + +# vim: syntax=apache ts=4 sw=4 sts=4 sr noet +``` + +然后按下 `ESC` 键并且输入 `:wq` 保存并关闭此文件。重新启动 Apache 服务使更改生效。 + +``` +$ sudo systemctl restart apache2 +``` + +#### 3.2 安装 PHP 模块 + +为了增加 PHP 的功能,可以安装一些其他的 PHP 模块。 + +要列出可用的 PHP 模块,请运行: + +``` +$ sudo apt-cache search php- | less +``` + +输出结果: + +![][4] + +使用方向键浏览结果。要退出,请输入 `q` 并按下回车键。 + +要查找任意 `php` 模块的详细信息,例如 `php-gd`,请运行: + +``` +$ sudo apt-cache show php-gd +``` + +安装 PHP 模块请运行: + +``` +$ sudo apt install php-gd +``` + +安装所有的模块(虽然没有必要),请运行: + +``` +$ sudo apt-get install php* +``` + +安装任何 `php` 模块后,请不要忘记重新启动 Apache 服务。要检查模块是否已加载,请在浏览器中打开 `info.php` 文件并检查是否存在。 + +接下来,你可能需要安装数据库管理工具,以通过 Web 浏览器轻松管理数据库。如果是这样,请按照以下链接中的说明安装 `phpMyAdmin`。 + +祝贺你!我们已经在 Ubuntu 服务器中成功配置了 LAMP 套件。 + +-------------------------------------------------------------------------------- + +via: 
https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[stevenzdg988](https://github.com/stevenzdg988) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-1.png +[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-2.png +[4]: http://www.ostechnix.com/wp-content/uploads/2016/06/php-modules.png diff --git a/published/202101/20190215 Make websites more readable with a shell script.md b/published/202101/20190215 Make websites more readable with a shell script.md new file mode 100644 index 0000000000..a84cdfaa40 --- /dev/null +++ b/published/202101/20190215 Make websites more readable with a shell script.md @@ -0,0 +1,261 @@ +[#]: collector: (lujun9972) +[#]: translator: (stevenzdg988) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13052-1.html) +[#]: subject: (Make websites more readable with a shell script) +[#]: via: (https://opensource.com/article/19/2/make-websites-more-readable-shell-script) +[#]: author: (Jim Hall https://opensource.com/users/jim-hall) + +利用 Shell 脚本让网站更具可读性 +====== + +> 测算网站的文本和背景之间的对比度,以确保站点易于阅读。 + +![](https://img.linux.net.cn/data/attachment/album/202101/25/231152ce5ufhjtufxj1eeu.jpg) + +如果希望人们发现你的网站实用,那么他们需要能够阅读它。为文本选择的颜色可能会影响网站的可读性。不幸的是,网页设计中的一种流行趋势是在打印输出文本时使用低对比度的颜色,就像在白色背景上的灰色文本。对于 Web 设计师来说,这也许看起来很酷,但对于许多阅读它的人来说确实很困难。 + +W3C 提供了《Web 内容可访问性指南Web Content Accessibility Guidelines》,其中包括帮助 Web 设计人员选择易于区分文本和背景色的指导。z这就是所谓的“对比度contrast ratio”。 W3C 定义的对比度需要进行一些计算:给定两种颜色,首先计算每种颜色的相对亮度,然后计算对比度。对比度在 1 到 21 的范围内(通常写为 1:1 到 21:1)。对比度越高,文本在背景下的突出程度就越高。例如,白色背景上的黑色文本非常醒目,对比度为 21:1。对比度为 1:1 的白色背景上的白色文本不可读。 + +[W3C 说,正文][1] 的对比度至少应为 4.5:1,标题至少应为 
3:1。但这似乎是最低限度的要求。W3C 还建议正文至少 7:1,标题至少 4.5:1。 + +计算对比度可能比较麻烦,因此最好将其自动化。我已经用这个方便的 Bash 脚本做到了这一点。通常,脚本执行以下操作: + + 1. 获取文本颜色和背景颜色 + 2. 计算相对亮度 + 3. 计算对比度 + +### 获取颜色 + +你可能知道显示器上的每种颜色都可以用红色、绿色和蓝色(R、G 和 B)来表示。要计算颜色的相对亮度,脚本需要知道颜色的红、绿和蓝的各个分量。理想情况下,脚本会将这些信息读取为单独的 R、G 和 B 值。 Web 设计人员可能知道他们喜欢的颜色的特定 RGB 代码,但是大多数人不知道不同颜色的 RGB 值。作为一种替代的方法是,大多数人通过 “red” 或 “gold” 或 “maroon” 之类的名称来引用颜色。 + +幸运的是,GNOME 的 [Zenity][2] 工具有一个颜色选择器应用程序,可让你使用不同的方法选择颜色,然后用可预测的格式 `rgb(R,G,B)` 返回 RGB 值。使用 Zenity 可以轻松获得颜色值: + +``` +color=$( zenity --title 'Set text color' --color-selection --color='black' ) +``` + +如果用户(意外地)单击 “Cancel(取消)” 按钮,脚本将假定一种颜色: + +``` +if [ $? -ne 0 ] ; then +        echo '** color canceled .. assume black' +        color='rgb(0,0,0)' +fi +``` + +脚本对背景颜色值也执行了类似的操作,将其设置为 `$background`。 + +### 计算相对亮度 + +一旦你在 `$color` 中设置了前景色,并在 `$background` 中设置了背景色,下一步就是计算每种颜色的相对亮度。 [W3C 提供了一个算法][3] 用以计算颜色的相对亮度。 + +> 对于 sRGB 色彩空间,一种颜色的相对亮度定义为: +> +> L = 0.2126 * R + 0.7152 * G + 0.0722 * B +> +> R、G 和 B 定义为: +> +> if $R_{sRGB}$ <= 0.03928 then R = $R_{sRGB}$/12.92 +> +> else R = (($R_{sRGB}$+0.055)/1.055) $^{2.4}$ +> +> if $G_{sRGB}$ <= 0.03928 then G = $G_{sRGB}$/12.92 +> +> else G = (($G_{sRGB}$+0.055)/1.055) $^{2.4}$ +> +> if $B_{sRGB}$ <= 0.03928 then B = $B_{sRGB}$/12.92 +> +> else B = (($B_{sRGB}$+0.055)/1.055) $^{2.4}$ +> +> $R_{sRGB}$、$G_{sRGB}$ 和 $B_{sRGB}$ 定义为: +> +> $R_{sRGB}$ = $R_{8bit}$/255 +> +> $G_{sRGB}$ = $G_{8bit}$/255 +> +> $B_{sRGB}$ = $B_{8bit}$/255 + +由于 Zenity 以 `rgb(R,G,B)` 的格式返回颜色值,因此脚本可以轻松拉取分隔开的 R、B 和 G 的值以计算相对亮度。AWK 可以使用逗号作为字段分隔符(`-F,`),并使用 `substr()` 字符串函数从 `rgb(R,G,B)` 中提取所要的颜色值: + +``` +R=$( echo $color | awk -F, '{print substr($1,5)}' ) +G=$( echo $color | awk -F, '{print $2}' ) +B=$( echo $color | awk -F, '{n=length($3); print substr($3,1,n-1)}' ) +``` + +*有关使用 AWK 提取和显示数据的更多信息,[查看 AWK 备忘表][4]* + +最好使用 BC 计算器来计算最终的相对亮度。BC 支持计算中所需的简单 `if-then-else`,这使得这一过程变得简单。但是由于 BC 无法使用非整数指数直接计算乘幂,因此需要使用自然对数替代它做一些额外的数学运算: + +``` +echo "scale=4 +rsrgb=$R/255 +gsrgb=$G/255 +bsrgb=$B/255 
+if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) ) +if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) ) +if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) ) +0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l +``` + +这会将一些指令传递给 BC,包括作为相对亮度公式一部分的 `if-then-else` 语句。接下来 BC 打印出最终值。 + +### 计算对比度 + +利用文本颜色和背景颜色的相对亮度,脚本就可以计算对比度了。 [W3C 确定对比度][5] 是使用以下公式: + +> (L1 + 0.05) / (L2 + 0.05),这里的 +> L1 是颜色较浅的相对亮度, +> L2 是颜色较深的相对亮度。 + +给定两个相对亮度值 `$r1` 和 `$r2`,使用 BC 计算器很容易计算对比度: + +``` +echo "scale=2 +if ( $r1 > $r2 ) { l1=$r1; l2=$r2 } else { l1=$r2; l2=$r1 } +(l1 + 0.05) / (l2 + 0.05)" | bc +``` + +使用 `if-then-else` 语句确定哪个值(`$r1` 或 `$r2`)是较浅还是较深的颜色。BC 执行结果计算并打印结果,脚本可以将其存储在变量中。 + +### 最终脚本 + +通过以上内容,我们可以将所有内容整合到一个最终脚本。 我使用 Zenity 在文本框中显示最终结果: + +``` +#!/bin/sh +# script to calculate contrast ratio of colors + +# read color and background color: +# zenity returns values like 'rgb(255,140,0)' and 'rgb(255,255,255)' + +color=$( zenity --title 'Set text color' --color-selection --color='black' ) +if [ $? -ne 0 ] ; then + echo '** color canceled .. assume black' + color='rgb(0,0,0)' +fi + +background=$( zenity --title 'Set background color' --color-selection --color='white' ) +if [ $? -ne 0 ] ; then + echo '** background canceled .. 
assume white' + background='rgb(255,255,255)' +fi + +# compute relative luminance: + +function luminance() +{ + R=$( echo $1 | awk -F, '{print substr($1,5)}' ) + G=$( echo $1 | awk -F, '{print $2}' ) + B=$( echo $1 | awk -F, '{n=length($3); print substr($3,1,n-1)}' ) + + echo "scale=4 +rsrgb=$R/255 +gsrgb=$G/255 +bsrgb=$B/255 +if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) ) +if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) ) +if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) ) +0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l +} + +lum1=$( luminance $color ) +lum2=$( luminance $background ) + +# compute contrast + +function contrast() +{ + echo "scale=2 +if ( $1 > $2 ) { l1=$1; l2=$2 } else { l1=$2; l2=$1 } +(l1 + 0.05) / (l2 + 0.05)" | bc +} + +rel=$( contrast $lum1 $lum2 ) + +# print results + +( cat<文档管理系统Document Management System)界面内的文档中可以进行协同工作成为可能:查看、编辑、协同编辑、保存文件和用户访问管理,并可以作为服务的示例集成到 Python 应用程序中。 + +### 1、前置需求 + +首先,创建集成过程的关键组件:[ONLYOFFICE 文档服务器][4] 和用 Python 编写的文件管理系统。 + +#### 1.1、ONLYOFFICE 文档服务器 + +要安装 ONLYOFFICE 文档服务器,你可以从多个安装选项中进行选择:编译 GitHub 上可用的源代码,使用 `.deb` 或 `.rpm` 软件包亦或 Docker 镜像。 + +我们推荐使用下面这条命令利用 Docker 映像安装文档服务器和所有必需的依赖。请注意,选择此方法,你需要安装最新的 Docker 版本。 + +``` +docker run -itd -p 80:80 onlyoffice/documentserver-de +``` + +#### 1.2、利用 Python 开发 DMS + +如果已经拥有一个,请检查它是否满足以下条件: + + * 包含需要打开以查看/编辑的保留文件 + * 允许下载文件 + +对于该应用程序,我们将使用 Bottle 框架。我们将使用以下命令将其安装在工作目录中: + +``` +pip install bottle +``` + +然后我们创建应用程序代码 `main.py` 和模板 `index.tpl`。 + +我们将以下代码添加到 `main.py` 文件中: + +``` +from bottle import route, run, template, get, static_file # connecting the framework and the necessary components +@route('/') # setting up routing for requests for / +def index(): + return template('index.tpl') # showing template in response to request + +run(host="localhost", port=8080) # running the application on port 8080 +``` + +一旦我们运行该应用程序,点击 就会在浏览器上呈现一个空白页面 。 
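Bottle 的 `template()` 在这里做的事情,本质上是把变量代入模板文本再返回结果。作为一个概念性的对照(仅为示意,用的是 Python 标准库的 `string.Template`,并非 Bottle 模板引擎的实际实现),变量替换大致是这样:

```python
# 概念示意:模板渲染 = 用实际值替换模板文本中的占位符
# (Bottle 的 SimpleTemplate 引擎功能远比这丰富,此处只演示原理)
from string import Template

tpl = Template("<h1>Hello, $name!</h1>")
html = tpl.substitute(name="world")
print(html)  # <h1>Hello, world!</h1>
```

Bottle 的 `template('index.tpl')` 走的是同样的思路:读入模板文件,把传入的变量代入其中,再把生成的 HTML 作为响应返回。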
+为了使文档服务器能够创建新文档,添加默认文件并在模板中生成其名称列表,我们应该创建一个文件夹 `files` 并将3种类型文件(`.docx`、`.xlsx` 和 `.pptx`)放入其中。 + +要读取这些文件的名称,我们使用 `listdir` 组件(模块): + +``` +from os import listdir +``` + +现在让我们为文件夹中的所有文件名创建一个变量: + +``` +sample_files = [f for f in listdir('files')] +``` + +要在模板中使用此变量,我们需要通过 `template` 方法传递它: + +``` +def index(): + return template('index.tpl', sample_files=sample_files) +``` + +这是模板中的这个变量: + +``` +% for file in sample_files: +
+ {{file}} +
+% end +``` + +我们重新启动应用程序以查看页面上的文件名列表。 + +使这些文件可用于所有应用程序用户的方法如下: + +``` +@get("/files/") +def show_sample_files(filepath): + return static_file(filepath, root="files") +``` + +### 2、查看文档 + +所有组件准备就绪后,让我们添加函数以使编辑者可以利用应用接口操作。 + +第一个选项使用户可以打开和查看文档。连接模板中的文档编辑器 API : + +``` + +``` + +`editor_url` 是文档编辑器的链接接口。 + +打开每个文件以供查看的按钮: + +``` + +``` + +现在我们需要添加带有 `id` 的 `div` 标签,打开文档编辑器: + +``` +
+``` + +要打开编辑器,必须调用调用一个函数: + +``` + +``` + +DocEditor 函数有两个参数:将在其中打开编辑器的元素 `id` 和带有编辑器设置的 `JSON`。 +在此示例中,使用了以下必需参数: + + * `documentType` 由其格式标识(`.docx`、`.xlsx`、`.pptx` 用于相应的文本、电子表格和演示文稿) + * `document.url` 是你要打开的文件链接。 + * `editorConfig.mode`。 + +我们还可以添加将在编辑器中显示的 `title`。 + +接下来,我们可以在 Python 应用程序中查看文档。 + +### 3、编辑文档 + +首先,添加 “Edit”(编辑)按钮: + +``` + +``` + +然后创建一个新功能,打开文件进行编辑。类似于查看功能。 + +现在创建 3 个函数: + +``` + +``` + +`destroyEditor` 被调用以关闭一个打开的编辑器。 + +你可能会注意到,`edit()` 函数中缺少 `editorConfig` 参数,因为默认情况下它的值是:`{"mode":"edit"}`。 + +现在,我们拥有了打开文档以在 Python 应用程序中进行协同编辑的所有功能。 + +### 4、如何在 Python 应用中利用 ONLYOFFICE 协同编辑文档 + +通过在编辑器中设置对同一文档使用相同的 `document.key` 来实现协同编辑。 如果没有此键值,则每次打开文件时,编辑器都会创建编辑会话。 + +为每个文档设置唯一键,以使用户连接到同一编辑会话时进行协同编辑。 密钥格式应为以下格式:`filename +"_key"`。下一步是将其添加到当前文档的所有配置中。 + +``` +document: { + url: "host_url" + '/' + filepath, + title: filename, + key: filename + '_key' +}, +``` + +### 5、如何在 Python 应用中利用 ONLYOFFICE 保存文档 + +每次我们更改并保存文件时,ONLYOFFICE 都会存储其所有版本。 让我们仔细看看它是如何工作的。 关闭编辑器后,文档服务器将构建要保存的文件版本并将请求发送到 `callbackUrl` 地址。 该请求包含 `document.key`和指向刚刚构建的文件的链接。 + +`document.key` 用于查找文件的旧版本并将其替换为新版本。 由于这里没有任何数据库,因此仅使用 `callbackUrl` 发送文件名。 + +在 `editorConfig.callbackUrl` 的设置中指定 `callbackUrl` 参数并将其添加到 `edit()` 方法中: + +``` + function edit(filename) { + const filepath = 'files/' + filename; + if (editor) { + editor.destroyEditor() + } + editor = new DocsAPI.DocEditor("editor", + { + documentType: get_file_type(filepath), + document: { + url: "host_url" + '/' + filepath, + title: filename, + key: filename + '_key' + } + , + editorConfig: { + mode: 'edit', + callbackUrl: "host_url" + '/callback' + '&filename=' + filename // add file name as a request parameter + } + }); + } +``` + +编写一种方法,在获取到 POST 请求发送到 `/callback` 地址后将保存文件: + +``` +@post("/callback") # processing post requests for /callback +def callback(): + if request.json['status'] == 2: + file = requests.get(request.json['url']).content + with open('files/' + request.query['filename'], 'wb') as f: + f.write(file) + return 
"{\"error\":0}" +​ +``` + +`# status 2` 是已生成的文件,当我们关闭编辑器时,新版本的文件将保存到存储器中。 + +### 6、管理用户 + +如果应用中有用户,并且你需要查看谁在编辑文档,请在编辑器的配置中输入其标识符(`id`和`name`)。 + +在界面中添加选择用户的功能: + +``` + +``` + +如果在标记 ` + + + + +``` + +要在浏览器中运行此文件,请双击文件或打开你喜欢的浏览器,点击菜单,然后选择**文件->打开文件**。(如果使用 Brackets 软件,也可以使用角落处的闪电图标在浏览器中打开文件)。 + +### 生成伪随机数 + +猜谜游戏的第一步是为玩家生成一个数字供玩家猜测。JavaScript 包含几个内置的全局对象,可帮助你编写代码。要生成随机数,请使用 `Math` 对象。 + +JavaScript中的 [Math][17] 具有处理和数学相关的属性和功能。你将使用两个数学函数来生成随机数,供你的玩家猜测。 + +[Math.random()][18],会将生成一个介于 0 和 1 之间的伪随机数。(`Math.random` 包含 0 但不包含 1。这意味着该函数可以生成 0 ,永远不会产生 1) + +对于此游戏,请将随机数设置在 1 到 100 之间以缩小玩家的选择范围。取刚刚生成的小数,然后乘以 100,以产生一个介于 0 到……甚至不是 100 之间的小数。至此,你将需要其他步骤来解决这个问题。 + +现在,你的数字仍然是小数,但你希望它是一个整数。为此,你可以使用属于 `Math` 对象的另一个函数 [Math.floor()][19]。`Math.floor()` 的目的是返回小于或等于你作为参数指定的数字的最大整数,这意味着它会四舍五入为最接近的整数: + +``` +Math.floor(Math.random() * 100) +``` + +这样你将得到 0 到 99 之间的整数,这不是你想要的范围。你可以在最后一步修复该问题,即在结果中加 1。瞧!现在,你有一个(有点)随机生成的数字,介于 1 到 100 之间: + +``` +Math.floor(Math.random() * 100) + 1 +``` + +### 变量 + +现在,你需要存储随机生成的数字,以便可以将其与玩家的猜测进行比较。为此,你可以将其存储到一个 **变量**。 + +JavaScript 具有不同类型的变量,你可以选择这些类型,具体取决于你要如何使用该变量。对于此游戏,请使用 `const` 和 `let`。 + + * `let` 用于指示变量在整个程序中可以改变。 + * `const` 用于指示变量不应该被修改。 + +`const` 和 `let` 还有很多要说的,但现在知道这些就足够了。 + +随机数在游戏中仅生成一次,因此你将使用 `const` 变量来保存该值。你想给变量起一个清楚地表明要存储什么值的名称,因此将其命名为 `randomNumber`: + +``` +const randomNumber +``` + +有关命名的注意事项:JavaScript 中的变量和函数名称以驼峰形式编写。如果只有一个单词,则全部以小写形式书写。如果有多个单词,则第一个单词均为小写,其他任何单词均以大写字母开头,且单词之间没有空格。 + +### 打印到控制台 + +通常,你不想向任何人显示随机数,但是开发人员可能想知道生成的数字以使用它来帮助调试代码。 使用 JavaScript,你可以使用另一个内置函数 [console.log()][20] 将数字输出到浏览器的控制台。 + +大多数浏览器都包含开发人员工具,你可以通过按键盘上的 `F12` 键来打开它们。从那里,你应该看到一个 **控制台** 标签。打印到控制台的所有信息都将显示在此处。由于到目前为止编写的代码将在浏览器加载后立即运行,因此,如果你查看控制台,你应该会看到刚刚生成的随机数! 
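顺带对照一下:`Math.floor(Math.random() * 100) + 1` 这种“先取 [0, 1) 的随机小数、放大、向下取整、再加一”的思路并不限于 JavaScript。下面用 Python 把同样的四个步骤写出来(仅为示意):

```python
import math
import random

# 与 Math.floor(Math.random() * 100) + 1 相同的步骤:
fraction = random.random()       # [0, 1) 之间的伪随机小数(含 0,不含 1)
scaled = fraction * 100          # 放大到 [0, 100) 之间的小数
number = math.floor(scaled) + 1  # 向下取整得到 0~99,加 1 后落在 1~100
print(1 <= number <= 100)        # True
```

当然,Python 里更直接的写法是 `random.randint(1, 100)`,但分步写出来更便于和上面的 JavaScript 逐行对应。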
+ +![Javascript game with console][21] + +### 函数 + +接下来,你需要一种方法来从数字输入字段中获得玩家的猜测,将其与你刚刚生成的随机数进行比较,并向玩家提供反馈,让他们知道他们是否正确猜到了。为此,编写一个函数。 **函数** 是执行一定任务的代码块。函数是可以重用的,这意味着如果你需要多次运行相同的代码,则可以调用函数,而不必重写执行任务所需的所有步骤。 + +根据你使用的 JavaScript 版本,有许多不同的方法来编写或声明函数。由于这是该语言的基础入门,因此请使用基本函数语法声明函数。 + +以关键字 `function` 开头,然后起一个函数名。好的做法是使用一个描述该函数的功能的名称。在这个例子中,你正在检查玩家的猜测的数,因此此函数的名字可以是 `checkGuess`。在函数名称之后,写上一组小括号,然后写上一组花括号。 你将在以下花括号之间编写函数的主体: + +``` +function checkGuess() {} +``` + +### 使用 DOM + +JavaScript 的目的之一是与网页上的 HTML 交互。它通过文档对象模型(DOM)进行此操作,DOM 是 JavaScript 用于访问和更改网页信息的对象。现在,你需要从 HTML 中获取数字输入字段中玩家的猜测。你可以使用分配给 HTML 元素的 `id` 属性(在这种情况下为 `guess`)来做到这一点: + +``` + +``` + +JavaScript 可以通过访问玩家输入到数字输入字段中的数来获取其值。你可以通过引用元素的 ID 并在末尾添加 `.value` 来实现。这次,使用 `let` 定义的变量来保存用户的猜测值: + +``` +let myGuess = guess.value +``` + +玩家在数字输入字段中输入的任何数字都将被分配给 `checkGuess` 函数中的 `myGuess` 变量。 + +### 条件语句 + +下一步是将玩家的猜测与游戏产生的随机数进行比较。你还想给玩家反馈,让他们知道他们的猜测是太高,太低还是正确。 + +你可以使用一系列条件语句来决定玩家将收到的反馈。**条件语句** 在运行代码块之前检查是否满足条件。如果不满足条件,则代码停止,继续检查下一个条件,或者继续执行其余代码,而无需执行条件块中的代码: + +``` +if (myGuess === randomNumber){ +  feedback.textContent = "You got it right!" +} +else if(myGuess > randomNumber) { +  feedback.textContent = "Your guess was " + myGuess + ". That's too high. Try Again!" +} +else if(myGuess < randomNumber) { +  feedback.textContent = "Your guess was " + myGuess + ". That's too low. Try Again!" +} +``` + +第一个条件块使用比较运算符 `===` 将玩家的猜测与游戏生成的随机数进行比较。比较运算符检查右侧的值,将其与左侧的值进行比较,如果匹配则返回布尔值 `true`,否则返回布尔值 `false`。 + +如果数字匹配(猜对了!),为了让玩家知道。通过将文本添加到具有 `id` 属性 `feedback` 的 `
<p>
` 标记中来操作 DOM。就像上面的 `guess.value` 一样,除了不是从 DOM 获取信息,而是更改其中的信息。`
<p>
` 元素没有像 `` 元素那样的值,而是具有文本,因此请使用 `.textContent` 访问元素并设置要显示的文本: + +``` +feedback.textContent = "You got it right!" +``` + +当然,玩家很有可能在第一次尝试时就猜错了,因此,如果 `myGuess` 和 `randomNumber` 不匹配,请给玩家一个线索,以帮助他们缩小猜测范围。如果第一个条件失败,则代码将跳过该 `if` 语句中的代码块,并检查下一个条件是否为 `true`。 这使你进入 `else if` 块: + +``` +else if(myGuess > randomNumber) { +  feedback.textContent = "Your guess was " + myGuess + ". That's too high. Try Again!" +} +``` + +如果你将其作为句子阅读,则可能是这样的:“如果玩家的猜测等于随机数,请让他们知道他们猜对了。否则,请检查玩家的猜测是否大于 `randomNumber`,如果是,则显示玩家的猜测并告诉他们太高了。” + +最后一种可能性是玩家的猜测低于随机数。 要检查这一点,再添加一个 `else if` 块: + +``` +else if(myGuess < randomNumber) { +  feedback.textContent = "Your guess was " + myGuess + ". That's too low. Try Again!" +} +``` + +### 用户事件和事件监听器 + +如果你看上面的代码,则会看到某些代码在页面加载时自动运行,但有些则不会。你想在玩游戏之前生成随机数,但是你不想在玩家将数字输入到数字输入字段并准备检查它之前检查其猜测。 + +生成随机数并将其打印到控制台的代码不在函数的范围内,因此它将在浏览器加载脚本时自动运行。 但是,要使函数内部的代码运行,你必须对其进行调用。 + +调用函数有几种方法。在此,你希望该函数在用户单击 “Check My Guess” 按钮时运行。单击按钮将创建一个用户事件,然后 JavaScript 可以 “监听” 这个事件,以便知道何时需要运行函数。 + +代码的最后一行将事件侦听器添加到按钮上,以在单击按钮时调用函数。当它“听到”该事件时,它将运行分配给事件侦听器的函数: + +``` +submitGuess.addEventListener('click', checkGuess) +``` + +就像访问 DOM 元素的其他实例一样,你可以使用按钮的 ID 告诉 JavaScript 与哪个元素进行交互。 然后,你可以使用内置的 `addEventListener` 函数来告诉 JavaScript 要监听的事件。 + +你已经看到了带有参数的函数,但花点时间看一下它是如何工作的。参数是函数执行其任务所需的信息。并非所有函数都需要参数,但是 `addEventListener` 函数需要两个参数。它采用的第一个参数是将为其监听的用户事件的名称。用户可以通过多种方式与 DOM 交互,例如键入、移动鼠标,键盘上的 `TAB` 键和粘贴文本。在这种情况下,你正在监听的用户事件是单击按钮,因此第一个参数将是 `click`。 + +`addEventListener`的第二个所需的信息是用户单击按钮时要运行的函数的名称。 这里我们需要 `checkGuess` 函数。 + +现在,当玩家按下 “Check My Guess” 按钮时,`checkGuess` 函数将获得他们在数字输入字段中输入的值,将其与随机数进行比较,并在浏览器中显示反馈,以使玩家知道他们猜的怎么样。 太棒了!你的游戏已准备就绪。 + +### 学习 JavaScript 以获取乐趣和收益 + +这一点点的平凡无奇的 JavaScript 只是这个庞大的生态系统所提供功能的一小部分。这是一种值得花时间投入学习的语言,我鼓励你继续挖掘并学习更多。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/learn-javascript + +作者:[Mandy Kendall][a] +选题:[lujun9972][b] +译者:[amwps290](https://github.com/amwps290) +校对:[wxy](https://github.com/wxy) + 
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mkendall +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_javascript.jpg?itok=60evKmGl "Javascript code close-up with neon graphic overlay" +[2]: https://opensource.com/tags/javascript +[3]: https://opensource.com/article/20/11/reactjs-tutorial +[4]: https://opensource.com/article/18/9/open-source-javascript-chart-libraries +[5]: https://opensource.com/article/20/12/brackets +[6]: http://december.com/html/4/element/html.html +[7]: http://december.com/html/4/element/head.html +[8]: http://december.com/html/4/element/meta.html +[9]: http://december.com/html/4/element/title.html +[10]: http://december.com/html/4/element/body.html +[11]: http://december.com/html/4/element/h1.html +[12]: http://december.com/html/4/element/p.html +[13]: http://december.com/html/4/element/label.html +[14]: http://december.com/html/4/element/input.html +[15]: http://december.com/html/4/element/form.html +[16]: http://december.com/html/4/element/script.html +[17]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math +[18]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random +[19]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/floor +[20]: https://developer.mozilla.org/en-US/docs/Web/API/Console/log +[21]: https://opensource.com/sites/default/files/javascript-game-with-console.png "Javascript game with console" diff --git a/published/20210121 How Nextcloud is the ultimate open source productivity suite.md b/published/20210121 How Nextcloud is the ultimate open source productivity suite.md new file mode 100644 index 0000000000..397dfea81f --- /dev/null +++ b/published/20210121 How Nextcloud is the ultimate open source productivity suite.md @@ -0,0 +1,70 @@ +[#]: collector: 
(lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13077-1.html) +[#]: subject: (How Nextcloud is the ultimate open source productivity suite) +[#]: via: (https://opensource.com/article/21/1/nextcloud-productivity) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +Nextcloud 是如何成为终极开源生产力套件的 +====== + +> Nextcloud 可以取代你用于协作、组织和任务管理的许多在线应用。 + +![](https://img.linux.net.cn/data/attachment/album/202102/02/121553uhl3pjljjkhj0h8p.jpg) + +在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的第十一天。 + +基于 Web 的服务几乎可以在任何地方访问你的数据,它们每小时可以支持数百万用户。不过对于我们中的一些人来说,由于各种原因,运行自己的服务比使用大公司的服务更可取。也许我们的工作是受监管的或有明确安全要求。也许我们有隐私方面的考虑,或者只是喜欢能够自己构建、运行和修复事物。不管是什么情况,[Nextcloud][2] 都可以提供你所需要的大部分服务,但是是在你自己的硬件上。 + +![NextCloud Dashboard displaying service options][3] + +*Nextcloud 控制面板(Kevin Sonney, [CC BY-SA 4.0][4])* + +大多数时候,当我们想到 Nextcloud 时,我们会想到文件共享和同步,类似于 Dropbox、OneDrive 和 Google Drive 等商业产品。然而,如今,它是一个完整的生产力套件,拥有电子邮件客户端、日历、任务和笔记本。 + +有几种方法可以安装和运行 Nextcloud。你可以把它安装到裸机服务器上、在 Docker 容器中运行,或者作为虚拟机运行。如果可以考虑,还有一些托管服务将为你运行 Nextcloud。最后,有适用于所有主流操作系统的应用,包括移动应用,以便随时访问。 + +![Nextcloud virtual machine][5] + +*Nextcloud 虚拟机(Kevin Sonney, [CC BY-SA 4.0][4])* + +默认情况下,Nextcloud 会安装文件共享和其他一些相关应用(或附加组件)。你可以在管理界面中找到“应用”页面,这里允许你安装单个附加组件和一些预定义的相关应用捆绑。对我而言,我选择了 “Groupware Bundle”,其中包括“邮件”、“日历”、“联系人”和 “Deck”。“Deck” 是一个轻量级的看板,用于处理任务。我也安装了“记事本”和“任务”应用。 + +Nextcloud “邮件” 是一个非常直白的 IMAP 邮件客户端。虽然 Nextcloud 没有将 IMAP 或 SMTP 服务器作为软件包的一部分,但你可以很容易地在操作系统中添加一个或使用远程服务。“日历”应用是相当标准的,也允许你订阅远程日历。有一个缺点是,远程日历(例如,来自大型云提供商)是只读的,所以你可以查看但不能修改它们。 + +![NextCoud App Interface][6] + +*Nextcloud 应用界面 (Kevin Sonney, [CC BY-SA 4.0][4])* + +“记事本” 是一个简单的文本记事本,允许你创建和更新简短的笔记、日记和相关的东西。“任务” 是一款待办事项应用,支持多个列表、任务优先级、完成百分比以及其他一些用户期待的标准功能。如果你安装了 “Deck”,它的任务卡也会被列出来。每个看板都会显示自己的列表,所以你可以使用 “Deck” 或 “任务” 来跟踪完成的内容。 + +“Deck” 本身就是一个看板应用,将任务以卡片的形式呈现在流程中。如果你喜欢看板流程,它是一个追踪进度的优秀应用。 + +![Taking notes][7] + +*做笔记 (Kevin Sonney, [CC BY-SA 4.0][4])* + +Nextcloud 
中所有的应用都原生支持通过标准协议进行共享。与一些类似的解决方案不同,它的分享并不是为了完成功能列表中的一项而加上去的。分享是 Nextcloud 存在的主要原因之一,所以使用起来非常简单。你还可以将链接分享到社交媒体、通过电子邮件分享等。你可以用一个 Nextcloud 取代多个在线服务,它在任何地方都可以访问,以协作为先。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/nextcloud-productivity + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife) +[2]: https://nextcloud.com/ +[3]: https://opensource.com/sites/default/files/day11-image1_0.png +[4]: https://creativecommons.org/licenses/by-sa/4.0/ +[5]: https://opensource.com/sites/default/files/pictures/nextcloud-vm.png (Nextcloud virtual machine) +[6]: https://opensource.com/sites/default/files/pictures/nextcloud-app-interface.png (NextCoud App Interface) +[7]: https://opensource.com/sites/default/files/day11-image3.png (Taking notes in Nextcloud) diff --git a/published/20210122 3 tips for automating your email filters.md b/published/20210122 3 tips for automating your email filters.md new file mode 100644 index 0000000000..f3644bb0e5 --- /dev/null +++ b/published/20210122 3 tips for automating your email filters.md @@ -0,0 +1,60 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13073-1.html) +[#]: subject: (3 tips for automating your email filters) +[#]: via: (https://opensource.com/article/21/1/email-filter) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +3 个自动化电子邮件过滤器的技巧 +====== + +> 通过这些简单的建议,减少你的电子邮件并让你的生活更轻松。 + 
+![](https://img.linux.net.cn/data/attachment/album/202102/01/103638ozdejmy6eycm6omx.jpg) + +在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的第十二天。 + +如果有一件事是我喜欢的,那就是自动化。只要有机会,我就会把小任务进行自动化。早起打开鸡舍的门?我买了一扇门,可以在日出和日落时开门和关门。每天从早到晚实时监控鸡群?用 Node-RED 和 [OBS-Websockets][2] 稍微花点时间,就能搞定。 + +我们还有电子邮件。几天前,我写过关于处理邮件的文章,也写过关于标签和文件夹的文章。只要做一点前期的工作,你就可以在邮件进来的时候,你就可以自动摆脱掉大量管理邮件的开销。 + +![Author has 480 filters][3] + +*是的,我有很多过滤器。(Kevin Sonney, [CC BY-SA 4.0][4])* + +有两种主要方式来过滤你的电子邮件:在服务端或者客户端上。我更喜欢在服务端上做,因为我不断地在尝试新的和不同的电子邮件客户端。(不,真的,我光这个星期就已经使用了五个不同的客户端。我可能有问题。) + +无论哪种方式,我都喜欢用电子邮件过滤规则做几件事,以使我的电子邮件更容易浏览,并保持我的收件箱不混乱。 + + 1. 将不紧急的邮件移到“稍后阅读”文件夹中。对我而言,这包括来自社交网络、新闻简报和邮件列表的通知。 + 2. 按列表或主题给消息贴上标签。我属于几个组织,虽然它们经常会被放在“稍后阅读”文件夹中,但我会添加第二个或第三个标签,以说明该来源或项目的内容,以帮助搜索时找到相关的东西。 + 3. 不要把规则搞得太复杂。这个想法让我困难了一段时间。我想把邮件发送到某个文件夹的所有可能情况都加到一个规则里。如果有什么问题或需要添加或删除的东西,有一个大规则只是让它更难修复。 + +![Unsubscribe from email][5] + +*点击它,点击它就行!(Kevin Sonney, [CC BY-SA 4.0][4])* + +说了这么多,还有一件事我一直在做,它有助于减少我花在电子邮件上的时间:退订邮件。两年前我感兴趣的那个邮件列表已经不感兴趣了,所以就不订阅了。产品更新通讯是我去年停止使用的商品?退订!这一直在积极解放我。我每年都会试着评估几次列表中的邮件信息是否(仍然)有用。 + +过滤器和规则可以是非常强大的工具,让你的电子邮件保持集中,减少花在它们身上的时间。而点击取消订阅按钮是一种解放。试试就知道了! 
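上面三条建议里的前两条(移到“稍后阅读”、按来源打标签)本质上就是一张简单的规则表:每条规则只匹配一个特征、只做一件事。下面用 Python 勾勒这个思路(纯属示意,发件人地址、文件夹和标签都是虚构的,并非任何邮件客户端的真实 API):

```python
# 极简过滤规则表:每条规则只做一件事,出问题时便于单独修改或删除
RULES = [
    {"match": "newsletter@", "folder": "稍后阅读", "labels": ["新闻简报"]},
    {"match": "@lists.example.org", "folder": "稍后阅读", "labels": ["邮件列表"]},
]

def apply_rules(sender):
    """按顺序匹配发件人,返回 (目标文件夹, 标签列表);未命中则留在收件箱。"""
    for rule in RULES:
        if rule["match"] in sender:
            return rule["folder"], rule["labels"]
    return "收件箱", []

print(apply_rules("newsletter@shop.example.com"))  # ('稍后阅读', ['新闻简报'])
print(apply_rules("alice@example.com"))            # ('收件箱', [])
```

这也正是“不要把规则搞得太复杂”的好处:想退订或调整某个来源时,删掉或改掉对应的那一条规则即可,不会牵连其他规则。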
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/email-filter + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation) +[2]: https://opensource.com/article/20/6/obs-websockets-streaming +[3]: https://opensource.com/sites/default/files/day12-image1_0.png +[4]: https://creativecommons.org/licenses/by-sa/4.0/ +[5]: https://opensource.com/sites/default/files/day12-image2_0.png diff --git a/published/20210125 Explore binaries using this full-featured Linux tool.md b/published/20210125 Explore binaries using this full-featured Linux tool.md new file mode 100644 index 0000000000..ffa4c8554b --- /dev/null +++ b/published/20210125 Explore binaries using this full-featured Linux tool.md @@ -0,0 +1,635 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13074-1.html) +[#]: subject: (Explore binaries using this full-featured Linux tool) +[#]: via: (https://opensource.com/article/21/1/linux-radare2) +[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe) + +全功能的二进制文件分析工具 Radare2 指南 +====== + +> Radare2 是一个为二进制分析定制的开源工具。 + +![](https://img.linux.net.cn/data/attachment/album/202102/01/112611baw4gpqlch10ps1c.jpg) + +在《[Linux 上分析二进制文件的 10 种方法][2]》中,我解释了如何使用 Linux 上丰富的原生工具集来分析二进制文件。但如果你想进一步探索你的二进制文件,你需要一个为二进制分析定制的工具。如果你是二进制分析的新手,并且大多使用的是脚本语言,这篇文章《[GNU binutils 里的九种武器][3]》可以帮助你开始学习编译过程和什么是二进制。 + +### 为什么我需要另一个工具? 
+
+如果现有的 Linux 原生工具也能做类似的事情,你自然会问为什么需要另一个工具。嗯,这和你用手机做闹钟、做笔记、做相机、听音乐、上网、偶尔打电话和接电话的原因是一样的。以前,使用单独的设备和工具处理这些功能 —— 比如拍照的实体相机,记笔记的小记事本,起床的床头闹钟等等。对用户来说,有一个设备来做多件(但相关的)事情是*方便的*。另外,杀手锏就是独立功能之间的*互操作性*。
+
+同样,即使许多 Linux 工具都有特定的用途,但在一个工具中捆绑类似(和更好)的功能是非常有用的。这就是为什么我认为 [Radare2][4] 应该是你需要处理二进制文件时的首选工具。
+
+根据其 [GitHub 简介][5],Radare2(也称为 r2)是一个“类 Unix 系统上的逆向工程框架和命令行工具集”。它名字中的 “2” 是因为这个版本是从头开始重写的,使其更加模块化。
+
+### 为什么选择 Radare2?
+
+有大量(非原生的)Linux 工具可用于二进制分析,为什么要选择 Radare2 呢?我的理由很简单。
+
+首先,它是一个开源项目,有一个活跃而健康的社区。如果你正在寻找新功能或需要 bug 修复的支持,这很重要。
+
+其次,Radare2 可以在命令行上使用,而且它有一个功能丰富的图形用户界面(GUI)环境,叫做 Cutter,适合那些对 GUI 比较熟悉的人。作为一个长期使用 Linux 的用户,我习惯于在 shell 上输入。虽然熟悉 Radare2 的命令稍微有一点学习曲线,但我会把它比作 [学习 Vim][6]。你可以先学习基本的东西,一旦你掌握了它们,你就可以继续学习更高级的东西。很快,它就变成了肌肉记忆。
+
+第三,Radare2 通过插件可以很好地支持外部工具。例如,最近开源的 [Ghidra][7] 二进制分析和逆向工具(reversing tool)很受欢迎,因为它的反编译器功能是逆向软件时的关键要素。你可以直接从 Radare2 控制台安装 Ghidra 反编译器并使用,这很神奇,让你两全其美。
+
+### 开始使用 Radare2
+
+要安装 Radare2,只需克隆其存储库并运行 `user.sh` 脚本。如果你的系统上还没有一些预备软件包,你可能需要安装它们。一旦安装完成,运行 `r2 -v` 命令来查看 Radare2 是否被正确安装:
+
+```
+$ git clone https://github.com/radareorg/radare2.git
+$ cd radare2
+$ ./sys/user.sh
+
+# version
+
+$ r2 -v
+radare2 4.6.0-git 25266 @ linux-x86-64 git.4.4.0-930-g48047b317
+commit: 48047b3171e6ed0480a71a04c3693a0650d03543 build: 2020-11-17__09:31:03
+$
+```
+
+#### 获取二进制测试样本
+
+现在 `r2` 已经安装好了,你需要一个样本二进制程序来试用它。你可以使用任何系统二进制文件(`ls`、`bash` 等),但为了使本教程的内容简单,请编译以下 C 程序:
+
+```
+$ cat adder.c
+```
+
+```
+#include <stdio.h>
+
+int adder(int num) {
+ return num + 1;
+}
+
+int main() {
+ int res, num1 = 100;
+ res = adder(num1);
+ printf("Number now is : %d\n", res);
+ return 0;
+}
+```
+
+```
+$ gcc adder.c -o adder
+$ file adder
+adder: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=9d4366f7160e1ffb46b14466e8e0d70f10de2240, not stripped
+$ ./adder
+Number now is : 101
+```
+
+#### 加载二进制文件
+
+要分析二进制文件,你必须在 Radare2 中加载它。通过提供文件名作为 `r2` 命令的一个命令行参数来加载它。你会进入一个独立的 Radare2 控制台,这与你的 
shell 不同。要退出控制台,你可以输入 `Quit` 或 `Exit` 或按 `Ctrl+D`: + +``` +$ r2 ./adder + -- Learn pancake as if you were radare! +[0x004004b0]> quit +$ +``` + +#### 分析二进制 + +在你探索二进制之前,你必须让 `r2` 为你分析它。你可以通过在 `r2` 控制台中运行 `aaa` 命令来实现: + +``` +$ r2 ./adder + -- Sorry, radare2 has experienced an internal error. +[0x004004b0]> +[0x004004b0]> +[0x004004b0]> aaa +[x] Analyze all flags starting with sym. and entry0 (aa) +[x] Analyze function calls (aac) +[x] Analyze len bytes of instructions for references (aar) +[x] Check for vtables +[x] Type matching analysis for all functions (aaft) +[x] Propagate noreturn information +[x] Use -AA or aaaa to perform additional experimental analysis. +[0x004004b0]> +``` + +这意味着每次你选择一个二进制文件进行分析时,你必须在加载二进制文件后输入一个额外的命令 `aaa`。你可以绕过这一点,在命令后面跟上 `-A` 来调用 `r2`;这将告诉 `r2` 为你自动分析二进制: + +``` +$ r2 -A ./adder +[x] Analyze all flags starting with sym. and entry0 (aa) +[x] Analyze function calls (aac) +[x] Analyze len bytes of instructions for references (aar) +[x] Check for vtables +[x] Type matching analysis for all functions (aaft) +[x] Propagate noreturn information +[x] Use -AA or aaaa to perform additional experimental analysis. + -- Already up-to-date. 
+[0x004004b0]> +``` + +#### 获取一些关于二进制的基本信息 + +在开始分析一个二进制文件之前,你需要一些背景信息。在许多情况下,这可以是二进制文件的格式(ELF、PE 等)、二进制的架构(x86、AMD、ARM 等),以及二进制是 32 位还是 64 位。方便的 `r2` 的 `iI` 命令可以提供所需的信息: + +``` +[0x004004b0]> iI +arch x86 +baddr 0x400000 +binsz 14724 +bintype elf +bits 64 +canary false +class ELF64 +compiler GCC: (GNU) 8.3.1 20190507 (Red Hat 8.3.1-4) +crypto false +endian little +havecode true +intrp /lib64/ld-linux-x86-64.so.2 +laddr 0x0 +lang c +linenum true +lsyms true +machine AMD x86-64 architecture +maxopsz 16 +minopsz 1 +nx true +os linux +pcalign 0 +pic false +relocs true +relro partial +rpath NONE +sanitiz false +static false +stripped false +subsys linux +va true + +[0x004004b0]> +[0x004004b0]> +``` + +### 导入和导出 + +通常情况下,当你知道你要处理的是什么样的文件后,你就想知道二进制程序使用了什么样的标准库函数,或者了解程序的潜在功能。在本教程中的示例 C 程序中,唯一的库函数是 `printf`,用来打印信息。你可以通过运行 `ii` 命令看到这一点,它显示了该二进制所有导入的库: + +``` +[0x004004b0]> ii +[Imports] +nth vaddr bind type lib name +――――――――――――――――――――――――――――――――――――― +1 0x00000000 WEAK NOTYPE _ITM_deregisterTMCloneTable +2 0x004004a0 GLOBAL FUNC printf +3 0x00000000 GLOBAL FUNC __libc_start_main +4 0x00000000 WEAK NOTYPE __gmon_start__ +5 0x00000000 WEAK NOTYPE _ITM_registerTMCloneTable +``` + +该二进制也可以有自己的符号、函数或数据。这些函数通常显示在 `Exports` 下。这个测试的二进制导出了两个函数:`main` 和 `adder`。其余的函数是在编译阶段,当二进制文件被构建时添加的。加载器需要这些函数来加载二进制文件(现在不用太关心它们): + +``` +[0x004004b0]> +[0x004004b0]> iE +[Exports] + +nth paddr vaddr bind type size lib name +―――――――――――――――――――――――――――――――――――――――――――――――――――――― +82 0x00000650 0x00400650 GLOBAL FUNC 5 __libc_csu_fini +85 ---------- 0x00601024 GLOBAL NOTYPE 0 _edata +86 0x00000658 0x00400658 GLOBAL FUNC 0 _fini +89 0x00001020 0x00601020 GLOBAL NOTYPE 0 __data_start +90 0x00000596 0x00400596 GLOBAL FUNC 15 adder +92 0x00000670 0x00400670 GLOBAL OBJ 0 __dso_handle +93 0x00000668 0x00400668 GLOBAL OBJ 4 _IO_stdin_used +94 0x000005e0 0x004005e0 GLOBAL FUNC 101 __libc_csu_init +95 ---------- 0x00601028 GLOBAL NOTYPE 0 _end +96 0x000004e0 0x004004e0 GLOBAL FUNC 5 
_dl_relocate_static_pie +97 0x000004b0 0x004004b0 GLOBAL FUNC 47 _start +98 ---------- 0x00601024 GLOBAL NOTYPE 0 __bss_start +99 0x000005a5 0x004005a5 GLOBAL FUNC 55 main +100 ---------- 0x00601028 GLOBAL OBJ 0 __TMC_END__ +102 0x00000468 0x00400468 GLOBAL FUNC 0 _init + +[0x004004b0]> +``` + +### 哈希信息 + +如何知道两个二进制文件是否相似?你不能只是打开一个二进制文件并查看里面的源代码。在大多数情况下,二进制文件的哈希值(md5sum、sha1、sha256)是用来唯一识别它的。你可以使用 `it` 命令找到二进制的哈希值: + +``` +[0x004004b0]> it +md5 7e6732f2b11dec4a0c7612852cede670 +sha1 d5fa848c4b53021f6570dd9b18d115595a2290ae +sha256 13dd5a492219dac1443a816ef5f91db8d149e8edbf26f24539c220861769e1c2 +[0x004004b0]> +``` + +### 函数 + +代码按函数分组;要列出二进制中存在的函数,请运行 `afl` 命令。下面的列表显示了 `main` 函数和 `adder` 函数。通常,以 `sym.imp` 开头的函数是从标准库(这里是 glibc)中导入的: + +``` +[0x004004b0]> afl +0x004004b0    1 46           entry0 +0x004004f0    4 41   -> 34   sym.deregister_tm_clones +0x00400520    4 57   -> 51   sym.register_tm_clones +0x00400560    3 33   -> 32   sym.__do_global_dtors_aux +0x00400590    1 6            entry.init0 +0x00400650    1 5            sym.__libc_csu_fini +0x00400658    1 13           sym._fini +0x00400596    1 15           sym.adder +0x004005e0    4 101          loc..annobin_elf_init.c +0x004004e0    1 5            loc..annobin_static_reloc.c +0x004005a5    1 55           main +0x004004a0    1 6            sym.imp.printf +0x00400468    3 27           sym._init +[0x004004b0]> +``` + +### 交叉引用 + +在 C 语言中,`main` 函数是一个程序开始执行的地方。理想情况下,其他函数都是从 `main` 函数调用的,在退出程序时,`main` 函数会向操作系统返回一个退出状态。这在源代码中是很明显的,然而,二进制程序呢?如何判断 `adder` 函数的调用位置呢? 
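这个问题除了在交互式控制台里回答,也可以放进脚本里:radare2 提供了官方 Python 绑定 `r2pipe`,可以把控制台命令交给脚本批量执行并拿到文本输出。下面是一个示意(假设系统里已安装 radare2 和 `r2pipe`,且当前目录有上文编译出的 `adder`;两者缺一时只打印将要执行的命令):

```python
import os
import shutil

# 与交互式控制台一致的命令序列:先分析,再做交叉引用查询
cmds = ["aaa", "axt sym.adder", "axt main"]

if shutil.which("r2") and os.path.exists("./adder"):
    import r2pipe                  # radare2 的官方 Python 绑定
    r2 = r2pipe.open("./adder")    # 相当于在 shell 里运行 r2 ./adder
    for c in cmds:
        print(c, "->", r2.cmd(c))  # r2.cmd() 返回该命令的文本输出
    r2.quit()
else:
    print("未找到 r2 或 ./adder,待执行的命令序列:", cmds)
```

当你要对一批二进制重复同样的查询时,这种脚本化方式比逐个在控制台里敲命令省事得多。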
+ +你可以使用 `axt` 命令,后面加上函数名,看看 `adder` 函数是在哪里调用的;如下图所示,它是从 `main` 函数中调用的。这就是所谓的交叉引用cross-referencing。但什么调用 `main` 函数本身呢?从下面的 `axt main` 可以看出,它是由 `entry0` 调用的(关于 `entry0` 的学习我就不说了,留待读者练习)。 + +``` +[0x004004b0]> axt sym.adder +main 0x4005b9 [CALL] call sym.adder +[0x004004b0]> +[0x004004b0]> axt main +entry0 0x4004d1 [DATA] mov rdi, main +[0x004004b0]> +``` + +### 寻找定位 + +在处理文本文件时,你经常通过引用行号和行或列号在文件内移动;在二进制文件中,你需要使用地址。这些是以 `0x` 开头的十六进制数字,后面跟着一个地址。要找到你在二进制中的位置,运行 `s` 命令。要移动到不同的位置,使用 `s` 命令,后面跟上地址。 + +函数名就像标签一样,内部用地址表示。如果函数名在二进制中(未剥离的),可以使用函数名后面的 `s` 命令跳转到一个特定的函数地址。同样,如果你想跳转到二进制的开始,输入 `s 0`: + +``` +[0x004004b0]> s +0x4004b0 +[0x004004b0]> +[0x004004b0]> s main +[0x004005a5]> +[0x004005a5]> s +0x4005a5 +[0x004005a5]> +[0x004005a5]> s sym.adder +[0x00400596]> +[0x00400596]> s +0x400596 +[0x00400596]> +[0x00400596]> s 0 +[0x00000000]> +[0x00000000]> s +0x0 +[0x00000000]> +``` + +### 十六进制视图 + +通常情况下,原始二进制没有意义。在十六进制模式下查看二进制及其等效的 ASCII 表示法会有帮助: + +``` +[0x004004b0]> s main +[0x004005a5]> +[0x004005a5]> px +- offset -   0 1  2 3  4 5  6 7  8 9  A B  C D  E F  0123456789ABCDEF +0x004005a5  5548 89e5 4883 ec10 c745 fc64 0000 008b  UH..H....E.d.... +0x004005b5  45fc 89c7 e8d8 ffff ff89 45f8 8b45 f889  E.........E..E.. +0x004005c5  c6bf 7806 4000 b800 0000 00e8 cbfe ffff  ..x.@........... +0x004005d5  b800 0000 00c9 c30f 1f40 00f3 0f1e fa41  .........@.....A +0x004005e5  5749 89d7 4156 4989 f641 5541 89fd 4154  WI..AVI..AUA..AT +0x004005f5  4c8d 2504 0820 0055 488d 2d04 0820 0053  L.%.. .UH.-.. .S +0x00400605  4c29 e548 83ec 08e8 57fe ffff 48c1 fd03  L).H....W...H... +0x00400615  741f 31db 0f1f 8000 0000 004c 89fa 4c89  t.1........L..L. +0x00400625  f644 89ef 41ff 14dc 4883 c301 4839 dd75  .D..A...H...H9.u +0x00400635  ea48 83c4 085b 5d41 5c41 5d41 5e41 5fc3  .H...[]A\A]A^A_. +0x00400645  9066 2e0f 1f84 0000 0000 00f3 0f1e fac3  .f.............. +0x00400655  0000 00f3 0f1e fa48 83ec 0848 83c4 08c3  .......H...H.... 
+0x00400665  0000 0001 0002 0000 0000 0000 0000 0000  ................ +0x00400675  0000 004e 756d 6265 7220 6e6f 7720 6973  ...Number now is +0x00400685  2020 3a20 2564 0a00 0000 0001 1b03 3b44    : %d........;D +0x00400695  0000 0007 0000 0000 feff ff88 0000 0020  ............... +[0x004005a5]> +``` + +### 反汇编 + +如果你使用的是编译后的二进制文件,则无法查看源代码。编译器将源代码转译成 CPU 可以理解和执行的机器语言指令;其结果就是二进制或可执行文件。然而,你可以查看汇编指令(的助记词)来理解程序正在做什么。例如,如果你想查看 `main` 函数在做什么,你可以使用 `s main` 寻找 `main` 函数的地址,然后运行 `pdf` 命令来查看反汇编的指令。 + +要理解汇编指令,你需要参考体系结构手册(这里是 x86),它的应用二进制接口(ABI,或调用惯例),并对堆栈的工作原理有基本的了解: + +``` +[0x004004b0]> s main +[0x004005a5]> +[0x004005a5]> s +0x4005a5 +[0x004005a5]> +[0x004005a5]> pdf +            ; DATA XREF from entry0 @ 0x4004d1 +┌ 55: int main (int argc, char **argv, char **envp); +│           ; var int64_t var_8h @ rbp-0x8 +│           ; var int64_t var_4h @ rbp-0x4 +│           0x004005a5      55             push rbp +│           0x004005a6      4889e5         mov rbp, rsp +│           0x004005a9      4883ec10       sub rsp, 0x10 +│           0x004005ad      c745fc640000.  
mov dword [var_4h], 0x64    ; 'd' ; 100 +│           0x004005b4      8b45fc         mov eax, dword [var_4h] +│           0x004005b7      89c7           mov edi, eax +│           0x004005b9      e8d8ffffff     call sym.adder +│           0x004005be      8945f8         mov dword [var_8h], eax +│           0x004005c1      8b45f8         mov eax, dword [var_8h] +│           0x004005c4      89c6           mov esi, eax +│           0x004005c6      bf78064000     mov edi, str.Number_now_is__:__d ; 0x400678 ; "Number now is  : %d\n" ; const char *format +│           0x004005cb      b800000000     mov eax, 0 +│           0x004005d0      e8cbfeffff     call sym.imp.printf         ; int printf(const char *format) +│           0x004005d5      b800000000     mov eax, 0 +│           0x004005da      c9             leave +└           0x004005db      c3             ret +[0x004005a5]> +``` + +这是 `adder` 函数的反汇编结果: + +``` +[0x004005a5]> s sym.adder +[0x00400596]> +[0x00400596]> s +0x400596 +[0x00400596]> +[0x00400596]> pdf +            ; CALL XREF from main @ 0x4005b9 +┌ 15: sym.adder (int64_t arg1); +│           ; var int64_t var_4h @ rbp-0x4 +│           ; arg int64_t arg1 @ rdi +│           0x00400596      55             push rbp +│           0x00400597      4889e5         mov rbp, rsp +│           0x0040059a      897dfc         mov dword [var_4h], edi     ; arg1 +│           0x0040059d      8b45fc         mov eax, dword [var_4h] +│           0x004005a0      83c001         add eax, 1 +│           0x004005a3      5d             pop rbp +└           0x004005a4      c3             ret +[0x00400596]> +``` + +### 字符串 + +查看二进制中存在哪些字符串可以作为二进制分析的起点。字符串是硬编码到二进制中的,通常会提供重要的提示,可以让你将重点转移到分析某些区域。在二进制中运行 `iz` 命令来列出所有的字符串。这个测试二进制中只有一个硬编码的字符串: + +``` +[0x004004b0]> iz +[Strings] +nth paddr      vaddr      len size section type  string +――――――――――――――――――――――――――――――――――――――――――――――――――――――― +0   0x00000678 0x00400678 20  21   .rodata ascii Number now is  : %d\n + +[0x004004b0]> +``` + +### 交叉引用字符串 
+ +和函数一样,你可以交叉引用字符串,看看它们是从哪里被打印出来的,并理解它们周围的代码: + +``` +[0x004004b0]> ps @ 0x400678 +Number now is  : %d + +[0x004004b0]> +[0x004004b0]> axt 0x400678 +main 0x4005c6 [DATA] mov edi, str.Number_now_is__:__d +[0x004004b0]> +``` + +### 可视模式 + +当你的代码很复杂,有多个函数被调用时,很容易迷失方向。如果能以图形或可视化的方式查看哪些函数被调用,根据某些条件采取了哪些路径等,会很有帮助。在移动到感兴趣的函数后,可以通过 `VV` 命令来探索 `r2` 的可视化模式。例如,对于 `adder` 函数: + +``` +[0x004004b0]> s sym.adder +[0x00400596]> +[0x00400596]> VV +``` + +![Radare2 Visual mode][8] + +*(Gaurav Kamathe, [CC BY-SA 4.0][9])* + +### 调试器 + +到目前为止,你一直在做的是静态分析 —— 你只是在看二进制文件中的东西,而没有运行它,有时你需要执行二进制文件,并在运行时分析内存中的各种信息。`r2` 的内部调试器允许你运行二进制文件、设置断点、分析变量的值、或者转储寄存器的内容。 + +用 `-d` 标志启动调试器,并在加载二进制时添加 `-A` 标志进行分析。你可以通过使用 `db ` 命令在不同的地方设置断点,比如函数或内存地址。要查看现有的断点,使用 `dbi` 命令。一旦你放置了断点,使用 `dc` 命令开始运行二进制文件。你可以使用 `dbt` 命令查看堆栈,它可以显示函数调用。最后,你可以使用 `drr` 命令转储寄存器的内容: + +``` +$ r2 -d -A ./adder +Process with PID 17453 started... += attach 17453 17453 +bin.baddr 0x00400000 +Using 0x400000 +asm.bits 64 +[x] Analyze all flags starting with sym. and entry0 (aa) +[x] Analyze function calls (aac) +[x] Analyze len bytes of instructions for references (aar) +[x] Check for vtables +[x] Type matching analysis for all functions (aaft) +[x] Propagate noreturn information +[x] Use -AA or aaaa to perform additional experimental analysis. + -- git checkout hamster +[0x7f77b0a28030]> +[0x7f77b0a28030]> db main +[0x7f77b0a28030]> +[0x7f77b0a28030]> db sym.adder +[0x7f77b0a28030]> +[0x7f77b0a28030]> dbi +0 0x004005a5 E:1 T:0 +1 0x00400596 E:1 T:0 +[0x7f77b0a28030]> +[0x7f77b0a28030]> afl | grep main +0x004005a5    1 55           main +[0x7f77b0a28030]> +[0x7f77b0a28030]> afl | grep sym.adder +0x00400596    1 15           sym.adder +[0x7f77b0a28030]> +[0x7f77b0a28030]> dc +hit breakpoint at: 0x4005a5 +[0x004005a5]> +[0x004005a5]> dbt +0  0x4005a5           sp: 0x0                 0    [main]  main sym.adder+15 +1  0x7f77b0687873     sp: 0x7ffe35ff6858      0    [??]  
section..gnu.build.attributes-1345820597 +2  0x7f77b0a36e0a     sp: 0x7ffe35ff68e8      144  [??]  map.usr_lib64_ld_2.28.so.r_x+65034 +[0x004005a5]> dc +hit breakpoint at: 0x400596 +[0x00400596]> dbt +0  0x400596           sp: 0x0                 0    [sym.adder]  rip entry.init0+6 +1  0x4005be           sp: 0x7ffe35ff6838      0    [main]  main+25 +2  0x7f77b0687873     sp: 0x7ffe35ff6858      32   [??]  section..gnu.build.attributes-1345820597 +3  0x7f77b0a36e0a     sp: 0x7ffe35ff68e8      144  [??]  map.usr_lib64_ld_2.28.so.r_x+65034 +[0x00400596]> +[0x00400596]> +[0x00400596]> dr +rax = 0x00000064 +rbx = 0x00000000 +rcx = 0x7f77b0a21738 +rdx = 0x7ffe35ff6948 +r8 = 0x7f77b0a22da0 +r9 = 0x7f77b0a22da0 +r10 = 0x0000000f +r11 = 0x00000002 +r12 = 0x004004b0 +r13 = 0x7ffe35ff6930 +r14 = 0x00000000 +r15 = 0x00000000 +rsi = 0x7ffe35ff6938 +rdi = 0x00000064 +rsp = 0x7ffe35ff6838 +rbp = 0x7ffe35ff6850 +rip = 0x00400596 +rflags = 0x00000202 +orax = 0xffffffffffffffff +[0x00400596]> +``` + +### 反编译器 + +能够理解汇编是二进制分析的前提。汇编语言总是与二进制建立和预期运行的架构相关。一行源代码和汇编代码之间从来没有 1:1 的映射。通常,一行 C 源代码会产生多行汇编代码。所以,逐行读取汇编代码并不是最佳的选择。 + +这就是反编译器的作用。它们试图根据汇编指令重建可能的源代码。这与用于创建二进制的源代码绝不完全相同,它是基于汇编的源代码的近似表示。另外,要考虑到编译器进行的优化,它会生成不同的汇编代码以加快速度,减小二进制的大小等,会使反编译器的工作更加困难。另外,恶意软件作者经常故意混淆代码,让恶意软件的分析人员望而却步。 + +Radare2 通过插件提供反编译器。你可以安装任何 Radare2 支持的反编译器。使用 `r2pm -l` 命令可以查看当前插件。使用 `r2pm install` 命令来安装一个示例的反编译器 `r2dec`: + +``` +$ r2pm  -l +$ +$ r2pm install r2dec +Cloning into 'r2dec'... +remote: Enumerating objects: 100, done. +remote: Counting objects: 100% (100/100), done. +remote: Compressing objects: 100% (97/97), done. +remote: Total 100 (delta 18), reused 27 (delta 1), pack-reused 0 +Receiving objects: 100% (100/100), 1.01 MiB | 1.31 MiB/s, done. +Resolving deltas: 100% (18/18), done. 
+Install Done For r2dec +gmake: Entering directory '/root/.local/share/radare2/r2pm/git/r2dec/p' +[CC] duktape/duktape.o +[CC] duktape/duk_console.o +[CC] core_pdd.o +[CC] core_pdd.so +gmake: Leaving directory '/root/.local/share/radare2/r2pm/git/r2dec/p' +$ +$ r2pm  -l +r2dec +$ +``` + +### 反编译器视图 + +要反编译一个二进制文件,在 `r2` 中加载二进制文件并自动分析它。在本例中,使用 `s sym.adder` 命令移动到感兴趣的 `adder` 函数,然后使用 `pdda` 命令并排查看汇编和反编译后的源代码。阅读这个反编译后的源代码往往比逐行阅读汇编更容易: + +``` +$ r2 -A ./adder +[x] Analyze all flags starting with sym. and entry0 (aa) +[x] Analyze function calls (aac) +[x] Analyze len bytes of instructions for references (aar) +[x] Check for vtables +[x] Type matching analysis for all functions (aaft) +[x] Propagate noreturn information +[x] Use -AA or aaaa to perform additional experimental analysis. + -- What do you want to debug today? +[0x004004b0]> +[0x004004b0]> s sym.adder +[0x00400596]> +[0x00400596]> s +0x400596 +[0x00400596]> +[0x00400596]> pdda +    ; assembly                               | /* r2dec pseudo code output */ +                                             | /* ./adder @ 0x400596 */ +                                             | #include <stdint.h> +                                             |   +    ; (fcn) sym.adder ()                     | int32_t adder (int64_t arg1) { +                                             |     int64_t var_4h; +                                             |     rdi = arg1; +    0x00400596 push rbp                      |     +    0x00400597 mov rbp, rsp                  |     +    0x0040059a mov dword [rbp - 4], edi      |     *((rbp - 4)) = edi; +    0x0040059d mov eax, dword [rbp - 4]      |     eax = *((rbp - 4)); +    0x004005a0 add eax, 1                    |     eax++; +    0x004005a3 pop rbp                       |     +    0x004005a4 ret                           |     return eax; +                                             | } +[0x00400596]> +``` + +### 配置设置 + +随着你对 Radare2 的使用越来越熟悉,你会想改变它的配置,以适应你的工作方式。你可以使用 `e` 命令查看 `r2` 
的默认配置。要设置一个特定的配置,在 `e` 命令后面添加 `config = value`: + +``` +[0x004005a5]> e | wc -l +593 +[0x004005a5]> e | grep syntax +asm.syntax = intel +[0x004005a5]> +[0x004005a5]> e asm.syntax = att +[0x004005a5]> +[0x004005a5]> e | grep syntax +asm.syntax = att +[0x004005a5]> +``` + +要使配置更改永久化,请将它们放在 `r2` 启动时读取的名为 `.radare2rc` 的启动文件中。这个文件通常在你的主目录下,如果没有,你可以创建一个。一些示例配置选项包括: + +``` +$ cat ~/.radare2rc +e asm.syntax = att +e scr.utf8 = true +eco solarized +e cmd.stack = true +e stack.size = 256 +$ +``` + +### 探索更多 + +你已经看到了足够多的 Radare2 功能,对这个工具有了一定的了解。因为 Radare2 遵循 Unix 哲学,即使你可以从它的主控台做各种事情,它也会在下面使用一套独立的二进制来完成它的任务。 + +探索下面列出的独立二进制文件,看看它们是如何工作的。例如,用 `iI` 命令在控制台看到的二进制信息也可以用 `rabin2 ` 命令找到: + +``` +$ cd bin/ +$ +$ ls +prefix  r2agent    r2pm  rabin2   radiff2  ragg2    rarun2   rasm2 +r2      r2-indent  r2r   radare2  rafind2  rahash2  rasign2  rax2 +$ +``` + +你觉得 Radare2 怎么样?请在评论中分享你的反馈。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/linux-radare2 + +作者:[Gaurav Kamathe][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/gkamathe +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/binary_code_computer_screen.png?itok=7IzHK1nn (Binary code on a computer screen) +[2]: https://linux.cn/article-12187-1.html +[3]: https://linux.cn/article-11441-1.html +[4]: https://rada.re/n/ +[5]: https://github.com/radareorg/radare2 +[6]: https://opensource.com/article/19/3/getting-started-vim +[7]: https://ghidra-sre.org/ +[8]: https://opensource.com/sites/default/files/uploads/radare2_visual-mode_0.png (Radare2 Visual mode) +[9]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/published/20210125 Use Joplin to find your notes faster.md b/published/20210125 Use Joplin to 
find your notes faster.md new file mode 100644 index 0000000000..13e6be8bdf --- /dev/null +++ b/published/20210125 Use Joplin to find your notes faster.md @@ -0,0 +1,63 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13080-1.html) +[#]: subject: (Use Joplin to find your notes faster) +[#]: via: (https://opensource.com/article/21/1/notes-joplin) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +使用 Joplin 更快地找到你的笔记 +====== + +> 在多个手写和数字平台上整理笔记是一个严峻的挑战。这里有一个小技巧,可以更好地组织你的笔记,并快速找到你需要的东西。 + +![](https://img.linux.net.cn/data/attachment/album/202102/03/120141dkiqil1vlqiz6wql.jpg) + +在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的第十五天。 + +保持生产力也意味着(在某种程度上)要有足够的组织能力,以便找到笔记并在需要时参考它们。这不仅是对我自己的挑战,也是与我交谈的很多人的挑战。 + +多年来,我曾单独或组合使用过应用里的数字笔记、纸质笔记、便签、数字便签、Word 文档、纯文本文件,以及一堆我已经忘记的其他格式。这不仅让寻找笔记变得困难,而且知道该把它们放在哪里是一个更大的挑战。 + +![Stacks of paper notes on a desk][2] + +*一堆笔记 (Jessica Cherry, [CC BY-SA 4.0][3])* + +还有就是做笔记最重要的一点:如果你以后找不到它,笔记就没有任何价值。知道含有你所需信息的笔记存在于你保存笔记的*某处*,根本没有任何帮助。 + +我是如何为自己解决这个问题的呢?正如他们所说,这是一个过程,我希望这也是一个对其他人有效的过程。 + +我首先看了看自己所做的笔记种类。不同的主题需要用不同的方式保存吗?由于我为我的播客手写笔记,而几乎所有其他的东西都使用纯文本笔记,我需要两种不同的方式来维护它们。对于手写的笔记,我把它们都放在一个文件夹里,方便我参考。 + +![Man holding a binder full of notes][4] + +*三年多的笔记 (Kevin Sonney, [CC BY-SA 4.0][3])* + +为了保存我的数字笔记,我需要将它们全部集中到一个地方。这个工具需要能够从多种设备上访问,具有有用的搜索功能,并且能够导出或共享我的笔记。在尝试了很多很多不同的选项之后,我选择了 [Joplin][5]。Joplin 可以让我用 Markdown 写笔记,有一个相当不错的搜索功能,有适用于所有操作系统(包括手机)的应用,并支持几种不同的设备同步方式。另外,它还有文件夹*和*标签,因此我可以按照对我有意义的方式将笔记分组。 + +![Organized Joplin notes management page][6] + +*我的 Joplin* + +我花了一些时间才把所有的东西都放在我想要的地方,但最后,这真的是值得的。现在,我可以找到我所做的笔记,而不是让它们散落在我的办公室、不同的机器和各种服务中。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/notes-joplin + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop) +[2]: https://opensource.com/sites/default/files/day15-image1.jpg +[3]: https://creativecommons.org/licenses/by-sa/4.0/ +[4]: https://opensource.com/sites/default/files/day15-image2.png +[5]: https://joplinapp.org/ +[6]: https://opensource.com/sites/default/files/day15-image3.png diff --git a/published/20210125 Why you need to drop ifconfig for ip.md b/published/20210125 Why you need to drop ifconfig for ip.md new file mode 100644 index 0000000000..c4370c6f6d --- /dev/null +++ b/published/20210125 Why you need to drop ifconfig for ip.md @@ -0,0 +1,201 @@ +[#]: collector: "lujun9972" +[#]: translator: "MjSeven" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-13089-1.html" +[#]: subject: "Why you need to drop ifconfig for ip" +[#]: via: "https://opensource.com/article/21/1/ifconfig-ip-linux" +[#]: author: "Rajan Bhardwaj https://opensource.com/users/rajabhar" + +放弃 ifconfig,拥抱 ip 命令 +====== + +> 开始使用现代方法配置 Linux 网络接口。 + +![](https://img.linux.net.cn/data/attachment/album/202102/05/233847lpg1lnz7kl2czgfj.jpg) + +在很长一段时间内,`ifconfig` 命令是配置网络接口的默认方法。它为 Linux 用户提供了很好的服务,但是网络很复杂,所以配置网络的命令必须健壮。`ip` 命令是现代系统中新的默认网络命令,在本文中,我将向你展示如何使用它。 + +`ip` 命令工作在 [OSI 网络栈][2] 的两个层上:第二层(数据链路层)和第三层(网络或 IP 层)。它做了之前 `net-tools` 包的所有工作。 + +### 安装 ip + +`ip` 命令包含在 `iproute2` 工具包中,它可能已经在你的 Linux 发行版中安装了。如果没有,你可以从发行版的仓库中进行安装。 + +### ifconfig 和 ip 使用对比 + +`ip` 和 `ifconfig` 命令都可以用来配置网络接口,但它们做事方法不同。接下来,作为对比,我将用它们来执行一些常见的任务。 + +#### 查看网口和 IP 地址 + +如果你想查看主机的 IP 地址或网络接口信息,`ifconfig` (不带任何参数)命令提供了一个很好的总结。 + +``` +$ ifconfig + +eth0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500 + ether bc:ee:7b:5e:7d:d8 txqueuelen 1000 (Ethernet) + RX packets 0 bytes 0 (0.0 B) + RX 
errors 0 dropped 0 overruns 0 frame 0 + TX packets 0 bytes 0 (0.0 B) + TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 + +lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536 + inet 127.0.0.1 netmask 255.0.0.0 + inet6 ::1 prefixlen 128 scopeid 0x10<host> + loop txqueuelen 1000 (Local Loopback) + RX packets 41 bytes 5551 (5.4 KiB) + RX errors 0 dropped 0 overruns 0 frame 0 + TX packets 41 bytes 5551 (5.4 KiB) + TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 + +wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500 + inet 10.1.1.6 netmask 255.255.255.224 broadcast 10.1.1.31 + inet6 fdb4:f58e:49f:4900:d46d:146b:b16:7212 prefixlen 64 scopeid 0x0<global> + inet6 fe80::8eb3:4bc0:7cbb:59e8 prefixlen 64 scopeid 0x20<link> + ether 08:71:90:81:1e:b5 txqueuelen 1000 (Ethernet) + RX packets 569459 bytes 779147444 (743.0 MiB) + RX errors 0 dropped 0 overruns 0 frame 0 + TX packets 302882 bytes 38131213 (36.3 MiB) + TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 +``` + +新的 `ip` 命令提供了类似的结果,但命令是 `ip address show`,或者简写为 `ip a`: + +``` +$ ip a + +1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever +2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 + link/ether bc:ee:7b:5e:7d:d8 brd ff:ff:ff:ff:ff:ff +3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 + link/ether 08:71:90:81:1e:b5 brd ff:ff:ff:ff:ff:ff + inet 10.1.1.6/27 brd 10.1.1.31 scope global dynamic wlan0 + valid_lft 83490sec preferred_lft 83490sec + inet6 fdb4:f58e:49f:4900:d46d:146b:b16:7212/64 scope global noprefixroute dynamic + valid_lft 6909sec preferred_lft 3309sec + inet6 fe80::8eb3:4bc0:7cbb:59e8/64 scope link + valid_lft forever preferred_lft forever +``` + +#### 添加 IP 地址 + +使用 `ifconfig` 命令添加 IP 地址的方法为: + +``` +$ ifconfig eth0 add 192.9.203.21 +``` + +`ip` 命令类似: + +``` +$ ip address add 192.9.203.21 dev eth0 +``` + +`ip` 
中的子命令可以缩短,所以下面这个命令同样有效: + +``` +$ ip addr add 192.9.203.21 dev eth0 +``` + +你甚至可以更短些: + +``` +$ ip a add 192.9.203.21 dev eth0 +``` + +#### 移除一个 IP 地址 + +添加 IP 地址与删除 IP 地址正好相反。 + +使用 `ifconfig`,命令是: + +``` +$ ifconfig eth0 del 192.9.203.21 +``` + +`ip` 命令的语法是: + +``` +$ ip a del 192.9.203.21 dev eth0 +``` + +#### 启用或禁用组播 + +使用 `ifconfig` 接口来启用或禁用 <ruby>[组播][3]<rt>multicast</rt></ruby>: + +``` +# ifconfig eth0 multicast +``` + +对于 `ip`,使用 `set` 子命令与设备(`dev`)以及一个布尔值和 `multicast` 选项: + +``` +# ip link set dev eth0 multicast on +``` + +#### 启用或禁用网络 + +每个系统管理员都熟悉“先关闭,然后打开”这个技巧来解决问题。对于网络接口来说,即打开或关闭网络。 + +`ifconfig` 命令使用 `up` 或 `down` 关键字来实现: + +``` +# ifconfig eth0 up +``` + +或者你可以使用一个专用命令: + +``` +# ifup eth0 +``` + +`ip` 命令使用 `set` 子命令将网络设置为 `up` 或 `down` 状态: + +``` +# ip link set eth0 up +``` + +#### 开启或关闭地址解析功能(ARP) + +使用 `ifconfig`,你可以通过声明它来启用: + +``` +# ifconfig eth0 arp +``` + +使用 `ip`,你可以将 `arp` 属性设置为 `on` 或 `off`: + +``` +# ip link set dev eth0 arp on +``` + +### ip 和 ifconfig 的优缺点 + +`ip` 命令比 `ifconfig` 更通用,技术上也更有效,因为它使用的是 `Netlink` 套接字,而不是 `ioctl` 系统调用。 + +`ip` 命令可能看起来比 `ifconfig` 更详细、更复杂,但这是它拥有更多功能的一个原因。一旦你开始使用它,你会了解它的内部逻辑(例如,使用 `set` 而不是看起来随意混合的声明或设置)。 + +最后,`ifconfig` 已经过时了(例如,它缺乏对网络命名空间的支持),而 `ip` 是为现代网络而生的。尝试并学习它,使用它,你会由衷高兴的! 
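为了把上面这些对应关系记牢,下面给出一个简单的速查脚本草稿(假设你的系统带有 POSIX shell;其中的接口名 `eth0` 和 IP 地址只是沿用上文示例的占位符)。它只负责打印与各个 `ifconfig` 操作等价的 `ip` 命令,而不会真正修改网络配置:

```shell
#!/bin/sh
# ip_equiv:输入一个操作名,打印上文介绍过的等价 ip 命令(仅作演示,不实际执行)
ip_equiv() {
    case "$1" in
        show)      echo "ip address show" ;;                       # 查看网口和 IP 地址
        add)       echo "ip address add 192.9.203.21 dev eth0" ;;  # 添加 IP 地址
        del)       echo "ip address del 192.9.203.21 dev eth0" ;;  # 移除 IP 地址
        multicast) echo "ip link set dev eth0 multicast on" ;;     # 启用组播
        up)        echo "ip link set eth0 up" ;;                   # 启用网络
        arp)       echo "ip link set dev eth0 arp on" ;;           # 开启 ARP
        *)         echo "unknown" ;;
    esac
}

ip_equiv show
ip_equiv up
```

把其中的接口名和地址换成你自己的即可;真正执行这些 `ip` 命令时,通常需要 root 权限。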
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/ifconfig-ip-linux + +作者:[Rajan Bhardwaj][a] +选题:[lujun9972][b] +译者:[MjSeven](https://github.com/MjSeven) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rajabhar +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk "Tips and gears turning" +[2]: https://en.wikipedia.org/wiki/OSI_model +[3]: https://en.wikipedia.org/wiki/Multicast diff --git a/published/20210126 Use your Raspberry Pi as a productivity powerhouse.md b/published/20210126 Use your Raspberry Pi as a productivity powerhouse.md new file mode 100644 index 0000000000..d7deca1036 --- /dev/null +++ b/published/20210126 Use your Raspberry Pi as a productivity powerhouse.md @@ -0,0 +1,75 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13084-1.html) +[#]: subject: (Use your Raspberry Pi as a productivity powerhouse) +[#]: via: (https://opensource.com/article/21/1/raspberry-pi-productivity) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +将你的树莓派用作生产力源泉 +====== + +> 树莓派已经从主要为黑客和业余爱好者服务,成为了小型生产力工作站的可靠选择。 + +![](https://img.linux.net.cn/data/attachment/album/202102/04/103826pjbxb7j1m8ok6ezf.jpg) + +在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的第十六天。 + +[树莓派][2]是一台相当棒的小电脑。它体积小,功能却出奇的强大,而且非常容易设置和使用。我曾将它们用于家庭自动化项目、面板和专用媒体播放器。但它也能成为生产力的动力源泉么? 
+ +答案相当简单:是的。 + +![Geary and Calendar apps on the Raspberry Pi][3] + +*Geary 和 Calendar 应用 (Kevin Sonney, [CC BY-SA 4.0][4])* + +基本的 [Raspbian][5] 安装包括 [Claws Mail][6],这是一个轻量级的邮件客户端。它的用户界面有点过时了,而且非常简陋。如果你是一个 [Mutt 用户][7],它可能会满足你的需求。 + +我更喜欢安装 [Geary][8],因为它也是轻量级的,而且有一个现代化的界面。另外,与 Claws 不同的是,Geary 默认支持富文本 (HTML) 邮件。我不喜欢富文本电子邮件,但它已经成为必要的,所以对它有良好的支持是至关重要的。 + +默认的 Raspbian 安装不包含日历,所以我添加了 [GNOME 日历][9],因为它可以与远程服务通信(因为我的几乎所有日历都在云提供商那里)。 + +![GTG and GNote open on Raspberry Pi][10] + +*GTG 和 GNote(Kevin Sonney, [CC BY-SA 4.0][4])* + +那笔记和待办事项清单呢?有很多选择,但我喜欢用 [GNote][11] 来做笔记,用 [Getting-Things-GNOME!][12] 来做待办事项。两者都相当轻量级,并且可以相互同步,也可以同步到其他服务。 + +你会注意到,我在这里使用了不少 GNOME 应用。为什么不直接安装完整的 GNOME 桌面呢?在内存为 4GB(或 8GB)的树莓派 4 上,GNOME 工作得很好。你需要采取一些额外的步骤来禁用 Raspbian 上的默认 wifi 设置,并用 Network Manager 来代替它,但这个在网上有很好的文档,而且真的很简单。 + +GNOME 中包含了 [Evolution][13],它将邮件、日历、笔记、待办事项和联系人管理整合到一个应用中。与 Geary 和 GNOME Calendar 相比,它有点重,但在树莓派 4 上却很稳定。这让我很惊讶,因为我习惯了 Evolution 有点消耗资源,但树莓派 4 却和我的品牌笔记本一样运行良好,而且资源充足。 + +![Evolution on Raspbian][14] + +*Raspbian 上的 Evolution (Kevin Sonney, [CC BY-SA 4.0][4])* + +树莓派在过去的几年里进步很快,已经从主要为黑客和业余爱好者服务,成为了小型生产力工作站的可靠选择。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/raspberry-pi-productivity + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos) +[2]: https://www.raspberrypi.org/ +[3]: https://opensource.com/sites/default/files/day16-image1.png +[4]: https://creativecommons.org/licenses/by-sa/4.0/ +[5]: https://www.raspbian.org/ +[6]: https://www.claws-mail.org/ +[7]: http://www.mutt.org/ +[8]: 
https://wiki.gnome.org/Apps/Geary +[9]: https://wiki.gnome.org/Apps/Calendar +[10]: https://opensource.com/sites/default/files/day16-image2.png +[11]: https://wiki.gnome.org/Apps/Gnote +[12]: https://wiki.gnome.org/Apps/GTG +[13]: https://opensource.com/business/18/1/desktop-email-clients +[14]: https://opensource.com/sites/default/files/day16-image3.png diff --git a/published/20210126 Write GIMP scripts to make image processing faster.md b/published/20210126 Write GIMP scripts to make image processing faster.md new file mode 100644 index 0000000000..4caa8357ff --- /dev/null +++ b/published/20210126 Write GIMP scripts to make image processing faster.md @@ -0,0 +1,234 @@ +[#]: collector: (lujun9972) +[#]: translator: (amwps290) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13093-1.html) +[#]: subject: (Write GIMP scripts to make image processing faster) +[#]: via: (https://opensource.com/article/21/1/gimp-scripting) +[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana) + +编写 GIMP 脚本使图像处理更快 +====== + +> 通过向一批图像添加效果来学习 GIMP 的脚本语言 Script-Fu。 + +![](https://img.linux.net.cn/data/attachment/album/202102/06/231011c0xhvxitxjv899qv.jpg) + +前一段时间,我想给方程图片加一个黑板式的外观。我开始是使用 [GIMP][2] 来处理的,我对结果很满意。问题是我必须对图像执行几个操作,当我想再次使用此样式,不想对所有图像重复这些步骤。此外,我确信我会很快忘记这些步骤。 + +![Fourier transform equations][3] + +*傅立叶变换方程式(Cristiano Fontana,[CC BY-SA 4.0] [4])* + +GIMP 是一个很棒的开源图像编辑器。尽管我已经使用了多年,但从未研究过其批处理功能或 [Script-Fu][5] 菜单。这是探索它们的绝好机会。 + +### 什么是 Script-Fu? + +[Script-Fu][6] 是 GIMP 内置的脚本语言。是一种基于 [Scheme][7] 的编程语言。如果你从未使用过 Scheme,请尝试一下,因为它可能非常有用。我认为 Script-Fu 是一个很好的入门方法,因为它对图像处理具有立竿见影的效果,所以你可以很快感觉到自己的工作效率的提高。你也可以使用 [Python][8] 编写脚本,但是 Script-Fu 是默认选项。 + +为了帮助你熟悉 Scheme,GIMP 的文档提供了深入的 [教程][9]。Scheme 是一种类似于 [Lisp][10] 的语言,因此它的主要特征是使用 [前缀][11] 表示法和 [许多括号][12]。函数和运算符通过前缀应用到操作数列表中: + +``` +(函数名 操作数 操作数 ...) 
+ +(+ 2 3) +↳ 返回 5 + +(list 1 2 3 5) +↳ 返回一个列表,包含 1、 2、 3 和 5 +``` + +我花了一些时间才找到完整的 GIMP 函数列表文档,但实际上很简单。在 **Help** 菜单中,有一个 **Procedure Browser**,其中包含所有可用的函数的丰富详尽文档。 + +![GIMP Procedure Browser][13] + +### 使用 GIMP 的批处理模式 + +你可以使用 `-b` 选项以批处理的方式启动 GIMP。`-b` 选项的参数可以是你想要运行的脚本,或者用一个 `-` 来让 GIMP 进入交互模式而不是命令行模式。正常情况下,当你启动 GIMP 的时候,它会启动图形界面,但是你可以使用 `-i` 选项来禁用它。 + +### 开始编写你的第一个脚本 + +创建一个名为 `chalk.scm` 的文件,并把它保存在 **Preferences** 窗口中 **Folders** 选项下的 **Script** 中指定的 `script` 文件夹下。就我而言,是在 `$HOME/.config/GIMP/2.10/scripts`。 + +在 `chalk.scm` 文件中,写入下面的内容: + +``` +(define (chalk filename grow-pixels spread-amount percentage) + (let* ((image (car (gimp-file-load RUN-NONINTERACTIVE filename filename))) + (drawable (car (gimp-image-get-active-layer image))) + (new-filename (string-append "modified_" filename))) + (gimp-image-select-color image CHANNEL-OP-REPLACE drawable '(0 0 0)) + (gimp-selection-grow image grow-pixels) + (gimp-context-set-foreground '(0 0 0)) + (gimp-edit-bucket-fill drawable BUCKET-FILL-FG LAYER-MODE-NORMAL 100 255 TRUE 0 0) + (gimp-selection-none image) + (plug-in-spread RUN-NONINTERACTIVE image drawable spread-amount spread-amount) + (gimp-drawable-invert drawable TRUE) + (plug-in-randomize-hurl RUN-NONINTERACTIVE image drawable percentage 1 TRUE 0) + (gimp-file-save RUN-NONINTERACTIVE image drawable new-filename new-filename) + (gimp-image-delete image))) +``` + +### 定义脚本变量 + +在脚本中, `(define (chalk filename grow-pixels spread-amount percentage) ...)` 函数定义了一个名叫 `chalk` 的新函数。它的函数参数是 `filename`、`grow-pixels`、`spread-amount` 和 `percentage`。在 `define` 中的所有内容都是 `chalk` 函数的主体。你可能已经注意到,那些名字比较长的变量中间都有一个连字符来分隔。这是类 Lisp 语言的惯用风格。 + +`(let* ...)` 函数是一个特殊<ruby>过程<rt>procedure</rt></ruby>,可以让你定义一些只有在这个函数体中才有效的临时变量。临时变量有 `image`、`drawable` 以及 `new-filename`。它使用 `gimp-file-load` 来载入图片,这会返回一个包含其中图片的列表,并通过 `car` 函数来选取第一项。然后,它选择第一个活动层并将其引用存储在 `drawable` 变量中。最后,它定义了包含图像新文件名的字符串。 + +为了帮助你更好地了解该过程,我将对其进行分解。首先,启动带 GUI 的 GIMP,然后你可以通过依次点击 **Filters → Script-Fu → Console** 来打开 Script-Fu 控制台。在这种情况下,不能使用 
`let*`,因为变量必须是持久的。使用 `define` 函数定义 `image` 变量,并为其提供查找图像的正确路径: + +``` +(define image (car (gimp-file-load RUN-NONINTERACTIVE "Fourier.png" "Fourier.png"))) +``` + +似乎在 GUI 中什么也没有发生,但是图像已加载。你需要通过以下方式来让图像显示: + +``` +(gimp-display-new image) +``` + +![GUI with the displayed image][14] + +现在,获取活动层并将其存储在 `drawable` 变量中: + +``` +(define drawable (car (gimp-image-get-active-layer image))) +``` + +最后,定义图像的新文件名: + +``` +(define new-filename "modified_Fourier.png") +``` + +运行命令后,你将在 Script-Fu 控制台中看到以下内容: + +![Script-Fu console][15] + +在对图像执行操作之前,需要定义将在脚本中作为函数参数的变量: + +``` +(define grow-pixels 2) +(define spread-amount 4) +(define percentage 3) +``` + +### 处理图片 + +现在,所有相关变量都已定义,你可以对图像进行操作了。脚本的操作可以直接在控制台上执行。第一步是在活动层上选择黑色。颜色被写成一个由三个数字组成的列表,即 `(list 0 0 0)` 或者是 `'(0 0 0)`: + +``` +(gimp-image-select-color image CHANNEL-OP-REPLACE drawable '(0 0 0)) +``` + +![Image with the selected color][16] + +将选区扩大两个像素: + +``` +(gimp-selection-grow image grow-pixels) +``` + +![Image with the selected color][17] + +将前景色设置为黑色,并用它填充选区: + +``` +(gimp-context-set-foreground '(0 0 0)) +(gimp-edit-bucket-fill drawable BUCKET-FILL-FG LAYER-MODE-NORMAL 100 255 TRUE 0 0) +``` + +![Image with the selection filled with black][18] + +取消选区: + +``` +(gimp-selection-none image) +``` + +![Image with no selection][19] + +随机移动像素: + +``` +(plug-in-spread RUN-NONINTERACTIVE image drawable spread-amount spread-amount) +``` + +![Image with pixels moved around][20] + +反转图像颜色: + +``` +(gimp-drawable-invert drawable TRUE) +``` + +![Image with pixels moved around][21] + +随机化像素: + +``` +(plug-in-randomize-hurl RUN-NONINTERACTIVE image drawable percentage 1 TRUE 0) +``` + +![Image with pixels moved around][22] + +将图像保存到新文件: + +``` +(gimp-file-save RUN-NONINTERACTIVE image drawable new-filename new-filename) +``` + +![Equations of the Fourier transform and its inverse][23] + +*傅立叶变换方程 (Cristiano Fontana, [CC BY-SA 4.0][4])* + +### 以批处理模式运行脚本 + +现在你知道了脚本的功能,可以在批处理模式下运行它: + +``` +gimp -i -b '(chalk "Fourier.png" 2 4 3)' -b 
'(gimp-quit 0)' +``` + +在运行 `chalk` 函数之后,它将使用 `-b` 选项调用第二个函数 `gimp-quit` 来告诉 GIMP 退出。 + +### 了解更多 + +本教程向你展示了如何开始使用 GIMP 的内置脚本功能,并介绍了 GIMP 的 Scheme 实现:Script-Fu。如果你想继续前进,建议你查看官方文档及其[入门教程][9]。如果你不熟悉 Scheme 或 Lisp,那么一开始的语法可能有点吓人,但我还是建议你尝试一下。这可能是一个不错的惊喜。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/gimp-scripting + +作者:[Cristiano L. Fontana][a] +选题:[lujun9972][b] +译者:[amwps290](https://github.com/amwps290) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cristianofontana +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen) +[2]: https://www.gimp.org/ +[3]: https://opensource.com/sites/default/files/uploads/fourier.png (Fourier transform equations) +[4]: https://creativecommons.org/licenses/by-sa/4.0/ +[5]: https://docs.gimp.org/en/gimp-filters-script-fu.html +[6]: https://docs.gimp.org/en/gimp-concepts-script-fu.html +[7]: https://en.wikipedia.org/wiki/Scheme_(programming_language) +[8]: https://docs.gimp.org/en/gimp-filters-python-fu.html +[9]: https://docs.gimp.org/en/gimp-using-script-fu-tutorial.html +[10]: https://en.wikipedia.org/wiki/Lisp_%28programming_language%29 +[11]: https://en.wikipedia.org/wiki/Polish_notation +[12]: https://xkcd.com/297/ +[13]: https://opensource.com/sites/default/files/uploads/procedure_browser.png (GIMP Procedure Browser) +[14]: https://opensource.com/sites/default/files/uploads/gui01_image.png (GUI with the displayed image) +[15]: https://opensource.com/sites/default/files/uploads/console01_variables.png (Script-Fu console) +[16]: https://opensource.com/sites/default/files/uploads/gui02_selected.png (Image with the selected color) +[17]: 
https://opensource.com/sites/default/files/uploads/gui03_grow.png (Image with the selected color) +[18]: https://opensource.com/sites/default/files/uploads/gui04_fill.png (Image with the selection filled with black) +[19]: https://opensource.com/sites/default/files/uploads/gui05_no_selection.png (Image with no selection) +[20]: https://opensource.com/sites/default/files/uploads/gui06_spread.png (Image with pixels moved around) +[21]: https://opensource.com/sites/default/files/uploads/gui07_invert.png (Image with pixels moved around) +[22]: https://opensource.com/sites/default/files/uploads/gui08_hurl.png (Image with pixels moved around) +[23]: https://opensource.com/sites/default/files/uploads/modified_fourier.png (Equations of the Fourier transform and its inverse) diff --git a/published/20210127 3 email mistakes and how to avoid them.md b/published/20210127 3 email mistakes and how to avoid them.md new file mode 100644 index 0000000000..4e0b3c8cc9 --- /dev/null +++ b/published/20210127 3 email mistakes and how to avoid them.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13086-1.html) +[#]: subject: (3 email mistakes and how to avoid them) +[#]: via: (https://opensource.com/article/21/1/email-mistakes) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +3 个电子邮件错误以及如何避免它们 +====== + +> 自动化是美好的,但也不总是那样。确保你的电子邮件自动回复和抄送配置正确,这样你就不会浪费大家的时间。 + +![](https://img.linux.net.cn/data/attachment/album/202102/05/090335a888nqn7pcolblzn.jpg) + +在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的第十七天。 + +好了,我们已经谈到了一些我们应该对电子邮件做的事情:[不要再把它当作即时通讯工具][2]、[优先处理事情][3]、[努力达到收件箱 0 新邮件][4],以及[有效过滤][5]。但哪些事情是我们不应该做的呢? 
+ +![Automated email reply][6] + +*你真幸运 (Kevin Sonney, [CC BY-SA 4.0][7])* + +### 1、请不要对所有事情自动回复 + +邮件列表中总有些人,他们去度假了,并设置了一个“我在度假”的自动回复信息。然而,他们没有正确地设置,所以它对列表上的每一封邮件都会回复“我在度假”,直到管理员将其屏蔽或取消订阅。 + +我们都感受到了这种痛苦,我承认过去至少有一次,我就是那个人。 + +从我的错误中吸取教训,并确保你的自动回复器或假期信息对它们将回复谁和多久回复一次有限制。 + +![An actual email with lots of CC'd recipients][8] + +*这是一封真实的电子邮件 (Kevin Sonney, [CC BY-SA 4.0][7])* + +### 2、请不要抄送给所有人 + +我们都至少做过一次。我们需要发送邮件的人员众多,因此我们只需抄送他们*所有人*。有时这是有必要的,但大多数时候,它并不是。当然,你邀请每个人在庭院吃生日蛋糕,或者你的表姐要结婚了,或者公司刚拿到一个大客户,这都是好事。如果你有邮件列表的话,请用邮件列表,如果没有的话,请给每个人密送。说真的,密送是你的朋友。 + +### 3、回复所有人不是你的朋友 + +![Reply options in Kmail][9] + +这一条与上一条是相辅相成的。我不知道有多少次看到有人向一个名单(或者是一个大群的人)发送信息,而这个信息本来是要发给一个人的。我见过以这种方式发出的还算无伤大雅的邮件,也见过随之而来的纪律处分邮件。 + +认真地说,除非必须,不要使用“回复全部”按钮。即使是这样,也要确保你*真的*需要这样做。 + +有些电子邮件应用比其他应用管理得更好。Kmail,[KDE Kontact][10] 的电子邮件组件,在**回复**工具栏按钮的子菜单中,有几个不同的回复选项。你可以选择只回复给**发件人**字段中的任何实体(通常是一个人,但有时是一个邮件列表),或者回复给作者(在抄送或密送中去除每一个人),或者只回复给一个邮件列表,或者回复*所有人*(不要这样做)。看到明确列出的选项,的确可以帮助你了解谁会收到你要发送的邮件副本,这有时比你想象的更发人深省。我曾经发现,当我意识到一个评论并不一定会对一个复杂的讨论的最终目标有所帮助时,我就会把邮件的收件人改为只是作者,而不是整个列表。 + +(另外,如果你写的邮件可能会给你的 HR 或公司带来麻烦,请在点击发送之前多考虑下。) + +希望你已经*不再*在电子邮件中这么做了。如果你认识的人是这样的呢?欢迎与他们分享。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/email-mistakes + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open) +[2]: https://opensource.com/article/21/1/email-rules +[3]: https://opensource.com/article/21/1/prioritize-tasks +[4]: https://opensource.com/article/21/1/inbox-zero +[5]: https://opensource.com/article/21/1/email-filter +[6]: 
https://opensource.com/sites/default/files/day17-image1.png +[7]: https://creativecommons.org/licenses/by-sa/4.0/ +[8]: https://opensource.com/sites/default/files/day17-image2.png +[9]: https://opensource.com/sites/default/files/kmail-replies.jpg (Reply options in Kmail) +[10]: https://opensource.com/article/21/1/kde-kontact diff --git a/published/20210127 Why I use the D programming language for scripting.md b/published/20210127 Why I use the D programming language for scripting.md new file mode 100644 index 0000000000..63618522ea --- /dev/null +++ b/published/20210127 Why I use the D programming language for scripting.md @@ -0,0 +1,176 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13100-1.html) +[#]: subject: (Why I use the D programming language for scripting) +[#]: via: (https://opensource.com/article/21/1/d-scripting) +[#]: author: (Lawrence Aberba https://opensource.com/users/aberba) + +我为什么要用 D 语言写脚本? 
+====== + +> D 语言以系统编程语言而闻名,但它也是编写脚本的一个很好的选择。 + +![](https://img.linux.net.cn/data/attachment/album/202102/09/134351j4m3hrhll0h38plp.jpg) + +D 语言由于其静态类型和元编程能力,经常被宣传为系统编程语言。然而,它也是一种非常高效的脚本语言。 + +由于 Python 在自动化任务和快速实现原型想法方面的灵活性,它通常被选为脚本语言。这使得 Python 对系统管理员、[管理者][2]和一般的开发人员非常有吸引力,因为它可以自动完成他们可能需要手动完成的重复性任务。 + +我们自然也可以期待任何其他的脚本编写语言具有 Python 的这些特性和能力。以下是我认为 D 是一个不错的选择的两个原因。 + +### 1、D 很容易读和写 + +作为一种类似于 C 的语言,D 应该是大多数程序员所熟悉的。任何使用 JavaScript、Java、PHP 或 Python 的人对 D 语言都很容易上手。 + +如果你还没有安装 D,请[安装 D 编译器][3],这样你就可以[运行本文中的 D 代码][4]。你也可以使用[在线 D 编辑器][5]。 + +下面是一个 D 代码的例子,它从一个名为 `words.txt` 的文件中读取单词,并在命令行中打印出来。假设 `words.txt` 的内容如下: + +``` +open +source +is +cool +``` + +用 D 语言写脚本: + +``` +#!/usr/bin/env rdmd +// file print_words.d + +// import the D standard library +import std; + +void main(){ + // open the file + File("./words.txt") + + // iterate by line + .byLine + + // print each word + .each!writeln; +} +``` + +这段代码以 [释伴][6] 开头,它将使用 [rdmd][7] 来运行这段代码,`rdmd` 是 D 编译器自带的编译和运行代码的工具。假设你运行的是 Unix 或 Linux,在运行这个脚本之前,你必须使用 `chmod` 命令使其可执行: + +``` +chmod u+x print_words.d +``` + +现在脚本是可执行的,你可以运行它: + +``` +./print_words.d +``` + +这将在你的命令行中打印以下内容: + +``` +open +source +is +cool +``` + +恭喜你,你写了第一个 D 语言脚本。你可以看到 D 是如何让你按顺序链式调用函数,这让阅读代码的感觉很自然,类似于你在头脑中思考问题的方式。这个[功能让 D 成为我最喜欢的编程语言][8]。 + +试着再写一个脚本:一个非营利组织的管理员有一个捐款的文本文件,每笔金额都是单独的一行。管理员想把前 10 笔捐款相加,然后打印出金额: + +``` +#!/usr/bin/env rdmd +// file sum_donations.d + +import std; + +void main() +{ + double total = 0; + + // open the file + File("monies.txt") + + // iterate by line + .byLine + + // pick first 10 lines + .take(10) + + // remove new line characters (\n) + .map!(strip) + + // convert each to double + .map!(to!double) + + // add element to total + .tee!((x) { total += x; }) + + // print each number + .each!writeln; + + // print total + writeln("total: ", total); +} +``` + +与 `each` 一起使用的 `!` 操作符是[模板参数][9]的语法。 + +### 2、D 是快速原型设计的好帮手 + +D 是灵活的,它可以让你快速地把代码拼凑起来,并使其发挥作用。它的标准库中包含了丰富的实用函数,用于执行常见的任务,如操作数据(JSON、CSV、文本等)。它还带有一套丰富的通用算法,用于迭代、搜索、比较和修改
数据。这些巧妙的算法通过定义通用的 [基于范围的接口][10] 而按照序列进行处理。 + +上面的脚本显示了 D 中的链式调用函数如何提供顺序处理和操作数据的要领。D 的另一个吸引人的地方是它不断增长的用于执行普通任务的第三方包的生态系统。一个例子是,使用 [Vibe.d][11] web 框架构建一个简单的 web 服务器很容易。下面是一个例子: + +``` +#!/usr/bin/env dub +/+ dub.sdl: +dependency "vibe-d" version="~>0.8.0" ++/ +void main() +{ + import vibe.d; + listenHTTP(":8080", (req, res) { + res.writeBody("Hello, World: " ~ req.path); + }); + runApplication(); +} +``` + +它使用官方的 D 软件包管理器 [Dub][12],从 [D 软件包仓库][13]中获取 vibe.d Web 框架。Dub 负责下载 Vibe.d 包,然后在本地主机 8080 端口上编译并启动一个 web 服务器。 + +### 尝试一下 D 语言 + +这些只是你可能想用 D 来写脚本的几个原因。 + +D 是一种非常适合开发的语言。你可以很容易从 D 下载页面安装,因此下载编译器,看看例子,并亲自体验 D 语言。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/d-scripting + +作者:[Lawrence Aberba][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/aberba +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating) +[2]: https://opensource.com/article/20/3/automating-community-management-python +[3]: https://tour.dlang.org/tour/en/welcome/install-d-locally +[4]: https://tour.dlang.org/tour/en/welcome/run-d-program-locally +[5]: https://run.dlang.io/ +[6]: https://en.wikipedia.org/wiki/Shebang_(Unix) +[7]: https://dlang.org/rdmd.html +[8]: https://opensource.com/article/20/7/d-programming +[9]: http://ddili.org/ders/d.en/templates.html +[10]: http://ddili.org/ders/d.en/ranges.html +[11]: https://vibed.org +[12]: https://dub.pm/getting_started +[13]: https://code.dlang.org diff --git a/published/20210128 4 tips for preventing notification fatigue.md b/published/20210128 4 tips for preventing notification fatigue.md new file mode 100644 index 
0000000000..843364bb91
--- /dev/null
+++ b/published/20210128 4 tips for preventing notification fatigue.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13094-1.html)
+[#]: subject: (4 tips for preventing notification fatigue)
+[#]: via: (https://opensource.com/article/21/1/alert-fatigue)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+防止通知疲劳的 4 个技巧
+====== 
+
+> 不要让提醒淹没自己:设置重要的提醒,让其它提醒消失。你会感觉更好,工作效率更高。
+
+![](https://img.linux.net.cn/data/attachment/album/202102/06/234924mo3okotjlv7lo3yo.jpg)
+
+在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的第十八天。
+
+当我和人们谈论生产力时,我注意到一件事,那就是几乎每个人这样做都是为了保持更清晰的头脑。我们不是把所有的约会都记在脑子里,而是把它们放在一个数字日历上,在事件发生前提醒我们。我们有数字或实体笔记,这样我们就不必记住某件事的每一个小细节。我们有待办事项清单,提醒我们去做该做的事情。
+
+![Text box offering to send notifications][2]
+
+*NOPE(Kevin Sonney, [CC BY-SA 4.0][3])*
+
+如此多的应用、网站和服务想要提醒我们每一件小事,我们很容易就把它们全部屏蔽掉。而且,如果我们不这样做,我们就会开始遭受**提醒疲劳**的困扰 —— 这时我们会时刻神经紧绷,惶惶不安地等待着下一个提醒。
+
+提醒疲劳在那些因工作而被随叫随到的人中非常常见。它也发生在那些患有 **FOMO**(错失恐惧症)的人身上,他们会对自己感兴趣的每一个关键词、标签或社交媒体上的提及都设置提醒。
+
+此时,设置既能引起我们的注意、又不会被忽略的提醒,是件棘手的事情。不过,我确实有一些有用的技巧,可以让重要的提醒在这个忙碌的世界中越过我们自己的心理过滤器。
+
+![Alert for a task][4]
+
+*我可以忽略这个,对吧?(Kevin Sonney, [CC BY-SA 4.0][3])*
+
+ 1. 弄清楚什么更适合你:视觉提醒还是声音提醒。我使用视觉弹窗和声音的组合,但这只是对我有效的方式。有些人需要触觉提醒,比如手机或手表的震动。找到适合你的那一种。
+ 2. 为重要的提醒指定独特的音调或视觉效果。我有一个朋友,他把工作寻呼的铃声设置得最响亮、最讨厌。这是为了吸引他的注意力,让他不会错过提醒。我的显示器上有一盏灯,当我在待命期间收到工作提醒时,它就会闪烁红灯,同时通知也会发送到我的手机上。
+ 3. 关掉那些实际上无关紧要的提醒。社交网络、网站和应用都希望得到你的关注。它们不会在意你是否错过会议、约会迟到,或者熬夜到凌晨 4 点。关掉那些不重要的,让那些重要的可以被看到。
+ 4. 
每隔一段时间就改变一下。上个月有效的东西,下个月可能就不行了。我们会对一些东西逐渐适应、习以为常,然后就开始忽略它们。如果有些东西不奏效,就换个东西试试吧!这不会有什么损失,即使无法解决问题,也许你还会学到一些新知识。
+
+![Blue alert indicators light][5]
+
+*蓝色是没问题。红色是有问题。(Kevin Sonney, [CC BY-SA 4.0][3])*
+
+### 开源和选择
+
+一个好的应用会为通知提供很多选择。我最喜欢的一个是 Android 的 Etar 日历应用。[Etar 可以从开源 F-droid 仓库中获得][6]。
+
+Etar 和许多开源应用一样,为你提供了很多选项,尤其是通知设置。
+
+![Etar][7]
+
+通过 Etar,你可以激活或停用弹出式通知,设置打盹时间、打盹延迟、是否提醒你已拒绝的事件等。结合有计划的日程安排策略,你可以通过控制数字助手提示你待办事项的频率,来改变你一天的进程。
+
+提醒和警报真的很有用,只要我们能收到重要的提醒并予以注意即可。这可能需要做一些实验,但最终,少一些噪音是好事,而且真正需要我们注意的提醒也更容易被注意到。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/1/alert-fatigue
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife)
+[2]: https://opensource.com/sites/default/files/day18-image1.png
+[3]: https://creativecommons.org/licenses/by-sa/4.0/
+[4]: https://opensource.com/sites/default/files/day18-image2.png
+[5]: https://opensource.com/sites/default/files/day18-image3.png
+[6]: https://f-droid.org/en/packages/ws.xsoh.etar/
+[7]: https://opensource.com/sites/default/files/etar.jpg (Etar)
diff --git a/published/20210128 How to Run a Shell Script in Linux -Essentials Explained for Beginners.md b/published/20210128 How to Run a Shell Script in Linux -Essentials Explained for Beginners.md
new file mode 100644
index 0000000000..f2c5fd298c
--- /dev/null
+++ b/published/20210128 How to Run a Shell Script in Linux -Essentials Explained for Beginners.md
@@ -0,0 +1,168 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: 
(https://linux.cn/article-13106-1.html)
+[#]: subject: (How to Run a Shell Script in Linux [Essentials Explained for Beginners])
+[#]: via: (https://itsfoss.com/run-shell-script-linux/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+基础:如何在 Linux 中运行一个 Shell 脚本
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202102/10/235325tkv7h8dvlp4makkk.jpg)
+
+在 Linux 中有两种运行 shell 脚本的方法。你可以使用:
+
+```
+bash script.sh
+```
+
+或者,你可以像这样执行 shell 脚本:
+
+```
+./script.sh
+```
+
+这可能很简单,但没太多解释。不要担心,我将使用示例来进行必要的解释,以便你能理解为什么在运行一个 shell 脚本时要使用给定的特定语法格式。
+
+我将使用下面这个只有一行的 shell 脚本,使需要解释的事情变得尽可能简单:
+
+```
+abhishek@itsfoss:~/Scripts$ cat hello.sh
+
+echo "Hello World!"
+```
+
+### 方法 1:通过将文件作为参数传递给 shell 以运行 shell 脚本
+
+第一种方法涉及将脚本文件的名称作为参数传递给 shell。
+
+考虑到 bash 是默认 shell,你可以像这样运行一个脚本:
+
+```
+bash hello.sh
+```
+
+你知道这种方法的优点吗?**你的脚本不需要执行权限**。对于简单的任务非常方便快速。
+
+![在 Linux 中运行一个 Shell 脚本][1]
+
+如果你还不熟悉,我建议你 [阅读我的 Linux 文件权限详细指南][2] 。
+
+记住,作为参数传递的文件需要是一个 shell 脚本。一个 shell 脚本是由命令组成的。如果你使用一个普通的文本文件,它将会抱怨其中的命令是错误的。
+
+![运行一个文本文件为脚本][3]
+
+在这种方法中,**你明确地指定了你想使用 bash 作为脚本的解释器**。
+
+shell 只是一个程序,而 bash 只是 shell 的一种实现。还有其它的 shell 程序,像 ksh、[zsh][4] 等等。如果你安装有其它的 shell,你也可以使用它们来代替 bash。
+
+例如,我已安装了 zsh,并使用它来运行相同的脚本:
+
+![使用 Zsh 来执行 Shell 脚本][5]
+
+### 方法 2:通过具体指定 shell 脚本的路径来执行脚本
+
+另外一种运行 shell 脚本的方法是提供它的路径。但是要这样做之前,你的文件必须是可执行的。否则,当你尝试执行脚本时,你将会得到“权限被拒绝”的错误。
+
+因此,你首先需要确保你的脚本有可执行权限。你可以 [使用 chmod 命令][8] 给你自己的脚本赋予这种权限,像这样:
+
+```
+chmod u+x script.sh
+```
+
+使你的脚本可执行之后,你只需输入文件的名称及其绝对路径或相对路径。大多数情况下,你都在同一个目录中,因此你可以像这样使用它:
+
+```
+./script.sh
+```
+
+如果你与你的脚本不在同一个目录中,你可以具体指定脚本的绝对路径或相对路径:
+
+![在其它的目录中运行 Shell 脚本][9]
+
+在脚本前的这个 `./` 是非常重要的(当你与脚本在同一个目录中)。
+
+![][10]
+
+为什么当你在同一个目录下,却不能使用脚本名称?这是因为你的 Linux 系统会在 `PATH` 环境变量中指定的几个目录中查找可执行的文件来运行。
+
+这里是我的系统的 `PATH` 环境变量的值:
+
+```
+abhishek@itsfoss:~$ echo $PATH
+/home/abhishek/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+```
+ 
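为了更直观地理解 `PATH` 的查找机制,下面给出一个可以自己动手验证的小实验(其中的临时目录和脚本名都只是示例,假设当前使用的是 bash):

```shell
# 创建一个临时目录,并在其中写入一个简单的脚本
dir=$(mktemp -d)
printf '#!/bin/bash\necho "Hello World!"\n' > "$dir/hello.sh"
chmod u+x "$dir/hello.sh"

# 此时该目录还不在 PATH 中,直接用名称调用会失败
hello.sh 2>/dev/null || echo "not found"

# 把该目录追加到 PATH 之后,就可以在任何位置直接用名称运行它
export PATH="$PATH:$dir"
hello.sh
```

第一次调用会输出 “not found”,加入 `PATH` 后则会打印 “Hello World!”;实验结束后可以用 `rm -r "$dir"` 清理临时目录。下面再回到上面输出的 `PATH` 值本身。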
+这意味着在下面目录中具有可执行权限的任意文件都可以在系统的任何位置运行: + + * `/home/abhishek/.local/bin` + * `/usr/local/sbin` + * `/usr/local/bin` + * `/usr/sbin` + * `/usr/bin` + * `/sbin` + * `/bin` + * `/usr/games` + * `/usr/local/games` + * `/snap/bin` + +Linux 命令(像 `ls`、`cat` 等)的二进制文件或可执行文件都位于这些目录中的其中一个。这就是为什么你可以在你系统的任何位置通过使用命令的名称来运作这些命令的原因。看看,`ls` 命令就是位于 `/usr/bin` 目录中。 + +![][11] + +当你使用脚本而不具体指定其绝对路径或相对路径时,系统将不能在 `PATH` 环境变量中找到提及的脚本。 + +> 为什么大多数 shell 脚本在其头部包含 #! /bin/bash ? +> +> 记得我提过 shell 只是一个程序,并且有 shell 程序的不同实现。 +> +> 当你使用 `#! /bin/bash` 时,你是具体指定 bash 作为解释器来运行脚本。如果你不这样做,并且以 `./script.sh` 的方式运行一个脚本,它通常会在你正在运行的 shell 中运行。 +> +> 有问题吗?可能会有。看看,大多数的 shell 语法是大多数种类的 shell 中通用的,但是有一些语法可能会有所不同。 +> +> 例如,在 bash 和 zsh 中数组的行为是不同的。在 zsh 中,数组索引是从 1 开始的,而不是从 0 开始。 +> +>![Bash Vs Zsh][12] +> +> 使用 `#! /bin/bash` 来标识该脚本是 bash 脚本,并且应该使用 bash 作为脚本的解释器来运行,而不受在系统上正在使用的 shell 的影响。如果你使用 zsh 的特殊语法,你可以通过在脚本的第一行添加 `#! /bin/zsh` 的方式来标识其是 zsh 脚本。 +> +> 在 `#!` 和 `/bin/bash` 之间的空格是没有影响的。你也可以使用 `#!/bin/bash` 。 + +### 它有帮助吗? + +我希望这篇文章能够增加你的 Linux 知识。如果你还有问题或建议,请留下评论。 + +专家用户可能依然会挑出我遗漏的东西。但这种初级题材的问题是,要找到信息的平衡点,避免细节过多或过少,并不容易。 + +如果你对学习 bash 脚本感兴趣,在我们专注于系统管理的网站 [Linux Handbook][14] 上,我们有一个 [完整的 Bash 初学者系列][13] 。如果你想要,你也可以 [购买带有附加练习的电子书][15] ,以支持 Linux Handbook。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/run-shell-script-linux/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[robsean](https://github.com/robsean) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/run-a-shell-script-linux.png?resize=741%2C329&ssl=1 +[2]: https://linuxhandbook.com/linux-file-permissions/ +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/running-text-file-as-script.png?resize=741%2C329&ssl=1 +[4]: https://www.zsh.org +[5]: 
https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/execute-shell-script-with-zsh.png?resize=741%2C253&ssl=1 +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/run-multiple-commands-in-linux.png?fit=800%2C450&ssl=1 +[7]: https://itsfoss.com/run-multiple-commands-linux/ +[8]: https://linuxhandbook.com/chmod-command/ +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/running-shell-script-in-other-directory.png?resize=795%2C272&ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/executing-shell-scripts-linux.png?resize=800%2C450&ssl=1 +[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/locating-command-linux.png?resize=795%2C272&ssl=1 +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/bash-vs-zsh.png?resize=795%2C386&ssl=1 +[13]: https://linuxhandbook.com/tag/bash-beginner/ +[14]: https://linuxhandbook.com +[15]: https://www.buymeacoffee.com/linuxhandbook diff --git a/published/20210129 Manage containers with Podman Compose.md b/published/20210129 Manage containers with Podman Compose.md new file mode 100644 index 0000000000..d1d3ce6286 --- /dev/null +++ b/published/20210129 Manage containers with Podman Compose.md @@ -0,0 +1,174 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13125-1.html) +[#]: subject: (Manage containers with Podman Compose) +[#]: via: (https://fedoramagazine.org/manage-containers-with-podman-compose/) +[#]: author: (Mehdi Haghgoo https://fedoramagazine.org/author/powergame/) + +用 Podman Compose 管理容器 +====== + +![][1] + +容器很棒,让你可以将你的应用连同其依赖项一起打包,并在任何地方运行。从 2013 年的 Docker 开始,容器已经让软件开发者的生活变得更加轻松。 + +Docker 的一个缺点是它有一个中央守护进程,它以 root 用户的身份运行,这对安全有影响。但这正是 Podman 的用武之地。Podman 是一个 [无守护进程容器引擎][2],用于开发、管理和在你的 Linux 系统上以 root 或无 root 模式运行 OCI 容器。 + +下面这些文章可以用来了解更多关于 Podman 的信息: + + * [使用 Podman 以非 root 用户身份运行 Linux 容器][11] + * [在 Fedora 上使用 Podman 的 Pod][3] + * [在 Fedora 中结合权能使用 Podman][4] + 
+如果你使用过 Docker,你很可能也知道 Docker Compose,它是一个用于编排多个可能相互依赖的容器的工具。要了解更多关于 Docker Compose 的信息,请看它的[文档][5]。
+
+### 什么是 Podman Compose?
+
+[Podman Compose][6] 项目的目标是作为 Docker Compose 的替代品,而不需要对 `docker-compose.yaml` 文件进行任何修改。由于 Podman Compose 使用吊舱(pod)工作,所以最好看下“吊舱”的最新定义。
+
+> 一个“吊舱(pod)”(如一群鲸鱼或豌豆荚)是由一个或多个[容器][7]组成的组,具有共享的存储/网络资源,以及如何运行容器的规范。
+>
+> [Pods - Kubernetes 文档][8]
+
+(LCTT 译注:容器技术领域大量使用了航海比喻,pod 一词,意为“豆荚”,在航海领域指“吊舱” —— 均指盛装多个物品的容器。常不翻译,考虑前后文,可译做“吊舱”。)
+
+Podman Compose 的基本思想是,它读取 `docker-compose.yaml` 文件里面定义的服务,为每个服务创建一个容器。Docker Compose 和 Podman Compose 的一个主要区别是,Podman Compose 将整个项目的容器添加到一个单一的吊舱中,而且所有的容器共享同一个网络。如你在例子中看到的,它在创建容器时使用了 `--add-host` 标志,它甚至用和 Docker Compose 一样的方式命名容器。
+
+### 安装
+
+Podman Compose 的完整安装说明可以在[项目页面][6]上找到,有几种安装方法。要安装最新的开发版本,使用以下命令:
+
+```
+pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
+```
+
+确保你也安装了 [Podman][9],因为你也需要它。在 Fedora 上,使用下面的命令来安装 Podman:
+
+```
+sudo dnf install podman
+```
+
+### 例子:用 Podman Compose 启动一个 WordPress 网站
+
+想象一下,你的 `docker-compose.yaml` 文件在一个叫 `wpsite` 的文件夹里。一个典型的 WordPress 网站的 `docker-compose.yaml`(或 `docker-compose.yml`)文件是这样的:
+
+```
+version: "3.8"
+services:
+  web:
+    image: wordpress
+    restart: always
+    volumes:
+      - wordpress:/var/www/html
+    ports:
+      - 8080:80
+    environment:
+      WORDPRESS_DB_HOST: db
+      WORDPRESS_DB_USER: magazine
+      WORDPRESS_DB_NAME: magazine
+      WORDPRESS_DB_PASSWORD: 1maGazine!
+      WORDPRESS_TABLE_PREFIX: cz
+      WORDPRESS_DEBUG: 0
+    depends_on:
+      - db
+    networks:
+      - wpnet
+  db:
+    image: mariadb:10.5
+    restart: always
+    ports:
+      - 6603:3306
+
+    volumes:
+      - wpdbvol:/var/lib/mysql
+
+    environment:
+      MYSQL_DATABASE: magazine
+      MYSQL_USER: magazine
+      MYSQL_PASSWORD: 1maGazine!
+      MYSQL_ROOT_PASSWORD: 1maGazine! 
+    networks:
+      - wpnet
+volumes:
+  wordpress: {}
+  wpdbvol: {}
+
+networks:
+  wpnet: {}
+```
+
+如果你用过 Docker,你就会知道你可以运行 `docker-compose up` 来启动这些服务。Docker Compose 会创建两个名为 `wpsite_web_1` 和 `wpsite_db_1` 的容器,并将它们连接到一个名为 `wpsite_wpnet` 的网络。
+
+现在,看看当你在项目目录下运行 `podman-compose up` 时会发生什么。首先,一个以执行命令的目录命名的吊舱被创建。接下来,它寻找 YAML 文件中定义的任何名称的卷,如果它们不存在,就创建卷。然后,在 YAML 文件的 `services` 部分列出的每个服务都会创建一个容器,并添加到吊舱中。
+
+容器的命名与 Docker Compose 类似。例如,为你的 web 服务创建一个名为 `wpsite_web_1` 的容器。Podman Compose 还为每个命名的容器添加了 `localhost` 别名。之后,容器仍然可以通过名字互相解析,尽管它们并不像 Docker 那样在一个桥接网络上。要做到这一点,使用选项 `--add-host`。例如,`--add-host web:localhost`。
+
+请注意,`docker-compose.yaml` 包含了一个从主机 8080 端口到容器 80 端口的 Web 服务的端口转发。现在你应该可以通过浏览器访问新 WordPress 实例,地址为 `http://localhost:8080`。
+
+![WordPress Dashboard][10]
+
+### 控制吊舱和容器
+
+要查看正在运行的容器,使用 `podman ps`,它可以显示 web 和数据库容器以及吊舱中的基础设施容器。
+
+```
+CONTAINER ID  IMAGE                               COMMAND               CREATED      STATUS          PORTS                                         NAMES
+a364a8d7cec7  docker.io/library/wordpress:latest  apache2-foregroun...  
2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_web_1
+c447024aa104  docker.io/library/mariadb:10.5      mysqld                2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_db_1
+12b1e3418e3e  k8s.gcr.io/pause:3.2
+```
+
+你也可以验证 Podman 已经为这个项目创建了一个吊舱,吊舱以你执行命令的文件夹命名。
+
+```
+POD ID        NAME             STATUS    CREATED      INFRA ID      # OF CONTAINERS
+8a08a3a7773e  wpsite           Degraded  2 hours ago  12b1e3418e3e  3
+```
+
+要停止容器,在另一个命令窗口中输入以下命令:
+
+```
+podman-compose down
+```
+
+你也可以通过停止和删除吊舱来实现。这实质上是先停止并移除所有的容器,然后再删除包含它们的吊舱。所以,同样的事情也可以通过这些命令来实现:
+
+```
+podman pod stop podname
+podman pod rm podname
+```
+
+请注意,这不会删除你在 `docker-compose.yaml` 中定义的卷。所以,你的 WordPress 网站的状态被保存了下来,你可以通过运行这个命令来恢复它:
+
+```
+podman-compose up
+```
+
+总之,如果你是一个 Podman 粉丝,并且用 Podman 做容器方面的工作,你可以使用 Podman Compose 来管理你的开发和生产中的容器。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/manage-containers-with-podman-compose/
+
+作者:[Mehdi Haghgoo][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/powergame/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2021/01/podman-compose-1-816x345.jpg
+[2]: https://podman.io
+[3]: https://fedoramagazine.org/podman-pods-fedora-containers/
+[4]: https://linux.cn/article-12859-1.html
+[5]: https://docs.docker.com/compose/
+[6]: https://github.com/containers/podman-compose
+[7]: https://kubernetes.io/docs/concepts/containers/
+[8]: https://kubernetes.io/docs/concepts/workloads/pods/
+[9]: https://podman.io/getting-started/installation
+[10]: https://fedoramagazine.org/wp-content/uploads/2021/01/Screenshot-from-2021-01-08-06-27-29-1024x767.png
+[11]: https://linux.cn/article-10156-1.html
\ No 
newline at end of file diff --git a/published/20210131 3 wishes for open source productivity in 2021.md b/published/20210131 3 wishes for open source productivity in 2021.md new file mode 100644 index 0000000000..4517bae151 --- /dev/null +++ b/published/20210131 3 wishes for open source productivity in 2021.md @@ -0,0 +1,64 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13113-1.html) +[#]: subject: (3 wishes for open source productivity in 2021) +[#]: via: (https://opensource.com/article/21/1/productivity-wishlist) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +2021 年开源生产力的 3 个愿望 +====== + +> 2021年,开源世界可以拓展的有很多。这是我特别感兴趣的三个领域。 + +![Looking at a map for career journey][1] + +在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的最后一天。 + +我们已经到了又一个系列的结尾处。因此,让我们谈谈我希望在 2021 年看到的更多事情。 + +### 断网 + +![Large Lego set built by the author][2] + +*我在假期期间制作的(Kevin Sonney, [CC BY-SA 4.0][3])* + +对*许多、许多的*人来说,2020 年是非常困难的一年。疫情大流行、各种政治事件、24 小时的新闻轰炸等等,都对我们的精神健康造成了伤害。虽然我确实谈到了 [抽出时间进行自我护理][4],但我只是想断网:也就是关闭提醒、手机、平板等,暂时无视这个世界。我公司的一位经理实际上告诉我们,如果放假或休息一天,就把所有与工作有关的东西都关掉(除非我们在值班)。我最喜欢的“断网”活动之一就是听音乐和搭建大而复杂的乐高。 + +### 可访问性 + +尽管我谈论的许多技术都是任何人都可以做的,但是软件方面的可访问性都有一定难度。相对于自由软件运动之初,Linux 和开源世界在辅助技术方面已经有了长足发展。但是,仍然有太多的应用和系统不会考虑有些用户没有与设计者相同的能力。我一直在关注这一领域的发展,因为每个人都应该能够访问事物。 + +### 更多的一体化选择 + +![JPilot all in one organizer software interface][5] + +*JPilot(Kevin Sonney, [CC BY-SA 4.0][3])* + +在 FOSS 世界中,一体化的个人信息管理解决方案远没有商业软件世界中那么多。总体趋势是使用单独的应用,它们必须通过配置来相互通信或通过中介服务(如 CalDAV 服务器)。移动市场在很大程度上推动了这一趋势,但我仍然向往像 [JPilot][6] 这样无需额外插件或服务就能完成几乎所有我需要的事情的日子。 + +非常感谢大家阅读这个年度系列。如果你认为我错过了什么,或者明年需要注意什么,请在下方评论。 + +就像我在 [生产力炼金术][7] 上说的那样,尽最大努力保持生产力! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/productivity-wishlist + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey) +[2]: https://opensource.com/sites/default/files/day21-image1.png +[3]: https://creativecommons.org/licenses/by-sa/4.0/ +[4]: https://opensource.com/article/21/1/self-care +[5]: https://opensource.com/sites/default/files/day21-image2.png +[6]: http://www.jpilot.org/ +[7]: https://productivityalchemy.com diff --git a/published/20210201 Generate QR codes with this open source tool.md b/published/20210201 Generate QR codes with this open source tool.md new file mode 100644 index 0000000000..bc554bf5cb --- /dev/null +++ b/published/20210201 Generate QR codes with this open source tool.md @@ -0,0 +1,81 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13097-1.html) +[#]: subject: (Generate QR codes with this open source tool) +[#]: via: (https://opensource.com/article/21/2/zint-barcode-generator) +[#]: author: (Don Watkins https://opensource.com/users/don-watkins) + +Zint:用这个开源工具生成二维码 +====== + +> Zint 可以轻松生成 50 多种类型的自定义条码。 + +![](https://img.linux.net.cn/data/attachment/album/202102/07/231854y8ffstg0m6l2fcmz.jpg) + +二维码是一种很好的可以向人们提供信息的方式,且没有打印的麻烦和费用。大多数人的智能手机都支持二维码扫描,无论其操作系统是什么。 + +你可能想使用二维码的原因有很多。也许你是一名教师,希望通过补充材料来测试你的学生,以增强学习效果,或者是一家餐厅,需要在遵守社交距离准则的同时提供菜单。我经常行走于自然小径,那里贴有树木和其他植物的标签。用二维码来补充这些小标签是一种很好的方式,它可以提供关于公园展品的额外信息,而无需花费和维护标识牌。在这些和其他情况下,二维码是非常有用的。 + 
+在互联网上搜索一个简单的、开源的方法来创建二维码时,我发现了 [Zint][2]。Zint 是一个优秀的开源 (GPLv3.0) 生成条码的解决方案。根据该项目的 [GitHub 仓库][3]:“Zint 是一套可以方便地对任何一种公共领域条形码标准的数据进行编码的程序,并允许你将这种功能集成到你自己的程序中。” + +Zint 支持 50 多种类型的条形码,包括二维码(ISO 18004),你可以轻松地创建这些条形码,然后复制和粘贴到 word 文档、博客、维基和其他数字媒体中。人们可以用智能手机扫描这些二维码,快速链接到信息。 + +### 安装 Zint + +Zint 适用于 Linux、macOS 和 Windows。 + +你可以在基于 Ubuntu 的 Linux 发行版上使用 `apt` 安装 Zint 命令: + +``` +$ sudo apt install zint +``` + +我还想要一个图形用户界面(GUI),所以我安装了 Zint-QT: + +``` +$ sudo apt install zint-qt +``` + +请参考手册的[安装部分][4],了解 macOS 和 Windows 的说明。 + +### 用 Zint 生成二维码 + +安装好后,我启动了它,并创建了我的第一个二维码,这是一个指向 Opensource.com 的链接。 + +![Generating QR code with Zint][5] + +Zint 的 50 多个其他条码选项包括许多国家的邮政编码、DotCode、EAN、EAN-14 和通用产品代码 (UPC)。[项目文档][2]中包含了它可以渲染的所有代码的完整列表。 + +你可以将任何条形码复制为 BMP 或 SVG,或者将输出保存为你应用中所需要的任何尺寸的图像文件。这是我的 77x77 像素的二维码。 + +![QR code][7] + +该项目维护了一份出色的用户手册,其中包含了在[命令行][8]和 [GUI][9] 中使用 Zint 的说明。你甚至可以[在线][10]试用 Zint。对于功能请求或错误报告,请[访问网站][11]或[发送电子邮件][12]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/zint-barcode-generator + +作者:[Don Watkins][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop) +[2]: http://www.zint.org.uk/ +[3]: https://github.com/zint/zint +[4]: http://www.zint.org.uk/Manual.aspx?type=p&page=2 +[5]: https://opensource.com/sites/default/files/uploads/zintqrcode_generation.png (Generating QR code with Zint) +[6]: https://creativecommons.org/licenses/by-sa/4.0/ +[7]: https://opensource.com/sites/default/files/uploads/zintqrcode_77px.png (QR code) +[8]: http://zint.org.uk/Manual.aspx?type=p&page=4 +[9]: 
http://zint.org.uk/Manual.aspx?type=p&page=3
+[10]: http://www.barcode-generator.org/
+[11]: https://lists.sourceforge.net/lists/listinfo/zint-barcode
+[12]: mailto:zint-barcode@lists.sourceforge.net
diff --git a/published/20210202 Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop.md b/published/20210202 Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop.md
new file mode 100644
index 0000000000..12f6aa8801
--- /dev/null
+++ b/published/20210202 Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop.md
@@ -0,0 +1,86 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13119-1.html)
+[#]: subject: (Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop)
+[#]: via: (https://itsfoss.com/filmulator/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Filmulator:一个简单的、开源的 Raw 图像编辑器
+====== 
+
+![](https://img.linux.net.cn/data/attachment/album/202102/15/100616o54wb5h4aqgmq4qe.jpg)
+
+> Filmulator 是一个开源的具有库管理功能的 raw 照片编辑应用,侧重于简单、易用和简化的工作流程。
+
+### Filmulator:适用于 Linux(和 Windows)的 raw 图像编辑器
+
+[Linux 中有一堆 raw 照片编辑器][1],[Filmulator][2] 就是其中之一。Filmulator 的目标是仅提供基本要素,从而使 raw 图像编辑变得简单。它还增加了库管理的功能,如果你正在为你的相机图像寻找一个不错的应用,这是一个加分项。
+
+对于那些不了解 raw 的人来说,[raw 图像文件][3]是一种只经过最低限度处理、未经压缩的文件。换句话说,它是未经压缩的数字文件,并且只经过了最低限度的处理。专业摄影师更喜欢用 raw 格式拍摄照片,并自行处理。普通人用智能手机拍摄的照片,通常会被压缩为 JPEG 格式或加上滤镜。
+
+让我们来看看 Filmulator 编辑器提供了哪些功能。
+
+### Filmulator 的功能
+
+![Filmulator interface][4]
+
+Filmulator 宣称,它不是典型的“胶片效果滤镜” —— 那种滤镜只是复制了胶片的外在特征。相反,Filmulator 从根本上还原了胶片的魅力所在:显影过程。
+
+它模拟了胶片的显影过程:从胶片的“曝光”,到每个像素内“银晶”的生长,再到“显影剂”在相邻像素之间以及与储槽中大量显影剂之间的扩散。
+
+Filmulator 开发者表示,这种模拟带来了以下好处:
+
+ * 大的明亮区域变得更暗,压缩了输出动态范围。
+ * 小的明亮区域使周围环境变暗,增强局部对比度。
+ * 在明亮区域,饱和度得到增强,有助于保留蓝天、明亮肤色和日落的色彩。
+ * 在极度饱和的区域,亮度会被减弱,有助于保留细节,例如花朵。
+
+以下是经 Filmulator 处理前后的 raw 图像对比:它以自然的方式增强了色彩,而不会引起色彩剪切。
+
+![原图][5]
+
+![处理后][10]
+
+### 在 Ubuntu/Linux 上安装 Filmulator
+
+Filmulator 有一个
AppImage 可用,这样你就可以在 Linux 上轻松使用它。使用 [AppImage 文件][6]真的很简单。下载后,使它可执行,然后双击运行即可。
+
+- [下载 Filmulator for Linux][7]
+
+对于 Windows 用户,也有一个 Windows 版本可用。除此之外,你还可以随时前往[它的 GitHub 仓库][8]查看它的源代码。
+
+有一份[小文档][9]可以帮助你开始使用 Filmulator。
+
+### 总结
+
+Filmulator 的设计理念是为任何工作提供最好的工具,而且只提供这一个工具。这意味着牺牲了灵活性,但换来了一个大大简化和精简的用户界面。
+
+我连业余摄影师都算不上,更别说专业摄影师了。我没有单反或其他高端摄影设备。因此,我无法测试和分享我对 Filmulator 实用性的经验。
+
+如果你有更多处理 raw 图像的经验,请尝试下 Filmulator,并分享你的意见。有一个 AppImage 可以让你快速测试它,看看它是否适合你的需求。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/filmulator/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/raw-image-tools-linux/
+[2]: https://filmulator.org/
+[3]: https://www.findingtheuniverse.com/what-is-raw-in-photography/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/Filmulate.jpg?resize=799%2C463&ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/image-without-filmulator.jpeg?ssl=1
+[6]: https://itsfoss.com/use-appimage-linux/
+[7]: https://filmulator.org/download/
+[8]: https://github.com/CarVac/filmulator-gui
+[9]: https://github.com/CarVac/filmulator-gui/wiki
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/image-with-filmulator.jpeg?ssl=1
\ No newline at end of file
diff --git a/published/20210203 Paru - A New AUR Helper and Pacman Wrapper Based on Yay.md b/published/20210203 Paru - A New AUR Helper and Pacman Wrapper Based on Yay.md
new file mode 100644
index 0000000000..c30c45aaab
--- /dev/null
+++ b/published/20210203 Paru - A New AUR Helper and Pacman Wrapper Based on Yay.md
@@ -0,0 +1,153 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: 
(https://linux.cn/article-13122-1.html) +[#]: subject: (Paru – A New AUR Helper and Pacman Wrapper Based on Yay) +[#]: via: (https://itsfoss.com/paru-aur-helper/) +[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/) + +Paru:基于 Yay 的新 AUR 助手 +====== + +![](https://img.linux.net.cn/data/attachment/album/202102/16/101301ldekk9kkpqlplke6.jpg) + +[用户选择 Arch Linux][1] 或 [基于 Arch 的 Linux 发行版][2]的主要原因之一就是 [Arch 用户仓库(AUR)][3]。 + +遗憾的是,[pacman][4],也就是 Arch 的包管理器,不能以类似官方仓库的方式访问 AUR。AUR 中的包是以 [PKGBUILD][5] 的形式存在的,需要手动过程来构建。 + +AUR 助手可以自动完成这个过程。毫无疑问,[yay][6] 是最受欢迎和备受青睐的 AUR 助手之一。 + +最近,`yay` 的两位开发者之一的 [Morganamilo][7][宣布][8]将退出 `yay` 的维护工作,以开始自己的 AUR 助手 [paru][9]。`paru` 是用 [Rust][10] 编写的,而 `yay` 是用 [Go][11] 编写的,它的设计是基于 yay 的。 + +请注意,`yay` 还没有结束支持,它仍然由 [Jguer][12] 积极维护。他还[评论][13]说,`paru` 可能适合那些寻找丰富功能的 AUR 助手的用户。因此我推荐大家尝试一下。 + +### 安装 Paru AUR 助手 + +要安装 `paru`,打开你的终端,逐一输入以下命令: + +``` +sudo pacman -S --needed base-devel +git clone https://aur.archlinux.org/paru.git +cd paru +makepkg -si +``` + +现在已经安装好了,让我们来看看如何使用它。 + +### 使用 Paru AUR 助手的基本命令 + +在我看来,这些都是 `paru` 最基本的命令。你可以在 [GitHub][9] 的官方仓库中探索更多。 + + * `paru <用户输入>`:搜索并安装“用户输入” + * `paru -`:`paru -Syu` 的别名 + * `paru -Sua`:仅升级 AUR 包。 + * `paru -Qua`:打印可用的 AUR 更新 + * `paru -Gc <用户输入>`:显示“用户输入”的 AUR 评论 + +### 充分使用 Paru AUR 助手 + +你可以在 GitHub 上访问 `paru` 的[更新日志][14]来查看完整的变更日志历史,或者你可以在[首次发布][15]中查看对 `yay` 的变化。 + +#### 在 Paru 中启用颜色 + +要在 `paru` 中启用颜色,你必须先在 `pacman` 中启用它。所有的[配置文件][16]都在 `/etc` 目录下。在此例中,我[使用 Nano 文本编辑器][17],但是,你可以选择使用任何[基于终端的文本编辑器][18]。 + +``` +sudo nano /etc/pacman.conf +``` + +打开 `pacman` 配置文件后,取消 `Color` 的注释,即可启用此功能。 + +![][19] + +#### 反转搜索顺序 + +根据你的搜索条件,最相关的包通常会显示在搜索结果的顶部。在 `paru` 中,你可以反转搜索顺序,使你的搜索更容易。 + +与前面的例子类似,打开 `paru` 配置文件: + +``` +sudo nano /etc/paru.conf +``` + +取消注释 `BottomUp` 项,然后保存文件。 + +![][20] + +如你所见,顺序是反转的,第一个包出现在了底部。 + +![][21] + +#### 编辑 PKGBUILD (对于高级用户) + +如果你是一个有经验的 Linux 用户,你可以通过 `paru` 编辑 AUR 包。要做到这一点,你需要在 `paru` 配置文件中启用该功能,并设置你所选择的文件管理器。 + +在此例中,我将使用配置文件中的默认值,即 vifm 
文件管理器。如果你还没有使用过它,你可能需要安装它。 + +``` +sudo pacman -S vifm +sudo nano /etc/paru.conf +``` + +打开配置文件,如下所示取消注释。 + +![][22] + +让我们回到 [Google Calendar][23] 的 AUR 包,并尝试安装它。系统会提示你审查该软件包。输入 `Y` 并按下回车。 + +![][24] + +从文件管理器中选择 PKGBUILD,然后按下回车查看软件包。 + +![][25] + +你所做的任何改变都将是永久性的,下次升级软件包时,你的改变将与上游软件包合并。 + +![][26] + +### 总结 + +`paru` 是 [AUR 助手家族][27]的又一个有趣的新成员,前途光明。此时,我不建议更换 `yay`,因为它还在维护,但一定要试试 `paru`。你可以把它们两个都安装到你的系统中,然后得出自己的结论。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/paru-aur-helper/ + +作者:[Dimitrios Savvopoulos][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/dimitrios/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/why-arch-linux/ +[2]: https://itsfoss.com/arch-based-linux-distros/ +[3]: https://itsfoss.com/aur-arch-linux/ +[4]: https://itsfoss.com/pacman-command/ +[5]: https://wiki.archlinux.org/index.php/PKGBUILD +[6]: https://news.itsfoss.com/qt-6-released/ +[7]: https://github.com/Morganamilo +[8]: https://www.reddit.com/r/archlinux/comments/jjn1c1/paru_v100_and_stepping_away_from_yay/ +[9]: https://github.com/Morganamilo/paru +[10]: https://www.rust-lang.org/ +[11]: https://golang.org/ +[12]: https://github.com/Jguer +[13]: https://aur.archlinux.org/packages/yay/#pinned-788241 +[14]: https://github.com/Morganamilo/paru/releases +[15]: https://github.com/Morganamilo/paru/releases/tag/v1.0.0 +[16]: https://linuxhandbook.com/linux-directory-structure/#-etc-configuration-files +[17]: https://itsfoss.com/nano-editor-guide/ +[18]: https://itsfoss.com/command-line-text-editors-linux/ +[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/pacman.conf-color.png?resize=800%2C480&ssl=1 +[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-bottomup.png?resize=800%2C480&ssl=1 +[21]: 
https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-bottomup-2.png?resize=800%2C480&ssl=1
+[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-vifm.png?resize=732%2C439&ssl=1
+[23]: https://aur.archlinux.org/packages/gcalcli/
+[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review.png?resize=800%2C480&ssl=1
+[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review-2.png?resize=800%2C480&ssl=1
+[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review-3.png?resize=800%2C480&ssl=1
+[27]: https://itsfoss.com/best-aur-helpers/
\ No newline at end of file
diff --git a/published/20210204 A hands-on tutorial of SQLite3.md b/published/20210204 A hands-on tutorial of SQLite3.md
new file mode 100644
index 0000000000..057e0ce98e
--- /dev/null
+++ b/published/20210204 A hands-on tutorial of SQLite3.md
@@ -0,0 +1,255 @@
+[#]: collector: "lujun9972"
+[#]: translator: "amwps290"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-13117-1.html"
+[#]: subject: "A hands-on tutorial of SQLite3"
+[#]: via: "https://opensource.com/article/21/2/sqlite3-cheat-sheet"
+[#]: author: "Klaatu https://opensource.com/users/klaatu"
+
+SQLite3 实践教程
+====== 
+
+> 开始使用这个功能强大且通用的数据库吧。
+
+![](https://img.linux.net.cn/data/attachment/album/202102/14/131146jsx2kvyobwxwswct.jpg)
+
+应用程序经常需要保存数据。无论你的用户是创建简单的文本文档、复杂的图形布局、游戏进度还是错综复杂的客户和订单号列表,软件通常都意味着生成数据。有很多方法可以存储数据以供重复使用。你可以将文本转储为 INI、[YAML][2]、XML 或 JSON 等配置格式,可以输出原始的二进制数据,也可以将数据存储在结构化数据库中。SQLite 是一个自包含的、轻量级数据库,可轻松创建、解析、查询、修改和传输数据。
+
+- 下载 [SQLite3 备忘录][3]
+
+SQLite 已被奉献给 [公共领域][4],[从技术上讲,这意味着它没有版权,因此不需要许可证][5]。如果你需要许可证,则可以 [购买所有权担保][6]。SQLite 非常常见,大约有 1 万亿个 SQLite 数据库正在使用中。Android 和 iOS 设备、macOS 和 Windows 10 计算机、大多数 Linux 系统、每个基于 Webkit 的 Web 浏览器、现代电视机、汽车多媒体系统以及无数其他软件应用程序中,都包含多个这样的数据库。
+
+总而言之,它是用于存储和组织数据的一个可靠而简单的系统。
+
+### 安装
+
+你的系统上可能已经有 SQLite 
库,但是你需要安装其命令行工具才能直接使用它。在 Linux上,你可能已经安装了这些工具。该工具提供的命令是 `sqlite3` (而不仅仅是 sqlite)。 + +如果没有在你的 Linux 或 BSD 上安装 SQLite,你则可以从软件仓库中或 ports 树中安装 SQLite,也可以从源代码或已编译的二进制文件进行[下载并安装][7]。 + +在 macOS 或 Windows 上,你可以从 [sqlite.org][7] 下载并安装 SQLite 工具。 + +### 使用 SQLite + +通过编程语言与数据库进行交互是很常见的。因此,像 Java、Python、Lua、PHP、Ruby、C++ 以及其他编程语言都提供了 SQLite 的接口(或“绑定”)。但是,在使用这些库之前,了解数据库引擎的实际情况以及为什么你对数据库的选择很重要是有帮助的。本文向你介绍 SQLite 和 `sqlite3` 命令,以便你熟悉该数据库如何处理数据的基础知识。 + +### 与 SQLite 交互 + +你可以使用 `sqlite3` 命令与 SQLite 进行交互。 该命令提供了一个交互式的 shell 程序,以便你可以查看和更新数据库。 + +``` +$ sqlite3 +SQLite version 3.34.0 2020-12-01 16:14:00 +Enter ".help" for usage hints. +Connected to a transient in-memory database. +Use ".open FILENAME" to reopen on a persistent database. +sqlite> +``` + +该命令将使你处于 SQLite 的子 shell 中,因此现在的提示符是 SQLite 的提示符。你以前使用的 Bash 命令在这里将不再适用。你必须使用 SQLite 命令。要查看 SQLite 命令列表,请输入 `.help`: + +``` +sqlite> .help +.archive ... Manage SQL archives +.auth ON|OFF SHOW authorizer callbacks +.backup ?DB? FILE Backup DB (DEFAULT "main") TO FILE +.bail ON|off Stop after hitting an error. DEFAULT OFF +.binary ON|off Turn BINARY output ON OR off. DEFAULT OFF +.cd DIRECTORY CHANGE the working directory TO DIRECTORY +[...] 
+``` + +这些命令中有一些是二元开关(开或关),而其他一些则需要唯一的参数(如文件名、路径等)。这些是 SQLite Shell 的管理命令,不是用于数据库查询。数据库以结构化查询语言(SQL)进行查询,许多 SQLite 查询与你从 [MySQL][8] 和 [MariaDB][9] 数据库中已经知道的查询相同。但是,数据类型和函数有所不同,因此,如果你熟悉另一个数据库,请特别注意细微的差异。 + +### 创建数据库 + +启动 SQLite 时,可以打开内存数据库,也可以选择要打开的数据库: + +``` +$ sqlite3 mydatabase.db +``` + +如果还没有数据库,则可以在 SQLite 提示符下创建一个数据库: + +``` +sqlite> .open mydatabase.db +``` + +现在,你的硬盘驱动器上有一个空文件,可以用作 SQLite 数据库。文件扩展名 `.db` 是任意的。你也可以使用 `.sqlite` 或任何你想要的后缀。 + +### 创建一个表 + +数据库包含一些表(table),可以将其可视化为电子表格。表有许多行(在数据库中称为记录 record)和列。行和列的交集称为字段(field)。 + +结构化查询语言(SQL)以其提供的内容而命名:一种以可预测且一致的语法查询数据库内容以获得有用结果的方法。SQL 读起来很像普通的英语句子,即使有点机械化。当前,你的数据库是一个没有任何表的空数据库。 + +你可以使用 `CREATE` 来创建一个新表,并可以和 `IF NOT EXISTS` 结合使用,以免破坏现有的同名表。 + +你无法在 SQLite 中创建一个没有任何字段的空表,因此在尝试 `CREATE` 语句之前,必须考虑表预期存储的数据类型。在此示例中,我将使用以下列创建一个名为 `member` 的表: + + * 唯一标识符 + * 人名 + * 记录创建的时间和日期 + +#### 唯一标识符 + +最好用唯一的编号来引用记录。幸运的是,SQLite 认识到这一点,会自动创建一个名为 `rowid` 的列来为你实现这一点。 + +无需 SQL 语句即可创建此字段。 + +#### 数据类型 + +在我的示例表中,我创建了一个 `name` 列来保存 `TEXT` 类型的数据。为了防止在没有指定字段数据的情况下创建记录,可以添加 `NOT NULL` 指令。 + +用 `name TEXT NOT NULL` 语句来创建它。 + +SQLite 中有五种数据类型(实际上是 _储存类别_): + + * `TEXT`:文本字符串 + * `INTEGER`:整数 + * `REAL`:浮点数(小数位数无限制) + * `BLOB`:二进制数据(例如,.jpeg 或 .webp 图像) + * `NULL`:空值 + +#### 日期和时间戳 + +SQLite 有一个方便的日期和时间戳功能。它本身不是数据类型,而是 SQLite 中的一个函数,它根据所需的格式生成字符串或整数。在此示例中,我将其保留为默认值。 + +创建此字段的 SQL 语句是:`datestamp DATETIME DEFAULT CURRENT_TIMESTAMP`。 + +### 创建表的语句 + +在 SQLite 中创建此示例表的完整 SQL: + +``` +sqlite> CREATE TABLE +...> IF NOT EXISTS +...> member (name TEXT NOT NULL, +...> datestamp DATETIME DEFAULT CURRENT_TIMESTAMP); +``` + +在此代码示例中,我在语句的分句后按了回车键,以使其更易于阅读。除非以分号(`;`)终止,否则 SQLite 不会运行你的 SQL 语句。 + +你可以使用 SQLite 命令 `.tables` 验证表是否已创建: + +``` +sqlite> .tables +member +``` + +### 查看表中的所有列 + +你可以使用 `PRAGMA` 语句验证表包含哪些列和行: + +``` +sqlite> PRAGMA table_info(member); +0|name|TEXT|1||0 +1|datestamp|DATETIME|0|CURRENT_TIMESTAMP|0 +``` + +### 数据输入 + +你可以使用 `INSERT` 语句将一些示例数据填充到表中: + +``` +> INSERT INTO member (name) VALUES ('Alice'); +> INSERT INTO
 member (name) VALUES ('Bob'); +> INSERT INTO member (name) VALUES ('Carol'); +> INSERT INTO member (name) VALUES ('David'); +``` + +查看表中的数据: + +``` +> SELECT * FROM member; +Alice|2020-12-15 22:39:00 +Bob|2020-12-15 22:39:02 +Carol|2020-12-15 22:39:05 +David|2020-12-15 22:39:07 +``` + +#### 添加多行数据 + +现在创建第二个表: + +``` +> CREATE TABLE IF NOT EXISTS linux ( +...> distro TEXT NOT NULL); +``` + +填充一些示例数据,这一次使用小巧的 `VALUES` 快捷方式,这样你就可以在一条命令中添加多行。关键字 `VALUES` 期望的是括号括起来的列表,多个列表之间用逗号分隔: + +``` +> INSERT INTO linux (distro) +...> VALUES ('Slackware'), ('RHEL'), +...> ('Fedora'),('Debian'); +``` + +### 修改表结构 + +你现在有两个表,但是到目前为止,两者之间没有任何关系。它们每个都包含独立的数据,但是你可能需要将第一个表的成员与第二个表中列出的特定项相关联。 + +为此,你可以为第一个表创建一个新列,该列对应于第二个表。由于两个表都设计有唯一标识符(这要归功于 SQLite 的自动创建),所以连接它们的最简单方法是将其中一个的 `rowid` 字段用作另一个的选择器。 + +在第一个表中创建一个新列,以存储第二个表中的值: + +``` +> ALTER TABLE member ADD os INT; +``` + +使用 `linux` 表中的唯一标识符作为 `member` 表中每一条记录的 `os` 字段的值。因为记录已经存在,所以你可以使用 `UPDATE` 语句而不是 `INSERT` 语句来更新数据。需要特别注意的是,你首先需要选中特定的一行,然后才能更新其中的某个字段。从句法上讲,这个顺序有点颠倒:更新在前,选择匹配在后: + +``` +> UPDATE member SET os=1 WHERE name='Alice'; +``` + +对 `member` 表中的其他行重复相同的过程。更新 `os` 字段,为了数据多样性,在四行记录上分配三种不同的发行版(其中一种出现两次)。 + +### 联接表 + +现在,这两个表相互关联,你可以使用 SQL 显示关联的数据。数据库中有多种 _联接方式_,但是一旦掌握了基础知识,就可以尝试所有的联接形式。这是一个基本联接,用于将 `member` 表的 `os` 字段中的值与 `linux` 表的 `rowid` 字段相关联: + +``` +> SELECT * FROM member INNER JOIN linux ON member.os=linux.rowid; +Alice|2020-12-15 22:39:00|1|Slackware +Bob|2020-12-15 22:39:02|3|Fedora +Carol|2020-12-15 22:39:05|3|Fedora +David|2020-12-15 22:39:07|4|Debian +``` + +`os` 和 `rowid` 字段形成了关联。 + +在一个图形应用程序中,你可以想象 `os` 字段是一个下拉选项菜单,其中的值是 `linux` 表中 `distro` 字段中的数据。将相关的数据集通过唯一的字段相关联,可以确保数据的一致性和有效性,并且借助 SQL,你可以在以后动态地关联它们。 + +### 了解更多 + +SQLite 是一个非常有用的自包含的、可移植的开源数据库。学习以交互方式使用它,是对其进行管理、将其用于 Web 应用程序,或通过编程语言库来使用它的重要的第一步。 + +如果你喜欢 SQLite,也可以尝试由同一位作者 Richard Hipp 博士开发的 [Fossil][10]。 + +在学习和使用 SQLite 时,有一些常用命令可能会有所帮助,所以请立即下载我们的 [SQLite3 备忘单][3]!
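正文提到,Python、Java 等语言都提供了 SQLite 的接口。作为补充,下面是一段示意性的 Python 脚本,用标准库的 `sqlite3` 模块重现上面教程中的建表、插入、`UPDATE` 和 `INNER JOIN` 流程。表名和示例数据沿用正文;为便于演示,这里假设使用内存数据库(`:memory:`),并且为了输出顺序稳定而额外加了 `ORDER BY`,这两点都不是正文的一部分:

```python
import sqlite3

# 使用内存数据库演示;改为 sqlite3.connect("mydatabase.db") 即可持久化到文件
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 与正文相同的两张表
cur.execute("""CREATE TABLE IF NOT EXISTS member (
                   name TEXT NOT NULL,
                   datestamp DATETIME DEFAULT CURRENT_TIMESTAMP)""")
cur.execute("CREATE TABLE IF NOT EXISTS linux (distro TEXT NOT NULL)")

# 用参数化语句插入示例数据,避免手动拼接 SQL
cur.executemany("INSERT INTO member (name) VALUES (?)",
                [("Alice",), ("Bob",), ("Carol",), ("David",)])
cur.executemany("INSERT INTO linux (distro) VALUES (?)",
                [("Slackware",), ("RHEL",), ("Fedora",), ("Debian",)])

# 添加 os 列,并用 linux 表的 rowid 作为它的取值
cur.execute("ALTER TABLE member ADD os INT")
for name, os_id in [("Alice", 1), ("Bob", 3), ("Carol", 3), ("David", 4)]:
    cur.execute("UPDATE member SET os=? WHERE name=?", (os_id, name))

# 内联接:member.os 对应 linux.rowid
rows = list(cur.execute(
    "SELECT member.name, linux.distro "
    "FROM member INNER JOIN linux ON member.os = linux.rowid "
    "ORDER BY member.rowid"))
for row in rows:
    print(row)

conn.close()
```

和交互式 shell 一样,这里的 `rowid` 也是 SQLite 自动创建的隐藏列,正如“唯一标识符”一节所述。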
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/sqlite3-cheat-sheet + +作者:[Klaatu][a] +选题:[lujun9972][b] +译者:[amwps290](https://github.com/amwps290) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/klaatu +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_cheat_sheet.png?itok=lYkNKieP "Cheat Sheet cover image" +[2]: https://www.redhat.com/sysadmin/yaml-beginners +[3]: https://opensource.com/downloads/sqlite-cheat-sheet +[4]: https://sqlite.org/copyright.html +[5]: https://directory.fsf.org/wiki/License:PublicDomain +[6]: https://www.sqlite.org/purchase/license? +[7]: https://www.sqlite.org/download.html +[8]: https://www.mysql.com/ +[9]: https://mariadb.org/ +[10]: https://opensource.com/article/20/11/fossil diff --git a/published/20210207 Why the success of open source depends on empathy.md b/published/20210207 Why the success of open source depends on empathy.md new file mode 100644 index 0000000000..6905d890a1 --- /dev/null +++ b/published/20210207 Why the success of open source depends on empathy.md @@ -0,0 +1,58 @@ +[#]: collector: (lujun9972) +[#]: translator: (scvoet) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13120-1.html) +[#]: subject: (Why the success of open source depends on empathy) +[#]: via: (https://opensource.com/article/21/2/open-source-empathy) +[#]: author: (Bronagh Sorota https://opensource.com/users/bsorota) + +为何开源的成功取决于同理心? 
+====== + +> 随着对同理心认识的提高和传播同理心的激励,开源生产力将得到提升,协作者将会聚拢,可以充分激发开源软件开发的活力。 + +![](https://img.linux.net.cn/data/attachment/album/202102/15/110606rc48qf05904m9n7p.jpg) + +开源开发的协调创新精神和社区精神改变了世界。Jim Whitehurst 在[《开放式组织》][2]中解释说,开源的成功源于“将人们视为社区的一份子,从交易思维转变为基于承诺基础的思维方式”。但是,开源开发模型的核心仍然存在障碍:它经常性地缺乏人类的同理心(empathy)。 + +同理心是理解或感受他人感受的能力。在开源社区中,面对面的人际互动和协作是很少的。任何经历过 GitHub 拉取请求(Pull request)或议题(Issue)的开发者,都曾收到过来自他们可能从未见过的人的评论,这些人往往身处地球的另一端,而他们的交流也可能同样遥远。现代开源开发就是建立在这种异步、事务性的沟通基础之上。因此,人们在社交媒体平台上所经历的同类型的网络欺凌和其他虐待行为,在开源社区中也不足为奇。 + +当然,并非所有开源交流都会事与愿违。许多人在工作中发展出了尊重并秉持着良好的行为标准。但是很多时候,人们的沟通也常常缺乏常识性的礼仪,他们将人们像机器而非人类一般对待。这种行为是激发开源创新模型全部潜力的障碍,因为它让许多潜在的贡献者望而却步,并扼杀了灵感。 + +### 恶意交流的历史 + +代码审查中存在的敌意言论对开源社区来说并不新鲜,它多年来一直被社区所容忍。开源教父莱纳斯·托瓦尔兹(Linus Torvalds)经常在代码不符合他的标准时[抨击][3] Linux 社区,并将贡献者赶走。埃隆大学计算机科学教授 Megan Squire 借助[机器学习][4]分析了托瓦尔兹的侮辱行为,发现它们在四年内的数量高达数千次。2018 年,莱纳斯因自己的不良行为而自我放逐,责成自己学习同理心,道歉并为 Linux 社区制定了行为准则。 + +2015 年,[Sage Sharp][5] 辞去了 FOSS 女性外展计划中的 Linux 内核协调员一职,因为这个社区虽然在技术上受人尊重,却缺乏对个人的尊重。 + +PR 审核中存在的贬低性评论对开发者会造成深远的影响。它导致开发者在提交 PR 时产生畏惧感,让他们对预期中的反馈感到恐惧。这吞噬了开发者对自己能力的信心。它逼迫工程师每次都只能追求完美,从而减缓了开发速度,这与许多社区采用的敏捷方法论背道而驰。 + +### 如何缩小开源中的同理心差距?
+ +通常情况下,冒犯的评论常是无意间的,而通过一些指导,作者则可以学会如何在不带负面情绪的情况下表达意见。GitHub 不会监控议题和 PR 的评论是否有滥用内容,相反,它提供了一些工具,使得社区能够对其内容进行审查。仓库的所有者可以删除评论和锁定对话,所有贡献者可以报告滥用和阻止用户。 + +制定社区行为准则可为所有级别的贡献者提供一个安全且包容的环境,并且能让所有级别的贡献者参与并定义降低协作者之间冲突的过程。 + +我们能够克服开源中存在的同理心问题。面对面的辩论比文字更有利于产生共鸣,所以尽可能选择视频通话。以同理心的方式分享反馈,树立榜样。如果你目睹了一个尖锐的评论,请做一个指导者而非旁观者。如果你是受害者,请大声说出来。在面试候选人时,评估同理心能力,并将同理心能力与绩效评估和奖励挂钩。界定并执行社区行为准则,并管理好你的社区。 + +随着对同理心认识的提高和传播同理心的激励,开源生产力将得到提升,协作者将会聚拢,可以充分激发开源软件开发的活力。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/open-source-empathy + +作者:[Bronagh Sorota][a] +选题:[lujun9972][b] +译者:[scvoet](https://github.com/scvoet) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bsorota +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/practicing-empathy.jpg?itok=-A7fj6NF (Practicing empathy) +[2]: https://www.redhat.com/en/explore/the-open-organization-book +[3]: https://arstechnica.com/information-technology/2013/07/linus-torvalds-defends-his-right-to-shame-linux-kernel-developers/ +[4]: http://flossdata.syr.edu/data/insults/hicssInsultsv2.pdf +[5]: https://en.wikipedia.org/wiki/Sage_Sharp diff --git a/published/20210208 3 open source tools that make Linux the ideal workstation.md b/published/20210208 3 open source tools that make Linux the ideal workstation.md new file mode 100644 index 0000000000..82a71b31f7 --- /dev/null +++ b/published/20210208 3 open source tools that make Linux the ideal workstation.md @@ -0,0 +1,76 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-13133-1.html) +[#]: subject: (3 open source tools that make Linux the ideal workstation) +[#]: via: (https://opensource.com/article/21/2/linux-workday) +[#]: author: (Seth Kenlon 
https://opensource.com/users/seth) + +让 Linux 成为理想的工作站的 3 个开源工具 +====== + +> Linux 不但拥有你认为所需的一切,还有更多可以让你高效工作的工具。 + +![](https://img.linux.net.cn/data/attachment/album/202102/19/134935qhe252ifbvbpnzxk.jpg) + +在 2021 年,有更多让人们喜欢 Linux 的理由。在这个系列中,我将分享 21 种使用 Linux 的不同理由。今天,我将与你分享为什么 Linux 是你工作的最佳选择。 + +每个人都希望在工作期间提高工作效率。如果你的日常工作通常涉及文档、演示文稿和电子表格,那么你可能已经习惯了特定的例行工作。问题在于,这个*惯常的例行工作*通常是由一两个特定的应用程序决定的,无论是某个办公套件还是桌面操作系统。当然,习惯并不意味着它是理想的,但是它往往会毫无疑义地持续存在,甚至影响到企业的运作架构。 + +### 更聪明地工作 + +如今,许多办公应用程序都在云端运行,因此如果你愿意的话,你可以在 Linux 上使用相同的方式。然而,由于许多典型的知名办公应用程序并不符合 Linux 上的文化预期,因此你可能会发现自己受到启发,想去探索其他的选择。正如任何渴望走出“舒适区”的人所知道的那样,这种微妙的打破可能会出奇的有用。很多时候,你不知道自己效率低下,因为你实际上并没有尝试过以不同的方式做事。强迫自己去探索其他方式,你永远不知道会发现什么。你甚至不必完全知道要寻找的内容。 + +### LibreOffice + +Linux(或任何其他平台)上显而易见的开源办公主力之一是 [LibreOffice][2]。它具有多个组件,包括文字处理器、演示软件、电子表格、关系型数据库界面、矢量绘图等。它可以从其他流行的办公应用程序中导入许多文档格式,因此从其他工具过渡到 LibreOffice 通常很容易。 + +然而,LibreOffice 不仅仅是一个出色的办公套件。LibreOffice 支持宏,所以机智的用户可以自动完成重复性任务。它还具有终端命令的功能,因此你可以在不启动 LibreOffice 界面的情况下执行许多任务。 + +想象一下,比如要打开 21 个文档,导航到**文件**菜单,到**导出**或**打印**菜单项,并将文件导出为 PDF 或 EPUB。这至少需要 84 次以上的点击,可能要花费一个小时的时间。相比之下,打开一个文档文件夹,并转换所有文件为 PDF 或 EPUB,只需要执行一个迅速的命令或菜单操作。转换将在后台运行,而你可以处理其他事情。只需要四分之一的时间,可能更少。 + +``` +$ libreoffice --headless --convert-to epub *.docx +``` + +这是一个小改进,而 Linux 工具集以及自定义环境和工作流程的便利性,所鼓励的正是这类改进。 + +### Abiword 和 Gnumeric + +有时,你并不需要一个大而全的办公套件。如果你喜欢简化你的办公室工作,那么使用一个轻量级和针对特定任务的应用程序可能更好。例如,我大部分时间都是用文本编辑器写文章,因为我知道在转换为 HTML 的过程中,所有的样式都会被丢弃。但有些时候,文字处理器是很有用的,无论是打开别人发给我的文档,还是因为我想用一种快速简单的方法来生成一些样式漂亮的文本。 + +[Abiword][3] 是一款简单的文字处理器,它基本支持流行的文档格式,并具备你所期望的文字处理器的所有基本功能。它并不意味着是一个完整的办公套件,这是它最大的特点。虽然没有太多的选择,但我们仍然处于信息过载的时代,这正是一个完整的办公套件或文字处理器有时会犯的错误。如果你想避免这种情况,那就用一些简单的东西来代替。 + +同样,[Gnumeric][4] 项目提供了一个简单的电子表格应用程序。Gnumeric 避免了任何严格意义上的电子表格所不需要的功能,所以你仍然可以获得强大的公式语法、大量的函数,以及样式和操作单元格的所有选项。我不怎么使用电子表格,所以我发现自己在极少数需要查看或处理分类账中的数据时,对 Gnumeric 相当满意。 + +### Pandoc + +借助专门的命令和文件处理程序,你还可以把办公工具进一步精简。`pandoc` 命令专门用于文件转换。它就像 `libreoffice --headless` 命令一样,只是要处理的文档格式数量是它的十倍。你甚至可以用它来生成演示文稿!
如果你的工作之一是从一个文档中提取源文本,并以多种格式交付它,那么 Pandoc 是必要的,所以你应该[下载我们的攻略][5]看看。 + +广义上讲,Pandoc 代表的是一种完全不同的工作方式。它让你脱离了办公应用的束缚。它将你从试图将你的想法写成文字,并同时决定这些文字应该使用什么字体的工作中分离出来。在纯文本中工作,然后转换为各种交付格式,让你可以使用任何你想要的应用程序,无论是移动设备上的记事本,还是你碰巧坐在电脑前的简单文本编辑器,或者是云端的文本编辑器。 + +### 寻找替代品 + +Linux 有很多意想不到的替代品。你可以从你正在做的事情中退后一步,分析你的工作流程,评估你所需的结果,并通过调查那些声称可以满足你需求的新应用程序来找到它们。 + +改变你所使用的工具、工作流程和日常工作可能会让你迷失方向,特别是当你不知道你要找的到底是什么的时候。但 Linux 的优势在于,你有机会重新评估你在多年的计算机使用过程中潜意识里形成的假设。如果你足够努力地寻找答案,你最终会意识到问题所在。通常,你最终会欣赏你学到的东西。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-workday + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop) +[2]: http://libreoffice.org +[3]: https://www.abisource.com +[4]: http://www.gnumeric.org +[5]: https://opensource.com/article/20/5/pandoc-cheat-sheet diff --git a/sources/talk/20160921 lawyer The MIT License, Line by Line.md b/sources/talk/20160921 lawyer The MIT License, Line by Line.md deleted file mode 100644 index 8d4424117a..0000000000 --- a/sources/talk/20160921 lawyer The MIT License, Line by Line.md +++ /dev/null @@ -1,296 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (bestony) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (lawyer The MIT License, Line by Line) -[#]: via: (https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html) -[#]: author: (Kyle E. Mitchell https://kemitchell.com/) - -lawyer The MIT License, Line by Line -====== - -### The MIT License, Line by Line - -[The MIT License][1] is the most popular open-source software license.
Here’s one read of it, line by line. - -#### Read the License - -If you’re involved in open-source software and haven’t taken the time to read the license from top to bottom—it’s only 171 words—you need to do so now. Especially if licenses aren’t your day-to-day. Make a mental note of anything that seems off or unclear, and keep trucking. I’ll repeat every word again, in chunks and in order, with context and commentary. But it’s important to have the whole in mind. - -> The MIT License (MIT) -> -> Copyright (c) -> -> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: -> -> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -> -> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the Software. 
- -The license is arranged in five paragraphs, but breaks down logically like this: - - * **Header** - * **License Title** : “The MIT License” - * **Copyright Notice** : “Copyright (c) …” - * **License Grant** : “Permission is hereby granted …” - * **Grant Scope** : “… to deal in the Software …” - * **Conditions** : “… subject to …” - * **Attribution and Notice** : “The above … shall be included …” - * **Warranty Disclaimer** : “The software is provided ‘as is’ …” - * **Limitation of Liability** : “In no event …” - - - -Here we go: - -#### Header - -##### License Title - -> The MIT License (MIT) - -“The MIT License” is a not a single license, but a family of license forms derived from language prepared for releases from the Massachusetts Institute of Technology. It has seen a lot of changes over the years, both for the original projects that used it, and also as a model for other projects. The Fedora Project maintains a [kind of cabinet of MIT license curiosities][2], with insipid variations preserved in plain text like anatomical specimens in formaldehyde, tracing a wayward kind of evolution. - -Fortunately, the [Open Source Initiative][3] and [Software Package Data eXchange][4] groups have standardized a generic MIT-style license form as “The MIT License”. OSI in turn has adopted SPDX’ standardized [string identifiers][5] for common open-source licenses, with `MIT` pointing unambiguously to the standardized form “MIT License”. If you want MIT-style terms for a new project, use [the standardized form][1]. - -Even if you include “The MIT License” or “SPDX:MIT” in a `LICENSE` file, any responsible reviewer will still run a comparison of the text against the standard form, just to be sure. While various license forms calling themselves “MIT License” vary only in minor details, the looseness of what counts as an “MIT License” has tempted some authors into adding bothersome “customizations”. 
The canonical horrible, no good, very bad example of this is [the JSON license][6], an MIT-family license plus “The Software shall be used for Good, not Evil.”. This kind of thing might be “very Crockford”. It is definitely a pain in the ass. Maybe the joke was supposed to be on the lawyers. But they laughed all the way to the bank. - -Moral of the story: “MIT License” alone is ambiguous. Folks probably have a good idea what you mean by it, but you’re only going to save everyone—yourself included—time by copying the text of the standard MIT License form into your project. If you use metadata, like the `license` property in package manager metadata files, to designate the `MIT` license, make sure your `LICENSE` file and any header comments use the standard form text. All of this can be [automated][7]. - -##### Copyright Notice - -> Copyright (c) - -Until the 1976 Copyright Act, United States copyright law required specific actions, called “formalities”, to secure copyright in creative works. If you didn’t follow those formalities, your rights to sue others for unauthorized use of your work were limited, often completely lost. One of those formalities was “notice”: Putting marks on your work and otherwise making it known to the market that you were claiming copyright. The © is a standard symbol for marking copyrighted works, to give notice of copyright. The ASCII character set doesn’t have the © symbol, but `Copyright (c)` gets the same point across. - -The 1976 Copyright Act, which “implemented” many requirements of the international Berne Convention, eliminated formalities for securing copyright. At least in the United States, copyright holders still need to register their copyrighted works before suing for infringement, with potentially higher damages if they register before infringement begins. In practice, however, many register copyright right before bringing suit against someone in particular. 
You don’t lose your copyright just by failing to put notices on it, registering, sending a copy to the Library of Congress, and so on. - -Even if copyright notices aren’t as absolutely necessary as they used to be, they are still plenty useful. Stating the year a work was authored and who the copyright belonged to give some sense of when copyright in the work might expire, bringing the work into the public domain. The identity of the author or authors is also useful: United States law calculates copyright terms differently for individual and “corporate” authors. Especially in business use, it may also behoove a company to think twice about using software from a known competitor, even if the license terms give very generous permission. If you’re hoping others will see your work and want to license it from you, copyright notices serve nicely for attribution. - -As for “copyright holder”: Not all standard form licenses have a space to write this out. More recent license forms, like [Apache 2.0][8] and [GPL 3.0][9], publish `LICENSE` texts that are meant to be copied verbatim, with header comments and separate files elsewhere to indicate who owns copyright and is giving the license. Those approaches neatly discourage changes to the “standard” texts, accidental or intentional. They also make automated license identification more reliable. - -The MIT License descends from language written for releases of code by institutions. For institutional releases, there was just one clear “copyright holder”, the institution releasing the code. Other institutions cribbed these licenses, replacing “MIT” with their own names, leading eventually to the generic forms we have now. 
This process repeated for other short-form institutional licenses of the era, notably the [original four-clause BSD License][10] for the University of California, Berkeley, now used in [three-clause][11] and [two-clause][12] variants, as well as [The ISC License][13] for the Internet Systems Consortium, an MIT variant. - -In each case, the institution listed itself as the copyright holder in reliance on rules of copyright ownership, called “[works made for hire][14]” rules, that give employers and clients ownership of copyright in some work their employees and contractors do on their behalf. These rules don’t usually apply to distributed collaborators submitting code voluntarily. This poses a problem for project-steward foundations, like the Apache Foundation and Eclipse Foundation, that accept contributions from a more diverse group of contributors. The usual foundation approach thus far has been to use a house license that states a single copyright holder—[Apache 2.0][8] and [EPL 1.0][15]—backed up by contributor license agreements—[Apache CLAs][16] and [Eclipse CLAs][17]—to collect rights from contributors. Collecting copyright ownership in one place is even more important under “copyleft” licenses like the GPL, which rely on copyright owners to enforce license conditions to promote software-freedom values. - -These days, loads of projects without any kind of institutional or business steward use MIT-style license terms. SPDX and OSI have helped these use cases by standardizing forms of licenses like MIT and ISC that don’t refer to a specific entity or institutional copyright holder. Armed with those forms, the prevailing practice of project authors is to fill their own name in the copyright notice of the form very early on … and maybe bump the year here and there. At least under United States copyright law, the resulting copyright notice doesn’t give a full picture. - -The original owner of a piece of software retains ownership of their work. 
But while MIT-style license terms give others rights to build on and change the software, creating what the law calls “derivative works”, they don’t give the original author ownership of copyright in others’ contributions. Rather, each contributor has copyright in any [even marginally creative][18] work they make using the existing code as a starting point. - -Most of these projects also balk at the idea of taking contributor license agreements, to say nothing of signed copyright assignments. That’s both naive and understandable. Despite the assumption of some newer open-source developers that sending a pull request on GitHub “automatically” licenses the contribution for distribution on the terms of the project’s existing license, United States law doesn’t recognize any such rule. Strong copyright protection, not permissive licensing, is the default. - -Update: GitHub later changed its site-wide terms of service to include an attempt to flip this default, at least on GitHub.com. I’ve written up some thoughts on that development, not all of them positive, in [another post][19]. - -To fill the gap between legally effective, well-documented grants of rights in contributions and no paper trail at all, some projects have adopted the [Developer Certificate of Origin][20], a standard statement contributors allude to using `Signed-Off-By` metadata tags in their Git commits. The Developer Certificate of Origin was developed for Linux kernel development in the wake of the infamous SCO lawsuits, which alleged that chunks of Linux’ code derived from SCO-owned Unix source. As a means of creating a paper trail showing that each line of Linux came from a contributor, the Developer Certificate of Origin functions nicely. While the Developer Certificate of Origin isn’t a license, it does provide lots of good evidence that those submitting code expected the project to distribute their code, and for others to use it under the kernel’s existing license terms. 
The kernel also maintains a machine-readable `CREDITS` file listing contributors with name, affiliation, contribution area, and other metadata. I’ve done [some][21] [experiments][22] adapting that approach for projects that don’t use the kernel’s development flow. - -#### License Grant - -> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), - -The meat of The MIT License is, you guessed it, a license. In general terms, a license is permission that one person or legal entity—the “licensor”—gives another—the “licensee”—to do something the law would otherwise let them sue for. The MIT License is a promise not to sue. - -The law sometimes distinguishes licenses from promises to give licenses. If someone breaks a promise to give a license, you may be able to sue them for breaking their promise, but you may not end up with a license. “Hereby” is one of those hokey, archaic-sounding words lawyers just can’t get rid of. It’s used here to show that the license text itself gives the license, and not just a promise of a license. It’s a legal [IIFE][23]. - -While many licenses give permission to a specific, named licensee, The MIT License is a “public license”. Public licenses give everybody—the public at large—permission. This is one of the three great ideas in open-source licensing. The MIT License captures this idea by giving a license “to any person obtaining a copy of … the Software”. As we’ll see later, there is also a condition to receiving this license that ensures others will learn about their permission, too. - -The parenthetical with a capitalized term in quotation marks (a “Definition”), is the standard way to give terms specific meanings in American-style legal documents. Courts will reliably look back to the terms of the definition when they see a defined, capitalized term used elsewhere in the document. 
- -##### Grant Scope - -> to deal in the Software without restriction, - -From the licensee’s point of view, these are the seven most important words in The MIT License. The key legal concerns are getting sued for copyright infringement and getting sued for patent infringement. Neither copyright law nor patent law uses “to deal in” as a term of art; it has no specific meaning in court. As a result, any court deciding a dispute between a licensor and a licensee would ask what the parties meant and understood by this language. What the court will see is that the language is intentionally broad and open-ended. It gives licensees a strong argument against any claim by a licensor that they didn’t give permission for the licensee to do that specific thing with the software, even if the thought clearly didn’t occur to either side when the license was given. - -> including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, - -No piece of legal writing is perfect, “fully settled in meaning”, or unmistakably clear. Beware anyone who pretends otherwise. This is the least perfect part of The MIT License. There are three main issues: - -First, “including without limitation” is a legal antipattern. It crops up in any number of flavors: - - * “including, without limitation” - * “including, without limiting the generality of the foregoing” - * “including, but not limited to” - * many, many pointless variations - - - -All of these share a common purpose, and they all fail to achieve it reliably. Fundamentally, drafters who use them try to have their cake and eat it, too. In The MIT License, that means introducing specific examples of “dealing in the Software”—“use, copy, modify” and so on—without implying that licensee action has to be something like the examples given to count as “dealing in”. 
The trouble is that, if you end up needing a court to review and interpret the terms of a license, the court will see its job as finding out what those fighting meant by the language. If the court needs to decide what “deal in” means, it cannot “unsee” the examples, even if you tell it to. I’d argue that “deal in the Software without restriction” alone would be better for licensees. Also shorter. - -Second, the verbs given as examples of “deal in” are a hodgepodge. Some have specific meanings under copyright or patent law, others almost do or just plain don’t: - - * use appears in [United States Code title 35, section 271(a)][24], the patent law’s list of what patent owners can sue others for doing without permission. - - * copy appears in [United States Code title 17, section 106][25], the copyright law’s list of what copyright owners can sue others for doing without permission. - - * modify doesn’t appear in either copyright or patent statute. It is probably closest to “prepare derivative works” under the copyright statute, but may also implicate improving or otherwise derivative inventions. - - * merge doesn’t appear in either copyright or patent statute. “Merger” has a specific meaning in copyright, but that’s clearly not what’s intended here. Rather, a court would probably read “merge” according to its meaning in industry, as in “to merge code”. - - * publish doesn’t appear in either copyright or patent statute. Since “the Software” is what’s being published, it probably hews closest to “distribute” under the [copyright statute][25]. That statute also covers rights to perform and display works “publicly”, but those rights apply only to specific kinds of copyrighted work, like plays, sound recordings, and motion pictures. - - * distribute appears in the [copyright statute][25]. - - * sublicense is a general term of intellectual property law. 
The right to sublicense means the right to give others licenses of their own, to do some or all of what you have permission to do. The MIT License’s right to sublicense is actually somewhat unusual in open-source licenses generally. The norm is what Heather Meeker calls a “direct licensing” approach, where everyone who gets a copy of the software and its license terms gets a license direct from the owner. Anyone who might get a sublicense under the MIT License will probably end up with a copy of the license telling them they have a direct license, too. - - * sell copies of is a mongrel. It is close to “offer to sell” and “sell” in the [patent statute][24], but refers to “copies”, a copyright concept. On the copyright side, it seems close to “distribute”, but the [copyright statute][25] makes no mention of sales. - - * permit persons to whom the Software is furnished to do so seems redundant of “sublicense”. It’s also unnecessary to the extent folks who get copies also get a direct license. - - - - -Lastly, as a result of this mishmash of legal, industry, general-intellectual-property, and general-use terms, it isn’t clear whether The MIT License includes a patent license. The general language “deal in” and some of the example verbs, especially “use”, point toward a patent license, albeit a very unclear one. The fact that the license comes from the copyright holder, who may or may not have patent rights in inventions in the software, as well as most of the example verbs and the definition of “the Software” itself, all point strongly toward a copyright license. More recent permissive open-source licenses, like [Apache 2.0][8], address copyright, patent, and even trademark separately and specifically. - -##### Three License Conditions - -> subject to the following conditions: - -There’s always a catch! MIT has three! - -If you don’t follow The MIT License’s conditions, you don’t get the permission the license offers. 
So failing to do what the conditions say at least theoretically leaves you open to a lawsuit, probably a copyright lawsuit. - -Using the value of the software to the licensee to motivate compliance with conditions, even though the licensee paid nothing for the license, is the second great idea of open-source licensing. The last, not found in The MIT License, builds off license conditions: “Copyleft” licenses like the [GNU General Public License][9] use license conditions to control how those making changes can license and distribute their changed versions. - -##### Notice Condition - -> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - -If you give someone a copy of the software, you need to include the license text and any copyright notice. This serves a few critical purposes: - - 1. Gives others notice that they have permission for the software under the public license. This is a key part of the direct-licensing model, where each user gets a license direct from the copyright holder. - - 2. Makes known who’s behind the software, so they can be showered in praises, glory, and cold, hard cash donations. - - 3. Ensures the warranty disclaimer and limitation of liability (coming up next) follow the software around. Everyone who gets a copy should get a copy of those licensor protections, too. - - - - -There’s nothing to stop you charging for providing a copy, or even a copy in compiled form, without source code. But when you do, you can’t pretend that the MIT code is your own proprietary code, or provided under some other license. Those receiving get to know their rights under the “public license”. - -Frankly, compliance with this condition is breaking down. Nearly every open-source license has such an “attribution” condition. 
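Complying is mostly mechanical. As a rough sketch (assuming, purely for illustration, a vendored dependency tree where each package ships a `LICENSE` file; names here are made up), a notices document can be built with a short script:

```python
import os

def collect_notices(root):
    """Walk a dependency tree and concatenate every LICENSE file into
    one notices document. A sketch only: real trees also use names
    like COPYING or LICENSE.md, plus license fields in package
    metadata, which is what dedicated tooling handles."""
    notices = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.upper().startswith("LICENSE"):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    # Label each license text with where it came from.
                    header = os.path.relpath(path, root)
                    notices.append(f"--- {header} ---\n{f.read()}")
    return "\n\n".join(notices)
```

Nothing about the condition requires more than this kind of bookkeeping, which makes widespread non-compliance all the more striking.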
Makers of system and installed software often understand they’ll need to compile a notices file or “license information” screen, with copies of license texts for libraries and components, for each release of their own. The project-steward foundations have been instrumental in teaching those practices. But web developers, as a whole, haven’t got the memo. It can’t be explained away by a lack of tooling—there is plenty—or the highly modular nature of packages from npm and other repositories—which uniformly standardize metadata formats for license information. All the good JavaScript minifiers have command-line flags for preserving license header comments. Other tools will concatenate `LICENSE` files from package trees. There’s really no excuse. - -##### Warranty Disclaimer - -> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. - -Nearly every state in the United States has enacted a version of the Uniform Commercial Code, a model statute of laws governing commercial transactions. Article 2 of the UCC—“Division 2” in California—governs contracts for sales of goods, from used automobiles bought off the lot to large shipments of industrial chemicals to manufacturing plants. - -Some of the UCC’s rules about sales contracts are mandatory. These rules always apply, whether those buying and selling like them or not. Others are just “defaults”. Unless buyers and sellers opt out in writing, the UCC implies that they want the baseline rule found in the UCC’s text for their deal. Among the default rules are implied “warranties”, or promises by sellers to buyers about the quality and usability of the goods being sold. 
- -There is a big theoretical debate about whether public licenses like The MIT License are contracts—enforceable agreements between licensors and licensees—or just licenses, which go one way, but may come with strings attached, their conditions. There is less debate about whether software counts as “goods”, triggering the UCC’s rules. There is no debate among licensors on liability: They don’t want to get sued for lots of money if the software they give away for free breaks, causes problems, doesn’t work, or otherwise causes trouble. That’s exactly the opposite of what three default rules for “implied warranties” do: - - 1. The implied warranty of “merchantability” under [UCC section 2-314][26] is a promise that “the goods”—the Software—are of at least average quality, properly packaged and labeled, and fit for the ordinary purposes they are intended to serve. This warranty applies only if the one giving the software is a “merchant” with respect to the software, meaning they deal in software and hold themselves out as skilled in software. - - 2. The implied warranty of “fitness for a particular purpose” under [UCC section 2-315][27] kicks in when the seller knows the buyer is relying on them to provide goods for a particular purpose. The goods need to actually be “fit” for that purpose. - - 3. The implied warranty of “noninfringement” is not part of the UCC, but is a common feature of general contract law. This implied promise protects the buyer if it turns out the goods they received infringe somebody else’s intellectual property rights. That would be the case if the software under The MIT License didn’t actually belong to the one trying to license it, or if it fell under a patent owned by someone else. - - - - -[Section 2-316(3)][28] of the UCC requires language opting out of, or “excluding”, implied warranties of merchantability and fitness for a particular purpose to be conspicuous. 
“Conspicuous” in turn means written or formatted to call attention to itself, the opposite of microscopic fine print meant to slip past unwary consumers. State law may impose a similar attention-grabbing requirement for disclaimers of noninfringement. - -Lawyers have long suffered under the delusion that writing anything in `ALL-CAPS` meets the conspicuous requirement. That isn’t true. Courts have criticized the Bar for pretending as much, and most everyone agrees all-caps does more to discourage reading than compel it. All the same, most open-source-license forms set their warranty disclaimers in all-caps, in part because that’s the only obvious way to make it stand out in plain-text `LICENSE` files. I’d prefer to use asterisks or other ASCII art, but that ship sailed long, long ago. - -##### Limitation of Liability - -> In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the Software. - -The MIT License gives permission for software “free of charge”, but the law does not assume that folks receiving licenses free of charge give up their rights to sue when things go wrong and the licensor is to blame. “Limitations of liability”, often paired with “damages exclusions”, work a lot like licenses, as promises not to sue. But these are protections for the licensor against lawsuits by licensees. - -In general, courts read limitations of liability and damages exclusions warily, since they can shift an incredible amount of risk from one side to another. To protect the community’s vital interest in giving folks a way to redress wrongs done in court, they “strictly construe” language limiting liability, reading it against the one protected by it where possible. Limitations of liability have to be specific to stand up. 
Especially in “consumer” contracts and other situations where those giving up the right to sue lack sophistication or bargaining power, courts have sometimes refused to honor language that seemed buried out of sight. Partly for that reason, partly by sheer force of habit, lawyers tend to give limits of liability the all-caps treatment, too. - -Drilling down a bit, the “limitation of liability” part is a cap on the amount of money a licensee can sue for. In open-source licenses, that limit is always no money at all, $0, “not liable”. By contrast, in commercial licenses, it’s often a multiple of license fees paid in the last 12-month period, though it’s often negotiated. - -The “exclusion” part lists, specifically, kinds of legal claims—reasons to sue for damages—the licensor cannot use. Like many, many legal forms, The MIT License mentions actions “of contract”—for breaching a contract—and “of tort”. Tort rules are general rules against carelessly or maliciously harming others. If you run someone down on the road while texting, you have committed a tort. If your company sells faulty headphones that burn people’s ears off, your company has committed a tort. If a contract doesn’t specifically exclude tort claims, courts sometimes read exclusion language in a contract to prevent only contract claims. For good measure, The MIT License throws in “or otherwise”, just to catch the odd admiralty law or other, exotic kind of legal claim. - -The phrase “arising from, out of or in connection with” is a recurring tic symptomatic of the legal draftsman’s inherent, anxious insecurity. The point is that any lawsuit having anything to do with the software is covered by the limitation and exclusions. On the off chance something can “arise from”, but not “out of”, or “in connection with”, it feels better to have all three in the form, so pack ‘em in. 
Never mind that any court forced to split hairs in this part of the form will have to come up with different meanings for each, on the assumption that a professional drafter wouldn’t use different words in a row to mean the same thing. Never mind that in practice, where courts don’t feel good about a limitation that’s disfavored to begin with, they’ll be more than ready to read the scope trigger narrowly. But I digress. The same language appears in literally millions of contracts. - -#### Overall - -All these quibbles are a bit like spitting out gum on the way into church. The MIT License is a legal classic. The MIT License works. It is by no means a panacea for all software IP ills, in particular the software patent scourge, which it predates by decades. But MIT-style licenses have served admirably, fulfilling a narrow purpose—reversing troublesome default rules of copyright, sales, and contract law—with a minimal combination of discreet legal tools. In the greater context of computing, its longevity is astounding. The MIT License has outlasted and will outlast the vast majority of software licensed under it. We can only guess how many decades of faithful legal service it will have given when it finally loses favor. It’s been especially generous to those who couldn’t have afforded their own lawyer. - -We’ve seen how The MIT License we know today is a specific, standardized set of terms, bringing order at long last to a chaos of institution-specific, haphazard variations. - -We’ve seen how its approach to attribution and copyright notice informed intellectual property management practices for academic, standards, commercial, and foundation institutions. - -We’ve seen how The MIT License grants permission for software to all, for free, subject to conditions that protect licensors from warranties and liability. 
- -We’ve seen that despite some crusty verbiage and lawyerly affectation, one hundred and seventy-one little words can get a hell of a lot of legal work done, clearing a path for open-source software through a dense underbrush of intellectual property and contract. - -I’m so grateful for all who’ve taken the time to read this rather long post, to let me know they found it useful, and to help improve it. As always, I welcome your comments via [e-mail][29], [Twitter][30], and [GitHub][31]. - -A number of folks have asked where they can read more, or find run-downs of other licenses, like the GNU General Public License or the Apache 2.0 license. No matter what your particular continuing interest may be, I heartily recommend the following books: - - * Andrew M. St. Laurent’s [Understanding Open Source & Free Software Licensing][32], from O’Reilly. - -I start with this one because, while it’s somewhat dated, its approach is also closest to the line-by-line approach used above. O’Reilly has made it [available online][33]. - - * Heather Meeker’s [Open (Source) for Business][34] - -In my opinion, by far the best writing on the GNU General Public License and copyleft more generally. This book covers the history, the licenses, their development, as well as compatibility and compliance. It’s the book I lend to clients considering or dealing with the GPL. - - * Larry Rosen’s [Open Source Licensing][35], from Prentice Hall. - -A great first book, also available for free [online][36]. This is the best introduction to open-source licensing and related law for programmers starting from scratch. This one is also a bit dated in some specific details, but Larry’s taxonomy of licenses and succinct summary of open-source business models stand the test of time. - - - - -All of these were crucial to my own education as an open-source licensing lawyer. Their authors are professional heroes of mine. Have a read! 
— K.E.M - -I license this article under a [Creative Commons Attribution-ShareAlike 4.0 license][37]. - - --------------------------------------------------------------------------------- - -via: https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html - -作者:[Kyle E. Mitchell][a] -选题:[lujun9972][b] -译者:[bestony](https://github.com/bestony) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://kemitchell.com/ -[b]: https://github.com/lujun9972 -[1]: http://spdx.org/licenses/MIT -[2]: https://fedoraproject.org/wiki/Licensing:MIT?rd=Licensing/MIT -[3]: https://opensource.org -[4]: https://spdx.org -[5]: http://spdx.org/licenses/ -[6]: https://spdx.org/licenses/JSON -[7]: https://www.npmjs.com/package/licensor -[8]: https://www.apache.org/licenses/LICENSE-2.0 -[9]: https://www.gnu.org/licenses/gpl-3.0.en.html -[10]: http://spdx.org/licenses/BSD-4-Clause -[11]: https://spdx.org/licenses/BSD-3-Clause -[12]: https://spdx.org/licenses/BSD-2-Clause -[13]: http://www.isc.org/downloads/software-support-policy/isc-license/ -[14]: http://worksmadeforhire.com/ -[15]: https://www.eclipse.org/legal/epl-v10.html -[16]: https://www.apache.org/licenses/#clas -[17]: https://wiki.eclipse.org/ECA -[18]: https://en.wikipedia.org/wiki/Feist_Publications,_Inc.,_v._Rural_Telephone_Service_Co. 
-[19]: https://writing.kemitchell.com/2017/02/16/Against-Legislating-the-Nonobvious.html -[20]: http://developercertificate.org/ -[21]: https://github.com/berneout/berneout-pledge -[22]: https://github.com/berneout/authors-certificate -[23]: https://en.wikipedia.org/wiki/Immediately-invoked_function_expression -[24]: https://www.govinfo.gov/app/details/USCODE-2017-title35/USCODE-2017-title35-partIII-chap28-sec271 -[25]: https://www.govinfo.gov/app/details/USCODE-2017-title17/USCODE-2017-title17-chap1-sec106 -[26]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2314.&lawCode=COM -[27]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2315.&lawCode=COM -[28]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2316.&lawCode=COM -[29]: mailto:kyle@kemitchell.com -[30]: https://twitter.com/kemitchell -[31]: https://github.com/kemitchell/writing/tree/master/_posts/2016-09-21-MIT-License-Line-by-Line.md -[32]: https://lccn.loc.gov/2006281092 -[33]: http://www.oreilly.com/openbook/osfreesoft/book/ -[34]: https://www.amazon.com/dp/1511617772 -[35]: https://lccn.loc.gov/2004050558 -[36]: http://www.rosenlaw.com/oslbook.htm -[37]: https://creativecommons.org/licenses/by-sa/4.0/legalcode diff --git a/sources/talk/20170928 The Lineage of Man.md b/sources/talk/20170928 The Lineage of Man.md index de2cc7515e..b51d67f4f9 100644 --- a/sources/talk/20170928 The Lineage of Man.md +++ b/sources/talk/20170928 The Lineage of Man.md @@ -92,7 +92,7 @@ via: https://twobithistory.org/2017/09/28/the-lineage-of-man.html 作者:[Two-Bit History][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[bestony](https://github.com/bestony) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sources/talk/20190312 When the web grew up- A browser story.md b/sources/talk/20190312 When the web grew up- A browser story.md 
deleted file mode 100644 index 9f47402ac1..0000000000 --- a/sources/talk/20190312 When the web grew up- A browser story.md +++ /dev/null @@ -1,65 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (XYenChi ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (When the web grew up: A browser story) -[#]: via: (https://opensource.com/article/19/3/when-web-grew) -[#]: author: (Mike Bursell https://opensource.com/users/mikecamel) - -web 的诞生:浏览器的故事grew up: A browser story -====== -互联网诞生之处的个人故事。A personal story of when the internet came of age. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Internet_Sign.png?itok=5MFGKs14) - -最近,我Recently, I [分享了shared how][1] 获得英文文学和神学学位离开大学,在一个大家都还不知道 web 服务器是什么的地方,设法找到一份运行 web 服务器的工作。upon leaving university in 1994 with a degree in English literature and theology, I somehow managed to land a job running a web server in a world where people didn't really know what a web server was yet. 那“地方”,我不仅仅指的是我工作的组织,而是泛指所有地方。And by "in a world," I don't just mean within the organisation in which I worked, but the world in general. Web 那时当真是全新的——人们还正尝试理出头绪。The web was new—really new—and people were still trying to get their heads around it. - -That's not to suggest that the place where I was working—an academic publisher—particularly "got it" either. This was a world in which a large percentage of the people visiting their website were still running 28k8 modems. I remember my excitement in getting a 33k6 modem. At least we were past the days of asymmetric upload/download speeds,1 where 1200/300 seemed like an eminently sensible bandwidth description. This meant that the high-design, high-colour, high-resolution documents created by the print people (with whom I shared a floor) were completely impossible on the web. I wouldn't allow anything bigger than a 40k GIF on the front page of the website, and that was pushing it for many of our visitors. 
Anything larger than 60k or so would be explicitly linked as a standalone image from a thumbnail on the referring page. - -To say that the marketing department didn't like this was an understatement. Even worse was the question of layout. "Browsers decide how to lay out documents," I explained, time after time, "you can use headers or paragraphs, but how documents appear on the page isn't defined by the document, but by the renderer!" They wanted control. They wanted different coloured backgrounds. After a while, they got that. I went to what I believe was the first W3C meeting at which the idea of Cascading Style Sheets (CSS) was discussed. And argued vehemently against them. The suggestion that document writers should control layout was anathema.2 It took some while for CSS to be adopted, and in the meantime, those who cared about such issues adopted the security trainwreck that was Portable Document Format (PDF). - -How documents were rendered wasn't the only issue. Being a publisher of actual physical books, the whole point of having a web presence, as far as the marketing department was concerned, was to allow customers—or potential customers—to know not only what a book was about, but also how much it was going to cost them to buy. This, however, presented a problem. You see, the internet—in which I include the rapidly growing World Wide Web—was an open, free-for-all libertarian sort of place where nobody was interested in money; in fact, where talk of money was to be shunned and avoided. - -I took the mainstream "Netizen" view that there was no place for pricing information online. My boss—and, indeed, pretty much everybody else in the organisation—took a contrary view. They felt that customers should be able to see how much books would cost them. They also felt that my bank manager would like to see how much money was coming into my bank account on a monthly basis, which might be significantly reduced if I didn't come round to their view. 
- -Luckily, by the time I'd climbed down from my high horse and got over myself a bit—probably only a few weeks after I'd started digging my heels in—the web had changed, and there were other people putting pricing information up about their products. These newcomers were generally looked down upon by the old schoolers who'd been running web servers since the early days,3 but it was clear which way the wind was blowing. This didn't mean that the battle was won for our website, however. As an academic publisher, we shared an academic IP name ("ac.uk") with the University. The University was less than convinced that publishing pricing information was appropriate until some senior folks at the publisher pointed out that Princeton University Press was doing it, and wouldn't we look a bit silly if…? - -The fun didn't stop there, either. A few months into my tenure as webmaster ("webmaster@…"), we started to see a worrying trend, as did lots of other websites. Certain visitors were single-handedly bringing our webserver to its knees. These visitors were running a new web browser: Netscape. Netscape was badly behaved. Netscape was multi-threaded. - -Why was this an issue? Well, before Netscape, all web browsers had been single-threaded. They would open one connection at a time, so even if you had, say five GIFs on a page,4 they would request the HTML base file, parse that, then download the first GIF, complete that, then the second, complete that, and so on. In fact, they often did the GIFs in the wrong order, which made for very odd page loading, but still, that was the general idea. The rude people at Netscape decided that they could open multiple connections to the webserver at a time to request all the GIFs at the same time, for example! And why was this a problem? Well, the problem was that most webservers were single-threaded. They weren't designed to have multiple connections open at any one time. 
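Looking back, the gap the debate turned on is easy to sketch. Below is a rough, modern illustration (Python's standard `socketserver` module standing in for servers of the era, nothing like MacHTTP's actual code, and the 0.2-second "GIF" is invented) of why one multi-threaded browser could make a single-threaded server crawl:

```python
import socket
import socketserver
import threading
import time

class GifHandler(socketserver.BaseRequestHandler):
    """Pretend every request is a GIF that takes 0.2 s to serve."""
    def handle(self):
        self.request.recv(1024)
        time.sleep(0.2)
        self.request.sendall(b"HTTP/1.0 200 OK\r\n\r\nGIF89a")

def netscape_style_fetch(port, n=5):
    """Open n connections at once and time how long the 'page' takes."""
    def one():
        with socket.create_connection(("127.0.0.1", port)) as s:
            s.sendall(b"GET /img.gif HTTP/1.0\r\n\r\n")
            s.recv(1024)
    workers = [threading.Thread(target=one) for _ in range(n)]
    start = time.monotonic()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.monotonic() - start

def measure(server_cls):
    server = server_cls(("127.0.0.1", 0), GifHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    elapsed = netscape_style_fetch(server.server_address[1])
    server.shutdown()
    server.server_close()
    return elapsed

# One request at a time: five GIFs take roughly 5 x 0.2 s.
t_single = measure(socketserver.TCPServer)
# One thread per connection: all five GIFs overlap, roughly 0.2 s total.
t_threaded = measure(socketserver.ThreadingTCPServer)
```

The single-threaded server serves the queued connections one after another, so five simultaneous requests take five times as long as they do on the threaded server, which is exactly the load pattern our poor server was seeing.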
Certainly, the HTTP server that we ran (MacHTTP) was single-threaded. Even though we had paid for it (it was originally shareware), the version we had couldn't cope with multiple requests at a time. - -The debate raged across the internet. Who did these Netscape people think they were, changing how the world worked? How it was supposed to work? The world settled into different camps, and as with all technical arguments, heated words were exchanged on both sides. The problem was that not only was Netscape multi-threaded, it was also just better than the alternatives. Lots of web server code maintainers, MacHTTP author Chuck Shotton among them, sat down and did some serious coding to produce multi-threaded beta versions of their existing code. Everyone moved almost immediately to the beta versions, they got stable, and in the end, single-threaded browsers either adapted and became multi-threaded themselves, or just went the way of all outmoded products and died a quiet death.6 - -This, for me, is when the web really grew up. It wasn't prices on webpages nor designers being able to define what you'd see on a page,8 but rather when browsers became easier to use and when the network effect of thousands of viewers moving to many millions tipped the balance in favour of the consumer, not the producer. There were more steps in my journey—which I'll save for another time—but from around this point, my employers started looking at our monthly, then weekly, then daily logs, and realising that this was actually going to be something big and that they'd better start paying some real attention. - -1\. How did they come back, again? - -2\. It may not surprise you to discover that I'm still happiest at the command line. - -3\. About six months before. - -4\. Reckless, true, but it was beginning to happen.5 - -5\. Oh, and no—it was GIFs or BMP. JPEG was still a bright idea that hadn't yet taken off. - -6\. 
It's never actually quiet: there are always a few diehard enthusiasts who insist that their preferred solution is technically superior and bemoan the fact that the rest of the internet has gone to the devil.7 - -7\. I'm not one to talk: I still use Lynx from time to time. - -8\. Creating major and ongoing problems for those with different accessibility needs, I would point out. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/3/when-web-grew - -作者:[Mike Bursell][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mikecamel -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/article/18/11/how-web-was-won diff --git a/sources/talk/20190626 Where are all the IoT experts going to come from.md b/sources/talk/20190626 Where are all the IoT experts going to come from.md deleted file mode 100644 index 22a303d2f6..0000000000 --- a/sources/talk/20190626 Where are all the IoT experts going to come from.md +++ /dev/null @@ -1,78 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Where are all the IoT experts going to come from?) -[#]: via: (https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html) -[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) - -Where are all the IoT experts going to come from? -====== -The fast growth of the internet of things (IoT) is creating a need to train cross-functional experts who can combine traditional networking and infrastructure expertise with database and reporting skills. 
-![Kevin \(CC0\)][1] - -If the internet of things (IoT) is going to fulfill its enormous promise, it’s going to need legions of smart, skilled, _trained_ workers to make everything happen. And right now, it’s not entirely clear where those people are going to come from. - -That’s why I was interested in trading emails with Keith Flynn, senior director of product management, R&D at asset-optimization software company [AspenTech][2], who says that when dealing with the slew of new technologies that fall under the IoT umbrella, you need people who can understand how to configure the technology and interpret the data. Flynn sees a growing need for existing educational institutions to house IoT-specific programs, as well as an opportunity for new IoT-focused private colleges, offering a well-rounded curriculum. - -“In the future,” Flynn told me, “IoT projects will differ tremendously from the general data management and automation projects of today. … The future requires a more holistic set of skills and cross-trading capabilities so that we’re all speaking the same language.” - -**[ Also read:  [20 hot jobs ambitious IT pros should shoot for][3] ]** - -With the IoT growing 30% a year, Flynn added, rather than a few specific skills, “everything from traditional deployment skills, like networking and infrastructure, to database and reporting skills and, frankly, even basic data science, need to be understood together and used together.” - -### Calling all IoT consultants - -“The first big opportunity for IoT-educated people is in the consulting field,” Flynn predicted. “As consulting companies adapt or die to the industry trends … having IoT-trained people on staff will help position them for IoT projects and make a claim in the new line of business: IoT consulting.” - -The problem is especially acute for startups and smaller companies. 
“The bigger the organization, the more likely they have a means to hire different people across different lines of skillsets,” Flynn said. “But for smaller organizations and smaller IoT projects, you need someone who can do both.” - -Both? Or _everything?_ The IoT “requires a combination of all knowledge and skillsets,” Flynn said, noting that “many of the skills aren’t new, they’ve just never been grouped together or taught together before.” - -**[ [Looking to upgrade your career in tech? This comprehensive online course teaches you how.][4] ]** - -### The IoT expert of the future - -True IoT expertise starts with foundational instrumentation and electrical skills, Flynn said, which can help workers implement new wireless transmitters and boost technology for better battery life and power consumption. - -“IT skills, like networking, IP addressing, subnet masks, cellular and satellite are also pivotal IoT needs,” Flynn said. He also sees a need for database management skills and cloud management and security expertise, “especially as things like advanced process control (APC) and sending sensor data directly to databases and data lakes become the norm.” - -### Where will IoT experts come from? - -Flynn said standardized formal education courses would be the best way to make sure that graduates or certificate holders have the right set of skills. He even laid out a sample curriculum: “Start in chronological order with the basics like E&I (Electrical & Instrumentation) and measurement. Then teach networking, and then database administration and cloud courses should follow that. This degree could even be looped into an existing engineering course, and it would probably take two years … to complete the IoT component.” - -While corporate training could also play a role, “that’s easier said than done,” Flynn warned. 
“Those trainings will need to be organization-specific efforts and pushes.” - -Of course, there are already [plenty of online IoT training courses and certificate programs][5]. But, ultimately, the responsibility lies with the workers themselves. - -“Upskilling is incredibly important in this world as tech continues to transform industries,” Flynn said. “If that upskilling push doesn’t come from your employer, then online courses and certifications would be an excellent way to do that for yourself. We just need those courses to be created. ... I could even see organizations partnering with higher-education institutions that offer these courses to give their employees better access to it. Of course, the challenge with an IoT program is that it will need to constantly evolve to keep up with new advancements in tech.” - -**[ For more on IoT, see [tips for securing IoT on your network][6], our list of [the most powerful internet of things companies][7] and learn about the [industrial internet of things][8]. | Get regularly scheduled insights by [signing up for Network World newsletters][9]. ]** - -Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. 
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html - -作者:[Fredric Paul][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Fredric-Paul/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2018/07/programmer_certification-skills_code_devops_glasses_student_by-kevin-unsplash-100764315-large.jpg -[2]: https://www.aspentech.com/ -[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html -[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career -[5]: https://www.google.com/search?client=firefox-b-1-d&q=iot+training -[6]: https://www.networkworld.com/article/3254185/internet-of-things/tips-for-securing-iot-on-your-network.html#nww-fsb -[7]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb -[8]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb -[9]: https://www.networkworld.com/newsletters/signup.html#nww-fsb -[10]: https://www.facebook.com/NetworkWorld/ -[11]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20191010 The biggest risk to uptime- Your staff.md b/sources/talk/20191010 The biggest risk to uptime- Your staff.md deleted file mode 100644 index e420e50c29..0000000000 --- a/sources/talk/20191010 The biggest risk to uptime- Your staff.md +++ /dev/null @@ -1,65 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (sthwhl) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: 
(The biggest risk to uptime? Your staff) -[#]: via: (https://www.networkworld.com/article/3444762/the-biggest-risk-to-uptime-your-staff.html) -[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) - -The biggest risk to uptime? Your staff -====== -Human error is the chief cause of downtime, a new study finds. Imagine that. - -There was an old joke: "To err is human, but to really foul up you need a computer." Now it seems the reverse is true. The reliability of [data center][1] equipment is vastly improved but the humans running them have not kept up and it's a threat to uptime. - -The Uptime Institute has surveyed thousands of IT professionals throughout the year on outages and said the vast majority of data center failures are caused by human error, from 70 percent to 75 percent. - -And some of them are severe. It found more than 30 percent of IT service and data center operators experienced downtime that they called a “severe degradation of service” over the last year, with 10 percent of the 2019 respondents reporting that their most recent incident cost more than $1 million. - -In Uptime's April 2019 survey, 60 percent of respondents believed that their most recent significant downtime incident could have been prevented with better management/processes or configuration. For outages that cost greater than $1 million, this figure jumped to 74 percent. - -However, the end fault is not necessarily with the staff, Uptime argues, but with management that has failed them.
- -"Perhaps there is simply a limit to what can be achieved in an industry that still relies heavily on people to perform many of the most basic and critical tasks and thus is subject to human error, which can never be completely eliminated," wrote Kevin Heslin, chief editor of the Uptime Institute Journal, in a [blog post][4]. - -"However, a quick survey of the issues suggests that management failure — not human error — is the main reason that outages persist. By under-investing in training, failing to enforce policies, allowing procedures to grow outdated, and underestimating the importance of qualified staff, management sets the stage for a cascade of circumstances that leads to downtime," Heslin went on to say. - -Uptime noted that the complexity of a company’s infrastructure, especially its distributed nature, can increase the risk that simple errors will cascade into a service outage, and it said companies need to be aware of the greater risk that comes with greater complexity. - -On the staffing side, it cautioned against expanding critical IT capacity faster than the company can attract and apply the resources to manage that infrastructure, and it urged companies to watch for staffing and skills shortages before they start to impair mission-critical operations.
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3444762/the-biggest-risk-to-uptime-your-staff.html - -作者:[Andy Patrizio][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Andy-Patrizio/ -[b]: https://github.com/lujun9972 -[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html -[2]: https://www.networkworld.com/newsletters/signup.html -[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage) -[4]: https://journal.uptimeinstitute.com/how-to-avoid-outages-try-harder/ -[5]: https://www.facebook.com/NetworkWorld/ -[6]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20210125 7 ways open source was essential to business in 2020.md b/sources/talk/20210125 7 ways open source was essential to business in 2020.md new file mode 100644 index 0000000000..57dea0448a --- /dev/null +++ b/sources/talk/20210125 7 ways open source was essential to business in 2020.md @@ -0,0 +1,72 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (7 ways open source was essential to business in 2020) +[#]: via: (https://opensource.com/article/21/1/open-source-business) +[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) + +7 ways open source was essential to business in 2020 +====== +Moving work online created challenges and opportunities for innovation +in 2020. 
+![A chair in a field.][1] + +The COVID-19 pandemic created many new challenges for businesses in 2020 as they rapidly moved non-essential workers to remote operations. However, it also created tremendous opportunities for innovation as people searched for effective ways to work and collaborate virtually. + +Opensource.com responded to the need by publishing a variety of articles in 2020 on working better with open source. Since it appears working remotely is here to stay for the foreseeable future, make sure you're doing everything you can to adapt by reading the top seven articles about open source business from 2020. + +### Open source live streaming with Open Broadcaster Software + +If you want to start reaching your customers with live streaming, Seth Kenlon's article on [open source live streaming][2] explains how to use Open Broadcaster Software (OBS) to connect to various streaming services to share your skills or gameplay. He outlines OBS and offers a detailed walk-through of its installation and use, making your start in streaming quick and easy. Seth also explains how to connect your devices to a streaming server and shares a powerful message about the value of teaching others and broadcasting words of encouragement. + +### Manage knowledge with BlueSpice, an open source alternative to Confluence + +Most enterprises have a wiki for sharing and managing knowledge across their teams. Instead of risking vendor lock-in with proprietary software, Martin Loschwitz recommends [considering BlueSpice][3], an open source alternative to Confluence. He covers the structural differences between BlueSpice and Confluence, how to use the optional BlueSpice Farm to connect multiple wikis, and BlueSpice's amazing search capabilities that use Elasticsearch to enable narrower searches. Martin also gets into BlueSpice's compliance and security features, editor functions, extensions, and design features that match your wiki to your company's branding. 
+ +### Choosing open source as a marketing strategy + +Lucas Galvanni and Nathalie Risbakk discuss how to use [open source in your marketing strategy][4]. They use the example of Brewdog, which opened its beer recipes to the public, a strategy that paid off by making the brand extremely popular with the homebrewing community. They also cover developing marketing strategies based on customers' needs and desires and explain how moving their company to an open source model helped it engage with the community. Finally, they get into the challenges, risks, and rewards of open source with some final notes about building success over time. + +### 7 Ways NOT to manage your remote team + +Matt Shealy says [remote teams are susceptible to miscommunication][5], but rethinking how things have always been done can help avoid disaster. His tips include continuous training on technical and emotional topics, creating standards around communications in remote chat systems, and keeping work on schedule. Matt also recommends keeping your eye on the big picture with goals and milestones, avoiding micromanagement, and promoting diversity to leverage different viewpoints and help teams shine. He also describes the benefits and barriers of teams working across different time zones and recommends keeping your eye on costs to find a middle ground to keep trust between IT and finance teams. + +### 5 humans review 5 open source video chat tools + +During the lockdowns in summer 2020, Opensource.com writers Matt Broberg, Alan Formy-Duval, Chris Hermansen, Seth Kenlon, and I joined forces to [review open source video chat tools][6]. We looked at the tools' security risks, capabilities, and performance, considering our various locations and internet quality, to discover what's out there in open source video-conferencing solutions. This review may give you new ideas on tools to improve your video chats. + +### Love or hate chat? 
4 best practices for remote teams + +To help the many teams working remotely, Jen Wike Huger offers [best practices for using chat][7] in your day-to-day life. Before getting into her tips, Jen recommends asking team members about their comfort level using chat to keep in touch during the workday. Next, she offers best practices, including creating rooms and threads, setting expectations around when people are expected to respond (or not) in chat, and communicating clearly and kindly. + +### The FSF reveals the tools they use for chat, video, and more + +The Free Software Foundation's Greg Farough shares the [software the organization uses for remote communications][8]. He explains that self-reliance on some of the tools (by self-hosting or having a friend or colleague host them) can be difficult but beneficial. He concludes by saying, "This is just a small selection of the huge amount of free software out there, all ready to be used, shared, and improved by the community," and asks the community to share knowledge "to help people find ways of communicating that put user freedom as a priority." + +### 2020: The year open source remote business became essential + +These seven articles offer many suggestions for teams working remotely and which open source tools will help them do it better. In 2020, open source software proved its value for expanding and improving work across great distances. And these tools will remain key business enablers in 2021, as many of us continue working from home. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/open-source-business + +作者:[Jessica Cherry][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cherrybomb +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_WorkInPublic_4618517_1110_CS_A.png?itok=RwVrWArk (A chair in a field.) +[2]: https://opensource.com/article/20/4/open-source-live-stream +[3]: https://opensource.com/article/20/9/bluespice +[4]: https://opensource.com/article/20/7/open-source-marketing +[5]: https://opensource.com/article/20/1/ways-not-manage-remote-team +[6]: https://opensource.com/article/20/5/open-source-video-conferencing +[7]: https://opensource.com/article/20/4/chat-tools-best-practices +[8]: https://opensource.com/article/20/5/free-software-communication diff --git a/sources/talk/20210207 The Real Novelty of the ARPANET.md b/sources/talk/20210207 The Real Novelty of the ARPANET.md new file mode 100644 index 0000000000..5e60919871 --- /dev/null +++ b/sources/talk/20210207 The Real Novelty of the ARPANET.md @@ -0,0 +1,263 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The Real Novelty of the ARPANET) +[#]: via: (https://twobithistory.org/2021/02/07/arpanet.html) +[#]: author: (Two-Bit History https://twobithistory.org) + +The Real Novelty of the ARPANET +====== + +If you run an image search for the word “ARPANET,” you will find lots of maps showing how the [government research network][1] expanded steadily across the country throughout the late ’60s and early ’70s. I’m guessing that most people reading or hearing about the ARPANET for the first time encounter one of these maps. 
+ +Obviously, the maps are interesting—it’s hard to believe that there were once so few networked computers that their locations could all be conveyed with what is really pretty lo-fi cartography. (We’re talking 1960s overhead projector diagrams here. You know the vibe.) But the problem with the maps, drawn as they are with bold lines stretching across the continent, is that they reinforce the idea that the ARPANET’s paramount achievement was connecting computers across the vast distances of the United States for the first time. + +Today, the internet is a lifeline that keeps us tethered to each other even as an airborne virus has us all locked up indoors. So it’s easy to imagine that, if the ARPANET was the first draft of the internet, then surely the world that existed before it was entirely disconnected, since that’s where we’d be without the internet today, right? The ARPANET must have been a big deal because it connected people via computers when that hadn’t before been possible. + +That view doesn’t get the history quite right. It also undersells what made the ARPANET such a breakthrough. + +### The Debut + +The Washington Hilton stands near the top of a small rise about a mile and a half northeast of the National Mall. Its two white-painted modern facades sweep out in broad semicircles like the wings of a bird. The New York Times, reporting on the hotel’s completion in 1965, remarked that the building looks “like a sea gull perched on a hilltop nest.”[1][2] + +The hotel hides its most famous feature below ground. Underneath the driveway roundabout is an enormous ovoid event space known as the International Ballroom, which was for many years the largest pillar-less ballroom in DC. In 1967, the Doors played a concert there. In 1968, Jimi Hendrix also played a concert there. 
In 1972, a somewhat more sedate act took over the ballroom to put on the inaugural International Conference on Computer Communication, where a promising research project known as the ARPANET was demonstrated publicly for the first time. + +The 1972 ICCC, which took place from October 24th to 26th, was attended by about 800 people.[2][3] It brought together all of the leading researchers in the nascent field of computer networking. According to internet pioneer Bob Kahn, “if somebody had dropped a bomb on the Washington Hilton, it would have destroyed almost all of the networking community in the US at that point.”[3][4] + +Not all of the attendees were computer scientists, however. An advertisement for the conference claimed it would be “user-focused” and geared toward “lawyers, medical men, economists, and government men as well as engineers and communicators.”[4][5] Some of the conference’s sessions were highly technical, such as the session titled “Data Network Design Problems I” and its sequel session, “Data Network Design Problems II.” But most of the sessions were, as promised, focused on the potential social and economic impacts of computer networking. One session, eerily prescient today, sought to foster a discussion about how the legal system could act proactively “to safeguard the right of privacy in the computer data bank.”[5][6] + +The ARPANET demonstration was intended as a side attraction of sorts for the attendees. Between sessions, which were held either in the International Ballroom or elsewhere on the lower level of the hotel, attendees were free to wander into the Georgetown Ballroom (a smaller ballroom/conference room down the hall from the big one),[6][7] where there were 40 terminals from a variety of manufacturers set up to access the ARPANET.[7][8] These terminals were dumb terminals—they only handled input and output and could do no computation on their own.
(In fact, in 1972, it’s likely that all of these terminals were hardcopy terminals, i.e. teletype machines.) The terminals were all hooked up to a computer known as a Terminal Interface Message Processor or TIP, which sat on a raised platform in the middle of the room. The TIP was a kind of archaic router specially designed to connect dumb terminals to the ARPANET. Using the terminals and the TIP, the ICCC attendees could experiment with logging on and accessing some of the computers at the 29 host sites then comprising the ARPANET.[8][9] + +To exhibit the network’s capabilities, researchers at the host sites across the country had collaborated to prepare 19 simple “scenarios” for users to experiment with. These scenarios were compiled into [a booklet][10] that was handed to conference attendees as they tentatively approached the maze of wiring and terminals.[9][11] The scenarios were meant to prove that the new technology worked but also that it was useful, because so far the ARPANET was “a highway system without cars,” and its Pentagon funders hoped that a public demonstration would excite more interest in the network.[10][12] + +The scenarios thus showed off a diverse selection of the software that could be accessed over the ARPANET: There were programming language interpreters, one for a Lisp-based language at MIT and another for a numerical computing environment called Speakeasy hosted at UCLA; there were games, including a chess program and an implementation of Conway’s Game of Life; and—perhaps most popular among the conference attendees—there were several AI chat programs, including the famous ELIZA chat program developed at MIT by Joseph Weizenbaum. + +The researchers who had prepared the scenarios were careful to list each command that users were expected to enter at their terminals. This was especially important because the sequence of commands used to connect to any given ARPANET host could vary depending on the host in question. 
To experiment with the AI chess program hosted on the MIT Artificial Intelligence Laboratory’s PDP-10 minicomputer, for instance, conference attendees were instructed to enter the following: + +_`[LF]`, `[SP]`, and `[CR]` below stand for the line feed, space, and carriage return keys respectively. I’ve explained each command after `//`, but this syntax was not used for the annotations in the original._ + +``` +@r [LF] // Reset the TIP +@e [SP] r [LF] // "Echo remote" setting, host echoes characters rather than TIP +@L [SP] 134 [LF] // Connect to host number 134 +:login [SP] iccXXX [CR] // Login to the MIT AI Lab's system, where "XXX" should be user's initials +:chess [CR] // Start chess program +``` + +If conference attendees were successfully able to enter those commands, their reward was the opportunity to play around with some of the most cutting-edge chess software available at the time, where the layout of the board was represented like this: + +``` +BR BN BB BQ BK BB BN BR +BP BP BP BP ** BP BP BP +-- ** -- ** -- ** -- ** +** -- ** -- BP -- ** -- +-- ** -- ** WP ** -- ** +** -- ** -- ** -- ** -- +WP WP WP WP -- WP WP WP +WR WN WB WQ WK WB WN WR +``` + +In contrast, to connect to UCLA’s IBM System/360 and run the Speakeasy numerical computing environment, conference attendees had to enter the following: + +``` +@r [LF] // Reset the TIP +@t [SP] o [SP] L [LF] // "Transmit on line feed" setting +@i [SP] L [LF] // "Insert line feed" setting, i.e. send line feed with each carriage return +@L [SP] 65 [LF] // Connect to host number 65 +tso // Connect to IBM Time-Sharing Option system +logon [SP] icX [CR] // Log in with username, where "X" should be a freely chosen digit +iccc [CR] // This is the password (so secure!) +speakez [CR] // Start Speakeasy +``` + +Successfully running that gauntlet gave attendees the power to multiply and transpose and do other operations on matrices as quickly as they could input them at their terminal: + +``` +:+! 
a=m*transpose(m);a [CR] +:+! eigenvals(a) [CR] +``` + +Many of the attendees were impressed by the demonstration, but not for the reasons that we, from our present-day vantage point, might assume. The key piece of context hard to keep in mind today is that, in 1972, being able to use a computer remotely, even from a different city, was not new. Teletype devices had been used to talk to distant computers for decades already. Almost a full five years before the ICCC, Bill Gates was in a Seattle high school using a teletype to run his first BASIC programs on a General Electric computer housed elsewhere in the city. Merely logging in to a host computer and running a few commands or playing a text-based game was routine. The software on display here was pretty neat, but the two scenarios I’ve told you about so far could ostensibly have been experienced without going over the ARPANET. + +Of course, something new was happening under the hood. The lawyers, policy-makers, and economists at the ICCC might have been enamored with the clever chess program and the chat bots, but the networking experts would have been more interested in two other scenarios that did a better job of demonstrating what the ARPANET project had achieved. + +The first of these scenarios involved a program called `NETWRK` running on MIT’s ITS operating system. The `NETWRK` command was the entrypoint for several subcommands that could report various aspects of the ARPANET’s operating status. The `SURVEY` subcommand reported which hosts on the network were functioning and available (they all fit on a single list), while the `SUMMARY.OF.SURVEY` subcommand aggregated the results of past `SURVEY` runs to report an “up percentage” for each host as well as how long, on average, it took for each host to respond to messages. 
The output of the `SUMMARY.OF.SURVEY` subcommand was a table that looked like this: + +``` +--HOST-- -#- -%-UP- -RESP- +UCLA-NMC 001 097% 00.80 +SRI-ARC 002 068% 01.23 +UCSB-75 003 059% 00.63 +... +``` + +The host number field, as you can see, has room for no more than three digits (ha!). Other `NETWRK` subcommands allowed users to look at summary of survey results over a longer historical period or to examine the log of survey results for a single host. + +The second of these scenarios featured a piece of software called the SRI-ARC Online System being developed at Stanford. This was a fancy piece of software with lots of functionality (it was the software system that Douglas Engelbart demoed in the “Mother of All Demos”), but one of the many things it could do was make use of what was essentially a file hosting service run on the host at UC Santa Barbara. From a terminal at the Washington Hilton, conference attendees could copy a file created at Stanford onto the host at UCSB simply by running a `copy` command and answering a few of the computer’s questions: + +_`[ESC]`, `[SP]`, and `[CR]` below stand for the escape, space, and carriage return keys respectively. The words in parentheses are prompts printed by the computer. The escape key is used to autocomplete the filename on the third line. The file being copied here is called `sample.txt;1`, where the trailing one indicates the file’s version number and `` indicates the directory. This was a convention for filenames used by the TENEX operating system._[11][13] + +``` +@copy +(TO/FROM UCSB) to +(FILE) sample [ESC] .TXT;1 [CR] +(CREATE/REPLACE) create +``` + +These two scenarios might not look all that different from the first two, but they were remarkable. 
They were remarkable because they made it clear that, on the ARPANET, humans could talk to computers but computers could also talk to _each other._ The `SURVEY` results collected at MIT weren’t collected by a human regularly logging in to each machine to check if it was up—they were collected by a program that knew how to talk to the other machines on the network. Likewise, the file transfer from Stanford to UCSB didn’t involve any humans sitting at terminals at either Stanford or UCSB—the user at a terminal in Washington DC was able to get the two computers to talk to each other merely by invoking a piece of software. Even more, it didn’t matter which of the 40 terminals in the Ballroom you were sitting at, because you could view the MIT network monitoring statistics or store files at UCSB using any of the terminals with almost the same sequence of commands. + +This is what was totally new about the ARPANET. The ICCC demonstration didn’t just involve a human communicating with a distant computer. It wasn’t just a demonstration of remote I/O. It was a demonstration of software remotely communicating with other software, something nobody had seen before. + +To really appreciate why it was this aspect of the ARPANET project that was important and not the wires-across-the-country, physical connection thing that the host maps suggest (the wires were leased phone lines anyhow and were already there!), consider that, before the ARPANET project began in 1966, the ARPA offices in the Pentagon had a terminal room. Inside it were three terminals. Each connected to a different computer; one computer was at MIT, one was at UC Berkeley, and another was in Santa Monica.[12][14] It was convenient for the ARPA staff that they could use these three computers even from Washington DC.
But what was inconvenient for them was that they had to buy and maintain terminals from three different manufacturers, remember three different login procedures, and familiarize themselves with three different computing environments in order to use the computers. The terminals might have been right next to each other, but they were merely extensions of the host computing systems on the other end of the wire and operated as differently as the computers did. Communicating with a distant computer was possible before the ARPANET; the problem was that the heterogeneity of computing systems limited how sophisticated the communication could be. + +### Come Together, Right Now + +So what I’m trying to drive home here is that there is an important distinction between statement A, “the ARPANET connected people in different locations via computers for the first time,” and statement B, “the ARPANET connected computer systems to each other for the first time.” That might seem like splitting hairs, but statement A elides some illuminating history in a way that statement B does not. + +To begin with, the historian Joy Lisi Rankin has shown that people were socializing in cyberspace well before the ARPANET came along. In _A People’s History of Computing in the United States_, she describes several different digital communities that existed across the country on time-sharing networks prior to or apart from the ARPANET. These time-sharing networks were not, technically speaking, computer networks, since they consisted of a single mainframe computer running computations in a basement somewhere for many dumb terminals, like some portly chthonic creature with tentacles sprawling across the country. But they nevertheless enabled most of the social behavior now connoted by the word “network” in a post-Facebook world. 
For example, on the Kiewit Network, which was an extension of the Dartmouth Time-Sharing System to colleges and high schools across the Northeast, high school students collaboratively maintained a “gossip file” that allowed them to keep track of the exciting goings-on at other schools, “creating social connections from Connecticut to Maine.”[13][15] Meanwhile, women at Mount Holyoke College corresponded with men at Dartmouth over the network, perhaps to arrange dates or keep in touch with boyfriends.[14][16] This was all happening in the 1960s. Rankin argues that by ignoring these early time-sharing networks we impoverish our understanding of how American digital culture developed over the last 50 years, leaving room for a “Silicon Valley mythology” that credits everything to the individual genius of a select few founding fathers. + +As for the ARPANET itself, if we recognize that the key challenge was connecting the computer _systems_ and not just the physical computers, then that might change what we choose to emphasize when we tell the story of the innovations that made the ARPANET possible. The ARPANET was the first ever packet-switched network, and lots of impressive engineering went into making that happen. I think it’s a mistake, though, to say that the ARPANET was a breakthrough because it was the first packet-switched network and then leave it at that. The ARPANET was meant to make it easier for computer scientists across the country to collaborate; that project was as much about figuring out how different operating systems and programs written in different languages would interface with each other as it was about figuring out how to efficiently ferry data back and forth between Massachusetts and California. So the ARPANET was the first packet-switched network, but it was also an amazing standards success story—something I find especially interesting given [how][17] [many][18] [times][19] I’ve written about failed standards on this blog.
+ +Inventing the protocols for the ARPANET was an afterthought even at the time, so naturally the job fell to a group made up largely of graduate students. This group, later known as the Network Working Group, met for the first time at UC Santa Barbara in August of 1968.[15][20] There were 12 people present at that first meeting, most of whom were representatives from the four universities that were to be the first host sites on the ARPANET when the equipment was ready.[16][21] Steve Crocker, then a graduate student at UCLA, attended; he told me over a Zoom call that it was all young guys at that first meeting, and that Elmer Shapiro, who chaired the meeting, was probably the oldest one there at around 38. ARPA had not put anyone in charge of figuring out how the computers would communicate once they were connected, but it was obvious that some coordination was necessary. As the group continued to meet, Crocker kept expecting some “legitimate adult” with more experience and authority to fly out from the East Coast to take over, but that never happened. The Network Working Group had ARPA’s tacit approval—all those meetings involved lots of long road trips, and ARPA money covered the travel expenses—so they were it.[17][22] + +The Network Working Group faced a huge challenge. Nobody had ever sat down to connect computer systems together in a general-purpose way; that flew against all of the assumptions that prevailed in computing in the late 1960s: + +> The typical mainframe of the period behaved as if it were the only computer in the universe. There was no obvious or easy way to engage two diverse machines in even the minimal communication needed to move bits back and forth. You could connect machines, but once connected, what would they say to each other? In those days a computer interacted with devices that were attached to it, like a monarch communicating with his subjects. 
Everything connected to the main computer performed a specific task, and each peripheral device was presumed to be ready at all times for a fetch-my-slippers type command…. Computers were strictly designed for this kind of interaction; they send instructions to subordinate card readers, terminals, and tape units, and they initiate all dialogues. But if another device in effect tapped the computer on the shoulder with a signal and said, “Hi, I’m a computer too,” the receiving machine would be stumped.[18][23] + +As a result, the Network Working Group’s progress was initially slow.[19][24] The group did not settle on an “official” specification for any protocol until June, 1970, nearly two years after the group’s first meeting.[20][25] + +But by the time the ARPANET was to be shown off at the 1972 ICCC, all the key protocols were in place. A scenario like the chess scenario exercised many of them. When a user ran the command `@e r`, short for `@echo remote`, that instructed the TIP to make use of a facility in the new TELNET virtual teletype protocol to inform the remote host that it should echo the user’s input. When a user then ran the command `@L 134`, short for `@login 134`, that caused the TIP to invoke the Initial Connection Protocol with host 134, which in turn would cause the remote host to allocate all the necessary resources for the connection and drop the user into a TELNET session. (The file transfer scenario I described may well have made use of the File Transfer Protocol, though that protocol was only ready shortly before the conference.[21][26]) All of these protocols were known as “level three” protocols, and below them were the host-to-host protocol at level two (which defined the basic format for the messages the hosts should expect from each other), and the host-to-IMP protocol at level one (which defined how hosts communicated with the routing equipment they were linked to). Incredibly, the protocols all worked. 
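The layering described above can be pictured as nested envelopes. Here is a toy sketch of that idea only — the field names and one-byte headers are invented for illustration and are not the real ARPANET message formats:

```python
# Toy model of the three protocol levels: level-3 (TELNET) data is
# wrapped in a level-2 (host-to-host) message, which is wrapped in a
# level-1 (host-to-IMP) frame. All header fields here are invented.

def telnet_payload(text: str) -> bytes:
    # Level 3: the virtual-teletype data, i.e. the user's keystrokes.
    return text.encode("ascii")

def host_to_host(payload: bytes, link: int) -> bytes:
    # Level 2: tag the logical link number and the payload length.
    return bytes([link, len(payload)]) + payload

def host_to_imp(message: bytes, dest_host: int) -> bytes:
    # Level 1: prefix the destination host number so the IMP can route it.
    return bytes([dest_host]) + message

# A user's ":chess" keystrokes bound for host 134 end up triple-wrapped:
frame = host_to_imp(host_to_host(telnet_payload(":chess\r"), link=1), dest_host=134)
assert frame == bytes([134, 1, 7]) + b":chess\r"
```

Unwrapping the envelopes in reverse order at the far end recovers the keystrokes, which is all a level-three program like the chess server ever needed to see.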
+ +In my view, the Network Working Group was able to get everything together in time and just generally excel at its task because it adopted an open and informal approach to standardization, as exemplified by the famous Request for Comments (RFC) series of documents. These documents, originally circulated among the members of the Network Working Group by snail mail, were a way of keeping in touch between meetings and soliciting feedback to ideas. The “Request for Comments” framing was suggested by Steve Crocker, who authored the first RFC and supervised the RFC mailing list in the early years, in an attempt to emphasize the open-ended and collaborative nature of what the group was trying to do. That framing, and the availability of the documents themselves, made the protocol design process into a melting pot of contributions and riffs on other people’s contributions where the best ideas could emerge without anyone losing face. The RFC process was a smashing success and is still used to specify internet standards today, half a century later. + +It’s this legacy of the Network Working Group that I think we should highlight when we talk about ARPANET’s impact. Though today one of the most magical things about the internet is that it can connect us with people on the other side of the planet, it’s only slightly facetious to say that that technology has been with us since the 19th century. Physical distance was conquered well before the ARPANET by the telegraph. The kind of distance conquered by the ARPANET was instead the logical distance between the operating systems, character codes, programming languages, and organizational policies employed at each host site. 
Implementing the first packet-switched network was of course a major feat of engineering that should also be mentioned, but the problem of agreeing on standards to connect computers that had never been designed to play nice with each other was the harder of the two big problems involved in building the ARPANET—and its solution was the most miraculous part of the ARPANET story. + +In 1981, ARPA issued a “Completion Report” reviewing the first decade of the ARPANET’s history. In a section with the belabored title, “Technical Aspects of the Effort Which Were Successful and Aspects of the Effort Which Did Not Materialize as Originally Envisaged,” the authors wrote: + +> Possibly the most difficult task undertaken in the development of the ARPANET was the attempt—which proved successful—to make a number of independent host computer systems of varying manufacture, and varying operating systems within a single manufactured type, communicate with each other despite their diverse characteristics.[22][27] + +There you have it from no less a source than the federal government of the United States. + +_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][28] on Twitter or subscribe to the [RSS feed][29] to make sure you know when a new post is out._ + +_Previously on TwoBitHistory…_ + +> It's been too long, I know, but I finally got around to writing a new post. This one is about how REST APIs should really be known as FIOH APIs instead (Fuck It, Overload HTTP): +> +> — TwoBitHistory (@TwoBitHistory) [June 28, 2020][30] + + 1. “Hilton Hotel Opens in Capital Today.” _The New York Times_, 20 March 1965, . Accessed 7 Feb. 2021. [↩︎][31] + + 2. James Pelkey. _Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968-1988,_ Chapter 4, Section 12, 2007, . Accessed 7 Feb. 2021. [↩︎][32] + + 3. Katie Hafner and Matthew Lyon. _Where Wizards Stay Up Late: The Origins of the Internet_. New York, Simon & Schuster, 1996, p. 
178. [↩︎][33] + + 4. “International Conference on Computer Communication.” _Computer_, vol. 5, no. 4, 1972, p. c2, . Accessed 7 Feb. 2021. [↩︎][34] + + 5. “Program for the International Conference on Computer Communication.” _The Papers of Clay T. Whitehead_, Box 42, . Accessed 7 Feb. 2021. [↩︎][35] + + 6. It’s actually not clear to me which room was used for the ARPANET demonstration. Lots of sources talk about a “ballroom,” but the Washington Hilton seems to consider the room with the name “Georgetown” more of a meeting room. So perhaps the demonstration was in the International Ballroom instead. But RFC 372 alludes to a booking of the “Georgetown Ballroom” for the demonstration. A floorplan of the Washington Hilton can be found [here][36]. [↩︎][37] + + 7. Hafner, p. 179. [↩︎][38] + + 8. ibid., p. 178. [↩︎][39] + + 9. Bob Metcalfe. “Scenarios for Using the ARPANET.” _Collections-Computer History Museum_, . Accessed 7 Feb. 2021. [↩︎][40] + + 10. Hafner, p. 176. [↩︎][41] + + 11. Robert H. Thomas. “Planning for ACCAT Remote Site Operations.” BBN Report No. 3677, October 1977, . Accessed 7 Feb. 2021. [↩︎][42] + + 12. Hafner, p. 12. [↩︎][43] + + 13. Joy Lisi Rankin. _A People’s History of Computing in the United States_. Cambridge, MA, Harvard University Press, 2018, p. 84. [↩︎][44] + + 14. Rankin, p. 93. [↩︎][45] + + 15. Steve Crocker. Personal interview. 17 Dec. 2020. [↩︎][46] + + 16. Crocker sent me the minutes for this meeting. The document lists everyone who attended. [↩︎][47] + + 17. Steve Crocker. Personal interview. [↩︎][48] + + 18. Hafner, p. 146. [↩︎][49] + + 19. “Completion Report / A History of the ARPANET: The First Decade.” BBN Report No. 4799, April 1981, , p. II-13. [↩︎][50] + + 20. I’m referring here to RFC 54, “Official Protocol Proffering.” [↩︎][51] + + 21. Hafner, p. 175. [↩︎][52] + + 22. “Completion Report / A History of the ARPANET: The First Decade,” p. II-29. 
[↩︎][53] + + + + +-------------------------------------------------------------------------------- + +via: https://twobithistory.org/2021/02/07/arpanet.html + +作者:[Two-Bit History][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twobithistory.org +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/ARPANET +[2]: tmp.pnPpRrCI3S#fn:1 +[3]: tmp.pnPpRrCI3S#fn:2 +[4]: tmp.pnPpRrCI3S#fn:3 +[5]: tmp.pnPpRrCI3S#fn:4 +[6]: tmp.pnPpRrCI3S#fn:5 +[7]: tmp.pnPpRrCI3S#fn:6 +[8]: tmp.pnPpRrCI3S#fn:7 +[9]: tmp.pnPpRrCI3S#fn:8 +[10]: https://archive.computerhistory.org/resources/access/text/2019/07/102784024-05-001-acc.pdf +[11]: tmp.pnPpRrCI3S#fn:9 +[12]: tmp.pnPpRrCI3S#fn:10 +[13]: tmp.pnPpRrCI3S#fn:11 +[14]: tmp.pnPpRrCI3S#fn:12 +[15]: tmp.pnPpRrCI3S#fn:13 +[16]: tmp.pnPpRrCI3S#fn:14 +[17]: https://twobithistory.org/2018/05/27/semantic-web.html +[18]: https://twobithistory.org/2018/12/18/rss.html +[19]: https://twobithistory.org/2020/01/05/foaf.html +[20]: tmp.pnPpRrCI3S#fn:15 +[21]: tmp.pnPpRrCI3S#fn:16 +[22]: tmp.pnPpRrCI3S#fn:17 +[23]: tmp.pnPpRrCI3S#fn:18 +[24]: tmp.pnPpRrCI3S#fn:19 +[25]: tmp.pnPpRrCI3S#fn:20 +[26]: tmp.pnPpRrCI3S#fn:21 +[27]: tmp.pnPpRrCI3S#fn:22 +[28]: https://twitter.com/TwoBitHistory +[29]: https://twobithistory.org/feed.xml +[30]: https://twitter.com/TwoBitHistory/status/1277259930555363329?ref_src=twsrc%5Etfw +[31]: tmp.pnPpRrCI3S#fnref:1 +[32]: tmp.pnPpRrCI3S#fnref:2 +[33]: tmp.pnPpRrCI3S#fnref:3 +[34]: tmp.pnPpRrCI3S#fnref:4 +[35]: tmp.pnPpRrCI3S#fnref:5 +[36]: https://www3.hilton.com/resources/media/hi/DCAWHHH/en_US/pdf/DCAWH.Floorplans.Apr25.pdf +[37]: tmp.pnPpRrCI3S#fnref:6 +[38]: tmp.pnPpRrCI3S#fnref:7 +[39]: tmp.pnPpRrCI3S#fnref:8 +[40]: tmp.pnPpRrCI3S#fnref:9 +[41]: tmp.pnPpRrCI3S#fnref:10 +[42]: tmp.pnPpRrCI3S#fnref:11 +[43]: tmp.pnPpRrCI3S#fnref:12 +[44]: 
tmp.pnPpRrCI3S#fnref:13 +[45]: tmp.pnPpRrCI3S#fnref:14 +[46]: tmp.pnPpRrCI3S#fnref:15 +[47]: tmp.pnPpRrCI3S#fnref:16 +[48]: tmp.pnPpRrCI3S#fnref:17 +[49]: tmp.pnPpRrCI3S#fnref:18 +[50]: tmp.pnPpRrCI3S#fnref:19 +[51]: tmp.pnPpRrCI3S#fnref:20 +[52]: tmp.pnPpRrCI3S#fnref:21 +[53]: tmp.pnPpRrCI3S#fnref:22 diff --git a/sources/talk/20210209 Understanding Linus-s Law for open source security.md b/sources/talk/20210209 Understanding Linus-s Law for open source security.md new file mode 100644 index 0000000000..bbaf38fdb0 --- /dev/null +++ b/sources/talk/20210209 Understanding Linus-s Law for open source security.md @@ -0,0 +1,90 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Understanding Linus's Law for open source security) +[#]: via: (https://opensource.com/article/21/2/open-source-security) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Understanding Linus's Law for open source security +====== +Linus's Law is that given enough eyeballs, all bugs are shallow. How +does this apply to open source software security? +![Hand putting a Linux file folder into a drawer][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. This article discusses Linux's influence on the security of open source software. + +An often-praised virtue of open source software is that its code can be reviewed (or "audited," as security professionals like to say) by anyone and everyone. However, if you actually ask many open source users when the last time they reviewed code was, you might get answers ranging from a blank stare to an embarrassed murmur. And besides, there are some really big open source applications out there, so it can be difficult to review every single line of code effectively. 
+ +Extrapolating from these slightly uncomfortable truths, you have to wonder: When nobody looks at the code, does it really matter whether it's open or not? + +### Should you trust open source? + +We tend to make a trite assumption in hobbyist computing that open source is "more secure" than anything else. We don't often talk about what that means, what the basis of comparison is ("more" secure than what?), or how the conclusion has even been reached. It's a dangerous statement to make because it implies that as long as you call something _open source_, it automatically and magically inherits enhanced security. That's not what open source is about, and in fact, it's what open source security is very much against. + +You should never assume an application is secure unless you have personally audited and understood its code. Once you have done this, you can assign _ultimate trust_ to that application. Ultimate trust isn't a thing you do on a computer; it's something you do in your own mind: You trust software because you choose to believe that it is secure, at least until someone finds a way to exploit that software. + +You're the only person who can place ultimate trust in that code, so every user who wants that luxury must audit the code for themselves. Taking someone else's word for it doesn't count! + +So until you have audited and understood a codebase for yourself, the maximum trust level you can give to an application falls somewhere on a spectrum ranging from _not trustworthy at all_ to _pretty trustworthy_. There's no cheat sheet for this. It's a personal choice you must make for yourself. If you've heard from people you strongly trust that an application is secure, then you might trust that software more than you trust something for which you've gotten no trusted recommendations. + +Because you cannot audit proprietary (non-open source) code, you can never assign it _ultimate trust_. 
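For readers wondering what a first pass at an audit can look like in practice, here is a minimal, hypothetical sketch in Python. It does nothing more than flag C calls that are historically error-prone so a human reviewer knows where to start reading; it is nowhere near a real audit, and the pattern list is illustrative, not exhaustive.

```python
import pathlib
import re

# Historically error-prone C calls worth a closer look; illustrative only.
RISKY_CALLS = re.compile(r"\b(gets|strcpy|strcat|sprintf|system)\s*\(")

def flag_risky_calls(source_root):
    """Return (filename, line number, line) for each suspicious call site."""
    hits = []
    for path in sorted(pathlib.Path(source_root).rglob("*.c")):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if RISKY_CALLS.search(line):
                hits.append((path.name, lineno, line.strip()))
    return hits
```

A human still has to read every flagged site in context and, more importantly, everything the pattern list misses; a tool like this only decides where your attention goes first.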
+ +### Linus's Law + +The reality is, not everyone is a programmer, and not everyone who is a programmer has the time to dedicate to reviewing hundreds and hundreds of lines of code. So if you're not going to audit code yourself, then you must choose to trust (to some degree) the people who _do_ audit code. + +So exactly who does audit code, anyway? + +Linus's Law asserts that _given enough eyeballs, all bugs are shallow_, but we don't really know how many eyeballs are "enough." However, don't underestimate the number. Software is very often reviewed by more people than you might imagine. The original developer or developers obviously know the code that they've written. However, open source is often a group effort, so the longer code is open, the more software developers end up seeing it. A developer must review major portions of a project's code because they must learn a codebase to write new features for it. + +Open source packagers also get involved with many projects in order to make them available to a Linux distribution. Sometimes an application can be packaged with almost no familiarity with the code, but often a packager gets familiar with a project's code, both because they don't want to sign off on software they don't trust and because they may have to make modifications to get it to compile correctly. Bug reporters and triagers also sometimes get familiar with a codebase as they try to solve anomalies ranging from quirks to major crashes. Of course, some bug reporters inadvertently reveal code vulnerabilities not by reviewing the code themselves but by bringing attention to something that obviously doesn't work as intended. Sysadmins frequently get intimately familiar with the code of important software their users rely upon. Finally, there are security researchers who dig into code exclusively to uncover potential exploits. 
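One rough way to get a feel for how many eyeballs a project has attracted is to count distinct authors in its version-control history. The helper below is a hypothetical sketch: it assumes you have already captured the output of `git log --format=%ae` as text, and it deliberately ignores the many reviewers (packagers, triagers, researchers) who never commit at all.

```python
def distinct_authors(git_log_emails):
    """Count unique author emails from `git log --format=%ae` output."""
    emails = {line.strip().lower()
              for line in git_log_emails.splitlines() if line.strip()}
    return len(emails)

# Hypothetical log output; note the case-insensitive deduplication.
sample = "seth@example.com\npackager@distro.org\nSeth@example.com\n"
print(distinct_authors(sample))  # → 2
```

Commit authorship is a crude lower bound on the reviewer pool, but even that lower bound is often surprisingly large for long-lived projects.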
+ +### Trust and transparency + +Some people assume that because major software is composed of hundreds of thousands of lines of code, it's basically impossible to audit. Don't be fooled by how much code it takes to make an application run. You don't actually have to read millions of lines. Code is highly structured, and exploitable flaws are rarely just a single line hidden among the millions of lines; there are usually whole functions involved. + +There are exceptions, of course. Sometimes a serious vulnerability is enabled with just one system call or by linking to one flawed library. Luckily, those kinds of errors are relatively easy to notice, thanks to the active role of security researchers and vulnerability databases. + +Some people point to bug trackers, such as the [Common Vulnerabilities and Exposures (CVE)][2] website, and deduce that it's actually as plain as day that open source isn't secure. After all, hundreds of security risks are filed against lots of open source projects, out in the open for everyone to see. Don't let that fool you, though. Just because you don't get to see the flaws in closed software doesn't mean those flaws don't exist. In fact, we know that they do because exploits are filed against them, too. The difference is that _all_ exploits against open source applications are available for developers (and users) to see so those flaws can be mitigated. That's part of the system that boosts trust in open source, and it's wholly missing from proprietary software. + +There may never be "enough" eyeballs on any code, but the stronger and more diverse the community around the code, the better chance there is to uncover and fix weaknesses. + +### Trust and people + +In open source, the probability that many developers, each working on the same project, have noticed something _not secure_ but have all remained equally silent about that flaw is considered to be low because humans rarely mutually agree to conspire in this way. 
We've seen how disjointed human behavior can be recently with COVID-19 mitigation: + + * We've all identified a flaw (a virus). + * We know how to prevent it from spreading (stay home). + * Yet the virus continues to spread because one or more people deviate from the mitigation plan. + + + +The same is true for bugs in software. If there's a flaw, someone noticing it will bring it to light (provided, of course, that someone sees it). + +However, with proprietary software, there can be a high probability that many developers working on a project may notice something not secure but remain equally silent because the proprietary model relies on paychecks. If a developer speaks out against a flaw, then that developer may at best hurt the software's reputation, thereby decreasing sales, or at worst, may be fired from their job. Developers being paid to work on software in secret do not tend to talk about its flaws. If you've ever worked as a developer, you've probably signed an NDA, and you've been lectured on the importance of trade secrets, and so on. Proprietary software encourages, and more often enforces, silence even in the face of serious flaws. + +### Trust and software + +Don't trust software you haven't audited. + +If you must trust software you haven't audited, then choose to trust code that's exposed to many developers who independently are likely to speak up about a vulnerability. + +Open source isn't inherently more secure than proprietary software, but the systems in place to fix it are far better planned, implemented, and staffed. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/open-source-security + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer) +[2]: https://cve.mitre.org diff --git a/sources/talk/20210211 Understanding Open Governance Networks.md b/sources/talk/20210211 Understanding Open Governance Networks.md new file mode 100644 index 0000000000..771947bb39 --- /dev/null +++ b/sources/talk/20210211 Understanding Open Governance Networks.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Understanding Open Governance Networks) +[#]: via: (https://www.linux.com/news/understanding-open-governance-networks/) +[#]: author: (The Linux Foundation https://www.linuxfoundation.org/en/blog/understanding-open-governance-networks/) + +Understanding Open Governance Networks +====== + +![][1] + +Throughout the modern business era, industries and commercial operations have shifted substantially to digital processes. Whether you look at EDI as a means to exchange invoices or cloud-based billing and payment solutions today, businesses have steadily been moving towards increasing digital operations. In the last few years, we’ve seen the promises of digital transformation come alive, particularly in [industries that have shifted to software-defined models][2]. The next step of this journey will involve enabling digital transactions through decentralized networks. 
+ +A fundamental adoption issue will be figuring out who controls and decides how a decentralized network is governed. It may seem oxymoronic at first, but decentralized networks still need governance. A future may hold autonomously self-governing decentralized networks, but this model is not accepted in industries today. The governance challenge with decentralized network technology lies in how participants in a network will establish and maintain policies: network operations, on/offboarding of participants, setting fees, configurations, and software changes are among the issues that will have to be decided to achieve a successful network. No company wants to participate in or take a dependency on a network that is controlled or run by a competitor, potential competitor, or any single stakeholder at all, for that matter. + +Earlier this year, [we presented a solution for Open Governance Networks][3] that enables an industry or ecosystem to govern itself in an open, inclusive, neutral, and participatory model. You may be surprised to learn that it’s based on best practices in open governance we’ve developed over decades of facilitating the world’s most successful and competitive open source projects. + +### The Challenge + +For the last few years, a running technology joke has been “describe your problem, and someone will tell you blockchain is the solution.” There have been many other concerns raised and confusion created, as overnight headlines hyped cryptocurrency schemes. Despite all this, behind the scenes, and all along, sophisticated companies understood that distributed ledger technology would be a powerful enabler for tackling complex challenges in an industry, or even a section of an industry. + +At the Linux Foundation, we focused on enabling those organizations to collaborate on open source enterprise blockchain technologies within our Hyperledger community. 
That community has driven collaboration on every aspect of enterprise blockchain technology, including identity, security, and transparency. Like other Linux Foundation projects, these enterprise blockchain communities are open, collaborative efforts. We have had many vertical industry participants, from retail, automotive, aerospace, banking, and others, engage with real industry challenges they needed to solve. And in this subset of cases, enterprise blockchain is the answer. + +The technology is ready. Enterprise blockchain has been through many proof-of-concept implementations, and we’ve already seen that many organizations have shifted to production deployments. A few notable examples are: + + * [Trust Your Supplier Network][4]: 25 major corporate members, from Anheuser-Busch InBev to UPS. In production since September 2019. + * [Foodtrust][5]: Launched August 2017 with ten members; now being used by all major retailers. + * [Honeywell][6]: 50 vendors with storefronts in the new marketplace. In its first year, GoDirect Trade processed more than $5 million in online transactions. + + + +However, just because we have the technology doesn’t mean we have the appropriate conditions to solve adoption challenges. A certain set of challenges about networks’ governance has become a “last mile” problem for industry adoption. While there are many examples of successful production deployments and multi-stakeholder engagements for commercial enterprise blockchains already, specific adoption scenarios have been halted over uncertainty, or mistrust, over who will govern a blockchain network and how. + +To precisely state the issue, in many situations, company A does not want to be dependent on, or trust, company B to control a network. For specific solutions that require broad industry participation to succeed, you can name any industry, and there will be company A and company B. + +We think the solution to this challenge will be Open Governance Networks. 
+ +### The Linux Foundation vision of the Open Governance Network + +An Open Governance Network is a distributed ledger service, composed of nodes, operated under the policies and directions of an inclusive set of industry stakeholders. + +Open Governance Networks will set the policies and rules for participation in a decentralized ledger network that acts as an industry utility for transactions and data sharing among participants that have permissions on the network. The Open Governance Network model allows any organization to participate. Those organizations that want to be active in sharing the operational costs will benefit from having a representative say in the policies and rules for the network itself. The software underlying the Open Governance Network will be open source software, including the configurations and build tools so that anyone can validate whether a network node complies with the appropriate policies. + +Many who have worked with the Linux Foundation will realize an open, neutral, and participatory governance model under a nonprofit structure that has already been thriving for decades in successful open source software communities. All we’re doing here is taking the same core principles of what makes open governance work for open source software, open standards, and open collaboration and applying those principles to managing a distributed ledger. This is a model that the Linux Foundation has used successfully in other communities, such as the [Let’s Encrypt][7] certificate authority. + +Our ecosystem members trust the Linux Foundation to help solve this last mile problem using open governance under a neutral nonprofit entity. This is one solution to the concerns about neutrality and distributed control. In pan-industry use cases, it is generally not acceptable for one participant in the network to have power in any way that could be used as an advantage over someone else in the industry.  
The control of a ledger is a valuable asset, and competitive organizations generally have concerns in allowing one entity to control this asset. If not hosted in a neutral environment for the community’s benefit, network control can become a leverage point over network users. + +We see this neutrality of control challenge as the primary reason why some privately held networks have struggled to gain widespread adoption. In order to encourage participation, industry leaders are looking for a neutral governance structure, and the Linux Foundation has proven the open governance models accomplish that exceptionally well. + +This neutrality of control issue is very similar to the rationale for public utilities. Because the economic model mirrors a public utility, we debated calling these “industry utility networks.” In our conversations, we have learned industry participants are open to sharing the cost burden to stand up and maintain a utility. Still, they want a low-cost, not profit-maximizing model. That is why our nonprofit model makes the most sense. + +It’s also not a public utility in that each network we foresee today would be restricted in participation to those who have a stake in the network, not any random person in the world. There’s a layer of human trust that our communities have been enabling on top of distributed networks, which started with the [Trust over IP Foundation][8]. + +Unlike public cryptocurrency networks where anyone can view the ledger or submit proposed transactions, industries have a natural need to limit access to legitimate parties in their industry. With minor adjustments to address the need for policies for transactions on the network, we believe a similar governance model applied to distributed ledger ecosystems can resolve concerns about the neutrality of control. 
+ +### Understanding LF Open Governance Networks + +Open Governance Networks can be reduced to the following building block components: + + * Business Governance: Networks need a decision-making body to establish core policies (e.g., network policies), make funding and budget decisions, contract with a network manager, and handle other business matters necessary for the network’s success. The Linux Foundation establishes a governing board to manage the business governance. + * Technical Governance: Networks will require software. A technical open source community will openly maintain the software, specifications, or configuration decisions implemented by the network nodes. The Linux Foundation establishes a technical steering committee to oversee technical projects, configurations, working groups, etc. + * Transaction Entity: Networks will require a transaction entity that will a) act as counterparty to agreements with parties transacting on the network, b) collect fees from participants, and c) execute contracts for operational support (e.g., hiring a network manager). + + + +Of these building blocks, the Linux Foundation already offers its communities the Business and Technical Governance needed for Open Governance Networks. The final component is the new LF Open Governance Networks entity. + +LF Open Governance Networks will enable our communities to establish their own Open Governance Network and have an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit, a nonstock corporation that will maximize utility rather than profit. Through agreements with the Linux Foundation, LF Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation. 
+ +If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at **[governancenetworks@linuxfoundation.org][9]** + +The post [Understanding Open Governance Networks][10] appeared first on [Linux Foundation][11]. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/news/understanding-open-governance-networks/ + +作者:[The Linux Foundation][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxfoundation.org/en/blog/understanding-open-governance-networks/ +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/wp-content/uploads/2021/02/understanding-opengovnetworks.png +[2]: https://www.linuxfoundation.org/blog/2020/09/software-defined-vertical-industries-transformation-through-open-source/ +[3]: https://www.linuxfoundation.org/blog/2020/10/introducing-the-open-governance-network-model/ +[4]: https://www.hyperledger.org/learn/publications/chainyard-case-study +[5]: https://www.hyperledger.org/learn/publications/walmart-case-study +[6]: https://www.hyperledger.org/learn/publications/honeywell-case-study +[7]: https://letsencrypt.org/ +[8]: https://trustoverip.org/ +[9]: mailto:governancenetworks@linuxfoundation.org +[10]: https://www.linuxfoundation.org/en/blog/understanding-open-governance-networks/ +[11]: https://www.linuxfoundation.org/ diff --git a/sources/talk/20210212 18 ways to differentiate open source products from upstream suppliers.md b/sources/talk/20210212 18 ways to differentiate open source products from upstream suppliers.md new file mode 100644 index 0000000000..045a31836a --- /dev/null +++ b/sources/talk/20210212 18 ways to differentiate open source products from upstream suppliers.md @@ -0,0 +1,124 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: 
reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (18 ways to differentiate open source products from upstream suppliers) +[#]: via: (https://opensource.com/article/21/2/differentiating-products-upstream-suppliers) +[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux) + +18 ways to differentiate open source products from upstream suppliers +====== +Open source products must create enough differentiated value that +customers will voluntarily pay for them versus another (or free) +product. +![Tips and gears turning][1] + +In the first three parts of this series, I explored [open source as a supply chain][2], [what a product is][3], and [what product managers do][4]. In this fourth article, I'll look at a plethora of methods to differentiate open source software products from their upstream open source projects. + +Since open source projects are essentially information products, these methods are likely to apply to many information-related products (think YouTube creators) that have a component of value given away for free. Product managers have to get creative when the information and material to build their product are freely available to users. + +### Creating and capturing value + +Product managers are responsible for creating solutions that attract and retain customers. To create a customer, they must provide value in exchange for money. Like a good salesperson, a product manager should never feel guilty about charging for their product (see [_How to sell an open source product_][5] by John Mark Walker). + +Products built on open source are fundamentally no different from any other product or service. They must create value for customers. In fact, they must create enough value that customers will voluntarily pay a price that is sufficient to pay for the development costs and return a profit. These products must also be differentiated from competing products and services, as well as upstream projects. 
+
+![Inputs for creating value][6]
+
+(Scott McCarty, [CC BY-SA 4.0][7])
+
+While products built on open source software are not fundamentally different from other products and services, there are some differences. First, some of the development costs are defrayed among all open source contributors. These costs can include code, testing, documentation, hardware, project hosting costs, etc. But even when development costs are defrayed in open source, costs are incurred by the vendor that productizes the code. These can include employee costs for research, analysis, security, performance testing, certification processes (e.g., collaborating with hardware vendors, cloud providers, etc.), and of course, sales and marketing.
+
+![Inputs for solving market problems][8]
+
+(Scott McCarty, [CC BY-SA 4.0][7])
+
+Successful open source products must be able to charge a price that is sufficient to pay for the defrayed upstream open source contributions (development costs) and the downstream productization costs (vendor costs). Stated another way, products can only charge a sufficient price if they create value that can only be captured by customers paying for them. That might sound harsh, but it's a reality for all products. There's a saying in product management: Pray to pay doesn't work. With that said, don't be too worried. There are ethical ways to capture value.
+
+### Types of value
+
+Fundamentally, there are two types of value: proprietary and non-proprietary. Proprietary is a bad word in open source software, but an attractive word in manufacturing, finance, and other industries. Many financial companies will highlight their proprietary algorithms, as do drug companies with their formulas and manufacturers with their processes. In software, proprietary value is often thought to be completely incongruous with free and open source software. People often assume proprietary value is a binary decision.
It’s difficult for people to imagine proprietary value in the context of open source software without being artificially constrained by a license. However, as we’ll attempt to demonstrate, it’s not so clear cut. + +From an open source product management perspective, you can define proprietary value as anything that would be very difficult or nearly impossible for the customer to recreate themselves—or something the potential customer doesn't believe they can recreate. Commodity value is the opposite of proprietary. Commodity value is value the customer believes they could construct (or reconstruct) given enough time and money. + +Reconstructing commodity value instead of purchasing it makes sense only if it's cheaper or easier than buying a product. Stated another way, a good product should save a customer money compared to building the solution themselves. It's in this cost gap that open source products exist and thrive. + +With products built, or mostly built, on an open source supply chain, customers retain the "build versus buy" decision and can stop paying at any time. This is often true with open core products as well. As long as the majority of the supply chain is open source, the customer likely could rebuild a few components to get what they need. The open source product manager's job is the same as for any other product or service: to create and retain customers by providing value worth paying for. + +### Differentiators for open source product managers + +Like any artist, a product manager traffics in value as their medium. They must price and package their product. They must constantly strive to differentiate their product against competitors in the marketplace and the upstream projects that are suppliers to that product. However, the supply chain is but one tool a product manager can use to differentiate and create a customer. 
+ +This is a less-than-exhaustive list that should give you some ideas about how product managers can differentiate their products and create value. As you read through the list, think deeply about whether a customer could recreate the value of each given enough time, money, and willpower. + + * **Supply chain:** Selecting the upstream suppliers is important. The upstream community's health is a selling point over products based on competing upstream projects. A perfect example of this kind of differentiation is with products such as OpenShift, Docker EE, and Mesosphere, which respectively rely on Kubernetes, Swarm, and Apache Mesos as upstream suppliers. Similar to how electric cars are replacing gasoline engines, the choice of technology and its supplier provide differentiation. + + * **Quality engineering:** Upstream continuous integration and continuous delivery (CI/CD) and user testing are wonderful bases for building a product. However, it's critical to ensure that the downstream product, often made up of multiple upstream projects, works well together with specific versions of all the components. Testing the entire solution together applies as much to differentiating from upstream suppliers as it does from competitive products. Customers want products that just work. + + * **Industry certifications:** Specific classes of customers, such as government, financial services, transportation, manufacturing, and telecom, often have certification requirements. Typically, these are related to security or auditing and are often quite expensive. Certifications are great because they differentiate the product from competitors and upstream. + + * **Hardware or cloud provider certifications:** The dirty secret of cheap hardware is that it changes all the time. Often this hardware has new capabilities with varying levels of maturity. 
Hardware certifications provide a level of confidence that the software will run well on a specific piece of hardware or cloud virtual machine. They also provide a level of assurance that the product company and the platform on which it is certified to run are committing to make it work well together. A potential customer could always vet hardware themselves, but they often don't have deep relationships with hardware vendors and cloud providers, making it difficult to demand fixes and patches. + + * **Ecosystem:** This represents access to a plethora of add-on solutions from third-party vendors. Again, the ecosystem provides some assurance that all the entities work together to make things work well. Small companies would likely find it difficult or impossible to demand that individual software vendors certify their privately built platforms. Integrations like these are usually quite expensive for an individual user and are best defrayed across a product's customers. + + * **Lifecycle:** Upstream projects are great because they move quickly and innovate. But many different versions of many different upstream projects can go into a single product's supply chain. Ensuring that all the versions of the different upstream projects work together over a given lifecycle is a lot of work. A longer lifecycle gives customers time to get a return on investment. Stated another way, users spend a lot of time and money planning and rolling out software. A product's lifecycle commitment ensures that customers can use the software and receive value from their investment for a reasonable amount of time. + + * **Packaging and distribution:** Once a vendor commits to supporting a product for a given lifecycle (e.g., five years), they must also commit to providing packaging and distribution during that time. 
Both products and cloud services need to provide customers the ability to plan a roadmap, execute a rollout, and expand over the desired lifecycle, so packages or services need to remain available for customer use. + + * **Documentation:** This is often overlooked by both open source projects and vendors. Aligning product documentation to the product lifecycle, versus the upstream supplier documentation, is extremely important. It's also important to document the entire solution working together, whether that's installation or use cases for end users. It's beneficial for customers to have documentation that applies to the specific combination of components they are using. + + * **Security:** Closely related to the product lifecycle, vendors must commit to providing security during the time the product is supported. This includes analyzing code, scoring vulnerabilities, patching those vulnerabilities, and verifying that they are patched. This is a particularly opportune area for products to differentiate themselves from upstream suppliers. It really is value creation through data. + + * **Performance:** Also closely related to product lifecycle, vendors must commit to providing performance testing, tuning recommendations, and sometimes even backporting performance improvements during the product's lifecycle. This is another opportune area for products. + + * **Indemnification:** This is essentially insurance in case the company using the software is sued by a patent troll. Often, the corporate legal team just won't have the skill set needed to defend themselves. While potential customers could pay a third party for legal services, would they know the software as well? + + * **Compute resources:** You simply can't get access to compute resources without paying for them. There are free trials, but sustained usage always requires paying, either through a cloud provider or by buying hardware. 
In fact, this is one of the main differentiated values provided by infrastructure as a service (IaaS) and software as a service (SaaS) cloud providers. This is quite differentiated from upstream suppliers because they will never have the budget to provide free compute, storage, and network. + + * **Consulting:** Access to operational knowledge to set up and use the software may be a differentiator. Clearly, a company can hire the talent, given enough budget and willpower, but talent can be difficult to find. In fact, one might argue that software vendors have a much better chance of attracting the top talent, essentially creating a talent vacuum for users trying to reconstruct the value themselves. + + * **Training:** Similar to consulting, the company that wrote, configured, released, and operated the software at scale often knows how to use it best. Again, a customer could hire the talent given enough budget and willpower. + + * **Operational knowledge:** IaaS and SaaS solutions often provide this kind of value. Similarly, knowledge bases and connected experiences that analyze an installed environment's configuration to provide the user with insights (e.g., OpenShift, Red Hat Insights) can provide this value. Operational knowledge is similar to training and consulting. + + * **Support:** This includes the ability to call for help or file support tickets and is similar to training, consulting, and operational knowledge. Support is often a crutch for open source product managers; again, customers can often recreate their own support organizations, depending on where they want to strategically invest budget and people, especially for level one and level two support. Level three support (e.g., kernel programmers) might be harder to hire. + + * **Proprietary code:** This is code that's not released under an open source license. A customer could always build a software development team and augment the open core code with the missing features they need. 
For the vendor, proprietary code has the downside of creating an unnatural competition between the upstream open source supplier and the downstream product. Furthermore, this unnatural split between open source and proprietary code does not provide the customer more value. It always feels like value is being unnaturally held back. I would argue that this is a very suboptimal form of value capture. + + * **Brand:** Brand equity is not easily measurable. It really comes down to a general level of trust. The customer needs to believe that the solution provider can and will help them when they need it. It's slow to build a brand and easy to lose it. Careful thought might reveal that the same is true with internal support organizations in a company. Users will quickly lose trust in internal support organizations, and it can take years to build it back up. + + + + +Reading through the list, do you think a potential customer could recreate the value of almost any of these items? The answer is almost assuredly yes. This is true with almost any product feature or capability, whether it's open source or even proprietary. Cloud providers build their own CPUs and hardware, and delivery companies (e.g., UPS, Amazon, etc.) sometimes build their own vehicles. Whether it makes sense to build or buy all depends on the business and its specific needs. + +### Add value in the right places + +The open source licensing model led to an explosion in the availability of components that can be assembled into a software product. Stated another way, it formed a huge supply chain of software. Product managers can create products from a completely open source supply chain (e.g., Red Hat, Chef, SUSE, etc.), or mix and match open source and proprietary technology (e.g., open core like Sourcefire or SugarCRM). Choosing a fully open source versus open core methodology should not be confused with solving a business problem. Customers only buy products that solve their problems. 
+ +Enterprise open source products are solutions to problems, much like a vehicle sold by an auto manufacturer. The product manager for an open source product determines the requirements, things like the lifecycle (number of years), security (important certifications), performance (important workloads), and ecosystem (partners). Some of these requirements can be met by upstream suppliers (open source projects); some cannot. + +An open source product is a composition of value consumed through a supply chain of upstream vendors and new value added by the company creating it. This new value combined with the consumed value is often worth more together and sold at a premium. It's the responsibility of product teams (including engineering, quality, performance, security, legal, etc.) to add new value in the right places, at the right times, to make their open source products worth the price customers pay versus building out the infrastructure necessary to assemble, maintain, and support the upstream components themselves. 
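That closing trade-off — paying for a product versus assembling, maintaining, and supporting the upstream components yourself — ultimately comes down to arithmetic. The sketch below is purely illustrative: the function name, the staffing levels, and the dollar figures are my own assumptions, not figures from this article.

```python
def build_vs_buy(subscription_per_year, engineer_cost_per_year,
                 engineers_needed, years):
    """Toy comparison of buying a supported product versus rebuilding
    and maintaining the commodity value in-house."""
    buy_total = subscription_per_year * years
    # Rebuilding is not a one-time cost: the same team has to keep
    # patching, testing, and certifying the stack every year.
    build_total = engineers_needed * engineer_cost_per_year * years
    return {
        "buy": buy_total,
        "build": build_total,
        "product_is_cheaper": buy_total < build_total,
    }

# Illustrative numbers only: a $100k/year subscription versus two
# $150k/year engineers reconstructing the lifecycle, security, and
# quality-engineering work over a five-year horizon.
result = build_vs_buy(100_000, 150_000, 2, 5)
```

In this made-up scenario the product wins by a factor of three; shrink the engineering estimate far enough and the answer flips, which is precisely the cost gap an open source product manager has to price within.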
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/differentiating-products-upstream-suppliers + +作者:[Scott McCarty][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/fatherlinux +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning) +[2]: https://opensource.com/article/20/10/open-source-supply-chain +[3]: https://opensource.com/article/20/10/defining-product-open-source +[4]: https://opensource.com/article/20/11/open-source-product-teams +[5]: https://opensource.com/article/20/6/sell-open-source-software +[6]: https://opensource.com/sites/default/files/uploads/creatingvalue1.png (Inputs for creating value) +[7]: https://creativecommons.org/licenses/by-sa/4.0/ +[8]: https://opensource.com/sites/default/files/uploads/creatingvalue2.png (Inputs for solving market problems) diff --git a/sources/talk/20210212 How to adopt DevSecOps successfully.md b/sources/talk/20210212 How to adopt DevSecOps successfully.md new file mode 100644 index 0000000000..27d6b9c41e --- /dev/null +++ b/sources/talk/20210212 How to adopt DevSecOps successfully.md @@ -0,0 +1,128 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to adopt DevSecOps successfully) +[#]: via: (https://opensource.com/article/21/2/devsecops) +[#]: author: (Mike Calizo https://opensource.com/users/mcalizo) + +How to adopt DevSecOps successfully +====== +Integrating security throughout the software development lifecycle is +important, but it's not always easy. 
+![Target practice][1] + +Adopting [DevOps][2] can help an organization transform and speed how its software is delivered, tested, and deployed to production. This is the well-known "DevOps promise" that has led to such a large surge in adoption. + +We've all heard about the many successful DevOps implementations that changed how an organization approaches software innovation, making it fast and secure through agile delivery to get [ahead of competitors][3]. This is where we see DevOps' promises achieved and delivered. + +But on the flipside, some DevOps adoptions cause more issues than benefits. This is the DevOps dilemma where DevOps fails to deliver on its promises. + +There are many factors involved in an unsuccessful DevOps implementation, and a major one is security. A poor security culture usually happens when security is left to the end of the DevOps adoption process. Applying existing security processes to DevOps can delay projects, cause frustrations within the team, and create financial impacts that can derail a project. + +[DevSecOps][4] was designed to avoid this very situation. Its purpose "is to build on the mindset that 'everyone is responsible for security…'" It also makes security a consideration at all levels of DevOps adoption. + +### The DevSecOps process + +Before DevOps and DevSecOps, the app security process looked something like the image below. Security came late in the software delivery process, after the software was accepted for production. + +![Old software development process with security at the end][5] + +(Michael Calizo, [CC BY-SA 4.0][6]) + +Depending on the organization's security profile and risk appetite, the application might even bypass security reviews and processes during acceptance. At that point, the security review becomes an audit exercise to avoid unnecessary project delays. 
+
+![Security as audit in software development][7]
+
+(Michael Calizo, [CC BY-SA 4.0][6])
+
+The DevSecOps [manifesto][8] says that the reason to integrate security into dev and ops at all levels is to implement security with less friction, foster innovation, and make sure security and data privacy are not left behind.
+
+Therefore, DevSecOps encourages security practitioners to adapt and change their old, existing security processes and procedures. This may sound easy, but changing processes, behavior, and culture is always difficult, especially in large environments.
+
+The DevSecOps principle's basic requirement is to introduce a security culture and mindset across the entire application development and deployment process. This means old security practices must be replaced by more agile and flexible methods so that security can iterate and adapt to the fast-changing environment. According to the DevSecOps manifesto, security needs to "operate like developers to make security and compliance available to be consumed as services."
+
+DevSecOps should look like the figure below, where security is embedded across the delivery cycle and can iterate every time there is a need for change or adjustment.
+
+![DevSecOps considers security throughout development][9]
+
+(Michael Calizo, [CC BY-SA 4.0][6])
+
+### Common DevSecOps obstacles
+
+Any time changes are introduced, people find faults or issues with the new process. This is natural human behavior. The fear and inconvenience associated with learning new things are always met with adverse reactions; after all, humans are creatures of habit.
+
+Some common obstacles in DevSecOps adoption include:
+
+ * **Vendor-defined DevOps/DevSecOps:** This means principles and processes are focused on product offerings, and the organization won't be able to build the approach. Instead, they will be limited to what the vendor provides.
+
+ * **Nervous people managers:** The fear of losing control is a real problem when change happens. Often, anxiety affects people managers' decision-making.
+ * **If it ain't broke, don't fix it:** This is a common mindset, and you really can't blame people for thinking this way. But the idea that the old way will survive despite new ways of delivering software and solutions must be challenged. To adapt to the agile application lifecycle, you need to change the processes to support the speed and agility it requires.
+ * **The Netflix and Uber effect:** Everybody knows that Netflix and Uber have successfully implemented DevSecOps; therefore, many organizations want to emulate them. Because they have a different culture than your organization, simply emulating them won't work.
+ * **Lack of measurement:** DevOps and DevSecOps transformation must be measured against set goals. Metrics might include software delivery performance or overall organization performance over time.
+ * **Checklist-driven security:** By using a checklist, the security team follows the same old, static, and inflexible processes that are neither useful nor applicable to modern technologies that developers use to make software delivery lean and agile. The introduction of the "[as code][10]" approach requires security people to learn how to code.
+ * **Security as a special team:** This is especially true in organizations transitioning from the old ways of delivering software, where security is a separate entity, to DevOps. Because of the separations, trust is questionable among devs, ops, and security. This will cause the security team to spend unnecessary time reviewing and governing DevOps processes and building pipelines instead of working closely with developers and ops teams to improve the software delivery flow.
+
+
+
+### How to adopt DevSecOps successfully
+
+Adopting DevSecOps is not easy, but being aware of common obstacles and challenges is key to your success.
+
+Clearly, the biggest and most important change an organization needs to make is its culture. Cultural change usually requires executive buy-in, as a top-down approach is necessary to convince people to make a successful turnaround. You might hope that executive buy-in makes cultural change follow naturally, but don't expect smooth sailing—executive buy-in alone is not enough.
+
+To help accelerate cultural change, the organization needs leaders and enthusiasts that will become agents of change. Embed these people in the dev, ops, and security teams to serve as advocates and champions for culture change. This will also establish a cross-functional team that will share successes and learnings with other teams to encourage wider adoption.
+
+Once that is underway, the organization needs a DevSecOps use case to start with, something small with a high potential for success. This enables the team to learn, fail, and succeed without affecting the organization's core business.
+
+The next step is to identify and agree on the definition of success. The DevSecOps adoption needs to be measurable; to do that, you need a dashboard that shows metrics such as:
+
+ * Lead time for a change
+ * Deployment frequency
+ * Mean time to restore
+ * Change failure rate
+
+
+
+These metrics are a critical requirement to be able to identify processes and other things that require improvement. It's also a tool to declare if an adoption is a win or a bust. This methodology is called [event-driven transformation][11].
+
+### Conclusion
+
+When implemented properly, DevOps enables an organization to deliver software to production quickly and gain advantages over competitors. DevOps allows it to fail small and recover faster by enabling flexibility and efficiency to go to market early.
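The four dashboard metrics listed earlier are straightforward to compute once basic deployment events are recorded. The sketch below is a minimal, hypothetical illustration — the record layout and function name are assumptions of mine, not part of any standard DevSecOps tooling:

```python
from datetime import datetime

def delivery_metrics(deployments, period_days):
    """Compute the four delivery metrics from deployment records.
    Each record: (committed, deployed, failed, restored_or_None)."""
    lead_hours = [
        (dep - com).total_seconds() / 3600
        for com, dep, _, _ in deployments
    ]
    failures = [d for d in deployments if d[2]]
    restore_hours = [
        (res - dep).total_seconds() / 3600
        for _, dep, _, res in failures if res is not None
    ]
    return {
        "avg_lead_time_h": sum(lead_hours) / len(lead_hours),
        "deploy_frequency_per_day": len(deployments) / period_days,
        "mean_time_to_restore_h":
            sum(restore_hours) / len(restore_hours) if restore_hours else 0.0,
        "change_failure_rate": len(failures) / len(deployments),
    }

# Two deployments over a 10-day window; the second one failed and
# was restored three hours after it went out.
t = datetime
deps = [
    (t(2021, 2, 1, 9), t(2021, 2, 1, 17), False, None),
    (t(2021, 2, 8, 9), t(2021, 2, 9, 9), True, t(2021, 2, 9, 12)),
]
metrics = delivery_metrics(deps, period_days=10)
```

A dashboard built on numbers like these gives the team an objective way to declare a DevSecOps experiment a win or a bust.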
+ +In summary, DevOps and DevSecOps adoption needs: + + * Cultural change + * Executive buy-in + * Leaders and enthusiasts to act as evangelists + * Cross-functional teams + * Measurable indicators + + + +Ultimately, the solution to the DevSecOps dilemma relies on cultural change to make the organization better. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/devsecops + +作者:[Mike Calizo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mcalizo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/target-security.png?itok=Ca5-F6GW (Target practice) +[2]: https://opensource.com/resources/devops +[3]: https://www.imd.org/research-knowledge/articles/the-battle-for-digital-disruption-startups-vs-incumbents/ +[4]: http://www.devsecops.org/blog/2015/2/15/what-is-devsecops +[5]: https://opensource.com/sites/default/files/uploads/devsecops_old-process.png (Old software development process with security at the end) +[6]: https://creativecommons.org/licenses/by-sa/4.0/ +[7]: https://opensource.com/sites/default/files/uploads/devsecops_security-as-audit.png (Security as audit in software development) +[8]: https://www.devsecops.org/ +[9]: https://opensource.com/sites/default/files/uploads/devsecops_process.png (DevSecOps considers security throughout development) +[10]: https://www.oreilly.com/library/view/devopssec/9781491971413/ch04.html +[11]: https://www.openshift.com/blog/exploring-a-metrics-driven-approach-to-transformation diff --git a/sources/talk/20210213 5 reasons why I love coding on Linux.md b/sources/talk/20210213 5 reasons why I love coding on Linux.md new file mode 100644 index 0000000000..64c968d669 --- /dev/null +++ 
b/sources/talk/20210213 5 reasons why I love coding on Linux.md @@ -0,0 +1,102 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 reasons why I love coding on Linux) +[#]: via: (https://opensource.com/article/21/2/linux-programming) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +5 reasons why I love coding on Linux +====== +Linux is a great platform for programming—it's logical, easy to see the +source code, and very efficient. +![Terminal command prompt on orange background][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different ways to use Linux. Here I'll explain why so many programmers choose Linux. + +When I first started using Linux, it was for its excellent multimedia support because I worked in film production. We found that the typical proprietary video editing applications couldn't handle most of the footage we were pulling from pretty much any device that could record an image. At the time, I wasn't aware that Linux had a reputation as an operating system for servers and programmers. The more I did on Linux, the more I wanted to control every aspect of it. And in the end, I discovered that a computer was at its most powerful when its user could "speak" its language. Within a few years of switching to Linux, I was scripting unattended video edits, stringing together audio files, bulk editing photos, and anything else I could imagine and then invent a solution for. It didn't take long for me to understand why programmers loved Linux, but it was Linux that taught me to love programming. + +It turns out that Linux is an excellent platform for programmers, both new and experienced. It's not that you _need_ Linux to program. There are successful developers on all different kinds of platforms. However, Linux has much to offer developers. Here are a few things I've found useful. 
+ +### Foundations of logic + +Linux is built around [automation][2]. It's very intentional that staple applications on Linux can at the very least launch from a terminal with additional options, and often they can even be used entirely from the terminal. This idea is sometimes mistaken as a primitive computing model because people mistakenly believe that writing software that operates from a terminal is just doing the bare minimum to get a working application. This is an unfortunate misunderstanding of how code works, but it's something many of us are guilty of from time to time. We think _more_ is always better, so an application containing 1,000 lines of code must be 100 times better than one with ten lines of code, right? The truth is that all things being equal, the application with greater flexibility is preferable, regardless of how that translates to lines of code. + +On Linux, a process that might take you an hour when done manually can literally be reduced to one minute with the right terminal command, and possibly less when parsed out to [GNU Parallel][3]. This phenomenon requires a shift in how you think about what a computer does. For instance, if your task is to add a cover image to 30 PDF files, you might think this is a sensible workflow: + + 1. Open a PDF in a PDF editor + 2. Open the front cover + 3. Append a PDF to the end of the cover file + 4. Save the file as a new PDF + 5. Repeat this process for each old PDF (but do not duplicate this process for each new PDF) + + + +It's mostly common sense stuff, and while it's painfully repetitive, it does get the job done. On Linux, however, you're able to work smarter than that. The thought process is similar: First, you devise the steps required for a successful result. 
After some research, you'd find out about the `pdftk-java` command, and you'd discover a simple solution: + + +``` +$ pdftk A=cover.pdf B=document_1.pdf \ + cat A B \ + output doc+cover_1.pdf +``` + +Once you've proven to yourself that the command works on one document, you take time to learn about looping options, and you settle on a parallel operation: + + +``` +$ find ~/docs/ -name "*.pdf" | \ + parallel pdftk A=cover.pdf B={} \ + cat A B \ + output {.}.cover.pdf +``` + +It's a slightly different way of thinking because the "code" you write processes data differently than the enforced linearity you're used to. However, getting out of your old linear way of thinking is important for writing actual code later, and it has the side benefit of empowering you to compute smarter. + +### Code connections + +No matter what platform you're programming for when you write code, you're weaving an intricate latticework of invisible connections between many different files. In all but the rarest cases, code draws from headers and relies on external libraries to become a complete program. This happens on any platform, but Linux tends to encourage you to understand this for yourself, rather than blindly trusting the platform's development kit to take care of it for you. + +Now, there's nothing wrong with trusting a development kit to resolve _library_ and _include_ paths. On the contrary, that kind of abstraction is a luxury you should enjoy. However, if you never understand what's happening, then it's a lot harder to override the process when you need to do something that a dev kit doesn't know about or to troubleshoot problems when they arise. + +This translates across platforms, too. You can develop code on Linux that you intend to run on Linux as well as other operating systems, and your understanding of how code compiles helps you hit your targets. + +Admittedly, you don't learn these lessons just by using Linux. 
It's entirely possible to blissfully code in a good IDE and never give a thought to what version of a library you have installed or where the development headers are located. However, Linux doesn't hide anything from you. It's easy to dig down into the system, locate the important parts, and read the code they contain. + +### Existing code + +Knowing where headers and libraries are located is useful, but having them to reference is yet another added bonus to programming on Linux. On Linux, you get to see the source code of basically anything you want (excluding applications that aren't open source but that run on Linux). The benefit here cannot be overstated. As you learn more about programming in general or about programming something new to you, you can learn a lot by referring to existing code on your Linux system. Many programmers have learned how to code by reading other people's open source code. + +On proprietary systems, you might find developer documentation with code samples in it. That's great because documentation is important, but it pales in comparison to locating the exact functionality you want to implement and then finding the source code demonstrating how it was done in an application you use every day. + +### Direct access to peripherals + +Something I often took for granted after developing code for media companies using Linux is access to peripherals. For instance, when connecting a video camera to Linux, you can pull incoming data from **/dev/video0** or similar. Everything's in **/dev**, and it's always the shortest path between two points to get there. + +That's not the case on other platforms. Connecting to systems outside of the operating system is often a maze of SDKs, restricted libraries, and sometimes NDAs. This, of course, varies depending on what you're programming for, but it's hard to beat Linux's simple and predictable interface. 
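As a small illustration of how predictable that interface is, enumerating capture devices is nothing more than a directory listing. This is a hedged sketch — the helper name is my own, and the `dev_dir` parameter exists only so the function can be pointed at something other than a live `/dev`:

```python
from pathlib import Path

def list_video_devices(dev_dir="/dev"):
    """Return sorted paths of video capture nodes (e.g. /dev/video0)
    found under dev_dir."""
    return sorted(str(p) for p in Path(dev_dir).glob("video[0-9]*"))

# On a Linux machine with a webcam plugged in, this typically yields
# entries such as /dev/video0 and /dev/video1; with no camera it is
# simply an empty list.
cameras = list_video_devices()
```

No SDK, no NDA, no restricted library — just files on a path.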
+
+### Abstraction layers
+
+Conversely, Linux also provides a healthy set of abstraction layers for when direct access or manual coding ends up creating more work than you want to do. There are conveniences found in Qt and Java, and there are whole stacks like PulseAudio, PipeWire, and GStreamer. Linux _wants_ you to be able to code, and it shows.
+
+### Add to this list
+
+There are more reasons that make coding on Linux a pleasure. Some are broad concepts and others are overly specific details that have saved me hours of frustration. Linux is a great place to be, no matter what platform you're targeting. Whether you're just learning about programming or you're a coder looking for a new digital home, there's no better workspace for programming than Linux.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-programming
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
+[2]: https://opensource.com/article/20/11/orchestration-vs-automation
+[3]: https://opensource.com/article/18/5/gnu-parallel
diff --git a/sources/talk/20210213 How open source provides students with real-world experience.md b/sources/talk/20210213 How open source provides students with real-world experience.md
new file mode 100644
index 0000000000..ae5db3c45e
--- /dev/null
+++ b/sources/talk/20210213 How open source provides students with real-world experience.md
@@ -0,0 +1,112 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: 
subject: (How open source provides students with real-world experience)
+[#]: via: (https://opensource.com/article/21/2/open-source-student)
+[#]: author: (Laura Novich https://opensource.com/users/laura-novich)
+
+How open source provides students with real-world experience
+======
+Contributing to open source gives students the real-world experience
+required to land a good job.
+![Woman sitting in front of her computer][1]
+
+In the movie _The Secret of My Success_, Brantley Foster (played by Michael J. Fox) expresses the exact thought that goes through every new graduate's mind: "How can I get any experience until I get a job that _gives_ me experience?"
+
+The hardest thing to do when starting a new career is to get experience. Often this creates a paradox. How do you get work with no experience, and how do you get experience with no work?
+
+In the open source software world, this conundrum is a bit less daunting because your experience is what you make of it. By working with open source projects sponsored by open source software (OSS) companies, you gain experience working on projects you like for companies that make you feel important, and then you use that experience to help find employment.
+
+Most companies would never allow newbies to touch their intellectual property and collateral without signing a non-disclosure agreement (NDA) or going through some kind of training or security check. However, when your source code is open and anyone in the world can contribute to it (in addition to copying and using it), this is no longer an issue. In fact, open source companies embrace their contributors and create communities where students can easily get their feet wet and find their way in coding, testing, and documentation. Most open source companies depend on the contributions of others to get work done. This means the contributors work for free simply because they want to. 
For students, it translates into an unpaid internship and getting some real-world experience. + +At [Our Best Words][2], we decided to run a pilot project to see if our students could work in an open source documentation project and find the experience beneficial to jumpstarting their new careers in technical communication. + +![GitLab screenshot][3] + +(Laura Novich, [CC BY-SA 4.0][4]) + +I was the initiator and point of contact for the project, and I approached several companies. The company that gave us the most positive response was [GitLab][5]. GitLab is a company that creates software for Git repository management, issue tracking, and continuous integration/continuous delivery (CI/CD) pipeline management. Its software is used by hundreds of thousands of organizations worldwide, and in 2019, it announced it achieved $100 million in annual recurring revenue. + +GitLab's [Mike Jang][6] connected me with [Marcin Sedlak-Jakubowski][7] and [Marcia Dias Ramos][8], who are located closer to OBW's offices in Israel. We met to hammer out the details, and everyone had their tasks to launch the pilot in mid-September. Mike, Marcia, and Marcin hand-picked 19 issues for the students to solve. Each issue was tagged _Tich-Tov-only_ for OBW students, and any contributor who was not an OBW student was not allowed to work on the issue. + +To prepare the students, I held several demonstrations with GitLab. The students had never used the software before, and some were quite nervous. As the backbone of GitLab is Git, a software tool the students were already familiar with, it wasn't too hard to learn. Following the demonstrations, I sent the students a link to a Google Drive folder with tutorials, a FAQ, and other valuable resources. + +The issues the students were assigned came from GitLab's documentation. The documentation is written in Markdown and is checked with a linter (a static code analysis tool) called Vale. 
The students' assignments were to fix issues that the Vale linter found. The changes included fixing spelling, grammar, usage, and voice. In some cases, entire pages had to be rewritten.
+
+As I wanted this project to run smoothly and successfully, we decided to limit the pilot to seven of our 14 students. This allowed me to manage the project more closely and to make sure each student had only two to three issues to handle during the project's two-month time period.
+
+![GitLab repo][9]
+
+(Laura Novich, [CC BY-SA 4.0][4])
+
+The OBW students who were part of this project (with links to their GitLab profiles) were:
+
+  * [Aaron Gorsuch][10]
+  * [Anna Lester][11]
+  * [Avi Chazen][12]
+  * [Ela Greenberg][13]
+  * [Rachel Gottesman][14]
+  * [Stefanie Saffern][15]
+  * [Talia Novich][16]
+
+
+
+We worked mostly during September and October and wrapped up the project in November. Every issue the students had was put on a Kanban board. We had regular standup meetings, where we discussed what we were doing and any issues causing difficulty. There were many teachable moments where I would help with repository issues, troubleshoot merge requests, and help the students see technical writing theory in practice.
+
+![Kanban board][17]
+
+(Laura Novich, [CC BY-SA 4.0][4])
+
+November came faster than we expected, and looking back, the project ended way too quickly. About midway in, I collected feedback from Marcin, Marcia, and Mike, and they told me their experience was positive. They told us that once we were done, we could take on more issues than the original allotment assigned to the group, if we wanted.
+
+One student, Rachel Gottesman, did just that. She completed 33 merge requests and helped rewrite pages of GitLab's documentation. She was so instrumental to the 13.7 release that [GitLab announced][18] that Rachel was the MVP for the release! We at OBW couldn't be more thrilled! 
_Congratulations, Rachel!_ + +![Rachel Gottesman's GitLab MVP award][19] + +(Laura Novich, [CC BY-SA 4.0][4]) + +Rachel's name appears [on GitLab's MVP page][20]. + +We are gearing up for our new year and a new course. We plan to run this project again as part of our [Software Documentation for Technical Communication Professionals][21] course. + +* * * + +_This article originally appeared on [Our Best Words Blog][22] and is republished with permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/open-source-student + +作者:[Laura Novich][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/laura-novich +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA (Woman sitting in front of her computer) +[2]: https://ourbestwords.com/ +[3]: https://opensource.com/sites/default/files/uploads/gitlab.png (GitLab screenshot) +[4]: https://creativecommons.org/licenses/by-sa/4.0/ +[5]: https://about.gitlab.com/ +[6]: https://www.linkedin.com/in/mijang/ +[7]: https://www.linkedin.com/in/marcin-sedlak-jakubowski-91143a124/ +[8]: https://www.linkedin.com/in/marciadiasramos/ +[9]: https://opensource.com/sites/default/files/uploads/gitlab_obw_repo.png (GitLab repo) +[10]: https://gitlab.com/aarongor +[11]: https://gitlab.com/Anna-Lester +[12]: https://gitlab.com/avichazen +[13]: https://gitlab.com/elagreenberg +[14]: https://gitlab.com/gottesman.rachelg +[15]: https://gitlab.com/stefaniesaffern +[16]: https://gitlab.com/TaliaNovich +[17]: https://opensource.com/sites/default/files/uploads/kanban.png (Kanban board) +[18]: https://release-13-7.about.gitlab-review.app/releases/2020/12/22/gitlab-13-7-released/ +[19]: 
https://opensource.com/sites/default/files/uploads/rachelgottesmanmvp.png (Rachel Gottesman's GitLab MVP award)
+[20]: https://about.gitlab.com/community/mvp/
+[21]: https://ourbestwords.com/training-courses/skills-courses/
+[22]: https://ourbestwords.com/topic/blog/
diff --git a/sources/talk/20210214 Give something from the heart to the public domain.md b/sources/talk/20210214 Give something from the heart to the public domain.md
new file mode 100644
index 0000000000..07ea72f223
--- /dev/null
+++ b/sources/talk/20210214 Give something from the heart to the public domain.md
@@ -0,0 +1,70 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Give something from the heart to the public domain)
+[#]: via: (https://opensource.com/article/21/2/public-domain)
+[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike)
+
+Give something from the heart to the public domain
+======
+The team at Creative Commons wants you to share a creation to the Public
+Domain as a show of support for openly sharing content.
+![Red Lego Heart][1]
+
+Did you know that most of the articles published on Opensource.com are licensed under Creative Commons [BY-SA 4.0][2]?
+
+One of the biggest reasons our editorial team decided on this license over 10 years ago is that we support the idea that the best content is _shared_ content. As we strive to be open, our goal is for as many people as possible to have access to the information we're putting out there to support our mission to help others learn and grow and to explore new open source worlds.
+
+So, what does this have to do with Valentine's Day (or Friend's Day in Finland)?
+
+The team at Creative Commons wants you to [share a creation][3] to the [Public Domain][4] as a show of support for openly sharing content. It could be an image, song, artwork, poem, GIF, research paper... the list goes on! But "why?" I hear you asking. 
What's so special about the Public Domain? Well, did you hear about Nathan Evans's rendition of the sea shanty song "The Wellerman" that went viral? Here's what Creative Commons CEO Catherine Stihler [had to say about it][5]:
+
+"It is important to note that this flourishing creative scene is only possible because sea shanties are in the public domain—not under restrictive copyright rules. Therefore, they can be played, reused, dueted, remixed, and transformed. This, combined with the internet, means a postal worker in Airdrie can reach a global audience within seconds. Thanks to emerging technologies and social platforms, the public domain can both enable creativity and benefit from it with the invention of new works that are also free of copyright restrictions. (The hope is that these new works are put back into the public domain!) This expressiveness in new works and collaborations is bringing joy and uplifting our spirits as we continue to face daunting challenges."
+
+So, I'll add a twist on their Valentine's Day challenge: If you want to share something you've created but aren't sure about going totally Public Domain, check out the other [options for licensing][6] from CC BY to CC0. And to lower the bar even further and entice you to join in: you don't have to create something new right now. Search through your archives for something you created last year or five years ago, and give it new life today with a new license.
+
+ 1. Find the thing you want to share
+ 2. If it's not already, prepare it in digital format
+ 3. [Select the place][6] you want to share it to, like the Internet Archive, Flickr, etc.
+ 4. Upload and append attribution, including "Public Domain" or [another Creative Commons open license][7]
+
+
+
+I'm a plant person, so [here's mine][8]!
+
+![Neon-colored herbal bundles][9]
+
+“[Neon-colored herbal bundles][8]” by Jen Wike Huger is licensed under Public Domain. 
+
+If you aren't in the creating mood but want to find some great CC-licensed content, use the [search.creativecommons.org][10] tool. Then, take a peek at the best way to [give credit back][11] when you use what you find.
+
+Virtual high-fives and sweet treats from our team to you and yours!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/public-domain
+
+作者:[Jen Wike Huger][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jen-wike
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/heart_lego_ccby20.jpg?itok=VRpHf4iU (Red Lego Heart)
+[2]: https://creativecommons.org/licenses/by-sa/4.0/
+[3]: https://creativecommons.org/2021/02/10/open-sharing-is-caring/
+[4]: https://creativecommons.org/share-your-work/public-domain/
+[5]: https://creativecommons.org/2021/02/03/the-postal-worker-a-sea-shanty-and-the-public-domain/
+[6]: https://creativecommons.org/share-your-work/
+[7]: https://creativecommons.org/choose/
+[8]: https://archive.org/details/neon_inverted_herbal_bundle
+[9]: https://opensource.com/sites/default/files/uploads/neon_inverted_herbal_bundle.jpg (Neon-colored herbal bundles)
+[10]: https://search.creativecommons.org/
+[11]: https://creativecommons.org/use-remix/attribution/
diff --git a/sources/talk/20210215 Why everyone should try using Linux.md b/sources/talk/20210215 Why everyone should try using Linux.md
new file mode 100644
index 0000000000..d32154a80b
--- /dev/null
+++ b/sources/talk/20210215 Why everyone should try using Linux.md
@@ -0,0 +1,107 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why everyone should try using Linux) +[#]: via: (https://opensource.com/article/21/2/try-linux) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Why everyone should try using Linux +====== +If you're curious about Linux, the open source community makes sure it's +easy to try. +![Woman sitting in front of her computer][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Let's explore why anyone can try Linux. + +Linux can seem mysterious to the uninitiated. People talk about Linux like it's something only meant for computer programmers or sysadmins, yet people also talk about running Linux on laptops or mobile devices. Tech sites list the [top 10 Linux commands][2] you [need to know][3], and yet there's just as much talk about [how exciting the Linux desktops (plural!) are][4]. + +So what's Linux all about, really? + +Well, one of the many important things about Linux is its dedication to availability. If you're curious about Linux, the open source community makes sure it's easy to try. + +### Try Linux without trying Linux + +It doesn't matter whether you're switching from Windows to Linux, Windows to Mac (or Mac to Windows), or from Mac to Linux: Changing your computing style is hard. It's not often discussed, but modern computing is a strangely personal thing. Everyone gets comfortable with their own computer, and they tend to dislike big changes. Just shifting back and forth between my work computer and my personal computer (both of which run the same OS) requires mental and muscle adjustment for me, simply because the two are "optimized" for different kinds of activities. 
Even though I inevitably end up being referred to just as "the guy who knows computers" in my local communities (and I do like to think that I do), were I forced to change the OS I use, my sense of productivity and enjoyment would plummet for a week or two. The thing is, it's mostly superficial. I know how to use other operating systems, but I'd need time to establish new instincts and habits. I'd need a chance to remember where some of the small configuration options are located in a new OS, and I'd have to discover new features to capitalize upon. + +For this reason, I often tell people who are curious about Linux that the first step is to use Linux _applications_. There aren't actually that many applications exclusive to Linux, largely because most apps on Linux are open source and are therefore eagerly ported between all platforms. But there are lots of applications commonly found on Linux that you can also try on your current non-Linux platform. Make it a goal to replace the applications you default to, either by force of habit or convenience, with open source equivalents. + +### Replacing apps + +The end goal of this exercise is to make a soft transition to Linux by way of the applications you'll eventually be running. Once you're used to a new set of applications, there's not much left to get used to on Linux, aside from system settings and file management. + +If you're not sure what applications you use the most, just take a look at your **Recent Applications** menu (if your OS doesn't have a Recent Applications menu, then it might be time to switch to Linux). Once you've identified your must-have applications, take a look at our [Application Alternatives][5] page to learn about many of the common open source apps considered equivalents to popular proprietary ones. + +### Getting Linux + +Not all open source applications get ported to all platforms, and many ultimately benefit from running on Linux. 
Eventually, if you're keen to switch to Linux, it pays to complete the transition. Luckily, getting Linux is as easy as downloading it, almost as if it were just another open source app. + +Usually, a Linux installation image features a Live mode and an Installation mode. That means you can boot from a Linux drive and use it in Live mode without actually installing it to your computer. This is a great way to get a hint of what the OS is like, but it's only a temporary experience because data isn't retained between boots. However, there are distributions, like [Porteus Linux][6], specially designed for running exclusively off of a USB thumb drive so that your personal data is retained. I keep a Porteus drive on my keychain, so no matter where I am, I have a bootable Linux image with me. + +![Porteus Live Linux distribution][7] + +The Porteus Live Linux distribution desktop. + +### The best Linux + +Once you've decided to install Linux, you'll find plenty of distributions available for the low cost of $0. If you're used to having only one or two choices in an operating system (for instance, a Home Edition and a Business Edition), then you probably find the idea of several different _distributions_ of Linux confusing. It seems more complicated than it actually is, though. Linux is Linux, and it rarely makes much of a difference whose "flavor" you download and install. The big names, like [Fedora][8], [Debian][9], [Linux Mint][10], and [Elementary][11], all deliver the same experience with a slightly unique emphasis. + + * Fedora is famous for being first to update its software + * Linux Mint provides easy options to install missing drivers + * Debian is well known for its long-term support, so updates are slow, but reliability is high + * Elementary provides a beautiful desktop experience and several special, custom-built applications + + + +Ultimately, the "best" Linux is the Linux that works best for you. 
I mean that literally: The best Linux for you is the one you try and find that all your computer's features still work as expected.
+
+### Getting drivers
+
+Most drivers are already bundled in the Linux kernel (or as a kernel module). This is especially true when you're dealing with parts and peripherals a year or two old. With equipment that's been on the market for a while, Linux programmers have had the chance to develop drivers or integrate drivers from companies into the system. I've surprised more than a few people simply by attaching a Wacom graphics tablet, or a game controller, or a printer or scanner, to my Linux computer and immediately starting to use it, with no driver download, no installation, and little to no configuration.
+
+Proprietary operating systems have two strategies for dealing with device drivers. They either restrict what devices you're supposed to use with the OS, or they pay companies to write and ship drivers along with the devices. The first method is uncomfortably restrictive, but most people are (sadly) acclimated to the idea that some devices only work on some platforms. The latter method is considered a luxury, but in practice, it also has the disadvantage of being dependent upon the driver programmer. There are plenty of computer peripherals in thrift stores in perfect condition but essentially dead because the manufacturer no longer maintains the drivers the devices require to function with modern systems.
+
+For Linux, drivers are developed either by the manufacturer or by Linux programmers. This can cause some driver integration to be delayed (for instance, a new device might not work on Linux until six months after release), but it has the distinct advantage of long-term support.
+
+Occasionally, you may find that a driver hasn't made its way into a distribution yet. 
You can wait a few months and try again in hopes that a driver has been integrated into the installer image, or you can just try a different distribution to see whether the driver is there. + +### Paying for Linux + +You can avoid all the choices and any concern about compatibility by buying a PC already loaded with Linux or a PC certified for Linux. Several vendors are offering Linux pre-installed, including all [System76][12] computers, and select [Lenovo][13] models. Additionally, all Lenovo models are [Certified for Linux][14] compatibility. + +This is by far the easiest way to try Linux. + +### Working toward Linux + +Here's a challenge for you. Go through each application you have installed, and find a potential open source replacement. Select an application you use frequently, but not daily, and the next time you're about to use it, launch the open source one instead. Take the time you need to learn the new application until you're either able to adopt it as your new default or until you determine that you need to try a different option. + +The more open source products you use, the more you'll be ready to launch into the exciting world of Linux. And eventually, you'll be running the Linux desktop, along with your favorite open source apps. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/try-linux + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_3.png?itok=qw2A18BM (Woman sitting in front of her computer) +[2]: https://opensource.com/article/19/12/linux-commands +[3]: https://opensource.com/article/18/4/10-commands-new-linux-users +[4]: https://opensource.com/article/20/5/linux-desktops +[5]: https://opensource.com/alternatives +[6]: http://porteus.org +[7]: https://opensource.com/sites/default/files/porteus5.png +[8]: http://getfedora.org +[9]: http://debian.org +[10]: http://linuxmint.com +[11]: http://elementary.io +[12]: http://system76.com +[13]: http://lenovo.com +[14]: https://forums.lenovo.com/t5/Linux-Operating-Systems/ct-p/lx_en diff --git a/sources/talk/20210216 How Ansible got started and grew.md b/sources/talk/20210216 How Ansible got started and grew.md new file mode 100644 index 0000000000..baa71d6ea1 --- /dev/null +++ b/sources/talk/20210216 How Ansible got started and grew.md @@ -0,0 +1,86 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How Ansible got started and grew) +[#]: via: (https://opensource.com/article/21/2/ansible-origin-story) +[#]: author: (Ben Rometsch https://opensource.com/users/flagsmith) + +How Ansible got started and grew +====== +Ansible's founder Michael DeHaan shares how his background led him to +develop the IT automation software. 
+![gears and lightbulb to represent innovation][1] + +Recently, Flagsmith founder Ben Rometsch [spoke to Michael DeHaan][2], founder of open source IT automation software Ansible (now part of IBM/Red Hat), on [The Craft of Open Source podcast][3] about how he developed Ansible and what he's been doing since. + +> "If people aren't successful trying (your app/tool) out in about 30 minutes, they're going to move on. You have to make somebody successful within their lunch hour. I spent a lot of time thinking about the documentation." +> —Michael DeHaan + +His remarks were fascinating and enlightening, especially for anyone interested in open source software as a developer, creator, or community member. Read on for a summary of the conversation. + +### How Michael started in open source + +When Michael joined Red Hat's emerging technologies team in 2005, it was the beginning of his journey with open source projects. The team gave him carte blanche to work on any projects he wanted, as long as they helped customers. + +At the time, Xen and KVM were becoming available, and the team wanted to create a good solution to automate a PXE bare-metal infrastructure. Michael created his first open source project, [Cobbler][4], to automate those installs, and it became fairly widely used. + +Func was Michael's next open source project. It came from filling in the gaps between software that enabled bare-metal provisioning and configuration management tools. Func became fairly well deployed, with Fedora using it in some of its infrastructure. Some of Func's concepts and ideas later influenced Ansible. + +### The birth of Ansible + +Ansible came after Michael spent a short spell working for Puppet. Afterward, he worked for another company that was trying to create an integration, but the job wasn't a good fit, and he wanted to return to working on a project in the open source community. 
+
+Frustrated that it still took several days (or longer) to get a setup working due to [DNS][5] and [NTP][6] problems, Michael decided to create an open source solution to automate installations. The idea was to build something SSH- and push-based, without a load of management agents.
+
+[Ansible][7] was the result of this design goal. It provided an easy, quick solution, sparing users the hours or days they would otherwise spend with tools like Chef and Puppet. At the time, companies were employing full-time teams of people to manage cloud installations and configurations. Ansible provided a solution that one person could employ in less than one day.
+
+### Ansible: The right tool for the right time
+
+Part of Ansible's success was due to timing, Michael says. There was a demand for greater cloud integration and a quick, convenient way to upload content and install apps on cloud systems.
+
+It would be difficult for a full-time developer to have the time to create something like Ansible in today's world, as things are much more highly scheduled. He says people expect a complete product, not a work in progress with a supportive user community that has the time to experiment and play with a new coding language. Times have changed.
+
+Also, Ansible was released in 2012, when the demand for [automation was growing][8] more rapidly than now. While some companies have mastered the art of automation and are focusing more on Kubernetes and other things, others continue to discover it. Most companies do their own programming now, with teams of developers creating fully polished solutions. These days, people are using React and Kubernetes, which build on dependencies, rather than starting from scratch on an open source project.
+
+With Ansible, most of the open source work was in the libraries and the packaging. Michael remembers that they were learning as they went along, initially unsure of how to run a systems-management project in a way that gets lots of feedback from people and can evolve. 
Puppet and Chef provided a template for how to accomplish this, mostly through trial and error. Michael set up a large number of IRC channel conversations, which allowed direct communication with various use cases. + +Ansible's documentation was key for developers taking it to the next level. It was simple, to the point, used short and persuasive sentences, and included free trial offers. The documentation made it clear to users that Ansible could be run on their own infrastructure without security concerns. + +### Marketing and promotion + +The buzz around Ansible was generated mainly through blog posts, articles, tutorials, and opinions shared on social media. Occasionally, it would break through onto the pages of Hacker News, or a Reddit thread would become popular.  + +Michael focused on creating shareworthy content that people could link to. He also made sure that he read every comment and tweet to gain as much feedback as possible to take the project in the right direction. + +As momentum grew, Michael hired some employees to help him scale up, but he retained creative control and overall project management. When Michael left the Ansible team in 2015, there were around 400 modules, which made management pretty tricky. + +### Life after Ansible + +Michael now works mainly on Django and Python software development, with some apps and projects on the side, including the [WARP][9] music sequencer. + +To discover more about Ansible and Michael's views on today's open source landscape, burnout, GitHub, Google Reader, and much more, [check out the full podcast][2]. 
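For readers who have never seen Ansible in action, the agentless, SSH-and-push design described above shows up directly in how it's used: a plain YAML playbook pushed over SSH to the hosts in an inventory. Here's an illustrative sketch (the host group `web` and the package choice are placeholder assumptions, not details from the podcast):

```
---
# Minimal playbook sketch; run with: ansible-playbook -i inventory.ini ntp.yml
# No agent is installed on the hosts -- Ansible connects over SSH and pushes
# the module code for each task.
- name: Keep clocks in sync on all web hosts
  hosts: web            # placeholder group from your inventory
  become: true
  tasks:
    - name: Ensure the ntp package is present
      ansible.builtin.package:
        name: ntp
        state: present
```

The NTP example is deliberate: drifting clocks were one of the setup frustrations that pushed Michael toward automation in the first place.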
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/ansible-origin-story + +作者:[Ben Rometsch][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/flagsmith +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation) +[2]: https://www.flagsmith.com/podcast/03-ansible +[3]: https://www.flagsmith.com/podcast +[4]: https://en.wikipedia.org/wiki/Cobbler_(software) +[5]: https://en.wikipedia.org/wiki/Domain_Name_System +[6]: https://en.wikipedia.org/wiki/Network_Time_Protocol +[7]: https://www.ansible.com/ +[8]: https://www.redhat.com/en/blog/new-report-finds-automation-paves-way-business-and-technical-benefits-alike +[9]: https://warpseq.com/ diff --git a/sources/talk/20210216 What does being -technical- mean.md b/sources/talk/20210216 What does being -technical- mean.md new file mode 100644 index 0000000000..c6ef69f293 --- /dev/null +++ b/sources/talk/20210216 What does being -technical- mean.md @@ -0,0 +1,177 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What does being 'technical' mean?) +[#]: via: (https://opensource.com/article/21/2/what-technical) +[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych) + +What does being 'technical' mean? +====== +Dividing people into "technical" and "non-technical" labels harms people +and organizations. Learn why in part 1 of this series. 
+![question mark in chalk][1] + +The word "technical" describes many subjects and disciplines: _technical_ knock-out, _technical_ foul, _technical_ courses for rock-climbing competitions, and _technical_ scores for figure skating in sports. The popular cooking show _The Great British Bake-Off_ includes a _technical_ baking challenge. Anybody who has participated in the theatre may be familiar with _technical_ week, the week before the opening night of a play or musical. + +As you can see, the word _technical_ does not apply strictly to software engineering and operations, so when we call a person or a role "technical," what do we mean, and why do we use the term? + +Over my 20-year career in tech, these questions have intrigued me, so I decided to explore this through a series of interviews. I am not an engineer, and I don't write code, yet this does not make me non-technical. But I'm regularly labeled as such. I consider myself technical, and through this series, I hope you will come to understand why. + +I know I'm not alone in this. It is important to discuss because how a person or role is defined and viewed affects their confidence and ability to do their job well. If they feel crushed or disrespected, it will bring down their work quality and squash innovation and new ideas. It all trickles down, you see, so how can we improve this situation? + +I started by interviewing seven people across a variety of roles. + +In this series, I'll explore the meaning behind the word "technical," the technical continuum, the unintended side effects of categorizing people as technical or non-technical, and technical roles that are often considered non-technical. + +### Defining technical and non-technical + +To start, we need definitions. 
According to Dictionary.com, "technical" is an adjective with multiple meanings, including: + + * Belonging or pertaining to an art, science, or the like + * Skilled in or familiar in a practical way with a particular art or trade + * Technically demanding or difficult (typically used in sports or arts) + + + +The term "non-technical" is often used in tech companies to describe people in non-engineering roles. The definition of "non-technical" is "not relating to, characteristic of, or skilled in a particular field of activity and its terminology." + +As somebody who writes and speaks about technology, I consider myself technical. It is impossible to write or speak about a technical subject if you aren't familiar with the field and the terminology. With this understanding, everyone who works in tech is technical. + +### Why we assign labels + +So why does technical vs. non-technical matter in the technology field? What are we trying to achieve by assigning these labels? Is there a good reason, was there a good reason, and have we gotten away from those reasons and need to re-evaluate? Let's discuss. + +When I hear people talk about technical vs. non-technical people, I can't help but think of the Dr. Seuss story [_The Sneetches_][2]. Having a star (or not) was seen as something to aspire to. The Sneetches got into an infinite loop trying to achieve the right status. + +Labels can serve a purpose, but when they force a hierarchy of one group being viewed as better than another, they can become dangerous. Think about your organization or your department: Which group—sales, support, marketing, QA, engineering, etc.—is above or below another in importance? + +Even if it's not spoken directly or written somewhere, there is likely an understood hierarchy. These hierarchies often exist within disciplines, as well. Liz Harris, a technical content manager, says there are degrees of "technicalness" within the technical writing community. 
"Within technical writers, there's a hierarchy where the more technical you are, the more you get paid, and often the more you get listened to in the technical writing community." + +The term "technical" is often used to refer to the level of depth or expertise a person has on a subject. A salesperson may ask for a technical resource to help a customer. By working in tech, they are technical, but they need somebody with deeper expertise and knowledge about a subject than they have. So requesting a technical resource may be vague. Do you need a person with in-depth knowledge of the product? Do you need a person with knowledge of the infrastructure stack? Or somebody to write down steps on how to configure the API? + +Instead of viewing people as either technical or not, we need to start viewing technical ability as a continuum. What does this mean? Mary Thengvall, a director of developer relations, describes how she categorizes the varying depths of technical knowledge needed for a particular role. For instance, projects can require a developer, someone with a developer background, or someone tech-savvy. It's the people who fall into the tech-savvy category who often get labeled as non-technical. + +According to Mary, you're tech-savvy if "you can explain [a technical] topic, you know your way around the product, you know the basics of what to say and what not to say. You don't have to have a technical background, but you need to know the high-level technical information and then also who to point people to for more information." + +### The problem with labels + +When we're using labels to get specific about what we need to get a job done, they can be helpful, like "developer," "developer background," and "tech-savvy." But when we use labels too broadly, putting people into one of two groups can lead to a sense of "less than" and "better than." 
+ +When a label becomes a reality, whether intended or not, we must look at ourselves and reassess our words, labels, and intentions. + +Senior product manager Leon Stigter offers his perspective: "As a collective industry, we are building more technology to make it easier for everyone to participate. If we say to everyone, 'you're not technical' or 'you are technical' and divide them into groups, people that are labeled as non-technical may never think, 'I can do this myself.' We actually need all those people to really think about where we are going as an industry, as a community, and I would almost say as human beings." + +#### Identity + +If we attach our identities to a label, what happens when we think that label no longer applies? When Adam Gordon Bell moved from being a developer to a manager, he struggled because he always identified as technical, and as a manager, those technical skills weren't being used. He felt he was no longer contributing value. Writing code does not provide more value than helping team members grow their careers or making sure a project is delivered on time. There is value in all roles because they are all needed to ensure the creation, execution, and delivery of goods and services. + +"I think that the reason I became a manager was that we had a very smart team and a lot of really skilled people on it, and we weren't always getting the most amazing work done. So the technical skills were not the limiting factor, right? And I think that often they're not," says Adam. + +Leon Stigter says that the ability to get people to work together and get amazing work done is a highly valued skill and should not be less valued than a technical role. + +#### Confidence + +[Impostor syndrome][3] is the inability to recognize your competence and knowledge, leading to reduced confidence and a weakened ability to get your work done and done well. 
Impostor syndrome can kick in when you apply to speak at a conference, submit an article to a tech publication, or apply for a job. Impostor syndrome is the tiny voice that says: + + * "I'm not technical enough for this role." + * "I know more technical people that would do a better job delivering this talk." + * "I can't write for a technical publication like Opensource.com. I work in marketing." + + + +These voices can get louder the more often you label somebody or yourself as non-technical. This can easily result in not hearing new voices at conferences or losing talented people on your team. + +#### Stereotypes + +What image do you see when you think of somebody as technical? What are they wearing? What other characteristics do they have? Are they outgoing and talkative, or are they shy and quiet? + +Shailvi Wakhlu, a senior director of data, started her career as a software engineer and transitioned to data and analytics. "When I was working as a software engineer, a lot of people made assumptions about me not being very technical because I was very talkative, and apparently that automatically means you're not technical. They're like, 'Wait. You're not isolating in a corner. That means you're not technical,'" she reports. + +Our stereotypes of who is technical vs. non-technical can influence hiring decisions or whether our community is inclusive. You may also offend somebody—even a person you need help from. Years ago, I was working at the booth at a conference and asked somebody if I could help them. "I'm looking for the most technical person here," he responded. He then went off in search of an answer to his question. A few minutes later, the sales rep in the booth walked over to me with the gentleman and said, "Dawn, you're the best person to answer this man's question." + +#### Stigma + +Over time, we've inflated the importance of "technical" skills, which has led to the label "non-technical" being used in a derogatory way. 
As technology boomed, the value placed on people who code increased because that skill brought new products and ways of doing business to market and directly helped the bottom line. However, now we see people intentionally place technical roles above non-technical roles in ways that hinder their companies' growth and success. + +Interpersonal skills are often referred to as non-technical skills. Yet, there are highly technical aspects to them, like providing step-by-step instructions on how to complete a task or determining the most appropriate words to convey a message or a point. These skills also are often more important in determining your ability to be successful at work. + +**[Read next: [Goodbye soft skills, hello core skills: Why IT must rebrand this critical competency][4]]** + +Reading through articles and definitions on Urban Dictionary, it's no wonder people feel justified in their labeling and others develop impostor syndrome or feel like they've lost their identity. When performing a search online, Urban Dictionary definitions often appear in the top search results. The website started about 20 years ago as a crowdsourced dictionary defining slang, cultural expressions, and other terms, and it has turned into a site filled with hostile and negative definitions. + +Here are a few examples: Urban Dictionary defines a non-technical manager as "a person that does not know what the people they manage are meant to do." + +Articles that provide tips for how to talk to "non-technical" people include phrases like: + + * "If I struggled, how on earth did the non-technical people cope?" + * "Among today's career professionals, developers and engineers have some of the most impressive skill sets around, honed by years of tech training and real-world experience." + + + +These sentences imply that non-engineers are inferior, that their years of training and real-world experiences are somehow not as impressive. 
One person I spoke to for this project was Therese Eberhard. Her job is what many consider non-technical. She's a scenic painter. She paints props and scenery for film and theatre. Her job is to make sure props like Gandalf's cane appear lifelike rather than like a plastic toy. There's a lot of problem-solving and experimenting with chemical reactions required to be successful in this role. Therese honed these skills over years of real-world experience, and, to me, that's quite impressive. + +#### Gatekeeping + +Using labels erects barriers and can lead to gatekeeping to decide who can be let into our organization, our teams, our communities. + +According to Eddie Jaoude, an open source developer, "The titles 'technical,' 'developer,' or 'tester' create barriers or authority where it shouldn't be. And I've really come to the point of view where it's about adding value to the team or for the project—the title is irrelevant." + +If we view each person as a team member who should contribute value in one way or another, not on whether they write documentation or test cases or code, we will be placing importance based on what really matters and creating a team that gets amazing work done. If a test engineer wants to learn to write code or a coder wants to learn how to talk to people at events, why put up barriers to prevent that growth? Embrace team members' eagerness to learn and change and evolve in any direction that serves the team and the company's mission. + +If somebody is failing in a role, instead of writing them off as "not technical enough," examine what the problem really is. Did you need somebody skilled at JavaScript, and the person is an expert in a different programming language? It's not that they're not technical. There is a mismatch in skills and knowledge. You need the right people doing the right role. 
If you force somebody skilled at business analysis and writing acceptance criteria into a position where they have to write automated test cases, they'll fail. + +### How to retire the labels + +If you're ready to shift the way you think about the labels technical and non-technical, here are a few tips to get started. + +#### Find alternative words + +I asked everyone I interviewed what words we could use instead of technical and non-technical. No one had an answer! I think the challenge here is that we can't boil it down to a single word. To replace the terms, you need to use more words. As I wrote earlier, what we need to do is get more specific. + +How many times have you said or heard a phrase like: + + * "I'm looking for a technical resource for this project." + * "That candidate isn't technical enough." + * "Our software is designed for non-technical users." + + + +These uses of the words technical and non-technical are vague and don't convey their full meaning. A truer and more detailed look at what you need may result in: + + * "I'm looking for a person with in-depth knowledge of how to configure Kubernetes." + * "That candidate didn't have deep enough knowledge of Go." + * "Our software is designed for sales and marketing teams." + + + +#### Embrace a growth mindset + +Knowledge and skills are not innate. They are developed over hours or years of practice and experience. Thinking, "I'm just not technical enough" or "I can't learn how to do marketing" reflects a fixed mindset. You can learn technical abilities in any direction you want to grow. Make a list of your skills—ones you think of as technical and some as non-technical—but make them specific (like in the list above). + +#### Recognize everyone's contributions + +If you work in the tech industry, you're technical. Everyone has a part to play in a project's or company's success. Share the accolades with everyone who contributes, not only a select few. 
Recognize the product manager who suggested a new feature, not only the engineers who built it. Recognize the writer whose article went viral and generated new leads for your company. Recognize the data analyst who found new patterns in the data. + +### Next steps + +In the next article in this series, I'll explore non-engineering roles in tech that are often labeled "non-technical." + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/what-technical + +作者:[Dawn Parzych][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dawnparzych +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/question-mark_chalkboard.jpg?itok=DaG4tje9 (question mark in chalk) +[2]: https://en.wikipedia.org/wiki/The_Sneetches_and_Other_Stories +[3]: https://opensource.com/business/15/9/tips-avoiding-impostor-syndrome +[4]: https://enterprisersproject.com/article/2019/8/why-soft-skills-core-to-IT diff --git a/sources/talk/20210217 4 tech jobs for people who don-t code.md b/sources/talk/20210217 4 tech jobs for people who don-t code.md new file mode 100644 index 0000000000..63af4450e1 --- /dev/null +++ b/sources/talk/20210217 4 tech jobs for people who don-t code.md @@ -0,0 +1,144 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 tech jobs for people who don't code) +[#]: via: (https://opensource.com/article/21/2/non-engineering-jobs-tech) +[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych) + +4 tech jobs for people who don't code +====== +There are many roles in tech for people who aren't engineers. Explore +some of them in part 2 of this series. 
+![Looking at a map][1] + +In the [first article in this series][2], I explained how the tech industry divides people and roles into "technical" or "non-technical" categories and the problems associated with this. The tech industry makes it difficult for people interested in tech—but not coding—to figure out where they fit in and what they can do. + +If you're interested in technology or open source but aren't interested in coding, there are roles available for you. Any of these positions at a tech company likely require somebody who is tech-savvy but does not necessarily write code. You do, however, need to know the terminology and understand the product. + +I've recently noticed the addition of the word "technical" onto job titles such as technical account manager, technical product manager, technical community manager, etc. This mirrors the trend a few years ago where the word "engineer" was tacked onto titles to indicate the role's technical needs. After a while, everybody has the word "engineer" in their title, and the classification loses some of its allure. + +As I sat down to write these articles, this tweet from Tim Banks appeared in my timeline: + +> Women who've made career changes into tech, but aren't developers (think like infosec, data science/analysts, infra engineers, etc), what are some things you'd wished you'd known, resources that were valuable, or advice you'd have for someone looking to make a similar change? +> +> — Tim Banks is a buttery biscuit (@elchefe) [December 15, 2020][3] + +This follows the advice in my first article: Tim does not simply ask about "non-technical roles"; he provides more significant context. On a medium like Twitter, where every character counts, those extra characters make a difference. These are _technical_ roles. Calling them non-technical to save characters in a tweet would have changed the impact and meaning. + +Here's a sampling of non-engineering roles in tech that require technical knowledge. 
+ +### Technical writer + +A [technical writer's job][4] is to transfer factual information between two or more parties. Traditionally, a technical writer provides instructions or documentation on how to use a technology product. Recently, I've seen the term "technical writer" refer to people who write other forms of content. Tech companies want a person to write blog posts for their developer audience, and this skill is different from copywriting or content marketing. + +**Technical skills required:** + + * Writing + * User knowledge or experience with a specific technology + * The ability to quickly come up to speed on a new product or feature + * Skill in various authoring environments + + + +**Good for people who:** + + * Can plainly provide step-by-step instructions + * Enjoy collaborating + * Have a passion for the active voice and Oxford comma + * Enjoy describing the what and how + + + +### Product manager + +A [product manager][5] is responsible for leading a product's strategy. Responsibilities may include gathering and prioritizing customers' requirements, writing business cases, and training the sales force. Product managers work cross-functionally to successfully launch a product using a combination of creative and technical skills. Product managers require deep product expertise. + +**Technical skills required:** + + * Hands-on product knowledge and the ability to configure or run a demo + * Knowledge of the technological ecosystem related to the product + * Analytical and research skills + + + +**Good for people who:** + + * Enjoy strategizing and planning what comes next + * Can see a common thread in different people's needs + * Can articulate the business needs and requirements + * Enjoy describing the why + + + +### Data analyst + +Data analysts are responsible for collecting and interpreting data to help drive business decisions such as whether to enter a new market, what customers to target, or where to invest. 
The role requires knowing how to use all of the potential data available to make decisions. We tend to oversimplify things, and data analysis is often over-simplified. Getting the right information isn't as simple as writing a query to "select all limit 10" to get the top 10 rows. You need to know what tables to join. You need to know how to sort. You need to know whether the data needs to be cleaned up in some way before or after running the query. + +**Technical skills required:** + + * Knowledge of SQL, Python, and R + * Ability to see and extract patterns in data + * Understanding how things function end to end + * Critical thinking + * Machine learning + + + +**Good for people who:** + + * Enjoy problem-solving + * Desire to learn and ask questions + + + +### Developer relations + +[Developer relations][6] is a relatively new discipline in technology. It encompasses the roles of [developer advocate][7], developer evangelist, and developer marketing, among others. These roles require you to communicate with developers, build relationships with them, and help them be more productive. You advocate for the developers' needs to the company and represent the company to the developer. Developer relations can include writing articles, creating tutorials, recording podcasts, speaking at conferences, and creating integrations and demos. Some say you need to have worked as a developer to move into developer relations. I did not take that path, and I know many others who haven't. + +**Technical skills required:** + +These will be highly dependent on the company and the role. You will need some (not all) depending on your focus. 
+ + * Understanding technical concepts related to the product + * Writing + * Video and audio editing for tutorials and podcasts + * Speaking + + + +**Good for people who:** + + * Have empathy and want to teach and empower others + * Can advocate for others + * Are creative + + + +### Endless possibilities + +This is not a comprehensive list of all the non-engineering roles available in tech, but a sampling of some of the roles available for people who don't enjoy writing code daily. If you're interested in a tech career, look at your skills and what roles would be a good fit. The possibilities are endless. To help you in your journey, in the final article in this series, I'll share some advice from people who are in these roles. + +There are lots of non-code ways to contribute to open source: Here are three alternatives. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/non-engineering-jobs-tech + +作者:[Dawn Parzych][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dawnparzych +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map) +[2]: https://opensource.com/article/21/2/what-does-it-mean-be-technical +[3]: https://twitter.com/elchefe/status/1338933320147750915?ref_src=twsrc%5Etfw +[4]: https://opensource.com/article/17/5/technical-writing-job-interview-tips +[5]: https://opensource.com/article/20/2/product-management-open-source-company +[6]: https://www.marythengvall.com/blog/2019/5/22/what-is-developer-relations-and-why-should-you-care +[7]: https://opensource.com/article/20/10/open-source-developer-advocates diff --git a/sources/talk/20210218 Not 
an engineer- Find out where you belong.md b/sources/talk/20210218 Not an engineer- Find out where you belong.md new file mode 100644 index 0000000000..0a6589a01b --- /dev/null +++ b/sources/talk/20210218 Not an engineer- Find out where you belong.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Not an engineer? Find out where you belong) +[#]: via: (https://opensource.com/article/21/2/advice-non-technical) +[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych) + +Not an engineer? Find out where you belong +====== +Whether you've been working for decades or are just starting in a +non-engineering tech role, this advice can help you figure out where you +belong. +![Looking at a map for career journey][1] + +In the [first article in this series][2], I explained the problems with dividing people and roles into "technical" or "non-technical" categories. In the [second article][3], I shared some of the tech roles for people who don't code. Here, I'll wrap up this exploration into what it means to be technical or non-technical with some recommendations to help you on your journey. + +Whether you've been working in tech for decades, are just starting, or are looking to change careers, consider the advice in this article from people who have been labeled as non-technical but are succeeding in tech roles. + +> "Don't tie up what you do and your identity. Get them separate." +> —Adam Gordon Bell, Developer Relations, Earthly Technologies + +Switching roles doesn't mean your skills will disappear or that you no longer have value. If you take on a new role, you need to focus on that role's critical skills. It takes time to develop skills. Take your time, and figure out the important skills for the new position. + +If you manage engineers, encourage them to develop their non-engineering skills and their technical skills. 
These skills often make a more significant difference in career growth and success than coding and technical skills. + +### Be yourself + +> "Don't let other people define whether you are technical or not-technical. What's technical and what's not, and whether that's important or not is something the people have to figure out for themselves." +> —Adam Gordon Bell + +> "Don't ever start a conversation with, 'I'm not technical.' It can come across as, 'I need to warn you about this thing,' which is never a good impression to make for an interview, but it also has the potential to come across as a lack of confidence in your skills." +> —Mary Thengvall, Director of Developer Relations, Camunda + +Avoid the stereotypes; not all engineers like Star Wars or Star Trek. Not all engineers wear hoodies. Engineers can speak to people. + +> "People have a lot of perceptions about the technical, non-technical piece, in terms of how you present. When I was working in the office, I would wear a dress because that's how I feel comfortable." +> —Shailvi Wakhlu, Senior Director of Data, Strava + +### Know your worth + +As I discussed in the first article, being labeled non-technical can lead to [impostor syndrome][4]. Recognize your value, and don't apply the non-technical label to yourself because it can limit earning potential and career growth. + +> "People kept reboxing me into something else because they thought I was different from the normal stereotype that they had for an engineer. I'm so glad that I didn't listen to any of those people because inherently they were also telling me to go for a lesser-paying job on the basis of something that was in addition to the skills that I had." +> —Shailvi Wakhlu + +> "It is more likely the younger and … the woman in tech, especially new to tech, who's going to have imposter syndrome, who's going to consider themselves not technical enough. Like, 'oh, I only do front-end.' What do you mean you _only_ do front-end? 
Front-end is incredibly hard." +> —Liz Harris + +### Find where you can add value and help people + +You don't need to create a pull request to participate in open source. + +> "I always say to people when they try to contribute to open source projects, 'don't think, it's got to be a commit, it's got to be a pull request.' It's like, 'No. How can you add value to that project?' If you haven't got time to do the pull request, are you raising an issue and putting the points down?" +> —Eddie Jaoude, Open Source Developer, Jaoude Studios + +### Diversity of thought leads to success + +See the value and contributions of all roles and people. Don't pigeonhole them into a set of abilities based on their title. + +> "Realize how important everybody is, including yourself at all times, and the overall picture of things. Being creative shouldn't be ego-driven. Realize that you can always be better. You can also be worse at what you do. Don't be afraid to ask for help, and realize that we're in there together." +> —Therese Eberhard, scenic painter for film, commercials, and video + +> "The hackathons that I have attended where we've all been technical, we've built a great team of four or five hardcore coders, we've lost. I kid you not, we've lost. And before COVID, I won six previous hackathons, and half the team was focused on other areas. In the hackathons we won, half the team would be considered non-technical by most people, although I do not like this term, as it is about adding value to the team/project. We won because we had so many different perspectives on what we were building." +> —Eddie Jaoude + +> "The more we can move away from those labels of technical/not technical, developer/not developer, and understand that there's a continuum, the better off we're all going to be as far as hiring the right people for the job, instead of letting ourselves get hung up by the presumption that you need a technical team." 
+> —Mary Thengvall + +The more diverse our communities and teams are, the more inclusive they are. + +> "I honestly think the most important thing, whether it's from a community perspective or whether it's from a product perspective, just in general, we have to make sure that we built an inclusive community, not just for our products, not just for the technology that we're working on, but as a human society in general. And I'm going to guess… I'm going to wager that if we do that, as a human species, we will actually be better than what we were yesterday." +> —Leon Stigter, Senior Product Manager, Lightbend + +If you work in a non-coding tech role, what advice would you give people who consider themselves "non-technical" (or are considered by others to be so)? Share your insights in the comments. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/advice-non-technical + +作者:[Dawn Parzych][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dawnparzych +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey) +[2]: https://opensource.com/article/21/2/what-does-it-mean-be-technical +[3]: https://opensource.com/article/21/2/non-engineering-jobs-tech +[4]: https://opensource.com/business/15/9/tips-avoiding-impostor-syndrome diff --git a/sources/tech/20171025 Typeset your docs with LaTeX and TeXstudio on Fedora.md b/sources/tech/20171025 Typeset your docs with LaTeX and TeXstudio on Fedora.md new file mode 100644 index 0000000000..4a64e58971 --- /dev/null +++ b/sources/tech/20171025 Typeset your docs with LaTeX and TeXstudio on Fedora.md @@ -0,0 +1,154 @@ +[#]: collector: (Chao-zhi) +[#]: translator: (Chao-zhi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Typeset your docs with LaTeX and TeXstudio on Fedora) +[#]: via: (https://fedoramagazine.org/typeset-latex-texstudio-fedora/) +[#]: author: (Julita Inca Chiroque ) + +Typeset your docs with LaTeX and TeXstudio on Fedora +====== +![](https://fedoramagazine.org/wp-content/uploads/2017/07/latex-texstudio-945x400.jpg) + +LaTeX is [a document preparation system][1] for high-quality typesetting. It’s often used for larger technical or scientific documents. However, you can use LaTeX for almost any form of publishing. Teachers can edit their exams and syllabi, and students can present their thesis and reports for classes. This article gets you started with the TeXstudio app. 
TeXstudio makes it easy to edit LaTeX documents.
+
+### Launching TeXstudio
+
+If you’re using Fedora Workstation, launch Software and type TeXstudio to search for the app. Then select Install to add TeXstudio to your system. You can then launch the app from Software, or from the Shell overview as usual.
+
+Alternatively, if you use a terminal, type texstudio. If the package isn’t installed, the system prompts you to install it. Type y to start the installation.
+
+```
+$ texstudio
+bash: texstudio: command not found...
+Install package 'texstudio' to provide command 'texstudio'? [N/y] y
+```
+
+LaTeX commands typically start with a backslash (\), and command parameters are enclosed in curly braces { }. Start by declaring the document class with the \documentclass command. This example uses the article document class.
+
+Then, once you declare the document class, mark the beginning and the end of the document with \begin{document} and \end{document}. In between these commands, write a paragraph similar to the following:
+
+```
+\documentclass{article}
+\begin{document}
+The Fedora Project is a project sponsored by Red Hat primarily to co-ordinate the development of the Linux-based Fedora operating system, operating with the vision that the project "creates a world where free culture is welcoming and widespread, collaboration is commonplace, and people control their content and devices". The Fedora Project was founded on 22 September 2003 when Red Hat decided to split Red Hat Linux into Red Hat Enterprise Linux (RHEL) and a community-based operating system, Fedora.
+\end{document}
+```
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/07/Screenshot-from-2017-10-05-20-19-15.png)
+
+### Working with spacing
+
+To create a paragraph break, leave one or more blank lines between blocks of text. 
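As a minimal sketch of that rule, here is a complete document whose two paragraphs are separated only by a blank line in the source:

```
\documentclass{article}
\begin{document}
A single line break in the source does not end a paragraph;
these two source lines are typeset as one paragraph.

The blank line above starts this second paragraph.
\end{document}
```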
Here’s an example with four paragraphs:
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/07/Screenshot-from-2017-10-18-14-24-42.png)
+
+You can see from the example that more than one line break doesn’t create additional blank space between paragraphs. However, if you do need to leave additional space, use the commands \hspace and \vspace. These add horizontal and vertical space, respectively. Here is some example code that shows additional spacing around paragraphs:
+
+```
+\documentclass{article}
+\begin{document}
+
+\hspace{2.5cm} The four essential freedoms
+
+\vspace{0.6cm}
+A program is free software if the program's users have the 4 essential freedoms:
+
+The freedom to run the program as you wish, for any purpose (freedom 0).\vspace{0.2cm}
+The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.\vspace{0.2cm}
+
+The freedom to redistribute copies so you can help your neighbour (freedom 2).\vspace{0.2cm}
+
+The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
+
+\end{document}
+```
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/07/Screenshot-from-2017-10-18-17-24-53.png)
+
+### Using Lists and Formats
+
+This example would look better if it presented the four essential freedoms of free software as a list. Set the list structure by using \begin{itemize} at the beginning of the list, and \end{itemize} at the end. Precede each item with the command \item.
+
+Additional formatting also helps make the text more readable. 
Useful commands for formatting include \textbf (bold), \textit (italic), \underline, \huge, \large, \tiny and \textsc to help emphasize text:
+
+```
+\documentclass{article}
+\begin{document}
+
+\hspace{2cm} {\huge The four essential freedoms}
+
+\vspace{0.6cm}
+\noindent {\large A program is free software if the program's users have the 4 essential freedoms}:
+\begin{itemize}
+\item \vspace{0.2cm}
+\noindent \textbf{The freedom to run} the program as you wish, for any purpose \textit{(freedom 0)}. \vspace{0.2cm}
+\item \noindent \textbf{The freedom to study} how the program works, and change it so it does your computing as you wish \textit{(freedom 1)}. Access to the source code is a precondition for this.\vspace{0.2cm}
+
+\item \noindent \textbf{The freedom to redistribute} copies so you can help your neighbour \textit{(freedom 2)}.\vspace{0.2cm}
+
+\item \noindent \textbf{The freedom to distribute copies of your modified versions} to others \textit{(freedom 3)}. \tiny{By doing this you can give the whole community a chance to benefit from your changes.\underline{\textsc{ Access to the source code is a precondition for this.}}}
+\end{itemize}
+\end{document}
+```
+
+### Adding columns, images and links
+
+Columns, images and links help add further information to your text. LaTeX provides some of its advanced features through packages. The \usepackage command loads a package so you can make use of its features.
+
+For example, to make an image visible, you might use the command \usepackage{graphicx}. Or, to set up columns and links, use \usepackage{multicol} and \usepackage{hyperref}, respectively.
+
+The \includegraphics command places an image inline in your document. (For simplicity, include the graphics file in the same directory as your LaTeX source file.)
+
+Here’s an example that uses all these concepts. It also uses two downloaded PNG graphics files. Try your own graphics to see how they work. 
+ +``` +\documentclass{article} +\usepackage{graphicx} +\usepackage{multicol} +\usepackage{hyperref} +\begin{document} + \textbf{GNU} + \vspace{1cm} + + GNU is a recursive acronym for "GNU's Not Unix!", chosen because GNU's design is Unix-like, but differs from Unix by being free software and containing no Unix code. + + Richard Stallman, the founder of the project, views GNU as a "technical means to a social end". Stallman has written about "the social aspects of software and how Free Software can create community and social justice." in his "Free Society" book. + \vspace{1cm} + + \textbf{Some Projects} + + \begin{multicols}{2} + Fedora + \url{https://getfedora.org} + \includegraphics[width=1cm]{fedora.png} + + GNOME + \url{https://getfedora.org} + \includegraphics[width=1cm]{gnome.png} + \end{multicols} + +\end{document} +``` + +[][2] + +The features here only scratch the surface of LaTeX capabilities. You can learn more about them at the project [help and documentation site][3]. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/typeset-latex-texstudio-fedora/ + +作者:[Julita Inca Chiroque][a] +选题:[Chao-zhi][b] +译者:[Chao-zhi][b] +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/yulytas/ +[b]: https://github.com/Chao-zhi +[1]:http://www.latex-project.org/about/ +[2]:https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/ +[3]:https://www.latex-project.org/help/ diff --git a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md deleted file mode 100644 index 8209a7959a..0000000000 --- a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md +++ /dev/null @@ -1,443 +0,0 @@ -[#]: collector: (lujun9972) -[#]: 
translator: (stevenzdg988)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS)
-[#]: via: (https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/)
-[#]: author: (SK https://www.ostechnix.com/author/sk/)
-
-Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2019/02/lamp-720x340.jpg)
-
-The **LAMP** stack is a popular, open source web development platform that can be used to run and deploy dynamic websites and web-based applications. Typically, the LAMP stack consists of the Apache web server, MariaDB/MySQL databases, and the PHP/Python/Perl programming languages. LAMP is an acronym for **L**inux, **M**ariaDB/**M**ySQL, **P**HP/**P**ython/**P**erl. This tutorial describes how to install Apache, MySQL, and PHP (the LAMP stack) on an Ubuntu 18.04 LTS server.
-
-### Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS
-
-For the purpose of this tutorial, we will be using the following Ubuntu testbox.
-
- * **Operating System**: Ubuntu 18.04.1 LTS Server Edition
- * **IP address**: 192.168.225.22/24
-
-
-
-#### 1. 
Install Apache web server
-
-First of all, update the Ubuntu server using the commands:
-
-```
-$ sudo apt update
-
-$ sudo apt upgrade
-```
-
-Next, install the Apache web server:
-
-```
-$ sudo apt install apache2
-```
-
-Check whether the Apache web server is running:
-
-```
-$ sudo systemctl status apache2
-```
-
-Sample output would be:
-
-```
-● apache2.service - The Apache HTTP Server
- Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: en
- Drop-In: /lib/systemd/system/apache2.service.d
- └─apache2-systemd.conf
- Active: active (running) since Tue 2019-02-05 10:48:03 UTC; 1min 5s ago
- Main PID: 2025 (apache2)
- Tasks: 55 (limit: 2320)
- CGroup: /system.slice/apache2.service
- ├─2025 /usr/sbin/apache2 -k start
- ├─2027 /usr/sbin/apache2 -k start
- └─2028 /usr/sbin/apache2 -k start
-
-Feb 05 10:48:02 ubuntuserver systemd[1]: Starting The Apache HTTP Server...
-Feb 05 10:48:03 ubuntuserver apachectl[2003]: AH00558: apache2: Could not reliably
-Feb 05 10:48:03 ubuntuserver systemd[1]: Started The Apache HTTP Server.
-```
-
-Congratulations! The Apache service is up and running!
-
-##### 1.1 Adjust firewall to allow Apache web server
-
-By default, the Apache web server can’t be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow the http and https ports by following the steps below.
-
-First, list the application profiles available on your Ubuntu system using the command:
-
-```
-$ sudo ufw app list
-```
-
-Sample output:
-
-```
-Available applications:
-Apache
-Apache Full
-Apache Secure
-OpenSSH
-```
-
-As you can see, the Apache and OpenSSH applications have UFW profiles installed. You can list information about each profile and its included rules using the **ufw app info "Profile Name"** command.
-
-Let us look into the **“Apache Full”** profile. 
To do so, run:
-
-```
-$ sudo ufw app info "Apache Full"
-```
-
-Sample output:
-
-```
-Profile: Apache Full
-Title: Web Server (HTTP,HTTPS)
-Description: Apache v2 is the next generation of the omnipresent Apache web
-server.
-
-Ports:
-80,443/tcp
-```
-
-As you can see, the “Apache Full” profile includes the rules that enable traffic to ports **80** and **443**.
-
-Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile:
-
-```
-$ sudo ufw allow in "Apache Full"
-Rules updated
-Rules updated (v6)
-```
-
-If you don’t want to allow https traffic, but only http (80) traffic, use the plain “Apache” profile instead:
-
-```
-$ sudo ufw allow in "Apache"
-```
-
-##### 1.2 Test Apache Web server
-
-Now, open your web browser and access the Apache test page by navigating to **** or ****.
-
-![](https://www.ostechnix.com/wp-content/uploads/2016/06/apache-2.png)
-
-If you see a screen like the one above, you are good to go. The Apache server is working!
-
-#### 2. Install MySQL
-
-To install MySQL on Ubuntu, run:
-
-```
-$ sudo apt install mysql-server
-```
-
-Verify whether the MySQL service is running using the command:
-
-```
-$ sudo systemctl status mysql
-```
-
-**Sample output:**
-
-```
-● mysql.service - MySQL Community Server
-Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enab
-Active: active (running) since Tue 2019-02-05 11:07:50 UTC; 17s ago
-Main PID: 3423 (mysqld)
-Tasks: 27 (limit: 2320)
-CGroup: /system.slice/mysql.service
-└─3423 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid
-
-Feb 05 11:07:49 ubuntuserver systemd[1]: Starting MySQL Community Server...
-Feb 05 11:07:50 ubuntuserver systemd[1]: Started MySQL Community Server.
-```
-
-MySQL is running!
-
-##### 2.1 Set up the database administrative user (root) password
-
-By default, the password of the MySQL **root** user is blank. 
You need to secure your MySQL server by running the following script:
-
-```
-$ sudo mysql_secure_installation
-```
-
-You will be asked whether you want to set up the **VALIDATE PASSWORD plugin**. This plugin helps users configure strong passwords for database credentials. If enabled, it automatically checks the strength of a password and only accepts passwords that are secure enough. **It is safe to leave this plugin disabled**. However, you must use a strong and unique password for database credentials. If you don’t want to enable this plugin, just press any key to skip the password validation part and continue with the rest of the steps.
-
-If your answer is **Yes**, you will be asked to choose the level of password validation.
-
-```
-Securing the MySQL server deployment.
-
-Connecting to MySQL using a blank password.
-
-VALIDATE PASSWORD PLUGIN can be used to test passwords
-and improve security. It checks the strength of password
-and allows the users to set only those passwords which are
-secure enough. Would you like to setup VALIDATE PASSWORD plugin?
-
-Press y|Y for Yes, any other key for No y
-```
-
-The available password validation levels are **low**, **medium** and **strong**. Just enter the appropriate number (0 for low, 1 for medium and 2 for a strong password) and hit the ENTER key.
-
-```
-There are three levels of password validation policy:
-
-LOW Length >= 8
-MEDIUM Length >= 8, numeric, mixed case, and special characters
-STRONG Length >= 8, numeric, mixed case, special characters and dictionary file
-
-Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG:
-```
-
-Now, enter the password for the MySQL root user. Keep in mind that the password must comply with the password policy you chose in the previous step. If you didn’t enable the plugin, just use any strong and unique password of your choice.
-
-```
-Please set the password for root here. 
-
-New password:
-
-Re-enter new password:
-
-Estimated strength of the password: 50
-Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y
-```
-
-Once you have entered the password twice, you will see the password strength (in our case it is **50**). If it is OK for you, press Y to continue with the provided password. If you are not satisfied with the password strength, press any other key and set a stronger password. I am OK with my current password, so I chose **y**.
-
-For the rest of the questions, just type **y** and hit ENTER. This removes the anonymous user, disallows remote root login, and removes the test database.
-
-```
-Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
-Success.
-
-Normally, root should only be allowed to connect from
-'localhost'. This ensures that someone cannot guess at
-the root password from the network.
-
-Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
-Success.
-
-By default, MySQL comes with a database named 'test' that
-anyone can access. This is also intended only for testing,
-and should be removed before moving into a production
-environment.
-
-Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
-- Dropping test database...
-Success.
-
-- Removing privileges on test database...
-Success.
-
-Reloading the privilege tables will ensure that all changes
-made so far will take effect immediately.
-
-Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
-Success.
-
-All done!
-```
-
-That’s it. The password for the MySQL root user has been set.
-
-##### 2.2 Change authentication method for MySQL root user
-
-By default, the MySQL root user is set to authenticate using the **auth_socket** plugin in MySQL 5.7 and newer versions on Ubuntu. Even though this enhances security, it also complicates things when you access your database server using external programs, for example phpMyAdmin. 
To fix this issue, you need to change the authentication method from **auth_socket** to **mysql_native_password**. To do so, log in to your MySQL prompt using the command:
-
-```
-$ sudo mysql
-```
-
-Run the following command at the mysql prompt to find the current authentication method for all mysql user accounts:
-
-```
-SELECT user,authentication_string,plugin,host FROM mysql.user;
-```
-
-**Sample output:**
-
-```
-+------------------+-------------------------------------------+-----------------------+-----------+
-| user | authentication_string | plugin | host |
-+------------------+-------------------------------------------+-----------------------+-----------+
-| root | | auth_socket | localhost |
-| mysql.session | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
-| mysql.sys | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
-| debian-sys-maint | *F126737722832701DD3979741508F05FA71E5BA0 | mysql_native_password | localhost |
-+------------------+-------------------------------------------+-----------------------+-----------+
-4 rows in set (0.00 sec)
-```
-
-![][2]
-
-As you see, the mysql root user uses the `auth_socket` plugin for authentication.
-
-To change this authentication to the **mysql_native_password** method, run the following command at the mysql prompt. Don’t forget to replace **“password”** with a strong and unique password of your choice. If you have enabled the VALIDATE PASSWORD plugin, make sure you use a strong password based on the current policy requirements.
-
-```
-ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
-```
-
-Apply the changes using the command:
-
-```
-FLUSH PRIVILEGES;
-```
-
-Now check again whether the authentication method changed, using the command:
-
-```
-SELECT user,authentication_string,plugin,host FROM mysql.user;
-```
-
-Sample output:
-
-![][3]
-
-Good! Now the mysql root user can authenticate with a password to access the mysql shell. 
-
-Exit from the mysql prompt:
-
-```
-exit
-```
-
-#### 3\. Install PHP
-
-To install PHP, run:
-
-```
-$ sudo apt install php libapache2-mod-php php-mysql
-```
-
-After installing PHP, create an **info.php** file in the Apache document root folder. Usually, the Apache document root will be **/var/www/html/** or **/var/www/** in most Debian-based Linux distributions. In Ubuntu 18.04 LTS, it is **/var/www/html/**.
-
-Let us create the **info.php** file in the Apache root folder:
-
-```
-$ sudo vi /var/www/html/info.php
-```
-
-Add the following lines:
-
-```
-<?php phpinfo(); ?>
-```
-
-Press the ESC key and type **:wq** to save and quit the file. Restart the Apache service for the changes to take effect.
-
-```
-$ sudo systemctl restart apache2
-```
-
-##### 3.1 Test PHP
-
-Open up your web browser and navigate to the **** URL.
-
-You will see the php test page now.
-
-![](https://www.ostechnix.com/wp-content/uploads/2019/02/php-test-page.png)
-
-Usually, when a user requests a directory from the web server, Apache will first look for a file named **index.html**. If you want Apache to serve php files ahead of the others, move **index.php** to the first position in the **dir.conf** file as shown below.
-
-```
-$ sudo vi /etc/apache2/mods-enabled/dir.conf
-```
-
-Here are the contents of that file.
-
-```
-<IfModule mod_dir.c>
-DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
-</IfModule>
-
-# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
-```
-
-Move “index.php” to the first position. Once you have made the changes, your **dir.conf** file will look like below.
-
-```
-<IfModule mod_dir.c>
-DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
-</IfModule>
-
-# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
-```
-
-Press the **ESC** key and type **:wq** to save and close the file. Restart the Apache service for the changes to take effect.
-
-```
-$ sudo systemctl restart apache2
-```
-
-##### 3.2 Install PHP modules
-
-To improve the functionality of PHP, you can install some additional PHP modules. 
To list the available PHP modules, run:
-
-```
-$ sudo apt-cache search php- | less
-```
-
-**Sample output:**
-
-![][4]
-
-Use the arrow keys to go through the results. To exit, type **q** and hit the ENTER key.
-
-To find the details of any particular php module, for example **php-gd**, run:
-
-```
-$ sudo apt-cache show php-gd
-```
-
-To install a php module, run:
-
-```
-$ sudo apt install php-gd
-```
-
-To install all modules (not necessary though), run:
-
-```
-$ sudo apt-get install php*
-```
-
-Do not forget to restart the Apache service after installing any php module. To check whether a module is loaded, open the info.php file in your browser and check if it is present.
-
-Next, you might want to install a database management tool to easily manage databases via a web browser. If so, install phpMyAdmin as described in the following link.
-
-Congratulations! We have successfully set up the LAMP stack on an Ubuntu 18.04 LTS server.
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/
-
-作者:[SK][a]
-选题:[lujun9972][b]
-译者:[stevenzdg988](https://github.com/stevenzdg988)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[b]: https://github.com/lujun9972
-[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-1.png
-[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-2.png
-[4]: http://www.ostechnix.com/wp-content/uploads/2016/06/php-modules.png
diff --git a/sources/tech/20190215 Make websites more readable with a shell script.md b/sources/tech/20190215 Make websites more readable with a 
shell script.md +++ /dev/null @@ -1,258 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Make websites more readable with a shell script) -[#]: via: (https://opensource.com/article/19/2/make-websites-more-readable-shell-script) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) - -Make websites more readable with a shell script -====== -Calculate the contrast ratio between your website's text and background to make sure your site is easy to read. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ) - -If you want people to find your website useful, they need to be able to read it. The colors you choose for your text can affect the readability of your site. Unfortunately, a popular trend in web design is to use low-contrast colors when printing text, such as gray text on a white background. Maybe that looks really cool to the web designer, but it is really hard for many of us to read. - -The W3C provides Web Content Accessibility Guidelines, which includes guidance to help web designers pick text and background colors that can be easily distinguished from each other. This is called the "contrast ratio." The W3C definition of the contrast ratio requires several calculations: given two colors, you first compute the relative luminance of each, then calculate the contrast ratio. The ratio will fall in the range 1 to 21 (typically written 1:1 to 21:1). The higher the contrast ratio, the more the text will stand out against the background. For example, black text on a white background is highly visible and has a contrast ratio of 21:1. And white text on a white background is unreadable at a contrast ratio of 1:1. - -The [W3C says body text][1] should have a contrast ratio of at least 4.5:1 with headings at least 3:1. But that seems to be the bare minimum. 
The W3C also recommends at least 7:1 for body text and at least 4.5:1 for headings. - -Calculating the contrast ratio can be a chore, so it's best to automate it. I've done that with this handy Bash script. In general, the script does these things: - - 1. Gets the text color and background color - 2. Computes the relative luminance of each - 3. Calculates the contrast ratio - - - -### Get the colors - -You may know that every color on your monitor can be represented by red, green, and blue (R, G, and B). To calculate the relative luminance of a color, my script will need to know the red, green, and blue components of the color. Ideally, my script would read this information as separate R, G, and B values. Web designers might know the specific RGB code for their favorite colors, but most humans don't know RGB values for the different colors. Instead, most people reference colors by names like "red" or "gold" or "maroon." - -Fortunately, the GNOME [Zenity][2] tool has a color-picker app that lets you use different methods to select a color, then returns the RGB values in a predictable format of "rgb( **R** , **G** , **B** )". Using Zenity makes it easy to get a color value: - -``` -color=$( zenity --title 'Set text color' --color-selection --color='black' ) -``` - -In case the user (accidentally) clicks the Cancel button, the script assumes a color: - -``` -if [ $? -ne 0 ] ; then -        echo '** color canceled .. assume black' -        color='rgb(0,0,0)' -fi -``` - -My script does something similar to set the background color value as **$background**. - -### Compute the relative luminance - -Once you have the foreground color in **$color** and the background color in **$background** , the next step is to compute the relative luminance for each. On its website, the [W3C provides an algorithm][3] to compute the relative luminance of a color. 
-
-> For the sRGB colorspace, the relative luminance of a color is defined as
-> **L = 0.2126 * R + 0.7152 * G + 0.0722 * B** where R, G and B are defined as:
->
-> if RsRGB <= 0.03928 then R = RsRGB/12.92
-> else R = ((RsRGB+0.055)/1.055) ^ 2.4
->
-> if GsRGB <= 0.03928 then G = GsRGB/12.92
-> else G = ((GsRGB+0.055)/1.055) ^ 2.4
->
-> if BsRGB <= 0.03928 then B = BsRGB/12.92
-> else B = ((BsRGB+0.055)/1.055) ^ 2.4
->
-> and RsRGB, GsRGB, and BsRGB are defined as:
->
-> RsRGB = R8bit/255
->
-> GsRGB = G8bit/255
->
-> BsRGB = B8bit/255
-
-Since Zenity returns color values in the format "rgb( **R**, **G**, **B** )," the script can easily pull apart the R, G, and B values to compute the relative luminance. AWK makes this a simple task, using the comma as the field separator ( **-F,** ) and using AWK's **substr()** string function to pick just the text we want from the "rgb( **R**, **G**, **B** )" color value:
-
-```
-R=$( echo $color | awk -F, '{print substr($1,5)}' )
-G=$( echo $color | awk -F, '{print $2}' )
-B=$( echo $color | awk -F, '{n=length($3); print substr($3,1,n-1)}' )
-```
-
-**(For more on extracting and displaying data with AWK, [Get our AWK cheat sheet][4].)**
-
-Calculating the final relative luminance is best done using the BC calculator. BC supports the simple if-then-else needed in the calculation, which makes this part simple. 
But since BC cannot directly calculate exponentiation using a non-integer exponent, we need to do some extra math using the natural logarithm instead:
-
-```
-echo "scale=4
-rsrgb=$R/255
-gsrgb=$G/255
-bsrgb=$B/255
-if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) )
-if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) )
-if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) )
-0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l
-```
-
-This passes several instructions to BC, including the if-then-else statements that are part of the relative luminance formula. BC then prints the final value.
-
-### Calculate the contrast ratio
-
-With the relative luminance of the text color and the background color, now the script can calculate the contrast ratio. 
The [W3C determines the contrast ratio][5] with this formula: - -> (L1 + 0.05) / (L2 + 0.05), where -> L1 is the relative luminance of the lighter of the colors, and -> L2 is the relative luminance of the darker of the colors - -Given two relative luminance values **$r1** and **$r2** , it's easy to calculate the contrast ratio using the BC calculator: - -``` -echo "scale=2 -if ( $r1 > $r2 ) { l1=$r1; l2=$r2 } else { l1=$r2; l2=$r1 } -(l1 + 0.05) / (l2 + 0.05)" | bc -``` - -This uses an if-then-else statement to determine which value ( **$r1** or **$r2** ) is the lighter or darker color. BC performs the resulting calculation and prints the result, which the script can store in a variable. - -### The final script - -With the above, we can pull everything together into a final script. I use Zenity to display the final result in a text box: - -``` -#!/bin/sh -# script to calculate contrast ratio of colors - -# read color and background color: -# zenity returns values like 'rgb(255,140,0)' and 'rgb(255,255,255)' - -color=$( zenity --title 'Set text color' --color-selection --color='black' ) -if [ $? -ne 0 ] ; then -        echo '** color canceled .. assume black' -        color='rgb(0,0,0)' -fi - -background=$( zenity --title 'Set background color' --color-selection --color='white' ) -if [ $? -ne 0 ] ; then -        echo '** background canceled .. 
assume white'
-        background='rgb(255,255,255)'
-fi
-
-# compute relative luminance:
-
-function luminance()
-{
-        R=$( echo $1 | awk -F, '{print substr($1,5)}' )
-        G=$( echo $1 | awk -F, '{print $2}' )
-        B=$( echo $1 | awk -F, '{n=length($3); print substr($3,1,n-1)}' )
-
-        echo "scale=4
-rsrgb=$R/255
-gsrgb=$G/255
-bsrgb=$B/255
-if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) )
-if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) )
-if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) )
-0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l
-}
-
-lum1=$( luminance $color )
-lum2=$( luminance $background )
-
-# compute contrast
-
-function contrast()
-{
-        echo "scale=2
-if ( $1 > $2 ) { l1=$1; l2=$2 } else { l1=$2; l2=$1 }
-(l1 + 0.05) / (l2 + 0.05)" | bc
-}
-
-rel=$( contrast $lum1 $lum2 )
-
-# print results
-
-( cat< “Linux and specifically Arch are all about freedom of choice, we provide a basic install that lets you explore those choices with a small layer of convenience. We will never judge you by installing GUI apps like Pamac or even work with sandbox solutions like Flatpak or Snaps. 
It’s up to you what you are installing to make EndeavourOS work in your circumstances, that’s the main difference we have with Antergos or Manjaro, but like Antergos we will try to help you if you run into a problem with one of your installed packages.” - -### Experiencing EndeavourOS - -I installed EndeavourOS in [VirtualBox][14] and took a look around. When I first booted from the image, I was greeted by a little box with links to the EndeavourOS site about installing. It also has a button to install and one to manually partition the drive. The Calamares installer worked very smoothly for me. - -After I rebooted into a fresh install of EndeavourOS, I was greeted by a colorful themed XFCE desktop. I was also treated to a bunch of notification balloons. Most Arch-based distros I’ve used come with a GUI tool like [pamac][15] or [octopi][16] to keep the system up-to-date. EndeavourOS comes with [kalu][17]. (kalu stands for “Keeping Arch Linux Up-to-date”.) It checks for updated packages, Arch Linux News, updated AUR packages and more. Once it sees an update for any of those areas, it will create a notification balloon. - -I took a look through the menu to see what was installed by default. The answer is not much, not even an office suite. If they intend for EndeavourOS to be a blank canvas for anyone to create the system they want, they are headed in the right direction. - -![Endeavouros Desktop][18] - -### Final Thoughts - -EndeavourOS is still very young. The first stable release was issued only 3 weeks ago. It is missing some stuff, most importantly an online installer. That being said, it is possible to gauge where EndeavourOS will be heading. - -While it is not an exact clone of Antergos, EndeavourOS wants to replicate the most important part of Antergos: the welcoming, friendly community.
All too often, the Linux community can seem to be unwelcoming and downright hostile to the beginner. I’ve seen more and more people trying to combat that negativity and bring more people into Linux. With the EndeavourOS team making that their main focus, a great distro can be the only result. - -If you are currently using Antergos, there is a way for you to [switch to EndeavourOS without performing a clean install.][20] - -If you want an exact clone of Antergos, I would recommend checking out [RebornOS][21]. They are currently working on a replacement of the Cnchi installer named Fenix. - -Have you tried EndeavourOS already? How’s your experience with it? - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/endeavouros/ - -作者:[John Paul][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/john/ -[b]: https://github.com/lujun9972 -[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-logo.png?ssl=1 -[2]: https://itsfoss.com/antergos-linux-discontinued/ -[3]: https://endeavouros.com/ -[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-first-boot.png?resize=800%2C600&ssl=1 -[5]: https://endeavouros.com/info-2/ -[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-installing.png?resize=800%2C600&ssl=1 -[7]: https://endeavouros.com/endeavouros-first-stable-release-has-arrived/ -[8]: https://forum.antergos.com/topic/11780/endeavour-antergos-community-s-next-stage -[9]: https://endeavouros.com/what-to-expect-on-the-first-release/ -[10]: https://calamares.io/ -[11]: https://itsfoss.com/veltos-linux/ -[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-updating-with-kalu.png?resize=800%2C600&ssl=1 -[13]: 
https://endeavouros.com/second-week-after-the-stable-release/ -[14]: https://itsfoss.com/install-virtualbox-ubuntu/ -[15]: https://aur.archlinux.org/packages/pamac-aur/ -[16]: https://octopiproject.wordpress.com/ -[17]: https://github.com/jjk-jacky/kalu -[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-desktop.png?resize=800%2C600&ssl=1 -[19]: https://itsfoss.com/clear-linux/ -[20]: https://forum.endeavouros.com/t/how-to-switch-from-antergos-to-endevouros/105/2 -[21]: https://rebornos.org/ diff --git a/sources/tech/20191225 Making trade-offs when writing Python code.md b/sources/tech/20191225 Making trade-offs when writing Python code.md deleted file mode 100644 index 47da30cb93..0000000000 --- a/sources/tech/20191225 Making trade-offs when writing Python code.md +++ /dev/null @@ -1,61 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Making trade-offs when writing Python code) -[#]: via: (https://opensource.com/article/19/12/zen-python-trade-offs) -[#]: author: (Moshe Zadka https://opensource.com/users/moshez) - -Making trade-offs when writing Python code -====== -This is part of a special series about the Zen of Python focusing on the -seventh, eighth, and ninth principles: readability, special cases, and -practicality. -![Brick wall between two people, a developer and an operations manager][1] - -Software development is a discipline rife with trade-offs. For every choice, there is an equally defensible but opposite choice. Make a method private? You're encouraging copy-paste. Make a method public? You're committing prematurely to an interface. - -Software developers make hard choices every minute. While all the principles in the [Zen of Python][2] cover trade-offs to some extent, the following principles take the hardest, coldest look at some trade-offs. - -### Readability counts. 
- -In some sense, this middle principle is indeed the center of the entire Zen of Python. The Zen is not about writing efficient programs. It is not even about writing robust programs, for the most part. It is about writing programs that _other people can read_. - -Reading code, by its nature, happens after the code has been added to the system. Often, it happens long after. Neglecting readability is the easiest choice since it does not hurt right now. Whatever the reason for adding new code—a painful bug or a highly requested feature—it does hurt. Right now. - -In the face of immense pressure to throw readability to the side and just "solve the problem," the Zen of Python reminds us: readability counts. Writing the code so it can be read is a form of compassion for yourself and others. - -### Special cases aren't special enough to break the rules. - -There is always an excuse. This bug is particularly painful; let's not worry about simplicity. This feature is particularly urgent; let's not worry about beauty. The domain rules covering this case are particularly hairy; let's not worry about nesting levels. - -Once we allow special pleading, the dam wall breaks, and there are no more principles; things devolve into a Mad Max dystopia with every programmer for themselves, trying to find the best excuses. - -Discipline requires commitment. It is only when things are hard, when there is a strong temptation, that a software developer is tested. There is always a valid excuse to break the rules, and that's why the rules must be kept the rules. Discipline is the art of saying no to exceptions. No amount of explanation can change that. - -### Although, practicality beats purity. - -> "If you think only of hitting, springing, striking, or touching the enemy, you will not be able actually to cut him." -> — Miyamoto Musashi, _[The Book of Water][3]_ - -Ultimately, software development is a practical discipline. Its goal is to solve real problems, faced by real people. 
Practicality beats purity: above all else, we must _solve the problem_. If we think only about readability, simplicity, or beauty, we will not be able to actually _solve the problem_. - -As Musashi suggested, the primary goal of every code change should be to _solve a problem_. The problem must be foremost in our minds. If we waver from it and think only of the Zen of Python, we have failed the Zen of Python. This is another one of those contradictions inherent in the Zen of Python. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/12/zen-python-trade-offs - -作者:[Moshe Zadka][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/moshez -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager) -[2]: https://www.python.org/dev/peps/pep-0020/ -[3]: https://en.wikipedia.org/wiki/The_Book_of_Five_Rings#The_Book_of_Water diff --git a/sources/tech/20191226 How the Zen of Python handles errors.md b/sources/tech/20191226 How the Zen of Python handles errors.md deleted file mode 100644 index 030c889db3..0000000000 --- a/sources/tech/20191226 How the Zen of Python handles errors.md +++ /dev/null @@ -1,56 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How the Zen of Python handles errors) -[#]: via: (https://opensource.com/article/19/12/zen-python-errors) -[#]: author: (Moshe Zadka https://opensource.com/users/moshez) - -How the Zen of Python handles errors -====== -This is part of a special series about the Zen of Python focusing on the -10th and 11th principles: 
on the silence (or not) of errors. -![a checklist for a team][1] - -Handling "exceptional conditions" is one of the most debated issues in programming. That could be because the stakes are high: mishandled error values can bring down even the largest systems. Since "exceptional conditions," by nature, are the least tested but occur with unpleasant frequency, correctly handling them can often distinguish a system that horror stories are told about from a system that "just works." - -From Java's **checked** exceptions through Erlang's fault isolation to Haskell's **Maybe**, different languages have remarkably different attitudes to error handling. - -The [Zen][2] offers Python's meditation on the topic. - -### Errors should never pass silently… - -Before the Zen of Python was a twinkle in Tim Peters' eye, before Wikipedia became informally known as "wiki," the first WikiWiki site, [C2][3], existed as a trove of programming guidelines. These are principles that mostly came out of a [Smalltalk][4] programming community. Smalltalk's ideas influenced many object-oriented languages, Python included. - -The C2 wiki defines the Samurai Principle: "return victorious, or not at all." In Pythonic terms, it encourages eschewing sentinel values, such as returning **None** or **-1** to indicate an inability to complete the task, in favor of raising exceptions. A **None** is silent: it looks like a value and can be put in a variable and passed around. Sometimes, it is even a _valid_ return value. - -The principle here is that if a function cannot accomplish its contract, it should "fail loudly": raise an exception. The raised exception will never look like a possible value. It will skip past the **returned_value = call_to_function(parameter)** line and go up the stack, potentially crashing the program. - -A crash is straightforward to debug: there is a stack trace indicating the problem as well as the call stack. 
The failure might mean that a necessary condition for the program was not met, and human intervention is needed. It might mean that the program's logic is faulty. In either case, the loud failure is better than a hidden, "missing" value, infecting the program's valid data with **None**, until it is used somewhere and an error message says "**None does not have method split**," which you probably already knew. - -### Unless explicitly silenced. - -Exceptions sometimes need to be explicitly caught. We might anticipate some of the lines in a file are misformatted and want to handle those in a special way, maybe by putting them in a "lines to be looked at by a human" file, instead of crashing the entire program. - -Python allows us to catch exceptions with **except**. This means errors can be _explicitly_ silenced. This explicitness means that the **except** line is visible in code reviews. It makes sense to question why this is the right place to silence, and potentially recover from, the exception. It makes sense to ask if we are catching too many exceptions or too few. - -Because this is all explicit, it is possible for someone to read the code and understand which exceptional conditions are recoverable. 
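That explicitness is easy to picture in code. The sketch below is illustrative only (the function and the "integer records" format are invented, not from any real codebase): the anticipated exception is silenced explicitly and narrowly, and the bad lines are diverted for a human to review instead of crashing the program.

```python
def parse_lines(lines):
    """Parse integer records, diverting malformed lines for human review."""
    parsed, needs_review = [], []
    for line in lines:
        try:
            parsed.append(int(line))
        except ValueError:
            # Explicitly silenced: this except line is visible in code review,
            # and it names only the exception we anticipated.
            needs_review.append(line)
    return parsed, needs_review

parsed, needs_review = parse_lines(["1", "2", "oops", "4"])
print(parsed)        # [1, 2, 4]
print(needs_review)  # ['oops']
```

A bare `except:` would also swallow surprises like `NameError` or `KeyboardInterrupt`; naming `ValueError` keeps the silencing as narrow, and as reviewable, as possible.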
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/12/zen-python-errors - -作者:[Moshe Zadka][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/moshez -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk (a checklist for a team) -[2]: https://www.python.org/dev/peps/pep-0020/ -[3]: https://wiki.c2.com/ -[4]: https://en.wikipedia.org/wiki/Smalltalk diff --git a/sources/tech/20200223 The Zen of Go.md b/sources/tech/20200223 The Zen of Go.md index c4143aed32..249c3790cd 100644 --- a/sources/tech/20200223 The Zen of Go.md +++ b/sources/tech/20200223 The Zen of Go.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: ( chensanle ) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) diff --git a/sources/tech/20200323 5 Python scripts for automating basic community management tasks.md b/sources/tech/20200323 5 Python scripts for automating basic community management tasks.md deleted file mode 100644 index 3f8dd00aa1..0000000000 --- a/sources/tech/20200323 5 Python scripts for automating basic community management tasks.md +++ /dev/null @@ -1,114 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (5 Python scripts for automating basic community management tasks) -[#]: via: (https://opensource.com/article/20/3/automating-community-management-python) -[#]: author: (Rich Bowen https://opensource.com/users/rbowen) - -5 Python scripts for automating basic community management tasks -====== -If you have to do something three times, try to automate it. 
-![shapes of people symbols][1] - -I've [written before about what a community manager does][2], and if you ask ten community managers, you'll get 12 different answers. Mostly, though, you do what the community needs you to do at any given moment. And a lot of it can be repetitive. - -Back when I was a sysadmin, I had a rule: if I had to do something three times, I'd try to automate it. And, of course, these days, with awesome tools like Ansible, there's a whole science to that. - -Some of what I do on a daily or weekly basis involves looking something up in a few places and then generating some digest or report of that information to publish elsewhere. A task like that is a perfect candidate for automation. None of this is [rocket surgery][3], but when I've shared some of these scripts with colleagues, invariably, at least one of them turns out to be useful. - -[On GitHub][4], I have several scripts that I use every week. None of them are complicated, but they save me a few minutes every time. Some of them are in Perl because I'm almost 50. Some of them are in Python because a few years ago, I decided I needed to learn Python. Here's an overview: - -### **[tshirts.py][5]** - -This simple script takes the number of T-shirts that you're going to order for an event and tells you what the size distribution should be. It spreads them on a normal curve (also called a bell curve), and, in my experience, this coincides pretty well with what you'll actually need for a normal conference audience. You might want to adjust the script to slightly larger if you're using it in the USA, slightly smaller if you're using it in Europe. YMMV. - -Usage: - - -``` -[rbowen@sasha:community-tools/scripts]$ ./tshirts.py                                                                                                                                                           -How many shirts? 
300 -For a total of 300 shirts, order: - -30.0 small -72.0 medium -96.0 large -72.0 xl -30.0 2xl -``` - -### **[followers.py][6]** - -This script provides me with the follower count for Twitter handles I care about. - -This script is only 14 lines long and isn't exciting, but it saves me perhaps ten minutes of loading web pages and looking for a number. - -You'll need to edit the feeds array to add the accounts you care about: - - -``` -feeds = [ -        'centosproject', -        'centos' -        ]; -``` - -NB: It probably won't work if you're running it outside of English-speaking countries, because it's just a simple screen-scraping script that reads HTML and looks for particular information buried within it. So when the output is in a different language, the regular expressions won't match. - -Usage: - - -``` -[rbowen@sasha:community-tools/scripts]$ ./followers.py                                                                                                                                                                           -centosproject: 11,479 Followers -centos: 18,155 Followers -``` - -### **[get_meetups][7]** - -This script fits into another category—API scripts. This particular script uses the [meetup.com][8] API to look for meetups on a particular topic in a particular area and time range so that I can report them to my community. Many of the services you rely on provide an API so that your scripts can look up information without having to manually look through web pages. Learning how to use those APIs can be frustrating and time-consuming, but you'll end up with skills that will save you a LOT of time. - -_Disclaimer: [meetup.com][8] changed their API in August of 2019, and I have not yet updated this script to the new API, so it doesn't actually work right now. 
Watch this repo for a fixed version in the coming weeks._ - -### **[centos-announcements.pl][9]** - -This script is considerably more complicated and extremely specific to my use case, but you probably have a similar situation. This script looks at a mailing list archive—in this case, the centos-announce mailing list—and finds messages that are in a particular format, then builds a report of those messages. Reports come in a couple of different formats—one for my monthly newsletter and one for scheduling messages (via Hootsuite) for Twitter. - -I use Hootsuite to schedule content for Twitter, and they have a convenient CSV (comma-separated value) format that lets you bulk-schedule a whole week of tweets in one go. Auto-generating that CSV from various data sources (i.e., mailing lists, blogs, other web pages) can save you a lot of time. Do note, however, that this should probably only be used for a first draft, which you then examine and edit yourself so that you don't end up auto-tweeting something you didn't intend to. - -### **[reporting.pl][10]** - -This script is also fairly specific to my particular needs, but the concept itself is universal. I send out a monthly mailing to the [CentOS SIGs][11] (Special Interest Groups), which are scheduled to report in that given month. This script simply tells me which SIGs those are this month, and writes the email that needs to go to them. - -It does not actually send that email, however, for a couple of reasons. One, I may wish to edit those messages before they go out. Two, while scripts sending email worked great in the old days, these days, they're likely to result in getting spam-filtered. - -### In conclusion - -There are some other scripts in that repo that are more or less specific to my particular needs, but I hope at least one of them is useful to you, and that the variety of what's there inspires you to automate something of your own. 
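The kernel of that reporting idea fits in a few lines of Python. This is only a hedged sketch of the concept, not the real reporting.pl: the SIG names and the reporting schedule below are invented for illustration.

```python
import datetime

# Invented example schedule: each SIG reports quarterly, on a staggered cycle.
REPORTING_MONTHS = {
    "Cloud SIG":   {1, 4, 7, 10},
    "Storage SIG": {2, 5, 8, 11},
    "Virt SIG":    {3, 6, 9, 12},
}

def sigs_due(month):
    """Return the SIGs scheduled to report in the given month (1-12)."""
    return [sig for sig, months in REPORTING_MONTHS.items() if month in months]

# Draft (but do not send) a reminder for each SIG due this month.
for sig in sigs_due(datetime.date.today().month):
    print(f"Draft reminder email for {sig}")
```

Keeping the schedule in a plain dictionary makes the "which SIGs report this month?" question a one-line lookup, and the script can stop short of actually sending anything.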
I'd love to see your handy automation script repos, too; link to them in the comments! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/3/automating-community-management-python - -作者:[Rich Bowen][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/rbowen -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Open%20Pharma.png?itok=GP7zqNZE (shapes of people symbols) -[2]: http://drbacchus.com/what-does-a-community-manager-do/ -[3]: https://6dollarshirts.com/rocket-surgery -[4]: https://github.com/rbowen/centos-community-tools/tree/master/scripts -[5]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/tshirts.py -[6]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/followers.py -[7]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/get_meetups -[8]: http://meetup.com -[9]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/centos-announcements.pl -[10]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/sig_reporting/reporting.pl -[11]: https://wiki.centos.org/SpecialInterestGroup diff --git a/sources/tech/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md b/sources/tech/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md deleted file mode 100644 index f2fd06793f..0000000000 --- a/sources/tech/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md +++ /dev/null @@ -1,250 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Getting Started With Pacman Commands in Arch-based Linux 
Distributions) -[#]: via: (https://itsfoss.com/pacman-command/) -[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/) - -Getting Started With Pacman Commands in Arch-based Linux Distributions -====== - -_**Brief: This beginner’s guide shows you what you can do with pacman commands in Linux, how to use them to find new packages, install and upgrade new packages, and clean your system.**_ - -The [pacman][1] package manager is one of the main differences between [Arch Linux][2] and other major distributions like Red Hat and Ubuntu/Debian. It combines a simple binary package format with an easy-to-use [build system][3]. The aim of pacman is to easily manage packages, either from the [official repositories][4] or the user’s own builds. - -If you ever used Ubuntu or Debian-based distributions, you might have used the apt-get or apt commands. Pacman is the equivalent in Arch Linux. If you [just installed Arch Linux][5], one of the first few [things to do after installing Arch Linux][6] is to learn to use pacman commands. - -In this beginner’s guide, I’ll explain some of the essential usage of the pacman command that you should know for managing your Arch-based system. - -### Essential pacman commands Arch Linux users should know - -![][7] - -Like other package managers, pacman can synchronize package lists with the software repositories to allow the user to download and install packages with a simple command by solving all required dependencies. - -#### Install packages with pacman - -You can install a single package or multiple packages using the pacman command in this fashion: - -``` -pacman -S _package_name1_ _package_name2_ ... -``` - -![Installing a package][8] - -The -S stands for synchronization. 
It means that pacman installs the package from the sync repositories: the remote repositories whose package databases are kept synchronized on your system. - -The pacman database categorises the installed packages in two groups according to the reason why they were installed: - - * **explicitly-installed**: the packages that were installed by a generic pacman -S or -U command - * **dependencies**: the packages that were implicitly installed because [required][9] by another package that was explicitly installed. - - - -#### Remove an installed package - -To remove a single package, leaving all of its dependencies installed: - -``` -pacman -R _package_name_ -``` - -![Removing a package][10] - -To remove a package and its dependencies which are not required by any other installed package: - -``` -pacman -Rs _package_name_ -``` - -To remove dependencies that are no longer needed (for example, because the package which needed them was removed): - -``` -pacman -Qdtq | pacman -Rs - -``` - -#### Upgrading packages - -Pacman provides an easy way to [update Arch Linux][11]. You can update all installed packages with just one command. This could take a while depending on how up-to-date the system is. - -The following command synchronizes the repository databases _and_ updates the system’s packages, excluding “local” packages that are not in the configured repositories: - -``` -pacman -Syu -``` - - * S stands for sync - * y is for refresh (refresh the local copy of the master package database) - * u is for system update - - - -Basically, it tells pacman to sync with the central repository (master package database), refresh the local copy of the master package database, and then perform the system update (by updating all packages that have a newer version available). - -![System update][12] - -Attention! - -If you are an Arch Linux user, it is advised to visit the [Arch Linux home page][2] before upgrading to check the latest news for out-of-the-ordinary updates. If manual intervention is needed, an appropriate news post will be made. Alternatively you can subscribe to the [RSS feed][13] or the [arch-announce mailing list][14]. 
Also be mindful to look over the appropriate [forum][15] before upgrading fundamental software (such as the kernel, xorg, systemd, or glibc), for any reported problems. - -**Partial upgrades are unsupported** on a rolling release distribution such as Arch and Manjaro. That means when new library versions are pushed to the repositories, all the packages in the repositories need to be rebuilt against the libraries. For example, if two packages depend on the same library, upgrading only one package might break the other package which depends on an older version of the library. - -#### Use pacman to search for packages - -Pacman queries the local package database with the -Q flag, the sync database with the -S flag and the files database with the -F flag. - -Pacman can search for packages in the database, both in packages’ names and descriptions: - -``` -pacman -Ss _string1_ _string2_ ... -``` - -![Searching for a package][16] - -To search for already installed packages: - -``` -pacman -Qs _string1_ _string2_ ... -``` - -To search for package file names in remote packages: - -``` -pacman -F _string1_ _string2_ ... -``` - -To view the dependency tree of a package: - -``` -pactree _package_name_ -``` - -#### Cleaning the package cache - -Pacman stores its downloaded packages in /var/cache/pacman/pkg/ and does not remove the old or uninstalled versions automatically. This has some advantages: - - 1. It allows you to [downgrade][17] a package without the need to retrieve the previous version through other sources. - 2. A package that has been uninstalled can easily be reinstalled directly from the cache folder. - - - -However, it is necessary to clean up the cache periodically to prevent the folder from growing in size. 
- -The [paccache(8)][18] script, provided within the [pacman-contrib][19] package, deletes all cached versions of installed and uninstalled packages, except for the most recent 3, by default: - -``` -paccache -r -``` - -![Clear cache][20] - -To remove all the cached packages that are not currently installed, and the unused sync database, execute: - -``` -pacman -Sc -``` - -To remove all files from the cache, use the clean switch twice; this is the most aggressive approach and will leave nothing in the cache folder: - -``` -pacman -Scc -``` - -#### Installing local or third-party packages - -Install a ‘local’ package that is not from a remote repository: - -``` -pacman -U _/path/to/package/package_name-version.pkg.tar.xz_ -``` - -Install a ‘remote’ package, not contained in an official repository: - -``` -pacman -U http://www.example.com/repo/example.pkg.tar.xz -``` - -### Bonus: Troubleshooting common errors with pacman - -Here are some common errors you may encounter while managing packages with pacman. - -#### Failed to commit transaction (conflicting files) - -If you see the following error: - -``` -error: could not prepare transaction -error: failed to commit transaction (conflicting files) -package: /path/to/file exists in filesystem -Errors occurred, no packages were upgraded. -``` - -This is happening because pacman has detected a file conflict and will not overwrite files for you. - -A safe way to solve this is to first check if another package owns the file (pacman -Qo _/path/to/file_). If the file is owned by another package, file a bug report. If the file is not owned by another package, rename the file which ‘exists in filesystem’ and re-issue the update command. If all goes well, the file may then be removed. - -Instead of manually renaming and later removing all the files that belong to the package in question, you may explicitly run _**pacman -S --overwrite glob package**_ to force pacman to overwrite files that match _glob_. 
- -#### Failed to commit transaction (invalid or corrupted package) - -Look for .part files (partially downloaded packages) in /var/cache/pacman/pkg/ and remove them. It is often caused by usage of a custom XferCommand in pacman.conf. - -#### Failed to init transaction (unable to lock database) - -When pacman is about to alter the package database, for example installing a package, it creates a lock file at /var/lib/pacman/db.lck. This prevents another instance of pacman from trying to alter the package database at the same time. - -If pacman is interrupted while changing the database, this stale lock file can remain. If you are certain that no instances of pacman are running, then delete the lock file. - -Check if a process is holding the lock file: - -``` -lsof /var/lib/pacman/db.lck -``` - -If the above command doesn’t return anything, you can remove the lock file: - -``` -rm /var/lib/pacman/db.lck -``` - -If you find the PID of the process holding the lock file in the lsof command output, kill it first and then remove the lock file. - -I hope you like my humble effort in explaining the basic pacman commands. Please leave your comments below and don’t forget to subscribe on our social media. Stay safe! 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/pacman-command/ - -作者:[Dimitrios Savvopoulos][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/dimitrios/ -[b]: https://github.com/lujun9972 -[1]: https://www.archlinux.org/pacman/ -[2]: https://www.archlinux.org/ -[3]: https://wiki.archlinux.org/index.php/Arch_Build_System -[4]: https://wiki.archlinux.org/index.php/Official_repositories -[5]: https://itsfoss.com/install-arch-linux/ -[6]: https://itsfoss.com/things-to-do-after-installing-arch-linux/ -[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/essential-pacman-commands.jpg?ssl=1 -[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-S.png?ssl=1 -[9]: https://wiki.archlinux.org/index.php/Dependency -[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-R.png?ssl=1 -[11]: https://itsfoss.com/update-arch-linux/ -[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Syu.png?ssl=1 -[13]: https://www.archlinux.org/feeds/news/ -[14]: https://mailman.archlinux.org/mailman/listinfo/arch-announce/ -[15]: https://bbs.archlinux.org/ -[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Ss.png?ssl=1 -[17]: https://wiki.archlinux.org/index.php/Downgrade -[18]: https://jlk.fjfi.cvut.cz/arch/manpages/man/paccache.8 -[19]: https://www.archlinux.org/packages/?name=pacman-contrib -[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-paccache-r.png?ssl=1 diff --git a/sources/tech/20200529 A new way to build cross-platform UIs for Linux ARM devices.md b/sources/tech/20200529 A new way to build cross-platform UIs for Linux ARM devices.md deleted file mode 100644 index 248d3a4dbc..0000000000 --- a/sources/tech/20200529 A new way to build 
cross-platform UIs for Linux ARM devices.md +++ /dev/null @@ -1,182 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (A new way to build cross-platform UIs for Linux ARM devices) -[#]: via: (https://opensource.com/article/20/5/linux-arm-ui) -[#]: author: (Bruno Muniz https://opensource.com/users/brunoamuniz) - -A new way to build cross-platform UIs for Linux ARM devices -====== -A proof of concept using AndroidXML and TotalCross provides an easier -way of creating UIs for Raspberry Pi and other devices. -![Digital images of a computer desktop][1] - -Creating a great user experience (UX) for your applications is a tough job, especially if you are developing embedded applications. Today, there are two types of graphical user interface (GUI) tools generally available for developing embedded software: either they involve complex technologies, or they are extremely expensive. - -However, we have created a proof of concept (PoC) for a new way to use existing, well-established tools to build user interfaces (UIs) for applications that run on desktop, mobile, embedded devices, and low-power Linux ARM devices. Our method uses Android Studio to draw the UI; [TotalCross][2] to render the Android XML on the device; a new [TotalCross API][3] called [KnowCode][4]; and a [Raspberry Pi 4][5] to execute the application. - -### Choosing Android Studio - -It's possible to build a responsive and beautiful UX for an application using the TotalCross API, but creating the UI in Android Studio shortens the time between prototyping and the real application. - -There are a lot of tools available to build UIs for applications, but [Android Studio][6] is the one developers worldwide use most often. In addition to its massive adoption, this tool is also super-intuitive to use, and it's really powerful for creating both simple and complex applications. 
The only drawback, in my opinion, is the computing power required to use the tool, which is way heavier than other integrated development environments (IDEs) like VSCode or its open source alternative, [VSCodium][7]. - -By thinking through these issues, we created a proof of concept using Android Studio to draw the UI and TotalCross to run the Android XML directly on the device. - -### Building the UI - -For our PoC, we wanted to create a home-appliance application to control temperature and other things and that would run on a Linux ARM device. - -![Home appliance application to control thermostat][8] - -(Bruno Muniz, [CC BY-SA 4.0][9]) - -We wanted to develop our application for a Raspberry Pi, so we used Android's [ConstraintLayout][10] to build a fixed-screen-size UI of 848x480 (the Raspberry Pi's resolution), but you can build responsive UIs with other layouts. - -Android XML adds a lot of flexibility for UI creation, making it easy to build rich user experiences for applications. In the XML below, we used two main components: [ImageView][11] and [TextView][12]. - - -``` -<ImageView -android:id="@+id/imageView6" -android:layout_width="273dp" -android:layout_height="291dp" -android:background="@drawable/Casa" -tools:layout_editor_absoluteX="109dp" -tools:layout_editor_absoluteY="95dp" /> -<TextView -android:id="@+id/insideTempEdit" -android:layout_width="94dp" -android:layout_height="92dp" -android:background="#F5F5F5" -android:text="20" -android:textAlignment="center" -android:gravity="center" -android:textColor="#000000" -android:textSize="67dp" -android:textStyle="bold" -tools:layout_editor_absoluteX="196dp" -tools:layout_editor_absoluteY="246dp" /> -``` - -The TextView elements are used to show some data to the user, like the temperature inside a building. Most ImageViews are used as buttons for user interaction with the UI, but they're also needed to implement the Events provided by the components on the screen. 
- -### Integrating with TotalCross - -The second technology in this PoC is TotalCross. We don't want to use anything from Android on the device because: - - 1. Our goal is to provide a great UI for Linux ARM. - 2. We want to achieve a low footprint on the device. - 3. We want the application to run on low-end hardware devices with low computing power (e.g., no GPU, low RAM, etc.). - - - -To begin, we created an empty TotalCross project using our [VSCode plugin][13]. Next, we saved a copy of the images inside the **drawable** folder and a copy of the Android XML file inside the **XML** folder—both are located inside the **Resources** folder: - -![Home Appliance file structure][14] - -(Bruno Muniz, [CC BY-SA 4.0][9]) - -To run the XML file using the TotalCross Simulator, we added a new TotalCross API called KnowCode and a MainWindow to load the XML. The code below uses the API to load and render the XML: - - -``` -public void initUI() { -    XmlScreenAbstractLayout xmlCont = XmlScreenFactory.create("xml/homeApplianceXML.xml"); -    swap(xmlCont); -} -``` - -That's it! With only two commands, we can run an Android XML file using TotalCross. Here is how the XML performs on TotalCross' simulator: - -![TotalCross simulator running temperature application][15] - -(Bruno Muniz, [CC BY-SA 4.0][9]) - -There are two things remaining to finish this PoC: adding some events to provide user interaction and running it on a Raspberry Pi. - -### Adding events - -The KnowCode API provides a way to get an XML element by its ID (getControlByID) and change its behavior to do things like adding events, changing visibility, and more.
- -For example, to enable users to change the temperature in their home or other building, we put plus and minus buttons on the bottom of the UI and a "click" event that increases or decreases the temperature by one degree every time a button is clicked: - - -``` -Button plus = (Button) xmlCont.getControlByID("@+id/plus"); -Label insideTempLabel = (Label) xmlCont.getControlByID("@+id/insideTempLabel"); -plus.addPressListener(new PressListener() { -    @Override -    public void controlPressed(ControlEvent e) { -        try { -            String tempString = insideTempLabel.getText(); -            int temp = Convert.toInt(tempString); -            insideTempLabel.setText(Convert.toString(++temp)); -        } catch (InvalidNumberException e1) { -            e1.printStackTrace(); -        } -    } -}); -``` - -### Testing on a Raspberry Pi 4 - -Finally, the last step! We ran the application on a device and checked the results. We just needed to package the application, then deploy and run it on the target device. A [VNC][19] can also be used to check the application on the device.
- -We are not aiming to create a new tool for designers or developers to build UI applications; our goal is to provide new possibilities for using the best tools that are already available. - -What's your opinion of this new way to build apps? Share your thoughts in the comments below. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/5/linux-arm-ui - -作者:[Bruno Muniz][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/brunoamuniz -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop) -[2]: https://totalcross.com/ -[3]: https://yourapp.totalcross.com/knowcode-app -[4]: https://github.com/TotalCross/KnowCodeXML -[5]: https://www.raspberrypi.org/ -[6]: https://developer.android.com/studio -[7]: https://vscodium.com/ -[8]: https://opensource.com/sites/default/files/uploads/homeapplianceapp.png (Home appliance application to control thermostat) -[9]: https://creativecommons.org/licenses/by-sa/4.0/ -[10]: https://codelabs.developers.google.com/codelabs/constraint-layout/index.html#0 -[11]: https://developer.android.com/reference/android/widget/ImageView -[12]: https://developer.android.com/reference/android/widget/TextView -[13]: https://medium.com/totalcross-community/totalcross-plugin-for-vscode-4f45da146a0a -[14]: https://opensource.com/sites/default/files/uploads/homeappliancexml.png (Home Appliance file structure) -[15]: https://opensource.com/sites/default/files/uploads/totalcross-simulator_0.png (TotalCross simulator running temperature application) -[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+button -[17]: 
http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+label -[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string -[19]: https://tigervnc.org/ -[20]: https://opensource.com/sites/default/files/uploads/application.gif (Application demo) -[21]: https://github.com/TotalCross/HomeApplianceXML diff --git a/sources/tech/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md b/sources/tech/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md deleted file mode 100644 index 9231d75240..0000000000 --- a/sources/tech/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md +++ /dev/null @@ -1,201 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself) -[#]: via: (https://itsfoss.com/arch-based-linux-distros/) -[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/) - -Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself -====== - -In the Linux community, [Arch Linux][1] has a cult following. This lightweight distribution provides the bleeding edge updates with a DIY (do it yourself) attitude. - -However, Arch is also aimed at more experienced users. As such, it is generally considered to be beyond the reach of those who lack the technical expertise (or persistence) required to use it. - -In fact, the very first steps, [installing Arch Linux itself is enough to scare many people off][2]. Unlike most other distributions, Arch Linux doesn’t have an easy to use graphical installer. 
You have to partition the disk, connect to the internet, mount drives, and create file systems, etc., using command-line tools only. - -For those who want to experience Arch without the hassle of the complicated installation and setup, there exists a number of user-friendly Arch-based distributions. - -In this article, I'll show you some of these Arch alternative distributions. These distributions come with a graphical installer, a graphical package manager, and other tools that are easier to use than their command-line alternatives. - -### Arch-based Linux distributions that are easier to set up and use - -![][3] - -Please note that this is not a ranking list. The numbers are just for counting purposes. The distribution at number two should not be considered better than the distribution at number seven. - -#### 1\. Manjaro Linux - -![][4] - -[Manjaro][5] doesn't need any introduction. It has been one of the most popular Linux distributions for several years, and it deserves it. - -Manjaro provides all the benefits of Arch Linux combined with a focus on user-friendliness and accessibility. Manjaro is suitable for newcomers and experienced Linux users alike. - -**For newcomers**, a user-friendly installer is provided, and the system itself is designed to work fully 'straight out of the box' with your [favourite desktop environment][6] (DE) or window manager. - -**For more experienced users,** Manjaro also offers versatility to suit every personal taste and preference. [Manjaro Architect][7] gives you the option to install any Manjaro flavour and offers unflavoured DE installation, filesystem ([recently introduced ZFS][8]) and bootloader choices for those who want complete freedom to shape their system. - -Manjaro is also a rolling-release, cutting-edge distribution. However, unlike Arch, Manjaro tests updates first and then provides them to its users. Stability gets importance here, too. - -#### 2\.
ArcoLinux - -![][9] - -[ArcoLinux][10] (previously known as ArchMerge) is a distribution based on Arch Linux. The development team offers three variations: ArcoLinux, ArcoLinuxD, and ArcoLinuxB. - -**ArcoLinux** is a full-featured distribution that ships with the [Xfce desktop][11], [Openbox][12] and [i3 window managers][13]. - -**ArcoLinuxD** is a minimal distribution that includes scripts that enable power users to install any desktop and application. - -**ArcoLinuxB** is a project that gives users the power to build custom distributions, while also developing several community editions with pre-configured desktops, such as Awesome, bspwm, Budgie, Cinnamon, Deepin, GNOME, MATE and KDE Plasma. - -ArcoLinux also provides various video tutorials, as it places a strong focus on learning and acquiring Linux skills. - -#### 3\. ArchLabs Linux - -![][14] - -[ArchLabs Linux][15] is a lightweight rolling-release Linux distribution based on a minimal Arch Linux base with the [Openbox][16] window manager. [ArchLabs][17] is influenced and inspired by the look and feel of [BunsenLabs][18], with the intermediate to advanced user in mind. - -#### 4\. Archman Linux - -![][19] - -[Archman][20] is an independent project. Arch Linux distros in general are not ideal operating systems for users with little Linux experience; considerable background reading is necessary for things to make sense with minimal frustration. The developers of Archman Linux are trying to change that reputation. - -Archman's development process is built around user feedback and experience: the team blends its own past experience with users' feedback and requests to determine the road map and carry out the build work. - -#### 5\. EndeavourOS - -![][21] - -When the popular Arch-based distribution [Antergos was discontinued in 2019][22], it left a friendly and extremely helpful community behind.
The Antergos project ended because the system was too hard for the developers to maintain. - -Within a matter of days after the announcement, a few experienced users planned to maintain the former community by creating a new distribution to fill the void left by Antergos. That's how [EndeavourOS][23] was born. - -[EndeavourOS][24] is lightweight and ships with a minimal amount of preinstalled apps: an almost blank canvas, ready to personalise. - -#### 6\. RebornOS - -![][25] - -The [RebornOS][26] developers' goal is to bring the true power of Linux to everyone, with one ISO for 15 desktop environments and unlimited opportunities for customization. - -RebornOS also claims to support [Anbox][27] for running Android applications on desktop Linux. It also offers a simple kernel manager GUI tool. - -Coupled with [Pacman][28], the [AUR][29], and a customized version of the Cnchi graphical installer, Arch Linux is finally available for even the least experienced users. - -#### 7\. Chakra Linux - -![][30] - -[Chakra Linux][31] is a community-developed GNU/Linux distribution with an emphasis on KDE and Qt technologies. It does not schedule releases for specific dates but uses a "half-rolling release" system. - -This means that the core packages of Chakra Linux are frozen and only updated to fix security problems. These packages are updated after the latest versions have been thoroughly tested, before being moved to the permanent repository (about every six months). - -In addition to the official repositories, users can install packages from the Chakra Community Repository (CCR), which provides user-made PKGINFOs and [PKGBUILD][32] scripts for software not included in the official repositories; it is inspired by the Arch User Repository. - -#### 8\. Artix Linux - -![Artix Mate Edition][33] - -[Artix Linux][34] is a rolling-release distribution based on Arch Linux that uses [OpenRC][35], [runit][36] or [s6][37] init instead of [systemd][38].
- -Artix Linux has its own package repositories, but as a pacman-based distribution, it can use packages from the Arch Linux repositories or from any other derivative distribution, even packages that explicitly depend on systemd. The [Arch User Repository][29] (AUR) can also be used. - -#### 9\. BlackArch Linux - -![][39] - -BlackArch is a [penetration testing distribution][40] based on Arch Linux that provides a large number of cyber security tools. It is specially created for penetration testers and security researchers. The repository contains more than 2400 [hacking and pen-testing tools][41] that can be installed individually or in groups. BlackArch Linux is compatible with existing Arch Linux packages. - -### Want real Arch Linux? Simplify the installation with a graphical Arch installer - -If you want to use the actual Arch Linux but you are not comfortable with the difficult installation, fortunately you can download an Arch Linux ISO baked with a graphical installer. - -An Arch installer is basically an Arch Linux ISO with a relatively easy-to-use text-based installer. It is much easier than a bare-bones Arch installation. - -#### Anarchy Installer - -![][42] - -The [Anarchy installer][43] intends to provide both novice and experienced Linux users a simple and pain-free way to install Arch Linux. Install when you want it, where you want it, and however you want it. That is the Anarchy philosophy. - -Once you boot up the installer, you'll be shown a simple [TUI menu][44] listing all the available installer options. - -#### Zen Installer - -![][45] - -The [Zen Installer][46] provides a full graphical (point-and-click) environment for installing Arch Linux. It provides support for installing multiple desktop environments, the AUR, and all of the power and flexibility of Arch Linux with the ease of a graphical installer. - -The ISO will boot the live environment, and then download the most current stable version of the installer after you connect to the internet.
So, you will always get the newest installer with updated features. - -### Conclusion - -An Arch-based distribution is always an excellent, hassle-free choice for many users, but a graphical installer like Anarchy is at least a step closer to how Arch Linux truly tastes. - -In my opinion, the [real beauty of Arch Linux is its installation process][2], and for a Linux enthusiast it is an opportunity to learn rather than a hassle. Arch Linux and its derivatives have a lot for you to mess around with, and It's FOSS will unravel the mysteries behind the scenes. See you at my next tutorial! - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/arch-based-linux-distros/ - -作者:[Dimitrios Savvopoulos][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/dimitrios/ -[b]: https://github.com/lujun9972 -[1]: https://www.archlinux.org/ -[2]: https://itsfoss.com/install-arch-linux/ -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/arch-based-linux-distributions.png?ssl=1 -[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/manjaro-20.jpg?ssl=1 -[5]: https://manjaro.org/ -[6]: https://itsfoss.com/best-linux-desktop-environments/ -[7]: https://itsfoss.com/manjaro-architect-review/ -[8]: https://itsfoss.com/manjaro-20-release/ -[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/arcolinux.png?ssl=1 -[10]: https://arcolinux.com/ -[11]: https://www.xfce.org/ -[12]: http://openbox.org/wiki/Main_Page -[13]: https://i3wm.org/ -[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Archlabs.jpg?ssl=1 -[15]: https://itsfoss.com/archlabs-review/ -[16]: https://en.wikipedia.org/wiki/Openbox -[17]: https://archlabslinux.com/ -[18]: https://www.bunsenlabs.org/ -[19]:
https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Archman.png?ssl=1 -[20]: https://archman.org/en/ -[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/04_endeavouros_slide.jpg?ssl=1 -[22]: https://itsfoss.com/antergos-linux-discontinued/ -[23]: https://itsfoss.com/endeavouros/ -[24]: https://endeavouros.com/ -[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/RebornOS.png?ssl=1 -[26]: https://rebornos.org/ -[27]: https://anbox.io/ -[28]: https://itsfoss.com/pacman-command/ -[29]: https://itsfoss.com/aur-arch-linux/ -[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/Chakra_Goedel_Screenshot.png?ssl=1 -[31]: https://www.chakralinux.org/ -[32]: https://wiki.archlinux.org/index.php/PKGBUILD -[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Artix_MATE_edition.png?ssl=1 -[34]: https://artixlinux.org/ -[35]: https://en.wikipedia.org/wiki/OpenRC -[36]: https://en.wikipedia.org/wiki/Runit -[37]: https://en.wikipedia.org/wiki/S6_(software) -[38]: https://en.wikipedia.org/wiki/Systemd -[39]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/BlackArch.png?ssl=1 -[40]: https://itsfoss.com/linux-hacking-penetration-testing/ -[41]: https://itsfoss.com/best-kali-linux-tools/ -[42]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/anarchy.jpg?ssl=1 -[43]: https://anarchyinstaller.org/ -[44]: https://en.wikipedia.org/wiki/Text-based_user_interface -[45]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/zen.jpg?ssl=1 -[46]: https://sourceforge.net/projects/revenge-installer/ diff --git a/sources/tech/20200615 LaTeX Typesetting - Part 1 (Lists).md b/sources/tech/20200615 LaTeX Typesetting - Part 1 (Lists).md deleted file mode 100644 index ea454aa0b6..0000000000 --- a/sources/tech/20200615 LaTeX Typesetting - Part 1 (Lists).md +++ /dev/null @@ -1,262 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (zhangxiangping) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (LaTeX 
Typesetting – Part 1 (Lists)) -[#]: via: (https://fedoramagazine.org/latex-typesetting-part-1/) -[#]: author: (Earl Ramirez https://fedoramagazine.org/author/earlramirez/) - -LaTeX Typesetting – Part 1 (Lists) -====== - -![][1] - -This series builds on the previous articles: [Typeset your docs with LaTex and TeXstudio on Fedora][2] and [LaTeX 101 for beginners][3]. This first part of the series is about LaTeX lists. - -### Types of lists - -LaTeX lists are enclosed environments, and each item in the list can take a line of text to a full paragraph. There are three types of lists available in LaTeX. They are: - - * **Itemized**: unordered or bullet - * **Enumerated**: ordered - * **Description**: descriptive - - - -### Creating lists - -To create a list, prefix each list item with the \_item_ command. Precede and follow the list of items with the \_begin_{<type>} and \_end_{<type>} commands respectively where <type> is substituted with the type of the list as illustrated in the following examples. - -#### Itemized list - -``` -\begin{itemize} - \item Fedora - \item Fedora Spin - \item Fedora Silverblue -\end{itemize} -``` - -![][4] - -#### Enumerated list - -``` -\begin{enumerate} - \item Fedora CoreOS - \item Fedora Silverblue - \item Fedora Spin -\end{enumerate} -``` - -![][5] - -#### Descriptive list - -``` -\begin{description} - \item[Fedora 6] Code name Zod - \item[Fedora 8] Code name Werewolf -\end{description} -``` - -![][6] - -### Spacing list items - -The default spacing can be customized by adding \_usepackage{enumitem}_ to the preamble. The _enumitem_ package enables the _noitemsep_ option and the \_itemsep_ command which you can use on your lists as illustrated below. - -#### Using the noitemsep option - -Enclose the _noitemsep_ option in square brackets and place it on the \_begin_ command as shown below. This option removes the default spacing. 
- -``` -\begin{itemize}[noitemsep] - \item Fedora - \item Fedora Spin - \item Fedora Silverblue -\end{itemize} -``` - -![][7] - -#### Using the \itemsep command - -The \_itemsep_ command must be suffixed with a number to indicate how much space there should be between the list items. - -``` -\begin{itemize} \itemsep0.75pt - \item Fedora Silverblue - \item Fedora CoreOS -\end{itemize} -``` - -![][8] - -### Nesting lists - -LaTeX supports nested lists up to four levels deep as illustrated below. - -#### Nested itemized lists - -``` -\begin{itemize}[noitemsep] - \item Fedora Versions - \begin{itemize} - \item Fedora 8 - \item Fedora 9 - \begin{itemize} - \item Werewolf - \item Sulphur - \begin{itemize} - \item 2007-05-31 - \item 2008-05-13 - \end{itemize} - \end{itemize} - \end{itemize} - \item Fedora Spin - \item Fedora Silverblue -\end{itemize} -``` - -![][9] - -#### Nested enumerated lists - -``` -\begin{enumerate}[noitemsep] - \item Fedora Versions - \begin{enumerate} - \item Fedora 8 - \item Fedora 9 - \begin{enumerate} - \item Werewolf - \item Sulphur - \begin{enumerate} - \item 2007-05-31 - \item 2008-05-13 - \end{enumerate} - \end{enumerate} - \end{enumerate} - \item Fedora Spin - \item Fedora Silverblue -\end{enumerate} -``` - -![][10] - -### List style names for each list type - -**Enumerated** | **Itemized** ----|--- -\alph* | $\bullet$ -\Alph* | $\cdot$ -\arabic* | $\diamond$ -\roman* | $\ast$ -\Roman* | $\circ$ -| $-$ - -### Default style by list depth - -**Level** | **Enumerated** | **Itemized** ----|---|--- -1 | Number | Bullet -2 | Lowercase alphabet | Dash -3 | Roman numerals | Asterisk -4 | Uppercase alphabet | Period - -### Setting list styles - -The below example illustrates each of the different itemized list styles.
- -``` -% Itemize style -\begin{itemize} - \item[$\ast$] Asterisk - \item[$\diamond$] Diamond - \item[$\circ$] Circle - \item[$\cdot$] Period - \item[$\bullet$] Bullet (default) - \item[--] Dash - \item[$-$] Another dash -\end{itemize} -``` - -![][11] - -There are three methods of setting list styles. They are illustrated below. These methods are listed by priority; highest priority first. A higher priority will override a lower priority if more than one is defined for a list item. - -#### List styling method 1 – per item - -Enclose the name of the desired style in square brackets and place it on the \_item_ command as demonstrated below. - -``` -% First method -\begin{itemize} - \item[$\ast$] Asterisk - \item[$\diamond$] Diamond - \item[$\circ$] Circle - \item[$\cdot$] period - \item[$\bullet$] Bullet (default) - \item[--] Dash - \item[$-$] Another dash -\end{itemize} -``` - -#### List styling method 2 – on the list - -Prefix the name of the desired style with _label=_. Place the parameter, including the _label=_ prefix, in square brackets on the \_begin_ command as demonstrated below. - -``` -% Second method -\begin{enumerate}[label=\Alph*.] - \item Fedora 32 - \item Fedora 31 - \item Fedora 30 -\end{enumerate} -``` - -#### List styling method 3 – on the document - -This method changes the default style for the entire document. Use the \_renewcommand_ to set the values for the labelitems. There is a different labelitem for each of the four label depths as demonstrated below. - -``` -% Third method -\renewcommand{\labelitemi}{$\ast$} -\renewcommand{\labelitemii}{$\diamond$} -\renewcommand{\labelitemiii}{$\bullet$} -\renewcommand{\labelitemiv}{$-$} -``` - -### Summary - -LaTeX supports three types of lists. The style and spacing of each of the list types can be customized. More LaTeX elements will be explained in future posts. 
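Putting these pieces together, the techniques above can be combined in a single document. The following is a minimal sketch (the list items are illustrative; it assumes only the _enumitem_ package introduced earlier):

```
\documentclass{article}
\usepackage{enumitem} % provides the noitemsep and label= options

\begin{document}

% Compact itemized list using the noitemsep option
\begin{itemize}[noitemsep]
    \item Fedora Workstation
    \item Fedora Silverblue
\end{itemize}

% Enumerated list restyled on the list (method 2), using uppercase letters
\begin{enumerate}[label=\Alph*.]
    \item Fedora 32
    \item Fedora 31
\end{enumerate}

\end{document}
```

Compiling this produces a tight bullet list followed by an "A., B." enumeration, demonstrating methods 2 and the spacing options in one place.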
- -Additional reading about LaTeX lists can be found here: [LaTeX List Structures][12] - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/latex-typesetting-part-1/ - -作者:[Earl Ramirez][a] -选题:[lujun9972][b] -译者:[zhangxiangping](https://github.com/zxp93) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/earlramirez/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/latex-series-816x345.png -[2]: https://fedoramagazine.org/typeset-latex-texstudio-fedora -[3]: https://fedoramagazine.org/fedora-classroom-latex-101-beginners -[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-1.png -[5]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-2.png -[6]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-3.png -[7]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-4.png -[8]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-5.png -[9]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-7.png -[10]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-8.png -[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-9.png -[12]: https://en.wikibooks.org/wiki/LaTeX/List_Structures diff --git a/sources/tech/20200629 LaTeX typesetting part 2 (tables).md b/sources/tech/20200629 LaTeX typesetting part 2 (tables).md deleted file mode 100644 index 030f3be01a..0000000000 --- a/sources/tech/20200629 LaTeX typesetting part 2 (tables).md +++ /dev/null @@ -1,416 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (LaTeX typesetting part 2 (tables)) -[#]: via: (https://fedoramagazine.org/latex-typesetting-part-2-tables/) -[#]: author: (Earl Ramirez https://fedoramagazine.org/author/earlramirez/) - 
-LaTeX typesetting part 2 (tables) -====== - -![][1] - -LaTeX offers a number of tools to create and customise tables. In this series, we will be using the tabular and tabularx environments to create and customise tables. - -### Basic table - -To create a table, you simply specify the environment \begin{tabular}{columns}: - -``` -\begin{tabular}{c|c} -    Release &Codename \\ \hline -    Fedora Core 1 &Yarrow \\ -    Fedora Core 2 &Tettnang \\ -    Fedora Core 3 &Heidelberg \\ -    Fedora Core 4 &Stentz \\ -\end{tabular} -``` - -![Basic Table][2] - -In the above example, "{c|c}" in the curly brackets refers to the position of the text in the columns. The table below summarises each positional argument together with its description. - -Argument | Description ----|--- -c | Position text in the centre -l | Position text left-justified -r | Position text right-justified -p{width} | Align the text at the top of the cell -m{width} | Align the text in the middle of the cell -b{width} | Align the text at the bottom of the cell - ->Both m{width} and b{width} require the array package to be specified in the preamble. - -Using the example above, let us break down the important points used and describe a few more options that you will see in this series. - -Option | Description ----|--- -& | Defines each cell; the ampersand is only used from the second column -\\\\ | Terminates the row and starts a new row -\hline | Specifies a horizontal line (optional) -*{num}{form} | Handy when you have many columns; an efficient way of limiting repetition - -### Customising our table - -Now that we have seen some of the available options, let us create a table using the options described in the previous section.
-```
-\begin{tabular}{*{3}{|l|}}
-\hline
-    \textbf{Version} &\textbf{Code name} &\textbf{Year released} \\
-\hline
-    Fedora 6 &Zod &2006 \\ \hline
-    Fedora 7 &Moonshine &2007 \\ \hline
-    Fedora 8 &Werewolf &2007 \\
-\hline
-\end{tabular}
-```
-
-![Customise Table][3]
-
-### Managing long text
-
-With LaTeX, if a column contains a lot of text, it will not be formatted well and will not look presentable.
-
-The example below shows how long text is formatted by default; we will use "blindtext" in the preamble so that we can produce sample text.
-
-```
-\begin{tabular}{|l|l|}\hline
-    Summary &Description \\ \hline
-    Test &\blindtext \\
-\end{tabular}
-```
-
-![Default Formatting][4]
-
-As you can see, the text exceeds the page width; however, there are a couple of options to overcome this challenge.
-
-  * Specify the column width, for example, m{5cm}
-  * Utilise the tabularx environment; this requires the tabularx package in the preamble.
-
-### Managing long text with column width
-
-By specifying the column width, the text will be wrapped to that width, as shown in the example below.
-
-```
-\begin{tabular}{|l|m{14cm}|} \hline
-    Summary &Description \\ \hline
-    Test &\blindtext \\ \hline
-\end{tabular}\vspace{3mm}
-```
-
-![Column width][5]
-
-### Managing long text with tabularx
-
-Before we can leverage tabularx, we need to add it in the preamble. Tabularx takes the following form:
-
-**\begin{tabularx}{width}{columns}**
-
-```
-\begin{tabularx}{\textwidth}{|l|X|} \hline
-Summary & Tabularx Description\\ \hline
-Text &\blindtext \\ \hline
-\end{tabularx}
-```
-
-![Tabularx][6]
-
-Notice that the column in which we want the long text to be wrapped has a capital "X" specified.
-
-### Multirow and multicolumn
-
-There are times when you will need to merge rows and/or columns.
This section describes how that is accomplished. To use multirow and multicolumn, add multirow to the preamble.
-
-### Multirow
-
-Multirow takes the following argument: _\multirow{number_of_rows}{width}{text}_. Let us look at the example below.
-
-```
-\begin{tabular}{|l|l|}\hline
-    Release &Codename \\ \hline
-    Fedora Core 4 &Stentz \\ \hline
-    \multirow{2}{*}{MultiRow} &Fedora 8 \\
-    &Werewolf \\ \hline
-\end{tabular}
-```
-
-![MultiRow][7]
-
-In the above example, two rows were specified; the ‘*’ tells LaTeX to automatically manage the size of the cell.
-
-### Multicolumn
-
-The multicolumn argument is _\multicolumn{number_of_columns}{cell_position}{text}_. The example below demonstrates multicolumn.
-
-```
-\begin{tabular}{|l|l|l|}\hline
-    Release &Codename &Date \\ \hline
-    Fedora Core 4 &Stentz &2005 \\ \hline
-    \multicolumn{3}{|c|}{Multi-Column} \\ \hline
-\end{tabular}
-```
-
-![Multi-Column][8]
-
-### Working with colours
-
-Colours can be assigned to the text, an individual cell, or the entire row. Additionally, we can configure alternating colours for each row.
-
-Before we can add colour to our tables, we need to include _\usepackage[table]{xcolor}_ in the preamble. We can define colours using the colour reference [LaTeX Colour][9] or by adding an exclamation mark after the colour, followed by a shade from 0 to 100. For example, _gray!30_.
-
-```
-\definecolor{darkblue}{rgb}{0.0, 0.0, 0.55}
-\definecolor{darkgray}{rgb}{0.66, 0.66, 0.66}
-```
-
-The example below demonstrates a table with alternating colours; \rowcolors takes the following options: _\rowcolors{row_start_colour}{even_row_colour}{odd_row_colour}_.
-```
-\rowcolors{2}{darkgray}{gray!20}
-\begin{tabular}{c|c}
-    Release &Codename \\ \hline
-    Fedora Core 1 &Yarrow \\
-    Fedora Core 2 &Tettnang \\
-    Fedora Core 3 &Heidelberg \\
-    Fedora Core 4 &Stentz \\
-\end{tabular}
-```
-
-![Alt colour table][10]
-
-In addition to the above example, \rowcolor can be used to specify the colour of each row; this method works best when there are multi-rows. The following examples show the impact of using \rowcolors with multi-row and how to work around it.
-
-![Impact on multi-row][11]
-
-As you can see, the _multi-row_ is visible in the first row. To fix this, we have to do the following.
-
-```
-\begin{tabular}{|l|l|}\hline
-    \rowcolor{darkblue}\textsc{\color{white}Release} &\textsc{\color{white}Codename} \\ \hline
-    \rowcolor{gray!10}Fedora Core 4 &Stentz \\ \hline
-    \rowcolor{gray!40}&Fedora 8 \\
-    \rowcolor{gray!40}\multirow{-2}{*}{Multi-Row} &Werewolf \\ \hline
-\end{tabular}
-```
-
-![Multi-row][12]
-
-Let us discuss the changes that were implemented to resolve the multi-row and alternating-colour issue.
-
-  * The first row started above the multi-row
-  * The number of rows was changed from 2 to -2, which means to read from the line above
-  * \rowcolor was specified for each row; more importantly, the multi-rows must have the same colour so that you can get the desired results.
-
-One last note on colour: to change the colour of a column, you need to create a new column type and define the colour. The example below illustrates how to define the new column colour.
-```
-\newcolumntype{g}{>{\columncolor{darkblue}}l}
-```
-
-Let us break it down:
-
-  * \newcolumntype{g}: defines the letter _g_ as the new column type
-  * {>{\columncolor{darkblue}}l}: here we select our desired colour, and _l_ tells the column to be left-justified; this can be substituted with _c_ or _r_
-
-```
-\begin{tabular}{g|l}
-    \textsc{Release} &\textsc{Codename} \\ \hline
-    Fedora Core 4 &Stentz \\
-    &Fedora 8 \\
-    \multirow{-2}{*}{Multi-Row} &Werewolf \\
-\end{tabular}
-```
-
-![Column Colour][13]
-
-### Landscape table
-
-There may be times when your table has many columns and will not fit elegantly in portrait orientation. With the _rotating_ package in the preamble, you can create a sideways table. The example below demonstrates this.
-
-For the landscape table, we will use the _sidewaystable_ environment and add the tabular environment within it; we also specified additional options.
-
-  * \centering to position the table in the centre of the page
-  * \caption{} to give our table a name
-  * \label{} enables us to reference the table in our document
-
-```
-\begin{sidewaystable}
-\centering
-\caption{Sideways Table}
-\label{sidetable}
-\begin{tabular}{ll}
-    \rowcolor{darkblue}\textsc{\color{white}Release} &\textsc{\color{white}Codename} \\
-    \rowcolor{gray!10}Fedora Core 4 &Stentz \\
-    \rowcolor{gray!40} &Fedora 8 \\
-    \rowcolor{gray!40}\multirow{-2}{*}{Multi-Row} &Werewolf \\
-\end{tabular}\vspace{3mm}
-\end{sidewaystable}
-```
-
-![Sideways Table][14]
-
-### Lists and tables
-
-To include a list in a table, you can use tabularx and include the list in the column where the _X_ is specified. Another option is to use tabular, but then you must specify the column width.
-
-### List in tabularx
-
-```
-\begin{tabularx}{\textwidth}{|l|X|} \hline
-    Fedora Version &Editions \\ \hline
-    Fedora 32 &\begin{itemize}[noitemsep]
-        \item CoreOS
-        \item Silverblue
-        \item IoT
-    \end{itemize} \\ \hline
-\end{tabularx}\vspace{3mm}
-```
-
-![List in tabularx][15]
-
-### List in tabular
-
-```
-\begin{tabular}{|l|m{6cm}|}\hline
-    Fedora Version &Editions \\ \hline
-    Fedora 32 &\begin{itemize}[noitemsep]
-        \item CoreOS
-        \item Silverblue
-        \item IoT
-    \end{itemize} \\ \hline
-\end{tabular}
-```
-
-![List in tabular][16]
-
-### Conclusion
-
-LaTeX offers many ways to customise your tables with tabular and tabularx. You can also place tabular and tabularx within the table environment (\begin{table}) to add the table name and to position the table.
-
-### LaTeX packages
-
-The packages used in this series are:
-
-```
-\usepackage{fullpage}
-\usepackage{blindtext}  % add demo text
-\usepackage{array} % used for column positions
-\usepackage{tabularx} % adds tabularx which is used for text wrapping
-\usepackage{multirow} % multi-row and multi-column support
-\usepackage[table]{xcolor} % add colour to the columns
-\usepackage{rotating} % for landscape/sideways tables
-```
-
-### Additional Reading
-
-This was an intermediate lesson on tables; for more advanced information about tables and LaTeX in general, you can go to [LaTeX Wiki][17]
-
-![][13]
-
--------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/latex-typesetting-part-2-tables/
-
-作者:[Earl Ramirez][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/author/earlramirez/
-[b]: https://github.com/lujun9972
-[1]: 
https://fedoramagazine.org/wp-content/uploads/2020/06/latex-series-816x345.png -[2]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-13.png -[3]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-23.png -[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-10.png -[5]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-11.png -[6]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-12.png -[7]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-15.png -[8]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-16.png -[9]: https://latexcolor.com -[10]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-17.png -[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-18.png -[12]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-19.png -[13]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-24.png -[14]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-20.png -[15]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-21.png -[16]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-22.png -[17]: https://en.wikibooks.org/wiki/LaTeX/Tables diff --git a/sources/tech/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md b/sources/tech/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md deleted file mode 100644 index b273ef33b3..0000000000 --- a/sources/tech/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md +++ /dev/null @@ -1,128 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension) -[#]: via: (https://itsfoss.com/material-shell/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) - -Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME 
Extension
-======
-
-There is something about tiling windows that attracts many people. Perhaps it looks good, or perhaps it is time-saving if you are a fan of [keyboard shortcuts in Linux][1]. Or maybe it’s the challenge of using the uncommon tiling windows.
-
-![Tiling Windows in Linux | Image Source][2]
-
-From i3 to [Sway][3], there are many tiling window managers available for the Linux desktop. Configuring a tiling window manager itself requires a steep learning curve.
-
-This is why projects like [Regolith desktop][4] exist, giving you a preconfigured tiling desktop so that you can get started with tiling windows with less effort.
-
-Let me introduce you to a similar project named Material Shell that makes using the tiling feature even easier than [Regolith][5].
-
-### Material Shell GNOME Extension: Convert GNOME desktop into a tiling window manager
-
-[Material Shell][6] is a GNOME extension, and that’s the best thing about it. This means that you don’t have to log out and log in to another desktop environment or window manager. You can enable or disable it from within your current session.
-
-I’ll list the features of Material Shell, but it will be easier to see it in action:
-
-[Subscribe to our YouTube channel for more Linux videos][7]
-
-The project is called Material Shell because it follows the [Material Design][8] guidelines and thus gives the applications an aesthetically pleasing interface. Here are its main features:
-
-#### Intuitive interface
-
-Material Shell adds a left panel for quick access. On this panel, you can find the system tray at the bottom and the search and workspaces at the top.
-
-All new apps are added to the current workspace. You can create a new workspace and switch to it to organize your running apps into categories. This is the essential concept of a workspace anyway.
-
-In Material Shell, every workspace can be visualized as a row with several apps rather than a box with several apps in it.
-
-#### Tiling windows
-
-In a workspace, you can see all your opened applications on the top all the time. By default, applications open to take up the entire screen, as they do on the regular GNOME desktop. You can change the layout to split the screen in half, into multiple columns, or into a grid of apps using the layout changer in the top-right corner.
-
-This video shows all the above features at a glance:
-
-#### Persistent layout and workspaces
-
-That’s not all. Material Shell also remembers the workspaces and windows you open, so that you don’t have to reorganize your layout again. This is a good feature to have, as it saves time if you are particular about which application goes where.
-
-#### Hotkeys/keyboard shortcuts
-
-Like any tiling window manager, Material Shell lets you use keyboard shortcuts to navigate between applications and workspaces.
-
-  * `Super+W` Navigate to the upper workspace.
-  * `Super+S` Navigate to the lower workspace.
-  * `Super+A` Focus the window to the left of the current window.
-  * `Super+D` Focus the window to the right of the current window.
-  * `Super+1`, `Super+2` … `Super+0` Navigate to a specific workspace.
-  * `Super+Q` Kill the currently focused window.
-  * `Super+[MouseDrag]` Move a window around.
-  * `Super+Shift+A` Move the current window to the left.
-  * `Super+Shift+D` Move the current window to the right.
-  * `Super+Shift+W` Move the current window to the upper workspace.
-  * `Super+Shift+S` Move the current window to the lower workspace.
-
-### Installing Material Shell
-
-Warning!
-
-Tiling windows could be confusing for many users. You should be familiar with GNOME Extensions to use it. Avoid trying it if you are absolutely new to Linux or if you are easily panicked when anything changes in your system.
-
-Material Shell is a GNOME extension. So, please [check your desktop environment][9] to make sure you are running _**GNOME 3.34 or a higher version**_.
-
-Again, I would like to stress that tiling windows can be confusing for many users.
-
-Apart from that, I noticed that disabling Material Shell removes the top bar from Firefox and the Ubuntu dock. You can get the dock back by disabling/enabling the Ubuntu dock extension from the Extensions app in GNOME. I haven’t tried, but I guess these problems should also go away after a system reboot.
-
-I hope you know [how to use GNOME extensions][10]. The easiest way is to just [open this link in the browser][11], install the GNOME extension plugin, and then enable the Material Shell extension.
-
-![][12]
-
-If you don’t like it, you can disable it from the same extension link you used earlier or use the GNOME Extensions app:
-
-![][13]
-
-**To tile or not?**
-
-I use multiple screens, and I found that Material Shell doesn’t work well with multiple monitors. This is something the developer(s) can improve in the future.
-
-Apart from that, it’s really easy to get started with tiling windows with Material Shell. If you try Material Shell and like it, show your appreciation for the project by [giving it a star or sponsoring it on GitHub][14].
-
-For some reason, tiling windows are getting popular. The recently released [Pop OS 20.04][15] also added tiling window features.
-
-But as I mentioned previously, tiling layouts are not for everyone, and they could confuse many people.
-
-How about you? Do you prefer tiling windows, or do you prefer the classic desktop layout?
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/material-shell/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/ubuntu-shortcuts/ -[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-ricing-example-800x450.jpg?resize=800%2C450&ssl=1 -[3]: https://itsfoss.com/sway-window-manager/ -[4]: https://itsfoss.com/regolith-linux-desktop/ -[5]: https://regolith-linux.org/ -[6]: https://material-shell.com -[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 -[8]: https://material.io/ -[9]: https://itsfoss.com/find-desktop-environment/ -[10]: https://itsfoss.com/gnome-shell-extensions/ -[11]: https://extensions.gnome.org/extension/3357/material-shell/ -[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/install-material-shell.png?resize=800%2C307&ssl=1 -[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/material-shell-gnome-extension.png?resize=799%2C497&ssl=1 -[14]: https://github.com/material-shell/material-shell -[15]: https://itsfoss.com/pop-os-20-04-review/ diff --git a/sources/tech/20201013 My first day using Ansible.md b/sources/tech/20201013 My first day using Ansible.md deleted file mode 100644 index 3d3453cfa7..0000000000 --- a/sources/tech/20201013 My first day using Ansible.md +++ /dev/null @@ -1,304 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (My first day using Ansible) -[#]: via: (https://opensource.com/article/20/10/first-day-ansible) -[#]: author: (David Both https://opensource.com/users/dboth) - -My first day using Ansible -====== -A sysadmin shares information and advice about putting Ansible into 
-real-world use configuring computers on his network. -![People work on a computer server with devices][1] - -Getting a new computer, whether physical or virtual, up and running is time-consuming and requires a good deal of work—whether it's your first time or the 50th. For many years, I have used a series of scripts and RPMs that I created to install the packages I need and to perform many bits of configuration for my favorite tools. This approach has worked well and simplified my work, as well as reduced the amount of time I spend typing commands. - -I am always looking for better ways of doing things, and, for several years now, I have been hearing and reading about [Ansible][2], which is a powerful tool for automating system configuration and management. Ansible allows a sysadmin to define a specific state for each host in one or more playbooks and then performs whatever tasks are necessary to bring the host to that state. This includes installing or deleting various resources such as RPM or Apt packages, configuration and other files, users, groups, and much more. - -I have delayed learning how to use it for a long time because—stuff. Until recently, when I ran into a problem that I thought Ansible could easily solve. - -This article is not a complete how-to for getting started with Ansible; rather, it is intended to provide insight into some of the issues that I encountered and some information that I found only in some very obscure places. Much of the information I found in various online discussions and Q&A groups about Ansible was incorrect. Errors ranged from information that was really old with no indication of its date or provenance to information that was just wrong. - -The information in this article is known to work—although there might be other ways of accomplishing the same things—and it works with Ansible 2.9.13 and [Python][3] 3.8.5. 
- -### My problem - -All of my best learning experiences start with a problem I need to solve, and this was no exception. - -I have been working on a little project to modify the configuration files for the [Midnight Commander][4] (mc) file manager and pushing them out to various systems on my network for testing. Although I have a script to automate this, it still requires a bit of fussing with a command-line loop to provide the names of the systems to which I want to push the new code. The large number of changes I was making to the configuration files made it necessary for me to push the new ones frequently. But, just when I thought I had my new configuration just right, I would find a problem and need to do another push after making the fix. - -This environment made it difficult to keep track of which systems had the new files and which did not. I also have a couple of hosts that need to be treated differently. And my little bit of knowledge about Ansible suggested that it would probably be able to do all or most of what I need. - -### Getting started - -I had read a number of good articles and books about Ansible, but never in an "I have to get this working NOW!" kind of situation. And now was—well, NOW! - -In rereading these documents, I discovered that they mostly talk about how to install Ansible from GitHub using—wait for it—Ansible. That is cool, but I really just wanted to get started, so I installed it on my Fedora workstation using DNF and the version in the Fedora repository. Easy. - -But then I started looking for the file locations and trying to determine which configuration files I needed to modify, where to keep my playbooks, what a playbook even looks like, and what it does. I had lots of (so far) unanswered questions running around in my head. - -So, without further descriptions of my tribulations, here are the things I discovered and that got me going. - -### Configuration - -Ansible's configuration files are kept in `/etc/ansible`. 
Makes sense, right, since `/etc` is where system programs are supposed to keep their configuration files. The two files I needed to work with are `ansible.cfg` and `hosts`. - -#### ansible.cfg - -After getting started with some of the exercises I found in the documents and online, I began receiving warning messages about deprecating certain older Python files. So, I set `deprecation_warnings` to `false` in `ansible.cfg` and those angry red warning messages stopped: - - -``` -`deprecation_warnings = False` -``` - -Those warnings are important, so I will revisit them later and figure out what I need to do. But for now, they no longer clutter the screen and obfuscate the errors I actually need to be concerned about. - -#### The hosts file - -Not the same as the `/etc/hosts` file, the `hosts` file is also known as the inventory file, and it lists the hosts on your network. This file allows grouping hosts together in related sets, such as servers, workstations, and pretty much any designation you need. This file contains its own help and plenty of examples, so I won't go into boring detail here. However, there are some things to know. - -Hosts can be listed outside of any groups, but groups can be helpful in identifying hosts with one or more common characteristics. Groups use the INI format, so a server group looks like this: - - -``` -[servers] -server1 -server2 -...etc. -``` - -A hostname must be present in this file for Ansible to work on it. Even though some subcommands allow you to specify a hostname, the command will fail unless the hostname is in the `hosts` file. A host can also be listed in multiple groups. So `server1` might also be a member of the `[webservers]` group in addition to the `[servers]` group, and a member of the `[ubuntu]` group to differentiate it from Fedora servers. - -Ansible is smart. If the `all` argument is used for the hostname, Ansible scans the file and performs the defined tasks on all hosts listed in the file. 
Ansible will only attempt to work on each host once, no matter how many groups it appears in. This also means that there does not need to be a defined "all" group because Ansible can determine all hostnames in the file and create its own list of unique hostnames. - -Another little thing to look out for is multiple entries for a single host. I use `CNAME` records in my DNS zone file to create aliased names that point to the [A records][5] for some of my hosts. That way, I can refer to a host as `host1` or `h1` or `myhost`. If you use multiple hostnames for the same host in the `hosts` file, Ansible will try to perform its tasks on all of those hostnames; it has no way of knowing that they refer to the same host. The good news is that this does not affect the overall result; it just takes a bit more time as Ansible works on the secondary hostnames and determines that all of the operations have already been performed. - -### Ansible facts - -Most of the materials I have read on Ansible talk about [Ansible facts][6], which "are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more." This information is available in other ways, such as `lshw`, `dmidecode`, the `/proc` filesystem, and more, but Ansible generates a JSON file containing this information. Each time Ansible runs, it generates this facts data. There is an amazing amount of information in this data stream, all of which are in `<"variable-name": "value">` pairs. All of these variables are available for use within an Ansible playbook. The best way to understand the huge amount of information available is to display it yourself: - - -``` -`# ansible -m setup | less` -``` - -See what I mean? Everything you ever wanted to know about your host hardware and Linux distribution is there and usable in a playbook. I have not yet gotten to the point where I need to use those variables, but I am certain I will in the next couple of days. 
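To give a taste of how those variables might be used, here is a hedged sketch of my own, not a playbook from this article. It gates tasks on two standard fact variables, `ansible_distribution` and `ansible_memtotal_mb`; the group name `workstations` is an assumption:

```yaml
# Hypothetical sketch: use gathered facts to decide whether a task runs.
# "workstations" is an assumed inventory group, not one from the article.
- name: Act on gathered facts
  hosts: workstations
  tasks:
    - name: Install mc with DNF, but only on Fedora hosts
      dnf:
        name: mc
        state: present
      when: ansible_distribution == "Fedora"

    - name: Report hosts that are low on memory
      debug:
        msg: "{{ inventory_hostname }} has {{ ansible_memtotal_mb }} MB of RAM"
      when: ansible_memtotal_mb < 2048
```

Because facts are gathered at the start of each run, the `when` conditions adapt automatically to whatever host the play is currently working on.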
- -### Modules - -The `ansible` command above uses the `-m` option to specify the "setup" module. Ansible has many modules already built-in, so you do not need to use the `-m` for those. There are also many downloadable modules that can be installed, but the built-ins do everything I have needed for my current projects so far. - -### Playbooks - -Playbooks can be located almost anywhere. Since I need to run my playbooks as root, I placed mine in `/root/ansible`. As long as this directory is the present working directory (PWD) when I run Ansible, it can find my playbook. Ansible also has a runtime option to specify a different playbook and location. - -Playbooks can contain comments, although I have seen very few articles or books that mention this. As a sysadmin who believes in documenting everything, I find using comments can be very helpful. This is not so much about saying the same things in the comments as I do in the task name; rather, it is about identifying the purpose of groups of tasks and ensuring that I record my reasons for doing certain things in a certain way or order. This can help with debugging problems later when I may have forgotten my original thinking. - -Playbooks are simply collections of tasks that define the desired state of a host. A hostname or inventory group is specified at the beginning of the playbook and defines the hosts on which Ansible will run the playbook. - -Here is a sample of my playbook: - - -``` -################################################################################ -# This Ansible playbook updates Midnight commander configuration files.        
# -################################################################################ -\- name: Update midnight commander configuration files -  hosts: all -  -  tasks: -  - name: ensure midnight commander is the latest version -    dnf: -      name: mc -      state: present - -  - name: create ~/.config/mc directory for root -    file: -      path: /root/.config/mc -      state: directory -      mode: 0755 -      owner: root -      group: root - -  - name: create ~/.config/mc directory for dboth -    file: -      path: /home/dboth/.config/mc -      state: directory -      mode: 0755 -      owner: dboth -      group: dboth - -  - name: copy latest personal skin -    copy: -      src: /root/ansible/UpdateMC/files/MidnightCommander/DavidsGoTar.ini -      dest: /usr/share/mc/skins/DavidsGoTar.ini -      mode: 0644 -      owner: root -      group: root - -  - name: copy latest mc ini file -    copy: -      src: /root/ansible/UpdateMC/files/MidnightCommander/ini -      dest: /root/.config/mc/ini -      mode: 0644 -      owner: root -      group: root - -  - name: copy latest mc panels.ini file -    copy: -      src: /root/ansible/UpdateMC/files/MidnightCommander/panels.ini -      dest: /root/.config/mc/panels.ini -      mode: 0644 -      owner: root -      group: root -<SNIP> -``` - -The playbook starts with its own name and the hosts it will act on—in this case, all of the hosts listed in my `hosts` file. The `tasks` section lists the specific tasks required to bring the host into compliance with the desired state. This playbook starts with a task in which Ansible's built-in DNF updates Midnight Commander if it is not the most recent release. The next tasks ensure that the required directories are created if they do not exist, and the remainder of the tasks copy the files to the proper locations. These `file` and `copy` tasks can also set the ownership and file modes for the directories and files. 
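Repeating a near-identical `copy` task for every file scales poorly. As one possible simplification — my own sketch, not code from the article — Ansible's `loop` keyword can collapse the per-file copy tasks into a single task; the paths below reuse the ones from the playbook above:

```yaml
# Sketch: one looped copy task instead of one task per file.
# Paths match the article's example; the file list is illustrative.
- name: copy latest mc configuration files for root
  copy:
    src: "/root/ansible/UpdateMC/files/MidnightCommander/{{ item }}"
    dest: "/root/.config/mc/{{ item }}"
    mode: 0644
    owner: root
    group: root
  loop:
    - ini
    - panels.ini
```

Each pass through the loop substitutes one list entry for `{{ item }}`, so adding another configuration file later is a one-line change.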
- -The details of my playbook are beyond the scope of this article, but I used a bit of a brute-force attack on the problem. There are other methods for determining which users need to have the files updated rather than using a task for each file for each user. My next objective is to simplify this playbook to use some of the more advanced techniques. - -Running a playbook is easy; just use the `ansible-playbook` command. The .yml extension stands for YAML. I have seen several meanings for that, but my bet is on "Yet Another Markup Language," despite the fact that some claim that YAML is not one. - -This command runs the playbook I created for updating my Midnight Commander files: - - -``` -`# ansible-playbook -f 10 UpdateMC.yml` -``` - -The `-f` option specifies that Ansible should fork up to 10 threads in order to perform operations in parallel. This can greatly speed overall task completion, especially when working on multiple hosts. - -### Output - -The output from a running playbook lists each task and the results. An `ok` means the machine state managed by the task is already defined in the task stanza. Because the state defined in the task is already true, Ansible did not need to perform the actions defined in the task stanza. - -The response `changed` indicates that Ansible performed the task specified in the stanza in order to bring it to the desired state. In this case, the machine state defined in the stanza was not true, so the actions defined were performed to make it true. On a color-capable terminal, the `TASK` lines are shown in color. On my host with my amber-on-black terminal color configuration, the `TASK` lines are shown in amber, `changed` lines are in brown, and `ok` lines are shown in green. Error lines are displayed in red. 
- -The following output is from the playbook I will eventually use to perform post-install configuration on new hosts: - - -``` -PLAY [Post-installation updates, package installation, and configuration] - -TASK [Gathering Facts] -ok: [testvm2] - -TASK [Ensure we have connectivity] -ok: [testvm2] - -TASK [Install all current updates] -changed: [testvm2] - -TASK [Install a few command line tools] -changed: [testvm2] - -TASK [copy latest personal Midnight Commander skin to /usr/share] -changed: [testvm2] - -TASK [create ~/.config/mc directory for root] -changed: [testvm2] - -TASK [Copy the most current Midnight Commander configuration files to /root/.config/mc] -changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/DavidsGoTar.ini) -changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/ini) -changed: [testvm2] => (item=/root/ansible/PostInstallMain/files/MidnightCommander/panels.ini) - -TASK [create ~/.config/mc directory in /etc/skel] -changed: [testvm2] - -<SNIP> -``` - -### The cow - -If you have the [cowsay][7] program installed on your computer, you will notice that the `TASK` names appear in the cow's speech bubble: - - -``` - ____________________________________ -< TASK [Ensure we have connectivity] > - ------------------------------------ -        \   ^__^ -         \  (oo)\\_______ -            (__)\       )\/\ -                ||----w | -                ||     || -``` - -If you do not have this fun feature and want it, install the cowsay package using your distribution's package manager. If you have this and don't want it, disable it with by setting `nocows = 1` in the `/etc/ansible/ansible.cfg` file. - -I like the cow and think it is fun, but it reduces the amount of screen space that can be used to display messages. So I disabled it after it started getting in the way. - -### Files - -As with my Midnight Commander task, it is frequently necessary to install and maintain files of various types. 
There are as many "best practices" for creating a directory tree for storing files used in playbooks as there are sysadmins—or at least as many as the number of authors writing books and articles about Ansible. - -I chose a simple structure that makes sense to me: - - -``` -/root/ansible -└── UpdateMC -    ├── files -    │   └── MidnightCommander -    │       ├── DavidsGoTar.ini -    │       ├── ini -    │       └── panels.ini -    └── UpdateMC.yml -``` - -You should use whatever structure works for you. Just be aware that some other sysadmin will likely need to work with whatever you set up, so there should be some level of logic to it. When I was using RPM and Bash scripts to perform my post-install tasks, my file repository was a bit scattered and definitely not structured with any logic. As I work through creating playbooks for many of my administrative tasks, I will introduce a much more logical structure for managing my files. - -### Multiple playbook runs - -It is safe to run a playbook as many times as needed or desired. Each task will only be executed if the state does not match the one specified in the task stanza. This makes it easy to recover from errors encountered during previous playbook runs. The playbook stops running when it encounters an error. - -While testing my first playbook, I made many mistakes and corrected them. Each additional run of the playbook—assuming my fix is a good one—skips the tasks whose state already matches the specified one and executes those that did not. When my fix works, the previously failed task completes successfully, and any tasks after that one in my playbook also execute—until it encounters another error. - -This also makes testing easy. I can add new tasks and, when I run the playbook, only those new tasks are performed because they are the only ones that do not match the test host's desired state. 
- -### Some thoughts - -Some tasks are not appropriate for Ansible because there are better methods for achieving a specific machine state. The use case that comes to mind is that of returning a VM to an initial state so that it can be used as many times as necessary to perform testing beginning with that known state. It is much easier to get the VM into the desired state and then to take a snapshot of the then-current machine state. Reverting to that snapshot is usually going to be easier and much faster than using Ansible to return the host to that desired state. This is something I do several times a day when researching articles or testing new code. - -After completing my playbook for updating Midnight Commander, I started a new playbook that I will use to perform post-installation tasks on newly installed Fedora hosts. I have already made good progress, and the playbook is a bit more sophisticated and less brute-force than my first one. - -On my very first day using Ansible, I created a playbook that solves a problem. I also started a second playbook that will solve the very big problem of post-install configuration. And I have learned a lot. - -Although I really like using [Bash][8] scripts for many of my administrative tasks, I am already finding that Ansible can do everything I want—and in a way that can maintain the system in the state I want. After only a single day of use, I am an Ansible fan. - -### Resources - -The most complete and useful document I have found is the [User Guide][9] on the Ansible website. This document is intended as a reference and not a how-to or getting-started document. - -Opensource.com has published many [articles about Ansible][10] over the years, and I have found most of them very helpful for my needs. The Enable Sysadmin website also has a lot of [Ansible articles][11] that I have found to be helpful. You can learn even more by checking out [AnsibleFest][12] happening this week (October 13-14, 2020). 
The event is completely virtual and free to register. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/10/first-day-ansible - -作者:[David Both][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dboth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices) -[2]: https://www.ansible.com/ -[3]: https://www.python.org/ -[4]: https://midnight-commander.org/ -[5]: https://en.wikipedia.org/wiki/List_of_DNS_record_types -[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html#ansible-facts -[7]: https://en.wikipedia.org/wiki/Cowsay -[8]: https://opensource.com/downloads/bash-cheat-sheet -[9]: https://docs.ansible.com/ansible/latest/user_guide/index.html -[10]: https://opensource.com/tags/ansible -[11]: https://www.redhat.com/sysadmin/topics/ansible -[12]: https://www.ansible.com/ansiblefest diff --git a/sources/tech/20201216 Understanding 52-bit virtual address support in the Arm64 kernel.md b/sources/tech/20201216 Understanding 52-bit virtual address support in the Arm64 kernel.md deleted file mode 100644 index 8051044255..0000000000 --- a/sources/tech/20201216 Understanding 52-bit virtual address support in the Arm64 kernel.md +++ /dev/null @@ -1,251 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: () -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Understanding 52-bit virtual address support in the Arm64 kernel) -[#]: via: (https://opensource.com/article/20/12/52-bit-arm64-kernel) -[#]: author: (Bhupesh Sharma https://opensource.com/users/bhsharma) - -Understanding 52-bit virtual address support in the Arm64 
kernel -====== -The introduction of 64-bit hardware increased the need to handle larger -address spaces. -![Puzzle pieces coming together to form a computer screen][1] - -After 64-bit hardware became available, the need to handle larger address spaces (greater than 2^32 bytes) became obvious. With some vendors now offering servers with 64TiB (or more) of memory, x86_64 and arm64 now allow addressing address spaces greater than the 2^48 bytes available with the default 48-bit address support. - -x86_64 addressed these use cases by enabling support for five-level page tables in both hardware and software. This enables addressing address spaces equal to 2^57 bytes (see [x86: 5-level paging enabling for v4.12][2] for details). It bumps the limits to 128PiB of virtual address space and 4PiB of physical address space. - -arm64 achieved the same thing by introducing two new architecture extensions—ARMv8.2 LVA (Large Virtual Addressing) and ARMv8.2 LPA (Large Physical Addressing). These allow 4PiB of virtual address space and 4PiB of physical address space (i.e., 52 bits each). - -With ARMv8.2 architecture extensions available in new arm64 CPUs, the two new hardware extensions are now supported in open source software. - -Starting with Linux kernel version 5.4, 52-bit (Large) Virtual Address (VA) and Physical Address (PA) support was introduced for the arm64 architecture. Although the [kernel documentation][3] describes these features and how they impact the new kernels running on older CPUs (which don't support 52-bit VA extension in hardware) and newer CPUs (which support 52-bit VA extensions in hardware), it can be complex for average users to understand them and how they can "opt-in" to receiving VAs from a 52-bit space. - -Therefore, I will introduce these relatively new concepts in this article: - - 1. How the kernel memory layout got "flipped" for Arm64 after the support for these features was added - 2. 
The impact on userspace applications, especially the ones that provide debugging support (e.g., kexec-tools, makedumpfile, and crash-utility) - 3. How userspace applications can "opt-in" to receiving VAs from a 52-bit space by specifying an mmap hint parameter that is larger than 48 bits - - - -### ARMv8.2 architecture LVA and LPA extensions - -The ARMv8.2 architecture provides two important extensions: Large Virtual Addressing (LVA) and Large Physical Addressing (LPA). - -ARMv8.2-LVA supports a larger VA space for each translation table base register of up to 52 bits when using the 64KB translation granule. - -ARMv8.2-LPA allows: - - * A larger intermediate physical address (IPA) and PA space of up to 52 bits when using the 64KB translation granule - * A level 1 block size where the block covers a 4TB address range for the 64KB translation granule if the implementation supports 52 bits of PA - - - -_Note that these features are supported only in the AArch64 state._ - -Currently, the following Arm64 Cortex-A processors support ARMv8.2 extensions: - - * Cortex-A55 - * Cortex-A75 - * Cortex-A76 - - - -For more details, see the [Armv8 Architecture Reference Manual][4]. - -### Kernel memory layout on Arm64 - -With the ARMv8.2 extension adding support for LVA space (which is only available when running with a 64KB page size), the number of descriptors gets expanded in the first level of translation. - -User addresses have bits 63:48 set to 0, while the kernel addresses have the same bits set to 1. TTBRx selection is given by bit 63 of the virtual address. The `swapper_pg_dir` contains only kernel (global) mappings, while the user `pgd` contains only user (non-global) mappings. The `swapper_pg_dir` address is written to TTBR1 and never written to TTBR0. 
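Before looking at the full memory map, it may help to make the address-space arithmetic concrete. This short Python sketch (illustrative only, not from the article) converts the 32-, 48-, 52-, and 57-bit address widths discussed above into the sizes they imply:

```python
# Size of an address space that uses n address bits.
def addr_space_bytes(bits: int) -> int:
    return 1 << bits

TiB = 1 << 40
PiB = 1 << 50

assert addr_space_bytes(32) == 4 * (1 << 30)  # 4 GiB: the 32-bit limit
assert addr_space_bytes(48) == 256 * TiB      # default 48-bit VA space
assert addr_space_bytes(52) == 4 * PiB        # ARMv8.2 LVA/LPA (52 bits)
assert addr_space_bytes(57) == 128 * PiB      # x86_64 five-level paging
print("all address-space sizes check out")
```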
- -**AArch64 Linux memory layout with 64KB pages plus three levels (52-bit with hardware support):** - - -``` -  Start                 End                     Size            Use -  ----------------------------------------------------------------------- -  0000000000000000      000fffffffffffff           4PB          user -  fff0000000000000      fff7ffffffffffff           2PB          kernel logical memory map -  fff8000000000000      fffd9fffffffffff        1440TB          [gap] -  fffda00000000000      ffff9fffffffffff         512TB          kasan shadow region -  ffffa00000000000      ffffa00007ffffff         128MB          bpf jit region -  ffffa00008000000      ffffa0000fffffff         128MB          modules -  ffffa00010000000      fffff81ffffeffff         ~88TB          vmalloc -  fffff81fffff0000      fffffc1ffe58ffff          ~3TB          [guard region] -  fffffc1ffe590000      fffffc1ffe9fffff        4544KB          fixed mappings -  fffffc1ffea00000      fffffc1ffebfffff           2MB          [guard region] -  fffffc1ffec00000      fffffc1fffbfffff          16MB          PCI I/O space -  fffffc1fffc00000      fffffc1fffdfffff           2MB          [guard region] -  fffffc1fffe00000      ffffffffffdfffff        3968GB          vmemmap -  ffffffffffe00000      ffffffffffffffff           2MB          [guard region] -``` - -**Translation table lookup with 4KB pages:** - - -``` -  +--------+--------+--------+--------+--------+--------+--------+--------+ -  |63    56|55    48|47    40|39    32|31    24|23    16|15     8|7      0| -  +--------+--------+--------+--------+--------+--------+--------+--------+ -   |                 |         |         |         |         | -   |                 |         |         |         |         v -   |                 |         |         |         |   [11:0]  in-page offset -   |                 |         |         |         +-> [20:12] L3 index -   |                 |         |         +-----------> [29:21] L2 index -   
|                 |         +---------------------> [38:30] L1 index -   |                 +-------------------------------> [47:39] L0 index -   +-------------------------------------------------> [63] TTBR0/1 -``` - -**Translation table lookup with 64KB pages:** - - -``` -  +--------+--------+--------+--------+--------+--------+--------+--------+ -  |63    56|55    48|47    40|39    32|31    24|23    16|15     8|7      0| -  +--------+--------+--------+--------+--------+--------+--------+--------+ -   |                 |    |               |              | -   |                 |    |               |              v -   |                 |    |               |            [15:0]  in-page offset -   |                 |    |               +----------> [28:16] L3 index -   |                 |    +--------------------------> [41:29] L2 index -   |                 +-------------------------------> [47:42] L1 index (48-bit) -   |                                                   [51:42] L1 index (52-bit) -   +-------------------------------------------------> [63] TTBR0/1 -``` - -  - -![][5] - -opensource.com - -### 52-bit VA support in the kernel - -Since the newer kernels with the LVA support should run well on older CPUs (which don't support LVA extension in hardware) and the newer CPUs (which support LVA extension in hardware), the chosen design approach is to have a single binary that supports 52 bit (and must be able to fall back to 48 bit at early boot time if the hardware feature is not present). That is, the VMEMMAP must be sized large enough for 52-bit VAs and also must be sized large enough to accommodate a fixed `PAGE_OFFSET`. 
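The 64KB-granule table lookup diagrammed above is easy to model. This Python sketch (an illustration of the bit-field split, not kernel code) decomposes a virtual address into the level indices and page offset using exactly the bit ranges shown, with the L1 index widening from [47:42] to [51:42] when 52-bit VAs are in use:

```python
def bits(value: int, hi: int, lo: int) -> int:
    """Extract the inclusive bit-field value[hi:lo]."""
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

def split_va_64k(va: int, va_bits: int = 52) -> dict:
    """Split a VA into indices for 64KB pages with three levels."""
    l1_hi = 51 if va_bits == 52 else 47  # L1 index widens under LVA
    return {
        "l1": bits(va, l1_hi, 42),
        "l2": bits(va, 41, 29),
        "l3": bits(va, 28, 16),
        "offset": bits(va, 15, 0),
    }

# Example: a user VA that needs the full 52-bit range
va = 0x000F_F123_4567_89AB
idx = split_va_64k(va)
assert idx["offset"] == va & 0xFFFF  # 16-bit in-page offset for 64KB pages
# The four fields partition bits [51:0], so they reassemble the VA exactly:
assert va == (idx["l1"] << 42) | (idx["l2"] << 29) | (idx["l3"] << 16) | idx["offset"]
```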
- -This design approach requires the kernel to support the following variables for the new virtual address space: - - -``` -VA_BITS         constant        the *maximum* VA space size - -vabits_actual   variable        the *actual* VA space size -``` - -So, while `VA_BITS` denotes the maximum VA space size, the actual VA space supported (depending on the switch made at boot time) is indicated by `vabits_actual`. - -#### Flipping the kernel memory layout - -The design approach of keeping a single kernel binary requires the kernel .text to be in the higher addresses, such that they are invariant to 48/52-bit VAs. Due to the Kernel Address Sanitizer (KASAN) shadow being a fraction of the entire kernel VA space, the end of the KASAN shadow must also be in the higher half of the kernel VA space for both 48 and 52 bit. (Switching from 48 bit to 52 bit, the end of the KASAN shadow is invariant and dependent on `~0UL`, while the start address will "grow" towards the lower addresses). - -To optimize `phys_to_virt()` and `virt_to_phys()`, the `PAGE_OFFSET` is kept constant at `0xFFF0000000000000` (corresponding to 52 bit), this obviates the need for an extra variable read. The `physvirt` and `vmemmap` offsets are computed at early boot to enable this logic. - -Consider the following physical vs. virtual RAM address space conversion: - - -``` -/* - * The linear kernel range starts at the bottom of the virtual address - * space. Testing the top bit for the start of the region is a - * sufficient check and avoids having to worry about the tag. 
- */ - -#define virt_to_phys(addr) ({                                   \ -        if (!(((u64)addr) & BIT(vabits_actual - 1)))            \ -                (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET) -}) - -#define phys_to_virt(addr) ((unsigned long)((addr) - PHYS_OFFSET) | PAGE_OFFSET) - -where: - PAGE_OFFSET - the virtual address of the start of the linear map, at the -                start of the TTBR1 address space, - PHYS_OFFSET - the physical address of the start of memory, and - vabits_actual - the *actual* VA space size -``` - -### Impact on userspace applications used to debug kernel - -Several userspace applications are used to debug running/live kernels or analyze the vmcore dump from a crashing system (e.g., to determine the root cause of the kernel crash): kexec-tools, makedumpfile, and crash-utility. - -When these are used for debugging the Arm64 kernel, there is also an impact on them because of the Arm64 kernel memory map getting "flipped." These applications also need to perform a translation table walk for determining a physical address corresponding to a virtual address (similar to how it is done in the kernel). - -Accordingly, userspace applications must be modified as they are broken upstream after the "flip" was introduced in the kernel memory map. - -I have proposed fixes in the three affected userspace applications; while some have been accepted upstream, others are still pending: - - * [Proposed makedumpfile upstream fix][6] - * [Proposed kexec-tools upstream fix][7] - * [Fix accepted in crash-utility][8] - - - -Unless these changes are made in userspace applications, they will remain broken for debugging running/live kernels or analyzing the vmcore dump from a crashing system. - -### 52-bit userspace VAs - -To maintain compatibility with userspace applications that rely on the ARMv8.0 VA space maximum size of 48 bits, the kernel will, by default, return virtual addresses to userspace from a 48-bit range. 
- -Userspace applications can "opt-in" to receiving VAs from a 52-bit space by specifying an mmap hint parameter larger than 48 bits. - -For example: - - -``` -/* mmap_high_addr.c */ - -   maybe_high_address = mmap(~0UL, size, prot, flags,...); -``` - -It is also possible to build a debug kernel that returns addresses from a 52-bit space by enabling the following kernel config options: - - -``` -   CONFIG_EXPERT=y && CONFIG_ARM64_FORCE_52BIT=y -``` - -_Note that this option is only intended for debugging applications and should **not** be used in production._ - -### Conclusions - -To summarize: - - 1. Starting with Linux kernel version 5.4, the new Armv8.2 hardware extensions LVA and LPA are now well-supported in the Linux kernel. - 2. Userspace applications like kexec-tools and makedumpfile used for debugging the kernel are broken _right now_ and awaiting acceptance of upstream fixes. - 3. Legacy userspace applications that rely on the Arm64 kernel providing a 48-bit VA will continue working as-is, whereas newer userspace applications can "opt-in" to receiving VAs from a 52-bit space by specifying an mmap hint parameter that is larger than 48 bits. - - - -* * * - -_This article draws on [Memory Layout on AArch64 Linux][9] and [Linux kernel documentation v5.9.12][10]. 
Both are licensed under GPLv2.0._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/12/52-bit-arm64-kernel - -作者:[Bhupesh Sharma][a] -选题:[lujun9972][b] -译者:[萌新阿岩](https://github.com/mengxinayan) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/bhsharma -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen) -[2]: https://lwn.net/Articles/716916/ -[3]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/memory.rst -[4]: https://developer.arm.com/documentation/ddi0487/latest/ -[5]: https://opensource.com/sites/default/files/arm64-multi-level-translation_0.png (arm64 Multi-level Translation) -[6]: http://lists.infradead.org/pipermail/kexec/2020-September/021372.html -[7]: http://lists.infradead.org/pipermail/kexec/2020-September/021333.html -[8]: https://github.com/crash-utility/crash/commit/1c45cea02df7f947b4296c1dcaefa1024235ef10 -[9]: https://www.kernel.org/doc/html/latest/arm64/memory.html -[10]: https://elixir.bootlin.com/linux/latest/source/arch/arm64/include/asm/memory.h diff --git a/sources/tech/20201231 Build your own text editor in Java.md b/sources/tech/20201231 Build your own text editor in Java.md deleted file mode 100644 index b0cfec8471..0000000000 --- a/sources/tech/20201231 Build your own text editor in Java.md +++ /dev/null @@ -1,342 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (robsean) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Build your own text editor in Java) -[#]: via: (https://opensource.com/article/20/12/write-your-own-text-editor) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) - 
-Build your own text editor in Java -====== -Sometimes, no one can make your dream tool but you. Here's how to start -building your own text editor. -![Working from home at a laptop][1] - -There are a lot of text editors available. There are those that run in the terminal, in a GUI, in a browser, and in a browser engine. Many are very good, and some are great. But sometimes, the most satisfying answer to any question is the one you build yourself. - -Make no mistake: building a really good text editor is a lot harder than it may seem. But then again, it’s also not as hard as you might fear to build a basic one. In fact, most programming toolkits already have most of the text editor parts ready for you to use. The components around the text editing, such as a menu bar, file chooser dialogues, and so on, are easy to drop into place. As a result, a basic text editor is a surprisingly fun and elucidating, though intermediate, lesson in programming. You might find yourself eager to use a tool of your own construction, and the more you use it, the more you might be inspired to add to it, learning even more about the programming language you’re using. - -To make this exercise realistic, it’s best to choose a language with a good GUI toolkit. There are many to choose from, including Qt, FLTK, or GTK, but be sure to review the documentation first to ensure it has the features you expect. For this article, I use Java with its built-in Swing widget set. If you want to use a different language or a different toolset, this article can still be useful in giving you an idea of how to approach the problem. - -Writing a text editor in any major toolkit is surprisingly similar, no matter which one you choose. If you’re new to Java and need further information on getting started, read my [Guessing Game article][2] first. 
- -### Project setup - -Normally, I use and recommend an IDE like [Netbeans][3] or Eclipse, but I find that, when practicing a new language, it can be helpful to do some manual labor, so you better understand the things that get hidden from you when using an IDE. In this article, I assume you’re programming using a text editor and a terminal. - -Before getting started, create a project directory for yourself. In the project folder, create one directory called `src` to hold your source files. - - -``` -$ mkdir -p myTextEditor/src -$ cd myTextEditor -``` - -Create an empty file called `TextEdit.java` in your `src` directory: - - -``` -$ touch src/TextEdit.java -``` - -Open the file in your favorite text editor (I mean your favorite one that you didn’t write) and get ready to code! - -### Package and imports - -To ensure your Java application has a unique identifier, you must declare a **package** name. The typical format for this is to use a reverse domain name, which is particularly easy should you actually have a domain name. If you don’t, you can use `local` as the top level. As usual for Java and many languages, the line is terminated with a semicolon. - -After naming your Java package, you must tell the Java compiler (`javac`) what libraries to use when building your code. In practice, this is something you usually add to as you code because you rarely know yourself what libraries you need. However, there are some that are obvious beforehand. For instance, you know this text editor is based around the Swing GUI toolkit, so importing `javax.swing.JFrame` and `javax.swing.UIManager` and other related libraries is a given. 
- - -``` -package com.example.textedit; - -import javax.swing.JFileChooser; -import javax.swing.JFrame; -import javax.swing.JMenu; -import javax.swing.JMenuBar; -import javax.swing.JMenuItem; -import javax.swing.JOptionPane; -import javax.swing.JTextArea; -import javax.swing.UIManager; -import javax.swing.UnsupportedLookAndFeelException; -import javax.swing.filechooser.FileSystemView; -import java.awt.Component; -import java.awt.event.ActionEvent; -import java.awt.event.ActionListener; -import java.io.File; -import java.io.FileNotFoundException; -import java.io.FileReader; -import java.io.FileWriter; -import java.io.IOException; -import java.util.Scanner; -import java.util.logging.Level; -import java.util.logging.Logger; -``` - -For the purpose of this exercise, you get prescient knowledge of all the libraries you need in advance. In real life, regardless of what language you favor, you’ll discover libraries as you research how to solve any given problem, and then you’ll import it into your code and use it. And don’t worry—should you forget to include a library, your compiler or interpreter will warn you! - -### Main window - -This is a single-window application, so the primary class of this application is a JFrame with an `ActionListener` attached to catch menu events. In Java, when you’re using an existing widget element, you "extend" it with your code. This main window needs three fields: the window itself (an instance of JFrame), an indicator for the return value of the file chooser, and the text editor itself (JTextArea). - - -``` -public final class TextEdit extends [JFrame][4] implements [ActionListener][5] { -private static [JTextArea][6] area; -private static [JFrame][4] frame; -private static int returnValue = 0; -``` - -Amazingly, these few lines do about 80% of the work toward implementing a basic text editor because JTextArea is Java’s text entry field. Most of the remaining 80 lines take care of helper features, like saving and opening files. 
- -### Building a menu - -The `JMenuBar` widget is designed to sit at the top of a JFrame, providing as many entries as you want. Java isn’t a drag-and-drop programming language, though, so for every menu you add, you must also program a function. To keep this project manageable, I provide four functions: creating a new file, opening an existing file, saving text to a file, and closing the application. - -The process of creating a menu is basically the same in most popular toolkits. First, you create the menubar itself, then you create a top-level menu (such as "File"), and then you create submenu items (such as "New," "Save," and so on). - - -``` - public TextEdit() { run(); } - -  public void run() { -    frame = new [JFrame][4]("Text Edit"); - -    // Set the look-and-feel (LNF) of the application -        // Try to default to whatever the host system prefers -    try { -      [UIManager][7].setLookAndFeel([UIManager][7].getSystemLookAndFeelClassName()); -    } catch ([ClassNotFoundException][8] | [InstantiationException][9] | [IllegalAccessException][10] | [UnsupportedLookAndFeelException][11] ex) { -      Logger.getLogger(TextEdit.class.getName()).log(Level.SEVERE, null, ex); -    } - -        // Set attributes of the app window -    area = new [JTextArea][6](); -    frame.setDefaultCloseOperation([JFrame][4].EXIT_ON_CLOSE); -    frame.add(area); -    frame.setSize(640, 480); -        frame.setVisible(true); - -        // Build the menu -    [JMenuBar][12] menu_main = new [JMenuBar][12](); - -        [JMenu][13] menu_file = new [JMenu][13]("File"); - -        [JMenuItem][14] menuitem_new = new [JMenuItem][14]("New"); -    [JMenuItem][14] menuitem_open = new [JMenuItem][14]("Open"); -    [JMenuItem][14] menuitem_save = new [JMenuItem][14]("Save"); -    [JMenuItem][14] menuitem_quit = new [JMenuItem][14]("Quit"); - -        menuitem_new.addActionListener(this); -    menuitem_open.addActionListener(this); -    menuitem_save.addActionListener(this); -    
menuitem_quit.addActionListener(this); - -        menu_main.add(menu_file); - -        menu_file.add(menuitem_new); -    menu_file.add(menuitem_open); -    menu_file.add(menuitem_save); -    menu_file.add(menuitem_quit); - -        frame.setJMenuBar(menu_main); -    } -``` - -  - -All that’s left to do now is to implement the functions described by the menu items. - -### Programming menu actions - -Your application responds to menu selections because your JFrame has an `ActionListener` attached to it. When you implement an event handler in Java, you must "override" its built-in functions. This sounds more severe than it actually is. You’re not rewriting Java; you’re just implementing functions that have been defined but not implemented by the event handler. - -In this case, you must override the `actionPerformed` method. Because nearly all entries in the **File** menu have something to do with files, my code defines a JFileChooser early. The rest of the code is separated into clauses of an `if` statement, which looks to see what event was received and acts accordingly. Each clause is drastically different from the other because each item suggests something wholly unique. The most similar are **Open** and **Save** because they both use the JFileChooser to select a point in the filesystem to either get or put data. - -The "**New**" selection clears the JTextArea without warning, and **Quit** closes the application without warning. Both of these "features" are dangerous, so should you want to make a small improvement to this code, that’s a good place to start. A friendly warning that the content hasn’t been saved is a vital feature of any good text editor, but for simplicity’s sake, that’s a feature for the future. 
- - -``` -@Override -public void actionPerformed([ActionEvent][15] e) { -    [String][16] ingest = ""; // start empty so the file contents are not prefixed with "null" -    [JFileChooser][17] jfc = new [JFileChooser][17]([FileSystemView][18].getFileSystemView().getHomeDirectory()); -    jfc.setDialogTitle("Choose destination."); -    jfc.setFileSelectionMode([JFileChooser][17].FILES_AND_DIRECTORIES); - -    [String][16] ae = e.getActionCommand(); -    if (ae.equals("Open")) { -        returnValue = jfc.showOpenDialog(null); -        if (returnValue == [JFileChooser][17].APPROVE_OPTION) { -        [File][19] f = new [File][19](jfc.getSelectedFile().getAbsolutePath()); -        try{ -            [FileReader][20] read = new [FileReader][20](f); -            Scanner scan = new Scanner(read); -            while(scan.hasNextLine()){ -                [String][16] line = scan.nextLine() + "\n"; -                ingest = ingest + line; -        } -            area.setText(ingest); -        } -    catch ( [FileNotFoundException][21] ex) { ex.printStackTrace(); } -} -    // SAVE -    } else if (ae.equals("Save")) { -        returnValue = jfc.showSaveDialog(null); -        try { -            [File][19] f = new [File][19](jfc.getSelectedFile().getAbsolutePath()); -            [FileWriter][22] out = new [FileWriter][22](f); -            out.write(area.getText()); -            out.close(); -        } catch ([FileNotFoundException][21] ex) { -            [Component][23] f = null; -            [JOptionPane][24].showMessageDialog(f,"File not found."); -        } catch ([IOException][25] ex) { -            [Component][23] f = null; -            [JOptionPane][24].showMessageDialog(f,"Error."); -        } -    } else if (ae.equals("New")) { -        area.setText(""); -    } else if (ae.equals("Quit")) { [System][26].exit(0); } -  } -} -``` - -That’s technically all there is to this text editor. 
Of course, nothing’s ever truly done, and besides, there are still the testing and packaging steps, so there’s still plenty of time to discover missing requisites. In case you’re not picking up on the hint: there’s _definitely_ something missing in this code. Do you know what it is yet? (It’s mentioned mainly in the [Guessing Game article][2].) - -### Testing - -You can now test your application. Launch your text editor from a terminal: - - -``` -$ java ./src/TextEdit.java -error: can’t find main(String[]) method in class: com.example.textedit.TextEdit -``` - -It seems that the code hasn’t got a main method. There are a few ways to fix this problem: you could create a main method in `TextEdit.java` and have it run an instance of the `TextEdit` class, or you can create a separate file containing the main method. Both work equally well, but the latter is more realistic in terms of what to expect from large projects, so it’s worth getting used to dealing with separate files that work together to make a complete application. - -Create a `Main.java` file in `src` and open it in your favorite editor: - - -``` -package com.example.textedit; - -public class Main { -  public static void main([String][16][] args) { -  TextEdit runner = new TextEdit(); -  } -} -``` - -You can try again, but now there are two files that depend upon one another to run, so you have to compile the code. Java uses the `javac` compiler, and you can set your destination directory with the `-d` option: - - -``` -$ javac src/*.java -d . -``` - -This creates a new directory structure modeled exactly after your package name: `com/example/textedit`. This new classpath contains the files `Main.class` and `TextEdit.class`, which are the two files that make up your application. 
You can run them with `java` by referencing the location and _name_ (not the filename) of your Main class:
-
-```
-$ java com/example/textedit/Main
-```
-
-Your text editor opens, and you can type into it, open files, and even save your work.
-
-![White text editor box with single drop down menu with options File, New, Open, Save, and Quit][27]
-
-### Sharing your work as a Java package
-
-While it seems to be acceptable to some programmers to deliver applications as an assortment of source files and hearty encouragement to learn how to run them, Java makes it really easy to package up your application so others can run it. You have most of the structure required, but you do need to add some metadata to a `Manifest.txt` file:
-
-```
-$ echo "Manifest-Version: 1.0" > Manifest.txt
-```
-
-The `jar` command, used for packaging, is a lot like the [tar][28] command, so many of the options may look familiar to you. To create a JAR file:
-
-```
-$ jar cvfme TextEdit.jar \
-  Manifest.txt \
-  com.example.textedit.Main \
-  com/example/textedit/*.class
-```
-
-From the syntax of the command, you may surmise that it creates a new JAR file called `TextEdit.jar`, with its required manifest data located in `Manifest.txt`. Its main class is defined as an extension of the package name, and the class itself is `com/example/textedit/Main.class`.
-
-You can view the contents of the JAR file:
-
-```
-$ jar tvf TextEdit.jar
-    0 Wed Nov 25 META-INF/
-  105 Wed Nov 25 META-INF/MANIFEST.MF
-  338 Wed Nov 25 com/example/textedit/Main.class
- 4373 Wed Nov 25 com/example/textedit/TextEdit.class
-```
-
-And you can even extract it with the `xvf` options, if you’d like to see how your metadata has been integrated into the `MANIFEST.MF` file.
-
-Run your JAR file with the `java` command:
-
-```
-$ java -jar TextEdit.jar
-```
-
-You can even [create a desktop file][29], so your application launches at the click of an icon in your applications menu.
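As a rough sketch of that last step, a desktop file is just a small INI-style text entry under `~/.local/share/applications`. The `Exec` path, name, and categories below are assumptions to adapt to your own setup, not values from the article:

```shell
# Write a minimal desktop entry so the editor shows up in the applications menu.
# The jar path is an example; point Exec at wherever your TextEdit.jar lives.
mkdir -p "$HOME/.local/share/applications"
cat > "$HOME/.local/share/applications/textedit.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=TextEdit
Comment=A minimal Java text editor
Exec=java -jar /home/tux/TextEdit.jar
Categories=Utility;TextEditor;
Terminal=false
EOF
```

Most desktop environments pick the new entry up automatically; if yours doesn't, logging out and back in usually does it.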
- -### Improve it - -In its current state, this is a very basic text editor, best suited for quick notes or short README documents. Some improvements (such as adding a vertical scrollbar) are quick and easy with a little research, while others (such as implementing an extensive preferences system) require real work. - -But if you’ve been meaning to learn a new language, this could be the perfect practical project for your self-education. Creating a text editor, as you can see, isn’t overwhelming in terms of code, and it’s manageable in scope. If you use text editors frequently, then writing your own can be satisfying and fun. So open your favorite text editor (the one you wrote) and start adding features! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/12/write-your-own-text-editor - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop) -[2]: https://opensource.com/article/20/12/learn-java -[3]: https://opensource.com/article/20/12/netbeans -[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+jframe -[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionlistener -[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+jtextarea -[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+uimanager -[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+classnotfoundexception -[9]: 
http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+instantiationexception -[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+illegalaccessexception -[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+unsupportedlookandfeelexception -[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+jmenubar -[13]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+jmenu -[14]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+jmenuitem -[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionevent -[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string -[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+jfilechooser -[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filesystemview -[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+file -[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filereader -[21]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filenotfoundexception -[22]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filewriter -[23]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+component -[24]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+joptionpane -[25]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception -[26]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system -[27]: https://opensource.com/sites/default/files/uploads/this-time-its-personal-31_days_yourself-opensource.png (White text editor box with single drop down menu with options File, New, Open, Save, 
and Quit) -[28]: https://opensource.com/article/17/7/how-unzip-targz-file -[29]: https://opensource.com/article/18/1/how-install-apps-linux diff --git a/sources/tech/20210104 Network address translation part 1 - packet tracing.md b/sources/tech/20210104 Network address translation part 1 - packet tracing.md index 850b4f9e1c..a1cfd4de2c 100644 --- a/sources/tech/20210104 Network address translation part 1 - packet tracing.md +++ b/sources/tech/20210104 Network address translation part 1 - packet tracing.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (amwps290) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) diff --git a/sources/tech/20210111 7 Bash tutorials to enhance your command line skills in 2021.md b/sources/tech/20210111 7 Bash tutorials to enhance your command line skills in 2021.md deleted file mode 100644 index b0a6f04d2a..0000000000 --- a/sources/tech/20210111 7 Bash tutorials to enhance your command line skills in 2021.md +++ /dev/null @@ -1,68 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (7 Bash tutorials to enhance your command line skills in 2021) -[#]: via: (https://opensource.com/article/21/1/bash) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) - -7 Bash tutorials to enhance your command line skills in 2021 -====== -Bash is the default command line shell on most Linux systems. So why not -learn how to get the most out of it? -![Terminal command prompt on orange background][1] - -Bash is the default command line shell on most Linux systems. So why not learn how to get the most out of it? This year, Opensource.com featured many great articles to help you leverage the power of the Bash shell. 
These are some of the most-read articles about Bash: - -## [Read and write data from anywhere with redirection in the Linux terminal][2] - -Redirection of input and output is a natural function of any programming or scripting language. Technically, it happens inherently whenever you interact with a computer. Input gets read from stdin (standard input, usually your keyboard or mouse), output goes to stdout (standard output, a text or data stream), and errors get sent to stderr. Understanding that these data streams exist enables you to control where information goes when you're using a shell such as Bash. Seth Kenlon shared these great tips to get data from one place to another without a lot of mouse moving and key pressing. You may not use redirection often, but learning to use it can save you a lot of time needlessly opening files and copying and pasting data. - -## [Get started with Bash scripting for sysadmins][3] - -Bash is free and open source software, so anyone can install it, whether they run Linux, BSD, OpenIndiana, Windows, or macOS. Seth Kenlon helps you learn the commands and features that make Bash one of the most powerful shells available. - -## [Try this Bash script for large filesystems][4] - -Have you ever wanted to list all the files in a directory, but just the files, nothing else? How about only the directories? If you have, then Nick Clifton's article might be just what you're looking for. Nick shares a nifty Bash script that can list directories, files, links, or executables. The script works by using the **find** command to do the searching, and then it runs **ls** to show details. It's a clever solution for anyone managing a large Linux system. - -## [Screenshot your Linux system configuration with Bash tools][5] - -There are many reasons you might want to share your Linux configuration with other people. 
You might be looking for help troubleshooting a problem on your system, or maybe you're so proud of the environment you've created that you want to showcase it to fellow open source enthusiasts. Don Watkins shows us screenFetch and Neofetch, two tools that capture and share your system configuration.
-
-## [6 Handy Bash scripts for Git][6]
-
-Git has become a ubiquitous code management system. Knowing how to manage a Git repository can streamline your development experience. Bob Peterson shares six Bash scripts that will make your life easier when you're working with Git repositories. **gitlog** prints an abbreviated list of current patches against the master version. Variations of the script can show the patch SHA1 IDs or search for a string within a collection of patches.
-
-## [5 ways to improve your Bash scripts][7]
-
-A system admin often writes Bash scripts, some short and some quite lengthy, to accomplish various tasks. Alan Formy-Duval explains how you can make your Bash scripts simpler, more robust, and easier to read and debug. We might assume that we need to employ languages such as Python, C, or Java for higher functionality, but that's not necessarily true. The Bash scripting language is very powerful. There is a lot to learn to maximize its usefulness.
-
-## [My favorite Bash hacks][8]
-
-Katie McLaughlin helps you improve your productivity with aliases and other shortcuts for the things you forget too often. When you work with computers all day, it's fantastic to find repeatable commands and tag them for easy use later on. Katie's summary of useful Bash features and helper commands will save you time.
-
-These Bash tips take an already powerful shell to a whole new level of usefulness. Feel free to share your own tips, too.
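The stdin/stdout/stderr distinction described in the redirection article above can be demonstrated in a couple of lines; the file names here are arbitrary examples:

```shell
# stdout and stderr are independent streams; each can be redirected separately.
echo "result" > out.txt             # stdout (fd 1) goes to a file
ls /no/such/dir 2> err.txt || true  # stderr (fd 2) goes elsewhere; ignore the failure
sort < out.txt                      # stdin (fd 0) is read from a file
cat out.txt err.txt > both.txt 2>&1 # merge both streams into one file
```

After running this, `out.txt` holds the normal output, `err.txt` holds only the error message, and `both.txt` holds everything.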
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/bash - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background) -[2]: https://opensource.com/article/20/6/redirection-bash -[3]: https://opensource.com/article/20/4/bash-sysadmins-ebook -[4]: https://opensource.com/article/20/2/script-large-files -[5]: https://opensource.com/article/20/1/screenfetch-neofetch -[6]: https://opensource.com/article/20/1/bash-scripts-git -[7]: https://opensource.com/article/20/1/improve-bash-scripts -[8]: https://opensource.com/article/20/1/bash-scripts-aliases diff --git a/sources/tech/20210112 3 email rules to live by in 2021.md b/sources/tech/20210112 3 email rules to live by in 2021.md deleted file mode 100644 index 9ea5e9a252..0000000000 --- a/sources/tech/20210112 3 email rules to live by in 2021.md +++ /dev/null @@ -1,58 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (3 email rules to live by in 2021) -[#]: via: (https://opensource.com/article/21/1/email-rules) -[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) - -3 email rules to live by in 2021 -====== -Email is not instant messaging. Prevent email from being a constant -interruption by following these rules. -![email or newsletters via inbox and browser][1] - -In prior years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. 
Welcome to day 2 of 21 Days of Productivity in 2021. - -Like many of us, I have a love/hate relationship with email. Email was one of the earliest means of communication on the proto-internet, corporate LANs, and the dial-up BBS ecosystem. Email was, and still is, one of the primary means of electronic correspondence. It is used for business communications, commerce, notifications, collaboration, and a pile of useful things. - -![Mutt email client][2] - -Mutt email client,  [CC BY-SA 4.0][3] by Kevin Sonney - -Many people have an incorrect perception of email. Email is not an instant messaging platform. It can seem like email is instant messaging sometimes when a person can send a message, have it show up on the other side of the world almost immediately, and then have a response in minutes. Because of this, we can fall into a mindset that we need to have our email program active at all times, and as soon as something comes in, we need to look at it and respond to it right now. - -Email was designed around the principle that a person sends a message, and the recipient responds to it when they can. Yes, there are flags for high priority and urgent emails, and our email programs have notifications to tell us when new mail arrives, but they really weren't meant to cause the stress they do for many people today. - -![So many emails][4] - -So. Many. Emails., [CC BY-SA 4.0][3] by Kevin Sonney - -It is generally said that for every interruption a person receives, it requires at least 15 minutes for their thought process to re-focus on the interrupted task. It has become common in the workplace (and at home!) to let email be one of those interruptions. It doesn't need to be, nor was it designed to be. I have adopted a couple of rules to prevent email from being an interruption that keeps me from getting things done. - -**Rule 1**: Email is not an alert platform. 
It is common for people in technology to configure monitoring and alerting platforms to send all the notifications to email. I have encountered this at almost every workplace I have been in for the last 15 years, and I spend the first several months changing it. There are many good platforms and services for managing alerts. Email is not one of them.
-
-**Rule 2**: Do not expect a reply for at least 24 hours. How many of us have received a phone call asking if we have received an email yet, and asking if we have any questions about it? I know I have. Work to set expectations in the workplace, or with people you frequently email, that responses will sometimes be quick, and sometimes not. If something is truly urgent, the sender should use some other method of communication.
-
-**Rule 3**: Check email every few hours, not constantly. I admit this one is difficult, but it brings me the most peace of mind. When I am working or trying to focus on something like writing, I close my email program (or browser tab) and ignore it until I am done. No notifications, no indicators that 20 new messages are waiting, no interruptions. It took some effort to get over the FOMO (Fear Of Missing Out) when I started doing this, but that has gotten easier over time. I find that when I do open up my email again, I can focus on it and not worry about what I could or should be doing instead.
-
-Hopefully, these three rules can help you as much as they have helped me. In the upcoming days, I'll have more things that have helped me handle my email.
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/email-rules - -作者:[Kevin Sonney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ksonney -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/newsletter_email_mail_web_browser.jpg?itok=Lo91H9UH (email or newsletters via inbox and browser) -[2]: https://opensource.com/sites/default/files/pictures/mutt-email-client.png (Mutt email client) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://opensource.com/sites/default/files/pictures/so-many-emails.png (So many emails) diff --git a/sources/tech/20210112 Super Productivity- A Super Cool Open Source To-Do List App with GitHub Integration.md b/sources/tech/20210112 Super Productivity- A Super Cool Open Source To-Do List App with GitHub Integration.md deleted file mode 100644 index d805a3afbb..0000000000 --- a/sources/tech/20210112 Super Productivity- A Super Cool Open Source To-Do List App with GitHub Integration.md +++ /dev/null @@ -1,106 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Super Productivity: A Super Cool Open Source To-Do List App with GitHub Integration) -[#]: via: (https://itsfoss.com/super-productivity/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) - -Super Productivity: A Super Cool Open Source To-Do List App with GitHub Integration -====== - -_**Brief: Super Productivity is an awesome open-source to-do app that helps you manage tasks, track tickets, and manage time.**_ - -No matter what you do, improving productivity is a common goal for most of the people. 
Usually, you would end up trying various [to-do list apps][1] or a [note-taking app][2] to help yourself organize and remember things so you can efficiently keep up with your work.
-
-Sure, you can check out those lists and try them as you like. Here, I’ve come across something unique that you may also want to try if you want a desktop to-do application with a solid user interface, GitHub/GitLab integration, and a list of essential features.
-
-Super Productivity seems to be an impressive to-do list app with some unique features to offer. In this article, I’ll let you know all about it briefly.
-
-### Super Productivity: A Simple & Attractive Open-Source To-do App
-
-![][3]
-
-Super Productivity is an open-source app, and it is actively maintained by [Johannes Millan][4] on GitHub.
-
-To me, the user experience matters the most, and I’m completely impressed with the UI offered by Super Productivity.
-
-It also offers a bunch of essential features along with some interesting options. Let’s take a look at them.
-
-### Features of Super Productivity
-
-![][5]
-
- * Add to-do tasks and descriptions
- * Track time spent on tasks and breaks
- * Project management (with JIRA, GitHub, and GitLab integration)
- * Ability to schedule tasks
- * Language selection option
- * Sync option to Dropbox, Google Drive, or any other WebDAV storage location
- * Import/Export functionality
- * Auto-backup functionality
- * Ability to tweak the behavior of timers and counters
- * Dark Mode theme available
- * Add attachments to tasks
- * Ability to repeat tasks, completely for free
- * Cross-platform support
-
-In addition to the features I mentioned, you will find more detailed settings and tweaks to configure.
-
-Especially notable is the integration with [JIRA][6], [GitHub][7], and [GitLab][8]. You can automatically assign tasks to work on without needing to check your email for the recent updates to issue trackers or tickets.
-
-Compared to many premium to-do web services that I’ve used so far, you will be surprised to find many useful features completely for free. You can also take a look at the video below to get an idea:
-
-### Installing Super Productivity on Linux
-
-![][9]
-
-You get a variety of options to install it. I downloaded the AppImage file to test. But you can also get the deb package for Debian-based distros.
-
-It is also available as a [snap][10]. You can find all the packages in the [GitHub releases section][11].
-
-If you’re curious, you can check out its [GitHub page][12] to learn more about it.
-
-Download Super Productivity
-
-### Concluding Thoughts
-
-I found the user experience with Super Productivity fantastic. The features offered are incredibly useful, and considering that you get some premium functionality (of the kind you’d normally get only with paid to-do web services), it could be a perfect replacement for most users.
-
-You can simply sync the data using Google Drive, Dropbox, or any other WebDAV storage location.
-
-It could also replace a service like [ActivityWatch][13], helping you track the time you spend working on your tasks and the time you remain idle. So, it could be your all-in-one solution for improving productivity!
-
-Sounds exciting, right?
-
-What do you think about Super Productivity? Let me know your thoughts in the comments section below.
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/super-productivity/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/to-do-list-apps-linux/ -[2]: https://itsfoss.com/note-taking-apps-linux/ -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/super-productivity.jpg?resize=800%2C569&ssl=1 -[4]: https://github.com/johannesjo -[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/super-productivity-2.jpg?resize=800%2C575&ssl=1 -[6]: https://www.atlassian.com/software/jira -[7]: https://github.com/ -[8]: https://about.gitlab.com -[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/super-productivity-1.jpg?resize=800%2C574&ssl=1 -[10]: https://snapcraft.io/superproductivity -[11]: https://github.com/johannesjo/super-productivity/releases -[12]: https://github.com/johannesjo/super-productivity -[13]: https://itsfoss.com/activitywatch/ diff --git a/sources/tech/20210115 3 plain text note-taking tools.md b/sources/tech/20210115 3 plain text note-taking tools.md deleted file mode 100644 index df29925cb7..0000000000 --- a/sources/tech/20210115 3 plain text note-taking tools.md +++ /dev/null @@ -1,78 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (3 plain text note-taking tools) -[#]: via: (https://opensource.com/article/21/1/plain-text) -[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) - -3 plain text note-taking tools -====== -Note-taking is important, and plain text is an easy, neutral way to do -it. Here are three tools you can spice up your notes without losing the -ease and portability of plain text. 
-![Typewriter with hands][1]
-
-In prior years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 5 of 21 Days of Productivity in 2021.
-
-Plain text is the most resilient format for documents. Plain text documents are small, transfer quickly between machines, and can be read on _any_ device. Therefore, it makes a lot of sense to take notes in a plain text document.
-
-However, plain text is also that - plain. We live in a rich text world and still need titles, lists, and ways to demarcate one section from another. Fortunately, there are several ways we can add these elements without having to add complex markup to plain text documents.
-
-### Markdown
-
-![Markdown][2]
-
-Markdown (Kevin Sonney, [CC BY-SA 4.0][3])
-
-[Markdown][4], created by Aaron Swartz and John Gruber, is the format I use the most day to day. From reading and writing README files, documentation, note-taking, and even source code comments, Markdown allows me to add formatting without sacrificing the ability to read the document easily.
-
-Additionally, Markdown has several "extended versions" to allow for items that were not part of the original design. In particular, [GitHub Flavored Markdown][5] is exceptionally popular due to its use on the eponymous source control site.
-
-Many file editors support Markdown highlighting out of the box, with no extra add-ons or effort required.
-
-### AsciiDoc
-
-![AsciiDoc][6]
-
-AsciiDoc (Kevin Sonney, [CC BY-SA 4.0][3])
-
-[AsciiDoc][7], created by Stuart Rackham, is another way to add rich text elements to plain text documents. AsciiDoc has many features for generating documentation, books, and papers. That does not mean it shouldn't be used for note-taking, however. There are many environments (particularly in the education and research fields) where being able to quickly convert notes to a more "formal" format is helpful.
- -AsciiDoc also has many tools for converting text to other formats for collaboration. There are also several add-ons for importing data from different sources and putting it in a final document, or for handling special formatting like MathML or LaTeX. - -### Org Mode - -![ORG-Mode][8] - -ORG-Mode (Kevin Sonney, [CC BY-SA 4.0][3]) - -I cannot leave out [Org][9] when I talk about text formatting. Originally designed for use with [GNU Emacs][10], Org Mode has become one of the go-to plain text formats for notes, to-do lists, documentation, and more. Org can be written and used in a whole host of text editors, including [Vim][11]. Org is simple, easy to learn, and one of my favorite text formats for notes. - -At the end of the day, choosing Markdown, AsciiDoc, or Org for plain text notes is a way to make sure they can be read and updated anywhere. And if you are like me, you'll find yourself using the same syntax when taking paper-based notes as well! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/plain-text - -作者:[Kevin Sonney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ksonney -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/typewriter-hands.jpg?itok=oPugBzgv (Typewriter with hands) -[2]: https://opensource.com/sites/default/files/pictures/markdown.png (Markdown) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://opensource.com/article/19/9/introduction-markdown -[5]: https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown -[6]: https://opensource.com/sites/default/files/pictures/asciidoc.png (AsciiDoc) -[7]: https://asciidoc.org/ -[8]: 
https://opensource.com/sites/default/files/pictures/org-mode.png (ORG-Mode)
-[9]: https://orgmode.org/
-[10]: https://www.gnu.org/software/emacs/
-[11]: https://opensource.com/article/19/1/productivity-tool-org-mode
diff --git a/sources/tech/20210115 How to Create and Manage Archive Files in Linux.md b/sources/tech/20210115 How to Create and Manage Archive Files in Linux.md
deleted file mode 100644
index d49695b76f..0000000000
--- a/sources/tech/20210115 How to Create and Manage Archive Files in Linux.md
+++ /dev/null
@@ -1,214 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to Create and Manage Archive Files in Linux)
-[#]: via: (https://www.linux.com/news/how-to-create-and-manage-archive-files-in-linux-2/)
-[#]: author: (LF Training https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/)
-
-How to Create and Manage Archive Files in Linux
-======
-
-_By Matt Zand and Kevin Downs_
-
-In a nutshell, an archive is a single file that contains a collection of other files and/or directories. Archive files are typically used to transfer files (locally or over the internet) or to make a backup copy of a collection of files and directories, which allows you to work with only one file (which, if compressed, is smaller than the sum of all the files within it) instead of many. Likewise, archives are used for software application packaging. This single file can be easily compressed for ease of transfer, while the files in the archive retain the structure and permissions of the original files.
-
-We can use the tar tool to create, list, and extract files from archives. Archives made with tar are normally called “tar files,” “tar archives,” or, since all the archived files are rolled into one, “tarballs.”
-
-This tutorial shows how to use tar to create an archive, list the contents of an archive, and extract the files from an archive.
Two common options used with all three of these operations are ‘-f’ and ‘-v’: to specify the name of the archive file, use ‘-f’ followed by the file name; use the ‘-v’ (“verbose”) option to have tar output the names of files as they are processed. While the ‘-v’ option is not necessary, it lets you observe the progress of your tar operation.
-
-For the remainder of this tutorial, we cover 3 topics: 1- Create an archive file, 2- List contents of an archive file, and 3- Extract contents from an archive file. We conclude this tutorial by surveying 9 practical questions related to archive file management. What you take away from this tutorial is essential for performing tasks related to [cybersecurity][1] and [cloud technology][2].
-
-### 1- Creating an Archive File
-
-To create an archive with tar, use the ‘-c’ (“create”) option, and specify the name of the archive file to create with the ‘-f’ option. It’s common practice to use a name with a ‘.tar’ extension, such as ‘my-backup.tar’. Note that unless specifically mentioned otherwise, all commands and command parameters used in the remainder of this article are used in lowercase. Keep in mind that while typing commands in this article on your terminal, you need not type the $ prompt sign that comes at the beginning of each command line.
-
-Give as arguments the names of the files to be archived; to create an archive of a directory and all of the files and subdirectories it contains, give the directory’s name as an argument.
-
-**To create an archive called ‘project.tar’ from the contents of the ‘project’ directory, type:**
-
-$ _tar -cvf project.tar project_
-
-This command creates an archive file called ‘project.tar’ containing the ‘project’ directory and all of its contents. The original ‘project’ directory remains unchanged.
-
-Use the ‘-z’ option to compress the archive as it is being written.
This yields the same output as creating an uncompressed archive and then using gzip to compress it, but it eliminates the extra step.
-
-**To create a compressed archive called ‘project.tar.gz’ from the contents of the ‘project’ directory, type:**
-
-$ _tar -zcvf project.tar.gz project_
-
-This command creates a compressed archive file, ‘project.tar.gz’, containing the ‘project’ directory and all of its contents. The original ‘project’ directory remains unchanged.
-
-**NOTE:** While using the ‘-z’ option, you should specify the archive name with a ‘.tar.gz’ extension and not a ‘.tar’ extension, so the file name shows that the archive is compressed. Although not required, it is a good practice to follow.
-
-Gzip is not the only form of compression. There are also bzip2 and xz. When we see a file with an extension of .xz, we know it has been compressed using xz. When we see a file with the extension of .bz2, we can infer it was compressed using bzip2. We are going to steer away from bzip2, as it is becoming unmaintained, and focus on xz. When compressing using xz, it is going to take longer for the files to be compressed. However, it is typically worth the wait, as the compression is much more effective, meaning the resulting file will usually be smaller than one produced by the other compression methods. Even better is the fact that decompression, or expanding the file, is not much different between the different methods of compression. Below we see an example of how to utilize xz when compressing a file using tar:
-
-$ _tar -Jcvf project.tar.xz project_
-
-We simply switch the lowercase ‘-z’ (gzip) for the uppercase ‘-J’ (xz). Here are some outputs to display the differences between the forms of compression:
-
-![][3]
-
-![][4]
-
-As you can see, xz takes the longest to compress. However, it does the best job of reducing file size, so it’s worth the wait. The larger the file, the better the compression becomes, too!
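If you want to see the gzip-versus-xz tradeoff on your own machine, a quick throwaway experiment makes the size difference concrete. The directory and file names below are just examples, and the exact sizes will vary with your data:

```shell
# Build some compressible sample data, then archive it with gzip and with xz.
mkdir -p project
seq 1 20000 > project/data.txt
tar -zcf project-gz.tar.gz project
tar -Jcf project-xz.tar.xz project
# Compare the resulting sizes; xz is slower but usually produces a smaller file.
ls -l project-gz.tar.gz project-xz.tar.xz
```

Running the same comparison on your real data (and timing it with `time`) shows whether xz's extra CPU cost is worth it for your use case.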
- -### 2- Listing Contents of an Archive File - -To list the contents of a tar archive without extracting them, use tar with the ‘-t’ option. - - * To list the contents of an archive called ‘project.tar’, type: - -$ _tar -tvf project.tar_ - -This command lists the contents of the ‘project.tar’ archive. Using the ‘-v’ option along with the ‘-t’ option causes tar to output the permissions and modification time of each file, along with its file name—the same format used by the ls command with the ‘-l’ option. - - * To list the contents of a compressed archive called ‘project.tar.gz’, type: - -$ _tar -ztvf project.tar.gz_ - -### 3- Extracting Contents from an Archive File - -To extract (or _unpack_) the contents of a tar archive, use tar with the ‘-x’ (“extract”) option. - - * To extract the contents of an archive called ‘project.tar’, type: - -$ _tar -xvf project.tar_ - -This command extracts the contents of the ‘project.tar’ archive into the current directory. - -If an archive is compressed, which usually means it will have a ‘.tar.gz’ or ‘.tgz’ extension, include the ‘-z’ option. - - * To extract the contents of a compressed archive called ‘project.tar.gz’, type: - -$ _tar -zxvf project.tar.gz_ - -**NOTE:** If there are files or subdirectories in the current directory with the same name as any of those in the archive, those files will be overwritten when the archive is extracted. If you don’t know what files are included in an archive, consider listing the contents of the archive first. - -Another reason to list the contents of an archive before extracting them is to determine whether the files in the archive are contained in a directory. If not, and the current directory contains many unrelated files, you might confuse them with the files extracted from the archive. - -To extract the files into a directory of their own, make a new directory, move the archive to that directory, and change to that directory, where you can then extract the files from the archive.
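The advice above can be sketched as a short sequence of commands (the archive and directory names here are just examples):

```shell
# Inspect the archive first, so you know what it contains
tar -tvf project.tar

# Give the contents a directory of their own
mkdir project-extracted
mv project.tar project-extracted/
cd project-extracted

# Now extract safely, without mixing files into an unrelated directory
tar -xvf project.tar
```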
- -Now that we have learned how to create an archive file and list/extract its contents, we can move on to discuss the following 6 practical questions that are frequently asked by Linux professionals. - - * Can we add content to an archive file without unpacking it? - - - -Unfortunately, once an archive has been compressed there is no way to add content to it. You would have to “unpack” it or extract the contents, edit or add content, and then compress the file again. If it’s a small file this process will not take long. If it’s a larger file then be prepared for it to take a while. (An uncompressed ‘.tar’ archive, by contrast, can have files appended to it with the ‘-r’ flag.) - - * Can we delete content from an archive file without unpacking it? - - - -This depends on the version of tar being used. Newer versions of tar support a ‘--delete’ option. - -For example, let’s say we have files file1 and file2. They can be removed from file.tar with the following: - -_$ tar -vf file.tar --delete file1 file2_ - -To remove a directory dir1: - -_$ tar -f file.tar --delete dir1/*_ - - * What are the differences between compressing a folder and archiving it? - - - -The simplest way to look at the difference between archiving and compressing is to look at the end result. When you archive files you are combining multiple files into one. So if we archive ten 100kb files, we will end up with one file of roughly 1000kb. On the other hand, if we compress those files, we could end up with a file that is only a few kb or close to 100kb. - - * How to compress archive files? - - - -As we saw above, you can create archive files using the tar command with the ‘-cvf’ options. To compress the archive file we made, there are two options: run the archive file through a compression utility such as gzip, or use a compression flag when running the tar command. The most common compression flags are ‘-z’ for gzip, ‘-j’ for bzip2, and ‘-J’ for xz.
We can see the first method below: - -_$ gzip file.tar_ - -Or we can just use a compression flag when running the tar command; here we’ll see the gzip flag ‘-z’: - -_$ tar -cvzf file.tar.gz /some/directory_ - - * How to create archives of multiple directories and/or files at one time? - - - -It is not uncommon to be in situations where we want to archive multiple files or directories at once. And it’s not as difficult as you think to tar multiple files and directories at one time. You simply supply the files or directories you want to tar as arguments to the tar command: - -_$ tar -cvzf file.tar.gz file1 file2 file3_ - -or - -_$ tar -cvzf file.tar.gz /some/directory1 /some/directory2_ - - * How to skip directories and/or files when creating an archive? - - - -You may run into a situation where you want to archive a directory or file but don’t need certain files to be archived. To avoid archiving those files, or “exclude” them, you would use the ‘--exclude’ option with tar: - -_$ tar --exclude '/some/directory' -cvf file.tar /home/user_ - -In this example, /home/user would be archived, but /some/directory would be excluded if it were under /home/user. It’s important to put the ‘--exclude’ option before the source and destination, and to enclose the file or directory being excluded in single quotation marks. - -### Summary - -The tar command is useful for creating backups or compressing files you no longer need. It’s good practice to back up files before changing them; if something doesn’t work as intended after the change, you can always revert to the old file. Compressing files no longer in use helps keep systems clean and lowers disk space usage. There are other utilities available, but tar has reigned supreme for its versatility, ease of use, and popularity.
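As a small illustration of the backup practice recommended above (the ‘myapp-config’ directory name is hypothetical), a timestamped compressed archive makes it easy to keep old copies around to revert to:

```shell
# Stand-in for a directory you are about to change
mkdir -p myapp-config
echo "setting=1" > myapp-config/app.conf

# Back it up into a dated, gzip-compressed archive before editing
backup="myapp-config-$(date +%Y%m%d-%H%M%S).tar.gz"
tar -zcvf "$backup" myapp-config

# List the backup to confirm what it holds; extract it later to revert
tar -ztvf "$backup"
```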
- -### Resources - -If you’d like to learn more about Linux, reading the following articles and tutorials is highly recommended: - - * [Comprehensive Review of Linux File System Architecture and Management][5] - * [Comprehensive Review of How Linux File and Directory System Works][6] - * [Comprehensive list of all Linux OS distributions][7] - * [Comprehensive list of all special purpose Linux distributions][8] - * [Linux System Admin Guide- Best Practices for Making and Managing Backup Operations][9] - * [Linux System Admin Guide- Overview of Linux Virtual Memory and Disk Buffer Cache][10] - * [Linux System Admin Guide- Best Practices for Monitoring Linux Systems][11] - * [Linux System Admin Guide- Best Practices for Performing Linux Boots and Shutdowns][12] - - - -### About the Authors - -**Matt Zand** is a serial entrepreneur and the founder of 3 tech startups: [DC Web Makers][13], [Coding Bootcamps][14] and [High School Technology Services][15]. He is a leading author of the [Hands-on Smart Contract Development with Hyperledger Fabric][16] book by O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum and Corda R3 platforms. At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, angel investor, and business advisor for a few startup companies. You can connect with him on LI: - -**Kevin Downs** is a Red Hat Certified System Administrator (RHCSA). In his current job as a sysadmin at IBM, he is in charge of administering hundreds of servers running different Linux distributions.
He is a Lead Linux Instructor at [Coding Bootcamps][17] where he has authored [5 self-paced Courses][18]. - -The post [How to Create and Manage Archive Files in Linux][19] appeared first on [Linux Foundation – Training][20]. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/how-to-create-and-manage-archive-files-in-linux-2/ - -作者:[LF Training][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/ -[b]: https://github.com/lujun9972 -[1]: https://learn.coding-bootcamps.com/p/essential-practical-guide-to-cybersecurity-for-system-admin-and-developers -[2]: https://learn.coding-bootcamps.com/p/introduction-to-cloud-technology -[3]: https://training.linuxfoundation.org/wp-content/uploads/2020/12/Linux1-300x94.png -[4]: https://training.linuxfoundation.org/wp-content/uploads/2020/12/Linux2-300x110.png -[5]: https://blockchain.dcwebmakers.com/blog/linux-os-file-system-architecture-and-management.html -[6]: https://coding-bootcamps.com/linux/filesystem/index.html -[7]: https://myhsts.org/tutorial-list-of-all-linux-operating-system-distributions.php -[8]: https://coding-bootcamps.com/list-of-all-special-purpose-linux-distributions.html -[9]: https://myhsts.org/tutorial-system-admin-best-practices-for-managing-backup-operations.php -[10]: https://myhsts.org/tutorial-how-linux-virtual-memory-and-disk-buffer-cache-work.php -[11]: https://myhsts.org/tutorial-system-admin-best-practices-for-monitoring-linux-systems.php -[12]: https://myhsts.org/tutorial-best-practices-for-performing-linux-boots-and-shutdowns.php -[13]: https://blockchain.dcwebmakers.com/ -[14]: http://coding-bootcamps.com/ -[15]: https://myhsts.org/ -[16]: 
https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/ -[17]: https://coding-bootcamps.com/ -[18]: https://learn.coding-bootcamps.com/courses/author/758905 -[19]: https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/ -[20]: https://training.linuxfoundation.org/ diff --git a/sources/tech/20210116 How to use KDE-s productivity suite, Kontact.md b/sources/tech/20210116 How to use KDE-s productivity suite, Kontact.md deleted file mode 100644 index 3bfc8eef89..0000000000 --- a/sources/tech/20210116 How to use KDE-s productivity suite, Kontact.md +++ /dev/null @@ -1,77 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to use KDE's productivity suite, Kontact) -[#]: via: (https://opensource.com/article/21/1/kde-kontact) -[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) - -How to use KDE's productivity suite, Kontact -====== -KDE is great, but it really shines when you use Kontact, the unified -personal info manager. -![Working on a team, busy worklife][1] - -In prior years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 6 of 21 Days of Productivity in 2021. - -In the long, long ago, when compiling a kernel was the only way to get wifi drivers, a graphical environment was mainly for running a web browser and opening lots of terminal windows. The look and feel was a mishmash of whatever toolkit the author of the program chose to use. And then, in 1996 [Matthias Ettrich][2] proposed and later released the first version of [KDE][3]. It was based on the then proprietary [Qt][4] toolkit (since made Free and Open Source). This release sparked what can only be called a desktop revolution on Linux, with the creation of the [GNOME Desktop][5] using the at-that-time FOSS GTK Toolkit. 
Between KDE and GNOME, Linux went from an operating system that _only computer people use_ to a robust desktop environment for everyone. - -![Fedora KDE Spin Default Desktop][6] - -Fedora KDE Spin Default Desktop (Kevin Sonney, [CC BY-SA 4.0][7]) - -KDE Plasma 5 is the latest KDE incarnation and is jam-packed with features to make you more productive. Included are the Konqueror web browser, Dolphin File Manager, and Konsole terminal emulator. All of these provide a good solid base for a desktop environment, but the real productivity win from KDE is [Kontact][8], the unified personal information manager. - -Kontact provides a single interface to several other KDE programs: KMail (email), KOrganizer (calendar, to-do, and journals), KAddressBook, KNotes, Akregator (RSS/ATOM feed reader), and several others. On the first launch, Kontact will walk you through your email provider setup, which supports both local and remote mail configurations. Kontact will then take you to a dashboard that by default shows recent emails, calendar events, scheduled tasks, and recent notes. - -![Kontact Summary screen][9] - -Kontact Summary screen (Kevin Sonney, [CC BY-SA 4.0][7]) - -The setup "flow" can seem a little strange since no unified single account is configured outside of the built-in local accounts. After Kontact (via KMail) walks you through the mail setup, you can go to the calendar screen to add your calendars, which also configures the to-do lists and journal applications (and for some providers, address books as well). - -The mail and calendar components are pretty straightforward and work as you would expect them to. The to-do screen and the journal are tied to the calendar, which can be an issue with some calendar providers that do not fully support all calendar types (Google. I mean Google). If you are using one of those providers, you will need to create a local calendar specific to the journal and to-do entries.
- -![Kontact Calendar][10] - -Kontact Calendar (Kevin Sonney, [CC BY-SA 4.0][7]) - -The to-do list has a lot to offer. While it can be used as a simple checklist of tasks with reminders, it also supports some lightweight project management features. There is a slider for percent complete that can be updated from the main list view so you can track progress. It has the ability to attach files and assign priorities from 1-10. Finally, it can add users to tasks, just like other calendar appointments. - -Creating a journal entry is essentially creating a note to yourself on the calendar. It is a free-form text, like writing in a physical notebook or planner on a specific day. This feature is _very_ handy if you are logging work, keeping a daily journal, or just need someplace for meeting notes (more on that later in this series). - -![Kontact Journal][11] - -Kontact Journal (Kevin Sonney, [CC BY-SA 4.0][7]) - -On their own, the programs that make up Kontact are very powerful and can be run as individual apps if you want. Kontact levels up their usefulness by giving you a central place and application to find all your information. - -Most distributions will allow you to install Kontact and the components it uses without KDE, but it really shines when used as part of the KDE Desktop. KDE is available as the default desktop in the Fedora KDE Spin, KUbuntu, KDE Neon (based on Ubuntu LTS), and several other distributions. - -KDE originally stood for Kool Desktop Environment, but is now known by many as the K Desktop... 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/kde-kontact - -作者:[Kevin Sonney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ksonney -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife) -[2]: https://en.wikipedia.org/wiki/Matthias_Ettrich -[3]: https://kde.org/ -[4]: https://en.wikipedia.org/wiki/Qt_(software) -[5]: https://www.gnome.org/ -[6]: https://opensource.com/sites/default/files/pictures/fedora-kde-spin-default-desktop.png (Fedora KDE Spin Default Desktop) -[7]: https://creativecommons.org/licenses/by-sa/4.0/ -[8]: https://kontact.kde.org/ -[9]: https://opensource.com/sites/default/files/pictures/kontact-summary-screen_0.png (Kontact Summary screen) -[10]: https://opensource.com/sites/default/files/pictures/kontact-calendar.png (Kontact Calendar) -[11]: https://opensource.com/sites/default/files/pictures/kontact-journal.png (Kontact Journal) diff --git a/sources/tech/20210118 10 ways to get started with open source in 2021.md b/sources/tech/20210118 10 ways to get started with open source in 2021.md deleted file mode 100644 index 09b4c89cd3..0000000000 --- a/sources/tech/20210118 10 ways to get started with open source in 2021.md +++ /dev/null @@ -1,124 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (10 ways to get started with open source in 2021) -[#]: via: (https://opensource.com/article/21/1/getting-started-open-source) -[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo) - -10 ways to get started with open source in 2021 -====== 
-If you're new to open source, 2020's top 10 articles about getting -started will help guide your path. -![Looking at a map for career journey][1] - -Opensource.com exists to educate the world about everything open source, from new tools and frameworks to scaling communities. We aim to make open source more accessible to anyone who wants to use or contribute to it. - -Getting started in open source can be hard, so we regularly share tips and advice on how you can get involved. If you want to learn Python, help fight COVID-19, or join the Kubernetes community, we've got you covered. - -To help you begin, we curated the 10 most popular articles on getting started in open source we published in 2020. We hope they'll inspire you to learn something new in 2021. - -### A beginner's guide to web scraping with Python - -Want to learn Python through doing, not reading? In this tutorial, Julia Piaskowski guides you through her first [web scraping project in Python][2]. She shows how to access webpage content with Python library requests. - -Julia walks through each step, from installing Python 3 to cleaning web scraping results with pandas. Aided by screenshots galore, she explains how to scrape with an end goal in mind. - -The section on extracting relevant content is especially helpful; she doesn't mince words when saying this can be tough. But, like the rest of the article, she guides you through each step. - -### A beginner's guide to SSH for remote connections on Linux - -If you've never opened a secure shell (SSH) before, the first time can be confusing. In this tutorial, Seth Kenlon shows how to [configure two computers for SSH connections][3] and securely connect them without a password. - -From four key phrases you should know to steps for activating SSH on each host, Seth explains every step of making SSH connections. He includes advice on finding your computer's IP address, creating an SSH key, and verifying your access to a remote machine. 
- -### 5 steps to learn any programming language - -If you know one programming language, you can [learn them all][4]. That's the premise of this article by Seth Kenlon, which argues that knowing some basic programming logic can scale across languages. - -Seth shares five things programmers look for when considering a new language to learn to code in. Syntax, built-ins, and parsers are among the five, and he accompanies each with steps to take action. - -The key argument uniting them all? Once you know the theory of how code works, it scales across languages. Nothing is too hard for you to learn. - -### Contribute to open source healthcare projects for COVID-19 - -Did you know that an Italian hospital saved COVID-19 patients' lives by 3D printing valves for reanimation devices? It's one of many ways open source contributors [built solutions for the pandemic][5] in 2020. - -In this article, Joshua Pearce shares several ways to volunteer with open source projects addressing COVID-19. While Project Open Air is the largest, Joshua explains how you can also work on a wiki for an open source ventilator, write open source COVID-19 medical supply requirements, test an open source oxygen concentrator prototype, and more. - -### Advice for getting started with GNOME - -GNOME is one of the most popular Linux desktops, but is it right for you? This article shares [advice from GNOME][6] users interspersed with Opensource.com's take on this topic. - -Want some inspiration for configuring your desktop? This article includes links to get started with GNOME extensions, installing Dash to Dock, using the GNOME Tweak tool, and more. - -After all that, you might decide that GNOME still isn't for you—and that's cool. You'll find links to other Linux desktops and window managers at the end. - -### 3 reasons to contribute to open source now - -As of June 2020, GitHub hosted more than 180,000 public repositories. 
It's never been easier to join the open source community, but does that mean you should? In this article, Opensource.com Correspondent Jason Blais [shares three reasons][7] to take the plunge. - -Contributing to open source can boost your confidence, resume, and professional network. Jason explains how to leverage your contributions in helpful detail, from sharing how to add open source contributions on your LinkedIn profile to turning these contributions into paid roles. There's even a list of great projects for first-time contributors at the end. - -### 4 ways I contribute to open source as a Linux systems administrator - -Sysadmins are the unsung heroes of open source. They do lots of work behind the code that's deeply valuable but often unseen. - -In this article, Elizabeth K. Joseph explains how she [improves open source projects][8] as a Linux sysadmin. User support, hosting project resources, and finding new website environments are just a few ways she leaves communities better than she found them. - -Perhaps the most crucial contribution of all? Documentation! Elizabeth got her start in open source rewriting a quickstart guide for a project she used. Submitting bugs and patch reports to projects you use often is an ideal way to get involved. - -### 6 ways to contribute to an open source alternative to Slack - -Mattermost is a popular platform for teams that want an open source messaging system. Its active, vibrant community is a key plus that keeps users loyal to the product, especially those with experience in Go, React, and DevOps. - -If you'd like to [contribute to Mattermost][9], Jason Blais explains how. Consider this article your Getting Started documentation: Blais shares steps to take, organized by six types of contributions you can make. - -Whether you'd like to build an integration or localize your language, this article shares how to get going. 
- -### How to contribute to Kubernetes - -I walked into Open Source Summit 2018 in Vancouver young and unaware of Kubernetes. After the keynotes, I walked out of the ballroom a changed-yet-confused woman. It's not hyperbole to say that Kubernetes has changed open source for good: It's tough to find a more popular, impactful project. - -If you'd like to contribute, IBM Engineer Tara Gu explains [how she got started.][10] This recap of her lightning talk at All Things Open 2019 includes a video of the talk she gave in person. At a conference. Remember those…? - -### How anyone can contribute to open source software in their job - -Necessity is the mother of invention, especially in open source. Many folks build open source solutions to their own problems. But what happens when developers miss the mark by building products without gathering feedback from their target users? - -Product and design teams often fill this gap in enterprises. What should developers do if such roles don't exist on their open source teams? - -In this article, Catherine Robson explains how open source teams [can collect feedback][11] from their target users. It's written for folks who want to share their work experiences with developers, thus contributing their feedback to open source projects. - -The steps Catherine outlines will help you share your insights with open source teams and play a key role helping teams build better products. - -### What do you want to learn? - -What would you like to know about getting started in open source? Please share your suggestions for article topics in the comments. And if you have a story to share to help others get started in open source, please consider [writing an article][12] for Opensource.com. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/getting-started-open-source - -作者:[Lauren Maffeo][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/lmaffeo -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey) -[2]: https://opensource.com/article/20/5/web-scraping-python -[3]: https://opensource.com/article/20/9/ssh -[4]: https://opensource.com/article/20/10/learn-any-programming-language -[5]: https://opensource.com/article/20/3/volunteer-covid19 -[6]: https://opensource.com/article/20/6/gnome-users -[7]: https://opensource.com/article/20/6/why-contribute-open-source -[8]: https://opensource.com/article/20/7/open-source-sysadmin -[9]: https://opensource.com/article/20/7/mattermost -[10]: https://opensource.com/article/20/1/contributing-kubernetes-all-things-open-2019 -[11]: https://opensource.com/article/20/10/open-your-job -[12]: https://opensource.com/how-submit-article diff --git a/sources/tech/20210119 10 ways big data and data science impacted the world in 2020.md b/sources/tech/20210119 10 ways big data and data science impacted the world in 2020.md new file mode 100644 index 0000000000..ffdce5baaf --- /dev/null +++ b/sources/tech/20210119 10 ways big data and data science impacted the world in 2020.md @@ -0,0 +1,121 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (10 ways big data and data science impacted the world in 2020) +[#]: via: (https://opensource.com/article/21/1/big-data) +[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo) + +10 ways big data 
and data science impacted the world in 2020 +====== +Learn how open source data science languages, libraries, and tools are +helping us understand our world better by reviewing 2020's top 10 data +science articles on Opensource.com. +![Looking at a map][1] + +Big data’s one of many domains where open source shines. From open source alternatives for Google Analytics to new features in MySQL, 2020 brought several ways for open source enthusiasts to learn big data skills. + +Get up to speed on how open source data science languages, libraries, and tools help us understand our world better by reviewing the top 10 data science articles published on Opensource.com last year.  + +### The 7 most popular ways to plot data in Python + +Once upon a time, Matplotlib was the lone way to make plots in Python. In recent years, Python's status as data science's de facto language changed that. We have a plethora of ways to plot data using Python today. + +In this article, Shaun Taylor-Morgan walks through [seven ways to plot data in Python][2]. Don't worry if you're a Matplotlib user: It's covered, along with Seaborn, Plotly, and Bokeh. You'll find codes and charts per plotting library, plus some newcomers to the Python plotting field: Altair, Pygal, and pandas. + +### Transparent, open source alternative to Google Analytics + +Many websites use Google Analytics to track their activity metrics. Its status as a de facto tool leaves some to wonder if open source options exist. In this [overview of Plausible Analytics][3], Marko Saric proves they do. + +If you want to compare Google Analytics against open source options, you will find Marko's article helpful. It's especially great if you're a website admin trying to comply with new data collection regulations, such as GDPR. + +If you want to learn more about Plausible, you'll find links to Plausible's code and roadmap on GitHub in Marko's article. 
+ +### 5 MySQL features you need to know + +After MySQL 8.0 came out in April 2018, its release cycle for new features updated to four times per year. Despite the more frequent deployments, many users don't know about [new MySQL features][4] that could save them hours of time. + +In this March 2020 article, Dave Stokes shares five features that were new to MySQL. They include dual passwords, new shells, and better SQL support. But keep in mind that these updates are now close to a year old: There's a lot more to discover in MySQL since then! + +### Using C and C++ for data science + +Did you know that C and C++ are both strong options for data science projects? They're especially good choices to [run data science programs on the command line][5]. + +In this article, Cristiano L. Fontana uses [C99][6] and [C++11][7] to write a program that uses [Anscombe's quartet][8] dataset. The step-by-step instructions include reading data from a CSV file, interpolating data, and plotting results to an image file. + +### Using Python to visualize COVID-19 projections + +The COVID-19 pandemic brought an influx of data to the proverbial forefront. In this article, Anurag Gupta shows how to use Python to [project COVID-19 cases and deaths][9] across India. + +Anurag walks through downloading and parsing data, selecting and plotting data for India, and creating an animated horizontal bar graph. If you're interested in the complete script, you'll find a link at the end of this article. + +### How I use Python to map the global spread of COVID-19 + +If you want to [track the spread of COVID-19 globally][10], you can use Python, pandas, and Plotly to do it. In this article, Anurag Gupta explains how you can use them to clean and visualize raw data. + +Using screenshots to help, Anurag shares how to load data into a pandas DataFrame; clean and modify the DataFrame; and visualize the spread in Plotly. 
The complete code yields a gorgeous graph, and the article ends with a link to download and run it. + +### 3 ways to use PostgreSQL commands + +In this follow-up to his article on getting started with PostgreSQL, Greg Pittman shares how he uses PostgreSQL commands to [keep his grocery shopping list updated][11]. + +Whether you want to do per-item entry or bring order to complex tables, Greg explains how to create the commands you need. He also shows how to output your lists once you're ready to print them. + +No matter how long your shopping list is, PostgreSQL commands—especially the WHERE parameter—can bring ease to your life beyond programming. + +### Using Python and GNU Octave to plot data + +Python is data science's language du jour, but how can you use it for specific tasks? In this article, Cristiano Fontana shares how to [write a program in Python and GNU Octave][12]. + +Cristiano walks through each step to read data from a CSV file, interpolate the data with a straight line, and plot the result to an image file. From printing output and reading data to plotting the outcome, Fontana's step-by-step guidelines explain the whole process in Python and GNU Octave. + +### Fast data modeling with JavaScript + +Want a way to [model data in a few minutes][13]? In this article, Szymon shares how to do it using less than 15 lines of JavaScript code. + +It really is that simple: You merely need to create a class and use the defaultsDeep function in the [Lodash][14] JavaScript library. Szymon shows this process using screenshots and code samples. + +It keeps your data in one place, avoids code repetition, and is fully customizable. If you want to try out the code in this article, Szymon links to it in CodeSandbox at the end. + +### How to process real-time data with Apache tools + +We process so much data today that storing data for analysis later might be impossible soon. 
Teams that handle failure prediction and other context-sensitive data need to get this information in real time, before it hits a database. Luckily, you can do this with Apache tools. + +In this article, Simon Crosby explains how Apache Spark—a unified analytics engine—can [process large datasets][15] in real time at scale. For instance, "Spark Streaming breaks data into mini-batches that are each independently analyzed by a Spark model or some other system," he writes. + +If Apache's not your thing, Simon presents other open source options. Flink, Beam, and Stanza—along with Apache-licensed SwimOS and Hazelcast—are just a few of your choices. + +### What do you want to know? + +What would you like to know about big data and data science? Please share your suggestions for article topics in the comments. And if you have something interesting to share about data science, please consider [writing an article][16] for Opensource.com. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/big-data + +作者:[Lauren Maffeo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lmaffeo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map) +[2]: https://opensource.com/article/20/4/plot-data-python +[3]: https://opensource.com/article/20/5/plausible-analytics +[4]: https://opensource.com/article/20/3/mysql-features +[5]: https://opensource.com/article/20/2/c-data-science +[6]: https://en.wikipedia.org/wiki/C99 +[7]: https://en.wikipedia.org/wiki/C%2B%2B11 +[8]: https://en.wikipedia.org/wiki/Anscombe%27s_quartet +[9]: https://opensource.com/article/20/4/python-data-covid-19 
+[10]: https://opensource.com/article/20/4/python-map-covid-19 +[11]: https://opensource.com/article/20/2/postgresql-commands +[12]: https://opensource.com/article/20/2/python-gnu-octave-data-science +[13]: https://opensource.com/article/20/5/data-modeling-javascript +[14]: https://en.wikipedia.org/wiki/Lodash +[15]: https://opensource.com/article/20/2/real-time-data-processing +[16]: https://opensource.com/how-submit-article diff --git a/sources/tech/20210119 Haruna Video Player- An Open-Source Qt-based MPV GUI Front-end for Linux.md b/sources/tech/20210119 Haruna Video Player- An Open-Source Qt-based MPV GUI Front-end for Linux.md deleted file mode 100644 index 9513b5d75c..0000000000 --- a/sources/tech/20210119 Haruna Video Player- An Open-Source Qt-based MPV GUI Front-end for Linux.md +++ /dev/null @@ -1,94 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Haruna Video Player: An Open-Source Qt-based MPV GUI Front-end for Linux) -[#]: via: (https://itsfoss.com/haruna-video-player/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) - -Haruna Video Player: An Open-Source Qt-based MPV GUI Front-end for Linux -====== - -_**Brief: A Qt-based video player for Linux that acts as a front-end to mpv along with the ability to use youtube-dl.**_ - -### Haruna Video Player: A Qt-based Free Video Player - -![Haruna Video Player][1] - -In case you’re not aware of [mpv][2], it is a free and open-source command-line based media player. Okay, there is a [minimalist GUI for MPV][3] but at the core, it is command line. - -You might also find several [open-source video players][4] that are basically the GUI front-end to mpv. - -Haruna video player is one of them along with the ability to [use youtube-dl][5]. You can easily play local media files as well as YouTube content. - -Let me give you an overview of the features offered with this player. 
- -### Features of Haruna Video Player - -![][6] - -You might find it a bit different from some other video players. Here’s what you get with Haruna video player: - - * Ability to play YouTube videos directly using the URL - * Support playlists and you get to control them easily - * Ability to auto-skip based on some words in the subtitle. - * Control the playback speed - * Change the format to play (audio/video) using [youtube-dl][7] - * Plenty of keyboard shortcuts - * Easily take a screenshot from the video - * Option to add primary and secondary subtitle - * Change the file format of the screenshot - * Hardware decoding supported - * Color adjustments to improve the quality of what you watch - * Ability to tweak mouse and keyboard shortcuts to be able to quickly navigate and do what you want - * Tweak the UI (fonts, theme) - - - -### Installing Haruna Video Player on Linux - -![][8] - -Unfortunately (or not), depending on what you prefer, you can only install it [using Flatpak][9]. You can install it on any Linux distribution using the [Flatpak package][10]. - -You can find it in [AUR][11] as well if you’re using an Arch-based system. - -But, if you do not prefer that, you may take a look at the source code on [GitHub][12] to see if you can build it yourself like a normal Gentoo user. - -[Haruna Video Player][12] - -### Concluding Thoughts - -Haruna Video Player is a simple and useful GUI on top [libmpv][13]. The ability to play YouTube videos along with various file formats on the system is definitely something many users would like. - -The user interface is easy to get used to and offers some important customization options as well. - -Have you tried this video player already? Let me know what you think about it in the comments below. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/haruna-video-player/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/haruna-video-player-dark.jpg?resize=800%2C512&ssl=1 -[2]: https://mpv.io/ -[3]: https://itsfoss.com/mpv-video-player/ -[4]: https://itsfoss.com/video-players-linux/ -[5]: https://itsfoss.com/download-youtube-linux/ -[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/haruna-video-player-1.png?resize=800%2C503&ssl=1 -[7]: https://github.com/ytdl-org/youtube-dl -[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/haruna-player-garuda-linux.png?resize=800%2C506&ssl=1 -[9]: https://itsfoss.com/flatpak-guide/ -[10]: https://flathub.org/apps/details/com.georgefb.haruna -[11]: https://itsfoss.com/aur-arch-linux/ -[12]: https://github.com/g-fb/haruna -[13]: https://github.com/mpv-player/mpv/tree/master/libmpv diff --git a/sources/tech/20210119 Set up a Linux cloud on bare metal.md b/sources/tech/20210119 Set up a Linux cloud on bare metal.md new file mode 100644 index 0000000000..5445ad9141 --- /dev/null +++ b/sources/tech/20210119 Set up a Linux cloud on bare metal.md @@ -0,0 +1,128 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Set up a Linux cloud on bare metal) +[#]: via: (https://opensource.com/article/21/1/cloud-image-virt-install) +[#]: author: (Sumantro Mukherjee https://opensource.com/users/sumantro) + +Set up a Linux cloud on bare metal +====== +Create cloud images with virt-install on Fedora. +![Sky with clouds and grass][1] + +Virtualization is one of the most used technologies. 
Fedora Linux uses [Cloud Base images][2] to create general-purpose virtual machines (VM), but there are many ways to set up Cloud Base images. Recently, the virt-install command-line tool for provisioning VMs added support for **cloud-init**, so it can now be used to configure and run a cloud image locally.
+
+This article walks through how to set up a base Fedora cloud instance on bare metal. The same steps can be used with any raw or Qcow2 Cloud Base image.
+
+### What is --cloud-init?
+
+The **virt-install** command creates a KVM, Xen, or [LXC][3] guest using **libvirt**. The `--cloud-init` option uses a local file (called a **nocloud datasource**) so you don't need a network connection to create an image. The **nocloud** method derives user data and metadata for the guest from an iso9660 filesystem (an `.iso` file) during the first boot. When you use this option, **virt-install** generates a random (and temporary) password for the root user account, provides a serial console so you can log in and change your password, and then disables the `--cloud-init` option for subsequent boots.
+
+### Set up a Fedora Cloud Base image
+
+First, [download a Fedora Cloud Base (for OpenStack) image][2].
+
+![Fedora Cloud website screenshot][4]
+
+(Sumantro Mukherjee, [CC BY-SA 4.0][5])
+
+Then install the **virt-install** command:
+
+
+```
+$ sudo dnf install virt-install
+```
+
+Once **virt-install** is installed and the Fedora Cloud Base image is downloaded, create a small YAML file named `cloudinit-user-data.yaml` to contain a few configuration lines that virt-install will use.
+
+
+```
+#cloud-config
+password: 'r00t'
+chpasswd: { expire: false }
+```
+
+This simple cloud-config sets the password for the default **fedora** user. Because `expire` is set to `false`, the password will not expire; set it to `true` to force a password change at first login.
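That file is deliberately minimal. As a sketch of how it could be extended (the SSH key and package list below are placeholder assumptions, not part of this setup), cloud-init can also inject an SSH key and install packages on first boot:

```yaml
#cloud-config
password: 'r00t'
# Set expire to true to force a password change at first login.
chpasswd: { expire: true }
# Placeholder public key -- replace with your own.
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... user@example.com
# Optional: packages to install on first boot.
packages:
  - vim
```

These are standard cloud-config modules, so the same file works with any nocloud datasource, not just virt-install.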
+
+Create and boot the VM:
+
+
+```
+$ virt-install --name local-cloud18012709 \
+--memory 2000 --noreboot \
+--os-variant detect=on,name=fedora-unknown \
+--cloud-init user-data="/home/r3zr/cloudinit-user-data.yaml" \
+--disk=size=10,backing_store="/home/r3zr/Downloads/Fedora-Cloud-Base-33-1.2.x86_64.qcow2"
+```
+
+In this example, `local-cloud18012709` is the name of the virtual machine, RAM is set to 2000MiB, disk size (the virtual hard drive) is set to 10GB, and `--cloud-init` and `backing_store` contain the absolute paths to the YAML config file you created and the Qcow2 image you downloaded.
+
+### Log in
+
+After the VM is created, you can log in with the username **fedora** and the password set in the YAML file (in my example, this is **r00t**, but you may have used something different). Change your password once you've logged in for the first time.
+
+To power off your virtual machine, execute the `sudo poweroff` command; to leave the serial console without stopping the VM, press **Ctrl**+**]** on your keyboard.
+
+### Start, stop, and kill VMs
+
+The `virsh` command is used to start, stop, and kill VMs.
+
+To start a VM that is shut off:
+
+
+```
+$ virsh start <vm-name>
+```
+
+To stop any running VM:
+
+
+```
+$ virsh shutdown <vm-name>
+```
+
+To list all VMs that are in a running state:
+
+
+```
+$ virsh list
+```
+
+To forcibly stop (destroy) a VM:
+
+
+```
+$ virsh destroy <vm-name>
+```
+
+![Destroying a VM][6]
+
+(Sumantro Mukherjee, [CC BY-SA 4.0][5])
+
+### Fast and easy
+
+The **virt-install** command combined with the `--cloud-init` option makes it fast and easy to create cloud-ready images without worrying about whether you have a cloud to run them on yet. Whether you're preparing for a major deployment or just learning about containers, give `virt-install --cloud-init` a try.
+
+Do you have a favourite tool for your work in the cloud? Tell us about it in the comments.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/cloud-image-virt-install + +作者:[Sumantro Mukherjee][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/sumantro +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass) +[2]: https://alt.fedoraproject.org/cloud/ +[3]: https://www.redhat.com/sysadmin/exploring-containers-lxc +[4]: https://opensource.com/sites/default/files/uploads/fedoracloud.png (Fedora Cloud website) +[5]: https://creativecommons.org/licenses/by-sa/4.0/ +[6]: https://opensource.com/sites/default/files/uploads/destroyvm.png (Destroying a VM) diff --git a/sources/tech/20210120 Help safeguard your Linux server from attack with this REST API.md b/sources/tech/20210120 Help safeguard your Linux server from attack with this REST API.md new file mode 100644 index 0000000000..8c44360b77 --- /dev/null +++ b/sources/tech/20210120 Help safeguard your Linux server from attack with this REST API.md @@ -0,0 +1,409 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Help safeguard your Linux server from attack with this REST API) +[#]: via: (https://opensource.com/article/21/1/crowdsec-rest-api) +[#]: author: (Philippe Humeau https://opensource.com/users/philippe-humeau) + +Help safeguard your Linux server from attack with this REST API +====== +CrowdSec's new architecture improves communication among components to +better protect systems against cybersecurity threats. 
+![Lock][1]
+
+CrowdSec is an open source cybersecurity detection system meant to identify aggressive behaviors and prevent the offending hosts from accessing your systems. Its user-friendly design offers a low technical barrier of entry with a significant boost to security.
+
+A modern behavior detection system written in Go, CrowdSec combines the philosophy of [Fail2ban][2] with Grok patterns and YAML grammar to analyze logs, providing a decoupled approach to securing the cloud, containers, and virtual machine (VM) infrastructures. Once detected, a threat can be mitigated with bouncers (block, 403, captcha, and so on), and blocked internet protocol addresses (IPs) are shared among all users to improve everyone's security further.
+
+December's official release of [CrowdSec v1.0.x][3] introduces several improvements, including a major architectural change: the introduction of a local REST API. This local API allows all components to communicate more efficiently to support more complex architectures while keeping things simple for single-machine users. It also makes it simpler to create bouncers (the remediation component) and renders them more resilient to upcoming changes, which limits maintenance time.
+
+The CrowdSec architecture was deeply remodeled in the new 1.0 release.
+
+![CrowdSec architecture][4]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+All CrowdSec components (the agent reading logs, cscli for humans, and bouncers to deter the bad guys) can now communicate via a REST API instead of reading or writing directly in the database. With this new version, only the local API service will interact with the database (e.g., SQLite, PostgreSQL, or MySQL).
+
+This article covers how to install and run CrowdSec on a Linux server:
+
+ * CrowdSec setup
+ * Testing detection capabilities
+ * Bouncer setup
+ * Observability
+
+
+
+### Set up the environment
+
+The machine I used for this test is a Debian 10 Buster t2.medium EC2.
+
+To make it more relevant, install Nginx:
+
+
+```
+$ sudo apt-get update
+$ sudo apt-get install nginx
+```
+
+Configure the security groups so that both secure shell (SSH) (tcp/22) and HTTP (tcp/80) can be reached from the outside world. This will be useful for simulating attacks later.
+
+#### Install CrowdSec
+
+Grab the latest version of CrowdSec:
+
+
+```
+$ curl -s https://api.github.com/repos/crowdsecurity/crowdsec/releases/latest | grep browser_download_url | cut -d '"' -f 4 | wget -i -
+```
+
+You can also download it from CrowdSec's [GitHub page][6].
+
+Install it:
+
+
+```
+$ tar xvzf crowdsec-release.tgz
+$ cd crowdsec-v1.0.0/
+$ sudo ./wizard.sh -i
+```
+
+The wizard helps guide installation and configuration. And I have to say it's very helpful!
+
+![CrowdSec Installation Wizard][7]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+First, the wizard identifies services present on the machine:
+
+![CrowdSec identifies services on the machine][8]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+It allows you to choose which services to monitor. For this tutorial, go with the default option and monitor all three services: Nginx, SSHD, and the Linux system.
+
+For each service, the wizard identifies the associated log files and asks you to confirm (use the defaults again):
+
+![CrowdSec wizard identifies log files][9]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+Once the services and associated log files have been identified correctly (which is crucial, as this is where CrowdSec will get its information), the wizard prompts you with suggested collections.
+
+A [collection][10] is a set of configurations that aims to create a coherent ensemble to protect a technological stack. For example, the [crowdsecurity/sshd][11] collection contains a [parser for SSHD logs][12] and a [scenario to detect SSH bruteforce and SSH user enumeration][13].
+
+The suggested collections are based on the services that you choose to protect.
+
+![CrowdSec suggests collections][14]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+The wizard's last step is to deploy [generic whitelists][15] to prevent banning private IP addresses. It also reminds you that CrowdSec will _detect_ malevolent IPs but _not ban_ any of them. You need a bouncer to block attacks. This is an essential thing to remember: _CrowdSec detects attacks; bouncers block them_.
+
+![CrowdSec detects attacks][16]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+Now that the initial setup is done, CrowdSec should be up and running.
+
+![CrowdSec running][17]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+### Detect attacks with CrowdSec
+
+By installing CrowdSec, you should already have coverage for common internet background noise. Check it out!
+
+#### Attack a web server with Wapiti
+
+Simulate a web application vulnerability scan on your Nginx service using Wapiti, a web application vulnerability scanner. You need to do this from an external IP, and keep in mind that [private IPs are whitelisted by default][15]:
+
+
+```
+ATTACKER$ wapiti -u http://34.248.33.108/
+[*] Saving scan state, please wait...
+
+ Note
+========
+This scan has been saved in the file
+/home/admin/.wapiti/scans/34.248.33.108_folder_b753f4f6.db
+...
+```
+
+On your freshly equipped machine, you can see the attacks in the logs:
+
+![Seeing attacks in the logs][18]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+My IP triggered different scenarios:
+
+ * [**crowdsecurity/http-path-traversal-probing**][19]: Detects path traversal probing attempt patterns in the `URI` or the `GET` parameters.
+ * [**crowdsecurity/http-sqli-probing-detection**][20]: Detects obvious SQL injection probing attempt patterns in the `URI` or the `GET` parameters.
+
+
+
+Bear in mind that the website you attacked in this example is an empty Nginx server. If this were a real website, the scanner would perform many other actions that would lead to more detections.
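To make those two scenario names concrete, here is an illustrative sketch of the kind of GET requests a scanner emits that match these patterns. The paths and parameters are hypothetical examples (only the target address comes from the scan output above); this is not actual Wapiti output:

```shell
# Illustrative only: requests matching CrowdSec's probing scenarios.
probe_urls() {
    target="http://34.248.33.108"
    # Path traversal: try to climb out of the web root toward /etc/passwd.
    echo "${target}/?page=../../../../etc/passwd"
    # SQL injection: inject URL-encoded quote/OR logic into a GET parameter.
    echo "${target}/?id=1%27%20OR%20%271%27%3D%271"
}

probe_urls
```

Requests like these in the Nginx access log are what the HTTP parsers feed into the two scenarios.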
+
+#### Check results with cscli
+
+**[cscli][21]** is one of the main tools for interacting with the CrowdSec service, and one of its features is visualizing _active decisions_ and _past alerts_.
+
+![cscli][22]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+The `cscli decisions list` command displays active decisions at any time, while `cscli alerts list` shows past alerts (even if decisions are expired or the alert didn't lead to a decision).
+
+You can also inspect a specific alert to get more details with `cscli alerts inspect -d <alert ID>` (using the ID displayed in the left-hand column of the alerts list).
+
+![Inspect a specific alert][23]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+cscli offers other features, but for now, use it to find out which parsers and scenarios are installed in the default setup.
+
+![Find installed parsers and scenarios][24]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+### Observability
+
+Observability (especially for software that might take defensive countermeasures) is always a key point for a security solution. Besides its "tail the logfile" capability, CrowdSec offers two ways to achieve this: Metabase dashboards and Prometheus metrics.
+
+#### Metabase dashboard
+
+cscli allows you to deploy a dashboard using Metabase and Docker. Begin by [installing Docker using its official documentation][25].
+
+![Installing Docker][26]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+If you're using an AWS EC2 instance, be sure to expose tcp/3000 to access your dashboard.
+
+`cscli dashboard setup` enables you to deploy a new Metabase dashboard running on Docker with a random password.
+ +![Metabase sign in][27] + +(Priya James, [CC BY-SA 4.0][5]) + +![CrowdSec dashboard][28] + +(Priya James, [CC BY-SA 4.0][5]) + +![CrowdSec dashboard timeline][29] + +(Priya James, [CC BY-SA 4.0][5]) + +![CrowdSec dashboard attack sources][30] + +(Priya James, [CC BY-SA 4.0][5]) + +#### Prometheus metrics + +While some people love visual dashboards, others prefer other kinds of metrics. This is where CrowdSec's Prometheus integration comes into play. + +One way to visualize these metrics is with `cscli metrics`. + +![cscli metrics][31] + +(Priya James, [CC BY-SA 4.0][5]) + +The `cscli metrics` command exposes only a subset of Prometheus metrics that are important for system administrators. You can find a detailed [description of the metrics in the documentation.][32] The metrics are split into sections: + + * **Buckets:** How many buckets of each type were created, poured, or have overflowed since the daemon startup? + * **Acquisition:** How many lines or events were read from each of the specified sources, and were they parsed and/or poured into buckets later? + * **Parser:** How many lines/events were delivered to each parser, and did the parser succeed in processing the mentioned events? + * **Local API:** How many times was each route hit, and so on? + + + +Viewing CrowdSec's Prometheus metrics via `cscli metrics` is convenient but doesn't do justice to Prometheus. It is out of scope for this article to deep dive into Prometheus, but these screenshots offer a quick look at what CrowdSec's Prometheus metrics look like in Grafana. + +![CrowdSec's Prometheus metrics in Grafana][33] + +(Priya James, [CC BY-SA 4.0][5]) + +![CrowdSec's Prometheus metrics in Grafana][34] + +(Priya James, [CC BY-SA 4.0][5]) + +### Defend attacks with bouncers + +CrowdSec's detection capabilities provide observability into what is going on. However, to protect yourself, you need to block attackers, which is where bouncers play a major part. Remember: CrowdSec detects, bouncers deter. 
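A bouncer's core loop is simply asking the local REST API whether a given IP currently has a decision against it. The sketch below only constructs the request; the default port, endpoint, and `X-Api-Key` header follow CrowdSec's documented local API, but treat the specifics as assumptions rather than a working client:

```shell
# Build the decisions query a bouncer would send to the local API.
decision_url() {
    echo "http://127.0.0.1:8080/v1/decisions?ip=$1"
}

# A real bouncer sends this with an API key obtained via
# 'sudo cscli bouncers add <name>', for example:
#   curl -s -H "X-Api-Key: $API_KEY" "$(decision_url 192.0.2.10)"
decision_url 192.0.2.10
```

An empty response means no active decision for that IP; otherwise the bouncer applies the returned remediation (ban, captcha, and so on).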
+
+Bouncers work by querying CrowdSec's API to know when to block an IP. You can download bouncers directly from the [CrowdSec Hub][35].
+
+![CrowdSec Hub][36]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+For this example, use `cs-firewall-bouncer`. It directly bans any malevolent IP at the firewall level using iptables or nftables.
+
+Note: If you used your IP to simulate attacks, _unban your IP_ before going further:
+
+
+```
+sudo cscli decisions delete -i X.X.X.X
+```
+
+#### Install the bouncer
+
+First, download the bouncer from the Hub:
+
+
+```
+$ wget <cs-firewall-bouncer release tarball URL>
+$ tar xvzf cs-firewall-bouncer.tgz
+$ cd cs-firewall-bouncer-v0.0.5/
+```
+
+Install the bouncer with a simple install script:
+
+![Install Bouncer][37]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+The install script will check if you have iptables or nftables installed and prompt you to install if not.
+
+Bouncers communicate with CrowdSec via a REST API, so check that the bouncer is registered on the API.
+
+![Check Bouncer Registration][38]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+The last command (`sudo cscli bouncers list`) shows your newly installed bouncer.
+
+#### Test the bouncer
+
+**Warning:** Before going further, _ensure you have another IP available_ to access your machine and that you will not kick yourself out. Using your smartphone's internet connection will work.
+
+Now that you have a bouncer to protect you, try the test again.
+
+![Test the bouncer][39]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+Try to access the server at the end of the scan:
+
+
+```
+ATTACKER$ curl --connect-timeout 1 http://34.248.33.108/
+curl: (28) Connection timed out after 1001 milliseconds
+```
+
+See how it turns out from the defender's point of view.
+
+![CrowdSec defender's point of view][40]
+
+(Priya James, [CC BY-SA 4.0][5])
+
+For the technically curious, `cs-firewall-bouncer` uses either nftables or iptables.
Using nftables (used on Debian 10 by default) creates and maintains two tables named `crowdsec` and `crowdsec6` (for IPv4 and IPv6, respectively): + + +``` +$ sudo nft list ruleset +… +table ip crowdsec { +        set crowdsec_blocklist { +                type ipv4_addr +                elements = { 3.22.63.25, 3.214.184.223, +                             3.235.62.151, 3.236.112.98, +                             13.66.209.11, 17.58.98.156, … +                        } +        } + +        chain crowdsec_chain { +                type filter hook input priority 0; policy accept; +                ip saddr @crowdsec_blocklist drop +        } +} +table ip6 crowdsec6 { +        set crowdsec6_blocklist { +                type ipv6_addr +        } + +        chain crowdsec6_chain { +                type filter hook input priority 0; policy accept; +                ip6 saddr @crowdsec6_blocklist drop +        } +} +``` + +You can change the firewall backend used by the bouncer in `/etc/crowdsec/cs-firewall-bouncer/cs-firewall-bouncer.yaml` by changing the mode from nftables to iptables (ipset is required for iptables mode). + +### Get involved + +Please share your feedback about the latest CrowdSec release. 
If you are interested in testing the software or would like to get in touch with the team, check the following links:
+
+ * [Download CrowdSec v1.0.x][41]
+ * [CrowdSec website][42]
+ * [GitHub repository][43]
+ * [Gitter][44]
+
+
+
+* * *
+
+_This article has been adapted from [Most advanced CrowdSec IPS v.1.0.x is out: how-to guide][45], originally published on GBHackers on Security._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/1/crowdsec-rest-api
+
+作者:[Philippe Humeau][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/philippe-humeau
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum (Lock)
+[2]: https://www.fail2ban.org/
+[3]: https://github.com/crowdsecurity/crowdsec/releases/tag/v1.0.2
+[4]: https://opensource.com/sites/default/files/uploads/crowdsec_newarchitecture.png (CrowdSec architecture)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://github.com/crowdsecurity/crowdsec/releases/latest
+[7]: https://opensource.com/sites/default/files/uploads/crowdsec_installationwizard.gif (CrowdSec Installation Wizard)
+[8]: https://opensource.com/sites/default/files/uploads/crowdsec_servicestomonitor.png (CrowdSec identifies services on the machine)
+[9]: https://opensource.com/sites/default/files/uploads/crowdsec_nginxlogfiles.png (CrowdSec wizard identifies log files)
+[10]: https://doc.crowdsec.net/Crowdsec/v1/getting_started/concepts/#collections
+[11]: https://hub.crowdsec.net/author/crowdsecurity/collections/sshd
+[12]: https://hub.crowdsec.net/author/crowdsecurity/configurations/sshd-logs
+[13]: https://hub.crowdsec.net/author/crowdsecurity/configurations/ssh-bf
+[14]: https://opensource.com/sites/default/files/uploads/crowdsec_collections.png (CrowdSec suggests collections) +[15]: https://hub.crowdsec.net/author/crowdsecurity/configurations/whitelists +[16]: https://opensource.com/sites/default/files/uploads/crowdsec_detects.png (CrowdSec detects attacks) +[17]: https://opensource.com/sites/default/files/uploads/crowdsec_running.png (CrowdSec running) +[18]: https://opensource.com/sites/default/files/uploads/crowdsec_detectattack.png (Seeing attacks in the logs) +[19]: https://hub.crowdsec.net/author/crowdsecurity/configurations/http-path-traversal-probing +[20]: https://hub.crowdsec.net/author/crowdsecurity/configurations/http-sqli-probing +[21]: https://doc.crowdsec.net/Crowdsec/v1/user_guide/cscli/ +[22]: https://opensource.com/sites/default/files/uploads/cscli.png (cscli) +[23]: https://opensource.com/sites/default/files/uploads/cscli-alerts-inspect.png (Inspect a specific alert) +[24]: https://opensource.com/sites/default/files/uploads/cscli_parsersscenarios.png (Find installed parsers and scenarios) +[25]: https://docs.docker.com/engine/install/debian/ +[26]: https://opensource.com/sites/default/files/uploads/cscli_installdocker.png (Installing Docker) +[27]: https://opensource.com/sites/default/files/uploads/metabasesignin.png (Metabase sign in) +[28]: https://opensource.com/sites/default/files/uploads/crowdsec_maindashboard.png (CrowdSec dashboard) +[29]: https://opensource.com/sites/default/files/uploads/crowdsec_timeline.png (CrowdSec dashboard timeline) +[30]: https://opensource.com/sites/default/files/uploads/crowdsec_bysource_0.png (CrowdSec dashboard attack sources) +[31]: https://opensource.com/sites/default/files/uploads/cscli-metrics.png (cscli metrics) +[32]: https://doc.crowdsec.net/Crowdsec/v1/observability/prometheus/ +[33]: https://opensource.com/sites/default/files/uploads/crowdsec_prometheus.png (CrowdSec's Prometheus metrics in Grafana) +[34]: 
https://opensource.com/sites/default/files/uploads/crowdsec_prometheus2.png (CrowdSec's Prometheus metrics in Grafana) +[35]: https://hub.crowdsec.net/browse/#bouncers +[36]: https://opensource.com/sites/default/files/uploads/crowdsec_hub.png (CrowdSec Hub) +[37]: https://opensource.com/sites/default/files/uploads/installbouncer.png (Install Bouncer) +[38]: https://opensource.com/sites/default/files/uploads/bouncerregistration.png (Check Bouncer Registration) +[39]: https://opensource.com/sites/default/files/uploads/crowdsectestbouncer.png (Test the bouncer) +[40]: https://opensource.com/sites/default/files/uploads/crowdsec_defender.png (CrowdSec defender's point of view) +[41]: https://github.com/crowdsecurity/crowdsec/releases/tag/v1.0.0 +[42]: https://crowdsec.net/ +[43]: https://github.com/crowdsecurity/crowdsec +[44]: https://gitter.im/crowdsec-project/community +[45]: https://gbhackers.com/crowdsec-ips-v-1-0-x/ diff --git a/sources/tech/20210120 Highlighted Text Not Visible in gedit in Dark Mode- Here-s What You Can Do.md b/sources/tech/20210120 Highlighted Text Not Visible in gedit in Dark Mode- Here-s What You Can Do.md new file mode 100644 index 0000000000..a037e8b060 --- /dev/null +++ b/sources/tech/20210120 Highlighted Text Not Visible in gedit in Dark Mode- Here-s What You Can Do.md @@ -0,0 +1,84 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Highlighted Text Not Visible in gedit in Dark Mode? Here’s What You Can Do) +[#]: via: (https://itsfoss.com/gedit-dark-mode-problem/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Highlighted Text Not Visible in gedit in Dark Mode? Here’s What You Can Do +====== + +I love [using dark mode in Ubuntu][1]. It’s soothing on the eyes and makes the system look aesthetically more pleasing, in my opinion. 
+
+One minor annoyance I noticed is with the [gedit][2] text editor; if you use it with dark mode on your system, you might have encountered it too.
+
+By default, gedit highlights the line where your cursor is. That’s a useful feature but it becomes a pain if you are using dark mode in your Linux system. Why? Because the highlighted text is not readable anymore. Have a look at it yourself:
+
+![Text on the highlighted line is hardly visible][3]
+
+If you select the text, it becomes readable but it’s not really a pleasant reading or editing experience.
+
+![Selecting the text makes it better but that’s not a convenient thing to do for all lines][4]
+
+The good thing is that you don’t have to live with it. I’ll show a couple of steps you can take to enjoy a dark mode system and gedit together.
+
+### Making gedit reader-friendly in dark mode
+
+You basically have two options:
+
+ 1. Disable highlighting of the current line, but then you’ll have to figure out which line you are on.
+ 2. Change the default color settings, but then the colors of the editor will be slightly different, and it won’t switch to light mode automatically if you change the system theme.
+
+
+
+It’s a workaround and a compromise that you’ll have to make until the gedit or GNOME developers fix the issue.
+
+#### Option 1: Disable highlighting the current line
+
+When you have gedit opened, click on the hamburger menu and select **Preferences**.
+
+![Go to Preferences][5]
+
+In the View tab, you should see the “Highlight current line” option under the Highlighting section. Uncheck this. The effects are visible immediately.
+
+![Disable highlighting current line][6]
+
+Highlighting the current line is a useful feature, and if you want to continue using it, opt for the second option.
+
+#### Option 2: Change the editor color theme
+
+In the Preferences window, go to the Font & Colors tab and change the color scheme to Oblivion, Solarized Dark, or Cobalt.
+
+![Change the color scheme][7]
+
+As I mentioned earlier, the drawback is that when you switch the system theme to a light theme, the editor theme isn’t switched automatically to the light theme.
+
+### A bug that should be fixed by devs
+
+There are [several text editors available for Linux][8] but for quick reading or editing a text file, I prefer using gedit. It’s a minor annoyance but an annoyance nonetheless. The developers should fix it in a future version of this awesome text editor so that we don’t have to resort to these workarounds.
+
+How about you? Do you use dark mode on your system or light mode? Have you noticed this trouble with gedit? Did you take any steps to fix it? Feel free to share your experience.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/gedit-dark-mode-problem/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/dark-mode-ubuntu/
+[2]: https://wiki.gnome.org/Apps/Gedit
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/gedit-dark-mode-problem.png?resize=779%2C367&ssl=1
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/gedit-dark-mode-issue.png?resize=779%2C367&ssl=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/gedit-preference.jpg?resize=777%2C527&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/disable-highlight-line-gedit.jpg?resize=781%2C530&ssl=1
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/change-color-scheme-gedit.jpg?resize=785%2C539&ssl=1
+[8]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
diff --git a/sources/tech/20210121 Convert your Windows install into a VM on Linux.md b/sources/tech/20210121
Convert your Windows install into a VM on Linux.md new file mode 100644 index 0000000000..4c6c78a369 --- /dev/null +++ b/sources/tech/20210121 Convert your Windows install into a VM on Linux.md @@ -0,0 +1,250 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Convert your Windows install into a VM on Linux) +[#]: via: (https://opensource.com/article/21/1/virtualbox-windows-linux) +[#]: author: (David Both https://opensource.com/users/dboth) + +Convert your Windows install into a VM on Linux +====== +Here's how I configured a VirtualBox VM to use a physical Windows drive +on my Linux workstation. +![Puzzle pieces coming together to form a computer screen][1] + +I use VirtualBox frequently to create virtual machines for testing new versions of Fedora, new application programs, and lots of administrative tools like Ansible. I have even used VirtualBox to test the creation of a Windows guest host. + +Never have I ever used Windows as my primary operating system on any of my personal computers or even in a VM to perform some obscure task that cannot be done with Linux. I do, however, volunteer for an organization that uses one financial program that requires Windows. This program runs on the office manager's computer on Windows 10 Pro, which came preinstalled. + +This financial application is not special, and [a better Linux program][2] could easily replace it, but I've found that many accountants and treasurers are extremely reluctant to make changes, so I've not yet been able to convince those in our organization to migrate. + +This set of circumstances, along with a recent security scare, made it highly desirable to convert the host running Windows to Fedora and to run Windows and the accounting program in a VM on that host. + +It is important to understand that I have an extreme dislike for Windows for multiple reasons. 
The primary ones that apply to this case are that I would hate to pay for another Windows license – Windows 10 Pro costs about $200 – to install it on a new VM. Also, Windows 10 requires enough information when setting it up on a new system or after an installation to enable crackers to steal one's identity, should the Microsoft database be breached. No one should need to provide their name, phone number, and birth date in order to register software. + +### Getting started + +The physical computer already had a 240GB NVMe m.2 storage device installed in the only available m.2 slot on the motherboard. I decided to install a new SATA SSD in the host and use the existing SSD with Windows on it as the storage device for the Windows VM. Kingston has an excellent overview of various SSD devices, form factors, and interfaces on its web site. + +That approach meant that I wouldn't need to do a completely new installation of Windows or any of the existing application software. It also meant that the office manager who works at this computer would use Linux for all normal activities such as email, web access, document and spreadsheet creation with LibreOffice. This approach increases the host's security profile. The only time that the Windows VM would be used is to run the accounting program. + +### Back it up first + +Before I did anything else, I created a backup ISO image of the entire NVMe storage device. I made a partition on a 500GB external USB storage drive, created an ext4 filesystem on it, and then mounted that partition on **/mnt**. I used the **dd** command to create the image. + +I installed the new 500GB SATA SSD in the host and installed the Fedora 32 Xfce spin on it from a Live USB. At the initial reboot after installation, both the Linux and Windows drives were available on the GRUB2 boot menu. At this point, the host could be dual-booted between Linux and Windows. 
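The backup step described above comes down to just a few commands, run as root. This is a sketch with assumed device names (`/dev/sdc1` for the external USB partition, `/dev/nvme0n1` for the NVMe device); confirm yours with `lsblk` first, because `mkfs` and `dd` will happily destroy the wrong disk.

```shell
# One-time: create an ext4 filesystem on the external USB partition
mkfs -t ext4 /dev/sdc1
mount /dev/sdc1 /mnt

# Image the entire NVMe device to a file on the external drive
dd if=/dev/nvme0n1 of=/mnt/windows-backup.img bs=4M status=progress conv=fsync
```

`status=progress` shows throughput as the copy runs, and `conv=fsync` forces a final flush so the image file is fully written before you unmount the drive.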
+ +### Looking for help in all the internet places + +Now I needed some information on creating a VM that uses a physical hard drive or SSD as its storage device. I quickly discovered a lot of information about how to do this in the VirtualBox documentation and the internet in general. Although the VirtualBox documentation helped me to get started, it is not complete, leaving out some critical information. Most of the other information I found on the internet is also quite incomplete. + +With some critical help from one of our Opensource.com Correspondents, Joshua Holm, I was able to break through the cruft and make this work in a repeatable procedure. + +### Making it work + +This procedure is actually fairly simple, although one arcane hack is required to make it work. The Windows and Linux operating systems were already in place by the time I was ready for this step. + +First, I installed the most recent version of VirtualBox on the Linux host. VirtualBox can be installed from many distributions' software repositories, directly from the Oracle VirtualBox repository, or by downloading the desired package file from the VirtualBox web site and installing locally. I chose to download the AMD64 version, which is actually an installer and not a package. I use this version to circumvent a problem that is not related to this particular project. + +The installation procedure always creates a **vboxusers** group in **/etc/group**. I added the users intended to run this VM to the **vboxusers** and **disk** groups in **/etc/group**. It is important to add the same users to the **disk** group because VirtualBox runs as the user who launched it and also requires direct access to the **/dev/sdx** device special file to work in this scenario. Adding users to the **disk** group provides that level of access, which they would not otherwise have. + +I then created a directory to store the VMs and gave it ownership of **root.vboxusers** and **775** permissions. 
I used **/vms** for the directory, but it could be anything you want. By default, VirtualBox creates new virtual machines in a subdirectory of the user creating the VM. That would make it impossible to share access to the VM among multiple users without creating a massive security vulnerability. Placing the VM directory in an accessible location allows sharing the VMs. + +I started the VirtualBox Manager as a non-root user. I then used the VirtualBox **Preferences ==> General** menu to set the Default Machine Folder to the directory **/vms**. + +I created the VM without a virtual disk. The **Type** should be **Windows**, and the **Version** should be set to **Windows 10 64-bit**. Set a reasonable amount of RAM for the VM, but this can be changed later so long as the VM is off. On the **Hard disk** page of the installation, I chose the "Do not add a virtual hard disk" and clicked on **Create**. The new VM appeared in the VirtualBox Manager window. This procedure also created the **/vms/Test1** directory. + +I did this using the **Advanced** menu and performed all of the configurations on a single page, as seen in Figure 1. The **Guided Mode** obtains the same information but requires more clicks to go through a window for each configuration item. It does provide a little more in the way of help text, but I did not need that. + +![VirtualBox dialog box to create a new virtual machine but do not add a hard disk][3] + +opensource.com + +Figure 1: Create a new virtual machine but do not add a hard disk. + +Then I needed to know which device was assigned by Linux to the raw Windows drive. As root in a terminal session, use the **lshw** command to discover the device assignment for the Windows disk. In this case, the device that represents the entire storage device is **/dev/sdb**. 
+ + +``` +# lshw -short -class disk,volume +H/W path           Device      Class          Description +========================================================= +/0/100/17/0        /dev/sda    disk           500GB CT500MX500SSD1 +/0/100/17/0/1                  volume         2047MiB Windows FAT volume +/0/100/17/0/2      /dev/sda2   volume         4GiB EXT4 volume +/0/100/17/0/3      /dev/sda3   volume         459GiB LVM Physical Volume +/0/100/17/1        /dev/cdrom  disk           DVD+-RW DU-8A5LH +/0/100/17/0.0.0    /dev/sdb    disk           256GB TOSHIBA KSG60ZMV +/0/100/17/0.0.0/1  /dev/sdb1   volume         649MiB Windows FAT volume +/0/100/17/0.0.0/2  /dev/sdb2   volume         127MiB reserved partition +/0/100/17/0.0.0/3  /dev/sdb3   volume         236GiB Windows NTFS volume +/0/100/17/0.0.0/4  /dev/sdb4   volume         989MiB Windows NTFS volume +[root@office1 etc]# +``` + +Instead of a virtual storage device located in the **/vms/Test1** directory, VirtualBox needs to have a way to identify the physical hard drive from which it is to boot. This identification is accomplished by creating a ***.vmdk** file, which points to the raw physical disk that will be used as the storage device for the VM. As a non-root user, I created a **vmdk** file that points to the entire Windows device, **/dev/sdb**. + + +``` +$ VBoxManage internalcommands createrawvmdk -filename /vms/Test1/Test1.vmdk -rawdisk /dev/sdb +RAW host disk access VMDK file /vms/Test1/Test1.vmdk created successfully. +``` + +I then used the **VirtualBox Manager File ==> Virtual Media Manager** dialog to add the **vmdk** disk to the available hard disks. I clicked on **Add**, and the default **/vms** location was displayed in the file management dialog. I selected the **Test1** directory and then the **Test1.vmdk** file. I then clicked **Open**, and the **Test1.vmdk** file was displayed in the list of available hard drives. I selected it and clicked on **Close**. 
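I did these steps in the GUI, but they also have a command-line equivalent. The sketch below is an assumption-laden alternative, not the exact procedure used here: it assumes the VM is named `Test1` and is powered off, that the vmdk file was created as shown above, and that the controller name `SATA` is acceptable (the name is arbitrary).

```shell
# Add a SATA controller to the VM, then attach the raw-disk vmdk to it
VBoxManage storagectl Test1 --name "SATA" --add sata
VBoxManage storageattach Test1 --storagectl "SATA" --port 0 --device 0 \
  --type hdd --medium /vms/Test1/Test1.vmdk

# Enable EFI firmware so the VM can find a bootable medium on the raw disk
VBoxManage modifyvm Test1 --firmware efi
```

Run these as the same non-root user who will start the VM, so that file ownership in the VM directory stays consistent.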
+
+The next step was to add this **vmdk** disk to the storage devices for our VM. In the settings menu for the **Test1 VM**, I selected **Storage** and clicked on the icon to add a hard disk. This opened a dialog that showed the **Test1.vmdk** virtual disk file in a list entitled **Not attached**. I selected this file and clicked on the **Choose** button. This device is now displayed in the list of storage devices connected to the **Test1 VM**. The only other storage device on this VM is an empty CD/DVD-ROM drive.
+
+I clicked on **OK** to complete the addition of this device to the VM.
+
+There was one more item to configure before the new VM would work. Using the **VirtualBox Manager Settings** dialog for the **Test1 VM**, I navigated to the **System ==> Motherboard** page and placed a check in the box for **Enable EFI**. If you do not do this, VirtualBox will generate an error stating that it cannot find a bootable medium when you attempt to boot this VM.
+
+The virtual machine now boots from the raw Windows 10 hard drive. However, I could not log in because I did not have a regular account on this system, and I also did not have access to the password for the Windows administrator account.
+
+### Unlocking the drive
+
+No, this section is not about breaking the encryption of the hard drive. Rather, it is about bypassing the password for one of the many Windows administrator accounts, which no one at the organization had.
+
+Even though I could boot the Windows VM, I could not log in because I had no account on that host and asking people for their passwords is a horrible security breach. Nevertheless, I needed to log in to the VM to install the **VirtualBox Guest Additions**, which would provide seamless capture and release of the mouse pointer, allow me to resize the VM to be larger than 1024x768, and perform normal maintenance in the future.
+
+This is a perfect use case for the Linux capability to change user passwords.
Even though I am accessing the previous administrator's account to start, in this case, he will no longer support this system, and I won't be able to discern his password or the patterns he uses to generate them. I will simply clear the password for the previous sysadmin.
+
+There is a very nice open source software tool specifically for this task. On the Linux host, I installed **chntpw**, which probably stands for something like, "Change NT PassWord."
+
+
+```
+# dnf -y install chntpw
+```
+
+I powered off the VM and then mounted the **/dev/sdb3** partition on **/mnt**. I determined that **/dev/sdb3** is the correct partition because it is the first large NTFS partition I saw in the output from the **lshw** command I performed previously. Be sure not to mount the partition while the VM is running; that could cause significant corruption of the data on the VM storage device. Note that the correct partition might be different on other hosts.
+
+Navigate to the **/mnt/Windows/System32/config** directory. The **chntpw** utility program does not work if that is not the present working directory (PWD). Start the program.
+
+
+```
+# chntpw -i SAM
+chntpw version 1.00 140201, (c) Petter N Hagen
+Hive <SAM> name (from header): <\SystemRoot\System32\Config\SAM>
+ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c <lh>
+File size 131072 [20000] bytes, containing 11 pages (+ 1 headerpage)
+Used for data: 367/44720 blocks/bytes, unused: 14/24560 blocks/bytes.
+
+<>========<> chntpw Main Interactive Menu <>========<>
+
+Loaded hives: <SAM>
+
+  1 - Edit user data and passwords
+  2 - List groups
+      - - -
+  9 - Registry editor, now with full write support!
+  q - Quit (you will be asked if there is something to save)
+
+What to do? [1] ->
+```
+
+The **chntpw** command uses a TUI (Text User Interface), which provides a set of menu options. When one of the primary menu items is chosen, a secondary menu is usually displayed.
Following the clear menu names, I first chose menu item **1**. + + +``` +What to do? [1] -> 1 + +===== chntpw Edit User Info & Passwords ==== + +| RID -|---------- Username ------------| Admin? |- Lock? --| +| 01f4 | Administrator                  | ADMIN  | dis/lock | +| 03ec | john                           | ADMIN  | dis/lock | +| 01f7 | DefaultAccount                 |        | dis/lock | +| 01f5 | Guest                          |        | dis/lock | +| 01f8 | WDAGUtilityAccount             |        | dis/lock | + +Please enter user number (RID) or 0 to exit: [3e9] +``` + +Next, I selected our admin account, **john**, by typing the RID at the prompt. This displays information about the user and offers additional menu items to manage the account. + + +``` +Please enter user number (RID) or 0 to exit: [3e9] 03eb +================= USER EDIT ==================== + +RID     : 1003 [03eb] +Username: john +fullname: +comment : +homedir : + +00000221 = Users (which has 4 members) +00000220 = Administrators (which has 5 members) + +Account bits: 0x0214 = +[ ] Disabled        | [ ] Homedir req.    | [ ] Passwd not req. | +[ ] Temp. duplicate | [X] Normal account  | [ ] NMS account     | +[ ] Domain trust ac | [ ] Wks trust act.  | [ ] Srv trust act   | +[X] Pwd don't expir | [ ] Auto lockout    | [ ] (unknown 0x08)  | +[ ] (unknown 0x10)  | [ ] (unknown 0x20)  | [ ] (unknown 0x40)  | + +Failed login count: 0, while max tries is: 0 +Total  login count: 47 + +\- - - - User Edit Menu: + 1 - Clear (blank) user password + 2 - Unlock and enable user account [probably locked now] + 3 - Promote user (make user an administrator) + 4 - Add user to a group + 5 - Remove user from a group + q - Quit editing user, back to user select +Select: [q] > 2 +``` + +At this point, I chose menu item **2**, "Unlock and enable user account," which deletes the password and enables me to log in without a password. By the way – this is an automatic login. I then exited the program. 
Be sure to unmount **/mnt** before proceeding. + +I know, I know, but why not! I have already bypassed security on this drive and host, so it matters not one iota. At this point, I did log in to the old administrative account and created a new account for myself with a secure password. I then logged in as myself and deleted the old admin account so that no one else could use it. + +There are also instructions on the internet for using the Windows Administrator account (01f4 in the list above). I could have deleted or changed the password on that account had there not been an organizational admin account in place. Note also that this procedure can be performed from a live USB running on the target host. + +### Reactivating Windows + +So I now had the Windows SSD running as a VM on my Fedora host. However, in a frustrating turn of events, after running for a few hours, Windows displayed a warning message indicating that I needed to "Activate Windows." + +After following many more dead-end web pages, I finally gave up on trying to reactivate using an existing code because it appeared to have been somehow destroyed. Finally, when attempting to follow one of the on-line virtual support chat sessions, the virtual "Get help" application indicated that my instance of Windows 10 Pro was already activated. How can this be the case? It kept wanting me to activate it, yet when I tried, it said it was already activated. + +### Or not + +By the time I had spent several hours over three days doing research and experimentation, I decided to go back to booting the original SSD into Windows and come back to this at a later date. But then Windows – even when booted from the original storage device – demanded to be reactivated. + +Searching the Microsoft support site was unhelpful. 
After having to fuss with the same automated support as before, I called the phone number provided only to be told by an automated response system that all support for Windows 10 Pro was only provided by internet. By now, I was nearly a day late in getting the computer running and installed back at the office. + +### Back to the future + +I finally sucked it up, purchased a copy of Windows 10 Home – for about $120 – and created a VM with a virtual storage device on which to install it. + +I copied a large number of document and spreadsheet files to the office manager's home directory. I reinstalled the one Windows program we need and verified with the office manager that it worked and the data was all there. + +### Final thoughts + +So my objective was met, literally a day late and about $120 short, but using a more standard approach. I am still making a few adjustments to permissions and restoring the Thunderbird address book; I have some CSV backups to work from, but the ***.mab** files contain very little information on the Windows drive. I even used the Linux **find** command to locate all the ones on the original storage device. + +I went down a number of rabbit holes and had to extract myself and start over each time. I ran into problems that were not directly related to this project, but that affected my work on it. Those problems included interesting things like mounting the Windows partition on **/mnt** on my Linux box and getting a message that the partition had been improperly closed by Windows (yes – on my Linux host) and that it had fixed the inconsistency. Not even Windows could do that after multiple reboots through its so-called "recovery" mode. + +Perhaps you noticed some clues in the output data from the **chntpw** utility. I cut out some of the other user accounts that were displayed on my host for security reasons, but I saw from that information that all of the users were admins. Needless to say, I changed that. 
I am still surprised by the poor administrative practices I encounter, but I guess I should not be. + +In the end, I was forced to purchase a license, but one that was at least a bit less expensive than the original. One thing I know is that the Linux piece of this worked perfectly once I had found all the necessary information. The issue was dealing with Windows activation. Some of you may have been successful at getting Windows reactivated. If so, I would still like to know how you did it, so please add your experience to the comments. + +This is yet another reason I dislike Windows and only ever use Linux on my own systems. It is also one of the reasons I am converting all of the organization's computers to Linux. It just takes time and convincing. We only have this one accounting program left, and I need to work with the treasurer to find one that works for her. I understand this – I like my own tools, and I need them to work in a way that is best for me. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/virtualbox-windows-linux + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen) +[2]: https://opensource.com/article/20/7/godbledger +[3]: https://opensource.com/sites/default/files/virtualbox.png diff --git a/sources/tech/20210121 How to Uninstall Applications from Ubuntu Linux.md b/sources/tech/20210121 How to Uninstall Applications from Ubuntu Linux.md new file mode 100644 index 0000000000..4a32dec9b7 --- /dev/null +++ b/sources/tech/20210121 How to Uninstall 
Applications from Ubuntu Linux.md @@ -0,0 +1,167 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Uninstall Applications from Ubuntu Linux) +[#]: via: (https://itsfoss.com/uninstall-programs-ubuntu/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +How to Uninstall Applications from Ubuntu Linux +====== + +Don’t use a certain application anymore? Remove it. + +In fact, removing programs is one of the [easiest ways to free up disk space on Ubuntu][1] and keep your system clean. + +In this beginner’s tutorial, I’ll show you various ways of uninstalling software from Ubuntu. + +Did I say various ways? Yes, because there are [various ways of installing applications in Ubuntu][2] and hence various ways of removing them. You’ll learn to: + + * Remove applications from Ubuntu Software Center (for desktop users) + * Remove applications using apt remove command + * Remove snap applications in command line (intermediate to advanced users) + + + +Let’s see these steps one by one. + +### Method 1: Remove applications using Ubuntu Software Center + +Start the Software Center application. You should find it in the dock on the left side or search for it in the menu. + +![][3] + +You can see the installed applications in the Installed tab. + +![List installed applications][4] + +If you don’t see a program here, try to use the search feature. + +![Search for installed applications][5] + +When you open an installed application, you should see the option to remove it. Click on it. + +![Removing installed applications][6] + +It will ask for your account password. Enter it and the applications will be removed in seconds. + +This method works pretty well except in the case when Software Center is misbehaving (it does that a lot) or if the program is a software library or some other command line utility. You can always resort to the terminal in such cases. 
+
+### Method 2: Remove programs from Ubuntu using command line
+
+You know that you can use `apt-get install` or `apt install` for installing applications. For uninstalling, there is no apt-get uninstall command; you use `apt-get remove` or `apt remove` instead.
+
+All you need to do is use the command in the following fashion:
+
+```
+sudo apt remove program_name
+```
+
+You’ll be asked to enter your account password. When you enter it, nothing is visible on the screen. That’s normal. Just type it blindly and press enter.
+
+The program won’t be removed immediately. You need to confirm it. When it asks for your confirmation, press the enter key or Y key:
+
+![][7]
+
+Keep in mind that you’ll have to use the exact package name in the apt remove command, otherwise it will throw an ‘[unable to locate package error][8]’.
+
+Don’t worry if you don’t remember the exact program name. You can utilize the super useful tab completion. It’s one of the [most useful Linux command line tips][9] that you must know.
+
+What you can do is type the first few letters of the program you want to uninstall. And then hit the tab key. It will show all the installed packages that match those letters at the beginning of their names.
+
+When you see the desired package, you can type its complete name and remove it.
+
+![][10]
+
+What if you do not know the exact package name or even the starting letters? Well, you can [list all the installed packages in Ubuntu][11] and grep the list with whatever your memory serves.
+
+For example, the command below will show all the installed packages that have the string ‘my’ anywhere in their names, not just at the beginning.
+
+```
+apt list --installed | grep -i my
+```
+
+![][12]
+
+That’s cool, isn’t it? Just be careful with the package name when using the remove command in Ubuntu.
+
+#### Tip: Using apt purge for removing packages (advanced users)
+
+When you remove a package in Ubuntu, the package’s data is removed, but it may leave small, modified user configuration files. This is intentional because if you install the same program again, it would use those configuration files.
+
+If you want to remove it completely, you can use the apt purge command, either instead of the apt remove command or after running it.
+
+```
+sudo apt purge program_name
+```
+
+Keep in mind that the purge command won’t remove any data or configuration file stored in the home directory of a user.
+
+### Method 3: Uninstall Snap applications in Ubuntu
+
+The previous method works with the DEB packages that you installed using the apt command, the software center or directly from the deb file.
+
+Ubuntu also has a new packaging system called [Snap][13]. Most of the software you find in the Ubuntu Software Center is in this Snap package format.
+
+You can remove these applications from the Ubuntu Software Center easily, but if you want to use the command line, here’s what you should do.
+
+List all the installed snap applications to get the package name.
+
+```
+snap list
+```
+
+![][14]
+
+Now use the package name to remove the application from Ubuntu. You won’t be asked for confirmation before removal.
+
+```
+sudo snap remove package_name
+```
+
+### Bonus Tip: Clean up your system with one magical command
+
+Alright! You learned to remove the applications. Now let me tell you about a simple command that cleans up leftover package traces like dependencies that are no longer used and old Linux kernel headers that won’t be used anymore.
+
+In the terminal, just run this command:
+
+```
+sudo apt autoremove
+```
+
+This is a safe command, and it will easily free up a few hundred MBs of disk space.
+
+### Conclusion
+
+You learned three ways of removing applications from Ubuntu Linux.
I covered both GUI and command line methods so that you are aware of all the options. + +I hope you find this simple tutorial helpful as an Ubuntu beginner. Questions and suggestions are always welcome. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/uninstall-programs-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/free-up-space-ubuntu-linux/ +[2]: https://itsfoss.com/remove-install-software-ubuntu/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/ubuntu_software_applications_menu.jpg?resize=800%2C390&ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/installed-apps-ubuntu.png?resize=800%2C455&ssl=1 +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/search-installed-apps-ubuntu.png?resize=800%2C455&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/remove-applications-ubuntu.png?resize=800%2C487&ssl=1 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/apt-remove-program-ubuntu.png?resize=768%2C424&ssl=1 +[8]: https://itsfoss.com/unable-to-locate-package-error-ubuntu/ +[9]: https://itsfoss.com/linux-command-tricks/ +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/remove-package-ubuntu-linux.png?resize=768%2C424&ssl=1 +[11]: https://itsfoss.com/list-installed-packages-ubuntu/ +[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/search-list-installed-apps-ubuntu.png?resize=768%2C424&ssl=1 +[13]: https://itsfoss.com/install-snap-linux/ +[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/list-snap-remove.png?resize=800%2C407&ssl=1 diff --git a/sources/tech/20210122 Configure a Linux workspace remotely from the command 
line.md b/sources/tech/20210122 Configure a Linux workspace remotely from the command line.md new file mode 100644 index 0000000000..f7118f5ba8 --- /dev/null +++ b/sources/tech/20210122 Configure a Linux workspace remotely from the command line.md @@ -0,0 +1,137 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Configure a Linux workspace remotely from the command line) +[#]: via: (https://opensource.com/article/21/1/remote-configuration-xfce4) +[#]: author: (David Both https://opensource.com/users/dboth) + +Configure a Linux workspace remotely from the command line +====== +Nearly everything can be done from the Linux command line, including +remote configuration of Xfce4. +![Coding on a computer][1] + +One of the things I appreciate about Linux versus proprietary operating systems is that almost everything can be managed and configured from the command line. That means that nearly everything can be configured locally or even remotely via an SSH login connection. Sometimes it takes a bit of time spent on Internet searches, but if you can think of a task, it can probably be done from the command line. + +### The problem + +Sometimes it is necessary to make remote modifications to a desktop using the command line. In this particular case, I needed to reduce the number of workspaces on the [Xfce][2] panel from four to three at the request of a remote user. This configuration only required about 20 minutes of searching on the Internet. + +The default workspace count and many other settings for **xfwm4** can be found and changed in the **/usr/share/xfwm4/defaults** file. So setting _workspace_count=4_ to _workspace_count=2_ changes the default for all users on the host. Also, the **xfconf-query** command can be run by non-root users to query and set various attributes for the **xfwm4** window manager. It should be used by the user account that requires the change and not by root. 
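The system-wide change to the defaults file can be scripted with `sed`. To keep the sketch safe to try anywhere, it works on a scratch copy in `/tmp`; on a real host you would run the `sed` line as root against `/usr/share/xfwm4/defaults` itself.

```shell
# Work on a scratch copy; fall back to a minimal sample if the file is absent
cp /usr/share/xfwm4/defaults /tmp/defaults 2>/dev/null \
  || printf 'workspace_count=4\n' > /tmp/defaults

# Change the system-wide default workspace count to 2
sed -i 's/^workspace_count=.*/workspace_count=2/' /tmp/defaults

grep '^workspace_count' /tmp/defaults
```

Remember that this only sets the default; a per-user value set through **xfconf-query** overrides it for that user.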
+
+In the sample below, I have first verified the current setting of _four_ workspaces, then set the number to _two_, and finally confirmed the new setting.
+
+
+```
+[user@test1 ~]$ xfconf-query -c xfwm4 -p /general/workspace_count
+4
+[user@test1 ~]$ xfconf-query -c xfwm4 -p /general/workspace_count -s 2
+[user@test1 ~]$ xfconf-query -c xfwm4 -p /general/workspace_count
+2
+[user@test1 ~]$
+```
+
+This change takes place immediately and is visible to the user without a reboot or even logging out and back in. I had a bit of fun with this on my workstation by watching the workspace switcher change as I entered commands to set different numbers of workspaces. I get my amusements where I can these days. ;-)
+
+### More exploration
+
+Now that I had fixed the problem, I decided to explore the **xfconf-query** command in a bit more detail. Unfortunately, there are no man or info pages for this tool, nor is there any documentation in **/usr/share**. The usual fallback of using the **-h** option resulted in little helpful information.
+
+
+```
+$ xfconf-query -h
+Usage:
+  xfconf-query [OPTION…] - Xfconf commandline utility
+Help Options:
+  -h, --help            Show help options
+Application Options:
+  -V, --version         Version information
+  -c, --channel         The channel to query/modify
+  -p, --property        The property to query/modify
+  -s, --set             The new value to set for the property
+  -l, --list            List properties (or channels if -c is not specified)
+  -v, --verbose         Verbose output
+  -n, --create          Create a new property if it does not already exist
+  -t, --type            Specify the property value type
+  -r, --reset           Reset property
+  -R, --recursive       Recursive (use with -r)
+  -a, --force-array     Force array even if only one element
+  -T, --toggle          Invert an existing boolean property
+  -m, --monitor         Monitor a channel for property changes
+```
+
+This is not a lot of help, but we can figure out a good bit from it anyway. First, _channels_ are groupings of properties that can be modified. I made the change above in the **xfwm4** channel, and the property I set was **/general/workspace_count**. Let’s look at the complete list of channels.
+
+
+```
+$ xfconf-query -l
+Channels:
+  xfwm4
+  xfce4-keyboard-shortcuts
+  xfce4-notifyd
+  xsettings
+  xfdashboard
+  thunar
+  parole
+  xfce4-panel
+  xfce4-appfinder
+  xfce4-settings-editor
+  xfce4-power-manager
+  xfce4-session
+  keyboards
+  displays
+  keyboard-layout
+  ristretto
+  xfcethemer
+  xfce4-desktop
+  pointers
+  xfce4-settings-manager
+  xfce4-mixer
+```
+
+The properties for a given channel can also be viewed using the following syntax. I have used the **less** pager because the result is a long stream of data. I have pruned the listing below but left enough to see the type of entries you can expect to find.
+
+
+```
+$ xfconf-query -c xfwm4 -l | less
+/general/activate_action
+/general/borderless_maximize
+/general/box_move
+/general/box_resize
+/general/button_layout
+/general/button_offset
+<SNIP>
+/general/workspace_count
+/general/workspace_names
+/general/wrap_cycle
+/general/wrap_layout
+/general/wrap_resistance
+/general/wrap_windows
+/general/wrap_workspaces
+/general/zoom_desktop
+(END)
+```
+
+You can explore all the channels in this manner. I discovered that the channels generally correspond to the various settings in the **Settings Manager**. The properties are the ones that you would set in those dialogs. Note that not all the icons you will find in the **Settings Manager** dialog window are part of the **Xfce** desktop, so there are no corresponding channels for them. The **Screensaver** is one example because it is a generic GNU screensaver and not unique to **Xfce**. The **Settings Manager** is just a good central place for **Xfce** to locate many of these configuration tools.
+
+### Documentation
+
+As mentioned previously, there do not appear to be any man or info pages for the **xfconf-query** command, and I found a lot of incorrect and poorly documented information on the Internet. The best documentation I found for **Xfce4** is on the [Xfce website][2], and some specific information on **xfconf-query** can be found here.
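Until proper documentation appears, you can build your own quick reference by dumping every channel's properties and current values to a file. This is only a sketch of the idea: it assumes a running Xfce session, it assumes the **-l -v** combination lists properties together with their values, and it skips quietly on systems where **xfconf-query** is not installed.

```shell
# Dump all xfconf channels with their properties and values into one file,
# as a stand-in for the missing man page. Does nothing on non-Xfce machines.
if command -v xfconf-query >/dev/null 2>&1; then
    out=$HOME/xfconf-reference.txt
    for channel in $(xfconf-query -l | tail -n +2); do
        printf '== %s ==\n' "$channel"
        xfconf-query -c "$channel" -l -v
    done > "$out"
    echo "Wrote $out"
else
    echo "xfconf-query not available on this machine; nothing to dump"
fi
```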
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/1/remote-configuration-xfce4
+
+作者:[David Both][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dboth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
+[2]: https://www.xfce.org/
diff --git a/sources/tech/20210122 Convert your filesystem to Btrfs.md b/sources/tech/20210122 Convert your filesystem to Btrfs.md
new file mode 100644
index 0000000000..99ae229466
--- /dev/null
+++ b/sources/tech/20210122 Convert your filesystem to Btrfs.md
@@ -0,0 +1,339 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Convert your filesystem to Btrfs)
+[#]: via: (https://fedoramagazine.org/convert-your-filesystem-to-btrfs/)
+[#]: author: (Gergely Gombos https://fedoramagazine.org/author/gombosg/)
+
+Convert your filesystem to Btrfs
+======
+
+![][1]
+
+### Introduction
+
+The purpose of this article is to give you an overview of why, and how, to migrate your current partitions to a Btrfs filesystem. If you’re curious about doing this yourself, follow along for a step-by-step walkthrough of how it’s accomplished.
+
+Starting with Fedora 33, the default filesystem is now Btrfs for new installations. I’m pretty sure that most users have heard about its advantages by now: copy-on-write, built-in checksums, flexible compression options, easy snapshotting and rollback methods. It’s really a modern filesystem that brings new features to desktop storage.
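Copy-on-write in particular is easy to see in action: a reflink copy shares the source file's data blocks instead of duplicating them, and completes instantly regardless of file size. Here is a quick, non-destructive sketch; on filesystems without copy-on-write support, such as ext4, it simply reports that reflinks are unsupported.

```shell
# Try a reflink (copy-on-write) copy in a scratch directory. On Btrfs this
# succeeds without duplicating any data; on ext4, cp refuses and we report it.
dir=$(mktemp -d)
head -c 1048576 /dev/urandom > "$dir/original"
if cp --reflink=always "$dir/original" "$dir/clone" 2>/dev/null; then
    cow=yes
    echo "reflink copy succeeded: this filesystem does copy-on-write"
else
    cow=no
    echo "reflink copy not supported here: a plain copy would duplicate the data"
fi
rm -rf "$dir"
```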
+
+Updating to Fedora 33, I wanted to take advantage of Btrfs, but personally didn’t want to reinstall the whole system for ‘just a filesystem change’. I found [there was] little guidance on how exactly to do it, so I decided to share my detailed experience here.
+
+### Watch out!
+
+Doing this, you are playing with fire. Hopefully you are not surprised to read the following:
+
+> While editing partitions and converting filesystems, your data can be corrupted and/or lost. You can end up with an unbootable system and might be faced with data recovery. You can inadvertently delete your partitions or otherwise harm your system.
+
+These conversion procedures are meant to be safe even for production systems – but only if you plan ahead, have backups for critical data and rollback plans. As a _sudoer_, you can do anything without limits, without any of the usual safety guards protecting you.
+
+### The safe way: reinstalling Fedora
+
+Reinstalling your operating system is the ‘official’ way of converting to Btrfs, recommended for most users. Therefore, choose this option if you are unsure about anything in this guide. The steps are roughly the following:
+
+  1. Back up your home folder and any data that might be used in your system, like _/etc_. [Editors note: VMs too]
+  2. Save your list of installed packages to a file.
+  3. Reinstall Fedora by removing your current partitions and choosing the new default partitioning scheme with Btrfs.
+  4. Restore the contents of your home folder and reinstall the packages using the package list.
+
+
+
+For detailed steps and commands, see this comment by a community user at [ask.fedoraproject.org][2]. If you do this properly, you’ll end up with a system that is functioning in the same way as before, with minimal risk of losing any data.
+
+### Pros and cons of conversion
+
+Let’s clarify this real quick: what kind of advantages and disadvantages does this kind of filesystem conversion have?
+
+##### The good
+
+  * Of course, no reinstallation is needed! Every file on your system will remain the exact same as before.
+  * It’s technically possible to do it in-place, i.e. without a backup.
+  * You’ll surely learn a lot about Btrfs!
+  * It’s a rather quick procedure if everything goes according to plan.
+
+
+
+##### The bad
+
+  * You have to know your way around the terminal and shell commands.
+  * You can lose data; see above.
+  * If anything goes wrong, you are on your own to fix it.
+
+
+
+##### The ugly
+
+  * You’ll need about 20% of free disk space for a successful conversion. But for the complete backup & reinstall scenario, you might need even more.
+  * You can customize everything about your partitions during the process, but you can also do that from Anaconda if you choose to reinstall.
+
+
+
+### What about LVM?
+
+LVM layouts have been the default in the last few Fedora releases. If you have an LVM partition layout with multiple partitions, e.g. _/_ and _/home_, you would somehow have to merge them in order to enjoy all the benefits of Btrfs.
+
+If you choose to, you can individually convert partitions to Btrfs while keeping the volume group. Nevertheless, one of the advantages of migrating to Btrfs is to get rid of the limits imposed by the LVM partition layout. You can also use the send-receive functionality offered by _btrfs_ to merge the partitions after the conversion.
+
+See also on Fedora Magazine: [Reclaim hard-drive space with LVM][3], [Recover your files from Btrfs snapshots][4] and [Choose between Btrfs and LVM-ext4][5].
+
+### Getting acquainted with Btrfs
+
+It’s advisable to read at least the following to have a basic understanding of what Btrfs is about. If you are unsure, just choose the safe way of reinstalling Fedora.
+
+##### Must reads
+
+  * [Fedora Magazine: Btrfs Coming to Fedora 33][6]
+  * [Btrfs sysadmin guide][7], _especially_ about subvolumes & flat subvolume layout.
+  * [Btrfs-convert guide][8]
+
+
+
+##### Useful resources
+
+  * [_man 8 btrfs_][9] – command-line interface
+  * _[man 5 btrfs][10]_ – mount options
+  * _[man btrfs-convert][11]_ – the conversion tool we are going to use
+  * _[man btrfs-subvolume][12]_ – managing subvolumes
+
+
+
+### Conversion steps
+
+##### Create a live image
+
+Since you can’t convert mounted filesystems, we’ll be working from a Fedora live image. Install [Fedora Media Writer][13] and ‘burn’ Fedora 33 to your favorite USB stick.
+
+##### Free up disk space
+
+_btrfs-convert_ will recreate filesystem metadata in your partition’s free disk space, while keeping all existing _ext4_ data at its current location.
+
+Unfortunately, the amount of free space required cannot be known ahead of time – the conversion will just fail (and do no harm) if you don’t have enough. Here are some useful ideas for freeing up space:
+
+  * Use _baobab_ to identify large files & folders to remove. Don’t manually delete files outside of your home folder if possible.
+  * Clean up old system journals: _journalctl --vacuum-size=100M_
+  * If you are using Docker, carefully use tools like _docker volume prune, docker image prune -a_
+  * Clean up unused virtual machine images inside e.g. GNOME Boxes
+  * Clean up unused packages and flatpaks: _dnf autoremove_, _flatpak remove --unused_.
+  * Clean up package caches: _pkcon refresh force -c -1_, _dnf clean all_
+  * If you’re confident enough, you can cautiously clean up the _~/.cache_ folder.
+
+
+
+##### Convert to Btrfs
+
+Save all your valuable data to a backup, make sure your system is fully updated, then reboot into the live image. Run _gnome-disks_ to find out your device handle, e.g. _/dev/sda1_ (it can look different if you are using LVM). Check the filesystem and do the conversion: [Editors note: The following commands are run as root, use caution!]
+
+```
+$ sudo su -
+# fsck.ext4 -fyv /dev/sdXX
+# man btrfs-convert (read it!)
+# btrfs-convert /dev/sdXX
+```
+
+This can take anywhere from 10 minutes to even hours, depending on the partition size and whether you have a rotational or solid-state hard drive. If you see errors, you’ll likely need more free space. As a last resort, you could try _btrfs-convert -n_.
+
+##### How to roll back?
+
+If the conversion fails for some reason, your partition will remain _ext4_ or whatever it was before. If you wish to roll back after a successful conversion, it’s as simple as:
+
+```
+# btrfs-convert -r /dev/sdXX
+```
+
+**Warning!** You will permanently lose your ability to roll back if you do any of these: defragmentation, balancing or deleting the _ext2_saved_ subvolume.
+
+Due to the copy-on-write nature of Btrfs, you can otherwise safely copy, move and even delete files, and create subvolumes, because _ext2_saved_ keeps referencing the old data.
+
+##### Mount & check
+
+The partition should now have a _btrfs_ filesystem. Mount it and look around your files… and subvolumes!
+
+```
+# mount /dev/sdXX /mnt
+# man btrfs-subvolume (read it!)
+# btrfs subvolume list /mnt (-t for a table view)
+```
+
+Because you have already read the [relevant manual page][14], you should now know that it’s safe to create subvolume snapshots, and that you have an _ext2_saved_ subvolume as a handy backup of your previous data.
+
+It’s time to read the [Btrfs sysadmin guide][7], so that you won’t confuse subvolumes with regular folders.
+
+##### Create subvolumes
+
+We would like to achieve a ‘flat’ subvolume layout, which is the same as what Anaconda creates by default:
+
+```
+toplevel (volume root directory, not to be mounted by default)
+  +-- root (subvolume root directory, to be mounted at /)
+  +-- home (subvolume root directory, to be mounted at /home)
+```
+
+You can skip this step, or decide to aim for a different layout.
The advantage of this particular structure is that you can easily create snapshots of _/home_, and have different compression or mount options for each subvolume. + +``` +# cd /mnt +# btrfs subvolume snapshot ./ ./root2 +# btrfs subvolume create home2 +# cp -a home/* home2/ +``` + +Here, we have created two subvolumes. _root2_ is a full snapshot of the partition, while _home2_ starts as an empty subvolume and we copy the contents inside. (This _cp_ command doesn’t duplicate data so it is going to be fast.) + + * In _/mnt_ (the top-level subvolume) delete everything except _root2_, _home2_, and _ext2_saved_. + * Rename _root2_ and _home2_ subvolumes to _root_ and _home_. + * Inside _root_ subvolume, empty out the _home_ folder, so that we can mount the _home_ subvolume there later. + + + +It’s simple if you get everything right! + +##### Modify fstab + +In order to mount the new volume after a reboot, _fstab_ has to be modified, by replacing the old _ext4_ mount lines with new ones. + +You can use the command _blkid_ to learn your partition’s UUID. + +``` +UUID=xx / btrfs subvol=root 0 0 +UUID=xx /home btrfs subvol=home 0 0 +``` + +(Note that the two UUIDs are the same if they are referring to the same partition.) + +These are the defaults for new Fedora 33 installations. In _fstab_ you can also choose to customize compression and add options like _noatime._ + +See the relevant [wiki page about compression][15] and _[man 5 btrfs][10]_ for all relevant options. + +##### Chroot into your system + +If you’ve ever done system recovery, I’m pretty sure you know these commands. Here, we get a shell prompt that is essentially _inside_ your system, with network access. 
+
+First, we have to remount the _root_ subvolume to _/mnt_, then mount the _/boot_ and _/boot/efi_ partitions (these can be different depending on your filesystem layout):
+
+```
+# umount /mnt
+# mount -o subvol=root /dev/sdXX /mnt
+# mount /dev/sdXX /mnt/boot
+# mount /dev/sdXX /mnt/boot/efi
+```
+
+Then we can move on to mounting system devices:
+
+```
+# mount -t proc /proc /mnt/proc
+# mount --rbind /dev /mnt/dev
+# mount --make-rslave /mnt/dev
+# mount --rbind /sys /mnt/sys
+# mount --make-rslave /mnt/sys
+# cp /mnt/etc/resolv.conf /mnt/etc/resolv.conf.chroot
+# cp -L /etc/resolv.conf /mnt/etc
+# chroot /mnt /bin/bash
+$ ping www.fedoraproject.org
+```
+
+##### Reinstall GRUB & kernel
+
+The easiest way – now that we have network access – is to reinstall GRUB and the kernel, because that does all the necessary configuration. So, inside the chroot:
+
+```
+# mount /boot/efi
+# dnf reinstall grub2-efi shim
+# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
+# dnf reinstall kernel-core
+...or just regenerate the initramfs:
+# dracut --kver $(uname -r) --force
+```
+
+This applies if you have a UEFI system. Check the docs below if you have a BIOS system. Let’s check if everything went well, before rebooting:
+
+```
+# cat /boot/grub2/grubenv
+# cat /boot/efi/EFI/fedora/grub.cfg
+# lsinitrd /boot/initramfs-$(uname -r).img | grep btrfs
+```
+
+You should have proper partition UUIDs or references in _grubenv_ and _grub.cfg_ (grubenv may not have been updated, edit it if needed) and see _insmod btrfs_ in _grub.cfg_ and the _btrfs_ module in your initramfs image.
+
+See also: [Reinstalling GRUB 2][16] and [Verifying the Initial RAM Disk Image][17] in the Fedora System Administration Guide.
+
+##### Reboot
+
+Now your system should boot properly. If not, don’t panic, go back to the live image and fix the issue. In the worst case, you can just reinstall Fedora from right there.
+
+##### After first boot
+
+Check that everything is fine with your new Btrfs system.
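A few quick commands make that check concrete. This is only a sketch (run it as root on the converted machine itself; the guard turns it into a harmless notice anywhere else):

```shell
# Post-reboot sanity checks for the converted system: confirm the root
# filesystem really is Btrfs and that the expected subvolumes exist.
if command -v btrfs >/dev/null 2>&1 && [ "$(findmnt -no FSTYPE /)" = "btrfs" ]; then
    checked=yes
    findmnt -no FSTYPE /home    # should also print "btrfs"
    btrfs filesystem show /     # the converted device should be listed
    btrfs subvolume list /      # expect root, home and (for now) ext2_saved
else
    checked=no
    echo "root filesystem is not Btrfs here; skipping checks"
fi
```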
If you are happy, you’ll need to reclaim the space used by the old _ext4_ snapshot, then defragment and balance the subvolumes. The latter two might take some time and are quite resource-intensive.
+
+You have to mount the top-level subvolume for this:
+
+```
+# mount /dev/sdXX -o subvol=/ /mnt/someFolder
+# btrfs subvolume delete /mnt/someFolder/ext2_saved
+```
+
+Then, run these commands when the machine has some idle time:
+
+```
+# btrfs filesystem defrag -v -r -f /
+# btrfs filesystem defrag -v -r -f /home
+# btrfs balance start -m /
+```
+
+Finally, there’s a “no copy-on-write” [attribute][18] that is automatically set for virtual machine image folders on new installations. Set it if you are using VMs:
+
+```
+# chattr +C /var/lib/libvirt/images
+$ chattr +C ~/.local/share/gnome-boxes/images
+```
+
+This attribute only takes effect for new files in these folders. Duplicate the images and delete the originals. You can confirm the result with _lsattr_.
+
+### Wrapping up
+
+I really hope that you have found this guide to be useful, and were able to make a careful and educated decision about whether or not to convert to Btrfs on your system. I wish you a successful conversion process!
+
+Feel free to share your experience here in the comments, or if you run into deeper issues, on [ask.fedoraproject.org][19].
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/convert-your-filesystem-to-btrfs/ + +作者:[Gergely Gombos][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/gombosg/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/08/butterfs-816x346.png +[2]: https://ask.fedoraproject.org/t/conversion-of-an-existing-ext4-fedora-32-system-completely-to-btrfs/9446/6?u=gombosghttps://ask.fedoraproject.org/t/conversion-of-an-existing-ext4-fedora-32-system-completely-to-btrfs/9446/6?u=gombosg +[3]: https://fedoramagazine.org/reclaim-hard-drive-space-with-lvm/ +[4]: https://fedoramagazine.org/recover-your-files-from-btrfs-snapshots/ +[5]: https://fedoramagazine.org/choose-between-btrfs-and-lvm-ext4/ +[6]: https://fedoramagazine.org/btrfs-coming-to-fedora-33/ +[7]: https://btrfs.wiki.kernel.org/index.php/SysadminGuide +[8]: https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3 +[9]: https://www.mankier.com/8/btrfs +[10]: https://www.mankier.com/5/btrfs +[11]: https://www.mankier.com/8/btrfs-convert +[12]: https://www.mankier.com/8/btrfs-subvolume +[13]: https://getfedora.org/en/workstation/download/ +[14]: https://www.mankier.com/8/btrfs-subvolume#Subvolume_and_Snapshot +[15]: https://btrfs.wiki.kernel.org/index.php/Compression +[16]: https://docs.fedoraproject.org/en-US/fedora/f33/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader/#sec-Reinstalling_GRUB_2 +[17]: https://docs.fedoraproject.org/en-US/fedora/f33/system-administrators-guide/kernel-module-driver-configuration/Manually_Upgrading_the_Kernel/#sec-Verifying_the_Initial_RAM_Disk_Image +[18]: https://www.mankier.com/1/chattr#Attributes-C +[19]: https://ask.fedoraproject.org/ diff --git 
a/sources/tech/20210122 Why KubeEdge is my favorite open source project of 2020.md b/sources/tech/20210122 Why KubeEdge is my favorite open source project of 2020.md new file mode 100644 index 0000000000..941a36a082 --- /dev/null +++ b/sources/tech/20210122 Why KubeEdge is my favorite open source project of 2020.md @@ -0,0 +1,123 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why KubeEdge is my favorite open source project of 2020) +[#]: via: (https://opensource.com/article/21/1/kubeedge) +[#]: author: (Mike Calizo https://opensource.com/users/mcalizo) + +Why KubeEdge is my favorite open source project of 2020 +====== +KubeEdge is a workload framework for edge computing. +![Tips and gears turning][1] + +I believe [edge computing][2], which "brings computation and data storage closer to the location where it is needed to improve response times and save bandwidth," is the next major phase of technology adoption. The widespread use of mobile devices and wearable gadgets and the availability of free city-wide WiFi in some areas create a lot of data that can provide many advantages if used properly. For example, this data can help people fight crime, learn about nearby activities and events, find the best sale price, avoid traffic, and so on. + +[Gartner][3] says the rapid growth in mobile application adoption requires an edge infrastructure to use the data from these devices to further progress and improve quality of life. Some of the brightest minds are looking for ways to use the rich data generated from our mobile devices. Take the COVID-19 pandemic, for example. Edge computing can gather data that can help fight the spread of the virus. In the future, mobile devices might warn people about the potential for community infection by providing live updates to their devices based on processing and serving data collected from other devices (using artificial intelligence and machine learning). 
+ +In defining an edge-computing architecture, one thing is constant: The platform must be flexible and scalable to deploy a smart or intelligent application on it and in your core data center. As an open source advocate and user, this naturally triggers my interest in using open source technology to harness the power of edge computing. + +This is why [KubeEdge][4], which delivers container orchestration to resource-constrained environments, is my favorite open source project of 2020. This extremely lightweight but fully compliant Kubernetes distribution was created to run cloud-native workloads in Internet of Things (IoT) devices at the network's edge. + +![Edge computing architecture][5] + +(Michael Calizo, [CC BY-SA 4.0][6]) + +### Challenges of collecting and consuming data + +Having a rich data source does not mean anything if the data isn't used properly. This is the dilemma that edge computing is trying to solve. To be able to use data properly, the platform must be flexible enough to handle the demand required to collect, process, and serve data and make smart decisions about whether the data can be processed at the edge or must be processed in a regional or core data center. + +The challenges when moving data from the edge location to a core data center include: + + * Network reliability + * Security + * Resource constraints + * Autonomy + + + +A Kubernetes platform on the edge, such as KubeEdge, meets these requirements, as it provides the scalability, flexibility, and security needed to perform data collection, processing, and serving. KubeEdge is open source, lightweight, and easy to deploy, has low resource requirements, and provides everything you need. + +### KubeEdge's architecture + +KubeEdge was [introduced in 2018][7] at KubeCon in Seattle. In 2019, it was accepted as a Cloud Native Computing Foundation (CNCF) sandbox project, which gives it wider public visibility and puts it on the way to becoming a full-fledged CNCF-sanctioned project. 
+ +![KubeEdge architecture][8] + +(©2019 [The New Stack][9]) + +In a nutshell, KubeEdge has two main components or parts: Cloud and Edge. + +#### Cloud + +The Cloud part is where the Kubernetes Master components, the EdgeController, and edge CloudHub reside. + + * **CloudHub** is a communication interface module in the Cloud component. It acts as a caching mechanism to ensure changes in the Cloud part are sent to the Edge caching mechanism (EdgeHub). + * The **EdgeController** manages the edge nodes and performs reconciliation between edge nodes. + + + +#### Edge + +The Edge part is where edge nodes are found. The most important Edge components are: + + * **EdgeHub** is a communication interface module to the Cloud component. + * **Edged** does the kubelet's job, including managing pod lifecycles and other related kubelet jobs on the nodes. + * **MetaManager** makes sure that all node-level metadata is persistent. + * **DeviceTwin** is responsible for syncing devices between the Cloud and the Edge components. + * **EventBus** handles the internal edge communications using Message Queuing Telemetry Transport (MQTT). + + + +### Kubernetes for edge computing + +Kubernetes has become the gold standard for orchestrating containerized workloads on premises and in public clouds. This is why I think KubeEdge is the perfect solution for using edge computing to reap the benefits of the data that mobile technology generates. + +The KubeEdge architecture allows autonomy on an edge computing layer, which solves network latency and velocity problems. This enables you to manage and orchestrate containers in a core data center as well as manage millions of mobile devices through an autonomous edge computing layer. This is possible because of how KubeEdge uses a combination of the message bus (in the Cloud and Edge components) and the Edge component's data store to allow the edge node to be independent. 
Through caching, data is synchronized with the local datastore every time a handshake happens. Similar principles are applied to edge devices that require persistency. + +KubeEdge handles machine-to-machine (M2M) communication differently from other edge platform solutions. KubeEdge uses [Eclipse Mosquitto][10], a popular open source MQTT broker from the Eclipse Foundation. Mosquitto enables WebSocket communication between the edge and the master nodes. Most importantly, Mosquitto allows developers to author custom logic and enable resource-constrained device communication at the edge. + +**[Read next: [How to explain edge computing in plain terms][11]]** + +Security is a must for M2M communication; it is the only way you can trust sensitive data sent through the web. Currently, KubeEdge supports Secure Production Identity Framework for Everyone ([SPIFFE][12]), ensuring that: + + 1. Only verifiable nodes can join the edge cluster. + 2. Only verifiable workloads can run on the edge nodes. + 3. Short-lived certificates are used with rotation policies. + + + +### Where KubeEdge is heading + +KubeEdge is in the very early stage of adoption, but it is gaining popularity due to its flexible approach to making edge computing communications secure, reliable, and autonomous so that they won't be affected by network latency. + +KubeEdge is a flexible, vendor-neutral, lightweight, heterogeneous edge computing platform. This enables it to support use cases such as data analysis, video analytics, machine learning, and more. Because it is vendor-neutral, KubeEdge allows big cloud players to use it. + +These are the reasons why KubeEdge is my favorite project of 2020. There is much more to come, and I expect to see more contributions from the community for wider adoption. I am excited about its future of enabling us to consume available data and use it for the greater good. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/kubeedge + +作者:[Mike Calizo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mcalizo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning) +[2]: https://en.wikipedia.org/wiki/Edge_computing +[3]: https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/ +[4]: https://kubeedge.io/en/ +[5]: https://opensource.com/sites/default/files/uploads/edgecomputing.png (Edge computing architecture) +[6]: https://creativecommons.org/licenses/by-sa/4.0/ +[7]: https://www.youtube.com/watch?v=nWFkxuRvZ7U&feature=youtu.be&t=1755 +[8]: https://opensource.com/sites/default/files/uploads/kubeedge-architecture.png (KubeEdge architecture) +[9]: https://thenewstack.io/kubeedge-extends-the-power-of-kubernetes-to-the-edge/ +[10]: https://mosquitto.org/ +[11]: https://enterprisersproject.com/article/2019/7/edge-computing-explained-plain-english +[12]: https://spiffe.io/ diff --git a/sources/tech/20210123 Firecracker- start a VM in less than a second.md b/sources/tech/20210123 Firecracker- start a VM in less than a second.md new file mode 100644 index 0000000000..45f77890ad --- /dev/null +++ b/sources/tech/20210123 Firecracker- start a VM in less than a second.md @@ -0,0 +1,305 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Firecracker: start a VM in less than a second) +[#]: via: (https://jvns.ca/blog/2021/01/23/firecracker--start-a-vm-in-less-than-a-second/) +[#]: author: (Julia Evans 
https://jvns.ca/)

Firecracker: start a VM in less than a second
====== 

Hello! I spent this whole past week figuring out how to use [Firecracker][1] and I really like it so far.

Initially when I read about Firecracker being released, I thought it was just a tool for cloud providers to use – I knew that AWS Fargate used it, but I didn’t think that it was something that I could directly use myself.

But it turns out that Firecracker is relatively straightforward to use (or at least as straightforward as anything else that’s for running VMs), the documentation and examples are pretty clear, you definitely don’t need to be a cloud provider to use it, and as advertised, it starts VMs really fast!

So I wanted to write about using Firecracker from a more DIY “I just want to run some VMs” perspective.

I’ll start out by talking about what I’m using it for, and then I’ll explain a few things I learned about it along the way.

### my goal: a game where every player gets their own virtual machine

I’m working on a sort of game to help people learn command line tools by giving them a problem to solve and a virtual machine to solve it in, a little like a CTF. It still basically exists only on my computer, but I’ve been working on it for a while.

Here’s a screenshot of one of the puzzles I’m working on right now. This one is about setting file extended attributes with `setfacl`.



### why not use containers?

I wanted to use virtual machines and not containers for this project basically because I wanted to mimic a real production machine that the user has root access to – I wanted folks to be able to set sysctls, use `nsenter`, make `iptables` rules, configure networking with `ip`, run `perf`, basically literally anything.

### the problem: starting a virtual machine is slow

I wanted people to be able to click “Start” on a puzzle and instantly launch a virtual machine.
Originally I was launching a DigitalOcean VM every time, but they took about a minute to boot, I was getting really impatient waiting for them every time, and I didn’t think it was an acceptable user experience for people to have to wait a minute.
+
+I also tried using qemu, but for reasons I don’t totally understand, starting a VM with qemu was also kind of slow – it seemed to take at least maybe 20 seconds.
+
+### Firecracker can start a VM in less than a second!
+
+Firecracker says this about performance in their [specification][2]:
+
+> It takes <= 125 ms to go from receiving the Firecracker InstanceStart API call to the start of the Linux guest user-space /sbin/init process.
+
+So far I’ve been using Firecracker to start relatively large VMs – Ubuntu VMs running systemd as an init system – and it takes maybe 2-3 seconds for them to boot. I haven’t been measuring that closely because honestly 5 seconds is fast enough and I don’t mind too much about an extra 200ms either way.
+
+But enough background, let’s talk about how to actually use Firecracker.
+
+### here’s a “hello world” script to start a Firecracker VM
+
+I said at the beginning of this post that Firecracker is pretty straightforward to get started with. Here’s how.
+
+Firecracker’s [getting started][3] instructions are really good (they just work!) but they’re separated into a bunch of steps and I wanted to see everything you have to do together in 1 shell script. So I wrote a short shell script you can use to start a Firecracker VM, and some quick instructions for how to use it.
+
+Running a script like this was the first thing I did when trying to wrap my head around Firecracker. 
There are basically 3 steps:
+
+**step 1**: Download Firecracker from their [releases page][4] and put it somewhere
+
+**step 2**: Run this script as root (you might have to edit the last line with the path to the `firecracker` binary if it’s not in root’s PATH)
+
+I also put this script in a gist: [firecracker-hello-world.sh][5]. The IP addresses here are chosen pretty arbitrarily. Most of the script is just writing a JSON file.
+
+```
+set -eu
+
+# download a kernel and filesystem image
+[ -e hello-vmlinux.bin ] || wget https://s3.amazonaws.com/spec.ccfc.min/img/hello/kernel/hello-vmlinux.bin
+[ -e hello-rootfs.ext4 ] || wget -O hello-rootfs.ext4 https://github.com/firecracker-microvm/firecracker-demo/raw/master/xenial.rootfs.ext4
+[ -e hello-id_rsa ] || wget -O hello-id_rsa https://raw.githubusercontent.com/firecracker-microvm/firecracker-demo/ec271b1e5ffc55bd0bf0632d5260e96ed54b5c0c/xenial.rootfs.id_rsa
+
+TAP_DEV="fc-88-tap0"
+
+# set up the kernel boot args
+MASK_LONG="255.255.255.252"
+MASK_SHORT="/30"
+FC_IP="169.254.0.21"
+TAP_IP="169.254.0.22"
+FC_MAC="02:FC:00:00:00:05"
+
+KERNEL_BOOT_ARGS="ro console=ttyS0 noapic reboot=k panic=1 pci=off nomodules random.trust_cpu=on"
+KERNEL_BOOT_ARGS="${KERNEL_BOOT_ARGS} ip=${FC_IP}::${TAP_IP}:${MASK_LONG}::eth0:off"
+
+# set up a tap network interface for the Firecracker VM to use
+ip link del "$TAP_DEV" 2> /dev/null || true
+ip tuntap add dev "$TAP_DEV" mode tap
+sysctl -w net.ipv4.conf.${TAP_DEV}.proxy_arp=1 > /dev/null
+sysctl -w net.ipv6.conf.${TAP_DEV}.disable_ipv6=1 > /dev/null
+ip addr add "${TAP_IP}${MASK_SHORT}" dev "$TAP_DEV"
+ip link set dev "$TAP_DEV" up
+
+# make a configuration file
+cat >vmconfig.json <<EOF
+{
+  "boot-source": {
+    "kernel_image_path": "hello-vmlinux.bin",
+    "boot_args": "$KERNEL_BOOT_ARGS"
+  },
+  "drives": [
+    {
+      "drive_id": "rootfs",
+      "path_on_host": "hello-rootfs.ext4",
+      "is_root_device": true,
+      "is_read_only": false
+    }
+  ],
+  "network-interfaces": [
+    {
+      "iface_id": "eth0",
+      
"guest_mac": "$FC_MAC",
+      "host_dev_name": "$TAP_DEV"
+    }
+  ],
+  "machine-config": {
+    "vcpu_count": 2,
+    "mem_size_mib": 1024,
+    "ht_enabled": false
+  }
+}
+EOF
+# start firecracker
+firecracker --no-api --config-file vmconfig.json
+```
+
+**step 3**: You have a VM running!
+
+You can also SSH into the VM like this, with the SSH key that the script downloaded:
+
+```
+ssh -o StrictHostKeyChecking=no root@169.254.0.21 -i hello-id_rsa
+```
+
+You might notice that if you run `ping 8.8.8.8` inside this VM, it doesn’t work: it’s not able to connect to the outside internet. I think I’m actually going to use a setup like this for my puzzles where people don’t need to connect to the internet.
+
+The networking commands and the rootfs image in this script are from the [firecracker-demo][6] repository which I found really helpful.
+
+### how I put a Firecracker VM on the Docker bridge
+
+I had a couple of problems with this “hello world” setup though:
+
+ * I wanted to be able to SSH to them from a Docker container (because I was running my game’s webserver in `docker-compose`)
+ * I wanted them to be able to connect to the outside internet
+
+
+
+I struggled with trying to understand what a Linux bridge was and how it worked for about a day before figuring out how to get this to work. Here’s a slight modification of the previous script, [firecracker-hello-world-docker-bridge.sh][7], which runs a Firecracker VM on the Docker bridge.
+
+You can run it as root and SSH to the resulting VM like this (the IP is different because it has to be in the Docker subnet).
+
+```
+ssh -o StrictHostKeyChecking=no root@$FC_IP -i hello-id_rsa   # FC_IP as set in that script
+```
+
+It basically just changes 2 things:
+
+ 1. There’s an extra `sudo brctl addif docker0 $TAP_DEV` to add the VM’s network interface to the Docker bridge
+ 2. 
It changes the gateway in the kernel boot args to the Docker bridge network interface’s IP (172.17.0.1).
+
+
+
+My guess is that most people probably won’t want to use the Docker bridge; if you just want the VM to be able to connect to the outside internet, I think the best way is to create a new bridge.
+
+In my application I’m actually using a bridge called `firecracker0` which is a docker-compose network I made. It feels a little sketchy to be using a bridge managed by Docker in this way but for now it works so I’ll keep doing that unless I find a better way.
+
+### how I built my own Firecracker images
+
+This “hello world” example is all very well and good, but you might say – ok, how do I build my own images?
+
+Basically you have to do 2 things:
+
+ 1. Make a Linux kernel. I wanted a 5.8 kernel so I used the instructions in the [firecracker docs on creating your own image][8] for compiling a Linux kernel and they worked. I was kind of intimidated by this because I’d somehow never compiled a Linux kernel before, but I followed the instructions and it just worked the first time. I thought it would be super slow but it actually took less than 10 minutes to compile from scratch.
+ 2. Make an `ext4` filesystem image with all the files you want in your VM’s filesystem.
+
+
+
+Here’s how I put together my filesystem. Initially I tried downloading Ubuntu’s focal cloud image and extracting the root partition with `dd`, but I couldn’t get it to work.
+
+Instead, I did what the Firecracker docs suggested and I built a Docker container and copied the contents of the container into a filesystem image.
+
+Here’s what the `Dockerfile` I used looked like approximately: (I haven’t tested this exact Dockerfile but I think it should work). The main things are that you have to install some kind of init system because the default `ubuntu:20.04` image doesn’t come with one because you don’t need one in a container. 
I also ran `unminimize` to restore some man pages because the container is for interactive use. + +``` +FROM ubuntu:20.04 +RUN apt-get update +RUN apt-get install -y init openssh-server +RUN yes | unminimize +# copy over some SSH keys and install other programs I wanted +``` + +And here’s the basic shell script I’ve been using to create a filesystem image from the Docker container. I ran the whole thing as root, but technically you only have to run `mount` as root. + +``` +IMG_ID=$(docker build -q .) +CONTAINER_ID=$(docker run -td $IMG_ID /bin/bash) + +MOUNTDIR=mnt +FS=mycontainer.ext4 + +mkdir $MOUNTDIR +qemu-img create -f raw $FS 800M +mkfs.ext4 $FS +mount $FS $MOUNTDIR +docker cp $CONTAINER_ID:/ $MOUNTDIR +umount $MOUNTDIR +``` + +I’m still not quite sure how much I’m going to like this approach of using Docker containers to create VM images – it feels a bit weird to me but it’s been working fine so far. + +I think most people who use Firecracker use a more lightweight init system than systemd and it’s definitely not necessary to use systemd but I think I’m going to stick with systemd for now because I want it to feel mostly like a normal production Linux system and a lot of the production servers I’ve used have used systemd. + +Okay, that’s all I have to say about creating images. Let’s talk a bit more about configuring Firecracker. + +### Firecracker supports either a socket interface or a configuration file + +You can start a Firecracker VM 2 ways: + + 1. create a configuration file and run `firecracker --no-api --config-file vmconfig.json` + 2. create an API socket and write instructions to the API socket (like they explain in their [getting started][3] instructions) + + + +I really liked the configuration file approach for doing some initial experimentation because I found it easier to be able to see everything all in one place. But when integrating Firecracker with my actual application in real life, I found it easier to use the API. 
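For readers curious what the socket interface looks like before reaching for a SDK, here is a minimal sketch of driving it with `curl`. The endpoint paths (`/boot-source`, `/drives/rootfs`, `/actions`) follow the getting-started docs linked above; the socket path and the image file names are assumptions carried over from the hello-world script, so adjust them to your setup.

```shell
# Hedged sketch: configuring and starting a VM over Firecracker's API socket
# instead of --config-file. The socket path and file names are assumptions.
API_SOCK=/tmp/firecracker.socket

# Each configuration step is a PUT of a small JSON document over the socket.
boot_source='{"kernel_image_path": "hello-vmlinux.bin", "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}'
rootfs='{"drive_id": "rootfs", "path_on_host": "hello-rootfs.ext4", "is_root_device": true, "is_read_only": false}'
start_action='{"action_type": "InstanceStart"}'

# Sanity-check the payloads locally before sending anything.
for doc in "$boot_source" "$rootfs" "$start_action"; do
  echo "$doc" | python3 -m json.tool > /dev/null && echo "payload ok"
done

# With `firecracker --api-sock "$API_SOCK"` running in another terminal:
#   curl --unix-socket "$API_SOCK" -X PUT http://localhost/boot-source   -d "$boot_source"
#   curl --unix-socket "$API_SOCK" -X PUT http://localhost/drives/rootfs -d "$rootfs"
#   curl --unix-socket "$API_SOCK" -X PUT http://localhost/actions       -d "$start_action"
```

The `vmconfig.json` from the hello-world script is essentially these same documents merged into one file, which is part of why the config-file route is so convenient for experimenting.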
+
+### how I wrote an HTTP service that starts Firecracker VMs: use the Go SDK!
+
+I wanted to have a little HTTP service that I could call from my Ruby on Rails server to start new VMs and stop them when I was done with them.
+
+Here’s what the interface looks like – you give it a root image and a kernel and it returns an ID and the VM’s IP address. All of the file paths are just local paths on my machine.
+
+```
+$ http post localhost:8080/create root_image_path=/images/base.ext4 kernel_path=/images/vmlinux-5.8
+HTTP/1.1 200 OK
+{
+    "id": "D248122A-1CCA-475C-856E-E3003A913F32",
+    "ip_address": "172.102.0.4"
+}
+```
+
+and then here’s what deleting a VM looks like (I might make this use the `DELETE` method later to make it more REST-y :) )
+
+```
+$ http post localhost:8080/delete id=D248122A-1CCA-475C-856E-E3003A913F32
+HTTP/1.1 200 OK
+```
+
+At first I wasn’t sure how I was going to use the Firecracker socket API to implement this interface, but then I discovered that there’s a [Go SDK][9]! This made it way easier to generate the correct JSON, because there were a bunch of structs and the compiler would tell me if I made a typo in a field name.
+
+I basically wrote all of my code so far by copying and modifying code from [firectl][10], a Go command line tool. The reason I wrote my own tool instead of just using `firectl` directly was that I wanted to have an HTTP API that could launch and stop lots of different VMs.
+
+I found the `firectl` code and the Go SDK pretty easy to understand so I won’t say too much more about it here.
+
+If you’re interested you can see [a gist with my current HTTP service for managing Firecracker VMs][11] which is a huge mess and pretty buggy and not intended for anyone but me to use. It does start VMs successfully though which is an important first step!!!
+
+### DigitalOcean supports nested virtualization
+
+Another question I had was: “ok, where am I going to run these Firecracker VMs in production?”. 
The funny thing about running a VM in the cloud is that cloud instances are _already_ VMs. Running a VM inside a VM is called “nested virtualization” and not all cloud providers support it – for example AWS doesn’t. + +Right now I’m using DigitalOcean and I was delighted to see that DigitalOcean does support nested virtualization even on their smallest droplets – I tried running the “hello world” Firecracker script from above and it just worked! + +I think GCP supports nested virtualization too but I haven’t tried it. The official Firecracker documentation suggests using a `metal` instance on AWS, probably because Firecracker is made by AWS. + +I don’t know what the performance implications of using nested virtualization are yet but I guess I’ll find out! + +### Firecracker only runs on Linux + +I should say that Firecracker uses KVM so it only runs on Linux. I don’t know if there’s a way to start VMs in a similarly fast way on a Mac, maybe there is? Or maybe there’s something special about KVM? I don’t understand how KVM works. + +### some open questions + +A few things I still haven’t figured out: + + * Right now I’m not using `jailer`, another part of Firecracker that helps further isolate the Firecracker VM by adding some `seccomp-BPF` rules and other things. Maybe I should be! `firectl` uses `jailer` so it would be pretty easy to copy the code that does that. + * I still don’t totally understand _why_ Firecracker is fast (or alternatively, why qemu is slow). This [LWN article][12] says that it’s because Firecracker emulates less devices than qemu does, but I don’t know exactly which devices are the ones that are making qemu slow to start. + * will it be slow to use nested virtualization? + * I don’t know if it’s possible to run graphical applications in Firecracker, it seems like it might not because it’s intended for servers, but maybe it is possible? 
+ * I’m not sure how many Firecracker VMs I can run at a time on my little $5/month DigitalOcean droplet; I need to do some experiments.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2021/01/23/firecracker--start-a-vm-in-less-than-a-second/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/firecracker-microvm/firecracker/
+[2]: https://github.com/firecracker-microvm/firecracker/blob/master/SPECIFICATION.md
+[3]: https://github.com/firecracker-microvm/firecracker/blob/master/docs/getting-started.md#getting-the-firecracker-binary
+[4]: https://github.com/firecracker-microvm/firecracker/releases
+[5]: https://gist.github.com/jvns/c8470e75af67deec2e91ff1bd9883e53
+[6]: https://github.com/firecracker-microvm/firecracker-demo/
+[7]: https://gist.github.com/jvns/e13e6f498d26b584d8ab66651cdb04e0
+[8]: https://github.com/firecracker-microvm/firecracker/blob/master/docs/rootfs-and-kernel-setup.md
+[9]: https://github.com/firecracker-microvm/firecracker-go-sdk
+[10]: https://github.com/firecracker-microvm/firectl/
+[11]: https://gist.github.com/jvns/9b274f24cfa1db7abecd0d32483666a3
+[12]: https://lwn.net/Articles/775736/ diff --git a/sources/tech/20210123 How I programmed a virtual gift exchange.md b/sources/tech/20210123 How I programmed a virtual gift exchange.md new file mode 100644 index 0000000000..446c1b50c2 --- /dev/null +++ b/sources/tech/20210123 How I programmed a virtual gift exchange.md @@ -0,0 +1,237 @@ +[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How I programmed a virtual gift exchange)
+[#]: via: (https://opensource.com/article/21/1/open-source-gift-exchange)
+[#]: 
author: (Chris Hermansen https://opensource.com/users/clhermansen) + +How I programmed a virtual gift exchange +====== +A book club takes its annual gift exchange online with the help of HTML, +CSS, and JavaScript. +![Package wrapped with brown paper and red bow][1] + +Every year, my wife's book club has a book exchange during the holidays. Due to the need to maintain physical distance in 2020, I created an online gift exchange for them to use during a book club videoconference. Apparently, the virtual book exchange worked out (at least, I received kind compliments from the book club members), so I decided to share this simple little hack. + +### How the book exchange usually works + +In past years, the exchange has gone something like this: + + 1. Each person buys a book and wraps it up. + 2. Everyone arrives at the host's home and puts the wrapped books in a pile. + 3. Each person draws a number out of a hat, which establishes their turn. + 4. The person who drew No. 1 selects a book from the pile and unwraps it. In turn, each subsequent person chooses to either take a wrapped book from the pile or to steal an unwrapped book from someone who has gone before. + 5. When someone's book is stolen, they can either replace it with a wrapped book from the pile or steal another book (but not the one that was stolen from them) from someone else. + 6. And so on… eventually, someone has to take the last unwrapped book to end the exchange. + + + +### Designing the virtual book exchange + +My first decision was which implementation platform to use for the book exchange. Because there would already be a browser open to host the videoconference, I decided to use HTML, CSS, and JavaScript. + +Then it was design time. After some thinking, I decided to use rectangles to represent the book club members and the books. The books would be draggable, and when one was dropped on a member's rectangle, the book would unwrap (and stay unwrapped). 
I needed some "wrapping paper," so I used this source of [free-to-use images][2]. + +I took screenshots of the patterns I liked and used [GIMP][3] to scale the images to the right width and height. + +I needed a way to handle draggable and droppable interactions; given that I've been using jQuery and jQuery UI for several years now, I decided to continue along that path. + +For a while, I struggled with what a droppable element should do when something was dropped on it. Finally, I realized that all I needed to do was unwrap the dropped item (if it was still wrapped). I also spent some time fretting over how to lay stuff out until I realized that the easiest thing to do was just leave the elements floating. + +Jumping to the results, here's a screenshot of the user interface at the beginning of the exchange: + +![Virtual book exchange][4] + +(Chris Hermansen, [CC BY-SA 4.0][5]) + +There are nine book club members: Wanda, Carlos, Bill, and so on. There are also nine fairly ugly wrapped parcels. + +Let's say Wanda goes first and chooses the flower wrapping paper. The host clicks and drags that parcel to Wanda's name, and the parcel unwraps: + +![Virtual book exchange][6] + +(Chris Hermansen, [CC BY-SA 4.0][5]) + +Whoops! That title and author are a bit too long to fit on the book's "cover." Oh well, I'll fix that in the next version. + +Carlos is next. He decides he really wants to read that book, so he steals it. Wanda then chooses the paisley pattern, and the screen looks like this: + +![Virtual book exchange][7] + +(Chris Hermansen, [CC BY-SA 4.0][5]) + +And so on until the exchange ends. + +### The code + +So what about the code? 
Here it is:
+
+
+```
+     1  <!doctype html>
+     2  <html lang="en">
+     3  <head>
+     4    <meta charset="utf-8">
+     5    <title>Book Exchange</title>
+     6    <link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/smoothness/jquery-ui.css">
+     7    <style>
+     8    .draggable {
+     9      float: left;
+    10      width: 90px;
+    11      height: 90px;
+    12      background: #ccc;
+    13      padding: 5px;
+    14      margin: 5px 5px 5px 0;
+    15    }
+    16    .droppable {
+    17      float: left;
+    18      width: 100px;
+    19      height: 125px;
+    20      background: #999;
+    21      color: #fff;
+    22      padding: 10px;
+    23      margin: 10px 10px 10px 0;
+    24    }
+    25    </style>
+    26    <script src="https://code.jquery.com/jquery-1.12.4.js"></script>
+    27    <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
+    28  </head>
+    29  <body>
+    30  <h1 style="color:#1a1aff;">Raffles Book Club Remote Gift Exchange</h1>
+    31  <h2 style="color:#aa0a0a;">The players, in random order, and the luxurious gifts, wrapped:</h2>
+    32  
+    33  <div>
+    34  <div id="wanda" class="droppable">Wanda</div>
+    35  <div id="carlos" class="droppable">Carlos</div>
+    36  <div id="bill" class="droppable">Bill</div>
+    37  <div id="arlette" class="droppable">Arlette</div>
+    38  <div id="joanne" class="droppable">Joanne</div>
+    39  <div id="aleks" class="droppable">Alekx</div>
+    40  <div id="ermintrude" class="droppable">Ermintrude</div>
+    41  <div id="walter" class="droppable">Walter</div>
+    42  <div id="hilary" class="droppable">Hilary</div>
+    43  </div>
+    44  <div>
+    45  <div id="bows" class="draggable" style="background-image: url('bows.png');"></div>
+    46  <div id="boxes" class="draggable" style="background-image: url('boxes.png');"></div>
+    47  <div id="circles" class="draggable" style="background-image: url('circles.png');"></div>
+    48  <div id="gerbers" class="draggable" style="background-image: url('gerbers.png');"></div>
+    49  <div id="hippie" class="draggable" style="background-image: url('hippie.png');"></div>
+    50  <div id="lattice" class="draggable" style="background-image: url('lattice.png');"></div>
+    51  <div id="nautical" class="draggable" style="background-image: url('nautical.png');"></div>
+    52  <div id="splodges" class="draggable" style="background-image: url('splodges.png');"></div>
+    53  <div id="ugly" class="draggable" style="background-image: url('ugly.png');"></div>
+    54  </div>
+    55  
+    56  <script>
+    57  var books = {
+    58      'bows': 'Untamed by Glennon Doyle',
+    59      'boxes': "The Heart's Invisible Furies by John Boyne",
+    60      'circles': 'The Great Halifax Explosion by John Bacon',
+    61      'gerbers': 'Homes: A Refugee Story by Abu Bakr al Rabeeah, Winnie Yeung',
+    62      'hippie': 'Before We Were Yours by Lisa Wingate',
+    63      'lattice': "Hamnet and Judith by Maggie O'Farrell",
+    64      'nautical': 'Shuggy Bain by Douglas Stewart',
+    65      'splodges': 'Magdalena by Wade Davis',
+    66      'ugly': 'Funny Boy by Shyam Selvadurai'
+    67  };
+    68  $( ".droppable" ).droppable({
+    69    drop: function(event, ui) {
+    70      var element = $(ui.draggable[0]);
+    71      var wrapping = element.attr('id');
+    72      /* alert( $(this).text() + " got " + wrapping); */
+    73      $(ui.draggable[0]).css("background-image","url(book_cover.png)");
+    74      $(ui.draggable[0]).text(books[wrapping]);
+    75    },
+    76    out: function() {
+    77      /* alert( $(this).text() + " lost it" ); */
+    78    }
+    79  });
+    80  $( ".draggable" ).draggable();
+    81  </script>
+    82  
+    83  </body>
+    84  </html>
+```
+
+### Breaking it down
+
+Let's go over this code bit by bit.
+
+  * **Lines 1–6:** Upfront, I have the usual HTML boilerplate, `HTML`, `HEAD`, `META`, `TITLE` elements, followed by a link to the CSS for jQuery UI.
+  * **Lines 7–25:** I added two new style classes: `draggable` and `droppable`. These define the layout for the books (draggable) and the people (droppable). Note that, aside from defining the size, background color, padding, and margin, I established that these need to float left. This way, the layout adjusts to the browser window width in a reasonably acceptable form.
+  * **Lines 26–27:** With the CSS out of the way, it's time for the JavaScript libraries, first jQuery, then jQuery UI.
+  * **Lines 29–83:** Now that the `HEAD` element is done, next is the `BODY`:
+    * **Lines 30–31:** These couple of titles, `H1` and `H2`, let people know what they're doing here.
+    * **Lines 33–43:** A `DIV` to contain the people:
+    * **Lines 34–42:** The people are defined as droppable `DIV` elements and given `ID` fields corresponding to their names.
+    * **Lines 44–54:** A `DIV` to contain the books:
+    * **Lines 45–53:** The books are defined as draggable `DIV` elements. Each element is declared with a background image corresponding to the wrapping paper with no text between the `

<div>` and `</div>
`. The `ID` fields correspond to the wrapping paper. + * **Lines 56–81:** These contain JavaScript to make it all work. + * **Lines 57–67:** This JavaScript object contains the book definitions. The keys (`'bows'`, `'boxes'`, etc.) correspond to the `ID` fields of the book `DIV` elements. The values (`'Untamed by Glennon Doyle',` `"The Heart's Invisible Furies by John Boyne"`, etc.) are the book titles and authors. + * **Lines 68–79:** This JavaScript jQuery UI function defines the droppable functionality to be attached to HTML elements whose class is `droppable`. + * **Lines 69–75:** When a `draggable` element is dropped onto a `droppable` element, the function `drop` is called. + * **Line 70:** The `element` variable is assigned the draggable object that was dropped (this will be a `
<div>` element).
+    * **Line 71:** The `wrapping` variable is assigned the value of the `ID` field in the draggable object.
+    * **Line 72:** This line is commented out, but while I was learning and testing, calls to `alert()` were useful.
+    * **Line 73:** This reassigns the draggable object's background image to a bland image on which text can be read; part 1 of unwrapping is getting rid of the wrapping paper.
+    * **Line 74:** This sets the text of the draggable object to the title of the book, looked up in the book's object using the draggable object's ID; part 2 of the unwrapping is showing the book title and author.
+    * **Lines 76–78:** For a while, I thought I wanted something to happen when a draggable object was removed from a droppable object (e.g., when a club member stole a book), which would require using the `out` function in a droppable object. Eventually, I decided not to do anything. But, this could note that the book was stolen and make it "unstealable" for one turn; or it could show a status line that says something like: _"Wanda's book Blah Blah by Joe Blogs was stolen, and she needs to choose another."_
+    * **Line 80:** This JavaScript jQuery UI function defines the draggable functionality to be attached to HTML elements whose class is `draggable`. In my case, the default behavior was all I needed.
+
+
+
+That's it!
+
+### A few last thoughts
+
+Libraries like jQuery and jQuery UI are incredibly helpful when trying to do something complicated in JavaScript. Look at the `$().draggable()` and `$().droppable()` functions, for example:
+
+
+```
+$( ".draggable" ).draggable();
+```
+
+The `".draggable"` allows associating the `draggable()` function with any HTML element whose class is "draggable." The `draggable()` function comes with all sorts of useful behavior about picking, dragging, and releasing a draggable HTML element. 
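To see the shape of the drop behavior without opening a browser, here is a small sketch of the same unwrap logic the droppable handler performs, with the jQuery/DOM calls stubbed out so it runs standalone in Node. The names `makeParcel` and `unwrap` are mine, not from the article's code; it's a sketch of the data flow, not the real handler.

```javascript
// Hedged sketch: the drop handler's unwrap logic, isolated from the DOM.
const books = {
  bows: 'Untamed by Glennon Doyle',
  boxes: "The Heart's Invisible Furies by John Boyne",
};

// Stand-in for a dropped draggable element: it records the same two
// properties the real handler changes via .css() and .text().
function makeParcel(id) {
  return { id, backgroundImage: `url(${id}.png)`, text: '' };
}

// Mirrors the body of the droppable `drop` callback.
function unwrap(parcel) {
  const wrapping = parcel.id;                     // element.attr('id')
  parcel.backgroundImage = 'url(book_cover.png)'; // step 1: remove the wrapping paper
  parcel.text = books[wrapping];                  // step 2: show title and author
  return parcel;
}

const parcel = unwrap(makeParcel('bows'));
console.log(parcel.text); // prints "Untamed by Glennon Doyle"
```

The key design choice this isolates is that the wrapping-paper `ID` doubles as the lookup key into the `books` object, so the handler never needs to know which book is inside which parcel ahead of time.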
+ +If you haven't spent much time with jQuery, I really like the book [_jQuery in Action_][21] by Bear Bibeault, Yehuda Katz, and Aurelio De Rosa. Similarly, [_jQuery UI in Action_][22] by TJ VanToll is a great help with the jQuery UI (where draggable and droppable come from). + +Of course, there are many other JavaScript libraries, frameworks, and what-nots around to do good stuff in the user interface. I haven't really started to explore all that jQuery and jQuery UI offer, and I want to play around with the rest to see what can be done. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/open-source-gift-exchange + +作者:[Chris Hermansen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brown-package-red-bow.jpg?itok=oxZYQzH- (Package wrapped with brown paper and red bow) +[2]: https://all-free-download.com/free-vector/patterns-creative-commons.html#google_vignette +[3]: https://opensource.com/tags/gimp +[4]: https://opensource.com/sites/default/files/uploads/bookexchangestart.png (Virtual book exchange) +[5]: https://creativecommons.org/licenses/by-sa/4.0/ +[6]: https://opensource.com/sites/default/files/uploads/bookexchangeperson1.png (Virtual book exchange) +[7]: https://opensource.com/sites/default/files/uploads/bookexchangeperson2.png (Virtual book exchange) +[8]: http://december.com/html/4/element/html.html +[9]: http://december.com/html/4/element/head.html +[10]: http://december.com/html/4/element/meta.html +[11]: http://december.com/html/4/element/title.html +[12]: http://december.com/html/4/element/link.html +[13]: http://december.com/html/4/element/style.html +[14]: 
http://december.com/html/4/element/script.html +[15]: https://code.jquery.com/jquery-1.12.4.js"\>\ +[16]: https://code.jquery.com/ui/1.12.1/jquery-ui.js"\>\ +[17]: http://december.com/html/4/element/body.html +[18]: http://december.com/html/4/element/h1.html +[19]: http://december.com/html/4/element/h2.html +[20]: http://december.com/html/4/element/div.html +[21]: https://www.manning.com/books/jquery-in-action-third-edition +[22]: https://www.manning.com/books/jquery-ui-in-action diff --git a/sources/tech/20210123 Schedule appointments with an open source alternative to Doodle.md b/sources/tech/20210123 Schedule appointments with an open source alternative to Doodle.md new file mode 100644 index 0000000000..4627c2c618 --- /dev/null +++ b/sources/tech/20210123 Schedule appointments with an open source alternative to Doodle.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Schedule appointments with an open source alternative to Doodle) +[#]: via: (https://opensource.com/article/21/1/open-source-scheduler) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +Schedule appointments with an open source alternative to Doodle +====== +Easy!Appointments is an open source appointment scheduler filled with +features to make planning your day easier. +![Working on a team, busy worklife][1] + +In previous years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 13 of 21 Days of Productivity in 2021. + +Setting appointments with other people is difficult. Most of the time, we guess at a date and time and then start the "is this time bad for you? No, that time is bad for me, how about..." dance. It is easier with co-workers since you can see each others' calendars. You just have to find that magic spot that is good for almost everyone who needs to be on the call. 
However, for freelancers managing personal calendars, the dance is a routine part of setting up calls and meetings. + +![Service and Provider set up screen][2] + +Easy!Appointments (Kevin Sonney, [CC BY-SA 4.0][3]) + +This scheduling is a particular challenge for someone like me. Since I interview people for my weekly productivity podcast, finding a time that works for both of us can be extra challenging. + +Finally, one of my guests said, "Hey, to get on my calendar, go to this URL, and pick a time that works for both of us." + +This concept was, to be honest, a revelation. There is software (and services) out there that allows a person requesting the meeting to _pick_ a time that is good for both parties! No more back and forth trying to figure it out! It also means that I could give the person being interviewed control over their own availability. + +![Appointment, Date, and Time settings][4] + +Easy!Appointments (Kevin Sonney, [CC BY-SA 4.0][3]) + +There are several commercial and cloud-hosted solutions that provide this service. The best open source alternative I've used is [Easy!Appointments][5]. It is exceptionally easy to set up and has a handy WordPress plug-in that allows users to put the request form on a page or post. + +Easy!Appointments is geared more towards a service organization, like a helpdesk or a handyman service, with the ability to add multiple people (aka Service Providers) and give them individual schedules. It also allows for various service types. While I only have it set up for one person (me) and one service (an interview), for a helpdesk, it might have four or five people and services like "Set up new laptop," "New hire setup," and "Set up new printer." + +Easy!Appointments can also synchronize with Google Calendar on a per-person basis to automatically add or update any new appointments on their calendar. There are discussions in their issue tracker about support for syncing to additional backends. 
Easy!Appointments also supports multiple languages, time zones, and a whole host of other useful features. + +![Final booking interface][6] + +A final booking (Kevin Sonney, [CC BY-SA 4.0][3]) + +It has been freeing to be able to say, "You can book your interview on this web page," and spending less time negotiating when we are both available to talk. That gives both myself and the other person more time to do more productive things. Whether you are an individual, like me, or a service organization, Easy!Appointments is a big help when scheduling time with other people. + +Need to keep your schedule straight? Learn how to do it using open source with these free... + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/open-source-scheduler + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife) +[2]: https://opensource.com/sites/default/files/day13-image1_1.png +[3]: https://creativecommons.org/licenses/by-sa/4.0/ +[4]: https://opensource.com/sites/default/files/day13-image2_0.png +[5]: https://easyappointments.org/ +[6]: https://opensource.com/sites/default/files/day13-image3_0.png diff --git a/sources/tech/20210124 3 stress-free steps to tackling your task list.md b/sources/tech/20210124 3 stress-free steps to tackling your task list.md new file mode 100644 index 0000000000..04e65e4b06 --- /dev/null +++ b/sources/tech/20210124 3 stress-free steps to tackling your task list.md @@ -0,0 +1,70 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: 
publisher: ( ) +[#]: url: ( ) +[#]: subject: (3 stress-free steps to tackling your task list) +[#]: via: (https://opensource.com/article/21/1/break-down-tasks) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +3 stress-free steps to tackling your task list +====== +Break your larger tasks into small steps to keep from being overwhelmed. +![Team checklist][1] + +In prior years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 14 of 21 Days of Productivity in 2021. + +At the start of the week, I like to review my schedule and look at the things I either need or would like to accomplish. And often, there are some items on that list that are relatively big. Whether it is an issue for work, a series of articles on productivity, or maybe an improvement to the chicken enclosures, the task can seem really daunting when taken as a single job. The odds are good that I will not be able to sit down and finish something like (just as an example, mind you) 21 articles in a single block of time, or even a single day. + +![21 Days of Productivity project screenshot][2] + +21 Days of Productivity (Kevin Sonney, [CC BY-SA 4.0][3]) + +So the first thing I do when I have something like this on my list is to break it down into smaller pieces. As Nobel laureate [William Faulkner][4] famously said, "The man who removes a mountain begins by carrying away small stones." We need to take our big tasks (the mountain) and find the individual steps (the small stones) that need to be done. + +I use the following steps to break down my big tasks into little ones: + + 1. I usually have a fair idea of what needs to be done to complete a task. If not, I do a little research to figure that out. + 2. I write down the steps I think it will take, in order. + 3. 
Finally, I sit down with my calendar and the list and start to spread the tasks out across several days (or weeks, or months) to get an idea of when I might finish it. + + + +Now I have not only a plan but an idea of how long it is going to take. As I complete each step, I can see that big task get not only a little smaller but closer to completion. + +There is an old military saying that goes, "No plan survives contact with the enemy." It is almost certain that there will be a point or two (or five) where I realize that something as simple as "take a screenshot" needs to be expanded into something _much_ more complex. In fact, taking the screenshots of [Easy!Appointments][5] turned out to be: + + 1. Install and configure Easy!Appointments. + 2. Install and configure the Easy!Appointments WordPress plugin. + 3. Generate the API keys needed to sync the calendar. + 4. Take screenshots. + + + +Even then, I had to break these tasks down into smaller pieces—download the software, configure NGINX, validate the installs…you get the idea. And that's OK. A plan, or set of tasks, is not set in stone and can be changed as needed. + +![project completion pie chart][6] + +About 2/3 done for this year! (Kevin Sonney, [CC BY-SA 4.0][3]) + +This is a learned skill and will take some effort the first few times. Learning how to break big tasks into smaller steps allows you to track progress towards a goal or completion of something big without getting overwhelmed in the process. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/break-down-tasks + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist) +[2]: https://opensource.com/sites/default/files/day14-image1.png +[3]: https://creativecommons.org/licenses/by-sa/4.0/ +[4]: https://en.wikipedia.org/wiki/William_Faulkner +[5]: https://opensource.com/article/21/1/open-source-scheduler +[6]: https://opensource.com/sites/default/files/day14-image2_1.png diff --git a/sources/tech/20210126 Automate setup and delivery for virtual machines in the cloud.md b/sources/tech/20210126 Automate setup and delivery for virtual machines in the cloud.md new file mode 100644 index 0000000000..65f7fe07e8 --- /dev/null +++ b/sources/tech/20210126 Automate setup and delivery for virtual machines in the cloud.md @@ -0,0 +1,174 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automate setup and delivery for virtual machines in the cloud) +[#]: via: (https://opensource.com/article/21/1/testcloud-virtual-machines) +[#]: author: (Sumantro Mukherjee https://opensource.com/users/sumantro) + +Automate setup and delivery for virtual machines in the cloud +====== +Get a cloud image ready in minutes by using Testcloud to automate the +setup process and deliver a VM ready to run. +![Looking at a map][1] + +If you're a developer or hobbyist using a Fedora [qcow2 image][2] for the cloud, you always have to do a bunch of initial configuration before an image is ready to use. 
I know this all too well, and I was eager to find a way to make the setup process simpler. As it happens, the entire Fedora quality assurance team feels the same way, so we developed [Testcloud][3].
+
+Testcloud is a tool that makes it easy to get a cloud image ready for testing in minutes. It automates the setup process and delivers a virtual machine (VM) ready to run on the cloud with just a few commands.
+
+Testcloud:
+
+ 1. Downloads the qcow2 image
+ 2. Creates the instance with the name of your choice
+ 3. Creates a user named `fedora` with the password `passw0rd`
+ 4. Assigns an IP, which you can later use to secure shell (SSH) into the cloud
+ 5. Starts, stops, removes, and lists an instance
+
+
+
+### Install Testcloud
+
+To start your journey, you first must install the Testcloud package. You can install it from a terminal or through the software application. In both cases, the package name is `testcloud`. Install with:
+
+
+```
+$ sudo dnf install testcloud -y
+```
+
+Once the installation is complete, add your desired user to the `testcloud` group, which helps Testcloud automate the rest of the process. Execute these two commands to add your user to the `testcloud` group and restart the session with the updated group privileges:
+
+
+```
+$ sudo usermod -a -G testcloud $USER
+$ su - $USER
+```
+
+![Add user to testcloud group][4]
+
+(Sumantro Mukherjee, [CC BY-SA 4.0][5])
+
+### Spin cloud images like a pro
+
+Once your user has the required group permissions, create an instance:
+
+
+```
+$ testcloud instance create -u 
+```
+
+Alternatively, you can use `fedora:latest` or `fedora:XX` (where `XX` is your Fedora release) instead of the full URL:
+
+
+```
+$ testcloud instance create -u fedora:latest
+```
+
+This returns the IP address of your VM:
+
+
+```
+$ testcloud instance create testcloud272593 -u  
+[...]
+INFO:Successfully booted instance testcloud272593 +The IP of vm testcloud272593:  192.168.122.202 +\------------------------------------------------------------ +To connect to the VM, use the following command (password is 'passw0rd'): +ssh fedora@192.168.122.202 +\------------------------------------------------------------ +``` + +You can log in as the default user `fedora` with the password `passw0rd` (note the zero). You can get to the VM with `ssh`, `virt-manager`, or any other method that supports connecting to libvirt machines. + +Another simple way to create a Fedora cloud is: + + +``` +$ testcloud instance create testcloud193 -u fedora:33 +  +WARNING:Not proceeding with backingstore cleanup because there are some testcloud instances running. +You can fix this by following command(s): +testcloud instance stop testcloud272593 + +DEBUG:Local downloads will be stored in /var/lib/testcloud/backingstores. +DEBUG:successfully changed SELinux context for image /var/lib/testcloud/backingstores/Fedora-Cloud-Base-33-1.2.x86_64.qcow2 +DEBUG:Creating instance directories +DEBUG:creating seed image /var/lib/testcloud/instances/testcloud193/testcloud193-seed.img +INFO:Seed image generated successfully +INFO:Successfully booted instance testcloud193 +The IP of vm testcloud193:  192.168.122.225 +\------------------------------------------------------------ +To connect to the VM, use the following command (password is 'passw0rd'): +ssh fedora@192.168.122.225 +\------------------------------------------------------------ +``` + +### Play with instances + +Testcloud can be used to administer instances. This includes activities such as listing images or stopping and starting an instance. 
+ +To list instances, use the `list` subcommand: + + +``` +$ testcloud instance list                 +Name                            IP                      State     +\------------------------------------------------------------ +testcloud272593                 192.168.122.202         running     +testcloud193                    192.168.122.225         running     +testcloud252793                 192.168.122.146         shutoff     +testcloud93                             192.168.122.152         shutoff +``` + +To stop a running instance: + + +``` +$ testcloud instance stop testcloud193   +DEBUG:stop instance: testcloud193 +DEBUG:stopping instance testcloud193. +``` + +To remove an instance: + + +``` +$ testcloud instance destroy testcloud193   +DEBUG:remove instance: testcloud193 +DEBUG:removing instance testcloud193 from libvirt. +DEBUG:Unregistering instance from libvirt. +DEBUG:removing instance /var/lib/testcloud/instances/testcloud193 from disk +``` + +To reboot a running instance: + + +``` +$ testcloud instance reboot testcloud93                                                                                         +DEBUG:stop instance: testcloud93 +[...] +INFO:Successfully booted instance testcloud93 +The IP of vm testcloud93:  192.168.122.152 +usage: testcloud [-h] {instance,image} ... +``` + +Give Testcloud a try and let me know what you think in the comments. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/testcloud-virtual-machines + +作者:[Sumantro Mukherjee][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/sumantro +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map) +[2]: https://en.wikipedia.org/wiki/Qcow +[3]: https://pagure.io/testcloud +[4]: https://opensource.com/sites/default/files/uploads/adduser.png (Add user to testcloud group) +[5]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/sources/tech/20210126 Movim- An Open-Source Decentralized Social Platform Based on XMPP Network.md b/sources/tech/20210126 Movim- An Open-Source Decentralized Social Platform Based on XMPP Network.md new file mode 100644 index 0000000000..0a9565bd73 --- /dev/null +++ b/sources/tech/20210126 Movim- An Open-Source Decentralized Social Platform Based on XMPP Network.md @@ -0,0 +1,107 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Movim: An Open-Source Decentralized Social Platform Based on XMPP Network) +[#]: via: (https://itsfoss.com/movim/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Movim: An Open-Source Decentralized Social Platform Based on XMPP Network +====== + +_**Brief: Movim is an open-source decentralized social media platform that relies on XMPP network and can communicate with other applications using XMPP.**_ + +We’ve already highlighted some [open-source alternatives to mainstream social media platforms][1]. 
In addition to those options available, I have come across another open-source social media platform that focuses on privacy and decentralization. + +### Movim: Open-Source Web-based Social Platform + +![][2] + +Just like some other XMPP desktop clients, [Movim][3] is a web-based XMPP front-end to let you utilize it as a federated social media. + +Since it relies on [XMPP network][4], you can interact with other users utilizing XMPP clients such as [Conversations][5] (for Android) and [Dino][6] (for Desktop). + +In case you didn’t know, XMPP is an open-standard for messaging. + +So, Movim can act as your decentralized messaging app or a full-fledged social media platform giving you an all-in-one experience without relying on a centralized network. + +It offers many features that can appeal to a wide variety of users. Let me briefly highlight most of the important ones. + +![][7] + +### Features of Movim + + * Chatroom + * Ability to organize video conferences + * Publish articles/stories publicly to all federated network + * Tweak the privacy setting of your post + * Easily talk with other Movim users or XMPP users with different clients + * Automatically embed your links and images to your post + * Explore topics easily using hashtags + * Ability to follow a topic or publication + * Auto-save to draft when you type in a post + * Supports Markdown syntax to let you publish informative posts and start a publication on the network for free + * React to chat messages + * Supports GIFs and funny Stickers + * Edit or delete your messages + * Supports screen sharing + * Supports night mode + * Self-hosting option available + * Offers a free public instance as well + * Cross-platform web support + + + +### Using Movim XMPP Client + +![][8] + +In addition to all the features listed above, it is also worth noting that you can also find a Movim mobile app on [F-Droid][9]. 
+ +If you have an iOS device, you might have a hard time looking for a good XMPP client (I’m not aware of any decent options). If you rule that out, you should not have any issues using it on your Android device. + +For desktop, you can simply use Movim’s [public instance][10], sign up for an account, and use it on your favorite browser no matter which platform you’re on. + +You can also deploy your instance by using the Docker Compose script, the Debian package, or any other methods mentioned in their [GitHub page][11]. + +[Movim][3] + +### Concluding Thoughts + +While the idea of decentralized social media platforms is good, not everyone would prefer to use it because they probably do not have friends on it and the user experience is not the best out there. + +That being said, XMPP clients like Movim are trying to make a federated social platform that a general consumer can easily use without any hiccups. + +Just like it took a while for users to look for [WhatsApp alternatives][12], the craze for decentralized social media platform like Movim and [Mastodon][13] is a possibility in the near future as well. + +If you like it, do consider making a donation to their project. + +What do you think about Movim? Let me know your thoughts in the comments below. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/movim/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/mainstream-social-media-alternaives/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/movim-dark-mode.jpg?resize=800%2C486&ssl=1 +[3]: https://movim.eu/ +[4]: https://xmpp.org/ +[5]: https://conversations.im/ +[6]: https://itsfoss.com/dino-xmpp-client/ +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/movim-discover.png?resize=800%2C466&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/movim-eu.jpg?resize=800%2C464&ssl=1 +[9]: https://f-droid.org/packages/com.movim.movim/ +[10]: https://nl.movim.eu +[11]: https://github.com/movim/movim +[12]: https://itsfoss.com/private-whatsapp-alternatives/ +[13]: https://itsfoss.com/mastodon-open-source-alternative-twitter/ diff --git a/sources/tech/20210127 Build a programmable light display on Raspberry Pi.md b/sources/tech/20210127 Build a programmable light display on Raspberry Pi.md new file mode 100644 index 0000000000..b7b0ee31b5 --- /dev/null +++ b/sources/tech/20210127 Build a programmable light display on Raspberry Pi.md @@ -0,0 +1,218 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Build a programmable light display on Raspberry Pi) +[#]: via: (https://opensource.com/article/21/1/light-display-raspberry-pi) +[#]: author: (Darin London https://opensource.com/users/dmlond) + +Build a programmable light display on Raspberry Pi +====== +Celebrate the holidays or any special occasion with a DIY light display +using a Raspberry Pi, Python, and programmable LED lights. 
+![Lightbulb][1] + +This past holiday season, I decided to add some extra joy to our house by setting up a DIY light display. I used a Raspberry Pi, a programmable light string, and Python. + + + +You can set up your own light display for any occasion, thanks to the flexibility of the WS12911/2 (or NeoPixel) system, by following these directions. + +### Prerequisites + +You will need: + + * 1 – Raspberry Pi with headers and an Ethernet or WiFi connection. I used a Raspberry Pi Zero W with headers. + * 1 – WS12811/2 light string. I used the [Alitove WS2811 Addressable LED Pixel Light 50][2], but many other types are available. Adafruit brands these as [NeoPixel][3]. + * 1 – [5v/10A AC-DC power supply for WS12811][4] if you use the Alitove. Other lights may come with a power supply. + * 1 – Breadboard + * 2 – Breadboard-to-Pi-header jumper wires. I used blue for the Pi GPIO pin 18 and black for the Pi ground. + * 1 – 74AHCT125 level converter chip to safely transmit Pi GPIO wire signals to 5v/10A power without feeding back to the Pi. + * 8 – Breadboard-to-breadboard jumper wires or solid-core 24 AWG wires. I used red/orange for 5v power, black for ground, and yellow for data. + * 1 – SD card with Raspberry Pi OS installed. I used [Raspberry Pi OS Lite][5] and set it up in a headless mode with SSH enabled. + + + +### What are WS2811/2 programmable LEDs? + +The [WS2811/2 class of programmable lights][6] integrates red, green, and blue LED lights with a driver chip into a tiny surface-mounted package controlled through a single wire. + +![Programmable LED light][7] + +(Darin London, [CC BY-SA 4.0][8]) + +Each light can be individually programmed using an RGB set of integers or hex equivalents. These lights can be packaged together into matrices, strings, and other form factors, and they can be programmatically accessed using a data structure that makes sense for the form factor. The light strings I use are addressed using a standard Python list. 
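The "RGB set of integers or hex equivalents" mentioned above are two spellings of the same 24-bit value, and converting between them is just bit-shifting. A quick sketch:

```python
def hex_to_rgb(value):
    # Split a 24-bit color like 0xFF9900 into an (R, G, B) tuple
    return ((value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF)

def rgb_to_hex(r, g, b):
    # Pack an (R, G, B) tuple back into a single 24-bit integer
    return (r << 16) | (g << 8) | b

print(hex_to_rgb(0xFF0000))        # (255, 0, 0) -- full red
print(hex(rgb_to_hex(0, 255, 0)))  # 0xff00 -- full green
```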
Adafruit has a great [tutorial on wiring and controlling your lights][9].
+
+### Control NeoPixel LEDs with Python
+
+Adafruit has created a full suite of Python libraries for most of the parts it sells. These are designed to work with [CircuitPython][10], Adafruit's port of Python designed for low-cost microcontroller boards. You do not need to install CircuitPython on the Raspberry Pi OS because the preinstalled Python 2 and Python 3 are compatible.
+
+You will need `pip3` to install libraries for Python 3. Install it with:
+
+
+```
+sudo apt-get install python3-pip
+```
+
+Then install the following libraries:
+
+ * [rpi_ws281x][11]
+ * [Adafruit-circuitpython-neopixel][12]
+ * [Adafruit-blinka][13]
+
+
+
+Once these libraries and their dependencies are installed, you can write code like the following to program one or more lights wired to your Raspberry Pi using `sudo python3` (sudo is required):
+
+
+```
+import board
+import neopixel
+
+num_lights = 50
+# program 50 lights with the defaults: brightness=1.0 and auto_write=True
+pixels = neopixel.NeoPixel(board.D18, num_lights)
+# light 20 bright green
+pixels[19] = (0, 255, 0)
+# light all pixels red
+pixels.fill((255, 0, 0))
+# turn off neopixels
+pixels.fill((0, 0, 0))
+```
+
+### Set up your lighting system
+
+ 1. Install the SD card into the Raspberry Pi and secure it, the breadboard, and lights [where they need to be][14] (velcro works for the Pi and breadboard).
+
+ 2. Install the 74AHCT125 level converter chip, light, power supply, and Pi according to this schematic:
+
+![Wiring schematic][15]
+
+([Kattni Rembor][16], [CC BY-SA 4.0][8])
+
+ 3. String additional lights to the first light using their connectors. Note the total number of lights.
+
+ 4. Plug the power supply into the wall.
+
+ 5. Plug the Raspberry Pi power supply into the wall, and wait for it to boot.
+
+
+
+
+![Lighting hardware wiring][17]
+
+(Darin London, [CC BY-SA 4.0][8])
+
+![Lighting hardware wiring][18]
+
+(Darin London, [CC BY-SA 4.0][8])
+
+![Lighting hardware wiring][19]
+
+(Darin London, [CC BY-SA 4.0][8])
+
+### Install the light controller and Flask web application
+
+I wrote a Python application and library to interact with the lights and a Flask web application that runs on the Pi. See my [Raspberry Pi Neopixel Controller][20] GitHub repository for more information about the code.
+
+#### The lib.neopixc library
+
+The [`lib.neopixc` library][21] extends the `neopixel.NeoPixel` class to work with two 50-light Alitove light strands connected in series, using a programmable list of RGB color lists. It adds the following functions:
+
+ * `set_colors`: Takes a new list of lists of RGB colors
+ * `walk`: Walks through each light and sets it to the corresponding color in order
+ * `rotate`: Pushes the last color in the list of lists to the beginning of the list of lists for blinking the lights
+
+
+
+If you have a different number of lights, you will need to edit this library to change the `self._num_lights` value. Also, some lights require a different argument in the `order` constructor attribute. The Alitove is compatible with the default order attribute `neopixel.GRBW`.
+
+#### The run_lights.py script
+
+The [`run_lights.py` script][22] uses `lib.neopixc` to support a colors file and a state file that dynamically set how the lights behave at any time. The colors file is a JSON array of arrays of RGB (or RGBW) integers that is fed as the colors to the `lib.neopixc` object using its `set_colors` method. The state file can hold one of three words:
+
+ * `static`: Does not rotate the lights with each iteration of the while loop
+ * `blink`: Rotates the lights with each iteration of the main while loop
+ * `down`: Turns all the lights off
+
+
+
+If the state file does not exist, the default state is `static`.
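The walk and rotate behaviors described above can be sketched in isolation, with a plain list standing in for the NeoPixel object (this illustrates the idea, not the repository's actual code):

```python
def rotate(colors):
    # Push the last color to the front of the list (one blink step)
    return [colors[-1]] + colors[:-1]

def walk(pixels, colors):
    # Set each light to the colors in order, repeating the palette as needed
    for i in range(len(pixels)):
        pixels[i] = colors[i % len(colors)]

pixels = [(0, 0, 0)] * 6          # stand-in for a NeoPixel object
palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
walk(pixels, palette)             # red, green, blue, red, green, blue
palette = rotate(palette)         # the next iteration starts one color later
```

Calling `walk` once per loop iteration after a `rotate` is what makes the colors appear to march down the string.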
+
+The script also has HUP and INT signal handlers, which will turn off the lights when those signals are received.
+
+Note: Because the GPIO 18 pin requires sudo on the Raspberry Pi to work, the `run_lights.py` script must be run with sudo.
+
+#### The neopixel_controller application
+
+The `neopixel_controller` Flask application, in the neopix_controller directory of the GitHub repository (see below), offers a front-end browser graphical user interface (GUI) to control the lights. My Raspberry Pi connects to my WiFi and is accessible at raspberrypi.local, so I can open the GUI by pointing a browser at that hostname. Alternatively, you can use ping to find the IP address of raspberrypi.local and use it as the hostname, which is useful if you have multiple Raspberry Pi devices connected to your WiFi.
+
+![Flask app UI][23]
+
+(Darin London, [CC BY-SA 4.0][8])
+
+The current state and three front-end buttons use JavaScript to interact with a set of REST API endpoints presented by the Flask application:
+
+ * `/api/v1/state`: Returns the current state of the shared state file, which defaults to `static` if the state file does not exist
+ * `/api/v1/blink`: Sets the state file to blink
+ * `/api/v1/static`: Sets the state file to static
+ * `/api/v1/down`: Sets the state file to down
+
+
+
+I wrote two scripts and corresponding JSON definition files that launch `run_lights.py` and the Flask application:
+
+ * `launch_christmas.sh`
+ * `launch_new_years.sh`
+
+
+
+These can be launched from a command-line session (terminal or SSH) on the Pi after it is set up (they do not require sudo, but use sudo internally):
+
+
+```
+./launch_christmas.sh
+```
+
+You can turn off the lights and stop `run_lights.py` and the Flask application by using `lights_down.sh`.
+
+The code for the library and the Flask application is in the [Raspberry Pi Neopixel Controller][20] GitHub repository.
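Each of those endpoints boils down to reading or writing the one-word shared state file. The core logic can be sketched without the Flask plumbing (the file path here is my own choice for illustration, not the repository's):

```python
from pathlib import Path

STATE_FILE = Path("neopixel_state")   # shared between the web app and run_lights.py
VALID = ("blink", "static", "down")

def get_state():
    # What /api/v1/state returns: the file contents, or "static" if the file is absent
    return STATE_FILE.read_text().strip() if STATE_FILE.exists() else "static"

def set_state(word):
    # What /api/v1/blink, /api/v1/static, and /api/v1/down do: overwrite the file
    if word not in VALID:
        raise ValueError(f"unknown state: {word}")
    STATE_FILE.write_text(word)

set_state("blink")
print(get_state())  # blink
```

Keeping the state in a file is what lets the web process and the `run_lights.py` loop stay decoupled: neither needs to know the other's process ID.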
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/light-display-raspberry-pi + +作者:[Darin London][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dmlond +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb-idea-think-yearbook-lead.png?itok=5ZpCm0Jh (Lightbulb) +[2]: https://www.amazon.com/gp/product/B06XD72LYM +[3]: https://www.adafruit.com/category/168 +[4]: https://www.amazon.com/gp/product/B01M0KLECZ +[5]: https://opensource.com/article/20/6/custom-raspberry-pi +[6]: https://learn.adafruit.com/adafruit-neopixel-uberguide +[7]: https://opensource.com/sites/default/files/uploads/led_1.jpg (Programmable LED light) +[8]: https://creativecommons.org/licenses/by-sa/4.0/ +[9]: https://learn.adafruit.com/neopixels-on-raspberry-pi +[10]: https://circuitpython.org/ +[11]: https://pypi.org/project/rpi-ws281x/ +[12]: https://circuitpython.readthedocs.io/projects/neopixel/en/latest/api.html +[13]: https://pypi.org/project/Adafruit-Blinka/ +[14]: https://gpiozero.readthedocs.io/en/stable/recipes.html#pin-numbering +[15]: https://opensource.com/sites/default/files/uploads/schematic.png (Wiring schematic) +[16]: https://learn.adafruit.com/assets/64121 +[17]: https://opensource.com/sites/default/files/uploads/wiring.jpg (Lighting hardware wiring) +[18]: https://opensource.com/sites/default/files/uploads/wiring2.jpg (Lighting hardware wiring) +[19]: https://opensource.com/sites/default/files/uploads/wiring3.jpg (Lighting hardware wiring) +[20]: https://github.com/dmlond/raspberry_pi_neopixel +[21]: https://github.com/dmlond/raspberry_pi_neopixel/blob/main/lib/neopixc.py +[22]: 
https://github.com/dmlond/raspberry_pi_neopixel/blob/main/run_lights.py +[23]: https://opensource.com/sites/default/files/uploads/neopixelui.png (Flask app UI) diff --git a/sources/tech/20210127 Introduction to Thunderbird mail filters.md b/sources/tech/20210127 Introduction to Thunderbird mail filters.md new file mode 100644 index 0000000000..423d48f86d --- /dev/null +++ b/sources/tech/20210127 Introduction to Thunderbird mail filters.md @@ -0,0 +1,174 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Introduction to Thunderbird mail filters) +[#]: via: (https://fedoramagazine.org/introduction-to-thunderbird-mail-filters/) +[#]: author: (Richard England https://fedoramagazine.org/author/rlengland/) + +Introduction to Thunderbird mail filters +====== + +![][1] + +Everyone eventually runs into an inbox loaded with messages that they need to sort through. If you are like a lot of people, this is not a fast process. However, use of mail filters can make the task a little less tedious by letting Thunderbird pre-sort the messages into categories that reflect their source, priority, or usefulness. This article is an introduction to the creation of filters in Thunderbird. + +Filters may be created for each email account you have created in Thunderbird. These are the accounts you see in the main Thunderbird folder pane shown at the left of the “Classic Layout”. + +![Classic Layout][2] + +There are two methods that can be used to create mail filters for your accounts. The first is based on the currently selected account and the second on the currently selected message. Both are discussed here. + +### Message destination folder + +Before filtering messages there has to be a destination for them. Create the destination by selecting a location to create a new folder. In this example the destination will be **Local Folders** shown in the accounts pane. 
Right click on **Local Folders** and select _New Folder…_ from the menu. + +![Creating a new folder][3] + +Enter the name of the new folder in the menu and select _Create Folder._ The mail to filter is coming from the New York Times so that is the name entered. + +![Folder creation][4] + +### Filter creation based on the selected account + +Select the _Inbox_ for the account you wish to filter and select the toolbar menu item at _Tools > Message_Filters_. + +![Message_Filters menu location][5] + +The _Message Filters_ menu appears and is set to your pre-selected account as indicated at the top in the selection menu labelled _Filters for:_. + +![Message Filters menu][6] + +Previously created filters, if any, are listed beneath the account name in the “_Filter Name”_ column. To the right of this list are controls that let you modify the filters selected. These controls are activated when you select a filter. More on this later. + +Start creating your filter as follows: + + 1. Verify the correct account has been pre-selected. It may be changed if necessary. + 2. Select _New…_ from the menu list at the right. + + + +When you select _New_ you will see the _Filter Rules_ menu where you define your filter. Note that when using _New…_ you have the option to copy an existing filter to use as a template or to simply duplicate the settings. + +Filter rules are made up of three things, the “property” to be tested, the “test”, and the “value” to be tested against. Once the condition is met, the “action” is performed. + +![Message Filters menu][7] + +Complete this filter as follows: + + 1. Enter an appropriate name in the textbox labelled _Filter name:_ + 2. Select the property _From_ in the left drop down menu, if not set. + 3. Leave the test set to _contains_. + 4. Enter the value, in this case the email address of the sender. + + + +Under the _Perform these actions:_ section at the bottom, create an action rule to move the message and choose the destination. + + 1. 
Select _Move Messages to_ from the left end of the action line.
+ 2. Select _Choose Folder…_ and select _Local Folders > New York Times_.
+ 3. Select _OK_.
+
+By default, **Apply filter when:** is set to _Manually Run_ and _Getting New Mail:_. This means that when new mail appears in the Inbox for this account the filter will be applied, and you may run it manually at any time, if necessary. There are other options available, but they are too numerous to be discussed in this introduction. They are, however, for the most part self-explanatory.
+
+If more than one rule or action is to be created during the same session, the “+” to the right of each entry provides that option. Additional property, test, and value entries can be added. If more than one rule is created, make certain the appropriate option, _Match all of the following_ or _Match any of the following_, is selected. In this example the choice does not matter since we are only setting one rule.
+
+After selecting _OK,_ the _Message Filters_ menu is displayed again, showing your newly created filter. Note that the _Edit…_ and _Delete_ items on the right side of the menu are now active.
+
+![First filter in the Message Filters menu][8]
+
+Also notice the message _“Enabled filters are run automatically in the order shown below”_. If there are multiple filters, the order is changed by selecting the one to be moved and using the _Move to Top, Move Up, Move Down,_ or _Move to Bottom_ buttons. The order can change the destination of your messages, so consider the tests used in each filter carefully when deciding the order.
+
+Since you have just created this filter, you may wish to use the _Run Now_ button to run it on the Inbox shown to the left of the button.
+
+### Filter creation based on a message
+
+An alternative creation technique is to select a message from the message pane and use the _Create Filter From Message…_ option from the menu bar.
+
+In this example the filter will use two rules to select the messages: the email address and a text string in the Subject line of the email. Start as follows:
+
+ 1. Select a message in the message pane.
+ 2. Select the filter options on the toolbar at _Message > Create Filter From Message…_.
+
+![Create new filters from Messages][9]
+
+The pre-selected message, highlighted in grey in the message pane above, determines the account used, and _Create Filter From Message…_ takes you directly to the _Filter Rules_ menu.
+
+![][10]
+
+The property (_From_), test (_is_), and value (email address) are pre-set for you as shown in the image above. Complete this filter as follows:
+
+ 1. Enter an appropriate name in the textbox labelled _Filter name:_. _COVID_ is the name in this case.
+ 2. Check that the property is _From_.
+ 3. Verify the test is set to _is_.
+ 4. Confirm that the value for the email address is from the correct sender.
+ 5. Select the “+” to the right of the _From_ rule to create a new filter rule.
+ 6. In the new rule, change the default property entry _From_ to _Subject_ using the pull-down menu.
+ 7. Set the test to _contains_.
+ 8. Enter the value text to be matched in the email “Subject” line, in this case _COVID_.
+
+Since we left the _Match all of the following_ item checked, each message must be from the address chosen AND have the text _COVID_ in the email subject line.
+
+Now use the action rule to choose the destination for the messages under the _Perform these actions:_ section at the bottom:
+
+ 1. Select _Move Messages to_ from the left menu.
+ 2. Select _Choose Folder…_ and select _Local Folders > COVID in Scotland_. (This destination was created before this example was started. There was no magic here.)
+ 3. Select _OK_.
+
+_OK_ will cause the _Message Filters_ menu to appear again, verifying that the new filter has been created.
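The _Match all of the following_ setting combines the rules with AND; _Match any of the following_ would use OR instead. As a rough sketch of that logic (plain Python, not Thunderbird's implementation; the sender address is made up for illustration):

```python
# Illustrative sketch only -- this is not Thunderbird code.
# Each rule is a (property, test, value) tuple mirroring the Filter Rules menu.

def rule_matches(message, rule):
    prop, test, value = rule
    field = message.get(prop, "")
    if test == "is":            # exact match, as in "From is ..."
        return field.lower() == value.lower()
    if test == "contains":      # substring match, as in "Subject contains ..."
        return value.lower() in field.lower()
    raise ValueError(f"unknown test: {test}")

def filter_matches(message, rules, match_all=True):
    # "Match all of the following" = AND; "Match any of the following" = OR
    combine = all if match_all else any
    return combine(rule_matches(message, rule) for rule in rules)

# The COVID filter from this example: both rules must hold.
rules = [("from", "is", "reporter@example.com"),
         ("subject", "contains", "COVID")]
message = {"from": "reporter@example.com",
           "subject": "COVID in Scotland: weekly update"}
print(filter_matches(message, rules, match_all=True))   # True: both rules match
```

With _Match any_, a message from a different sender would still be moved as long as its subject contains _COVID_, which is why the choice matters once a filter has more than one rule.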
+
+### The Message Filters menu
+
+All the message filters you create will appear in the _Message Filters_ menu. Recall that the _Message Filters_ menu is available in the menu bar at _Tools > Message Filters_.
+
+Once you have created filters, there are several options to manage them. To change a filter, select the filter in question and click on the _Edit_ button. This will take you back to the _Filter Rules_ menu for that filter. As mentioned earlier, you can change the order in which the filters are applied here using the _Move_ buttons. Disable a filter by clicking on the check mark in the _Enabled_ column.
+
+![][11]
+
+The _Run Now_ button will execute the selected filter immediately. You may also run your filters from the menu bar using _Tools > Run Filters on Folder_ or _Tools > Run Filters on Message_.
+
+### Next step
+
+This article hasn’t covered every feature available for message filtering, but hopefully it provides enough information for you to get started. Places for further investigation are the “property”, “test”, and “action” options in the _Filter Rules_ menu, as well as the settings there for when your filter is to be run: _Archiving_, _After Sending_, and _Periodically_.
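A note for the curious: each account's filters end up in a plain-text file named `msgFilterRules.dat` inside that account's directory in your Thunderbird profile. The New York Times filter created earlier would be stored roughly like this (a sketch, not copied from a real profile; exact field values such as the `type` code, the sender address, and the folder URI will vary):

```
version="9"
logging="no"
name="New York Times"
enabled="yes"
type="17"
action="Move to folder"
actionValue="mailbox://nobody@Local%20Folders/New%20York%20Times"
condition="AND (from,contains,nytimes@example.com)"
```

If you ever edit this file by hand, do so with Thunderbird closed so the running program does not overwrite your changes; for most purposes the _Message Filters_ menu is the safer tool.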
+ +### References + +Mozilla: [Organize][12] [Your Messages][12] [by Using Filters][12] + +MozillaZine: [Message][13] [Filters][13] + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/introduction-to-thunderbird-mail-filters/ + +作者:[Richard England][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/rlengland/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2021/01/Tbird_mail_filters-1-816x345.jpg +[2]: https://fedoramagazine.org/wp-content/uploads/2021/01/Image_001-1024x613.png +[3]: https://fedoramagazine.org/wp-content/uploads/2021/01/Image_New_Folder.png +[4]: https://fedoramagazine.org/wp-content/uploads/2021/01/Folder_name-1.png +[5]: https://fedoramagazine.org/wp-content/uploads/2021/01/Image_002-2-1024x672.png +[6]: https://fedoramagazine.org/wp-content/uploads/2021/01/Image_Message_Filters-1.png +[7]: https://fedoramagazine.org/wp-content/uploads/2021/01/Filter_rules_1-1.png +[8]: https://fedoramagazine.org/wp-content/uploads/2021/01/Messsage_Filters_1st_entry.png +[9]: https://fedoramagazine.org/wp-content/uploads/2021/01/Create_by_messasge.png +[10]: https://fedoramagazine.org/wp-content/uploads/2021/01/Filter_rules_2-1.png +[11]: https://fedoramagazine.org/wp-content/uploads/2021/01/Message_Filters_2nd_entry.png +[12]: https://support.mozilla.org/en-US/kb/organize-your-messages-using-filters +[13]: http://kb.mozillazine.org/Filters_%28Thunderbird%29 diff --git a/sources/tech/20210128 Interview with Shuah Khan, Kernel Maintainer - Linux Fellow.md b/sources/tech/20210128 Interview with Shuah Khan, Kernel Maintainer - Linux Fellow.md new file mode 100644 index 0000000000..1d22b6f0fb --- /dev/null +++ b/sources/tech/20210128 Interview with Shuah Khan, Kernel Maintainer - 
Linux Fellow.md @@ -0,0 +1,173 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Interview with Shuah Khan, Kernel Maintainer & Linux Fellow) +[#]: via: (https://www.linux.com/news/interview-with-shuah-khan-kernel-maintainer-linux-fellow/) +[#]: author: (The Linux Foundation https://www.linuxfoundation.org/en/blog/interview-with-shuah-khan-kernel-maintainer-linux-fellow/) + +Interview with Shuah Khan, Kernel Maintainer & Linux Fellow +====== + +![][1] + +_Jason Perlow, Director of Project Insights and Editorial Content at the Linux Foundation, had an opportunity to speak with Shuah Khan about her experiences as a woman in the technology industry. She discusses how mentorship can improve the overall diversity and makeup of open source projects, why software maintainers are important for the health of open source projects such as the Linux kernel, and how language inclusivity and codes of conduct can improve relationships and communication between software maintainers and individual contributors._ + +**JP:** So, Shuah, I know you wear many different hats at the Linux Foundation. What do you call yourself around here these days? + +**SK:** <laughs> Well, I primarily call myself a Kernel Maintainer & Linux Fellow. In addition to that, I focus on two areas that are important to the continued health and sustainability of the open source projects in the Linux ecosystem. The first one is bringing more women into the Kernel community, and additionally, I am leading the mentorship program efforts overall at the Linux Foundation. And in that role, in addition to the Linux Kernel Mentorship, we are looking at how the Linux Foundation mentorship program is working overall, how it is scaling. I make sure the [LFX Mentorship][2] platform scales and serves diverse mentees and mentors’ needs in this role. 
+
+The LF mentorship program includes several projects in the Linux kernel, LFN, Hyperledger, Open Mainframe, OpenHPC, and other technologies. [The Linux Foundation’s Mentorship Programs][3] are designed to help developers with the necessary skills–many of whom are first-time open source contributors–experiment, learn, and contribute effectively to open source communities.
+
+The mentorship program has been successful in its mission to train new developers and make these talented pools of prospective employees, trained by experts, available to employers. Several graduated mentees have found jobs. New developers have improved the quality and security of various open source projects, including the Linux kernel. Several Linux kernel bugs were fixed, a new subsystem mentor was added, and a new driver maintainer is now part of the Linux kernel community. My sincere thanks to all our mentors for volunteering to share their expertise.
+
+**JP:** How long have you been working on the Kernel?
+
+**SK:** Since 2010 or 2011, when I got involved in the [Android Mainlining project][4]. My [first patch removed the Android pmem driver][5].
+
+**JP:** Wow! Is there any particular subsystem that you specialize in?
+
+**SK:** I am a self-described generalist. I maintain the [kernel self-test][6] subsystem, the [USB over IP driver][7], [usbip tool][8], and the [cpupower][9] tool. I contributed to the media subsystem working on the [Media Controller Device Allocator API][10] to resolve shared device resource management problems across device drivers from different subsystems.
+
+**JP:** Hey, I’ve [actually used the USB over IP driver][11] when I worked at Microsoft on Azure. And also, when I’ve used AWS and Google Compute.
+
+**SK:** It’s a small niche driver used in cloud computing. Docker and other containers use that driver heavily. That’s how they provide remote access to USB devices on the server, exporting devices to be imported by other systems for use.
+
+**JP:** I initially used it for IoT kinds of stuff in the embedded systems space. Were you the original lead developer on it, or was it one of those things you fell into because nobody else was maintaining it?
+
+**SK:** Well, twofold. I was looking at USB over IP because I like that technology. It just so happened the driver was brought from the staging tree into the mainline kernel, and I volunteered at the time to maintain it. Over the last few years, we discovered some security issues with it, because it handles a lot of userspace data, so I had a lot of fun fixing all of those. <laugh>.
+
+**JP:** What drew you into the Linux operating system, and what drew you into the kernel development community in the first place?
+
+**SK:** Well, I have been doing kernel development for a very long time. I worked on the [LynxOS RTOS][12] a while back, and then HP-UX when I was working at HP, after which I transitioned into doing open source development — the [OpenHPI][13] project, to support HP’s rack server hardware, and that allowed me to work much more closely with Linux on the back end. And at some point, I decided I wanted to work with the kernel and become part of the Linux kernel community. I started as an independent contributor.
+
+**JP:** Maybe it just displays my own ignorance, but you are the first female, hardcore Linux kernel developer I have ever met. I mean, I had met female core OS developers before — such as when I was at Microsoft and IBM — but not for Linux. Why do you suppose we lack women and diversity in general when participating in open source and the technology industry overall?
+
+**SK:** So I’ll answer this question from my perspective, from what I have seen and experienced over the years. You are right; you probably don’t come across that many hardcore women kernel developers.
I’ve been working professionally in this industry since the early 1990s, and on every project I have been involved with, I am usually the only woman sitting at the table. Some of it, I think, is culture and society. There are some roles that we are told are acceptable to women — even me, when I was thinking about going into engineering as a profession. Some of it has to do with where we are guided, as a natural path. + +There’s a natural resistance to choosing certain professions that you have to overcome first within yourself and externally. This process is different for everybody based on their personality and their origin story. And once you go through the hurdle of getting your engineering degree and figuring out which industry you want to work in, there is a level of establishing credibility in those work environments you have to endure and persevere. Sometimes when I would walk into a room, I felt like people were looking at me and thinking, “why is she here?” You aren’t accepted right away, and you have to overcome that as well. You have to go in there and say, “I am here because I want to be here, and therefore, I belong here.” You have to have that mindset. Society sends you signals that “this profession is not for me” — and you have to be aware of that and resist it. I consider myself an engineer that happens to be a woman as opposed to a woman engineer. + +**JP:** Are you from India, originally? + +**SK:** Yes. + +**JP:** It’s funny; my wife really likes this [Netflix show about matchmaking in India][14]. Are you familiar with it? + +**SK:** <laughs> Yes I enjoyed the series, and [A Suitable Girl][15] documentary film that follows three women as they navigate making decisions about their careers and family obligations. + +**JP:** For many Americans, this is our first introduction to what home life is like for Indian people. But many of the women featured on this show are professionals, such as doctors, lawyers, and engineers. 
And they are very ambitious, but of course, the family tries to set them up in a marriage to find a husband for them that is compatible. As a result, you get to learn about the traditional values and roles they still want women to play there — while at the same time, many women are coming out of higher learning institutions in that country that are seeking technical careers. + +**SK:** India is a very fascinatingly complex place. But generally speaking, in a global sense, having an environment at home where your parents tell you that you may choose any profession you want to choose is very encouraging. I was extremely fortunate to have parents like that. They never said to me that there was a role or a mold that I needed to fit into. They have always told me, “do what you want to do.” Which is different; I don’t find that even here, in the US. Having that support system, beginning in the home to tell you, “you are open to whatever profession you want to choose,” is essential. That’s where a lot of the change has to come from. + +**JP:** Women in technical and STEM professions are becoming much more prominent in other countries, such as China, Japan, and Korea. For some reason, in the US, I tend to see more women enter the medical profession than hard technology — and it might be a level of effort and perceived reward thing. You can spend eight years becoming a medical doctor or eight years becoming a scientist or an engineer, and it can be equally difficult, but the compensation at the end may not be the same. It’s expensive to get an education, and it takes a long time and hard work, regardless of the professional discipline. + +**SK:** I have also heard that women also like to enter professions where they can make a difference in the world — a human touch, if you will. So that may translate to them choosing careers where they can make a larger impact on people — and they may view careers in technology as not having those same attributes. 
Maybe when we think about attracting women to technology fields, we might have to promote technology aspects that make a difference. That may be changing now, such as the [LF Public Health][16] (LFPH) project we kicked off last year. And with [LF AI & Data Foundation][17], we are also making a difference in people’s lives, such as [detecting earthquakes][18] or [analyzing climate change][19]. If we were to promote projects such as these, we might draw more women in. + +**JP:** So clearly, one of the areas of technology where you can make a difference is in open source, as the LF is hosting some very high-concept and existential types of projects such as [LF Energy][20], for example — I had no idea what was involved in it and what its goals were until I spoke to [Shuli Goodman][21] in-depth about it. With the mentorship program, I assume we need this to attract fresh talent — because as folks like us get older and retire, and they exit the field, we need new people to replace them. So I assume mentorship, for the Linux Foundation, is an investment in our own technologies, correct? + +**SK:** Correct. Bringing in new developers into the fold is the primary purpose, of course — and at the same time, I view the LF as taking on mentorship provides that neutral, level playing field across the industry for all open source projects. Secondly, we offer a self-service platform, [LFX Mentorship][22], where anyone can come in and start their project. So when the COVID-19 pandemic began, we [expanded this program to help displaced people][3] — students, et cetera, and less visible projects. Not all projects typically get as much funding or attention as others do — such as a Kubernetes or  Linux kernel — among the COVID mentorship program projects we are funding. I am particularly proud of supporting a climate change-related project, [Using Machine Learning to Predict Deforestation][23]. 
+ +The self-service approach allows us to fund and add new developers to projects where they are needed. The LF mentorships are remote work opportunities that are accessible to developers around the globe. We see people sign up for mentorship projects from places we haven’t seen before, such as Africa, and so on, thus creating a level playing field. + +The other thing that we are trying to increase focus on is how do you get maintainers? Getting new developers is a starting point, but how do we get them to continue working on the projects they are mentored on? As you said, someday, you and I and others working on these things are going to retire, maybe five or ten years from now. This is a harder problem to solve than training and adding new developers to the project itself. + +**JP:** And that is core to our [software supply chain security mission][24]. It’s one thing to have this new, flashy project, and then all these developers say, “oh wow, this is cool, I want to join that,” but then, you have to have a certain number of people maintaining it for it to have long-term viability. As we learned in our [FOSS study with Harvard][25], there are components in the Linux operating system that are like this. Perhaps even modules within the kernel itself, I assume that maybe you might have only one or two people actively maintaining it for many years. And what happens if that person dies or can no longer work? What happens to that code? And if someone isn’t familiar with that code, it might become abandoned. That’s a serious problem in open source right now, isn’t it? + +**SK:** Right. We have seen that with SSH and other security-critical areas. What if you don’t have the bandwidth to fix it? Or the money to fix it? I ended up volunteering to maintain a tool for a similar reason when the maintainer could no longer contribute regularly. It is true; we have many drivers where maintainer bandwidth is an issue in the kernel. 
So the question is, how do we grow that talent pool? + +**JP:** Do we need a job board or something? We need X number of maintainers. So should we say, “Hey, we know you want to join the kernel project as a contributor, and we have other people working on this thing, but we really need your help working on something else, and if you do a good job, we know tons of companies willing to hire developers just like you?” + +**SK:** With the kernel, we are talking about organic growth; it is just like any other open source project. It’s not a traditional hire and talent placement scenario. Organically they have to have credibility, and they have to acquire it through experience and relationships with people on those projects. We just talked about it at the previous [Linux Plumbers Conference][26], we do have areas where we really need maintainers, and the [MAINTAINERS][27] file does show areas where they need help. + +To answer your question, it’s not one of those things where we can seek people to fill that role, like LinkedIn or one of the other job sites. It has to be an organic fulfillment of that role, so the mentorship program is essential in creating those relationships. It is the double-edged sword of open source; it is both the strength and weakness. People need to have an interest in becoming a maintainer and also a commitment to being one, long term. + +**JP:** So, what do you see as the future of your mentorship and diversity efforts at the Linux Foundation? What are you particularly excited about that is forthcoming that you are working on? + +**SK:** I view the Linux Foundation mentoring as a three-pronged approach to provide unstructured webinars, training courses, and structured mentoring programs. All of these efforts combine to advance a diverse, healthy, and vibrant open source community. So over the past several months, we have been morphing our speed mentorship style format into an expanded webinar format — the [LF Live Mentorship series][28]. 
This will have the function of growing our next level of expertise. As a complement to our traditional mentorship programs, these are webinars and courses, an hour and a half long, that we hold a few times a month to tackle specific technical areas in software development. So it might cover how to write great commit logs, for example, for your patches to be accepted, or how to find bugs in C code. Commit logs are one of those things that are important to code maintenance, so promoting good documentation is a beneficial thing. Webinars provide a way for experts short on time to share their knowledge with a few hours of time commitment and offer a self-paced learning opportunity to new developers.
+
+Additionally, I have started the [Linux Kernel Mentorship forum][29] for developers and their mentors to connect and interact with others participating in the Linux Kernel Mentorship program, and for graduated mentees to mentor new developers. We kicked off [Linux Kernel mentorship Spring 2021][30] and are planning for Summer and Fall.
+
+A big challenge is that we are short on mentors to scale the structured program. Solving the problem requires help from LF member companies and others to encourage their employees to mentor; “it takes a village,” as they say.
+
+**JP:** So this webinar series and the expanded mentorship program will help developers cultivate both hard and soft skills, then.
+
+**SK:** Correct. The thing about doing webinars is that, if we are talking about this from a diversity perspective, people might not have time for a full-length mentorship, typically a three-month or six-month commitment. This might help them expand their resources for self-study. When we ask developers for feedback about what else they need to learn new skill sets, we hear that they don’t have resources and don’t have time for the self-study needed to become open source developers and software maintainers.
This webinar series covers general open source software topics such as the Linux kernel and legal issues. It could also cover topics specific to other LF projects such as CNCF, Hyperledger, LF Networking, etc. + +**JP:** Anything else we should know about the mentorship program in 2021? + +**SK:** In my view,  attracting diversity and new people is two-fold. One of the things we are working on is inclusive language. Now, we’re not talking about curbing harsh words, although that is a component of what we are looking at. The English you and I use in North America isn’t the same English used elsewhere. As an example, when we use North American-centric terms in our email communications, such as when a maintainer is communicating on a list with people from South Korea, something like “where the rubber meets the road” may not make sense to them at all. So we have to be aware of that. + +**JP:** I know that you are serving on the [Linux kernel Code of Conduct Committee][31] and actively developing the handbook. When I first joined the Linux Foundation, I learned what the Community Managers do and our governance model. I didn’t realize that we even needed to have codes of conduct for open source projects. I have been covering open source for 25 years, but I come out of the corporate world, such as IBM and Microsoft. Codes of Conduct are typically things that the Human Resources officer shows you during your initial onboarding, as part of reviewing your employee manual. You are expected to follow those rules as a condition of employment. + +So why do we need Codes of Conduct in an open source project? Is it because these are people who are coming from all sorts of different backgrounds, companies, and ways of life, and may not have interacted in this form of organized and distributed project before? Or is it about personalities, people interacting with each other over long distance, and email, which creates situations that may arise due to that separation? 
+ +**SK:** Yes, I come out of the corporate world as well, and of course, we had to practice those codes of conduct in that setting. But conduct situations arise that you have to deal with in the corporate world. There are always interpersonal scenarios that can be difficult or challenging to work with — the corporate world isn’t better than the open source world in that respect. It is just that all of that happens behind a closed setting. + +But there is no accountability in the open source world because everyone participates out of their own free will. So on a small, traditional closed project, inside the corporate world, where you might have 20 people involved, you might get one or two people that could be difficult to work with. The same thing happens and is multiplied many times in the open source community, where you have hundreds of thousands of developers working across many different open source projects. + +The biggest problem with these types of projects when you encounter situations such as this is dealing with participation in public forums. In the corporate world, this can be addressed in private. But on a public mailing list, if you are being put down or talked down to, it can be extremely humiliating. + +These interactions are not always extreme cases; they could be simple as a maintainer or a lead developer providing negative feedback — so how do you give it? It has to be done constructively. And that is true for all of us. + +**JP:** Anything else? + +**SK:** In addition to bringing our learnings and applying this to the kernel project, I am also doing this on the [ELISA][32] project, where I chair the Technical Steering Committee, where I am bridging communication between experts from the kernel and the safety communities. To make sure we can use the kernel the best ways in safety-critical applications, in the automotive and medical industry, and so on. 
Many lessons can be learned in terms of connecting the dots, defining clearly what is essential to make Linux run effectively in these environments, in terms of dependability. How can we think more proactively instead of being engaged in fire-fighting in terms of security or kernel bugs? As a result of this, I am also working on any necessary kernel changes needed to support these safety-critical usage scenarios. + +**JP:** Before we go, what are you passionate about besides all this software stuff? If you have any free time left, what else do you enjoy doing? + +**SK:** I read a lot. COVID quarantine has given me plenty of opportunities to read. I like to go hiking, snowshoeing, and other outdoor activities. Living in Colorado gives me ample opportunities to be in nature. I also like backpacking — while I wasn’t able to do it last year because of COVID — I like to take backpacking trips with my son. I also love to go to conferences and travel, so I am looking forward to doing that again as soon as we are able. + +Talking about backpacking reminded me of the two-day, 22-mile backpacking trip during the summer of 2019 with my son. You can see me in the picture above at the end of the road, carrying a bearbox, sleeping bag, and hammock. It was worth injuring my foot and hurting in places I didn’t even know I had. + +**JP:** Awesome. I enjoyed talking to you today. So happy I finally got to meet you virtually. + +The post [Interview with Shuah Khan, Kernel Maintainer & Linux Fellow][33] appeared first on [Linux Foundation][34]. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/news/interview-with-shuah-khan-kernel-maintainer-linux-fellow/ + +作者:[The Linux Foundation][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxfoundation.org/en/blog/interview-with-shuah-khan-kernel-maintainer-linux-fellow/ +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/wp-content/uploads/2021/01/3E9C3E02-5F59-4A99-AD4A-814C7B8737A9_1_105_c.jpeg +[2]: https://lfx.linuxfoundation.org/tools/mentorship/ +[3]: https://linuxfoundation.org/about/diversity-inclusivity/mentorship/ +[4]: https://elinux.org/Android_Mainlining_Project +[5]: https://lkml.org/lkml/2012/1/26/368 +[6]: https://www.kernel.org/doc/html/v4.15/dev-tools/kselftest.html +[7]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/usb/usbip +[8]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/usb/usbip +[9]: https://www.systutorials.com/docs/linux/man/1-cpupower/ +[10]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/media/mc/mc-dev-allocator.c +[11]: https://www.linux-magazine.com/Issues/2018/208/Tutorial-USB-IP +[12]: https://en.wikipedia.org/wiki/LynxOS +[13]: http://www.openhpi.org/Developers +[14]: https://www.netflix.com/title/80244565 +[15]: https://en.wikipedia.org/wiki/A_Suitable_Girl_(film) +[16]: https://www.lfph.io/ +[17]: https://lfaidata.foundation/ +[18]: https://openeew.com/ +[19]: https://www.os-climate.org/ +[20]: https://www.lfenergy.org/ +[21]: mailto:sgoodman@contractor.linuxfoundation.org +[22]: https://mentorship.lfx.linuxfoundation.org/ +[23]: https://mentorship.lfx.linuxfoundation.org/project/926665ac-9b96-45aa-bb11-5d99096be870 +[24]: 
https://www.linuxfoundation.org/en/blog/preventing-supply-chain-attacks-like-solarwinds/
+[25]: https://www.linuxfoundation.org/en/press-release/new-open-source-contributor-report-from-linux-foundation-and-harvard-identifies-motivations-and-opportunities-for-improving-software-security/
+[26]: https://www.linuxplumbersconf.org/
+[27]: https://www.kernel.org/doc/linux/MAINTAINERS
+[28]: https://events.linuxfoundation.org/lf-live-mentorship-series/
+[29]: https://forum.linuxfoundation.org/categories/lfx-mentorship-linux-kernel
+[30]: https://forum.linuxfoundation.org/discussion/858202/linux-kernel-mentorship-spring-projects-are-now-accepting-applications#latest
+[31]: https://www.kernel.org/code-of-conduct.html
+[32]: https://elisa.tech/
+[33]: https://www.linuxfoundation.org/en/blog/interview-with-shuah-khan-kernel-maintainer-linux-fellow/
+[34]: https://www.linuxfoundation.org/
diff --git a/sources/tech/20210128 Open Source Security Foundation (OpenSSF)- Reflection and Future.md b/sources/tech/20210128 Open Source Security Foundation (OpenSSF)- Reflection and Future.md
new file mode 100644
index 0000000000..493b72b562
--- /dev/null
+++ b/sources/tech/20210128 Open Source Security Foundation (OpenSSF)- Reflection and Future.md
@@ -0,0 +1,89 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Open Source Security Foundation (OpenSSF): Reflection and Future)
+[#]: via: (https://www.linux.com/news/open-source-security-foundation-openssf-reflection-and-future/)
+[#]: author: (The Linux Foundation https://www.linuxfoundation.org/en/blog/openssf-reflection-and-future/)
+
+Open Source Security Foundation (OpenSSF): Reflection and Future
+======
+
+The [Open Source Security Foundation (OpenSSF)][1] officially [launched on August 3, 2020][2]. In this article, we’ll look at why the OpenSSF was formed, what it’s accomplished in its first six months, and its plans for the future.
+ +The world depends on open source software (OSS), so OSS security is vital. Various efforts have been created to help improve OSS security. These efforts include the Core Infrastructure Initiative (CII) in the Linux Foundation, the Open Source Security Coalition (OSSC) founded by the GitHub Security Lab, and the Joint Open Source Software Initiative (JOSSI) founded by Google and others. + +It became apparent that progress would be easier if these efforts merged into a single effort. The OpenSSF was created in 2020 as a merging of these three groups into “a cross-industry collaboration that brings together leaders to improve the security of open source software (OSS).” + +The OpenSSF has certainly gained that “cross-industry collaboration”; its dozens of members include (alphabetically) Canonical, GitHub, Google, IBM, Intel, Microsoft, and Red Hat. Its governing board also includes a Security Community Individual Representative to represent those not represented in other ways specifically. It’s also created some structures to help people work together: it’s established active working groups, identified (and posted) its values, and agreed on its technical vision. + +But none of that matters unless they actually _produce_ results. It’s still early, but they already have several accomplishments. They have released: + + * [Secure Software Development Fundamentals courses][3]. This set of 3 freely-available courses on the edX platform is for software developers to learn to develop secure software. It focuses on practical steps that any software developer can easily take, not theory or actions requiring unlimited resources.  Developers can also pay a fee to take tests to attempt to earn certificates to prove they understand the material. + * [Security Scorecards][4]. This auto-generates a “security score” for open source projects to help users as they decide the trust, risk, and security posture for their use case. + * [Criticality Score][5]. 
This project auto-generates a criticality score for open source projects based on a number of parameters. The goal is to better understand the most critical open source projects the world depends on. + * [Security metrics dashboard][6]. This early-release work provides a dashboard of security and sustainment information about OSS projects by combining the Security ScoreCards, CII Best Practices, and other data sources. + * [OpenSSF CVE Benchmark][7]. This benchmark consists of vulnerable code and metadata for over 200 historical JavaScript/TypeScript vulnerabilities (CVEs). This will help security teams evaluate different security tools on the market by enabling teams to determine false positive and false negative rates with real codebases instead of synthetic test code. + * [OWASP Security Knowledge Framework (SKF)][8]. In collaboration with OWASP, this work is a knowledge base that includes projects with checklists and best practice code examples in multiple programming languages. It includes training materials for developers on how to write secure code in specific languages and security labs for hands-on work. + * [Report on the 2020 FOSS Contributor Survey][9]. The OpenSSF and the Laboratory for Innovation Science at Harvard (LISH) released a report that details the findings of a contributor survey to study and identify ways to improve OSS security and sustainability. There were nearly 1,200 respondents. + + + +The existing [CII Best Practices badge][10] project has also been folded into the OpenSSF and continues to be improved. The project now has more Chinese translators, a new ongoing Swahili translation, and various small refinements that clarify the badging requirements. + +The [November 2020 OpenSSF Town Hall][11] discussed the OpenSSF's ongoing work.
The OpenSSF currently has the following working groups: + + * Vulnerability Disclosures + * Security Tooling + * Security Best Practices + * Identifying Security Threats to Open Source Projects (focusing on a metrics dashboard) + * Securing Critical Projects + * Digital Identity Attestation + + + +Future potential work, other than continuously improving work already released, includes: + + * Identifying overlapping and related security requirements in various specifications to reduce duplicate effort. This is to be developed in collaboration with OWASP as lead and is termed the [Common Requirements Enumeration (CRE)][12]. The CRE is to “link sections of standard[s] and guidelines to each other, using a mutual topic identifier, enabling standard and scheme makers to work efficiently, enabling standard users to find the information they need, and attaining a shared understanding in the industry of what cyber security is.” [Source: “Common Requirements Enumeration”] + * Establishing a website for no-install access to a security metrics OSS dashboard. Again, this will provide a single view of data from multiple data sources, including the Security Scorecards and CII Best Practices. + * Developing improved identification of critical OSS projects. Harvard and the LF have previously worked to identify critical OSS projects. In the coming year, they will refine their approaches and add new data sources to identify critical OSS projects better. + * Funding specific critical OSS projects to improve their security. The expectation is that this will focus on critical OSS projects that are not otherwise being adequately funded and will work to improve their overall sustainability. + * Identifying and implementing improved, simplified techniques for digitally signing commits and verifying those identity attestations. + + + +As with all Linux Foundation projects, the work by the OpenSSF is decided by its participants. 
If you are interested in the security of the OSS we all depend on, check out the OpenSSF and participate in some way. The best way to get involved is to attend the working group meetings — they are usually every other week and very casual. By working together we can make a difference. For more information, see [https://openssf.org][1]. + +_[David A. Wheeler][13], Director of Open Source Supply Chain Security at the Linux Foundation_ + +The post [Open Source Security Foundation (OpenSSF): Reflection and Future][14] appeared first on [Linux Foundation][15]. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/news/open-source-security-foundation-openssf-reflection-and-future/ + +作者:[The Linux Foundation][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxfoundation.org/en/blog/openssf-reflection-and-future/ +[b]: https://github.com/lujun9972 +[1]: https://openssf.org/ +[2]: https://www.linuxfoundation.org/en/press-release/technology-and-enterprise-leaders-combine-efforts-to-improve-open-source-security/ +[3]: https://openssf.org/blog/2020/10/29/announcing-secure-software-development-edx-course-sign-up-today/ +[4]: https://openssf.org/blog/2020/11/06/security-scorecards-for-open-source-projects/ +[5]: https://github.com/ossf/criticality_score +[6]: https://github.com/ossf/Project-Security-Metrics +[7]: https://openssf.org/blog/2020/12/09/introducing-the-openssf-cve-benchmark/ +[8]: https://owasp.org/www-project-security-knowledge-framework/ +[9]: https://www.linuxfoundation.org/en/press-release/new-open-source-contributor-report-from-linux-foundation-and-harvard-identifies-motivations-and-opportunities-for-improving-software-security/ +[10]: https://bestpractices.coreinfrastructure.org/ +[11]:
https://openssf.org/blog/2020/11/23/openssf-town-hall-recording-now-available/ +[12]: https://owasp.org/www-project-integration-standards/ +[13]: mailto:dwheeler@linuxfoundation.org +[14]: https://www.linuxfoundation.org/en/blog/openssf-reflection-and-future/ +[15]: https://www.linuxfoundation.org/ diff --git a/sources/tech/20210128 Start programming in Racket by writing a -guess the number- game.md b/sources/tech/20210128 Start programming in Racket by writing a -guess the number- game.md new file mode 100644 index 0000000000..4f0bb194a6 --- /dev/null +++ b/sources/tech/20210128 Start programming in Racket by writing a -guess the number- game.md @@ -0,0 +1,156 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Start programming in Racket by writing a "guess the number" game) +[#]: via: (https://opensource.com/article/21/1/racket-guess-number) +[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana) + +Start programming in Racket by writing a "guess the number" game +====== +Racket is a great way to learn a language from the Scheme and Lisp +families. +![Person using a laptop][1] + +I am a big advocate of learning multiple programming languages. That's mostly because I tend to get bored with the languages I use the most. It also teaches me new and interesting ways to approach programming. + +Writing the same program in multiple languages is a good way to learn their differences and similarities. Previously, I wrote articles showing the same sample data plotting program written in [C & C++][2], JavaScript with [Node.js][3], and [Python and Octave][4]. + +This article is part of another series about writing a "guess the number" game in different programming languages. In this game, the computer picks a number between one and 100 and asks you to guess it. The program loops until you make a correct guess. 
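Since this series is about writing the same game in many languages, it may help to keep a minimal Python sketch of the game's logic in mind for comparison (this is my own illustration, not the series' official Python version):

```python
import random


def respond(number, guess):
    """Compare a guess to the target and return the game's reply."""
    if guess < number:
        return "Too low"
    if guess > number:
        return "Too high"
    return "Correct!"


def play(ask=input):
    """Loop until the player guesses the number; `ask` is injectable for testing."""
    number = random.randint(1, 100)  # both endpoints included
    print("Guess a number between 1 and 100")
    while True:
        answer = respond(number, int(ask("Insert a number: ")))
        print(answer)
        if answer == "Correct!":
            return
```

Calling `play()` in a Python REPL produces the same dialogue as the Racket program shown later in this article; keeping this structure in mind makes the Racket version easier to follow.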
+ +### Learning a new language + +Venturing into a new language always feels awkward—I feel like I am losing time since it would be much quicker to use the tools I know and use all the time. Luckily, at the start, I am also very enthusiastic about learning something new, and this helps me overcome the initial pain. And once I learn a new perspective or a solution that I would never have thought of, things become interesting! Learning new languages also helps me backport new techniques to my old and tested tools. + +When I start learning a new language, I usually look for a tutorial that introduces me to its [syntax][5]. Once I have a feeling for the syntax, I start working on a program I am familiar with and look for examples that will adapt to my needs. + +### What is Racket? + +[Racket][6] is a programming language in the [Scheme family][7], which is a dialect of [Lisp][8]. Lisp is also a family of languages, which can make it hard to decide which "dialect" to start with when you want to learn Lisp. All of the implementations have various degrees of compatibility, and this plethora of options might turn away newbies. I think that is a pity because these languages are really fun and stimulating! + +Starting with Racket makes sense because it is very mature and versatile, and the community is very active. Since Racket is a Lisp-like language, a major characteristic is that it uses the [prefix notation][9] and a [lot of parentheses][10]. Functions and operators are applied to a list of operands by prefixing them: + + +``` +(function-name operand operand ...) + +(+ 2 3) +↳ Returns 5 + +(list 1 2 3 5) +↳ Returns a list containing 1, 2, 3, and 5 + +(define x 1) +↳ Defines a variable called x with a value of 1 + +(define (f x y) (* x y)) +↳ Defines a function called f with two parameters called x and y that returns their product.
+``` + +This is basically all there is to know about Racket syntax; the rest is learning the functions from the [documentation][11], which is very thorough. There are other aspects of the syntax, like [keyword arguments][12] and [quoting][13], but you do not need them for this example. + +Mastering Racket might be difficult, and its syntax might look weird (especially if you are used to languages like Python), but I find it very fun to use. A big bonus is Racket's programming environment, [DrRacket][14], which is very supportive, especially when you are getting started with the language. + +The major Linux distributions offer packaged versions of Racket, so [installation][15] should be easy. + +### Guess the number game in Racket + +Here is a version of the "guess the number" program written in Racket: + + +``` +#lang racket + +(define (inquire-user number) +  (display "Insert a number: ") +  (define guess (string->number (read-line))) +  (cond [(> number guess) (displayln "Too low") (inquire-user number)] +        [(< number guess) (displayln "Too high") (inquire-user number)] +        [else (displayln "Correct!")])) + +(displayln "Guess a number between 1 and 100") +(inquire-user (random 1 101)) +``` + +Save this listing to a file called `guess.rkt` and run it: + + +``` +$ racket guess.rkt +``` + +Here is some example output: + + +``` +Guess a number between 1 and 100 +Insert a number: 90 +Too high +Insert a number: 50 +Too high +Insert a number: 20 +Too high +Insert a number: 10 +Too low +Insert a number: 12 +Too low +Insert a number: 13 +Too low +Insert a number: 14 +Too low +Insert a number: 15 +Correct! +``` + +### Understanding the program + +I'll go through the program line by line. The first line declares the language the listing is written in: `#lang racket`. This might seem strange, but Racket is very good at [writing interpreters][16] for new [domain-specific languages][17]. Do not panic, though!
You can use Racket as it is because it is very rich in tools. + +Now for the next line. `(define ...)` is used to declare new variables or functions. Here, it defines a new function called `inquire-user` that accepts the parameter `number`. The `number` parameter is the random number that the user will have to guess. The rest of the code inside the parentheses of the `define` procedure is the body of the `inquire-user` function. Notice that the function name contains a dash; this is Racket's idiomatic style for writing a long variable name. + +This function recursively calls itself to repeat the question until the user guesses the right number. Note that I am not using loops; I feel that Racket programmers do not like loops and only use recursive functions. This approach is idiomatic to Racket, but if you prefer, [loops are an option][18]. + +The first step of the `inquire-user` function asks the user to insert a number by writing that string to the console. Then it defines a variable called `guess` that contains whatever the user entered. The [`read-line` function][19] returns the user input as a string. The string is then converted to a number with the [`string->number` function][20]. After the variable definition, the [`cond` function][21] accepts a series of conditions. If a condition is satisfied, it executes the code inside that condition. These conditions, `(> number guess)` and `(< number guess)`, are followed by two functions: a `displayln` that gives clues to the user and an `inquire-user` call. The function calls itself again when the user does not guess the right number. The `else` clause executes when the two conditions are not met, i.e., the user enters the correct number. This `inquire-user` function is the guts of the program. + +However, the function still needs to be called! First, the program asks the user to guess a number between 1 and 100, and then it calls the `inquire-user` function with a random number.
The random number is generated with the [`random` function][22]. You need to inform the function that you want to generate a number between 1 and 100, but the `random` function generates integer numbers up to `max-1`, so I used 101. + +### Try Racket + +Learning new languages is fun! I am a big advocate of programming languages polyglotism because it brings new, interesting approaches and insights to programming. Racket is a great opportunity to start learning how to program with a Lisp-like language. I suggest you give it a try. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/racket-guess-number + +作者:[Cristiano L. Fontana][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cristianofontana +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop) +[2]: https://opensource.com/article/20/2/c-data-science +[3]: https://opensource.com/article/20/6/data-science-nodejs +[4]: https://opensource.com/article/20/2/python-gnu-octave-data-science +[5]: https://en.wikipedia.org/wiki/Syntax_(programming_languages) +[6]: https://racket-lang.org/ +[7]: https://en.wikipedia.org/wiki/Scheme_(programming_language) +[8]: https://en.wikipedia.org/wiki/Lisp_(programming_language) +[9]: https://en.wikipedia.org/wiki/Polish_notation +[10]: https://xkcd.com/297/ +[11]: https://docs.racket-lang.org/ +[12]: https://rosettacode.org/wiki/Named_parameters#Racket +[13]: https://docs.racket-lang.org/guide/quote.html +[14]: https://docs.racket-lang.org/drracket/ +[15]: https://download.racket-lang.org/ +[16]: https://docs.racket-lang.org/guide/hash-languages.html +[17]: 
https://en.wikipedia.org/wiki/Domain-specific_language +[18]: https://docs.racket-lang.org/heresy/conditionals.html +[19]: https://docs.racket-lang.org/reference/Byte_and_String_Input.html?q=read-line#%28def._%28%28quote._~23~25kernel%29._read-line%29%29 +[20]: https://docs.racket-lang.org/reference/generic-numbers.html?q=string-%3Enumber#%28def._%28%28quote._~23~25kernel%29._string-~3enumber%29%29 +[21]: https://docs.racket-lang.org/reference/if.html?q=cond#%28form._%28%28lib._racket%2Fprivate%2Fletstx-scheme..rkt%29._cond%29%29 +[22]: https://docs.racket-lang.org/reference/generic-numbers.html?q=random#%28def._%28%28lib._racket%2Fprivate%2Fbase..rkt%29._random%29%29 diff --git a/sources/tech/20210129 Machine learning made easy with Python.md b/sources/tech/20210129 Machine learning made easy with Python.md new file mode 100644 index 0000000000..da6e7f078f --- /dev/null +++ b/sources/tech/20210129 Machine learning made easy with Python.md @@ -0,0 +1,218 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Machine learning made easy with Python) +[#]: via: (https://opensource.com/article/21/1/machine-learning-python) +[#]: author: (Girish Managoli https://opensource.com/users/gammay) + +Machine learning made easy with Python +====== +Solve real-world machine learning problems with Naïve Bayes classifiers. +![arrows cycle symbol for failing faster][1] + +Naïve Bayes is a classification technique that serves as the basis for implementing several classifier modeling algorithms. Naïve Bayes-based classifiers are considered some of the simplest, fastest, and easiest-to-use machine learning techniques, yet are still effective for real-world applications. + +Naïve Bayes is based on [Bayes' theorem][2], formulated by 18th-century statistician [Thomas Bayes][3]. This theorem assesses the probability that an event will occur based on conditions related to the event. 
For example, an individual with [Parkinson's disease][4] typically has voice variations; hence such symptoms are considered related to the prediction of a Parkinson's diagnosis. The original Bayes' theorem provides a method to determine the probability of a target event, and the Naïve variant extends and simplifies this method. + +### Solving a real-world problem + +This article demonstrates a Naïve Bayes classifier's capabilities to solve a real-world problem (as opposed to a complete business-grade application). I'll assume you have basic familiarity with machine learning (ML), so some of the steps that are not primarily related to ML prediction, such as data shuffling and splitting, are not covered here. If you are an ML beginner or need a refresher, see _[An introduction to machine learning today][5]_ and _[Getting started with open source machine learning][6]_. + +The Naïve Bayes classifier is [supervised][7], [generative][8], non-linear, [parametric][9], and [probabilistic][10]. + +In this article, I'll demonstrate using Naïve Bayes with the example of predicting a Parkinson's diagnosis. The dataset for this example comes from this [UCI Machine Learning Repository][11]. This data includes several speech signal variations to assess the likelihood of the medical condition; this example will use the first eight of them: + + * **MDVP:Fo(Hz):** Average vocal fundamental frequency + * **MDVP:Fhi(Hz):** Maximum vocal fundamental frequency + * **MDVP:Flo(Hz):** Minimum vocal fundamental frequency + * **MDVP:Jitter(%)**, **MDVP:Jitter(Abs)**, **MDVP:RAP**, **MDVP:PPQ**, and **Jitter:DDP:** Five measures of variation in fundamental frequency + + + +The dataset used in this example, shuffled and split for use, is available in my [GitHub repository][12]. + +### ML with Python + +I'll use Python to implement the solution. 
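Before looking at the library code, it can help to see Bayes' rule itself computed directly. This small sketch uses made-up numbers (my own illustration, not the article's dataset) to compute a posterior probability:

```python
def bayes_posterior(prior, likelihood, evidence):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / evidence


# Made-up numbers: suppose 1% of a population has a condition (prior),
# a screening flags 90% of true cases (likelihood), and flags 10% of
# everyone screened overall (evidence).
posterior = bayes_posterior(prior=0.01, likelihood=0.90, evidence=0.10)
# posterior = 0.9 * 0.01 / 0.1 = 0.09, i.e., a 9% chance given a flag
```

A Naïve Bayes classifier applies this same rule per feature, under the assumption that the features are independent.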
The software I used for this application is: + + * Python 3.8.2 + * Pandas 1.1.1 + * scikit-learn 0.22.2.post1 + + + +There are several open source Naïve Bayes classifier implementations available in Python, including: + + * **NLTK Naïve Bayes:** Based on the standard Naïve Bayes algorithm for text classification + * **NLTK Positive Naïve Bayes:** A variant of NLTK Naïve Bayes that performs binary classification with partially labeled training sets + * **Scikit-learn Gaussian Naïve Bayes:** Provides partial fit to support a data stream or very large dataset + * **Scikit-learn Multinomial Naïve Bayes:** Optimized for discrete data features, example counts, or frequency + * **Scikit-learn Bernoulli Naïve Bayes:** Designed for binary/Boolean features + + + +I will use [sklearn Gaussian Naive Bayes][13] for this example. + +Here is my Python implementation of `naive_bayes_parkinsons.py`: + + +``` +import pandas as pd + +# Feature columns we use +x_rows=['MDVP:Fo(Hz)','MDVP:Fhi(Hz)','MDVP:Flo(Hz)', +        'MDVP:Jitter(%)','MDVP:Jitter(Abs)','MDVP:RAP','MDVP:PPQ','Jitter:DDP'] +y_rows=['status'] + +# Train + +# Read train data +train_data = pd.read_csv('parkinsons/Data_Parkinsons_TRAIN.csv') +train_x = train_data[x_rows] +train_y = train_data[y_rows] +print("train_x:\n", train_x) +print("train_y:\n", train_y) + +# Load sklearn Gaussian Naive Bayes and fit +from sklearn.naive_bayes import GaussianNB + +gnb = GaussianNB() +gnb.fit(train_x, train_y) + +# Accuracy score on train data +predict_train = gnb.predict(train_x) +print('Prediction on train data:', predict_train) + +# Accuracy score on train data +from sklearn.metrics import accuracy_score +accuracy_train = accuracy_score(train_y, predict_train) +print('Accuracy score on train data:', accuracy_train) + +# Test + +# Read test data +test_data = pd.read_csv('parkinsons/Data_Parkinsons_TEST.csv') +test_x = test_data[x_rows] +test_y = test_data[y_rows] + +# Prediction on test data +predict_test = gnb.predict(test_x)
+print('Prediction on test data:', predict_test) + +# Accuracy score on test data +accuracy_test = accuracy_score(test_y, predict_test) +print('Accuracy score on test data:', accuracy_test) +``` + +Run the Python application: + + +``` +$ python naive_bayes_parkinsons.py + +train_x: +      MDVP:Fo(Hz)  MDVP:Fhi(Hz) ...  MDVP:RAP  MDVP:PPQ  Jitter:DDP +0        152.125       161.469  ...   0.00191   0.00226     0.00574 +1        120.080       139.710  ...   0.00180   0.00220     0.00540 +2        122.400       148.650  ...   0.00465   0.00696     0.01394 +3        237.323       243.709  ...   0.00173   0.00159     0.00519 +..           ...           ...           ...  ...       ...       ...         +155      138.190       203.522  ...   0.00406   0.00398     0.01218 + +[156 rows x 8 columns] + +train_y: +      status +0         1 +1         1 +2         1 +3         0 +..      ... +155       1 + +[156 rows x 1 columns] + +Prediction on train data: [1 1 1 0 ... 1] +Accuracy score on train data: 0.6666666666666666 + +Prediction on test data: [1 1 1 1 ... 1 + 1 1] +Accuracy score on test data: 0.6666666666666666 +``` + +The accuracy scores on the train and test sets are 67% in this example; its performance can be optimized. Do you want to give it a try? If so, share your approach in the comments below. + +### Under the hood + +The Naïve Bayes classifier is based on Bayes' rule or theorem, which computes conditional probability, or the likelihood for an event to occur when another related event has occurred. Stated in simple terms, it answers the question: _If we know the probability that event x occurred before event y, then what is the probability that y will occur when x occurs again?_ The rule uses a prior-prediction value that is refined gradually to arrive at a final [posterior][14] value. A fundamental assumption of Bayes is that all parameters are of equal importance. + +At a high level, the steps involved in Bayes' computation are: + + 1.
Compute overall posterior probabilities ("Has Parkinson's" and "Doesn't have Parkinson's") + 2. Compute probabilities of posteriors across all values and each possible value of the event + 3. Compute final posterior probability by multiplying the results of #1 and #2 for desired events + + + +Step 2 can be computationally quite arduous. Naïve Bayes simplifies it: + + 1. Compute overall posterior probabilities ("Has Parkinson's" and "Doesn't have Parkinson's") + 2. Compute probabilities of posteriors for desired event values + 3. Compute final posterior probability by multiplying the results of #1 and #2 for desired events + + + +This is a very basic explanation, and several other factors must be considered, such as data types, sparse data, missing data, and more. + +### Hyperparameters + +Naïve Bayes, being a simple and direct algorithm, does not need hyperparameters. However, specific implementations may provide advanced features. For example, [GaussianNB][13] has two: + + * **priors:** Prior probabilities can be specified instead of the algorithm taking the priors from data. + * **var_smoothing:** This provides the ability to consider data-curve variations, which is helpful when the data does not follow a typical Gaussian distribution. + + + +### Loss functions + +Maintaining its philosophy of simplicity, Naïve Bayes uses a [0-1 loss function][15]. If the prediction correctly matches the expected outcome, the loss is 0, and it's 1 otherwise. + +### Pros and cons + +**Pro:** Naïve Bayes is one of the easiest and fastest algorithms. +**Pro:** Naïve Bayes gives reasonable predictions even with less data. +**Con:** Naïve Bayes predictions are estimates, not precise. It favors speed over accuracy. +**Con:** A fundamental Naïve Bayes assumption is the independence of all features, but this may not always be true. + +In essence, Naïve Bayes is an extension of Bayes' theorem. 
It is one of the simplest and fastest machine learning algorithms, intended for easy and quick training and prediction. Naïve Bayes provides good-enough, reasonably accurate predictions. One of its fundamental assumptions is the independence of prediction features. Several open source implementations are available with traits over and above what are available in the Bayes algorithm. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/machine-learning-python + +作者:[Girish Managoli][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/gammay +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster) +[2]: https://en.wikipedia.org/wiki/Bayes%27_theorem +[3]: https://en.wikipedia.org/wiki/Thomas_Bayes +[4]: https://en.wikipedia.org/wiki/Parkinson%27s_disease +[5]: https://opensource.com/article/17/9/introduction-machine-learning +[6]: https://opensource.com/business/15/9/getting-started-open-source-machine-learning +[7]: https://en.wikipedia.org/wiki/Supervised_learning +[8]: https://en.wikipedia.org/wiki/Generative_model +[9]: https://en.wikipedia.org/wiki/Parametric_model +[10]: https://en.wikipedia.org/wiki/Probabilistic_classification +[11]: https://archive.ics.uci.edu/ml/datasets/parkinsons +[12]: https://github.com/gammay/Machine-learning-made-easy-Naive-Bayes/tree/main/parkinsons +[13]: https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html +[14]: https://en.wikipedia.org/wiki/Posterior_probability +[15]: https://en.wikipedia.org/wiki/Loss_function#0-1_loss_function diff --git a/sources/tech/20210130 How I de-clutter my digital 
workspace.md b/sources/tech/20210130 How I de-clutter my digital workspace.md new file mode 100644 index 0000000000..4a62c73b9e --- /dev/null +++ b/sources/tech/20210130 How I de-clutter my digital workspace.md @@ -0,0 +1,73 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How I de-clutter my digital workspace) +[#]: via: (https://opensource.com/article/21/1/declutter-workspace) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +How I de-clutter my digital workspace +====== +Archive old email and other files to de-clutter your digital workspace. +![video editing dashboard][1] + +In prior years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 20 of 21 Days of Productivity in 2021. + +I am a digital pack-rat. So many of us are. After all, who knows when we'll need that email our partner sent asking us to pick up milk on our way home from work in 2009? + +The truth is, we don't need it. We _really_ don't. However, large cloud providers have given us so much storage space for cheap or for free that we don't even think about it anymore. When I can have unlimited documents, notes, to-do items, calendar appointments, and email, why _shouldn't_ I just keep everything? + +![Marie Kondo indicating clearing email will bring you joy][2] + +It really does. (Kevin Sonney, [CC BY-SA 4.0][3]) + +When dealing with physical items, like a notebook or a stack of documents, there comes a point where it is obvious we need to move it off our desks or out of our offices. We need to store it in some other place where we can get to it if we need to, but also know it will be safe. Eventually, that too fills up, and we are forced to clean out that storage as well. + +Digital storage is really no different, only we're tempted to keep more things in it. 
If I have a note on my desk to pick something up on the way home, I'm going to throw it away when I'm done with it. That same note in my shared notebook with my wife is likely just to stay there. Maybe we'll re-use it, maybe we'll just let it sit there, taking up space. + +This approach is the same as "hot" and "cold" storage. Hot storage is the most recent and relevant data that tends to be accessed frequently. Cold storage is for the archives that we might need to refer to in the future and may have historical significance, but don't need to be accessed frequently. + +Last year I took the time to export all of my emails from before 2019 and put them in an archive file. Then I deleted them. Why? For starters, I really didn't need any of it anymore. Sure, it is nice to have the emails my spouse and I sent each other when we started dating, but they are not something I look at daily or even monthly. I could put them in cold storage, where they would be safe, and I could get them when I did want to look at them. The same for the emails and schedules for the conventions I had worked at before the pandemic. Do I need to have the schedule grid for my department at AnthroCon 2015 at my fingertips? NOPE. + +### Archiving messages + +The process of archiving messages will differ, depending on what email client you use, but the general idea is the same. In KMail, the email client from KDE, you can archive (and export, by nature of the archival process) a folder of messages by right-clicking on a folder and selecting **Archive Folder**. I also have KMail remove the messages after completing the archive. + +![Archiving a directory of messages in KMail][4] + +On the GNOME side of things, you can either export a folder of messages as an **mbox** file or you can use the **Save As** option to export it as an archive. + +![Archive file of 2007-2019 email.][5] + +8 GB of mail (Kevin Sonney, [CC BY-SA 4.0][3]) + +If you're not on Linux, you might look into using the Thunderbird client.
In Thunderbird, highlight the messages you want to archive (or press **Ctrl+A** to select all of them), and then right-click and select **Archive**.
+
+![Archiving mail in Thunderbird][6]
+
+### Cold storage
+
+I have been managing my documents, notes, online to-do lists, and so on, by archiving the excess. I keep the most recent and relevant items and move the rest in an archive file that does not live on my machine. If they no longer have any relevance, I just delete them altogether. That has made finding the relevant information much easier because there aren't loads of old things cluttering up my results.
+
+It is important to take some time and de-clutter our digital workspaces the way we de-clutter our physical workspaces. Productivity isn't just getting things done, but also being able to find the right things we need to do them. Moving data into cold storage archives means we can rest easy knowing that it is safe if we need it and out of our way for the 99.9% of the time when we don't.
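
Graphical clients aside, this hot/cold split is also easy to script. The sketch below uses Python's standard `mailbox` module to move everything older than a cutoff date out of an mbox file into an archive mbox. The file names and the 2019 cutoff here are only assumptions for illustration, not something any particular client produces:

```python
import mailbox
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def split_mbox(src_path, archive_path, cutoff):
    """Move every message sent before `cutoff` from the hot mbox into the cold one."""
    src = mailbox.mbox(src_path)
    archive = mailbox.mbox(archive_path)  # created on first write if it doesn't exist
    stale_keys = []
    for key, msg in src.iteritems():
        try:
            sent = parsedate_to_datetime(msg["Date"])
        except (TypeError, ValueError):
            continue  # no parseable Date header: leave it in hot storage
        if sent.tzinfo is None:
            sent = sent.replace(tzinfo=timezone.utc)
        if sent < cutoff:
            archive.add(msg)
            stale_keys.append(key)
    for key in stale_keys:  # remove only after iteration is finished
        src.remove(key)
    archive.flush()
    src.flush()
    archive.close()
    src.close()

# Archive everything from before 2019, as described above:
# split_mbox("Inbox.mbox", "pre-2019-archive.mbox",
#            datetime(2019, 1, 1, tzinfo=timezone.utc))
```

A client's built-in archive feature is still the safer default; a script like this is mainly useful for one-off bulk splits of old mbox exports.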
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/1/declutter-workspace + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard) +[2]: https://opensource.com/sites/default/files/day20-image1.jpg +[3]: https://creativecommons.org/licenses/by-sa/4.0/ +[4]: https://opensource.com/sites/default/files/kmail-archive.jpg (Archiving a directory of messages in KMail) +[5]: https://opensource.com/sites/default/files/day20-image2.png +[6]: https://opensource.com/sites/default/files/thunderbird-export.jpg (Archiving mail in Thunderbird) diff --git a/sources/tech/20210131 How to teach open source beyond business.md b/sources/tech/20210131 How to teach open source beyond business.md new file mode 100644 index 0000000000..f9563e14ff --- /dev/null +++ b/sources/tech/20210131 How to teach open source beyond business.md @@ -0,0 +1,72 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to teach open source beyond business) +[#]: via: (https://opensource.com/article/21/1/open-source-beyond-business) +[#]: author: (Irit Goihman https://opensource.com/users/iritgoihman) + +How to teach open source beyond business +====== +The Beyond program connects future talents in the tech industry with +open source culture. +![Teacher or learner?][1] + +When I was a university student, I didn't understand the fuss about open source software. 
I used Linux and open source software but didn't really understand the open source model, how to contribute to projects, or how it could benefit my future career. My development experience consisted mainly of homework assignments and a large final project required for my degree.
+
+So, when I took my first steps in the tech industry, there was a big learning curve before I felt comfortable. I needed to understand how to join established, sometimes large, and distributed teams working on an ongoing project. I also needed to know how to communicate properly so that my efforts could be recognized.
+
+I am not special in this regard. This is a common situation among new graduates.
+
+### Open source gives students a head start
+
+Since then, as an engineer and later as a manager, I have helped onboard many junior engineers. One of the things I've noticed is that the new graduates who have already contributed to open source projects could onboard quickly and start contributing faster than those without this experience.
+
+By incorporating open source methodology into academic studies, students can gain experience relevant to the industry, learn to reuse their existing knowledge, and establish a good platform for formulating ideas and sharing knowledge. Practicing open source can make a positive impact on students' technical knowledge and experience. This can help them become more successful in bootstrapping their careers.
+
+The value of open source methodologies in the tech industry is well-established and shapes the culture of software companies worldwide. Involvement in open source projects and adoption of the [open organization culture][2] have become an industry standard. Companies seek fresh-minded, talented employees who know how to work in open source and cultivate its culture. Therefore, the tech industry must drive the academic world to embrace open source culture as one of the fundamental methodologies to learn in tech studies.
+
+### Moving open source culture 'Beyond' business
+
+When I met [Liora Milbaum][3], a senior principal software engineer at Red Hat, I learned we shared an interest in bringing open source culture and principles into academia. Liora had previously founded [DevOps Loft][4], in which she shared DevOps practices with people interested in stepping into this world, and wished to start a similar initiative to teach open source to university students. We decided to launch the [Beyond][5] program to connect future talents in the tech industry with open source culture as Red Hat practices it.
+
+We started the Beyond program at the [Academic College of Tel Aviv-Yafo][6], where we were warmly welcomed by the information systems faculty. We started by teaching an "Introduction to DevOps" course to introduce elements of the DevOps tech stack. Our biggest challenge at the start was deciding how to teach what open source is. The answer was simple: by practicing it, of course. We didn't want to deliver yet another old-school academic course; rather, we wanted to expose students to industry standards.
+
+We created a syllabus that incorporated common open source projects and tools to teach the DevOps stack. The course consisted of lectures and hands-on participation taught by engineers. The students were divided into groups, each one mentored and supported by an engineer. They practiced working in teams, sharing knowledge (both inside and outside of their groups), and collaborating effectively.
+
+During our second course, "Open source development pillars," for students in the computer science department, we encountered another big obstacle. Two weeks after the course started, we became fully remote as the COVID pandemic hit the globe. We solved this problem by using the same remote collaboration tools with our students that we were using for our daily work at Red Hat. We were amazed at how simple and smooth the transition was.
+
+![Beyond teaching online][7]
+
+(Irit Goihman, [CC BY-SA 4.0][8])
+
+### Successful early outcomes
+
+The two courses were a huge success, and we even hired one of the top students we taught. The feedback we received was amazing; the students said we positively impacted their knowledge, thinking, and soft skills. A few students were hired for their first tech job based on their open source contributions during the course.
+
+Other academic institutions have expressed interest in adopting these courses, so we've expanded the program to another university.
+
+I am fortunate to co-lead this successful initiative with Liora, accompanied by a team of talented engineers. Together, we are helping the open source community grow a bit more.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/1/open-source-beyond-business
+
+作者:[Irit Goihman][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/iritgoihman
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G (Teacher or learner?)
+[2]: https://opensource.com/open-organization/resources/open-org-definition
+[3]: https://www.linkedin.com/in/lioramilbaum
+[4]: https://www.devopsloft.io/
+[5]: https://research.redhat.com/blog/2020/05/24/open-source-development-course-and-devops-methodology/
+[6]: https://www.int.mta.ac.il/
+[7]: https://opensource.com/sites/default/files/pictures/beyond_mta.png (Beyond teaching online)
+[8]: https://creativecommons.org/licenses/by-sa/4.0/
diff --git a/sources/tech/20210201 Best Single Board Computers for AI and Deep Learning Projects.md b/sources/tech/20210201 Best Single Board Computers for AI and Deep Learning Projects.md
new file mode 100644
index 0000000000..73cf55c47b
--- /dev/null
+++ b/sources/tech/20210201 Best Single Board Computers for AI and Deep Learning Projects.md
@@ -0,0 +1,282 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Best Single Board Computers for AI and Deep Learning Projects)
+[#]: via: (https://itsfoss.com/best-sbc-for-ai/)
+[#]: author: (Community https://itsfoss.com/author/itsfoss/)
+
+Best Single Board Computers for AI and Deep Learning Projects
+======
+
+[Single-board computers][1] (SBC) are very popular with tinkerers and hobbyists alike; they offer a lot of functionality in a very small form factor. An SBC has the CPU, GPU, memory, IO ports, etc., on a small circuit board, and users can add functionality by adding new devices to the [GPIO ports][2]. Some of the more popular SBCs include the [Raspberry Pi][3] and [Arduino][4] family of products.
+
+However, there is an increasing demand for SBCs that can be used for edge compute applications like Artificial Intelligence (AI) or Deep Learning (DL), and there are quite a few to choose from. The list below consists of some of the best SBCs that have been developed for edge computing.
+
+The list is in no particular order of ranking. Some links here are affiliate links. Please read our [affiliate policy][5].
+
+### 1\. 
Nvidia Jetson Family
+
+Nvidia has a great lineup of SBCs that cater to AI developers and hobbyists alike. Their line of “[Jetson Developer Kits][6]” includes some of the most powerful and best value-for-money SBCs available in the market. Below is a list of their offerings.
+
+#### Nvidia Jetson Nano Developer Kit
+
+![][7]
+
+Starting at **$59**, the Jetson Nano is the cheapest SBC in the list and offers a good price-to-performance ratio. It can run multiple neural networks alongside other applications such as object detection, segmentation, speech processing, and image classification.
+
+The Jetson Nano is aimed towards AI enthusiasts, hobbyists, and developers who want to do projects by implementing AI.
+
+The Jetson Nano is offered in two variants: 4 GB and 2 GB. The main differences between the two are the price, RAM capacity, and IO ports on offer. The 4 GB variant is showcased in the image above.
+
+**Key Specifications**
+
+  * **CPU:** Quad-core ARM A57 @ 1.43 GHz
+  * **GPU:** 128-core NVIDIA Maxwell
+  * **Memory:** 4 GB 64-bit LPDDR4 @ 25.6 GB/s or 2 GB 64-bit LPDDR4 @ 25.6 GB/s
+  * **Storage:** microSD card support
+  * **Display:** HDMI and Display Port or HDMI
+
+Preview | Product | Price | 
+---|---|---|--- 
+![NVIDIA Jetson Nano 2GB Developer Kit \(945-13541-0000-000\)][8] ![NVIDIA Jetson Nano 2GB Developer Kit \(945-13541-0000-000\)][8] | [NVIDIA Jetson Nano 2GB Developer Kit (945-13541-0000-000)][9] | $59.00[][10] | [Buy on Amazon][11]
+
+#### Nvidia Jetson Xavier NX Developer Kit
+
+![][12]
+
+The Jetson Xavier NX is a step up from the Jetson Nano and is aimed more towards OEMs, start-ups, and AI developers.
+
+The Jetson Xavier NX is meant for applications that need more serious AI processing power that an entry-level offering like the Jetson Nano simply can’t deliver. The Jetson Xavier NX is being offered at **$386.99**. 
+
+**Key Specifications**
+
+  * **CPU:** 6-core NVIDIA Carmel ARM v8.2 64-bit CPU
+  * **GPU:** NVIDIA Volta architecture with 384 NVIDIA CUDA cores and 48 Tensor cores
+  * **DL Accelerator:** 2x NVDLA Engines
+  * **Vision Accelerator:** 7-Way VLIW Vision Processor
+  * **Memory:** 8 GB 128-bit LPDDR4x @ 51.2 GB/s
+  * **Storage:** microSD support
+  * **Display:** HDMI and Display Port
+
+Preview | Product | Price | 
+---|---|---|--- 
+![NVIDIA Jetson Xavier NX Developer Kit \(812674024318\)][13] ![NVIDIA Jetson Xavier NX Developer Kit \(812674024318\)][13] | [NVIDIA Jetson Xavier NX Developer Kit (812674024318)][14] | $386.89[][10] | [Buy on Amazon][15]
+
+#### Nvidia Jetson AGX Xavier Developer Kit
+
+![][16]
+
+The Jetson AGX Xavier is the flagship product of the Jetson family; it is meant to be deployed in servers and AI robotics applications in industries such as manufacturing, retail, automotive, agriculture, etc.
+
+Coming in at **$694.91**, the Jetson AGX Xavier is not meant for beginners; it is meant for developers who want top-tier edge compute performance at their disposal and for companies who want good scalability for their applications.
+
+**Key Specifications**
+
+  * **CPU:** 8-core ARM v8.2 64-bit CPU
+  * **GPU:** 512-core Volta GPU with Tensor Cores
+  * **DL Accelerator:** 2x NVDLA Engines
+  * **Vision Accelerator:** 7-Way VLIW Vision Processor
+  * **Memory:** 32 GB 256-Bit LPDDR4x @ 137 GB/s
+  * **Storage:** 32 GB eMMC 5.1 and uSD/UFS Card Socket for storage expansion
+  * **Display:** HDMI 2.0
+
+Preview | Product | Price | 
+---|---|---|--- 
+![NVIDIA Jetson AGX Xavier Developer Kit \(32GB\)][17] ![NVIDIA Jetson AGX Xavier Developer Kit \(32GB\)][17] | [NVIDIA Jetson AGX Xavier Developer Kit (32GB)][18] | $694.91[][10] | [Buy on Amazon][19]
+
+### 2\. 
ROCK Pi N10
+
+![][20]
+
+The ROCK Pi N10, developed by [Radxa][21], is the second-cheapest offering in this list, with its base variant coming in at **$99** and its range-topping variant at **$169**.
+
+The ROCK Pi N10 is equipped with an NPU (Neural Processing Unit) that helps it process AI/deep learning workloads with ease. It offers up to 3 TOPS (Tera Operations Per Second) of performance.
+
+It is offered in three variants, namely the ROCK Pi N10 Model A, ROCK Pi N10 Model B, and ROCK Pi N10 Model C; the only differences between these variants are the price, RAM, and storage capacities.
+
+The ROCK Pi N10 is available for purchase through [Seeed Studio][22].
+
+**Key Specifications**
+
+  * **CPU:** RK3399Pro with 2-core Cortex-A72 @ 1.8 GHz and 4-Core Cortex-A53 @ 1.4 GHz
+  * **GPU:** Mali T860MP4
+  * **NPU:** Supports 8bit/16bit computing with up to 3.0 TOPS computing power
+  * **Memory:** 4 GB/6 GB/8 GB 64-bit LPDDR3 @ 1866 Mb/s
+  * **Storage:** 16 GB/32 GB/64 GB eMMC
+  * **Display:** HDMI 2.0
+
+
+
+### 3\. BeagleBone AI
+
+![][23]
+
+The BeagleBone AI is [BeagleBoard.org][24]’s open source SBC, meant to bridge the gap between small SBCs and more powerful industrial computers. The hardware and software of the BeagleBoard are completely open source.
+
+It is meant for use in the automation of homes, industries, and other commercial use cases. It is priced at **~$110**; the price varies across dealers. For more info, check [their website][25].
+
+**Key Specifications**
+
+  * **CPU:** Texas Instruments AM5729 with Dual-core ARM Cortex-A15 @ 1.5GHz
+  * **Co-Processor:** 2 x Dual-core ARM Cortex-M4
+  * **DSP:** 2 x C66x floating-point VLIW
+  * **EVE:** 4 x Embedded Vision Engines
+  * **GPU:** PowerVR SGX544
+  * **RAM:** 1 GB
+  * **Storage:** 16 GB eMMC
+  * **Display:** microHDMI
+
+Preview | Product | Price | 
+---|---|---|--- 
+![BeagleBone AI][26] ![BeagleBone AI][26] | [BeagleBone AI][27] | $127.49[][10] | [Buy on Amazon][28]
+
+### 4\. 
BeagleV
+
+![][29]
+
+The BeagleV is the latest launch in the list; it is an SBC that runs Linux out of the box and has a [RISC-V][30] CPU.
+
+It is capable of running edge compute applications effortlessly; to learn more about the BeagleV, check [our coverage][31] of the launch.
+
+The BeagleV will be getting two variants: a 4 GB RAM variant and an 8 GB RAM variant. Pricing starts at **$119** for the base model and **$149** for the 8 GB RAM model; it is up for pre-order through [their website][32].
+
+**Key Specifications**
+
+  * **CPU:** RISC-V U74 2-Core @ 1.0GHz
+  * **DSP:** Vision DSP Tensilica-VP6
+  * **DL Accelerator:** NVDLA Engine 1-core
+  * **NPU:** Neural Network Engine
+  * **RAM:** 4 GB/8 GB (2 x 4 GB) LPDDR4 SDRAM
+  * **Storage:** microSD slot
+  * **Display:** HDMI 1.4
+
+
+
+### 5\. HiKey970
+
+![][33]
+
+The HiKey970 is [96 Boards][34]’ first SBC meant for edge compute applications and is the world’s first dedicated NPU AI platform.
+
+The HiKey970 features a CPU, a GPU, and an NPU for accelerating AI performance; it can also be used for training and building DL (Deep Learning) models.
+
+The HiKey970 is priced at **$299** and can be bought from their [official store][35].
+
+**Key Specifications**
+
+  * **SoC:** HiSilicon Kirin 970
+  * **CPU:** ARM Cortex-A73 4-Core @ 2.36GHz and ARM Cortex-A53 4-Core @ 1.8GHz
+  * **GPU:** ARM Mali-G72 MP12
+  * **RAM:** 6 GB LPDDR4X @ 1866MHz
+  * **Storage:** 64 GB UFS 2.1 microSD
+  * **Display:** HDMI and 4 line MIPI/LCD port
+
+
+
+### 6\. Google Coral Dev Board
+
+![][36]
+
+The Coral Dev Board is Google’s first attempt at an SBC dedicated to edge computing. It is capable of performing high-speed ML (Machine Learning) inferencing and has support for TensorFlow Lite and AutoML Vision Edge.
+
+The board is priced at **$129.99** and is available through [Coral’s official website][37]. 
+
+**Key Specifications**
+
+  * **CPU:** NXP i.MX 8M SoC (4-Core Cortex-A53, Cortex-M4F)
+  * **ML Accelerator**: Google Edge TPU coprocessor
+  * **GPU:** Integrated GC7000 Lite Graphics
+  * **RAM:** 1 GB LPDDR4
+  * **Storage:** 8 GB eMMC and microSD slot
+  * **Display:** HDMI 2.0a, 39-pin FFC connector for MIPI-DSI display (4-lane) and 24-pin FFC connector for MIPI-CSI2 camera (4-lane)
+
+
+
+### 7\. Google Coral Dev Board Mini
+
+![][38]
+
+The Coral Dev Board Mini is the successor to the Coral Dev Board; it packs more processing power into a smaller form factor at a lower price point of **$99.99**.
+
+The Coral Dev Board Mini can be purchased from their [official web store][39].
+
+**Key Specifications**
+
+  * **CPU:** MediaTek 8167s SoC (4-core Arm Cortex-A35)
+  * **ML Accelerator:** Google Edge TPU coprocessor
+  * **GPU:** IMG PowerVR GE8300
+  * **RAM:** 2 GB LPDDR3
+  * **Storage:** 8 GB eMMC
+  * **Display:** micro HDMI (1.4), 24-pin FFC connector for MIPI-CSI2 camera (4-lane) and 24-pin FFC connector for MIPI-DSI display (4-lane)
+
+Preview | Product | Price | 
+---|---|---|--- 
+![Google Coral Dev Board Mini][40] ![Google Coral Dev Board Mini][40] | [Google Coral Dev Board Mini][41] | $99.99[][10] | [Buy on Amazon][42]
+
+### Closing Thoughts
+
+There is an SBC available in every price range for edge compute applications. Some are just basic, like the Nvidia Jetson Nano or the BeagleBone AI, and some are performance-oriented models like the BeagleV and Nvidia Jetson AGX Xavier.
+
+If you are looking for something more universal, you can check [our article on Raspberry Pi alternatives][1], which could help you find a suitable SBC for your use case.
+
+If I missed any SBC dedicated to edge compute, feel free to let me know in the comments below. 
+ +_**Author info: Sourav Rudra is a FOSS Enthusiast with love for Gaming Rigs/Workstation building.**_ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-sbc-for-ai/ + +作者:[Community][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/itsfoss/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/raspberry-pi-alternatives/ +[2]: https://en.wikipedia.org/wiki/General-purpose_input/output +[3]: https://www.raspberrypi.org/products/ +[4]: https://www.arduino.cc/en/main/products +[5]: https://itsfoss.com/affiliate-policy/ +[6]: https://developer.nvidia.com/embedded/jetson-developer-kits +[7]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Nvidia-Jetson-Nano.png?ssl=1 +[8]: https://i1.wp.com/m.media-amazon.com/images/I/310YWrfdnTL._SL160_.jpg?ssl=1 +[9]: https://www.amazon.com/dp/B08J157LHH?tag=chmod7mediate-20&linkCode=osi&th=1&psc=1 (NVIDIA Jetson Nano 2GB Developer Kit (945-13541-0000-000)) +[10]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime) +[11]: https://www.amazon.com/dp/B08J157LHH?tag=chmod7mediate-20&linkCode=osi&th=1&psc=1 (Buy on Amazon) +[12]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Nvidia-Jetson-Xavier-NX.png?ssl=1 +[13]: https://i1.wp.com/m.media-amazon.com/images/I/31B9xMmCvwL._SL160_.jpg?ssl=1 +[14]: https://www.amazon.com/dp/B086874Q5R?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (NVIDIA Jetson Xavier NX Developer Kit (812674024318)) +[15]: https://www.amazon.com/dp/B086874Q5R?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Buy on Amazon) +[16]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Nvidia-Jetson-AGX-Xavier-.png?ssl=1 +[17]: https://i1.wp.com/m.media-amazon.com/images/I/41tO5hw4zHL._SL160_.jpg?ssl=1 +[18]: 
https://www.amazon.com/dp/B083ZL3X5B?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (NVIDIA Jetson AGX Xavier Developer Kit (32GB)) +[19]: https://www.amazon.com/dp/B083ZL3X5B?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Buy on Amazon) +[20]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/ROCK-Pi-N10.png?ssl=1 +[21]: https://wiki.radxa.com/Home +[22]: https://www.seeedstudio.com/ROCK-Pi-4-c-1323.html?cat=1343 +[23]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Beagle-AI.png?ssl=1 +[24]: https://beagleboard.org/ +[25]: https://beagleboard.org/ai +[26]: https://i2.wp.com/m.media-amazon.com/images/I/41K+htPCUHL._SL160_.jpg?ssl=1 +[27]: https://www.amazon.com/dp/B07YR1RV64?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (BeagleBone AI) +[28]: https://www.amazon.com/dp/B07YR1RV64?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Buy on Amazon) +[29]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/BeagleV.png?ssl=1 +[30]: https://en.wikipedia.org/wiki/RISC-V +[31]: https://news.itsfoss.com/beaglev-announcement/ +[32]: https://beaglev.seeed.cc/ +[33]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/HiKey970.png?ssl=1 +[34]: https://www.96boards.org/ +[35]: https://www.96boards.org/product/hikey970/ +[36]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Google-Coral-Dev-Board.png?ssl=1 +[37]: https://coral.ai/products/dev-board/ +[38]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Google-Coral-Dev-Board-Mini.png?ssl=1 +[39]: https://coral.ai/products/dev-board-mini +[40]: https://i0.wp.com/m.media-amazon.com/images/I/41g5c6IwLmL._SL160_.jpg?ssl=1 +[41]: https://www.amazon.com/dp/B08QLXKJB7?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Google Coral Dev Board Mini) +[42]: https://www.amazon.com/dp/B08QLXKJB7?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Buy on Amazon) diff --git a/sources/tech/20210201 My handy guide to software development and testing.md b/sources/tech/20210201 My handy guide 
to software development and testing.md new file mode 100644 index 0000000000..2ac51f55f3 --- /dev/null +++ b/sources/tech/20210201 My handy guide to software development and testing.md @@ -0,0 +1,229 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (My handy guide to software development and testing) +[#]: via: (https://opensource.com/article/21/2/development-guide) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) + +My handy guide to software development and testing +====== +Programming can feel like a battle against a horde of zombies at times. +In this series, learn how to put this ZOMBIES acronym to work for you. +![Gears above purple clouds][1] + +A long time ago, when I was but a budding computer programmer, we used to work in large batches. We were each assigned a programming task, and then we'd go away and hide in our cubicles and bang on the keyboard. I remember my team members spending hours upon hours in isolation, each of us in our own cubicle, wrestling with challenges to create defect-free apps. The theory was, the larger the batch, the better the evidence that we're awesome problem solvers. + +For me, it was a badge of honor to see how long I could write new code or modify existing code before stopping to check to see whether what I did worked. Back then, many of us thought stopping to verify that our code worked was a sign of weakness, a sign of a rookie programmer. A "real developer" should be able to crank out the entire app without stopping to check anything! + +When I did stop to test my code, however unwillingly, I usually got a reality check. Either my code wouldn't compile, or it wouldn't build, or it wouldn't run, or it just wouldn't process the data the way I'd intended. Inevitably, I'd scramble in desperation to fix all the pesky problems I'd uncovered. 
+
+### Avoiding the zombie horde
+
+If the old style of working sounds chaotic, that's because it was. We tackled our tasks all at once, hacking and slashing through problems only to be overwhelmed by more. It was like a battle against a horde of zombies.
+
+Today, we've learned to avoid large batches. Hearing some experts extol the virtues of avoiding large batches sounded completely counterintuitive at first, but I've learned a lot from past mistakes. Appropriately, I'm using a system James Grenning calls **ZOMBIES** to guide my software development efforts.
+
+### ZOMBIES to the rescue!
+
+There's nothing mysterious about **ZOMBIES**. It's an acronym that stands for:
+
+**Z** – Zero
+**O** – One
+**M** – Many (or more complex)
+**B** – Boundary behaviors
+**I** – Interface definition
+**E** – Exercise exceptional behavior
+**S** – Simple scenarios, simple solutions
+
+I'll break it down for you in this article series.
+
+### Zero in action!
+
+**Z**ero stands for the simplest possible case.
+
+A solution is _simplest_ because everyone initially prefers to use hard-coded values. By starting a coding session with hard-coded values, you quickly create a situation that gives you immediate feedback. Without having to wait several minutes or potentially hours, hard-coded values provide instant feedback on whether you like interacting with what you're building. If you find out you like interacting with it, great! Carry on in that direction. If you discover, for one reason or another, that you don't like interacting with it, there's been no big loss. You can easily dismiss it; you don't even have any losses to cut.
+
+As an example, build a simple backend shopping API. This service lets users grab a shopping basket, add items to the basket, remove items from the basket, and get the order total from the API.
+
+Create the necessary infrastructure (segregate the shopping app into an `app` folder and tests into a `tests` folder). 
This example uses the open source [xUnit][2] testing framework. + +Roll up your sleeves, and see the Zero principle in action! + + +``` +[Fact] +public void NewlyCreatedBasketHas0Items() {     +    var expectedNoOfItems = 0; +    var actualNoOfItems = 1; +    Assert.Equal(expectedNoOfItems, actualNoOfItems); +} +``` + +This test is _faking it_ because it is testing for hard-coded values. When the shopping basket is newly created, it contains no items; therefore, the expected number of items in the basket is 0. This expectation is put to the test (or _asserted_) by comparing expected and actual values for equality. + +When the test runs, it produces the following results: + + +``` +Starting test execution, please wait... + +A total of 1 test files matched the specified pattern. +[xUnit.net 00:00:00.57] tests.UnitTest1.NewlyCreatedBasketHas0Items [FAIL] +  X tests.UnitTest1.NewlyCreatedBasketHas0Items [4ms] +  Error Message: +   Assert.Equal() Failure +Expected: 0 +Actual: 1 +[...] +``` + +The test fails for obvious reasons: you expected the number of items to be 0, but the actual number of items was hard-coded as 1. + +Of course, you can quickly remedy that error by modifying the hard-coded value assigned to the actual variable from 1 to 0: + + +``` +[Fact] +public void NewlyCreatedBasketHas0Items() { +    var expectedNoOfItems = 0; +    var actualNoOfItems = 0; +    Assert.Equal(expectedNoOfItems, actualNoOfItems); +} +``` + +As expected, when this test runs, it passes successfully: + + +``` +Starting test execution, please wait... + +A total of 1 test files matched the specified pattern. + +Test Run Successful. +Total tests: 1 +     Passed: 1 + Total time: 1.0950 Seconds +``` + +You might not think it's worth testing code you're forcing to fail, but no matter how simple a test may be, it is absolutely mandatory to see it fail at least once. 
That way, you can rest assured that the test will alert you later should some inadvertent change corrupt your processing logic. + +Now's the time to stop faking the Zero case and replace that hard-coded value with a value that will be provided by the running API. Now that you know you have a reliably failing test that expects an empty basket to have 0 items, it's time to write some application code. + +As with any other modeling exercise in software, begin by crafting a simple _interface_. Create a new file in the solution's `app` folder and name it `IShoppingAPI.cs` (by convention, preface every interface name with an upper-case **I**). In the interface, declare the method `NoOfItems()` to return the number of items as an `int`. Here's the listing of the interface: + + +``` +using System; + +namespace app {     +    public interface IShoppingAPI { +        int NoOfItems(); +    } +} +``` + +Of course, this interface is incapable of doing any work until you implement it. Create another file in the `app` folder and name it `ShoppingAPI`. Declare `ShoppingAPI` as a public class that implements `IShoppingAPI`. In the body of the class, define `NoOfItems` to return the integer 1: + + +``` +using System; + +namespace app { +    public class ShoppingAPI : IShoppingAPI { +        public int NoOfItems() { +            return 1; +        } +    } +} +``` + +You can see in the above that you are faking the processing logic again by hard-coding the return value to 1. That's good for now because you want to keep everything super brain-dead simple. Now's not the time (not yet, at least) to start mulling over how you're going to implement this shopping basket. Leave that for later! For now, you're playing with the Zero case, which means you want to see whether you like your current arrangement. + +To ascertain that, replace the hard-coded expected value with the value that will be delivered when your shopping API runs and receives the request. 
You need to let the tests know where the shopping code is located by declaring that you are using the `app` folder.
+
+Next, you need to instantiate the `IShoppingAPI` interface:
+
+
+```
+IShoppingAPI shoppingAPI = new ShoppingAPI();
+```
+
+This instance is used to send requests and receive actual values after the code runs.
+
+Now the listing looks like this:
+
+
+```
+using System;
+using Xunit;
+using app;
+
+namespace tests {
+    public class ShoppingAPITests {
+        IShoppingAPI shoppingAPI = new ShoppingAPI();
+
+        [Fact]
+        public void NewlyCreatedBasketHas0Items() {
+            var expectedNoOfItems = 0;
+            var actualNoOfItems = shoppingAPI.NoOfItems();
+            Assert.Equal(expectedNoOfItems, actualNoOfItems);
+        }
+    }
+}
+```
+
+Of course, when this test runs, it fails because you hard-coded an incorrect return value (the test expects 0, but the app returns 1).
+
+Again, you can easily make the test pass by modifying the hard-coded value from 1 to 0, but that would be a waste of time at this point. Now that you have a proper interface hooked up to your test, the onus is on you to write programming logic that results in expected code behavior.
+
+For the application code, you need to decide which data structure to use to represent the shopping cart. To keep things bare-bones, strive to identify the simplest representation of a collection in C#. The thing that immediately comes to mind is `ArrayList`. This collection is perfect for these purposes—it can take an indefinite number of items and is easy and simple to traverse. 
+ +In your app code, declare that you're using `System.Collections` because `ArrayList` is part of that package: + + +``` +`using System.Collections;` +``` + +Then declare your `basket`: + + +``` +`ArrayList basket = new ArrayList();` +``` + +Finally, replace the hard-coded value in the `NoOfItems()` with actual running code: + + +``` +public int NoOfItems() { +    return basket.Count; +} +``` + +This time, the test passes because your instantiated basket is empty, so `basket.Count` returns 0 items. + +Which is exactly what your first Zero test expects. + +### More examples + +Your homework is to tackle just one zombie for now, and that's the Zeroeth zombie. In the next article, I'll take a look at **O**ne and **M**any. Stay strong! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/development-guide + +作者:[Alex Bunardzic][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alex-bunardzic +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds) +[2]: https://xunit.net/ +[3]: http://www.google.com/search?q=new+msdn.microsoft.com diff --git a/sources/tech/20210201 Use Mac-style emoji on Linux.md b/sources/tech/20210201 Use Mac-style emoji on Linux.md new file mode 100644 index 0000000000..81c80366bc --- /dev/null +++ b/sources/tech/20210201 Use Mac-style emoji on Linux.md @@ -0,0 +1,74 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Use Mac-style emoji on Linux) +[#]: via: (https://opensource.com/article/21/2/emoji-linux) +[#]: author: (Matthew Broberg 
https://opensource.com/users/mbbroberg)
+
+Use Mac-style emoji on Linux
+======
+Splatmoji provides an easy way to spice up your communication with
+emoji.
+![Emoji keyboard][1]
+
+Linux provides an amazing desktop experience by default. Although advanced users have the flexibility to choose their own [window manager][2], the day-to-day flow of GNOME is better than ever since the [GNOME 3.36 improvements][3]. As a long-time Mac enthusiast turned Linux user, that's huge.
+
+There is, however, one shortcut I use every day on a Mac that you won't find by default on Linux. It's a task I do dozens of times a day and an essential part of my digital communication. It's the emoji launcher.
+
+You might laugh when you see that, but stick with me.
+
+Most communication includes body language, and experts estimate upwards of 80% of what people remember comes from it. According to Advancement Courses' [History of emoji][4], people have been using "typographical art" since the 1800s. It's indisputable that in 1881, _Puck Magazine_ included four emotional faces for joy, melancholy, indifference, and astonishment. There is some disagreement about whether Abraham Lincoln's use of a winking smiley face, `;)`, in 1862 was a typo or an intentional form of expression. I could speculate further back into hieroglyphics, as this [museum exhibit][5] did. However you look at it, emoji and their ancestral predecessors have conveyed complex human emotion in writing for a long time. That power is not going away.
+
+Macs make it trivial to add these odd forms of expression to text with a shortcut to insert emoji into a sentence quickly. Pressing **Cmd**+**Ctrl**+**Space** launches a menu, and a quick click completes the keystroke.
+
+GNOME does not (yet) have this functionality by default, but there is open source software to add it.
+
+## My first attempts at emoji on Linux
+
+So how can you add emoji-shortcut functionality to a Linux window manager? I began with trial and error.
I tried about a dozen different tools along the way. I found [Autokey][6], which has been a great way to insert text using shortcuts or keywords (and I still use it for that), but the [emoji extension][7] did not render for me (on Fedora or Pop!_OS). I hope one day it does, so I can use colon notation to insert emoji, like `:+1:` to get a 👍️.
+
+It turns out that the way emoji render and interact with font choices throughout a window manager is nontrivial. Partway through my struggle, I reached out to the GNOME emoji team (yes, there's a [team for emoji][8]!) and got a small taste of its complexity.
+
+I did, however, find a project that works consistently across multiple Linux distributions. It's called Splatmoji.
+
+## Splatmoji for inserting emoji
+
+[Splatmoji][9] lets me consistently insert emoji into my Linux setup exactly like I would on a Mac. Here is what it looks like in action:
+
+![Splatmoji scroll example][10]
+
+(Matthew Broberg, [CC BY-SA 4.0][11])
+
+It's written in Bash, which is impressive for all that it does. Splatmoji depends on a pretty interesting toolchain outside of Bash to avoid a lot of complexity in its main features. It uses:
+
+  * **[rofi][12]** to provide a smooth window-switcher experience
+  * [**xdotool**][13] to input the keystrokes into the window
+  * [**xsel**][14] or [**xclipboard**][15] to copy the selected item
+  * [**jq**][16], a JSON processor, if JSON escaping is called for
+
+Thanks to these dependencies, Splatmoji is a surprisingly straightforward tool that calls these pieces in the right order.
+
+## Set up Splatmoji
+
+Splatmoji offers packaged releases for dnf and apt-based systems, but I set it up using the source code to keep up with the latest updates to the project:
+
+
+```
+# Go to whatever directory you want to store the source code.
+# I keep everything in a ~/Development folder, and do so here.
+# Note that `mkdir -p` will make that folder if you haven't already.
+$ mkdir -p ~/Development +$ cd ~/Development +$ git clone +$ cd splatmoji/ +``` + +Install the requirements above using the syntax for your package manager. I usually use [Homebrew][17] and add `/home/linuxbrew/.linuxbrew/bin/` to my path, but I will use `dnf` for this example: + + +``` +`$ sudo dnf install rofi xdoto \ No newline at end of file diff --git a/sources/tech/20210202 Convert audio files with this versatile Linux command.md b/sources/tech/20210202 Convert audio files with this versatile Linux command.md new file mode 100644 index 0000000000..683907b70f --- /dev/null +++ b/sources/tech/20210202 Convert audio files with this versatile Linux command.md @@ -0,0 +1,240 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Convert audio files with this versatile Linux command) +[#]: via: (https://opensource.com/article/20/2/linux-sox) +[#]: author: (Klaatu https://opensource.com/users/klaatu) + +Convert audio files with this versatile Linux command +====== +SoX Sound Exchange can even add effects to your audio files. +![HiFi vintage stereo][1] + +I work with media, and when you work with any kind of media, you learn pretty quickly that standardization is a valuable tool. Just as you wouldn't try to add a fraction to a decimal without converting one or the other, I've learned that it's not ideal to combine media of differing formats. Most hobbyist-level applications make the conversion process invisible to the user as a convenience. Flexible software aimed at users needing control over the fine details of their assets, however, often leave it up to you to convert your media to your desired format in advance. I have a few favorite tools for conversion, and one of those is the so-called _Swiss army knife of sound_, [SoX][2]. + +### Installing + +On Linux or BSD, you can install the **sox** command (and some helpful symlinks) from your software repository or ports tree. 
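The package is simply named `sox` in the major repositories. A small detection sketch like the one below prints the matching install command for your system (the Debian/Ubuntu `libsox-fmt-all` extra-codecs package is an assumption worth double-checking for your release):

```shell
# Detect the package manager and print the matching install command.
# Run the printed command with sudo (or as root) to install SoX.
if command -v dnf >/dev/null 2>&1; then
    echo "dnf install sox"                    # Fedora, CentOS Stream, RHEL
elif command -v apt >/dev/null 2>&1; then
    echo "apt install sox libsox-fmt-all"     # Debian, Ubuntu; extra codecs
elif command -v pkg >/dev/null 2>&1; then
    echo "pkg install sox"                    # FreeBSD
else
    echo "no known package manager found; build from source"
fi
```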
+ +You can also install SoX from its home on [Sourceforge.net][3]. It doesn't release often, but its codebase tends to be stable, so if you want the latest features (such as Opus support), it's easy and safe to build. + +SoX provides primarily the **sox** command, but installation also creates a few useful symlinks: **play**, **rec**, and **soxi**. + +### Getting information about files with SoX + +SoX reads and rewrites audio data. Whether it stores the rewritten audio data is up to you. There are use cases in which you don't need to store the converted data, for instance, when you're sending the output directly to your speakers for playback. Before doing any conversion, however, it's usually a good idea to determine exactly what you're dealing with in the first place. + +To gather information about an audio file, use the **soxi** command. This is a symlink to **sox --info**. + + +``` +$ soxi countdown.mp3 +Input File     : '/home/tux/countdown.mp3' +Channels       : 1 +Sample Rate    : 44100 +Precision      : 16-bit +Duration       : 00:00:11.21 = 494185 samples... +File Size      : 179k +Bit Rate       : 128k +Sample Encoding: MPEG audio (layer I, II or III) +``` + +This output gives you a good idea of what codec the audio file is encoded in, the file length, file size, sample rate, and the number of channels. Some of these you might _think_ you already know, but I never trust assumptions when media is brought to me by a client. Verify media attributes with **soxi**. + +### Converting files + +In this example, the audio of a game show countdown has been delivered as an MP3 file. While nearly all editing applications accept compressed audio, none of them actually edit the compressed data. Conversion is happening somewhere, whether it's a secret background task or a prompt for you to save a copy. I generally prefer to do the conversion myself, in advance. This way, I can control what format I'm using. 
I can do lots of media in batches overnight instead of wasting valuable production time waiting for an editing application to churn through them on demand. + +The **sox** command is meant for converting audio files. There are a few stages in the **sox** pipeline: + + * input + * combine + * effects + * output + + + +In command syntax, the effects step is, confusingly, written _last_. That means the pipeline is composed this way: + + +``` +`input → combine → output → effects` +``` + +### Encoding + +The simplest conversion command involves only an input file and an output file. Here's the command to convert an MP3 file to a lossless FLAC file: + + +``` +$ sox countdown.mp3 output.flac +$ soxi output.flac + +Input File     : 'output.flac' +Channels       : 1 +Sample Rate    : 44100 +Precision      : 16-bit +Duration       : 00:00:11.18 = 493056 samples... +File Size      : 545k +Bit Rate       : 390k +Sample Encoding: 16-bit FLAC +Comment        : 'Comment=Processed by SoX' +``` + +#### Effects + +The effects chain is specified at the end of a command. It can alter audio prior to sending the data to its final destination. For instance, sometimes audio that's too loud can cause problems during conversion: + + +``` +$ sox bad.wav bad.ogg +sox WARN sox: `bad.ogg' output clipped 126 samples; decrease volume? +``` + +Applying a **gain** effect can often solve this problem: + + +``` +`$ sox bad.wav bad.ogg gain -1` +``` + +#### Fade + +Another useful effect is **fade**. This effect lets you define the shape of a fade-in or fade-out, along with how many seconds you want the fade to span. 
+ +Here's an example of a six-second fade-in using an inverted parabola: + + +``` +`$ sox intro.ogg intro.flac fade p 6` +``` + +This applies a three-second fade-in to the head of the audio and a fade-out starting at the eight-second mark (the intro music is only 11 seconds, so the fade-out is also three-seconds in this case): + + +``` +`$ sox intro.ogg intro.flac fade p 3 8` +``` + +The different kinds of fades (sine, linear, inverted parabola, and so on), as well as the options **fade** offers (fade-in, fade-out), are listed in the **sox** man page. + +#### Effect syntax + +Each effect plugin has its own syntax, so refer to the man page for details on how to invoke each one. + +Effects can be daisy-chained in one command, at least to the extent that you want to combine them. In other words, there's no syntax to apply a **flanger** effect only during a six-second fade-out. For something that precise, you need a graphical sound wave editor or a digital audio workstation such as [LMMS][4] or [Rosegarden][5]. However, if you just have effects that you want to apply once, you can list them together in the same command. + +This command applies a -1 **gain** effect, a tempo **stretch** of 1.35, and a **fade-out**: + + +``` +$ sox intro.ogg output.flac gain -1 stretch 1.35 fade p 0 6 +$ soxi output.flac + +Input File     : 'output.flac' +Channels       : 1 +Sample Rate    : 44100 +Precision      : 16-bit +Duration       : 00:00:15.10 = 665808 samples... +File Size      : 712k +Bit Rate       : 377k +Sample Encoding: 16-bit FLAC +Comment        : 'Comment=Processed by SoX' +``` + +### Combining audio + +SoX can also combine audio files, either by concatenating them or by mixing them. + +To join (or _concatenate_) files into one, provide more than one input file in your command: + + +``` +`$ sox countdown.mp3 intro.ogg output.flac` +``` + +In this example, **output.flac** now contains **countdown** audio, followed immediately by **intro** music. 
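Because **sox** accepts any number of input files, the same syntax scales with shell globs and loops, which is handy for the overnight batch jobs mentioned earlier. A sketch (file names here are made up; globs concatenate in lexical sort order, and the guard lets it run cleanly on machines without SoX installed):

```shell
if command -v sox >/dev/null 2>&1; then
    # Concatenate every clip, in lexical order, into one file:
    sox part*.wav album.wav

    # Batch-convert a directory of MP3 files to FLAC:
    for f in *.mp3; do
        sox "$f" "${f%.mp3}.flac"
    done
fi
```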
+
+If you want the two tracks to play over one another at the same time, though, you can use the **\--combine mix** option:
+
+
+```
+`$ sox --combine mix countdown.mp3 intro.ogg output.flac`
+```
+
+Imagine, however, that the two input files differed in more than just their codecs. It's not uncommon for vocal tracks to be recorded in mono (one channel), but for music to be recorded in at least stereo (two channels). SoX won't default to a solution, so you have to standardize the format of the two files yourself first.
+
+#### Altering audio files
+
+Options relate to the file name listed _after_ them. For instance, the **\--channels** option in this command applies _only_ to **input.wav** and NOT to **example.ogg** or **output.flac**:
+
+
+```
+`$ sox --channels 2 input.wav example.ogg output.flac`
+```
+
+This means that the position of an option is very significant in SoX. Should you specify an option at the start of your command, you're essentially only overriding what SoX gleans from the input files on its own. Options placed immediately before the _output_ file, however, determine how SoX writes the audio data.
+
+To solve the previous problem of incompatible channels, you can first standardize your inputs, and then mix:
+
+
+```
+$ sox countdown.mp3 --channels 2 countdown-stereo.flac gain -1
+$ soxi countdown-stereo.flac
+
+Input File     : 'countdown-stereo.flac'
+Channels       : 2
+Sample Rate    : 44100
+Precision      : 16-bit
+Duration       : 00:00:11.18 = 493056 samples...
+File Size      : 545k
+Bit Rate       : 390k
+Sample Encoding: 16-bit FLAC
+Comment        : 'Comment=Processed by SoX'
+
+$ sox --combine mix \
+countdown-stereo.flac \
+intro.ogg \
+output.flac
+```
+
+SoX absolutely requires multiple commands for complex actions, so it's normal to create several temporary and intermediate files as needed.
+
+### Multichannel audio
+
+Not all audio is constrained to one or two channels, of course.
If you want to combine several audio channels into one file, you can do that with SoX and the **\--combine merge** option: + + +``` +$ sox --combine merge countdown.mp3 intro.ogg output.flac +$ soxi output.flac + +Input File     : 'output.flac' +Channels       : 3 +[...] +``` + +### Easy audio manipulation + +It might seem strange to work with audio using no visual interface, and for some tasks, SoX definitely isn't the best tool. However, for many tasks, SoX provides an easy and lightweight toolkit. SoX is a simple command with powerful potential. With it, you can convert audio, manipulate channels and waveforms, and even generate your own sounds. This article has only provided a brief overview of its capabilities, so go read its man page or [online documentation][2] and then see what you can create. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/2/linux-sox + +作者:[Klaatu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/klaatu +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hi-fi-stereo-vintage.png?itok=KYY3YQwE (HiFi vintage stereo) +[2]: http://sox.sourceforge.net/sox.html +[3]: http://sox.sourceforge.net +[4]: https://opensource.com/life/16/2/linux-multimedia-studio +[5]: https://opensource.com/article/18/3/make-sweet-music-digital-audio-workstation-rosegarden diff --git a/sources/tech/20210202 How I build and expand application development and testing.md b/sources/tech/20210202 How I build and expand application development and testing.md new file mode 100644 index 0000000000..f01b3f4355 --- /dev/null +++ b/sources/tech/20210202 How I build and expand application development and testing.md @@ -0,0 +1,204 @@ +[#]: collector: 
(lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How I build and expand application development and testing) +[#]: via: (https://opensource.com/article/21/2/build-expand-software) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) + +How I build and expand application development and testing +====== +Start development simply, by writing and testing your code with One +element and then expand it out to Many. +![Security monster][1] + +In my [previous article][2], I explained why tackling coding problems all at once, as if they were hordes of zombies, is a mistake. I also explained the first **ZOMBIES** principle, **Zero**. In this article, I'll demonstrate the next two principles: **One** and **Many**. + +**ZOMBIES** is an acronym that stands for: + +**Z** – Zero +**O** – One +**M** – Many (or more complex) +**B** – Boundary behaviors +**I** – Interface definition +**E** – Exercise exceptional behavior +**S** – Simple scenarios, simple solutions + +In the previous article, you implemented Zero, which provides the simplest possible path through your code. There is absolutely no conditional processing logic anywhere to be found. Now it's time for you to move into **O**ne. + +Unlike with **Z**ero, which basically means nothing is added, or we have an empty case, nothing to take care of, **O**ne means we have a single case to take care of. That single case could be one item in the collection, or one visitor, or one event that demands special treatment. + +With **M**any, we are now dealing with potentially more complicated cases. Two or more items in the collection, two or more events that demand special treatment, and so on. + +### One in action + +Build on the code from the previous article by adding something to your virtual shopping basket. 
First, write a fake test:
+
+
+```
+[Fact]
+public void Add1ItemBasketHas1Item() {
+        var expectedNoOfItems = 1;
+        var actualNoOfItems = 0;
+        Assert.Equal(expectedNoOfItems, actualNoOfItems);
+}
+```
+
+As expected, this test fails because you hard-coded an incorrect value:
+
+
+```
+Starting test execution, please wait...
+
+A total of 1 test files matched the specified pattern.
+[xUnit.net 00:00:00.57] tests.UnitTest1.Add1ItemBasketHas1Item [FAIL]
+  X tests.UnitTest1.Add1ItemBasketHas1Item [4ms]
+  Error Message:
+   Assert.Equal() Failure
+Expected: 1
+Actual: 0
+[...]
+```
+
+Now is the time to think about how to stop faking it. You already created an implementation of a shopping basket (an `ArrayList` to hold items). But how do you implement an _item_?
+
+Simplicity should always be your guiding principle, and since you don't know much about the actual item yet, you could fake it a little by implementing it as another collection. What could that collection contain? Well, because you're mostly interested in calculating basket totals, the item collection should, at minimum, contain a price (in any currency, but for simplicity, use dollars).
+
+A simple collection can hold an ID for an item (a pointer to the item, which may be kept elsewhere on the system) and the associated price of the item.
+
+A good data structure that can easily capture this is a key/value structure. In C#, the first thing that comes to mind is `Hashtable`.
+
+In the app code, add a new capability to the `IShoppingAPI` interface:
+
+
+```
+`int AddItem(Hashtable item);`
+```
+
+This new capability accepts one item (an instance of a `Hashtable`) and returns the number of items found in the shopping basket.
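To make that shape concrete: a basket item is a one-entry `Hashtable` whose key is a product ID and whose value is the price (the ID below is made up for illustration):

```csharp
using System;
using System.Collections;

class ItemSketch {
    static void Main() {
        // One basket item: product ID -> price in dollars.
        Hashtable item = new Hashtable();
        item.Add("00000001", 10.00);

        // The basket only needs the price for totals; the ID points to the
        // full product record kept elsewhere in the system.
        foreach (DictionaryEntry entry in item) {
            Console.WriteLine($"id={entry.Key} price={entry.Value}");
        }
    }
}
```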
+
+In your tests, replace the hard-coded value with a call to the interface:
+
+
+```
+[Fact]
+public void Add1ItemBasketHas1Item() {
+    var expectedNoOfItems = 1;
+    Hashtable item = new Hashtable();
+    var actualNoOfItems = shoppingAPI.AddItem(item);
+    Assert.Equal(expectedNoOfItems, actualNoOfItems);
+}
+```
+
+This code instantiates `Hashtable` and names it `item`, then invokes `AddItem(item)` on the shopping interface, which returns the actual number of items in the basket.
+
+To implement it, turn to the `ShoppingAPI` class:
+
+
+```
+public int AddItem(Hashtable item) {
+    return 0;
+}
+```
+
+You are faking it again just to see the results of your tests (which are the first customers of your code). Should the test fail (as expected), replace the hard-coded values with actual code:
+
+
+```
+public int AddItem(Hashtable item) {
+    basket.Add(item);
+    return basket.Count;
+}
+```
+
+The working code adds the item to the basket and then returns the count of the items in the basket. Run the tests again:
+
+
+```
+Test Run Successful.
+Total tests: 2
+     Passed: 2
+ Total time: 1.0633 Seconds
+```
+
+So now you have two tests passing and have pretty much covered **Z** and **O**, the first two parts of **ZOMBIES**.
+
+### A moment of reflection
+
+If you look back at what you've done so far, you will notice that by focusing your attention on dealing with the simplest possible **Z**ero and **O**ne scenarios, you have managed to create an interface as well as define some processing logic boundaries! Isn't that awesome? You now have the most important abstractions partially implemented, and you know how to process cases where nothing is added and when one thing is added. And because you are building an e-commerce API, you certainly do not foresee placing any other boundaries that would limit your customers when shopping. Your virtual shopping basket is, for all intents and purposes, limitless.
+
+Another important (although not necessarily immediately obvious) aspect of the stepwise refinement that **ZOMBIES** offers is a reluctance to leap head-first into the brambles of implementation. You may have noticed how sheepish this approach is about implementing anything. For starters, it's better to fake the implementation by hard-coding the values. Only after you see that the interface interacts with your test in a sensible way are you willing to roll up your sleeves and harden the implementation code.
+
+But even then, you should always prefer simple, straightforward constructs. And strive to avoid conditional logic as much as you can.
+
+### Many in action
+
+Expand your application by defining your expectations when a customer adds two items to the basket. The first test is a fake. It expects 2, but force it to fail by hard-coding 0 items:
+
+
+```
+[Fact]
+public void Add2ItemsBasketHas2Items() {
+        var expectedNoOfItems = 2;
+        var actualNoOfItems = 0;
+        Assert.Equal(expectedNoOfItems, actualNoOfItems);
+}
+```
+
+When you run the tests, two of them pass successfully (the previous two, the **Z** and **O** tests), but as expected, the hard-coded test fails:
+
+
+```
+A total of 1 test files matched the specified pattern.
+[xUnit.net 00:00:00.57] tests.UnitTest1.Add2ItemsBasketHas2Items [FAIL]
+  X tests.UnitTest1.Add2ItemsBasketHas2Items [2ms]
+  Error Message:
+   Assert.Equal() Failure
+Expected: 2
+Actual: 0
+
+Test Run Failed.
+Total tests: 3
+     Passed: 2
+     Failed: 1
+```
+
+Replace the hard-coded values with the call to the app code:
+
+
+```
+[Fact]
+public void Add2ItemsBasketHas2Items() {
+        var expectedNoOfItems = 2;
+        Hashtable item = new Hashtable();
+        shoppingAPI.AddItem(item);
+        var actualNoOfItems = shoppingAPI.AddItem(item);
+        Assert.Equal(expectedNoOfItems, actualNoOfItems);
+}
+```
+
+In the test, you add two items (actually, you're adding the same item twice) and then compare the expected number of items to the number of items from the `shoppingAPI` instance after adding the item the second time.
+
+All tests now pass!
+
+### Stay tuned
+
+You have now completed the first pass of the **ZOM** part of the equation. You did a pass on **Z**ero, on **O**ne, and on **M**any. In the next article, I'll take a look at **B** and **I**. Stay vigilant!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/build-expand-software
+
+作者:[Alex Bunardzic][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/alex-bunardzic
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_password_chaos_engineer_monster.png?itok=J31aRccu (Security monster)
+[2]: https://opensource.com/article/21/1/zombies-zero
+[3]: http://www.google.com/search?q=new+msdn.microsoft.com
diff --git a/sources/tech/20210203 Defining boundaries and interfaces in software development.md b/sources/tech/20210203 Defining boundaries and interfaces in software development.md
new file mode 100644
index 0000000000..6f3e540da8
--- /dev/null
+++ b/sources/tech/20210203 Defining boundaries and interfaces in software development.md
@@ -0,0 +1,315 @@
+[#]: collector: 
(lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Defining boundaries and interfaces in software development) +[#]: via: (https://opensource.com/article/21/2/boundaries-interfaces) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) + +Defining boundaries and interfaces in software development +====== +Zombies are bad at understanding boundaries, so set limits and +expectations for what your app can do. +![Looking at a map for career journey][1] + +Zombies are bad at understanding boundaries. They trample over fences, tear down walls, and generally get into places they don't belong. In the previous articles in this series, I explained why tackling coding problems all at once, as if they were hordes of zombies, is a mistake. + +**ZOMBIES** is an acronym that stands for: + +**Z** – Zero +**O** – One +**M** – Many (or more complex) +**B** – Boundary behaviors +**I** – Interface definition +**E** – Exercise exceptional behavior +**S** – Simple scenarios, simple solutions + +In the first two articles in this series, I demonstrated the first three **ZOMBIES** principles of **Zero**, **One**, and **Many**. The first article [implemented **Z**ero][2], which provides the simplest possible path through your code. The second article [performed tests][3] with **O**ne and **M**any samples. In this third article, I'll take a look at **B**oundaries and **I**nterfaces.  + +### Back to One + +Before you can tackle **B**oundaries, you need to circle back (iterate). + +Begin by asking yourself: What are the boundaries in e-commerce? Do I need or want to limit the size of a shopping basket? (I don't think that would make any sense, actually). + +The only reasonable boundary at this point would be to make sure the shopping basket never contains a negative number of items. 
Write an executable expectation that expresses this limitation:
+
+
+```
+[Fact]
+public void Add1ItemRemoveItemRemoveAgainHas0Items() {
+        var expectedNoOfItems = 0;
+        var actualNoOfItems = -1;
+        Assert.Equal(expectedNoOfItems, actualNoOfItems);
+}
+```
+
+This says that if you add one item to the basket, remove that item, and remove it again, the `shoppingAPI` instance should say that you have zero items in the basket.
+
+Of course, this executable expectation (microtest) fails, as expected. What is the bare minimum modification you need to make to get this microtest to pass?
+
+
+```
+[Fact]
+public void Add1ItemRemoveItemRemoveAgainHas0Items() {
+        var expectedNoOfItems = 0;
+        Hashtable item = new Hashtable();
+        shoppingAPI.AddItem(item);
+        shoppingAPI.RemoveItem(item);
+        var actualNoOfItems = shoppingAPI.RemoveItem(item);
+        Assert.Equal(expectedNoOfItems, actualNoOfItems);
+}
+```
+
+This encodes an expectation that depends on the `RemoveItem(item)` capability. And because that capability is not in your `shoppingAPI`, you need to add it.
+
+Flip over to the `app` folder, open `IShoppingAPI.cs` and add the new declaration:
+
+
+```
+`int RemoveItem(Hashtable item);`
+```
+
+Go to the implementation class (`ShoppingAPI.cs`), and implement the declared capability:
+
+
+```
+public int RemoveItem(Hashtable item) {
+        basket.RemoveAt(basket.IndexOf(item));
+        return basket.Count;
+}
+```
+
+Run the system, and you get an error:
+
+![Error][5]
+
+(Alex Bunardzic, [CC BY-SA 4.0][6])
+
+The system is trying to remove an item that does not exist in the basket, and it crashes. Add a little bit of defensive programming:
+
+
+```
+public int RemoveItem(Hashtable item) {
+        if(basket.IndexOf(item) >= 0) {
+                basket.RemoveAt(basket.IndexOf(item));
+        }
+        return basket.Count;
+}
+```
+
+Before you try to remove the item from the basket, check if it is in the basket.
(You could have done this by catching the exception, but I feel the logic above is easier to read and follow.)
+
+### More specific expectations
+
+Before we move to more specific expectations, let's pause for a second and examine what is meant by interfaces. In software engineering, an interface denotes a specification, or a description of some capability. In a way, an interface in software is similar to a recipe in cooking: it lists the ingredients that make the cake, but it is not itself edible. We follow the description specified in the recipe in order to bake the cake.
+
+Similarly here, we define our service by first specifying what this service is capable of. That specification is what we call an interface. But an interface itself cannot provide any services to us. It is a mere blueprint that we then follow in order to implement the specified capabilities.
+
+So far, you have implemented the interface (partially; more capabilities will be added later) and the processing boundaries (you cannot have a negative number of items in the shopping basket). You instructed the `shoppingAPI` how to add items to the shopping basket and confirmed that the addition works by running the `Add2ItemsBasketHas2Items` test.
+
+However, just adding items to the basket does not an e-commerce app make. You need to be able to calculate the total of the items added to the basket—time to add another expectation.
+
+As is the norm by now (hopefully), start with the most straightforward expectation. When you add one item to the basket and the item price is $10, you expect the shopping API to correctly calculate the total as $10.
+
+Your fifth test (the fake version):
+
+
+```
+[Fact]
+public void Add1ItemPrice10GrandTotal10() {
+        var expectedTotal = 10.00;
+        var actualTotal = 0.00;
+        Assert.Equal(expectedTotal, actualTotal);
+}
+```
+
+Make the `Add1ItemPrice10GrandTotal10` test fail by using the good old trick: hard-coding an incorrect actual value.
Of course, your previous tests succeed, but the new fifth test fails: + + +``` +A total of 1 test files matched the specified pattern. +[xUnit.net 00:00:00.57] tests.UnitTest1.Add1ItemPrice10GrandTotal10 [FAIL] +  X tests.UnitTest1.Add1ItemPrice10GrandTotal10 [4ms] +  Error Message: +   Assert.Equal() Failure +Expected: 10 +Actual: 0 + +Test Run Failed. +Total tests: 4 +     Passed: 3 +         Failed: 1 + Total time: 1.0320 Seconds +``` + +Replace the hard-coded value with real processing. First, see if your interface declares any capability that would enable the API to calculate order totals. Nope, no such thing. So far, you have declared only three capabilities in your interface: + + 1. `int NoOfItems();` + 2. `int AddItem(Hashtable item);` + 3. `int RemoveItem(Hashtable item);` + + + +None of those indicates any ability to calculate totals. You need to declare a new capability: + + +``` +double CalculateGrandTotal(); +``` + +This new capability should enable your `shoppingAPI` to calculate the total amount by traversing the collection of items it finds in the shopping basket and adding up the item prices. + +Flip over to your tests and change the fifth test: + + +``` +[Fact] +public void Add1ItemPrice10GrandTotal10() { +        var expectedGrandTotal = 10.00; +        Hashtable item = new Hashtable(); +        item.Add("00000001", 10.00); +        shoppingAPI.AddItem(item); +        var actualGrandTotal = shoppingAPI.CalculateGrandTotal(); +        Assert.Equal(expectedGrandTotal, actualGrandTotal); +} +``` + +This test declares your expectation that if you add an item priced at $10 and then call the `CalculateGrandTotal()` method on the shopping API, it will return a grand total of $10. That is a perfectly reasonable expectation, since that is exactly how the API should calculate the total. + +How do you implement this capability? As always, fake it first. 
Flip over to the `ShippingAPI` class and implement the `CalculateGrandTotal()` method, as declared in the interface: + + +``` +public double CalculateGrandTotal() { +                return 0.00; +} +``` + +You're hard-coding the return value as 0.00, just to see if the test (your first customer) will be able to run it and whether it will fail. Indeed, it does run fine and fails, so now you must implement processing logic to calculate the grand total of the items in the shopping basket properly: + + +``` +public double CalculateGrandTotal() { +        double grandTotal = 0.00; +        foreach(var product in basket) { +                Hashtable item = product as Hashtable; +                foreach(var value in item.Values) { +                        grandTotal += Double.Parse(value.ToString()); +                } +        } +        return grandTotal; +} +``` + +Run the system. All five tests succeed! + +### From One to Many + +Time for another iteration. Now that you have built the system by iterating to handle the **Z**ero, **O**ne (both very simple and a bit more elaborate scenarios), and **B**oundary scenarios (no negative number of items in the basket), you must handle a bit more elaborate scenario for **M**any.  + +A quick note: as we keep iterating and returning back to the concerns related to **O**ne, **M**any, and **B**oundaries (we are refining our implementation), some readers may expect that we should also rework the **I**nterface. As we will see later on, our interface is already fully fleshed out, and we see no need to add more capabilities at this point. Keep in mind that interfaces should be kept lean and simple; there is not much advantage in proliferating interfaces, as that only adds more noise to the signal. Here, we are following the principle of Occam's Razor, which states that entities should not multiply without a very good reason. For now, we are pretty much done with describing the expected capabilities of our API. 
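To make that "fully fleshed out" claim concrete, the capabilities declared so far can be summarized as a single abstract contract. The sketch below is a hypothetical Python mirror of the C# interface (the names `ShoppingAPIContract`, `no_of_items`, and so on are illustrative, not part of the article's code):

```python
from abc import ABC, abstractmethod

class ShoppingAPIContract(ABC):
    """Hypothetical Python mirror of the article's C# shopping interface."""

    @abstractmethod
    def no_of_items(self) -> int: ...           # int NoOfItems();

    @abstractmethod
    def add_item(self, item: dict) -> int: ...  # int AddItem(Hashtable item);

    @abstractmethod
    def remove_item(self, item: dict) -> int: ...      # int RemoveItem(Hashtable item);

    @abstractmethod
    def calculate_grand_total(self) -> float: ...      # double CalculateGrandTotal();
```

Like the recipe analogy earlier, the contract by itself does nothing: Python refuses to instantiate `ShoppingAPIContract` until every declared capability has an implementation.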
We're now rolling up our sleeves and refining the implementation. + +The previous iteration enabled the system to handle more than one item placed in the basket. Now, enable the system to calculate the grand total for more than one item in the basket. First things first: write the executable expectation: + + +``` +[Fact] +public void Add2ItemsGrandTotal30() { +        var expectedGrandTotal = 30.00; +        var actualGrandTotal = 0.00; +        Assert.Equal(expectedGrandTotal, actualGrandTotal); +} +``` + +You "cheat" by hard-coding all values first and then do your best to make sure the expectation fails. + +And it does, so now is the time to make it pass. Modify your expectation by adding two items to the basket and then running the `CalculateGrandTotal()` method: + + +``` +[Fact] +public void Add2ItemsGrandTotal30() { +        var expectedGrandTotal = 30.00; +        Hashtable item = new Hashtable(); +        item.Add("00000001", 10.00); +        shoppingAPI.AddItem(item); +        Hashtable item2 = new Hashtable(); +        item2.Add("00000002", 20.00); +        shoppingAPI.AddItem(item2); +        var actualGrandTotal = shoppingAPI.CalculateGrandTotal(); +        Assert.Equal(expectedGrandTotal, actualGrandTotal); +} +``` + +And it passes. You now have six microtests passing successfully; the system is back to steady-state! + +### Setting expectations + +As a conscientious engineer, you want to make sure that any acrobatics of adding items to the basket and then removing some of them always produce the correct grand total. Here comes the new expectation: + + +``` +[Fact] +public void Add2ItemsRemoveFirstItemGrandTotal200() { +        var expectedGrandTotal = 200.00; +        var actualGrandTotal = 0.00; +        Assert.Equal(expectedGrandTotal, actualGrandTotal); +} +``` + +This says that when someone adds two items to the basket and then removes the first item, the expected grand total is $200.00. 
The hard-coded behavior fails, and now you can elaborate with a more specific confirmation example and run the code: + + +``` +[Fact] +public void Add2ItemsRemoveFirstItemGrandTotal200() { +        var expectedGrandTotal = 200.00; +        Hashtable item = new Hashtable(); +        item.Add("00000001", 100.00); +        shoppingAPI.AddItem(item); +        Hashtable item2 = new Hashtable(); +        item2.Add("00000002", 200.00); +        shoppingAPI.AddItem(item2); +        shoppingAPI.RemoveItem(item); +        var actualGrandTotal = shoppingAPI.CalculateGrandTotal(); +        Assert.Equal(expectedGrandTotal, actualGrandTotal); +} +``` + +Your confirmation example, coded as the expectation, adds the first item (ID "00000001" with item price $100.00) and then adds the second item (ID "00000002" with item price $200.00). You then remove the first item from the basket, calculate the grand total, and assert that it is equal to the expected value. + +When this executable expectation runs, the system meets the expectation by correctly calculating the grand total. You now have seven tests passing! The system is working; nothing is broken! + + +``` +Test Run Successful. +Total tests: 7 +     Passed: 7 + Total time: 0.9544 Seconds +``` + +### More to come + +You're up to **ZOMBI** now, so in the next article, I'll cover **E**. Until then, try your hand at some tests of your own! 
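If you would like to experiment without setting up the C# project, the final confirmation example translates readily into any xUnit-style framework. Here is a hypothetical Python sketch; the `ShoppingAPI` class below is a stand-in that mirrors the behavior built up in this article, not the article's actual code:

```python
class ShoppingAPI:
    """Stand-in implementation mirroring the behavior built in this article."""

    def __init__(self):
        self.basket = []

    def add_item(self, item):
        self.basket.append(item)
        return len(self.basket)

    def remove_item(self, item):
        # Defensive: only remove the item if it is actually in the basket
        if item in self.basket:
            self.basket.remove(item)
        return len(self.basket)

    def calculate_grand_total(self):
        # Sum the price values of every item left in the basket
        return sum(price for item in self.basket for price in item.values())


def test_add_2_items_remove_first_item_grand_total_200():
    api = ShoppingAPI()
    item1 = {"00000001": 100.00}
    item2 = {"00000002": 200.00}
    api.add_item(item1)
    api.add_item(item2)
    api.remove_item(item1)
    assert api.calculate_grand_total() == 200.00
```

Running it under pytest (or simply calling the test function) should confirm the same expectation: two items added, the first removed, and a grand total of 200.00.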
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/boundaries-interfaces + +作者:[Alex Bunardzic][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alex-bunardzic +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey) +[2]: https://opensource.com/article/21/1/zombies-zero +[3]: https://opensource.com/article/21/1/zombies-2-one-many +[4]: http://www.google.com/search?q=new+msdn.microsoft.com +[5]: https://opensource.com/sites/default/files/uploads/error_0.png (Error) +[6]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/sources/tech/20210203 Improve your productivity with this Linux automation tool.md b/sources/tech/20210203 Improve your productivity with this Linux automation tool.md new file mode 100644 index 0000000000..9bd1ea3953 --- /dev/null +++ b/sources/tech/20210203 Improve your productivity with this Linux automation tool.md @@ -0,0 +1,190 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Improve your productivity with this Linux automation tool) +[#]: via: (https://opensource.com/article/21/2/linux-autokey) +[#]: author: (Matt Bargenquast https://opensource.com/users/mbargenquast) + +Improve your productivity with this Linux automation tool +====== +Configure your keyboard to correct common typos, enter frequently used +phrases, and more with AutoKey. +![Linux keys on the keyboard for a desktop computer][1] + +[AutoKey][2] is an open source Linux desktop automation tool that, once it's part of your workflow, you'll wonder how you ever managed without. 
It can be a transformative tool to improve your productivity or simply a way to reduce the physical stress associated with typing. + +This article will look at how to install and start using AutoKey, cover some simple recipes you can immediately use in your workflow, and explore some of the advanced features that AutoKey power users may find attractive. + +### Install and set up AutoKey + +AutoKey is available as a software package on many Linux distributions. The project's [installation guide][3] contains directions for many platforms, including building from source. This article uses Fedora as the operating platform. + +AutoKey comes in two variants: autokey-gtk, designed for [GTK][4]-based environments such as GNOME, and autokey-qt, which is [Qt][5]-based. + +You can install either variant from the command line: + + +``` +sudo dnf install autokey-gtk +``` + +Once it's installed, run it by using `autokey-gtk` (or `autokey-qt`). + +### Explore the interface + +Before you set AutoKey to run in the background and automatically perform actions, you will first want to configure it. Bring up the configuration user interface (UI): + + +``` +autokey-gtk -c +``` + +AutoKey comes preconfigured with some examples. You may wish to leave them while you're getting familiar with the UI, but you can delete them if you wish. + +![AutoKey UI][6] + +(Matt Bargenquast, [CC BY-SA 4.0][7]) + +The left pane contains a folder-based hierarchy of phrases and scripts. _Phrases_ are text that you want AutoKey to enter on your behalf. _Scripts_ are dynamic, programmatic equivalents that can be written using Python and achieve basically the same result of making the keyboard send keystrokes to an active window. + +The right pane is where the phrases and scripts are built and configured. + +Once you're happy with your configuration, you'll probably want to run AutoKey automatically when you log in so that you don't have to start it up every time. 
You can configure this in the **Preferences** menu (**Edit -> Preferences**) by selecting **Automatically start AutoKey at login**. + +![Automatically start AutoKey at login][8] + +(Matt Bargenquast, [CC BY-SA 4.0][7]) + +### Correct common typos with AutoKey + +Fixing common typos is an easy problem for AutoKey to fix. For example, I consistently type "gerp" instead of "grep." Here's how to configure AutoKey to fix these types of problems for you. + +Create a new subfolder where you can group all your "typo correction" configurations. Select **My Phrases** in the left pane, then **File -> New -> Subfolder**. Name the subfolder **Typos**. + +Create a new phrase in **File -> New -> Phrase**, and call it "grep." + +Configure AutoKey to insert the correct word by highlighting the phrase "grep" then entering "grep" in the **Enter phrase contents** section (replacing the default "Enter phrase contents" text). + +Next, set up how AutoKey triggers this phrase by defining an Abbreviation. Click the **Set** button next to **Abbreviations** at the bottom of the UI. + +In the dialog box that pops up, click the **Add** button and add "gerp" as a new abbreviation. Leave **Remove typed abbreviation** checked; this is what instructs AutoKey to replace any typed occurrence of the word "gerp" with "grep." Leave **Trigger when typed as part of a word** unchecked so that if you type a word containing "gerp" (such as "fingerprint"), it _won't_ attempt to turn that into "fingreprint." It will work only when "gerp" is typed as an isolated word. + +![Set abbreviation in AutoKey][9] + +(Matt Bargenquast, [CC BY-SA 4.0][7]) + +### Restrict corrections to specific applications + +You may want a correction to apply only when you make the typo in certain applications (such as a terminal window). You can configure this by setting a Window Filter. Click the **Set** button to define one. + +The easiest way to set a Window Filter is to let AutoKey detect the window type for you: + + 1. 
Start a new terminal window. + 2. Back in AutoKey, click the **Detect Window Properties** button. + 3. Click on the terminal window. + + + +This will auto-populate the Window Filter, likely with a Window class value of `gnome-terminal-server.Gnome-terminal`. This is sufficient, so click **OK**. + +![AutoKey Window Filter][10] + +(Matt Bargenquast, [CC BY-SA 4.0][7]) + +### Save and test + +Once you're satisfied with your new configuration, make sure to save it. Click **File** and choose **Save** to make the change active. + +Now for the grand test! In your terminal window, type "gerp" followed by a space, and it should automatically correct to "grep." To validate that the Window Filter is working, try typing the word "gerp" in a browser URL bar or some other application. It should not change. + +You may be thinking that this problem could have been solved just as easily with a [shell alias][11], and I'd totally agree! Unlike aliases, which are command-line oriented, AutoKey can correct mistakes regardless of what application you're using. + +For example, another common typo I make is "openshfit" instead of "openshift," which I type into browsers, integrated development environments, and terminals. Aliases can't quite help with this problem, whereas AutoKey can correct it on any occasion. 
Whenever I press that hotkey, it opens a menu where I can select (either with **Arrow key**+**Enter** or using a number) the phrase I want to insert. This cuts down on the number of keystrokes I need to enter those commands to just a few keystrokes. + +AutoKey's pre-configured examples in the **My Phrases** folder are configured with a **Ctrl**+**F7** hotkey. If you kept the examples in AutoKey's default configuration, try it out. You should see a menu of all the phrases available there. Select the item you want with the number or arrow keys. + +### Advanced AutoKeying + +AutoKey's [scripting engine][12] allows users to run Python scripts that can be invoked through the same abbreviation and hotkey system. These scripts can do things like switching windows, sending keystrokes, or performing mouse clicks through supporting API functions. + +AutoKey users have embraced this feature by publishing custom scripts for others to adopt. For example, the [NumpadIME script][13] transforms a numeric keyboard into an old cellphone-style text entry method, and [Emojis-AutoKey][14] makes it easy to insert emojis by converting phrases such as `:smile:` into their emoji equivalent. + +Here's a small script I set up that enters Tmux's copy mode to copy the first word from the preceding line into the paste buffer: + + +``` +from time import sleep + +# Send the tmux command prefix (changed from b to s) +keyboard.send_keys("<ctrl>+s") +# Enter copy mode +keyboard.send_key("[") +sleep(0.01) +# Move cursor up one line +keyboard.send_keys("k") +sleep(0.01) +# Move cursor to start of line +keyboard.send_keys("0") +sleep(0.01) +# Start mark +keyboard.send_keys(" ") +sleep(0.01) +# Move cursor to end of word +keyboard.send_keys("e") +sleep(0.01) +# Add to copy buffer +keyboard.send_keys("<ctrl>+m") +``` + +The sleeps are there because occasionally Tmux can't keep up with how fast AutoKey sends the keystrokes, and they have a negligible effect on the overall execution time. 
+ +### Automate with AutoKey + +I hope you've enjoyed this excursion into keyboard automation with AutoKey and it gives you some bright ideas about how it can improve your workflow. If you're using AutoKey in a helpful or novel way, be sure to share it in the comments below. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-autokey + +作者:[Matt Bargenquast][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mbargenquast +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer) +[2]: https://github.com/autokey/autokey +[3]: https://github.com/autokey/autokey/wiki/Installing +[4]: https://www.gtk.org/ +[5]: https://www.qt.io/ +[6]: https://opensource.com/sites/default/files/uploads/autokey-defaults.png (AutoKey UI) +[7]: https://creativecommons.org/licenses/by-sa/4.0/ +[8]: https://opensource.com/sites/default/files/uploads/startautokey.png (Automatically start AutoKey at login) +[9]: https://opensource.com/sites/default/files/uploads/autokey-set_abbreviation.png (Set abbreviation in AutoKey) +[10]: https://opensource.com/sites/default/files/uploads/autokey-window_filter.png (AutoKey Window Filter) +[11]: https://opensource.com/article/19/7/bash-aliases +[12]: https://autokey.github.io/index.html +[13]: https://github.com/luziferius/autokey_scripts +[14]: https://github.com/AlienKevin/Emojis-AutoKey diff --git a/sources/tech/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md b/sources/tech/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md new file mode 100644 index 0000000000..dd90ccba6b --- /dev/null +++ 
b/sources/tech/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md @@ -0,0 +1,253 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Tweaks to Customize the Look of Your Linux Terminal) +[#]: via: (https://itsfoss.com/customize-linux-terminal/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +5 Tweaks to Customize the Look of Your Linux Terminal +====== + +The terminal emulator or simply the terminal is an integral part of any Linux distribution. + +When you change the theme of your distribution, often the terminal also gets a makeover automatically. But that doesn’t mean you cannot customize the terminal further. + +In fact, many It’s FOSS readers have asked us how come the terminal in our screenshots or videos look so cool, what fonts do we use, etc. + +To answer this frequent question, I’ll show you some simple and some complex tweaks to change the appearance of the terminal. You can compare the visual difference in the image below: + +![][1] + +### Customizing Linux Terminal + +_This tutorial utilizes a GNOME terminal on Pop!_OS to customize and tweak the look of the terminal. But, most of the advice should be applicable to other terminals as well._ + +For most of the elements like color, transparency, and fonts, you can utilize the GUI to tweak it without requiring to enter any special commands. + +Open your terminal. In the top right corner, look for the hamburger menu. In here, click on “**Preferences**” as shown in the screenshot below: + +![][2] + +This is where you’ll find all the settings to change the appearance of the terminal. + +#### Tip 0: Use separate terminal profiles for your customization + +I would advise you to create a new profile for your customization. Why? Because this way, your changes won’t impact the main terminal profile. Suppose you make some weird change and cannot recall the default value? 
Profiles help separate the customization. + +As you can see, Abhishek has separate profiles for taking screenshots and making videos. + +![Terminal Profiles][3] + +You can easily change the terminal profiles and open a new terminal window with the new profile. + +![Change Terminal Profile][4] + +That was the suggestion I wanted to put forward. Now, let's see those tweaks. + +#### Tip 1: Use a dark/light terminal theme + +If you change the system theme, the terminal theme gets changed along with it. Apart from that, you may switch between the dark and light themes if you do not want to change the system theme. + +Once you head into the preferences, you will notice the general options to change the theme and other settings. + +![][5] + +#### Tip 2: Change the font and size + +Select the profile that you want to customize. Now you'll get the option to customize the text appearance, font size, font style, spacing, cursor shape, and toggle the terminal bell sound as well. + +For the fonts, you can only change to what's available on your system. If you want something different, download and install the font on your Linux system first. + +One more thing! Use monospaced fonts; otherwise, fonts might overlap and the text may not be clearly readable. If you want suggestions, go with [Share Tech Mono][6] (open source) or [Larabiefont][7] (not open source). + +Under the Text tab, select Custom font and then change the font and its size (if required). + +![][8] + +#### Tip 3: Change the color palette and transparency + +Apart from the text and spacing, you can access the “Colors” tab and change the color of the text and background of your terminal. You can also adjust the transparency to make it look even cooler. + +As you can notice, you can change the color palette from a set of pre-configured options or tweak it yourself. + +![][9] + +If you want to enable transparency just like I did, click on the “**Use transparent background**” option. 
+ +You can also choose to use colors from your system theme, if you want a color setting similar to your theme. + +![][10] + +#### Tip 4: Tweaking the bash prompt variables + +Usually, you will see your username along with the hostname (by default, your distribution's name) as the bash prompt when launching the terminal without any changes. + +For instance, it would be “ankushdas**@**pop-os**:~$**” in my case. However, I [permanently changed the hostname][11] to “**itsfoss**“, so now it looks like: + +![][12] + +To change the hostname, you can type in: + +``` +hostname CUSTOM_NAME +``` + +However, this will be applicable only for the current session. So, when you restart, it will revert to the default. To permanently change the hostname, you need to type in: + +``` +sudo hostnamectl set-hostname CUSTOM_NAME +``` + +Similarly, you can also change your username, but it requires some additional configuration that includes killing all the current processes associated with the active username, so we'll skip that for changing the look/feel of the terminal. + +#### Tip 5: NOT RECOMMENDED: Changing the font and color of the bash prompt (for advanced users) + +However, you can tweak the font and color of the bash prompt (**ankushdas@itsfoss:~$**) using commands. + +You will need to utilize the **PS1** environment variable, which controls what is being displayed as the prompt. You can learn more about it in the [man page][14]. 
+ +For instance, when you type in: + +``` +echo $PS1 +``` + +The output in my case is: + +``` +\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ +``` + +We need to focus on the first part of the output: + +``` +\[\e]0;\u@\h: \w\a\]$ +``` + +Here, you need to know the following: + + * **\e** is the escape character, which denotes the start of an escape (color) sequence + * **\u** indicates the username followed by the @ symbol + * **\h** denotes the hostname of the system + * **\w** denotes the current working directory + * **\a** is the bell character (here, it ends the terminal title sequence) + * **$** indicates a non-root user + + + +The output in your case can be different, but the variables will be the same, so you need to play with the commands mentioned below depending on your output. + +Before you do that, keep these in mind: + + * Codes for text format: **0** for normal text, **1** for bold, **3** for italic and **4** for underline text + * Color range for background colors: **40-47** + * Color range for text color: **30-37** + + + +You just need to type in the following to change the color and font: + +``` +PS1="\e[41;3;32m[\u@\h:\w\a\$]" +``` + +This is how your bash prompt will look after typing the command: + +![][15] + +If you look at the command closely, as mentioned above, \e helps us assign a color sequence. + +In the command above, I've assigned a **background color first**, then the **text style**, and then the **font color** followed by “**m**“. + +Here, “**m**“ indicates the end of the color sequence. + +So, all you have to do is, play around with this part: + +``` +41;3;32 +``` + +Rest of the command should remain the same; you just need to assign different numbers to change the background color, text style, and text color. 
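If you want to preview a combination of codes before touching PS1 at all, you can print the same escape sequence from any language. Here is a small Python sketch (the codes follow the table above; `user@host` is placeholder text, not your actual prompt):

```python
# Preview the 41;3;32 sequence: red background (41), italic (3), green text (32).
# "\033" is the same escape character that PS1 writes as \e; "\033[0m" resets styling.
preview = "\033[41;3;32m" + "[user@host:~$]" + "\033[0m"
print(preview)
```

Swap the three numbers around in the string to see the other combinations without risking your prompt.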
+ +Do note that this is in no particular order: you can assign the text style first, background color next, and the text color at the end as “**3;41;32**“, where the command becomes: + +``` +PS1="\e[3;41;32m[\u@\h:\w\a\$]" +``` + +![][16] + +As you can notice, the color customization is the same no matter the order. So, just keep in mind the codes for customization and play around with it till you're sure you want this as a permanent change. + +The above command that I mentioned temporarily customizes the bash prompt for the current session. If you close the session, you will lose the customization. + +So, to make this a permanent change, you need to add it to the **.bashrc** file (a configuration file that loads every time you start a session). + +![][17] + +You can access the file by simply typing: + +``` +nano ~/.bashrc +``` + +Unless you're sure what you're doing, do not change anything. And, just so you can restore the settings later, you should keep a backup of the PS1 environment variable (copy-paste its default contents) in a text file. + +So, even if you need the default font and color, you can again edit the **.bashrc** file and paste the PS1 environment variable. + +#### Bonus Tip: Change the terminal color palette based on your wallpaper + +If you want to change the background and text color of the terminal but you are not sure which colors to pick, you can use a Python-based tool called Pywal. It [automatically changes the color of the terminal based on your wallpaper][18] or the image you provide to it. 
But, the need to know the commands is also necessary in case you start [using WSL][21] or access a remote server using SSH, you can customize your experience no matter what. + +How do you customize the Linux terminal? Share your secret ricing recipe with us in the comments. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/customize-linux-terminal/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/default-terminal.jpg?resize=773%2C493&ssl=1 +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal-preferences.jpg?resize=800%2C350&ssl=1 +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/terminal-profiles.jpg?resize=800%2C619&ssl=1 +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/change-terminal-profile.jpg?resize=796%2C347&ssl=1 +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-theme.jpg?resize=800%2C363&ssl=1 +[6]: https://fonts.google.com/specimen/Share+Tech+Mono +[7]: https://www.dafont.com/larabie-font.font +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-customization-1.jpg?resize=800%2C500&ssl=1 +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-color-customization.jpg?resize=759%2C607&ssl=1 +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal.jpg?resize=800%2C571&ssl=1 +[11]: https://itsfoss.com/change-hostname-ubuntu/ +[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/itsfoss-hostname.jpg?resize=800%2C188&ssl=1 +[13]: https://itsfoss.com/cdn-cgi/l/email-protection +[14]: https://linux.die.net/man/1/bash +[15]: 
https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-bash-prompt-customization.jpg?resize=800%2C190&ssl=1 +[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal-customization-1s.jpg?resize=800%2C158&ssl=1 +[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/bashrch-customization-terminal.png?resize=800%2C615&ssl=1 +[18]: https://itsfoss.com/pywal/ +[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/wallpy-2.jpg?resize=800%2C442&ssl=1 +[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/pywal-linux.jpg?fit=800%2C450&ssl=1 +[21]: https://itsfoss.com/install-bash-on-windows/ diff --git a/sources/tech/20210204 A guide to understanding Linux software libraries in C.md b/sources/tech/20210204 A guide to understanding Linux software libraries in C.md new file mode 100644 index 0000000000..fe977cd22f --- /dev/null +++ b/sources/tech/20210204 A guide to understanding Linux software libraries in C.md @@ -0,0 +1,507 @@ +[#]: collector: (lujun9972) +[#]: translator: (mengxinayan) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A guide to understanding Linux software libraries in C) +[#]: via: (https://opensource.com/article/21/2/linux-software-libraries) +[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu) + +A guide to understanding Linux software libraries in C +====== +Software libraries are an easy and sensible way to reuse code. +![5 pengiuns floating on iceburg][1] + +Software libraries are a longstanding, easy, and sensible way to reuse code. This article explains how to build libraries from scratch and make them available to clients. Although the two sample libraries target Linux, the steps for creating, publishing, and using these libraries apply to other Unix-like systems. + +The sample libraries are written in C, which is well suited to the task. The Linux kernel is written mostly in C with the rest in assembly language. 
(The same goes for Windows and Linux cousins such as macOS.) The standard system libraries for input/output, networking, string processing, mathematics, security, data encoding, and so on are likewise written mainly in C. To write a library in C is therefore to write in Linux's native language. Moreover, C sets the mark for performance among high-level languages. + +There are also two sample clients (one in C, the other in Python) to access the libraries. It's no surprise that a C client can access a library written in C, but the Python client illustrates that a library written in C can serve clients from other languages. + +### Static versus dynamic libraries + +Linux systems have two types of libraries: + + * A **static library (aka library archive)** is baked into a statically compiled client (e.g., one in C or Rust) during the compilation process' link phase. In effect, each client gets its own copy of the library. A significant downside of a static library comes to the fore if the library needs to be revised (for example, to fix a bug), as each library client must be relinked to the static library. A dynamic library, described next, avoids this shortcoming. + * A **dynamic (aka shared) library** is flagged during a statically compiled client program's link phase, but the client program and the library code remain otherwise unconnected until runtime—the library code is not baked into the client. At runtime, the system's dynamic loader connects a shared library with an executing client, regardless of whether the client comes from a statically compiled language, such as C, or a dynamically compiled language, such as Python. As a result, a dynamic library can be updated without inconveniencing clients. Finally, multiple clients can share a single copy of a dynamic library. + + + +In general, dynamic libraries are preferred over static ones, although there is a cost in complexity and performance. Here's how each library type is created and published: + + 1. 
The source code for the library is compiled into one or more object modules, which are binary files that can be included in a library and linked to executable clients. + 2. The object modules are packaged into a single file. For a static library, the standard extension is `.a` for "archive." For a dynamic library, the extension is `.so` for "shared object." The two sample libraries, which have the same functionality, are published as the files `libprimes.a` (static) and `libshprimes.so` (dynamic). The prefix `lib` is used for both types of library. + 3. The library file is copied to a standard directory so that client programs, without fuss, can access the library. A typical location for the library, whether static or dynamic, is `/usr/lib` or `/usr/local/lib`; other locations are possible. + + + +Detailed steps for building and publishing each type of library are coming shortly. First, however, I will introduce the C functions in the two libraries. + +### The sample library functions + +The two sample libraries are built from the same five C functions, four of which are accessible to client programs. The fifth function, which is a utility for one of the other four, shows how C supports hiding information. The source code for each function is short enough that the functions can be housed in a single source file, although multiple source files (e.g., one per each of the four published functions) is an option. + +The library functions are elementary and deal, in various ways, with prime numbers. All of the functions expect unsigned (i.e., non-negative) integer values as arguments: + + * The `is_prime` function tests whether its single argument is a prime. + * The `are_coprimes` function checks whether its two arguments have a greatest common divisor (gcd) of 1, which defines co-primes. + * The `prime_factors` function lists the prime factors of its argument. 
+ * The `goldbach` function expects an even integer value of 4 or more, listing whichever two primes sum to this argument; there may be multiple summing pairs. The function is named after the 18th-century mathematician [Christian Goldbach][2], whose conjecture that every even integer greater than two is the sum of two primes remains one of the oldest unsolved problems in number theory. + + + +The utility function `gcd` resides in the deployed library files, but this function is not accessible outside of its containing file; hence, a library client cannot directly invoke the `gcd` function. A closer look at C functions clarifies the point. + +### More on C functions + +Every function in C has a storage class, which determines the function's scope. For functions there are two options: + + * The default storage class for functions is `extern`, which gives a function global scope. A client program can call any `extern` function in the sample libraries. Here's the definition for the function `are_coprimes` with an explicit `extern`: [code] extern unsigned are_coprimes(unsigned n1, unsigned n2) { +  ... +} +``` + * The storage class `static` limits a function's scope to the file in which the function is defined. In the sample libraries, the utility function `gcd` is `static`: [code] static unsigned gcd(unsigned n1, unsigned n2) { +  ... +} +``` + + + +Only functions within the `primes.c` file can invoke `gcd`, and only the function `are_coprimes` does so. When the static and dynamic libraries are built and published, other programs can call an `extern` function such as `are_coprimes` but not the `static` function `gcd`. The `static` storage class thus hides the `gcd` function from library clients by limiting the function's scope to the other library functions. + +The functions other than `gcd` in the `primes.c` file need not specify a storage class, which would default to `extern`. However, it is common in libraries to make the `extern` explicit. 
+ +C distinguishes between function definitions and declarations, which is important for libraries. Let's start with definitions. C has named functions only, and every function is defined with: + + * A unique name. No two functions in a program can have the same name. + * An argument list, which may be empty. The arguments are typed. + * Either a return value type (e.g., `int` for a 32-bit signed integer) or `void` if there is no value returned. + * A body enclosed in curly braces. In a contrived example, the body could be empty. + + + +Every function in a program must be defined exactly once. + +Here's the full definition for the library function `are_coprimes`: + + +``` +extern unsigned are_coprimes(unsigned n1, unsigned n2) { /* definition */ +  return 1 == gcd(n1, n2); /* greatest common divisor of 1? */ +} +``` + +The function returns a boolean value (either 0 for false or 1 for true), depending on whether the two integer arguments have a greatest common divisor of 1. The utility function `gcd` computes the greatest common divisor of integer arguments `n1` and `n2`. + +A function declaration, unlike a definition, does not have a body: + + +``` +`extern unsigned are_coprimes(unsigned n1, unsigned n2); /* declaration */` +``` + +The declaration ends with a semicolon after the argument list; there are no curly braces enclosing a body. A function may be declared multiple times within a program. + +Why are declarations needed at all? In C, a called function must be visible to its caller. There are various ways to provide such visibility, depending on how fussy the compiler is. 
One sure way is to define the called function above its caller when both reside in the same file: + + +``` +void f() {...}     /* f is defined before being called */ +void g() { f(); }  /* ok */ +``` + +The definition of function `f` could be moved below the call from function `g` if `f` were declared above the call: + + +``` +void f();         /* declaration makes f visible to caller */ +void g() { f(); } /* ok */ +void f() {...}    /* easier to put this above the call from g */ +``` + +But what if the called function resides in a different file than its caller? How are functions defined in one file made visible in another file, given that each function must be defined exactly once in a program? + +This issue impacts libraries, whether static or dynamic. For example, the functions in the two primes libraries are defined in the source file `primes.c`, binary copies of which are in each library; but these defined functions must be visible to a library client in C, which is a separate program with its own source file(s). + +Providing visibility across files is what function declarations can do. For the "primes" examples, there is a header file named `primes.h` that declares the four functions to be made visible to library clients in C: + + +``` +/** header file primes.h: function declarations **/ +extern unsigned is_prime(unsigned); +extern void prime_factors(unsigned); +extern unsigned are_coprimes(unsigned, unsigned); +extern void goldbach(unsigned); +``` + +These declarations serve as an interface by specifying the invocation syntax for each function. + +For client convenience, the text file `primes.h` could be stored in a directory on the C compiler's search path. Typical locations are `/usr/include` and `/usr/local/include`. A C client would `#include` this header file near the top of the client's source code. (A header file is thus imported into the "head" of another source file.) 
C header files also serve as input to utilities (e.g., Rust's `bindgen`) that enable clients in other languages to access a C library. + +In summary, a library function is defined exactly once but declared wherever needed; any library client in C needs the declaration. A header file should contain function declarations—but not function definitions. If a header file did contain definitions, the file might be included multiple times in a C program, thereby breaking the rule that a function must be defined exactly once in a C program. + +### The library source code + +Below is the source code for two libraries. This code, the header file, and the two sample clients are available on my [website][3]. + +**The library functions** + + +``` +#include <stdio.h> +#include <math.h> + +extern unsigned is_prime(unsigned n) { +  if (n <= 3) return n > 1;                   /* 2 and 3 are prime */ +  if (0 == (n % 2) || 0 == (n % 3)) return 0; /* multiples of 2 or 3 aren't */ + +  /* check that n is not a multiple of other values < n */ +  unsigned i; +  for (i = 5; (i * i) <= n; i += 6) +    if (0 == (n % i) || 0 == (n % (i + 2))) return 0; /* not prime */ + +  return 1; /* a prime other than 2 or 3 */ +} + +extern void prime_factors(unsigned n) { +  /* list 2s in n's prime factorization */ +  while (0 == (n % 2)) {   +    printf("%i ", 2); +    n /= 2; +  } + +  /* 2s are done, the divisor is now odd */ +  unsigned i; +  for (i = 3; i <= sqrt(n); i += 2) { +    while (0 == (n % i)) { +      printf("%i ", i); +      n /= i; +    } +  } + +  /* one more prime factor?
*/ +  if (n > 2) printf("%i", n); +} + +/* utility function: greatest common divisor */ +static unsigned gcd(unsigned n1, unsigned n2) { +  while (n1 != 0) { +    unsigned n3 = n1; +    n1 = n2 % n1; +    n2 = n3; +  } +  return n2; +} + +extern unsigned are_coprimes(unsigned n1, unsigned n2) { +  return 1 == gcd(n1, n2); +} + +extern void goldbach(unsigned n) { +  /* input errors */ +  if ((n <= 2) || ((n & 0x01) > 0)) { +    printf("Number must be > 2 and even: %i is not.\n", n); +    return; +  } + +  /* two simple cases: 4 and 6 */ +  if ((4 == n) || (6 == n)) { +    printf("%i = %i + %i\n", n, n / 2, n / 2); +    return; +  } +  +  /* for n >= 8: multiple possibilities for many */ +  unsigned i; +  for (i = 3; i < (n / 2); i++) { +    if (is_prime(i) && is_prime(n - i)) { +      printf("%i = %i + %i\n", n, i, n - i); +      /* if one pair is enough, replace this with break */ +    } +  } +} +``` + +These functions serve as grist for the library mill. The two libraries derive from exactly the same source code, and the header file `primes.h` is the C interface for both libraries. + +### Building the libraries + +The steps for building and publishing static and dynamic libraries differ in a few details. Only three steps are required for the static library and just two more for the dynamic library. The additional steps in building the dynamic library reflect the added flexibility of the dynamic approach. Let's start with the static library. + +The library source file `primes.c` is compiled into an object module. Here's the command, with the percent sign as the system prompt (double sharp signs introduce my comments): + + +``` +`% gcc -c primes.c ## step 1 static` +``` + +This produces the binary file `primes.o`, the object module. The flag `-c` means compile only.
+ +The next step is to archive the object module(s) by using the Linux `ar` utility: + + +``` +`% ar -cvq libprimes.a primes.o ## step 2 static` +``` + +The three flags `-cvq` are short for "create," "verbose," and "quick append" (in case new files must be added to an archive). Recall that the prefix `lib` is standard, but the library name is arbitrary. Of course, the file name for a library must be unique to avoid conflicts. + +The archive is ready to be published: + + +``` +`% sudo cp libprimes.a /usr/local/lib ## step 3 static` +``` + +The static library is now accessible to clients, examples of which are forthcoming. (The `sudo` is included to ensure the correct access rights for copying a file into `/usr/local/lib`.) + +The dynamic library also requires one or more object modules for packaging: + + +``` +`% gcc primes.c -c -fpic ## step 1 dynamic` +``` + +The added flag `-fpic` directs the compiler to generate position-independent code, which is a binary module that need not be loaded into a fixed memory location. Such flexibility is critical in a system of multiple dynamic libraries. The resulting object module is slightly larger than the one generated for the static library. + +Here's the command to create the single library file from the object module(s): + + +``` +`% gcc -shared -Wl,-soname,libshprimes.so -o libshprimes.so.1 primes.o ## step 2 dynamic` +``` + +The flag `-shared` indicates that the library is shared (dynamic) rather than static. The `-Wl` flag introduces a list of linker options, the first of which sets the dynamic library's `soname`, which is required. The `soname` first specifies the library's logical name (`libshprimes.so`) and then, following the `-o` flag, the library's physical file name (`libshprimes.so.1`). The goal is to keep the logical name constant while allowing the physical file name to change with new versions.
In this example, the 1 at the end of the physical file name `libshprimes.so.1` represents the first version of the library. The logical and physical file names could be the same, but best practice is to have separate names. A client accesses the library through its logical name (in this case, `libshprimes.so`), as I will clarify shortly. + +The next step is to make the shared library easily accessible to clients by copying it to the appropriate directory; for example, `/usr/local/lib` again: + + +``` +`% sudo cp libshprimes.so.1 /usr/local/lib ## step 3 dynamic` +``` + +A symbolic link is now set up between the shared library's logical name (`libshprimes.so`) and its full physical file name (`/usr/local/lib/libshprimes.so.1`). It's easiest to give the command with `/usr/local/lib` as the working directory: + + +``` +`% sudo ln --symbolic libshprimes.so.1 libshprimes.so ## step 4 dynamic` +``` + +The logical name `libshprimes.so` should not change, but the target of the symbolic link (`libshprimes.so.1`) can be updated as needed for new library implementations that fix bugs, boost performance, and so on. + +The final step (a precautionary one) is to invoke the `ldconfig` utility, which configures the system's dynamic loader. This configuration ensures that the loader will find the newly published library: + + +``` +`% sudo ldconfig ## step 5 dynamic` +``` + +The dynamic library is now ready for clients, including the two sample ones that follow. + +### A C library client + +The sample C client is the program tester, whose source code begins with two `#include` directives: + + +``` +#include <stdio.h>  /* standard input/output functions */ +#include <primes.h> /* my library functions */ +``` + +The angle brackets around the file names indicate that these header files are to be found on the compiler's search path (in the case of `primes.h`, the directory `/usr/local/include`).
Without this `#include`, the compiler would complain about missing declarations for functions such as `is_prime` and `prime_factors`, which are published in both libraries. By the way, the source code for the tester program need not change at all to test each of the two libraries. + +By contrast, the source file for the library (`primes.c`) opens with these `#include` directives: + + +``` +#include <stdio.h> +#include <math.h> +``` + +The header file `math.h` is required because the library function `prime_factors` calls the mathematics function `sqrt` in the standard library `libm.so`. + +For reference, here is the source code for the tester program: + +**The tester program** + + +``` +#include <stdio.h> +#include <primes.h> + +int main() { +  /* is_prime */ +  printf("\nis_prime\n"); +  unsigned i, count = 0, n = 1000; +  for (i = 1; i <= n; i++) { +    if (is_prime(i)) { +      count++; +      if (1 == (i % 100)) printf("Sample prime ending in 1: %i\n", i); +    } +  } +  printf("%i primes in range of 1 to a thousand.\n", count); + +  /* prime_factors */ +  printf("\nprime_factors\n"); +  printf("prime factors of 12: "); +  prime_factors(12); +  printf("\n"); +  +  printf("prime factors of 13: "); +  prime_factors(13); +  printf("\n"); +  +  printf("prime factors of 876,512,779: "); +  prime_factors(876512779); +  printf("\n"); + +  /* are_coprimes */ +  printf("\nare_coprime\n"); +  printf("Are %i and %i coprime? %s\n", +         21, 22, are_coprimes(21, 22) ? "yes" : "no"); +  printf("Are %i and %i coprime? %s\n", +         21, 24, are_coprimes(21, 24) ? "yes" : "no"); + +  /* goldbach */ +  printf("\ngoldbach\n"); +  goldbach(11);    /* error */ +  goldbach(4);     /* small one */ +  goldbach(6);     /* another */ +  for (i = 100; i <= 150; i += 2) goldbach(i); + +  return 0; +} +``` + +In compiling `tester.c` into an executable, the tricky part is the order of the link flags.
Recall that the two sample libraries begin with the prefix `lib`, and each has the usual extension: `.a` for the static library `libprimes.a` and `.so` for the dynamic library `libshprimes.so`. In a link specification, the prefix `lib` and the extension fall away. A link flag begins with `-l` (lowercase L), and a compilation command may contain many link flags. Here is the full compilation command for the tester program, using the dynamic library as the example: + + +``` +`% gcc -o tester tester.c -lshprimes -lm` +``` + +The first link flag identifies the library `libshprimes.so` and the second link flag identifies the standard mathematics library `libm.so`. + +The linker is lazy, which means that the order of the link flags matters. For example, reversing the order of the link specifications generates a compile-time error: + + +``` +`% gcc -o tester tester.c -lm -lshprimes ## danger!` +``` + +The flag that links to `libm.so` comes first, but no function from this library is invoked explicitly in the tester program; hence, the linker does not link to the `libm.so` library. The call to the `sqrt` library function occurs only in the `prime_factors` function that is now contained in the `libshprimes.so` library. The resulting error in compiling the tester program is: + + +``` +`primes.c: undefined reference to 'sqrt'` +``` + +Accordingly, the order of the link flags should notify the linker that the `sqrt` function is needed: + + +``` +`% gcc -o tester tester.c -lshprimes -lm ## -lshprimes 1st` +``` + +The linker picks up the call to the library function `sqrt` in the `libshprimes.so` library and, therefore, does the appropriate link to the mathematics library `libm.so`. There is a more complicated option for linking that supports either link-flag order; in this case, however, the easy way is to arrange the link flags appropriately.
+ +Here is some output from a run of the tester client: + + +``` +is_prime +Sample prime ending in 1: 101 +Sample prime ending in 1: 401 +... +168 primes in range of 1 to a thousand. + +prime_factors +prime factors of 12: 2 2 3 +prime factors of 13: 13 +prime factors of 876,512,779: 211 4154089 + +are_coprime +Are 21 and 22 coprime? yes +Are 21 and 24 coprime? no + +goldbach +Number must be > 2 and even: 11 is not. +4 = 2 + 2 +6 = 3 + 3 +... +32 =  3 + 29 +32 = 13 + 19 +... +100 =  3 + 97 +100 = 11 + 89 +... +``` + +For the `goldbach` function, even a relatively small even value (e.g., 18) may have multiple pairs of primes that sum to it (in this case, 5+13 and 7+11). Such multiple prime pairs are among the factors that complicate an attempted proof of Goldbach's conjecture. + +### Wrapping up with a Python client + +Python, unlike C, is not a statically compiled language, which means that the sample Python client must access the dynamic rather than the static version of the primes library. To do so, Python has various modules (standard and third-party) that support a foreign function interface (FFI), which allows a program written in one language to invoke functions written in another. Python `ctypes` is a standard and relatively simple FFI that enables Python code to call C functions. + +Any FFI has challenges because the interfacing languages are unlikely to have exactly the same data types. For example, the primes library uses the C type `unsigned int`, which Python does not have; the `ctypes` FFI maps a C `unsigned int` to a Python `int`. Of the four `extern` C functions published in the primes library, two behave better in Python with explicit `ctypes` configuration. + +The C functions `prime_factors` and `goldbach` have `void` instead of a return type, but `ctypes` by default replaces the C `void` with the Python `int`. When called from Python code, the two C functions then return a random (hence, meaningless) integer value from the stack. 
However, `ctypes` can be configured to have the functions return `None` (Python's null type) instead. Here's the configuration for the `prime_factors` function: + + +``` +`primes.prime_factors.restype = None` +``` + +A similar statement handles the `goldbach` function. + +The interactive session below (in Python 3) shows that the interface between a Python client and the primes library is straightforward: + + +``` +>>> from ctypes import cdll + +>>> primes = cdll.LoadLibrary("libshprimes.so") ## logical name + +>>> primes.is_prime(13) +1 +>>> primes.is_prime(12) +0 + +>>> primes.are_coprimes(8, 24) +0 +>>> primes.are_coprimes(8, 25) +1 + +>>> primes.prime_factors.restype = None +>>> primes.goldbach.restype = None + +>>> primes.prime_factors(72) +2 2 2 3 3 + +>>> primes.goldbach(32) +32 = 3 + 29 +32 = 13 + 19 +``` + +The functions in the primes library use only a simple data type, `unsigned int`. If this C library used complicated types such as structures, and if pointers to structures were passed to and returned from library functions, then an FFI more powerful than `ctypes` might be better for a smooth interface between Python and C. Nonetheless, the `ctypes` example shows that a Python client can use a library written in C. Indeed, the popular NumPy library for scientific computing is written in C and then exposed in a high-level Python API. + +The simple primes library and the advanced NumPy library underscore that C remains the lingua franca among programming languages. Almost every language can talk to C—and, through C, to any other language that talks to C. Python talks easily to C and, as another example, Java may do the same when [Project Panama][6] becomes an alternative to Java Native Interface (JNI). 
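The `restype` configuration shown above applies to any C library loaded through `ctypes`. As a self-contained sketch that does not require the primes library to be built, here is the same discipline applied to the standard C mathematics library `libm` (assuming a typical Linux system; the fallback to the current process covers platforms where `find_library` returns nothing):

```python
import ctypes
import ctypes.util

# Locate the C math library; fall back to symbols already loaded into
# the current process if find_library cannot name the file.
path = ctypes.util.find_library("m")
libm = ctypes.CDLL(path) if path else ctypes.CDLL(None)

# Without configuration, ctypes assumes C functions return int, so a
# double return value such as sqrt's would be garbled. Declare the types:
libm.sqrt.restype = ctypes.c_double     # return type
libm.sqrt.argtypes = [ctypes.c_double]  # argument types

print(libm.sqrt(16.0))  # 4.0
```

The `argtypes` list plays the same role for arguments that `restype` plays for the return value: it tells `ctypes` how to marshal Python values into the C types the function expects.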
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-software-libraries + +作者:[Marty Kalin][a] +选题:[lujun9972][b] +译者:[萌新阿岩](https://github.com/mengxinayan) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mkalindepauledu +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux31x_cc.png?itok=Pvim4U-B (5 pengiuns floating on iceburg) +[2]: https://en.wikipedia.org/wiki/Christian_Goldbach +[3]: https://condor.depaul.edu/mkalin +[4]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html +[5]: http://www.opengroup.org/onlinepubs/009695399/functions/sqrt.html +[6]: https://openjdk.java.net/projects/panama diff --git a/sources/tech/20210204 Get started with distributed tracing using Grafana Tempo.md b/sources/tech/20210204 Get started with distributed tracing using Grafana Tempo.md new file mode 100644 index 0000000000..6b0efbd218 --- /dev/null +++ b/sources/tech/20210204 Get started with distributed tracing using Grafana Tempo.md @@ -0,0 +1,106 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Get started with distributed tracing using Grafana Tempo) +[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing) +[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal) + +Get started with distributed tracing using Grafana Tempo +====== +Grafana Tempo is a new open source, high-volume distributed tracing +backend. +![Computer laptop in space][1] + +Grafana's [Tempo][2] is an easy-to-use, high-scale, distributed tracing backend from Grafana Labs. 
Tempo has integrations with [Grafana][3], [Prometheus][4], and [Loki][5] and requires only object storage to operate, making it cost-efficient and easy to operate. + +I've been involved with this open source project since its inception, so I'll go over some of the basics about Tempo and show why the cloud-native community has taken notice of it. + +### Distributed tracing + +It's common to want to gather telemetry on requests made to an application. But in the modern server world, a single application is regularly split across many microservices, potentially running on several different nodes. + +Distributed tracing is a way to get fine-grained information about the performance of an application that may consist of discrete services. It provides a consolidated view of the request's lifecycle as it passes through an application. Tempo's distributed tracing can be used with monolithic or microservice applications, and it gives you [request-scoped information][6], making it the third pillar of observability (alongside metrics and logs). + +The following is an example of a Gantt chart that distributed tracing systems can produce about applications. It uses the Jaeger [HotROD][7] demo application to generate traces and stores them in Grafana Cloud's hosted Tempo. This chart shows the processing time for the request, broken down by service and function. + +![Gantt chart from Grafana Tempo][8] + +(Annanay Agarwal, [CC BY-SA 4.0][9]) + +### Reducing index size + +Traces have a ton of information in a rich and well-defined data model. Usually, there are two interactions with a tracing backend: filtering for traces using metadata selectors like the service name or duration, and visualizing a trace once it's been filtered. + +To enhance search, most open source distributed tracing frameworks index a number of fields from the trace, including the service name, operation name, tags, and duration.
This results in a large index and pushes you to use a database like Elasticsearch or [Cassandra][10]. However, these can be tough to manage and costly to operate at scale, so my team at Grafana Labs set out to come up with a better solution. + +At Grafana, our on-call debugging workflows start with drilling down for the problem using a metrics dashboard (we use [Cortex][11], a Cloud Native Computing Foundation incubating project for scaling Prometheus, to store metrics from our application), sifting through the logs for the problematic service (we store our logs in Loki, which is like Prometheus, but for logs), and then viewing traces for a given request. We realized that all the indexing information we need for the filtering step is available in Cortex and Loki. However, we needed a strong integration for trace discoverability through these tools and a complementary store for key-value lookup by trace ID. + +This was the start of the [Grafana Tempo][12] project. By focusing on retrieving traces given a trace ID, we designed Tempo to be a minimal-dependency, high-volume, cost-effective distributed tracing backend. + +### Easy to operate and cost-effective + +Tempo uses an object storage backend, which is its only dependency. It can be used in either single binary or microservices mode (check out the [examples][13] in the repo on how to get started easily). Using object storage also means you can store a high volume of traces from applications without any sampling. This ensures that you never throw away traces for those one-in-a-million requests that errored out or had higher latencies. + +### Strong integration with open source tools + +[Grafana 7.3 includes a Tempo data source][14], which means you can visualize traces from Tempo in the Grafana UI. Also, [Loki 2.0's new query features][15] make trace discovery in Tempo easy.
And to integrate with Prometheus, the team is working on adding support for exemplars, which are high-cardinality metadata information you can add to time-series data. The metric storage backends do not index these, but you can retrieve and display them alongside the metric value in the Grafana UI. While exemplars can store various metadata, trace-IDs are stored to integrate strongly with Tempo in this use case. + +This example shows using exemplars with a request latency histogram where each exemplar data point links to a trace in Tempo. + +![Using exemplars in Tempo][16] + +(Annanay Agarwal, [CC BY-SA 4.0][9]) + +### Consistent metadata + +Telemetry data emitted from applications running as containerized applications generally has some metadata associated with it. This can include information like the cluster ID, namespace, pod IP, etc. This is great for providing on-demand information, but it's even better if you can use the information contained in metadata for something productive.  + +For instance, you can use the [Grafana Cloud Agent to ingest traces into Tempo][17], and the agent leverages the Prometheus Service Discovery mechanism to poll the Kubernetes API for metadata information and adds these as tags to spans emitted by the application. Since this metadata is also indexed in Loki, it makes it easy for you to jump from traces to view logs for a given service by translating metadata into Loki label selectors. + +The following is an example of consistent metadata that can be used to view the logs for a given span in a trace in Tempo. + +![][18] + +### Cloud-native + +Grafana Tempo is available as a containerized application, and you can run it on any orchestration engine like Kubernetes, Mesos, etc. The various services can be horizontally scaled depending on the workload on the ingest/query path. You can also use cloud-native object storage, such as Google Cloud Storage, Amazon S3, or Azure Blob Storage with Tempo.
For further information, read the [architecture section][19] in Tempo's documentation. + +### Try Tempo + +If this sounds like it might be as useful for you as it has been for us, [clone the Tempo repo][20] and give it a try. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/tempo-distributed-tracing + +作者:[Annanay Agarwal][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/annanayagarwal +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space) +[2]: https://grafana.com/oss/tempo/ +[3]: http://grafana.com/oss/grafana +[4]: https://prometheus.io/ +[5]: https://grafana.com/oss/loki/ +[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html +[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod +[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo) +[9]: https://creativecommons.org/licenses/by-sa/4.0/ +[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster +[11]: https://cortexmetrics.io/ +[12]: http://github.com/grafana/tempo +[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/ +[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/ +[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/ +[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo) +[17]: 
https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/ +[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN +[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/ +[20]: https://github.com/grafana/tempo diff --git a/sources/tech/20210204 How to implement business requirements in software development.md b/sources/tech/20210204 How to implement business requirements in software development.md new file mode 100644 index 0000000000..1527f6babe --- /dev/null +++ b/sources/tech/20210204 How to implement business requirements in software development.md @@ -0,0 +1,128 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to implement business requirements in software development) +[#]: via: (https://opensource.com/article/21/2/exceptional-behavior) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) + +How to implement business requirements in software development +====== +Increment your e-commerce app to ensure it implements required business +process rules correctly. +![Working on a team, busy worklife][1] + +In my previous articles in this series, I explained why tackling coding problems all at once, as if they were hordes of zombies, is a mistake. I'm using a helpful acronym to explain why it's better to approach problems incrementally. **ZOMBIES** stands for: + +**Z** – Zero +**O** – One +**M** – Many (or more complex) +**B** – Boundary behaviors +**I** – Interface definition +**E** – Exercise exceptional behavior +**S** – Simple scenarios, simple solutions + +In the first three articles in this series, I demonstrated the first five **ZOMBIES** principles. The first article [implemented **Z**ero][2], which provides the simplest possible path through your code. 
The second article performed [tests with **O**ne and **M**any][3] samples, and the third article looked at [**B**oundaries and **I**nterfaces][4]. In this article, I'll take a look at the penultimate letter in our acronym: **E**, which stands for "exercise exceptional behavior." + +### Exceptional behavior in action + +When you write an app like the e-commerce tool in this example, you need to contact product owners or business sponsors to learn if there are any specific business policy rules that need to be implemented. + +Sure enough, as with any e-commerce operation, you want to put business policy rules in place to entice customers to keep buying. Suppose a business policy rule has been communicated that any order with a grand total greater than $500 gets a percentage discount. + +OK, time to roll up your sleeves and craft the executable expectation for this business policy rule: + + +``` +[Fact] +public void Add2ItemsTotal600GrandTotal540() { +        var expectedGrandTotal = 540.00; +        var actualGrandTotal = 0.00; +        Assert.Equal(expectedGrandTotal, actualGrandTotal); +} +``` + +The confirmation example that encodes the business policy rule states that if the order total is $600.00, the `shoppingAPI` will calculate the grand total to discount it to $540.00. The script above fakes the expectation just to see it fail. 
Now, make it pass: + + +``` +[Fact] +public void Add2ItemsTotal600GrandTotal540() { +        var expectedGrandTotal = 540.00; +        Hashtable item = new Hashtable(); +        item.Add("00000001", 200.00); +        shoppingAPI.AddItem(item); +        Hashtable item2 = new Hashtable(); +        item2.Add("00000002", 400.00); +        shoppingAPI.AddItem(item2); +        var actualGrandTotal = shoppingAPI.CalculateGrandTotal(); +        Assert.Equal(expectedGrandTotal, actualGrandTotal); +} +``` + +In the confirmation example, you are adding one item priced at $200 and another item priced at $400 for a total of $600 for the order. When you call the `CalculateGrandTotal()` method, you expect to get a total of $540. + +Will this microtest pass? + + +``` +[xUnit.net 00:00:00.57] tests.UnitTest1.Add2ItemsTotal600GrandTotal540 [FAIL] +  X tests.UnitTest1.Add2ItemsTotal600GrandTotal540 [2ms] +  Error Message: +   Assert.Equal() Failure +Expected: 540 +Actual: 600 +[...] +``` + +Well, it fails miserably. You were expecting $540, but the system calculates $600. Why the error? It's because you haven't taught the system how to calculate the discount on order totals larger than $500 and then subtract that discount from the grand total. + +Implement that processing logic. Judging from the confirmation example above, when the order total is $600.00 (which is greater than the business rule threshold of an order totaling $500), the expected grand total is $540. This means the system needs to subtract $60 from the grand total. And $60 is precisely 10% of $600. So the business policy rule that deals with discounts expects a 10% discount on all order totals greater than $500.
+ +Implement this processing logic in the `ShoppingAPI` class: + + +``` +private double Calculate10PercentDiscount(double total) { +        double discount = 0.00; +        if(total > 500.00) { +                discount = (total/100) * 10; +        } +        return discount; +} +``` + +First, check to see if the order total is greater than $500. If it is, then calculate 10% of the order total. + +You also need to teach the system how to subtract the calculated 10% from the order grand total. That's a very straightforward change: + + +``` +return grandTotal - Calculate10PercentDiscount(grandTotal); +``` + +Now all tests pass, and you're again enjoying steady success. Your script **Exercises exceptional behavior** to implement the required business policy rules. + +### One more to go + +I've taken us to **ZOMBIE** now, so there's just **S** remaining. I'll cover that in the exciting series finale. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/exceptional-behavior + +作者:[Alex Bunardzic][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alex-bunardzic +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife) +[2]: https://opensource.com/article/21/1/zombies-zero +[3]: https://opensource.com/article/21/1/zombies-2-one-many +[4]: https://opensource.com/article/21/1/zombies-3-boundaries-interface +[5]: http://www.google.com/search?q=new+msdn.microsoft.com diff --git a/sources/tech/20210205 Astrophotography with Fedora Astronomy Lab- setting up.md b/sources/tech/20210205 Astrophotography with Fedora Astronomy Lab- setting up.md new file mode 100644
index 0000000000..7077d9544b --- /dev/null +++ b/sources/tech/20210205 Astrophotography with Fedora Astronomy Lab- setting up.md @@ -0,0 +1,243 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Astrophotography with Fedora Astronomy Lab: setting up) +[#]: via: (https://fedoramagazine.org/astrophotography-with-fedora-astronomy-lab-setting-up/) +[#]: author: (Geoffrey Marr https://fedoramagazine.org/author/coremodule/) + +Astrophotography with Fedora Astronomy Lab: setting up +====== + +![][1] + +Photo by Geoffrey Marr + +You love astrophotography. You love Fedora Linux. What if you could do the former using the latter? Capturing stunning and awe-inspiring astrophotographs, processing them, and editing them for printing or sharing online using Fedora is absolutely possible! This tutorial guides you through the process of setting up a computer-guided telescope mount, guide cameras, imaging cameras, and other pieces of equipment. A future article will cover capturing and processing data into pleasing images. Please note that while this article is written with certain aspects of the astrophotography process included or omitted based off my own equipment, you can custom-tailor it to fit your own equipment and experience. Let’s capture some photons! + +![][2] + +### Installing Fedora Astronomy Lab + +This tutorial focuses on [Fedora Astronomy Lab][3], so it only makes sense that the first thing we should do is get it installed. But first, a quick introduction: based on the KDE Plasma desktop, Fedora Astronomy Lab includes many pieces of open source software to aid astronomers in planning observations, capturing data, processing images, and controlling astronomical equipment. + +Download Fedora Astronomy Lab from the [Fedora Labs website][4]. You will need a USB flash-drive with at least eight GB of storage. 
Once you have downloaded the ISO image, use [Fedora Media Writer][5] to [write the image to your USB flash-drive.][6] After this is done, [boot from the USB drive][7] you just flashed and [install Fedora Astronomy Lab to your hard drive.][8] While you can use Fedora Astronomy Lab in a live-environment right from the flash drive, you should install to the hard drive to prevent bottlenecks when processing large amounts of astronomical data. + +### Configuring your installation + +Before you can go capturing the heavens, you need to do some minor setup in Fedora Astronomy Lab. + +First of all, you need to add your user to the _dialout_ group so that you can access certain pieces of astronomical equipment from within the guiding software. Do that by opening the terminal (Konsole) and running this command (replacing _user_ with your username): + +``` +sudo usermod -a -G dialout user +``` + +My personal setup includes a guide camera (QHY5 series, also known as Orion Starshoot) that does not have a driver in the mainline Fedora repositories. To enable it, you need to install the [qhyccd SDK][9]. (_Note that this package is not officially supported by Fedora. Use it at your own risk.)_ At the time of writing, I chose to use the latest stable release, 20.08.26. Once you have downloaded the Linux 64-bit version of the SDK, extract it: + +``` +tar zxvf sdk_linux64_20.08.26.tgz +``` + +Now change into the directory you just extracted, give the _install.sh_ file execute permissions, and run it: + +``` +cd sdk_linux64_20.08.26 +chmod +x install.sh +sudo ./install.sh +``` + +Now it’s time to install the qhyccd INDI driver. INDI is an open source software library used to control astronomical equipment. Unfortunately, the driver is unavailable in the mainline Fedora repositories, but it is in a Copr repository. (_Note: Copr is not officially supported by Fedora infrastructure.
Use packages at your own risk.)_ If you prefer to have the newest (and perhaps unstable!) pieces of astronomy software, you can also enable the “bleeding” repositories at this time by following [this guide][10]. For this tutorial, you are only going to enable one repo: + +``` +sudo dnf copr enable xsnrg/indi-3rdparty-bleeding +``` + +Install the driver by running the following command: + +``` +sudo dnf install indi-qhy +``` + +Finally, update all of your system packages: + +``` +sudo dnf update -y +``` + +To recap what you accomplished in this section: you added your user to the _dialout_ group, downloaded and installed the qhyccd SDK, enabled the _indi-3rdparty-bleeding_ Copr, installed the qhyccd INDI driver with dnf, and updated your system. + +### Connecting your equipment + +This is the time to connect all your equipment to your computer. Most astronomical equipment will connect via USB, and it's really as easy as plugging each device into your computer's USB ports. If you have a lot of equipment (mount, imaging camera, guide camera, focuser, filter wheel, etc.), you should use an external powered USB hub to make sure that all connected devices have adequate power. Once you have everything plugged in, run the following command to ensure that the system recognizes your equipment: + +``` +lsusb +``` + +You should see output similar to (but not the same as) the output here: + +![][11] + +You see in the output that the system recognizes the telescope mount (a SkyWatcher EQM-35 Pro) as _Prolific Technology, Inc. PL2303 Serial Port_, the imaging camera (a Sony a6000) as _Sony Corp. ILCE-6000_, and the guide camera (an Orion Starshoot, aka QHY5) as _Van Ouijen Technische Informatica_. Now that you have made sure your system recognizes your equipment, it's time to open your desktop planetarium and telescope controller, KStars!
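One easy thing to trip over: the _dialout_ change from earlier only takes effect on a fresh login session. Before launching KStars, a quick sanity check can save some head-scratching. This is a sketch using only standard coreutils; it makes no assumptions about your equipment:

```shell
# Confirm the dialout group membership is active in the current session.
# Group changes only apply after you log out and back in.
user="${USER:-$(id -un)}"
status="missing"
if id -nG "$user" | grep -qw dialout; then
    status="ok"
fi
echo "dialout membership for $user: $status"
```

If it reports `missing`, log out and back in (or reboot) after running the `usermod` command, then check again.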
+ +### Setting up KStars + +It’s time to open [KStars][12], which is a desktop planetarium and also includes the Ekos telescope control software. The first time you open KStars, you will see the KStars Startup Wizard. + +![][13] + +Follow the prompts to choose your home location (where you will be imaging from) and _Download Extra Data…_ + +![Setting your location][14] + +![“Download Extra Data”][15] + +![Choosing which catalogs to download][16] + +This will allow you to install additional star, nebula, and galaxy catalogs. You don't need them, but they don't take up too much space and add to the experience of using KStars. Once you've completed this, hit _Done_ in the bottom right corner to continue. + +### Getting familiar with KStars + +Now is a good time to play around with the KStars interface. You are greeted with a spherical image with a coordinate plane and stars in the sky. + +![][17] + +This is the desktop planetarium, which allows you to view the placement of objects in the night sky. Double-clicking an object selects it, and right-clicking on an object gives you options like _Center & Track_, which will follow the object in the planetarium, compensating for [sidereal time][18]. _Show DSS Image_ shows a real [digitized sky survey][19] image of the selected object. + +![][20] + +Another essential feature is the _Set Time_ option in the toolbar. Clicking this will allow you to input a future (or past) time and then simulate the night sky as if that were the current date. + +![The Set Time button][21] + +### Configuring capture equipment with Ekos + +You're familiar with the KStars layout and some basic functions, so it's time to move on to configuring your equipment using the [Ekos][22] observatory controller and automation tool. To open Ekos, click the observatory button in the toolbar or go to _Tools_ > _Ekos_. + +![The Ekos button on the toolbar][23] + +You will see another setup wizard: the _Ekos Profile Wizard_. Click _Next_ to start the wizard.
+ +![][24] + +In this tutorial, you have all of your equipment connected directly to your computer. A future article will cover using an INDI server installed on a remote computer to control your equipment, allowing you to connect over a network and not have to be in the same physical space as your gear. For now, though, select _Equipment is attached to this device_. + +![][25] + +You are now asked to name your equipment profile. I usually name mine something like “Local Gear” to differentiate it from profiles that are for remote gear, but name your profile what you wish. We will leave the button marked _Internal Guide_ checked and won't select any additional services. Now click the _Create Profile & Select Devices_ button. + +![][26] + +This next screen is where you can select the driver to use for each individual piece of equipment. This part will be specific to your setup depending on what gear you use. For this tutorial, I will select the drivers for my setup. + +My mount, a [SkyWatcher EQM-35 Pro][27], uses the _EQMod Mount_ under _SkyWatcher_ in the menu (this driver is also compatible with all SkyWatcher equatorial mounts, including the [EQ6-R Pro][28] and the [EQ8-R Pro][29]). For my Sony a6000 imaging camera, I choose the _Sony DSLR_ under _DSLRs_ under the CCD category. Under _Guider_, I choose the _QHY CCD_ under _QHY_ for my Orion Starshoot (and any QHY5 series camera). The last driver to select is under the Aux 1 category: choose _Astrometry_ from the drop-down window. This will enable the Astrometry plate-solver from within Ekos, which will allow your telescope to automatically figure out where in the night sky it is pointed, saving you the time and hassle of doing a one-, two-, or three-star calibration after setting up your mount. + +You selected your drivers. Now it's time to configure your telescope. Add new telescope profiles by clicking on the + button in the lower right.
This is essential for computing field-of-view measurements so you can tell what your images will look like when you open the shutter. Once you click the + button, you will be presented with a form where you can enter the specifications of your telescope and guide scope. For my imaging telescope, I will enter Celestron into the _Vendor_ field, SS-80 into the _Model_ field, and leave the _Driver_ field as None, the _Type_ field as Refractor, _Aperture_ as 80mm, and _Focal Length_ as 400mm. + +![][30] + +After you enter the data, hit the _Save_ button. You will see the data you just entered appear in the left window with an index number of 1 next to it. Now you can enter the specs for your guide scope, following the steps above. Once you hit _Save_ here, the guide scope will also appear in the left window with an index number of 2. Once all of your scopes are entered, close this window. Now select your _Primary_ and _Guide_ telescopes from the drop-down window. + +![][31] + +After all that work, everything should be correctly configured! Click the _Close_ button and complete the final bit of setup. + +### Starting your capture equipment + +This last step before you can start taking images should be easy enough. Click the _Play_ button under Start & Stop Ekos to connect to your equipment. + +![][32] + +You will be greeted with a screen that looks similar to this: + +![][33] + +When you click on the tabs at the top of the screen, they should all show a green dot next to _Connection_, indicating that they are connected to your system. On my setup, the baud rate for my mount (the EQMod Mount tab) is set incorrectly, so the mount is not connected. + +![][34] + +This is an easy fix; click on the _EQMod Mount_ tab, then the _Connection_ sub-tab, and then change the baud rate from 9600 to 115200. Now is a good time to ensure the serial port under _Ports_ is the correct serial port for your mount.
You can check which port the system has mounted your device on by running the command: + +``` +ls /dev | grep USB +``` + +You should see _ttyUSB0_. If there is more than one USB-serial device plugged in at a time, you will see more than one ttyUSB port, with an incrementing number. To figure out which port is correct, unplug your mount and run the command again. + +Now click on the _Main Control_ sub-tab, click _Connect_ again, and wait for the mount to connect. It might take a few seconds; be patient, and it should connect. + +The last thing to do is set the sensor and pixel size parameters for my camera. Under the _Sony DSLR Alpha-A6000 (Control)_ tab, select the _Image Info_ sub-tab. This is where you can enter your sensor specifications; if you don't know them, a quick search on the internet will turn up your sensor's maximum resolution as well as pixel pitch. Enter this data into the right-side boxes, then press the _Set_ button to load them into the left boxes and save them into memory. Hit the _Close_ button when you are done. + +![][35]
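If the ttyUSB numbering keeps shuffling between sessions, a udev rule can pin the mount to a stable device name. This is only a sketch: the vendor and product IDs below belong to the Prolific PL2303 adapter seen in the earlier `lsusb` output, and the file path and symlink name are made up for illustration; substitute the IDs that `lsusb` reports for your own hardware.

```
# /etc/udev/rules.d/99-telescope-mount.rules  (path and symlink name are examples)
# Match the PL2303 USB-serial adapter and expose it as /dev/telescope-mount
SUBSYSTEM=="tty", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", SYMLINK+="telescope-mount"
```

After saving the rule, reload udev with `sudo udevadm control --reload-rules`, unplug and re-plug the mount, and you can point Ekos at `/dev/telescope-mount` regardless of which ttyUSB number the kernel hands out.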
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/astrophotography-with-fedora-astronomy-lab-setting-up/ + +作者:[Geoffrey Marr][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/coremodule/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/astrophotography-setup-2-816x345.jpg +[2]: https://fedoramagazine.org/wp-content/uploads/2020/11/IMG_4151-768x1024.jpg +[3]: https://labs.fedoraproject.org/en/astronomy/ +[4]: https://labs.fedoraproject.org/astronomy/download/index.html +[5]: https://github.com/FedoraQt/MediaWriter +[6]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Preparing_for_Installation/#_fedora_media_writer +[7]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Booting_the_Installation/ +[8]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Installing_Using_Anaconda/#sect-installation-graphical-mode +[9]: https://www.qhyccd.com/html/prepub/log_en.html#!log_en.md +[10]: https://www.indilib.org/download/fedora/category/8-fedora.html +[11]: https://fedoramagazine.org/wp-content/uploads/2020/11/lsusb_output_rpi.png +[12]: https://edu.kde.org/kstars/ +[13]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_setuo_wizard-2.png +[14]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_location_select-2.png +[15]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars-download-extra-data-1.png +[16]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_install_extra_Data-1.png +[17]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_planetarium-1024x549.png +[18]: https://en.wikipedia.org/wiki/Sidereal_time +[19]: https://en.wikipedia.org/wiki/Digitized_Sky_Survey 
+[20]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_right_click_object-1024x576.png +[21]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_planetarium_clock_icon.png +[22]: https://www.indilib.org/about/ekos.html +[23]: https://fedoramagazine.org/wp-content/uploads/2020/11/open_ekos_icon.png +[24]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos-profile-wizard.png +[25]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_equipment_attached_to_this_device.png +[26]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_wizard_local_gear.png +[27]: https://www.skywatcherusa.com/products/eqm-35-mount +[28]: https://www.skywatcherusa.com/products/eq6-r-pro +[29]: https://www.skywatcherusa.com/collections/eq8-r-series-mounts/products/eq8-r-mount-with-pier-tripod +[30]: https://fedoramagazine.org/wp-content/uploads/2020/11/setup_telescope_profiles.png +[31]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_setup_aux_1_astrometry-1024x616.png +[32]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_start_equip_connect.png +[33]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_startup_equipment.png +[34]: https://fedoramagazine.org/wp-content/uploads/2020/11/set_baud_rate_to_115200.png +[35]: https://fedoramagazine.org/wp-content/uploads/2020/11/set_camera_sensor_settings.png diff --git a/sources/tech/20210205 Integrate devices and add-ons into your home automation setup.md b/sources/tech/20210205 Integrate devices and add-ons into your home automation setup.md new file mode 100644 index 0000000000..d82dcc262e --- /dev/null +++ b/sources/tech/20210205 Integrate devices and add-ons into your home automation setup.md @@ -0,0 +1,190 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Integrate devices and add-ons into your home automation setup) +[#]: via: (https://opensource.com/article/21/2/home-automation-addons) +[#]: 
author: (Steve Ovens https://opensource.com/users/stratusss) + +Integrate devices and add-ons into your home automation setup +====== +Learn how to set up initial integrations and install add-ons in Home +Assistant in the fifth article in this series. +![Looking at a map][1] + +In the four previous articles in this series about home automation, I have discussed [what Home Assistant is][2], why you may want [local control][3], some of the [communication protocols][4] for smart home components, and how to [install Home Assistant][5] in a virtual machine (VM) using libvirt. In this fifth article, I will talk about configuring some initial integrations and installing some add-ons. + +### Set up initial integrations + +It's time to start getting into some of the fun stuff. The whole reason Home Assistant (HA) exists is to pull together various "smart" devices from different manufacturers. To do so, you have to make Home Assistant aware of which devices it should coordinate. I'll demonstrate by adding a [Sonoff Zigbee Bridge][6]. + +I followed [DigiBlur's Sonoff Guide][7] to replace the stock firmware with the open source firmware [Tasmota][8] to decouple my sensors from the cloud. My [second article][3] in this series explains why you might wish to replace the stock firmware. (I won't go into the device's setup with either the stock or custom firmware, as that is outside of the scope of this tutorial.) + +First, navigate to the **Configuration** menu on the left side of the HA interface, and make sure **Integrations** is selected: + +![Home Assistant integration configuration][9] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +From there, click the **Add Integration** button in the bottom-right corner and search for Zigbee: + +![Add Zigbee integration in Home Assistant][11] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +Enter the device manually. If the Zigbee Bridge was physically connected to the Home Assistant interface, you could select the device path. 
For instance, I have a ZigBee CC2531 USB stick that I use for some Zigbee devices that do not communicate correctly with the Sonoff Bridge. It attaches directly to the Home Assistant host and shows up as a Serial Device. See my [third article][12] for details on wireless standards. However, in this tutorial, we will continue to configure and use the Sonoff Bridge. + +![Enter device manually][13] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +The next step is to choose the radio type, using the information in the DigiBlur tutorial. In this case, the radio is an EZSP radio: + +![Choose the radio type][14] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +Finally, you need to know the IP address of the Sonoff Bridge, the port it is listening on, and the speed of the connection. Once I found the Sonoff Bridge's MAC address, I used my DHCP server to ensure that the device always uses the same IP on my network. DigiBlur's guide provides the port and speed numbers. + +![IP, port, and speed numbers][15] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +Once you've added the Bridge, you can begin pairing devices to it. Ensure that your devices are in pairing mode. The Bridge will eventually find your device(s). + +![Device pairing][16] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +You can name the device(s) and assign an area (if you set them up). + +![Name device][17] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +The areas displayed will vary based on whether or not you have any configured. Bedroom, Kitchen, and Living Room exist by default. As you add a device, HA will add a new Card to the **Integrations** tab. A Card is a user interface (UI) element that groups information related to a specific entity. The Zigbee card looks like this: + +![Integration card][18] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +Later, I'll come back to using this integration. I'll also get into how to use this device in automation flows. But now, I will show you how to add functionality to Home Assistant to make your life easier. 
+ +### Add functionality with add-ons + +Out of the box, HA has some pretty great features for home automation. If you are buying commercial-off-the-shelf (CoTS) products, there is a good chance you can accomplish everything you need without the help of add-ons. However, you may want to investigate some of the add-ons, especially if (like me) you want to make your own sensors. + +There are all kinds of HA add-ons, ranging from Android debugging (ADB) tools to MQTT brokers to the Visual Studio Code editor. With each release, the number of add-ons grows. Some people make HA the center of their local system, encompassing DHCP, Plex, databases, and other useful programs. In fact, HA now ships with a built-in media browser for playing any media that you expose to it. + +I won't go too crazy in this article; I'll show you some of the basics and let you decide how you want to proceed. + +#### Install official add-ons + +Some of the many HA add-ons are available for installation right from the web UI, and others can be installed from alternative sources, such as Git. + +To see what's available, click on the **Supervisor** menu on the left panel. Near the top, you will see a tab called **Add-on store**. + +![Home Assistant add-on store][19] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +Below are three of the more useful add-ons that I think should be standard for any HA deployment: + +![Home Assistant official add-ons][20] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +The **File Editor** allows you to manage Home Assistant configuration files directly from your browser. I find this far more convenient for quick edits than obtaining a copy of the file, editing it, and pushing it back to HA. If you use add-ons like the Visual Studio Code editor, you can edit the same files. + +The **Samba share** add-on is an excellent way to extract HA backups from the system or push configuration files or assets to the **web** directory. 
You should _never_ leave your backups sitting on the machine being backed up. + +Finally, **Mosquitto broker** is my preferred method for managing an [MQTT][21] client. While you can install a broker that's external to the HA machine, I find low value in doing this. Since I am using MQTT just to communicate with my IoT devices, and HA is the primary method of coordinating that communication, there is a low risk in having these components vertically integrated. If HA is offline, the MQTT broker is almost useless in my arrangement. + +#### Install community add-ons + +Home Assistant has a prolific community and passionate developers. In fact, many of the "community" add-ons are developed and maintained by the HA developers themselves. For my needs, I install: + +![Home Assistant community add-ons][22] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +**Grafana** (graphing program) and **InfluxDB** (a time-series database) are largely optional and relate to the ability to customize how you visualize the data HA collects. I like to have historical data handy and enjoy looking at the graphs from time to time. While not exactly HA-related, I have my pfSense firewall/router forward metrics to InfluxDB so that I can make some nice graphs over time. + +![Home Assistant Grafana add-on][23] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +**ESPHome** is definitely an optional add-on that's warranted only if you plan on making your own sensors. + +**NodeRED** is my preferred automation flow-handling solution. Although HA has some built-in automation, I find a visual flow editor is preferable for some of the logic I use in my system. + +#### Configure add-ons + +Some add-ons (such as File Editor) require no configuration to start them. However, most—such as Node-RED—require at least a small amount of configuration. 
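Add-on configuration lives in a small YAML document on each add-on's **Configuration** tab. As a rough sketch of what that looks like for Node-RED (the option names here follow the community add-on's documentation as I recall it; treat them as assumptions and defer to the add-on's own docs):

```yaml
# Sketch of a minimal Node-RED add-on configuration; values are placeholders.
credential_secret: replace-with-a-long-random-string
ssl: false   # reasonable on a trusted local LAN; leave true if you want TLS here
```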
Before you can start Node-RED, you will need to set a password: + +![Home Assistant Node-RED add-on][24] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +**IMPORTANT:** Many people will abstract passwords through the `secrets.yaml` file. This does not provide any additional security other than not having passwords in the add-on configuration's YAML. See [the official documentation][25] for more information. + +In addition to the password requirement, most of the add-ons that have a web UI default to having the `ssl: true` option set. A self-signed cert on my local LAN is not a requirement, so I usually set this to false. There is an add-on for Let's Encrypt, but dealing with certificates is outside the scope of this series. + +After you have looked through the **Configuration** tab, save your changes, and enable Node-RED on the add-on's main screen. + +![Home Assistant Node-RED add-on][26] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +Don't forget to start the plugin. + +Most add-ons follow a similar procedure, so you can use this approach to set up other add-ons. + +### Wrapping up + +Whew, that was a lot of screenshots! Fortunately, when you are doing the configuration, the UI makes these steps relatively painless. + +At this point, your HA instance should be installed with some basic configurations and a few essential add-ons. + +In the next article, I will discuss integrating custom Internet of Things (IoT) devices into Home Assistant. Don't worry; the fun is just beginning! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/home-automation-addons + +作者:[Steve Ovens][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/stratusss +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map) +[2]: https://opensource.com/article/20/11/home-assistant +[3]: https://opensource.com/article/20/11/cloud-vs-local-home-automation +[4]: https://opensource.com/article/20/11/home-automation-part-3 +[5]: https://opensource.com/article/20/12/home-assistant +[6]: https://sonoff.tech/product/smart-home-security/zbbridge +[7]: https://www.digiblur.com/2020/07/how-to-use-sonoff-zigbee-bridge-with.html +[8]: https://tasmota.github.io/docs/ +[9]: https://opensource.com/sites/default/files/uploads/ha-setup20-configuration-integration.png (Home Assistant integration configuration) +[10]: https://creativecommons.org/licenses/by-sa/4.0/ +[11]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee.png (Add Zigbee integration in Home Assistant) +[12]: https://opensource.com/article/20/11/wireless-protocol-home-automation +[13]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-2.png (Enter device manually) +[14]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-3.png (Choose the radio type) +[15]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-4.png (IP, port, and speed numbers) +[16]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-5.png (Device pairing) +[17]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-6.png (Name 
device) +[18]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-7_0.png (Integration card) +[19]: https://opensource.com/sites/default/files/uploads/ha-setup7-addons.png (Home Assistant add-on store) +[20]: https://opensource.com/sites/default/files/uploads/ha-setup8-official-addons.png (Home Assistant official add-ons) +[21]: https://en.wikipedia.org/wiki/MQTT +[22]: https://opensource.com/sites/default/files/uploads/ha-setup9-community-addons.png (Home Assistant community add-ons) +[23]: https://opensource.com/sites/default/files/uploads/ha-setup9-community-grafana-pfsense.png (Home Assistant Grafana add-on) +[24]: https://opensource.com/sites/default/files/uploads/ha-setup27-nodered2.png (Home Assistant Node-RED add-on) +[25]: https://www.home-assistant.io/docs/configuration/secrets/ +[26]: https://opensource.com/sites/default/files/uploads/ha-setup26-nodered1.png (Home Assistant Node-RED add-on) diff --git a/sources/tech/20210207 3 ways to play video games on Linux.md b/sources/tech/20210207 3 ways to play video games on Linux.md new file mode 100644 index 0000000000..1c132ab588 --- /dev/null +++ b/sources/tech/20210207 3 ways to play video games on Linux.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (3 ways to play video games on Linux) +[#]: via: (https://opensource.com/article/21/2/linux-gaming) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +3 ways to play video games on Linux +====== +If you're ready to put down the popcorn and experience games from all +angles, start gaming on Linux. +![Gaming with penguin pawns][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today, I'll start with gaming. 
+ +I used to think a "gamer" was a very specific kind of creature, carefully cataloged and classified by scientists after years of study and testing. I never classified myself as a gamer because most of the games I played were either on a tabletop (board games and pen-and-paper roleplaying games), NetHack, or Tetris. Now that games are available on everything from mobile devices and consoles to computers and televisions, it feels like a good time to acknowledge that "gamers" come in all different shapes and sizes. If you want to call yourself a gamer, you can! There's no qualification exam. You don't have to know the Konami Code by heart (or even what that reference means); you don't have to buy and play "triple-A" games. If you enjoy a game from time to time, you can rightfully call yourself a gamer. And if you want to be a gamer, there's never been a better time to use Linux. + +### Welcome to the underground + +Peel back the glossy billboard ads, and underneath, you're sure to find a thriving gaming underground. It's a movement that began with the nascent gaming market before anyone believed money could be made off software that wasn't either a spreadsheet or a typing tutor. Indie games have carved out a place in pop culture (believe it or not, [Minecraft, while not open source][2], started out as an indie game) in several ways, proving that in the eyes of players, gameplay comes before production value. + +There's a lot of cross-over in the indie and open source developer space. There's nothing quite like kicking back with your Linux laptop and browsing [itch.io][3] or your distribution's software repository for a little-known but precious gem of an open source game.
+ +There are all kinds of open source games available, including plenty of [first person shooters][4], puzzle games like [Nodulus][5], systems management games like [OpenTTD][6], racing games like [Jethook][7], tense escape campaigns like [Sauerbraten][8], and too many more to mention (with more arriving each year, thanks to great initiatives like [Open Jam][9]). + +![Jethook game screenshot][10] + +Jethook + +Overall, the experience of delving into the world of open source games is different than the immediate satisfaction of buying whatever a major game studio releases next. Games by the big studios provide plenty of visual and sonic stimuli, big-name actors, and upwards of 60 hours of gameplay. Independent and open source games aren't likely to match that, but then again, major studios can't match the sense of discovery and personal connection you get when you find a game that you just know nobody else _has ever heard of_. And they can't hope to match the sense of urgency you get when you realize that everybody in the world really, really needs to hear about the great game you've just played. + +Take some time to identify the kinds of games you enjoy the most, and then have a browse through your distribution's software repository, [Flathub][11], and open game jams. See what you can uncover and, if you like the game enough, help to promote it! + +### Proton and WINE + +Gaming on Linux doesn't stop with open source, but it is enabled by it. When Valve Software famously brought Linux back into the gaming market a few years ago by releasing their Steam client for Linux, the hope was that it would compel game studios to write code native to Linux systems. Some did, but Valve failed to push Linux as the primary platform even on their own Valve-branded gaming computers, and it seems that most studios have reverted to their old ways of Windows-only games. + +Interestingly, though, the end result has produced more open source code than probably intended. 
Valve's solution for Linux compatibility has been to create the [Proton][12] project, a compatibility layer to translate Windows games to Linux. At its core, Proton uses [WINE (Wine Is Not an Emulator)][13], the too-good-to-be-true reimplementation of major Windows libraries as open source. + +The game market's spoils have turned out to be a treasure trove for the open source world, and today, most games from major studios can be run on Linux as if they were native. + +Of course, if you're the type of gamer who has to have the latest title on the day of release, you can certainly expect unpleasant surprises. That's not surprising, though, because few major games are released without bugs requiring large patches a week later. Those bugs can be even worse when a game runs on Proton and WINE, so Linux gamers often benefit by refraining from early adoption. The trade-off may be worth it, though. I've played a few games that run perfectly on Proton, only to discover later from angry forum posts that they're apparently riddled with fatal errors when played on the latest version of Windows. In short, it seems that games from major studios aren't perfect, and so you can expect similar-but-different problems when playing them on Linux as you would on Windows. + +### Flatpak + +One of the most exciting developments in recent Linux history is [Flatpak][14], a cross between local containers and packaging. It's got nothing to do with gaming (or does it?), but it enables Linux applications to be distributed universally to any Linux distribution. This applies to gaming because games often rely on lots of fringe technologies, and it can be pretty demanding on distribution maintainers to keep up with all the latest versions required by any given game. + +Flatpak abstracts that away from the distribution by establishing a common Flatpak-specific layer for application libraries.
Distributors of flatpaks know that if a library isn't in a Flatpak SDK, then it must be included in the flatpak. It's simple and straightforward. + +Thanks to Flatpak, the Steam client runs on something obvious like Fedora and on distributions not traditionally geared toward the gaming market, like [RHEL][15] and Slackware! + +### Lutris + +If you're not eager to sign up on Steam, though, there's my preferred gaming client, [Lutris][16]. On the surface, Lutris is a simple game launcher for your system, a place you can go when you know you want to play a game but just can't decide what to launch yet. With Lutris, you can add [all the games you have on your system][17] to create your own gaming library, and then launch and play them right from the Lutris interface. Better still, Lutris contributors (like me!) regularly publish installer scripts to make it easy for you to install games you own. It's not always necessary, but it can be a nice shortcut to bypass some tedious configuration. + +Lutris can also enlist the help of _runners_, or subsystems that run games that wouldn't normally launch straight from your application menu. For instance, if you want to play console games like the open source [Warcraft Tower Defense][18], you must run an emulator, and Lutris can handle that for you (provided you have the emulator installed). Additionally, should you have a GOG.com (Good Old Games) account, Lutris can access it and import games from your library. + +There's no easier way to manage your games. + +### Play games + +Linux gaming is a fulfilling and empowering experience. I used to avoid computer gaming because I didn't feel I had much of a choice. It seemed that there were always expensive games being released, which inevitably got extreme reactions from happy and unhappy gamers alike, and then the focus shifted quickly to the next big thing. On the other hand, open source gaming has introduced me to the _people_ of the gaming world. 
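To give a sense of what those installer scripts contain, here is a heavily trimmed, hypothetical sketch. The field names follow the general shape of published installers, but the authoritative format lives in the Lutris installer documentation, so verify before relying on any of them:

```yaml
# Hypothetical Lutris installer sketch -- illustrative only.
name: Some Indie Game        # made-up title
runner: wine
script:
  files:
    - setup: https://example.com/some-indie-game-setup.exe
  installer:
    - task:
        name: wineexec
        executable: setup
  game:
    exe: $GAMEDIR/drive_c/SomeIndieGame/game.exe
    prefix: $GAMEDIR
```

The point is that the script encodes the tedious parts — download, WINE prefix setup, executable paths — so the next player gets a one-click install.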
I've met other players and developers, I've met artists and musicians, fans and promoters, and I've played an assortment of games that I never even realized existed. Some of them were barely long enough to distract me for just one afternoon, while others have provided me hours and hours of obsessive gameplay, modding, level design, and fun. + +If you're ready to put down the popcorn and experience games from all angles, start gaming on Linux. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-gaming + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gaming_grid_penguin.png?itok=7Fv83mHR (Gaming with penguin pawns) +[2]: https://opensource.com/alternatives/minecraft +[3]: https://itch.io/jam/open-jam-2020 +[4]: https://opensource.com/article/20/5/open-source-fps-games +[5]: https://hyperparticle.itch.io/nodulus +[6]: https://www.openttd.org/ +[7]: https://rcorre.itch.io/jethook +[8]: http://sauerbraten.org/ +[9]: https://opensource.com/article/18/9/open-jam-announcement +[10]: https://opensource.com/sites/default/files/game_0.png +[11]: http://flathub.org +[12]: https://github.com/ValveSoftware/Proton +[13]: http://winehq.org +[14]: https://opensource.com/business/16/8/flatpak +[15]: https://www.redhat.com/en/enterprise-linux-8 +[16]: http://lutris.net +[17]: https://opensource.com/article/18/10/lutris-open-gaming-platform +[18]: https://ndswtd.wordpress.com/download diff --git a/sources/tech/20210208 Fedora Aarch64 on the SolidRun HoneyComb LX2K.md b/sources/tech/20210208 Fedora Aarch64 on the SolidRun HoneyComb LX2K.md new file mode 100644 index 
0000000000..e060ca66fa --- /dev/null +++ b/sources/tech/20210208 Fedora Aarch64 on the SolidRun HoneyComb LX2K.md @@ -0,0 +1,219 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Fedora Aarch64 on the SolidRun HoneyComb LX2K) +[#]: via: (https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/) +[#]: author: (John Boero https://fedoramagazine.org/author/boeroboy/) + +Fedora Aarch64 on the SolidRun HoneyComb LX2K +====== + +![][1] + +Photo by [Tim Mossholder][2] on [Unsplash][3] + +Almost a year has passed since the [HoneyComb][4] development kit was released by SolidRun. I remember reading about this Mini-ITX Arm workstation board being released and thinking “what a great idea.” Then I saw the price and realized this isn’t just another Raspberry Pi killer. Currently that price is $750 USD plus shipping and duty. Niche devices like the HoneyComb aren’t mass produced like the simpler Pi is, and they pack in quite a bit of high-end tech. Eventually COVID lockdown boredom got the best of me and I put a build together. Adding a case and RAM, the build ended up costing about $1100 shipped to London. This is an account of my experiences and the current state of using Fedora on this fun bit of hardware. + +First and foremost, the tech packed into this board is impressive. It’s not about to kill a Xeon workstation in raw performance but it’s going to wallop it in performance/watt efficiency. Essentially this is a powerful server in the energy footprint of a small laptop. It’s also a hybrid of compute and network functionality, combining powerful network features in a carrier board with a modular daughter card sporting a 16-core A72 with 2 ECC-capable DDR4 SO-DIMM slots. The carrier board comes in a few editions, giving flexibility to swap or upgrade your RAM + CPU options. I purchased the edition pictured below with 16 cores, 32GB (non-ECC), 512GB NVMe, and 4x10Gbe.
For an extra $250 you can add the 100Gbe option if you’re building a 5G deployment or an ISP for a small country (bottom right of board). Imagine this jacked into a 100Gb uplink port acting as proxy, TLS inspector, router, or storage for a large 10gb TOR switch. + +![][5] + +When I ordered it I didn’t fully understand the network co-processor included from NXP. NXP is the company that makes the unique [LX2160A][6] CPU/SOC for this as well as the configurable ports and offload engine that enable handling up to 150Gb/s of network traffic without the CPU breaking a sweat. Here is a list of options from NXP’s Layerscape user manual. + +![Configure ports in switch, LAG, MUX mode, or straight NICs.][7] + +I have a 10gb network in my home attic via a Ubiquiti ES-16-XG so I was eager to see how much this board could push. I also have a QNAP connected via 10gb which rarely manages to saturate the line, so could this also be a NAS replacement? It turned out I needed to sort out drivers and get a stable install first. Since the board has been out for a year, I had some catching up to do. SolidRun keeps an active Discord on [Developer-Ecosystem][8] which was immensely helpful, as install wasn’t as straightforward as previous blogs have mentioned. I’ve always been cursed. If you’ve ever seen Pure Luck, I’m bound to hit every hardware glitch. + +![][9] + +For starters, you can add a GPU and install graphically or install via USB console. I started with a spare GPU (Radeon Pro WX2100) intending to build a headless box, which in the end over-complicated things. If you need to swap parts or re-flash a BIOS via the microSD card, you’ll need to swap display, keyboard + mouse. Chaos. Much simpler just to plug into the micro USB console port and access it via /dev/ttyUSB0 for that picture-in-picture experience. It’s really great to have the open-ended PCIe3-x8 slot but I’ll keep it open for now. Note that the board does not support PCIe Atomics so some devices may have compatibility issues.
+ +Now comes the fun part. BIOS is not built-in here. You’ll need to [build][10] from source to match your RAM speed and install via microSDHC. At first this seems annoying but then you realize that with a removable BIOS installer it’s pretty hard to brick this thing. Not bad. The good news is the latest UEFI builds have worked well for me. Just remember that every time you re-flash your BIOS you’ll need to set everything up again. This was enough to boot Fedora aarch64 from USB. The board offers 64GB of eMMC flash which you can install to if you like. I immediately benched it to find it reads about 165MB/s and writes 55MB/s, which is practical speed for embedded usage, but I’ll definitely be installing to NVMe instead. I had an older Samsung 950 Pro in my spares from a previous Linux box but I encountered major issues with it even with the widely documented kernel param workaround: + +``` +nvme_core.default_ps_max_latency_us=0 +``` + +In the end I upgraded my main workstation so I could repurpose its existing Samsung EVO 960 for the HoneyComb, which worked much better. + +After some fidgeting I was able to install Fedora but it became apparent that the integrated network ports still don’t work with the mainline kernel. The NXP tech is great but requires a custom kernel build and tooling. Some earlier blogs got around this with a USB->RJ45 Ethernet adapter which works fine. Hopefully network support will be mainlined soon, but for now I snagged a kernel SRPM from the helpful engineers on Discord. With the custom kernel the 1Gbe NIC worked fine, but it turns out the SFP+ ports need more configuration. They won’t be recognized as interfaces until you use NXP’s _restool_ utility to map ports to their usage. In this case just a runtime mapping of _dmap -> dni_ was required. This is NXP’s way of mapping a MAC to a network interface via IOCTL commands. The restool binary isn’t provided either and must be built from source.
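Boot-time mappings like these are a natural fit for a oneshot systemd template unit. A hypothetical sketch (the helper script name and path depend on how you package restool and its supporting scripts):

```ini
# /etc/systemd/system/dpmac@.service -- hypothetical sketch.
# Connects one DPMAC to a network interface at boot; the %i
# template instance carries the port number (e.g. dpmac@7).
[Unit]
Description=Map SFP+ port dpmac.%i to a network interface
After=network-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/ls-addni dpmac.%i

[Install]
WantedBy=multi-user.target
```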
It then layers on management scripts which use cheeky $arg0 references for redirection to call the restool binary with complex arguments. + +Since I was starting to accumulate quite a few custom packages, it was apparent that a COPR repo was needed to simplify this for Fedora. If you’re not familiar with COPR I think it’s one of Fedora’s finest resources. This repo contains the UEFI build (currently failing to build), the 5.10.5 kernel built with network support, and the restool binary with supporting scripts. I also added a oneshot systemd unit to enable the SFP+ ports on boot: + +``` +systemctl enable --now dpmac@7.service +systemctl enable --now dpmac@8.service +systemctl enable --now dpmac@9.service +systemctl enable --now dpmac@10.service +``` + +Now each SFP+ port will boot configured as eth1-4, with eth0 being the 1Gb. NetworkManager will struggle unless these are consistent, and if you change the service start order the eth devices will re-order. I actually put a sleep in each activation so they are consistent and don’t have locking issues. Unfortunately it adds 10 seconds to boot time. This has been fixed in the latest kernel and won’t be an issue once mainlined. + +![][15] + +I’d love to explore the built-in LAG features but this still needs to be coded into the restool options. I’ll save it for later. In the meantime I managed a single 10gb link as primary, and a 3×10 LACP Team for kicks. Eventually I changed to 4×10 LACP via copper SFP+ cables mounted in the attic. + +### Energy Efficiency + +Now with a stable environment it’s time to raise some hell. It’s really nice to see PWM support was recently added for the CPU fan, which sounds like a mini jet engine without it. Now the sound level is perfectly manageable and thermal control is automatic. Time to test drive with a power meter. Total power usage is consistently between 20-40 watts (usually in the low 20s), which is really impressive.
I tried a few _tuned_ profiles, which didn’t seem to have much effect on energy. Adding a power-hungry GPU or device can obviously increase that, but for a dev server it’s perfect and well below the Z600 workstations I have next to it, which consume 160-250 watts each when fired up. + +### Remote Access + +I’m an old soul so I still prefer KDE with Xorg and NX via X2go server. I can access SSH or a full GUI at native performance without a GPU. This lets me get a feel for performance and thermal stats, and also helps to evaluate the device as a workstation or potential VDI. The version of KDE shipped with the aarch64 server spin doesn’t seem to recognize some sensors, but that seems to be because of KDE’s latest widget changes, which I’d have to dig into. + +![X2go KDE session over SSH][16] + +Cockpit support is also outstanding out of the box. If SSH and X2go remote access aren’t your thing, Cockpit provides a great remote management platform with a growing list of plugins. Everything works great in my experience. + +![Cockpit behaves as expected.][17] + +All I needed to do now was shift into high gear with jumbo frames. MTU 1500 yields me an iperf of about 2-4Gbps bottlenecked at CPU0. Ain’t nobody got time for that. Set MTU 9000 and suddenly it gets the full 10Gbps both ways with time to spare on the CPU. Again, it would be nice to use the hardware-assisted LAG since the device is supposed to handle up to 150Gbps duplex no sweat (with the 100Gbe QSFP option), which is nice given the Ubiquiti ES-16-XG tops out at 160Gbps full duplex (10gb/16 ports). + +### Storage + +As a storage solution this hardware provides great value in a small thermal window and energy-saving footprint. I could accomplish similar performance with an old x86 box for cheap, but the energy usage alone would eclipse any savings in short order. By comparison I’ve seen some consumer NAS devices offer 10Gbe and NVMe cache sharing an inadequate number of PCIe2 lanes, bottlenecked at the bus.
This is fully customizable and, since the energy footprint is similar to a small laptop, a small UPS backup should allow full writeback cache mode for maximum performance. This would make a great oVirt NFS or iSCSI storage pool if needed. I would pair it with a nice NAS case or rack mount case with bays. Some vendors such as [Bamboo][18] are actually building server options around this platform as we speak. + +The board has 4 SATA3 ports, but if I were truly going to build a NAS with this I would probably add a RAID card that makes best use of the PCIe 8x slot, which thankfully is open-ended. Why some hardware vendors choose to include close-ended PCIe 8x,4x slots is beyond me. Future models will ship with a physical x16 slot but only 8x electrically. Some users on the SolidRun Discord talk about bifurcation and splitting out the 8 PCIe lanes, which is an option as well. Note that some of those lanes are also reserved for NVMe, SATA, and network. The CEX7 form factor and interchangeable carrier board present interesting possibilities later, as the NXP LX2160A docs claim to support up to 24 lanes. For a dev board it’s perfectly fine as-is. + +### Network Perf + +For now I’ve managed to rig up a 4×10 LACP Team with NetworkManager for full load balancing. This same setup can be done with a QSFP+ breakout cable. KDE’s nm Network widget still doesn’t support Teams, but I can set them up via nm-connection-editor or Cockpit. Automation could be achieved with _nmcli_ and _teamdctl_. An iperf3 test shows the connection maxing out at about 13Gbps to/from the 2×10 LACP team on my workstation. I know that iperf isn’t a true indication of real-world usage but it’s fun for benchmarks and tuning nonetheless. This did in fact require a lot of tuning, and at this point I feel like I could fill a book just with iperf stats.
+ +``` +$ iperf3 -c honeycomb -P 4 --cport 5000 -R +Connecting to host honeycomb, port 5201 +Reverse mode, remote host honeycomb is sending +[ 5] local 192.168.2.10 port 5000 connected to 192.168.2.4 port 5201 +[ 7] local 192.168.2.10 port 5001 connected to 192.168.2.4 port 5201 +[ 9] local 192.168.2.10 port 5002 connected to 192.168.2.4 port 5201 +[ 11] local 192.168.2.10 port 5003 connected to 192.168.2.4 port 5201 +[ ID] Interval Transfer Bitrate +[ 5] 1.00-2.00 sec 383 MBytes 3.21 Gbits/sec +[ 7] 1.00-2.00 sec 382 MBytes 3.21 Gbits/sec +[ 9] 1.00-2.00 sec 383 MBytes 3.21 Gbits/sec +[ 11] 1.00-2.00 sec 383 MBytes 3.21 Gbits/sec +[SUM] 1.00-2.00 sec 1.49 GBytes 12.8 Gbits/sec +- - - - - - - - - - - - - - - - - - - - - - - - - +(TRUNCATED) +- - - - - - - - - - - - - - - - - - - - - - - - - +[ 5] 2.00-3.00 sec 380 MBytes 3.18 Gbits/sec +[ 7] 2.00-3.00 sec 380 MBytes 3.19 Gbits/sec +[ 9] 2.00-3.00 sec 380 MBytes 3.18 Gbits/sec +[ 11] 2.00-3.00 sec 380 MBytes 3.19 Gbits/sec +[SUM] 2.00-3.00 sec 1.48 GBytes 12.7 Gbits/sec +- - - - - - - - - - - - - - - - - - - - - - - - - +[ ID] Interval Transfer Bitrate Retr +[ 5] 0.00-10.00 sec 3.67 GBytes 3.16 Gbits/sec 1 sender +[ 5] 0.00-10.00 sec 3.67 GBytes 3.15 Gbits/sec receiver +[ 7] 0.00-10.00 sec 3.68 GBytes 3.16 Gbits/sec 7 sender +[ 7] 0.00-10.00 sec 3.67 GBytes 3.15 Gbits/sec receiver +[ 9] 0.00-10.00 sec 3.68 GBytes 3.16 Gbits/sec 36 sender +[ 9] 0.00-10.00 sec 3.68 GBytes 3.16 Gbits/sec receiver +[ 11] 0.00-10.00 sec 3.69 GBytes 3.17 Gbits/sec 1 sender +[ 11] 0.00-10.00 sec 3.68 GBytes 3.16 Gbits/sec receiver +[SUM] 0.00-10.00 sec 14.7 GBytes 12.6 Gbits/sec 45 sender +[SUM] 0.00-10.00 sec 14.7 GBytes 12.6 Gbits/sec receiver + +iperf Done +``` + +### Notes on iperf3 + +I struggled with LACP Team configuration for hours, having done this before with an HP cluster on the same switch. I’d heard stories about bonds being old news with team support adding better load balancing to single TCP flows. 
This still seems bogus, as you can’t load balance a single flow with a team in my experience. Also, LACP claims to be fully automated and easier to set up than traditional load-balanced trunks, but I find the opposite to be true. For all it claims to automate, you still need to have hashing algorithms configured correctly at switches and host. With a few quirks along the way, I once accidentally left a team in broadcast mode (not LACP), which registered duplicate packets on the iperf server and made it look like a single connection was getting double bandwidth. That mistake caused confusion as I tried to reproduce it with LACP. + +Then I finally found the LACP hash settings in Ubiquiti’s new firmware GUI. They’re hidden behind a tiny pencil icon on each LAG. I managed to set my LAGs to hash on Src+Dest IP+port when they were defaulting to MAC/port. Still I was only seeing traffic on one slave of my 2×10 team even with parallel clients. Eventually I tried parallel clients with -V and it all made sense. By default iperf3 client ports are ephemeral but they follow an even sequence: 42174, 42176, 42178, 42180, etc. If your load-balancing hash across a pair of sequential MACs includes src+dst port, but those ports are always even, you’ll never hit the other interface with an odd MAC. How crazy is that for iperf to do? I tried looking at the source for iperf3 and I don’t even see how that could be happening. Instead, if you specify a client port as well as parallel clients, they use a straight sequence: 50000, 50001, 50002, 50003, etc. With odd+even numbers in client ports, I’m finally able to LB across all interfaces in all LAG groups. This setup would scale out well with more clients on the network. + +![Proper LACP load balancing.][19] + +Everything could probably be tuned a bit better, but for now it is excellent performance and it puts my QNAP to shame.
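The ephemeral-port quirk above is easy to demonstrate with a toy hash. Real switch and team hashes are vendor-specific, so this Python sketch only illustrates the parity effect: with an XOR-style hash over addresses and ports, all-even client ports can never flip the low bit of the hash, so every stream lands on the same slave of a two-link LAG.

```python
def toy_lag_hash(src_ip, dst_ip, src_port, dst_port, n_slaves=2):
    """Pick a LAG slave for a flow with a toy XOR hash.

    This is NOT the algorithm any real switch uses -- it only
    mimics the parity behavior of hash-based load balancing.
    """
    def fold_ip(ip):
        acc = 0
        for octet in ip.split("."):
            acc ^= int(octet)
        return acc

    h = fold_ip(src_ip) ^ fold_ip(dst_ip) ^ src_port ^ dst_port
    return h % n_slaves

# Default iperf3 ephemeral ports step by 2 (all even), so the low
# bit of the hash never changes and one slave gets everything:
even = {toy_lag_hash("192.168.2.10", "192.168.2.4", p, 5201)
        for p in (42174, 42176, 42178, 42180)}

# With --cport, client ports are sequential (odd and even), so
# both slaves see traffic:
mixed = {toy_lag_hash("192.168.2.10", "192.168.2.4", p, 5201)
         for p in (50000, 50001, 50002, 50003)}

print(even)   # one slave only
print(mixed)  # both slaves
```

Switching iperf3 to explicit `--cport` values was enough to exercise both links, which matches what the toy model predicts.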
I’ll continue experimenting with the network co-processor to see if I can enable the native LAG support for even better performance. Across the network I would expect a practical peak of about 40 Gbps raw, which is great. + +![][20] + +### Virtualization + +What about virt? One of the best parts about having 16 A72 cores is support for Aarch64 VMs at full speed using KVM, which you won’t be able to do on x86. I can use this single box to spin up a dozen or so VMs at a time for CI automation and testing, or just to test our latest HashiCorp aarch64 builds on COPR. Qemu on x86 without KVM can emulate aarch64 but crawls by comparison. I’ve not tried to add it to an oVirt cluster yet, but it’s really snappy and proves more cost effective than spinning up Arm VMs in a cloud. One of the use cases for this environment is NFV, and I think it fits that perfectly so long as you pair it with ECC RAM, which I skipped as I’m not running anything critical. If anybody wants to test drive a VM, DM me and I’ll try to get you some temp access. + +![Virtual Machines in Cockpit][21] + +### Benchmarks + +[Phoronix][22] has already done quite a few benchmarks on [OpenBenchmarking.org][23] but I wanted to rerun them with the latest versions on my own Fedora 33 build for consistency. I also wanted to compare them to my Xeons, which is not really a fair comparison. Both use DDR4 with similar clock speeds – around 2GHz – but different architectures and caches obviously yield different results. Also the Xeons are dual socket, which is a huge cooling advantage for single-threaded workloads. You can watch one process bounce between the coolest CPU sockets. The HoneyComb doesn’t have this luxury and has a smaller fan, but the clock speed is playing it safe and slow at 2GHz, so I would bet the SoC has room to run faster if cooling were adjusted. I also haven’t played with the PWM settings to adjust the fan speed up just in case.
Benchmarks were performed using the _tuned_ profile network-throughput. + +Strangely, some single-core operations seem to actually perform better on the HoneyComb than they do on my Xeons. I tried single-threaded zstd compression with default level 3 on a few files and found it actually performs consistently better on the HoneyComb. However, using the actual pts/compress-zstd benchmark with the multithreaded option turns the tables. The 16 cores still manage an impressive **2073** MB/s: + +``` +Zstd Compression 1.4.5: +    pts/compress-zstd-1.2.1 [Compression Level: 3] +    Test 1 of 1 +    Estimated Trial Run Count:    3 +    Estimated Time To Completion: 9 Minutes [22:41 UTC] +        Started Run 1 @ 22:33:02 +        Started Run 2 @ 22:33:53 +        Started Run 3 @ 22:34:37 +    Compression Level: 3: +        2079.3 +        2067.5 +        2073.9 +    Average: 2073.57 MB/s +``` + +For an apples-to-oranges comparison, my 2×10 core Xeon E5-2660 v3 box does **2790** MB/s, so 2073 seems perfectly respectable as a potential workstation. Paired with a midrange GPU, this device would also make a great video transcoder or media server. Some users have asked about mining, but I wouldn’t use one of these for mining cryptocurrency. The lack of PCIe Atomics means certain OpenCL and CUDA features might not be supported, and with only 8 PCIe lanes exposed you’re fairly limited. That said, it could potentially make a great mobile ML, VR, IoT, or vision development platform. The possibilities are pretty open as the whole package is very well balanced and flexible. + +### Conclusion + +I wasn’t organized enough this year to arrange a FOSDEM visit, but this is something I would have loved to talk about. I’m definitely glad I tried it out. Special thanks to Jon Nettleton and the folks on SolidRun’s Discord for the help and troubleshooting. The kit is powerful and potentially replaces a lot of energy waste in my home lab.
It provides a great Arm platform for development and it’s great to see how solid Fedora’s alternative architecture support is. I got my Linux start on Gentoo back in the day, but Fedora really has upped its arch game. I’m really glad I didn’t have to sit waiting for compilation on a proprietary platform. I look forward to the remaining patches being mainlined into the Fedora kernel and I hope to see a few more generations use this package, especially as Apple goes all in on Arm. It will also be interesting to see what features emerge if Nvidia’s Arm acquisition goes through. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/ + +作者:[John Boero][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/boeroboy/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/honeycomb-fed-aarch64-816x346.jpg +[2]: https://unsplash.com/@timmossholder?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[3]: https://unsplash.com/s/photos/honeycombs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[4]: http://solid-run.com/arm-servers-networking-platforms/honeycomb-workstation/#overview +[5]: https://www.solid-run.com/wp-content/uploads/2020/11/HoneyComb-layout-front.png +[6]: https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/layerscape-processors/layerscape-lx2160a-processor:LX2160A +[7]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/image-894x1024.png +[8]: https://discord.com/channels/620838168794497044 +[9]: https://i.imgflip.com/11c7o.gif +[10]: https://github.com/SolidRun/lx2160a_uefi +[11]: mailto:dpmac@7.service +[12]: mailto:dpmac@8.service +[13]: 
mailto:dpmac@9.service +[14]: mailto:dpmac@10.service +[15]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/image-2-1024x403.png +[16]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/Screenshot_20210202_112051-1024x713.jpg +[17]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-2-1024x722.png +[18]: https://www.bamboosystems.io/b1000n/ +[19]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-4-1024x245.png +[20]: http://systems.cs.columbia.edu/files/kvm-arm-logo.png +[21]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-1024x717.png +[22]: https://www.phoronix.com/scan.php?page=news_item&px=SolidRun-ClearFog-ARM-ITX +[23]: https://openbenchmarking.org/result/1905313-JONA-190527343&obr_sor=y&obr_rro=y&obr_hgv=ClearFog-ITX diff --git a/sources/tech/20210208 How to set up custom sensors in Home Assistant.md b/sources/tech/20210208 How to set up custom sensors in Home Assistant.md new file mode 100644 index 0000000000..074f898a93 --- /dev/null +++ b/sources/tech/20210208 How to set up custom sensors in Home Assistant.md @@ -0,0 +1,290 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to set up custom sensors in Home Assistant) +[#]: via: (https://opensource.com/article/21/2/home-assistant-custom-sensors) +[#]: author: (Steve Ovens https://opensource.com/users/stratusss) + +How to set up custom sensors in Home Assistant +====== +Dive into the YAML files to set up custom sensors in the sixth article +in this home automation series. +![Computer screen with files or windows open][1] + +In the last article in this series about home automation, I started digging into Home Assistant. I [set up a Zigbee integration][2] with a Sonoff Zigbee Bridge and installed a few add-ons, including Node-RED, File Editor, Mosquitto broker, and Samba. 
I wrapped up by walking through Node-RED's configuration, which I will use heavily later on in this series. The four articles before that one discussed [what Home Assistant is][3], why you may want [local control][4], some of the [communication protocols][5] for smart home components, and how to [install Home Assistant][6] in a virtual machine (VM) using libvirt. + +In this sixth article, I'll walk through the YAML configuration files. This is largely unnecessary if you are just using the integrations supported in the user interface (UI). However, there are times, particularly if you are pulling in custom sensor data, when you have to get your hands dirty with the configuration files. + +Let's dive in. + +### Examine the configuration files + +There are several potential configuration files you will want to investigate. Although everything I am about to show you _can_ be done in the main configuration.yaml file, it can help to split your configuration into dedicated files, especially with large installations. + +Below I will walk through how I configure my system. For my custom sensors, I use the ESP8266 chipset, which is very maker-friendly. I primarily use [Tasmota][7] for my custom firmware, but I also have some components running [ESPHome][8]. Configuring firmware is outside the scope of this article. For now, I will assume you set up your devices with some custom firmware (or you wrote your own with the [Arduino IDE][9]). + +#### The /config/configuration.yaml file + +Configuration.yaml is the main file Home Assistant reads. For the following, use the File Editor you installed in the previous article. If you do not see File Editor in the left sidebar, enable it by going back into the **Supervisor** settings and clicking on **File Editor**. You should see a screen like this: + +![Install File Editor][10] + +(Steve Ovens, [CC BY-SA 4.0][11]) + +Make sure **Show in sidebar** is toggled on.
I also always toggle on the **Watchdog** setting for any add-ons I use frequently. + +Once that is completed, launch File Editor. There is a folder icon in the top-left header bar. This is the navigation icon. The `/config` folder is where the configuration files you are concerned with are stored. If you click on the folder icon, you will see a few important files: + +![Configuration split files][12] + +The following is a default configuration.yaml: + +![Default Home Assistant configuration.yaml][13] + +(Steve Ovens, [CC BY-SA 4.0][11]) + +The notation `script: !include scripts.yaml` indicates that Home Assistant should reference the contents of scripts.yaml anytime it needs the definition of a script object. You'll notice that each of these files correlates to files observed when the folder icon is clicked. + +I added three lines to my configuration.yaml: + + +``` +input_boolean: !include input_boolean.yaml +binary_sensor: !include binary_sensor.yaml +sensor: !include sensor.yaml +``` + +As a quick aside, I configured my MQTT settings (see Home Assistant's [MQTT documentation][14] for more details) in the configuration.yaml file: + + +``` +mqtt: +  discovery: true +  discovery_prefix: homeassistant +  broker: 192.168.11.11 +  username: mqtt +  password: superpassword +``` + +If you make an edit, don't forget to click on the Disk icon to save your work. + +![Save icon in Home Assistant config][15] + +(Steve Ovens, [CC BY-SA 4.0][11]) + +#### The /config/binary_sensor.yaml file + +After you name your file in configuration.yaml, you'll have to create it. In the File Editor, click on the folder icon again. There is a small icon of a piece of paper with a **+** sign in its center. Click on it to bring up this dialog: + +![Create config file][16] + +(Steve Ovens, [CC BY-SA 4.0][11]) + +I have three main types of [binary sensors][17]: door, motion, and power. A binary sensor has only two states: on or off. All my binary sensors send their data to MQTT. 
See my article on [cloud vs. local control][4] for more information about MQTT. + +My binary_sensor.yaml file looks like this: + + +``` +  - platform: mqtt +    state_topic: "BRMotion/state/PIR1" +    name: "BRMotion" +    qos: 1 +    payload_on: "ON" +    payload_off: "OFF" +    device_class: motion + +  - platform: mqtt +    state_topic: "IRBlaster/state/PROJECTOR" +    name: "ProjectorStatus" +    qos: 1 +    payload_on: "ON" +    payload_off: "OFF" +    device_class: power + +  - platform: mqtt +    state_topic: "MainHallway/state/DOOR" +    name: "FrontDoor" +    qos: 1 +    payload_on: "open" +    payload_off: "closed" +    device_class: door +``` + +Take a look at the definitions. Since `platform` is self-explanatory, start with `state_topic`. + + * `state_topic`, as the name implies, is the topic where the device's state is published. This means anyone subscribed to the topic will be notified any time the state changes. This path is completely arbitrary, so you can name it anything you like. I tend to use the convention `location/state/object`, as this makes sense for me. I want to be able to reference all devices in a location, and for me, this layout is the easiest to remember. Grouping by device type is also a valid organizational layout. + + * `name` is the string used to reference the device inside Home Assistant. It is normally referenced by `type.name`, as seen in this card in the Home Assistant [Lovelace][18] interface: + +![Binary sensor card][19] + +(Steve Ovens, [CC BY-SA 4.0][11]) + + * `qos`, short for quality of service, refers to how an MQTT client communicates with the broker when posting to a topic. + + * `payload_on` and `payload_off` are determined by the firmware. These sections tell Home Assistant what text the device will send to indicate its current state. + + * `device_class:` There are multiple possibilities for a device class.
Refer to the [Home Assistant documentation][17] for more information and a description of each type available. + + +#### The /config/sensor.yaml file + +This file differs from binary_sensor.yaml in one very important way: The sensors within this configuration file can have vastly different data inside their payloads. Take a look at one of the more tricky bits of sensor data, temperature. + +Here is the definition for my DHT temperature sensor: + + +``` +  - platform: mqtt +    state_topic: "Steve_Desk_Sensor/tele/SENSOR" +    name: "Steve Desk Temperature" +    value_template: '{{ value_json.DHT11.Temperature }}' + +  - platform: mqtt +    state_topic: "Steve_Desk_Sensor/tele/SENSOR" +    name: "Steve Desk Humidity" +    value_template: '{{ value_json.DHT11.Humidity }}' +``` + +You'll notice two things right from the start. First, there are two definitions for the same `state_topic`. This is because this sensor publishes three different statistics. + +Second, there is a new definition of `value_template`. Most sensors, whether custom or not, send their data inside a JSON payload. The template tells Home Assistant where the important information is in the JSON file. The following shows the raw JSON coming from my homemade sensor. (I used the program `jq` to make the JSON more readable.) + + +``` +{ +  "Time": "2020-12-23T16:59:11", +  "DHT11": { +    "Temperature": 24.8, +    "Humidity": 32.4, +    "DewPoint": 7.1 +  }, +  "BH1750": { +    "Illuminance": 24 +  }, +  "TempUnit": "C" +} +``` + +There are a few things to note here. First, as the sensor data is stored in a time-based data store, every reading has a `Time` entry. Second, there are two different sensors attached to this output. This is because I have both a DHT11 temperature sensor and a BH1750 light sensor attached to the same ESP8266 chip. Finally, my temperature is reported in Celsius. + +Hopefully, the Home Assistant definitions will make a little more sense now.
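Before writing a `value_template`, it can help to confirm exactly where a reading lives in the payload. A quick way is to load the sample JSON above in plain Python (nothing here is Home Assistant-specific; the lookups simply mirror the template expressions):

```python
import json

# Same structure as the sample payload shown above
raw = """
{
  "Time": "2020-12-23T16:59:11",
  "DHT11": {"Temperature": 24.8, "Humidity": 32.4, "DewPoint": 7.1},
  "BH1750": {"Illuminance": 24},
  "TempUnit": "C"
}
"""

value_json = json.loads(raw)

# Mirrors {{ value_json.DHT11.Temperature }} and {{ value_json.DHT11.Humidity }}
print(value_json["DHT11"]["Temperature"])  # 24.8
print(value_json["DHT11"]["Humidity"])     # 32.4
```

If a lookup raises a `KeyError` here, the corresponding `value_template` would fail in the same place.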
`value_json` is just a standard name given to any JSON object ingested by Home Assistant. The format of the `value_template` is `value_json.<sensor>.<value>`. + +For example, to retrieve the dewpoint: + + +``` +value_template: '{{ value_json.DHT11.DewPoint }}' +``` + +While you can dump this information to a file from within Home Assistant, I use Tasmota's `Console` to see the data it is publishing. (If you want me to do an article on Tasmota, please let me know in the comments below.) + +As a side note, I also keep tabs on my local Home Assistant resource usage. To do so, I put this in my sensor.yaml file: + + +``` +  - platform: systemmonitor +    resources: +      - type: disk_use_percent +        arg: / +      - type: memory_free +      - type: memory_use +      - type: processor_use +``` + +While this is technically not a sensor, I put it here, as I think of it as a data sensor. For more information, see Home Assistant's [system monitoring][20] documentation. + +#### The /config/input_boolean.yaml file + +This last section is pretty easy to set up, and I use it for a wide variety of applications. An input boolean is used to track the status of something. It's either on or off, home or away, etc. I use these quite extensively in my automations.
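These booleans can also be read (or toggled) from outside Home Assistant through its REST API, which is handy for scripting. Here is a minimal sketch using only the Python standard library; the base URL, token, and entity name are hypothetical placeholders:

```python
import json
import urllib.request

def state_request(base_url, token, entity_id):
    """Build a GET request for an entity's current state, e.g. input_boolean.steve_home."""
    return urllib.request.Request(
        f"{base_url}/api/states/{entity_id}",
        headers={"Authorization": f"Bearer {token}"},
    )

req = state_request(
    "http://homeassistant.local:8123",  # hypothetical address
    "LONG_LIVED_TOKEN",                 # hypothetical long-lived access token
    "input_boolean.steve_home",
)
print(req.full_url)  # http://homeassistant.local:8123/api/states/input_boolean.steve_home

# Uncomment to actually query a live instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["state"])  # "on" or "off"
```

The same pattern works for service calls, such as a POST to `/api/services/input_boolean/toggle` with an `entity_id` in the JSON body.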
+ +My definitions are: + + +``` +    steve_home: +        name: steve +    steve_in_bed: +        name: 'steve in bed' +    guest_home: + +    kitchen_override: +        name: kitchen +    kitchen_fan_override: +        name: kitchen_fan +    laundryroom_override: +        name: laundryroom +    bathroom_override: +        name: bathroom +    hallway_override: +        name: hallway +    livingroom_override: +        name: livingroom +    ensuite_bathroom_override: +        name: ensuite_bathroom +    steve_desk_light_override: +        name: steve_desk_light +    projector_led_override: +        name: projector_led + +    project_power_status: +        name: 'Projector Power Status' +    tv_power_status: +        name: 'TV Power Status' +    bed_time: +        name: "It's Bedtime" +``` + +I use some of these directly in the Lovelace UI. I create little badges that I put at the top of each of the pages I have in the UI: + +![Home Assistant options in Lovelace UI][21] + +(Steve Ovens, [CC BY-SA 4.0][11]) + +These can be used to determine whether I am home, if a guest is in my house, and so on. Clicking on one of these badges allows me to toggle the boolean, and this object can be read by automations to make decisions about how the “smart devices” react to a person's presence (if at all). I'll revisit the booleans in a future article when I examine Node-RED in more detail. + +### Wrapping up + +In this article, I looked at the YAML configuration files and added a few custom sensors into the mix. You are well on the way to getting some functioning automation with Home Assistant and Node-RED. In the next article, I'll dive into some basic Node-RED flows and introduce some basic automations. + +Stick around; I've got plenty more to cover, and as always, leave a comment below if you would like me to examine something specific. If I can, I'll be sure to incorporate the answers to your questions into future articles.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/home-assistant-custom-sensors + +作者:[Steve Ovens][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/stratusss +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open) +[2]: https://opensource.com/article/21/1/home-automation-5-homeassistant-addons +[3]: https://opensource.com/article/20/11/home-assistant +[4]: https://opensource.com/article/20/11/cloud-vs-local-home-automation +[5]: https://opensource.com/article/20/11/home-automation-part-3 +[6]: https://opensource.com/article/20/12/home-assistant +[7]: https://tasmota.github.io/docs/ +[8]: https://esphome.io/ +[9]: https://create.arduino.cc/projecthub/Niv_the_anonymous/esp8266-beginner-tutorial-project-6414c8 +[10]: https://opensource.com/sites/default/files/uploads/ha-setup22-file-editor-settings.png (Install File Editor) +[11]: https://creativecommons.org/licenses/by-sa/4.0/ +[12]: https://opensource.com/sites/default/files/uploads/ha-setup29-configuration-split-files1.png (Configuration split files) +[13]: https://opensource.com/sites/default/files/uploads/ha-setup28-configuration-yaml.png (Default Home Assistant configuration.yaml) +[14]: https://www.home-assistant.io/docs/mqtt/broker +[15]: https://opensource.com/sites/default/files/uploads/ha-setup23-configuration-yaml2.png (Save icon in Home Assistant config) +[16]: https://opensource.com/sites/default/files/uploads/ha-setup24-new-config-file.png (Create config file) +[17]: https://www.home-assistant.io/integrations/binary_sensor/ +[18]: https://www.home-assistant.io/lovelace/ +[19]: 
https://opensource.com/sites/default/files/uploads/ha-setup25-bindary_sensor_card.png (Binary sensor card) +[20]: https://www.home-assistant.io/integrations/systemmonitor +[21]: https://opensource.com/sites/default/files/uploads/ha-setup25-input-booleans.png (Home Assistant options in Lovelace UI) diff --git a/sources/tech/20210209 My open source disaster recovery strategy for the home office.md b/sources/tech/20210209 My open source disaster recovery strategy for the home office.md new file mode 100644 index 0000000000..05f2fce5b5 --- /dev/null +++ b/sources/tech/20210209 My open source disaster recovery strategy for the home office.md @@ -0,0 +1,199 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (My open source disaster recovery strategy for the home office) +[#]: via: (https://opensource.com/article/21/2/high-availability-home-office) +[#]: author: (Howard Fosdick https://opensource.com/users/howtech) + +My open source disaster recovery strategy for the home office +====== +In the remote work era, it's more important than ever to have a disaster +recovery plan for your household infrastructure. +![Person using a laptop][1] + +I've worked from home for years, and with the COVID-19 crisis, millions more have joined me. Teachers, accountants, librarians, stockbrokers… you name it, these workers now operate full or part time from their homes. Even after the coronavirus crisis ends, many will continue working at home, at least part time. But what happens when the home worker's computer fails? Whether the device is a smartphone, tablet, laptop, or desktop—and whether the problem is hardware or software—the result might be missed workdays and lots of frustration. + +This article explores how to ensure high-availability home computing. Open source software is key. It offers device independence so that home workers can easily move between primary and backup devices. 
Most importantly, it gives users control of their environment, which is the surest route to high availability. This simple high-availability strategy, based on open source, is easy to modify for your needs. + +### Different strategies for different situations + +I need to emphasize one point upfront: different job functions require different solutions. Some at-home workers can use smartphones or tablets, while others rely on laptops, and still others require high-powered desktop workstations. Some can tolerate an outage of hours or even days, while others must be available without interruption. Some use company-supplied devices, and others must provide their own. Lastly, some home workers store their data in their company's cloud, while others self-manage their data. + +Obviously, no single high-availability strategy fits everyone. My strategy probably isn't "the answer" for you, but I hope it prompts you to think about the challenges involved (if you haven't already) and presents some ideas to help you prepare before disaster strikes. + +### Defining high availability + +Whatever computing device a home worker uses, high availability (HA) involves five interoperable components: + + * Device hardware + * System software + * Communications capability + * Applications + * Data + + + +The HA plan must encompass all five components to succeed. Missing any component causes HA failure. + +For example, last night, I worked on a cloud-based spreadsheet. If my communications link had failed and I couldn't access my cloud data, that would stop my work on the project… even if I had all the other HA components available in a backup computer. + +Of course, there are exceptions. Say last night's spreadsheet was stored on my local computer. If that device failed, I could have kept working as long as I had a backup computer with my data on it, even if I lacked internet access. 
+ +To succeed as a high-availability home worker, you must first identify the components you require for your work. Once you've done that, develop a plan to continue working even if one or more components fails. + +#### Duplicate replacement + +One approach is to create a _duplicate replacement_. Having the exact same hardware, software, communications, apps, and data available on a backup device guarantees that you can work if your primary fails. This approach is simple, though it might cost more to keep a complete backup on hand. + +To economize, you might share computers with your family or flatmates. A _shared backup_ is always more cost-effective than a _dedicated backup_, so long as you have top priority on the shared computer when you need it. + +#### Functional replacement + +The alternative to duplicate replacement is a _functional replacement_. You substitute a working equivalent for the failed component. Say I'm working from my home laptop and connecting through home WiFi. My internet connection fails. Perhaps I can tether my computer to my phone and use the cell network instead. I achieve HA by replacing one technology with an equivalent. + +#### Know your requirements + +Beyond the five HA components, be sure to identify any special requirements you have. For example, if mobility is important, you might need to replace a broken laptop with another laptop, not a desktop. + +HA means identifying all the functions you need, then ensuring your HA plan covers them all. + +### Timing, planning, and testing + +You must also define your time frame for recovery. Must you be able to continue your work immediately after a failure? Or do you have the luxury of some downtime during which you can react? + +The longer your allowable downtime, the more options you have. For example, if you could miss work for several days, you could simply trot a broken device into a repair shop. No need for a backup. 
+ +In this article, by "high availability," I mean getting back to work in very short order after a failure, perhaps less than one hour. This typically requires that you have access to a backup device that is immediately available and ready to go. While there might be occasions when you can recover your primary device in a matter of minutes—for example, by working around a failure or by quickly replacing a defective piece of hardware or software—a backup computer is normally part of the HA plan. + +HA requires planning and preparation. "Winging it" doesn't suffice; ensure your backup plan works by testing it beforehand. + +For example, say your data resides in the cloud. That data is accessible from anywhere, from any device. That sounds ideal. But what if you forget that there's a small but vital bit of data stored locally on your failed computer? If you can't access that essential data, your HA plan fails. A dry run surfaces problems like this. + +### Smartphones as backup + +Most of us in software engineering and support use laptops and desktops at home. Smartphones and tablets are useful adjuncts, but they aren't at the core of what we do. + +The main reasons are screen size and keyboard. For software work, you can't achieve the same level of productivity with a small screen and touchscreen keypad as you can with a large monitor and physical keyboard. + +If you normally use a laptop or desktop and opt for a smartphone or tablet as your backup, test it out beforehand to make sure it suffices. Here's an example of the kind of subtlety that might otherwise trip you up. Most videoconferencing platforms run on both smartphones and laptops or desktops, but their mobile apps can differ in small but important ways. 
And even when the platform does offer an equivalent experience (the way [Jitsi][2] does, for instance), it can be awkward to share charts, slide decks, and documents, to use a chat feature, and so on, just due to the difference in mobile form factors compared to a big computer screen and a variety of input options. + +Smartphones make convenient backup devices because nearly everyone has one. But if you designate yours as your functional replacement, then try using it for work one day to verify that it meets your needs. + +### Data accessibility + +Data access is vital when your primary device fails. Even if you back up your work data, if a device fails, you also may need credentials for VPN or SSH access, specialized software, or forms of data that might not be stored along with your day-to-day documents and directories. You must ensure that when you design a backup scheme for yourself, you include all important data and store encryption keys and other access information securely. + +The best way to keep your work data secure is to use your own service. Running [Nextcloud][3] or [Sparkleshare][4] is easy, and hosting is cheap. Both are automated: files you place in a specially designated directory are synchronized with your server. It's not exactly building your own cloud, but it's a great way to leverage the cloud for your own services. You can make the backup process seamless with tools like [Syncthing, Bacula][5], or [rdiff-backup][6]. + +Cloud storage enables you to access data from any device at any location, but cloud storage will work only if you have a live communications path to it after a failure event. And not all cloud storage meets the privacy and security specifications for all projects. If your workplace has a cloud backup solution, spend some time learning about the cloud vendor's services and find out what level of availability it promises. Check its track record in achieving it. 
And be sure to devise an alternate way to access your cloud if your primary communications link fails. + +### Local backups + +If you store your data on a local device, you'll be responsible for backing it up and recovering it. In that case, back up your data to an alternate device, and verify that you can restore it within your acceptable time frame. This is your _time-to-recovery_. + +You'll also need to secure that data and meet any privacy requirements your employer specifies. + +#### Acceptable loss + +Consider how much data you can afford to lose in the event of an outage. For example, if you back up your data nightly, you could lose up to a maximum of one day's work (all the work completed during the day prior to the nightly backup). This is your _backup data timeliness_. + +Open source offers many free applications for local data backup and recovery. Generally, the same applications used for remote backups can also apply to local backup plans, so take a look at the [Advanced Rsync][7] or the [Syncthing tutorial][8] articles here on Opensource.com. + +Many prefer a data strategy that combines both cloud and local storage. Store your data locally, and then use the cloud as a backup (rather than working on the cloud). Or do it the other way around (although automating the cloud to push backups to you is more difficult than automating your local machine to push backups to the cloud). Storing your data in two separate locations gives your data _geographical redundancy_, which is useful should either site become unavailable. + +With a little forethought, you can devise a simple plan to access your data regardless of any outage. + +### My high-availability strategy + +As a practical example, I'll describe my own HA approach. My goals are a time to recovery of an hour or less and backup data timeliness within a day. 
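Those two goals are easy to keep honest with a small script that checks the age of the newest file in a backup location against the one-day timeliness target. This is just a sketch; the directory path would be wherever your backup tool writes:

```python
import os
import time

def backup_age_seconds(backup_dir):
    """Return the age in seconds of the newest file under backup_dir."""
    newest = 0.0
    for root, _dirs, files in os.walk(backup_dir):
        for name in files:
            newest = max(newest, os.path.getmtime(os.path.join(root, name)))
    if newest == 0.0:
        raise RuntimeError("no backups found in " + backup_dir)
    return time.time() - newest

def backup_is_timely(backup_dir, max_age=24 * 3600):
    # True if the latest backup falls within the acceptable-loss window.
    return backup_age_seconds(backup_dir) <= max_age
```

Run from cron, a check like this surfaces a silently failing backup long before you need the restore.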
+ +![High Availability Strategy][9] + +(Howard Fosdick, [CC BY-SA 4.0][10]) + +#### Hardware + +I use an Android smartphone for phone calls and audioconferences. I can access a backup phone from another family member if my primary fails. + +Unfortunately, my phone's small size and touch keyboard mean I can't use it as my backup computer. Instead, I rely on a few generic desktop computers that have standard, interchangeable parts. You can easily maintain such hardware with this simple [free how-to guide][11]. You don't need any hardware experience. + +Open source software makes my multibox strategy affordable. It runs so efficiently that even [10-year-old computers work fine][12] as backups for typical office work. Mine are dual-core desktops with 4GB of RAM and any disk that cleanly verifies. These are so inexpensive that you can often get them for free from recycling centers. (In my [charity work][13], I find that many people give them away as unsuitable for running current proprietary software, but they're actually in perfect working order given a flexible operating system like Linux.) + +Another way to economize is to designate another family member's computer for your shared backups. + +#### Systems software and apps + +Running open source software on top of this generic hardware enables me to achieve several benefits. First, the flexibility of open source software enables me to address any possible software failure. For example, with simple operating system commands, I can copy, move, back up, and recover the operating system, applications, and data across partitions, disks, or computers. I don't have to worry about software constraints, vendor lock-in, proprietary backup file formats, licensing or activation restrictions, or extra fees. + +Another open source benefit is that you control your operating system. If you don't have control over your own system, you could be subject to forced restarts, unexpected and unwanted updates, and forced upgrades. 
My relative has run into such problems more than once. Without his knowledge or consent, his computer suddenly launched a forced upgrade from Windows 7 to Windows 10, which cost him three days of lost income (and untold frustration). The lesson: Your vendor's agenda may not coincide with your own. + +All operating systems have bugs. The difference is that open source software doesn't force you to eat them. + +#### Data classification + +I use very simple techniques to make my data highly available. + +I can't use cloud services for my data due to privacy requirements. Instead, my data "master copy" resides on a USB-connected disk. I plug it into any of several computers. After every session, I back up any altered data on the computer I used. + +Of course, this approach is only feasible if your backups run quickly. For most home workers, that's easy. All you have to do is segregate your data by size and how frequently you update it. + +Isolate big files like photos, audio, and video into separate folders or partitions. Make sure you back up only the files that are new or modified, not older items that have already been backed up. + +Much of my work involves office suites. These generate small files, so I isolate each project in its own folder. For example, I stored the two dozen files I used to write this article in a single subdirectory. Backing it up is as simple as copying that folder. + +Giving a little thought to data segregation and backing up only modified files ensures quick, easy backups for most home workers. My approach is simple; it works best if you only work on a couple of projects in a session. And I can tolerate losing up to a day's work. You can easily automate a more refined backup scheme for yourself. + +For software development, I take an entirely different approach. I use software versioning, which transparently handles all software backup issues for me and coordinates with other developers. 
My HA planning in this area focuses just on ensuring I can access the online tool. + +#### Communications + +Like many home users, I communicate through both a cellphone network and the internet. If my internet goes down, I can use the cell network instead by tethering my laptop to my Android smartphone. + +### Learning from failure + +Using my strategy for 15 years, how have I fared? What failures have I experienced, and how did they turn out? + + 1. **Motherboard burnout:** One day, my computer wouldn't turn on. I simply moved my USB "master data" external disk to another computer and used that. I lost no data. After some investigation, I determined it was a motherboard failure, so I scrapped the computer and used it for parts. + 2. **Drive failure:** An internal disk failed while I was working. I just moved my USB master disk to a backup computer. I lost 10 minutes of data updates. After work, I created a new boot disk by copying one from another computer—flexibility that only open source software offers. I used the affected computer the next day. + 3. **Fatal software update:** An update caused a failure in an important login service. I shifted to a backup computer where I hadn't yet applied the fatal update. I lost no data. After work, I searched for help with this problem and had it solved in an hour. + 4. **Monitor burnout:** My monitor fizzled out. I just swapped in a backup display and kept working. This took 10 minutes. After work, I determined that the problem was a burned-out capacitor, so I recycled the monitor. + 5. **Power outage:** Now, here's a situation I didn't plan for! A tornado took down the electrical power in our entire town for two days. I learned that one should think through _all_ possible contingencies—including alternate work sites. + + + +### Make your plan + +If you work from home, you need to consider what will happen when your home computer fails. 
Without a plan, you could lose days of work while you scramble to fix the problem. + +Open source software is the key. It runs so efficiently on older, cheaper computers that they become affordable backup machines. It offers device independence, and it ensures that you can design solutions that work best for you. + +For most people, ensuring high availability is very simple. The trick is thinking about it in advance. Create a plan _and then test it_. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/high-availability-home-office + +作者:[Howard Fosdick][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/howtech +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop) +[2]: https://jitsi.org/downloads/ +[3]: https://opensource.com/article/20/7/nextcloud +[4]: https://opensource.com/article/19/4/file-sharing-git +[5]: https://opensource.com/article/19/3/backup-solutions +[6]: https://opensource.com/life/16/3/turn-your-old-raspberry-pi-automatic-backup-server +[7]: https://opensource.com/article/19/5/advanced-rsync +[8]: https://opensource.com/article/18/9/take-control-your-data-syncthing +[9]: https://opensource.com/sites/default/files/uploads/my_ha_strategy.png (High Availability Strategy) +[10]: https://creativecommons.org/licenses/by-sa/4.0/ +[11]: http://www.rexxinfo.org/Quick_Guide/Quick_Guide_To_Fixing_Computer_Hardware +[12]: https://opensource.com/article/19/7/how-make-old-computer-useful-again +[13]: https://www.freegeekchicago.org/ diff --git a/sources/tech/20210209 Try Deno as an alternative to Node.js.md b/sources/tech/20210209 Try Deno as an alternative
to Node.js.md new file mode 100644 index 0000000000..efce7c5f93 --- /dev/null +++ b/sources/tech/20210209 Try Deno as an alternative to Node.js.md @@ -0,0 +1,12 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Try Deno as an alternative to Node.js) +[#]: via: (https://opensource.com/article/21/2/deno) +[#]: author: (Bryant Son https://opensource.com/users/brson) + +Try Deno as an alternative to Node.js +====== +Deno is a secure runtime for JavaScript and TypeScript. diff --git a/sources/tech/20210210 Configure multi-tenancy with Kubernetes namespaces.md b/sources/tech/20210210 Configure multi-tenancy with Kubernetes namespaces.md new file mode 100644 index 0000000000..5ced955007 --- /dev/null +++ b/sources/tech/20210210 Configure multi-tenancy with Kubernetes namespaces.md @@ -0,0 +1,368 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Configure multi-tenancy with Kubernetes namespaces) +[#]: via: (https://opensource.com/article/21/2/kubernetes-namespaces) +[#]: author: (Mike Calizo https://opensource.com/users/mcalizo) + +Configure multi-tenancy with Kubernetes namespaces +====== +Namespaces provide basic building blocks of access control for +applications, users, or groups of users. +![shapes of people symbols][1] + +Most enterprises want a multi-tenancy platform to run their cloud-native applications because it helps them manage resources and costs, improve operational efficiency, and control [cloud waste][2]. + +[Kubernetes][3] is the leading open source platform for managing containerized workloads and services. It gained this reputation because of its flexibility in allowing operators and developers to establish automation with declarative configuration. But there is a catch: as Kubernetes adoption grows rapidly, the old problem of velocity becomes an issue.
The bigger your adoption, the more issues and resource waste you discover. + +### An example of scale + +Imagine your company started small with its Kubernetes adoption by deploying a variety of internal applications. It has multiple project streams running with multiple developers dedicated to each project stream. + +In a scenario like this, you need to make sure your cluster administrator has full control over the cluster to manage its resources and implement cluster policy and security standards. In a way, the admin is herding the cluster's users toward best practices. A namespace is very useful in this instance because it enables different teams to share a single cluster whose computing resources are subdivided among multiple teams. + +While namespaces are your first step to Kubernetes multi-tenancy, they are not good enough on their own. There are a number of Kubernetes primitives you need to consider so that you can administer your cluster properly and put it into a production-ready implementation. + +The Kubernetes primitives for multi-tenancy are: + + 1. **RBAC:** Role-based access control for Kubernetes + 2. **Network policies:** To isolate traffic between namespaces + 3. **Resource quotas:** To control fair access to cluster resources + +This article explores how to use Kubernetes namespaces and some basic RBAC configurations to partition a single Kubernetes cluster and take advantage of this built-in Kubernetes tooling. + +### What is a Kubernetes namespace? + +Before digging into how to use namespaces to prepare your Kubernetes cluster to become multi-tenant-ready, you need to know what namespaces are. + +A [namespace][4] is a Kubernetes object that partitions a Kubernetes cluster into multiple virtual clusters. This is done with the aid of [Kubernetes names and IDs][5]. Namespaces build on Kubernetes names, which means that each object inside a namespace gets a name that is unique within that namespace (and a UID that is unique across the cluster), allowing virtual partitioning.
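Because a namespace is a regular Kubernetes object, you can also declare it in a manifest and apply it with `kubectl apply -f`, just like any other resource. A minimal sketch (the name and label here are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    team: project-a   # optional label, e.g., to mark which team owns the namespace
```

Keeping namespaces in version-controlled manifests, rather than creating them imperatively, makes it easy to recreate a cluster's tenancy layout later.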
+ +### How namespaces help in multi-tenancy + +Namespaces are one of the Kubernetes primitives you can use to partition your cluster into multiple virtual clusters to allow multi-tenancy. Each namespace is isolated from every other user's, team's, or application's namespace. This isolation is essential in multi-tenancy so that updates and changes in applications, users, and teams are contained within the specific namespace. (Note that namespaces do not provide network segmentation.) + +Before moving ahead, verify the default namespaces in a working Kubernetes cluster: + + +``` +[root@master ~]# kubectl get namespace +NAME              STATUS   AGE +default           Active   3d +kube-node-lease   Active   3d +kube-public       Active   3d +kube-system       Active   3d +``` + +Then create your first namespace, called **test**: + + +``` +[root@master ~]# kubectl create namespace test +namespace/test created +``` + +Verify the newly created namespace: + + +``` +[root@master ~]# kubectl get namespace +NAME              STATUS   AGE +default           Active   3d +kube-node-lease   Active   3d +kube-public       Active   3d +kube-system       Active   3d +test              Active   10s +[root@master ~]# +``` + +Describe the newly created namespace: + + +``` +[root@master ~]# kubectl describe namespace test +Name:         test +Labels:       <none> +Annotations:  <none> +Status:       Active +No resource quota. +No LimitRange resource. +``` + +To delete a namespace: + + +``` +[root@master ~]# kubectl delete namespace test +namespace "test" deleted +``` + +(The following sections use the **test** namespace, so if you deleted it, recreate it with `kubectl create namespace test` before moving on.) + +Your new namespace is active, but it doesn't have any labels, annotations, or quota-limit ranges defined. However, now that you know how to create, describe, and delete a namespace, I'll show how you can use a namespace to virtually partition a Kubernetes cluster.
+ +### Partitioning clusters using namespace and RBAC + +Deploy the following simple application to learn how to partition a cluster using namespace and isolate an application and its related objects from "other" users. + +First, verify the namespace you will use. For simplicity, use the **test** namespace you created above: + + +``` +[root@master ~]# kubectl get namespaces +NAME              STATUS   AGE +default           Active   3d +kube-node-lease   Active   3d +kube-public       Active   3d +kube-system       Active   3d +test              Active   3h +``` + +Then deploy a simple application called **test-app** inside the test namespace by using the following configuration: + + +``` +apiVersion: v1 +kind: Pod +metadata: +  name: test-app                 ⇒ name of the application +  namespace: test                ⇒ the namespace where the app runs +  labels: +     app: test-app                      ⇒ labels for the app +spec: +  containers: +  - name: test-app +    image: nginx:1.14.2         ⇒ the image we used for the app. +    ports: +    - containerPort: 80 +``` + +Deploy it: + + +``` +$ kubectl create -f test-app.yaml +    pod/test-app created +``` + +Then verify the application pod was created: + + +``` +$ kubectl get pods -n test +  NAME       READY   STATUS    RESTARTS   AGE +  test-app   1/1     Running   0          18s +``` + +Now that the running application is inside the **test** namespace, test a use case where: + + * **auth-user** can edit and view all the objects inside the test namespace + * **un-auth-user** can only view the namespace + + + +I pre-created the users for you to test. If you want to know how I created the users inside Kubernetes, view the commands [here][6]. 
+ + +``` +$ kubectl config view -o jsonpath='{.users[*].name}' +  auth-user +  kubernetes-admin +  un-auth-user +``` + +With this set up, create a Kubernetes [Role and RoleBindings][7] to isolate the target namespace **test** to allow **auth-user** to view and edit objects inside the namespace and not allow **un-auth-user** to access or view the objects inside the **test** namespace. + +Start by creating a ClusterRole and a Role. These objects are a list of verbs (actions) permitted on specific resources and namespaces. + +Create a ClusterRole (ClusterRoles are cluster-scoped, so the manifest takes no `namespace` field): + + +``` +$ cat clusterrole.yaml +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRole +metadata: +  name: list-deployments +rules: +  - apiGroups: [ apps ] +    resources: [ deployments ] +    verbs: [ get, list ] +``` + +Create a Role: + + +``` +$ cat role.yaml +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: Role +metadata: +  name: list-deployments +  namespace: test +rules: +  - apiGroups: [ apps ] +    resources: [ deployments ] +    verbs: [ get, list ] +``` + +Apply the Role: + + +``` +$ kubectl create -f role.yaml +roles.rbac.authorization.k8s.io "list-deployments" created +``` + +Use the same command to create a ClusterRole: + + +``` +$ kubectl create -f clusterrole.yaml + +$ kubectl get role -n test +  NAME               CREATED AT +  list-deployments   2021-01-18T00:54:00Z +``` + +Verify the Roles: + + +``` +$ kubectl describe roles -n test +  Name:         list-deployments +  Labels:       <none> +  Annotations:  <none> +  PolicyRule: +    Resources         Non-Resource URLs  Resource Names  Verbs +    ---------         -----------------  --------------  ----- +    deployments.apps  []                 []              [get list] +``` + +Remember that RoleBindings are scoped to a namespace and each binding grants a single role. This means you need to create two role bindings for user **auth-user**: one for edit and one for view. + +Here are the sample RoleBinding YAML files to permit **auth-user** to edit and view.
+ +**To edit:** + + +``` +$ cat rolebinding-auth-edit.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: +  name: auth-user-edit +  namespace: test +subjects: +- kind: User +  name: auth-user +  apiGroup: rbac.authorization.k8s.io +roleRef: +  kind: ClusterRole +  name: edit +  apiGroup: rbac.authorization.k8s.io +``` + +**To view:** + + +``` +$ cat rolebinding-auth-view.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: +  name: auth-user-view +  namespace: test +subjects: +- kind: User +  name: auth-user +  apiGroup: rbac.authorization.k8s.io +roleRef: +  kind: ClusterRole +  name: view +  apiGroup: rbac.authorization.k8s.io +``` + +Apply these YAML files: + + +``` +$ kubectl create -f rolebinding-auth-view.yaml +$ kubectl create -f rolebinding-auth-edit.yaml +``` + +Verify that the RoleBindings were successfully created: + + +``` +$ kubectl get rolebindings -n test +NAME             ROLE               AGE +auth-user-edit   ClusterRole/edit   48m +auth-user-view   ClusterRole/view   47m +``` + +With the requirements set up, test the cluster partitioning: + + +``` +[root@master]$ sudo su un-auth-user +[un-auth-user@master ~]$ kubectl get pods -n test +Error from server (Forbidden): pods is forbidden: User "un-auth-user" cannot list resource "pods" in API group "" in the namespace "test" +``` + +Log in as **auth-user**: + + +``` +[root@master ]# sudo su auth-user +[auth-user@master auth-user]$ kubectl get pods -n test +NAME       READY   STATUS    RESTARTS   AGE +test-app   1/1     Running   0          3h8m + +[auth-user@master auth-user]$ kubectl edit pods/test-app -n test +Edit cancelled, no changes made. +``` + +You can view and edit the objects inside the **test** namespace. How about viewing the cluster nodes?
+ + +``` +[auth-user@master auth-user]$ kubectl get nodes +Error from server (Forbidden): nodes is forbidden: User "auth-user" cannot list resource "nodes" in API group "" at the cluster scope +[auth-user@master auth-user]$ +``` + +You can't because the role bindings for user **auth-user** dictate they have access to view or edit objects only inside the **test** namespace. + +### Enable access control with namespaces + +Namespaces provide basic building blocks of access control using RBAC and isolation for applications, users, or groups of users. But using namespaces alone as your multi-tenancy solution is not enough in an enterprise implementation. It is recommended that you use other Kubernetes multi-tenancy primitives to attain further isolation and implement proper security. + +Namespaces can provide some basic isolation in your Kubernetes cluster; therefore, it is important to consider them upfront, especially when planning a multi-tenant cluster. Namespaces also allow you to logically segregate and assign resources to individual users, teams, or applications. + +By using namespaces, you can increase resource efficiencies by enabling a single cluster to be used for a diverse set of workloads. 
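Of the multi-tenancy primitives listed earlier, resource quotas pair most naturally with namespaces. As a sketch of a possible next step (the name and the limits below are purely illustrative), a `ResourceQuota` caps what all workloads in a namespace may consume in aggregate:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
  namespace: test
spec:
  hard:
    pods: "10"             # at most 10 pods in the namespace
    requests.cpu: "2"      # total CPU requested across all pods
    requests.memory: 4Gi   # total memory requested across all pods
    limits.cpu: "4"        # total CPU limit across all pods
    limits.memory: 8Gi     # total memory limit across all pods
```

Once a quota is applied, `kubectl describe namespace test` reports usage against these limits, and requests that would exceed them are rejected.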
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/kubernetes-namespaces + +作者:[Mike Calizo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mcalizo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Open%20Pharma.png?itok=GP7zqNZE (shapes of people symbols) +[2]: https://devops.com/the-cloud-is-booming-but-so-is-cloud-waste/ +[3]: https://opensource.com/resources/what-is-kubernetes +[4]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ +[5]: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/ +[6]: https://www.adaltas.com/en/2019/08/07/users-rbac-kubernetes/ +[7]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ diff --git a/sources/tech/20210210 Draw Mandelbrot fractals with GIMP scripting.md b/sources/tech/20210210 Draw Mandelbrot fractals with GIMP scripting.md new file mode 100644 index 0000000000..d38f3fb54d --- /dev/null +++ b/sources/tech/20210210 Draw Mandelbrot fractals with GIMP scripting.md @@ -0,0 +1,370 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Draw Mandelbrot fractals with GIMP scripting) +[#]: via: (https://opensource.com/article/21/2/gimp-mandelbrot) +[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana) + +Draw Mandelbrot fractals with GIMP scripting +====== +Create complex mathematical images with GIMP's Script-Fu language. +![Painting art on a computer screen][1] + +The GNU Image Manipulation Program ([GIMP][2]) is my go-to solution for image editing. 
Its toolset is very powerful and convenient, except for doing [fractals][3], which is one thing you cannot draw by hand easily. These are fascinating mathematical constructs that have the characteristic of being [self-similar][4]. In other words, if they are magnified in some areas, they will look remarkably similar to the unmagnified picture. Besides being interesting, they also make very pretty pictures! + +![Portion of a Mandelbrot fractal using GIMPs Coldfire palette][5] + +Portion of a Mandelbrot fractal using GIMP's Coldfire palette (Cristiano Fontana, [CC BY-SA 4.0][6]) + +GIMP can be automated with [Script-Fu][7] to do [batch processing of images][8] or create complicated procedures that are not practical to do by hand; drawing fractals falls in the latter category. This tutorial will show how to draw a representation of the [Mandelbrot fractal][9] using GIMP and Script-Fu. + +![Mandelbrot set drawn using GIMP's Firecode palette][10] + +Portion of a Mandelbrot fractal using GIMP's Firecode palette. (Cristiano Fontana, [CC BY-SA 4.0][6]) + +![Rotated and magnified portion of the Mandelbrot set using Firecode.][11] + +Rotated and magnified portion of the Mandelbrot set using the Firecode palette. (Cristiano Fontana, [CC BY-SA 4.0][6]) + +In this tutorial, you will write a script that creates a layer in an image and draws a representation of the Mandelbrot set with a colored environment around it. + +### What is the Mandelbrot set? + +Do not panic! I will not go into too much detail here. For the more math-savvy, the Mandelbrot set is defined as the set of [complex numbers][12] _a_ for which the succession + +_zₙ₊₁ = zₙ² + a_ + +does not diverge when starting from _z₀ = 0_. + +In reality, the Mandelbrot set is the fancy-looking black blob in the pictures; the nice-looking colors are outside the set. They represent how many iterations are required for the magnitude of the succession of numbers to pass a threshold value.
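Before looking at the Script-Fu version, the escape-time idea is easy to sketch in a few lines of Python (a hypothetical stand-alone sketch, not part of the GIMP script):

```python
def iterations(a: complex, threshold: float = 2.0, max_iter: int = 256) -> int:
    """Count the steps the succession z -> z*z + a needs to exceed `threshold`.

    Returns `max_iter` when the succession stays bounded, which means `a` is
    (very likely) inside the Mandelbrot set.
    """
    z = 0j
    for i in range(max_iter):
        if abs(z) > threshold:
            return i   # escaped: this count picks the pixel's color
        z = z * z + a
    return max_iter    # never escaped: the pixel is drawn black
```

A point such as `a = -1` cycles forever between `-1` and `0`, so it never escapes and would be drawn black; `a = 2` escapes after two steps and would get the corresponding palette color.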
In other words, the color scale shows how many steps are required for the succession to pass an upper-limit value. + +### GIMP's Script-Fu + +[Script-Fu][7] is the scripting language built into GIMP. It is an implementation of the [Scheme programming language][13]. + +If you want to get more acquainted with Scheme, GIMP's documentation offers an [in-depth tutorial][14]. I also wrote an article about [batch processing images][8] using Script-Fu. Finally, the Help menu offers a Procedure Browser with very extensive documentation with all of Script-Fu's functions described in detail. + +![GIMP Procedure Browser][15] + +(Cristiano Fontana, [CC BY-SA 4.0][6]) + +Scheme is a Lisp-like language, so a major characteristic is that it uses a [prefix notation][16] and a [lot of parentheses][17]. Functions and operators are applied to a list of operands by prefixing them: + + +``` +(function-name operand operand ...) + +(+ 2 3) +↳ Returns 5 + +(list 1 2 3 5) +↳ Returns a list containing 1, 2, 3, and 5 +``` + +### Write the script + +You can write your first script and save it to the **Scripts** folder found in the preferences window under **Folders → Scripts**. Mine is at `$HOME/.config/GIMP/2.10/scripts`. 
Write a file called `mandelbrot.scm` with: + + +``` +; Complex numbers implementation +(define (make-rectangular x y) (cons x y)) +(define (real-part z) (car z)) +(define (imag-part z) (cdr z)) + +(define (magnitude z) +  (let ((x (real-part z)) +        (y (imag-part z))) +    (sqrt (+ (* x x) (* y y))))) + +(define (add-c a b) +  (make-rectangular (+ (real-part a) (real-part b)) +                    (+ (imag-part a) (imag-part b)))) + +(define (mul-c a b) +  (let ((ax (real-part a)) +        (ay (imag-part a)) +        (bx (real-part b)) +        (by (imag-part b))) +    (make-rectangular (- (* ax bx) (* ay by)) +                      (+ (* ax by) (* ay bx))))) + +; Definition of the function creating the layer and drawing the fractal +(define (script-fu-mandelbrot image palette-name threshold domain-width domain-height offset-x offset-y) +  (define num-colors (car (gimp-palette-get-info palette-name))) +  (define colors (cadr (gimp-palette-get-colors palette-name))) + +  (define width (car (gimp-image-width image))) +  (define height (car (gimp-image-height image))) + +  (define new-layer (car (gimp-layer-new image +                                         width height +                                         RGB-IMAGE +                                         "Mandelbrot layer" +                                         100 +                                         LAYER-MODE-NORMAL))) + +  (gimp-image-add-layer image new-layer 0) +  (define drawable new-layer) +  (define bytes-per-pixel (car (gimp-drawable-bpp drawable))) + +  ; Fractal drawing section. 
+  ; Code from: +  (define (iterations a z i) +    (let ((z′ (add-c (mul-c z z) a))) +       (if (or (= i num-colors) (> (magnitude z′) threshold)) +          i +          (iterations a z′ (+ i 1))))) + +  (define (iter->color i) +    (if (>= i num-colors) +        (list->vector '(0 0 0)) +        (list->vector (vector-ref colors i)))) + +  (define z0 (make-rectangular 0 0)) + +  (define (loop x end-x y end-y) +    (let* ((real-x (- (* domain-width (/ x width)) offset-x)) +           (real-y (- (* domain-height (/ y height)) offset-y)) +           (a (make-rectangular real-x real-y)) +           (i (iterations a z0 0)) +           (color (iter->color i))) +      (cond ((and (< x end-x) (< y end-y)) (gimp-drawable-set-pixel drawable x y bytes-per-pixel color) +                                           (loop (+ x 1) end-x y end-y)) +            ((and (>= x end-x) (< y end-y)) (gimp-progress-update (/ y end-y)) +                                            (loop 0 end-x (+ y 1) end-y))))) +  (loop 0 width 0 height) + +  ; These functions refresh the GIMP UI, otherwise the modified pixels would be evident +  (gimp-drawable-update drawable 0 0 width height) +  (gimp-displays-flush) +) + +(script-fu-register +  "script-fu-mandelbrot"          ; Function name +  "Create a Mandelbrot layer"     ; Menu label +                                  ; Description +  "Draws a Mandelbrot fractal on a new layer. For the coloring it uses the palette identified by the name provided as a string. The image boundaries are defined by its domain width and height, which correspond to the image width and height respectively. Finally the image is offset in order to center the desired feature." +  "Cristiano Fontana"             ; Author +  "2021, C.Fontana. GNU GPL v. 3" ; Copyright +  "27th Jan. 
2021"                ; Creation date +  "RGB"                           ; Image type that the script works on +  ;Parameter    Displayed            Default +  ;type         label                values +  SF-IMAGE      "Image"              0 +  SF-STRING     "Color palette name" "Firecode" +  SF-ADJUSTMENT "Threshold value"    '(4 0 10 0.01 0.1 2 0) +  SF-ADJUSTMENT "Domain width"       '(3 0 10 0.1 1 4 0) +  SF-ADJUSTMENT "Domain height"      '(3 0 10 0.1 1 4 0) +  SF-ADJUSTMENT "X offset"           '(2.25 -20 20 0.1 1 4 0) +  SF-ADJUSTMENT "Y offset"           '(1.50 -20 20 0.1 1 4 0) +) +(script-fu-menu-register "script-fu-mandelbrot" "<Image>/Layer/") +``` + +I will go through the script to show you what it does. + +### Get ready to draw the fractal + +Since this image is all about complex numbers, I wrote a quick and dirty implementation of complex numbers in Script-Fu. I defined the complex numbers as [pairs][18] of real numbers. Then I added the few functions needed for the script. I used [Racket's documentation][19] as inspiration for function names and roles: + + +``` +(define (make-rectangular x y) (cons x y)) +(define (real-part z) (car z)) +(define (imag-part z) (cdr z)) + +(define (magnitude z) +  (let ((x (real-part z)) +        (y (imag-part z))) +    (sqrt (+ (* x x) (* y y))))) + +(define (add-c a b) +  (make-rectangular (+ (real-part a) (real-part b)) +                    (+ (imag-part a) (imag-part b)))) + +(define (mul-c a b) +  (let ((ax (real-part a)) +        (ay (imag-part a)) +        (bx (real-part b)) +        (by (imag-part b))) +    (make-rectangular (- (* ax bx) (* ay by)) +                      (+ (* ax by) (* ay bx))))) +``` + +### Draw the fractal + +The new function is called `script-fu-mandelbrot`. The best practice for writing a new function is to call it `script-fu-something` so that it can be identified in the Procedure Browser easily. 
The function requires a few parameters: an `image` to which it will add a layer with the fractal, the `palette-name` identifying the color palette to be used, the `threshold` value to stop the iteration, the `domain-width` and `domain-height` that identify the image boundaries, and the `offset-x` and `offset-y` to center the image to the desired feature. The script also needs some other parameters that it can deduce from the GIMP interface: + + +``` +(define (script-fu-mandelbrot image palette-name threshold domain-width domain-height offset-x offset-y) +  (define num-colors (car (gimp-palette-get-info palette-name))) +  (define colors (cadr (gimp-palette-get-colors palette-name))) + +  (define width (car (gimp-image-width image))) +  (define height (car (gimp-image-height image))) + +  ... +``` + +Then it creates a new layer and identifies it as the script's `drawable`. A "drawable" is the element you want to draw on: + + +``` +(define new-layer (car (gimp-layer-new image +                                       width height +                                       RGB-IMAGE +                                       "Mandelbrot layer" +                                       100 +                                       LAYER-MODE-NORMAL))) + +(gimp-image-add-layer image new-layer 0) +(define drawable new-layer) +(define bytes-per-pixel (car (gimp-drawable-bpp drawable))) +``` + +For the code determining the pixels' color, I used the [Racket][20] example on the [Rosetta Code][21] website. It is not the most optimized algorithm, but it is simple to understand. Even a non-mathematician like me can understand it. The `iterations` function determines how many steps the succession requires to pass the threshold value. To cap the iterations, I am using the number of colors in the palette. In other words, if the threshold is too high or the succession does not grow, the calculation stops at the `num-colors` value. 
The `iter->color` function transforms the number of iterations into a color using the provided palette. If the iteration number is equal to `num-colors`, it uses black because this means that the succession is probably bound and that pixel is in the Mandelbrot set: + + +``` +; Fractal drawing section. +; Code from: +(define (iterations a z i) +  (let ((z′ (add-c (mul-c z z) a))) +     (if (or (= i num-colors) (> (magnitude z′) threshold)) +        i +        (iterations a z′ (+ i 1))))) + +(define (iter->color i) +  (if (>= i num-colors) +      (list->vector '(0 0 0)) +      (list->vector (vector-ref colors i)))) +``` + +Because I have the feeling that Scheme users do not like to use loops, I implemented the function looping over the pixels as a recursive function. The `loop` function reads the starting coordinates and their upper boundaries. At each pixel, it defines some temporary variables with the `let*` function: `real-x` and `real-y` are the real coordinates of the pixel in the complex plane, according to the parameters; the `a` variable is the starting point for the succession; the `i` is the number of iterations; and finally `color` is the pixel color. Each pixel is colored with the `gimp-drawable-set-pixel` function that is an internal GIMP procedure. The peculiarity is that it is not undoable, and it does not trigger the image to refresh. Therefore, the image will not be updated during the operation. 
To play nice with the user, at the end of each row of pixels, it calls the `gimp-progress-update` function, which updates a progress bar in the user interface: + + +``` +(define z0 (make-rectangular 0 0)) + +(define (loop x end-x y end-y) +  (let* ((real-x (- (* domain-width (/ x width)) offset-x)) +         (real-y (- (* domain-height (/ y height)) offset-y)) +         (a (make-rectangular real-x real-y)) +         (i (iterations a z0 0)) +         (color (iter->color i))) +    (cond ((and (< x end-x) (< y end-y)) (gimp-drawable-set-pixel drawable x y bytes-per-pixel color) +                                         (loop (+ x 1) end-x y end-y)) +          ((and (>= x end-x) (< y end-y)) (gimp-progress-update (/ y end-y)) +                                          (loop 0 end-x (+ y 1) end-y))))) +(loop 0 width 0 height) +``` + +At the calculation's end, the function needs to inform GIMP that it modified the `drawable`, and it should refresh the interface because the image is not "automagically" updated during the script's execution: + + +``` +(gimp-drawable-update drawable 0 0 width height) +(gimp-displays-flush) +``` + +### Interact with the user interface + +To use the `script-fu-mandelbrot` function in the graphical user interface (GUI), the script needs to inform GIMP. The `script-fu-register` function informs GIMP about the parameters required by the script and provides some documentation: + + +``` +(script-fu-register +  "script-fu-mandelbrot"          ; Function name +  "Create a Mandelbrot layer"     ; Menu label +                                  ; Description +  "Draws a Mandelbrot fractal on a new layer. For the coloring it uses the palette identified by the name provided as a string. The image boundaries are defined by its domain width and height, which correspond to the image width and height respectively. Finally the image is offset in order to center the desired feature." +  "Cristiano Fontana"             ; Author +  "2021, C.Fontana. GNU GPL v. 
3" ; Copyright +  "27th Jan. 2021"                ; Creation date +  "RGB"                           ; Image type that the script works on +  ;Parameter    Displayed            Default +  ;type         label                values +  SF-IMAGE      "Image"              0 +  SF-STRING     "Color palette name" "Firecode" +  SF-ADJUSTMENT "Threshold value"    '(4 0 10 0.01 0.1 2 0) +  SF-ADJUSTMENT "Domain width"       '(3 0 10 0.1 1 4 0) +  SF-ADJUSTMENT "Domain height"      '(3 0 10 0.1 1 4 0) +  SF-ADJUSTMENT "X offset"           '(2.25 -20 20 0.1 1 4 0) +  SF-ADJUSTMENT "Y offset"           '(1.50 -20 20 0.1 1 4 0) +) +``` + +Then the script tells GIMP to put the new function in the Layer menu with the label "Create a Mandelbrot layer": + + +``` +`(script-fu-menu-register "script-fu-mandelbrot" "/Layer/")` +``` + +Having registered the function, you can visualize it in the Procedure Browser. + +![script-fu-mandelbrot function][22] + +(Cristiano Fontana, [CC BY-SA 4.0][6]) + +### Run the script + +Now that the function is ready and registered, you can draw the Mandelbrot fractal! First, create a square image and run the script from the Layers menu. + +![script running][23] + +(Cristiano Fontana, [CC BY-SA 4.0][6]) + +The default values are a good starting set to obtain the following image. The first time you run the script, create a very small image (e.g., 60x60 pixels) because this implementation is slow! It took several hours for my computer to create the following image in full 1920x1920 pixels. As I mentioned earlier, this is not the most optimized algorithm; rather, it was the easiest for me to understand. + +![Mandelbrot set drawn using GIMP's Firecode palette][10] + +Portion of a Mandelbrot fractal using GIMP's Firecode palette. (Cristiano Fontana, [CC BY-SA 4.0][6]) + +### Learn more + +This tutorial showed how to use GIMP's built-in scripting features to draw an image created with an algorithm. 
These images show GIMP's powerful set of tools that can be used for artistic applications and mathematical images. + +If you want to move forward, I suggest you look at the official documentation and its [tutorial][14]. As an exercise, try modifying this script to draw a [Julia set][24], and please share the resulting image in the comments. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/gimp-mandelbrot + +作者:[Cristiano L. Fontana][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cristianofontana +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen) +[2]: https://www.gimp.org/ +[3]: https://en.wikipedia.org/wiki/Fractal +[4]: https://en.wikipedia.org/wiki/Self-similarity +[5]: https://opensource.com/sites/default/files/uploads/mandelbrot_portion.png (Portion of a Mandelbrot fractal using GIMPs Coldfire palette) +[6]: https://creativecommons.org/licenses/by-sa/4.0/ +[7]: https://docs.gimp.org/en/gimp-concepts-script-fu.html +[8]: https://opensource.com/article/21/1/gimp-scripting +[9]: https://en.wikipedia.org/wiki/Mandelbrot_set +[10]: https://opensource.com/sites/default/files/uploads/mandelbrot.png (Mandelbrot set drawn using GIMP's Firecode palette) +[11]: https://opensource.com/sites/default/files/uploads/mandelbrot_portion2.png (Rotated and magnified portion of the Mandelbrot set using Firecode.) 
+[12]: https://en.wikipedia.org/wiki/Complex_number
+[13]: https://en.wikipedia.org/wiki/Scheme_(programming_language)
+[14]: https://docs.gimp.org/en/gimp-using-script-fu-tutorial.html
+[15]: https://opensource.com/sites/default/files/uploads/procedure_browser_0.png (GIMP Procedure Browser)
+[16]: https://en.wikipedia.org/wiki/Polish_notation
+[17]: https://xkcd.com/297/
+[18]: https://www.gnu.org/software/guile/manual/html_node/Pairs.html
+[19]: https://docs.racket-lang.org/reference/generic-numbers.html?q=make-rectangular#%28part._.Complex_.Numbers%29
+[20]: https://racket-lang.org/
+[21]: https://rosettacode.org/wiki/Mandelbrot_set#Racket
+[22]: https://opensource.com/sites/default/files/uploads/mandelbrot_documentation.png (script-fu-mandelbrot function)
+[23]: https://opensource.com/sites/default/files/uploads/script_working.png (script running)
+[24]: https://en.wikipedia.org/wiki/Julia_set
diff --git a/sources/tech/20210210 How to Add Fingerprint Login in Ubuntu and Other Linux Distributions.md b/sources/tech/20210210 How to Add Fingerprint Login in Ubuntu and Other Linux Distributions.md
new file mode 100644
index 0000000000..dcf6c9a3ae
--- /dev/null
+++ b/sources/tech/20210210 How to Add Fingerprint Login in Ubuntu and Other Linux Distributions.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: (scvoet)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Add Fingerprint Login in Ubuntu and Other Linux Distributions)
+[#]: via: (https://itsfoss.com/fingerprint-login-ubuntu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+How to Add Fingerprint Login in Ubuntu and Other Linux Distributions
+======
+
+Many high-end laptops come with fingerprint readers these days. Windows and macOS have been supporting fingerprint login for some time. In desktop Linux, support for fingerprint login used to be more of a geeky tweak, but [GNOME][1] and [KDE][2] have started supporting it through system settings.
+
+This means that on newer Linux distribution versions, you can easily use fingerprint login. I am going to enable fingerprint login in Ubuntu here, but you may use the steps on other distributions running GNOME 3.38.
+
+### Prerequisite
+
+This is obvious, of course. Your computer must have a fingerprint reader.
+
+This method works for any Linux distribution running GNOME version 3.38 or higher. If you are not certain, you may [check which desktop environment version you are using][3].
+
+KDE 5.21 also has a fingerprint manager. The screenshots will look different, of course.
+
+### Adding fingerprint login in Ubuntu and other Linux distributions
+
+Go to **Settings** and then click on **Users** from the left sidebar. You should see all the user accounts on your system here. You’ll see several options, including **Fingerprint Login**.
+
+Click on the Fingerprint Login option here.
+
+![Enable fingerprint login in Ubuntu][4]
+
+It will immediately ask you to scan a new fingerprint. When you click the + sign to add a fingerprint, it presents a few predefined options so that you can easily identify which finger or thumb it is.
+
+You may, of course, scan your left thumb after clicking "right index finger", though I don’t see a good reason why you would want to do that.
+
+![Adding fingerprint][5]
+
+While adding the fingerprint, rotate your finger or thumb as directed.
+
+![Rotate your finger][6]
+
+Once the system registers the entire finger, it will give you a green signal that the fingerprint has been added.
+
+![Fingerprint successfully added][7]
+
+If you want to test it right away, lock the screen by pressing the Super+L keyboard shortcut in Ubuntu and then use your fingerprint to log in.
+
+![Login With Fingerprint in Ubuntu][8]
+
+#### Experience with fingerprint login on Ubuntu
+
+Fingerprint login is what its name suggests: login using your fingerprint. That’s it. You cannot use your finger when the system asks for authentication for programs that need sudo access.
It’s not a replacement for your password.
+
+One more thing: the [keyring in Ubuntu][9] also remains locked when you log in with your fingerprint.
+
+Another annoyance comes from GNOME’s GDM login screen. When you log in, you have to click on your account first to get to the password screen. This is where you can use your finger. It would have been nicer not to be bothered with clicking the user account first.
+
+I also noticed that fingerprint reading is not as smooth and quick as it is in Windows. It works, though.
+
+If you are somewhat disappointed with the fingerprint login on Linux, you may disable it. Let me show you the steps in the next section.
+
+### Disable fingerprint login
+
+Disabling fingerprint login is pretty much the same as enabling it in the first place.
+
+Go to **Settings→Users** and then click on the Fingerprint Login option. It will show a screen with options to add more fingerprints or delete the existing ones. You need to delete the existing fingerprints.
+
+![Disable Fingerprint Login][10]
+
+Fingerprint login does have some benefits, especially for lazy people like me. I don’t have to type my password every time I unlock the screen, and I am happy with the limited usage.
+
+Enabling sudo with fingerprint should not be entirely impossible with [PAM][11]. I remember that when I [set up face unlock in Ubuntu][12], it could be used with sudo as well. Let’s see if future versions add this feature.
+
+Do you have a laptop with a fingerprint reader? Do you use it often, or is it just one of the things you don’t care about?
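A closing technical note on the [PAM][11] idea mentioned above: on Debian- and Ubuntu-based systems, the fingerprint stack ships a PAM module (via the libpam-fprintd package), and the supported way to enable it for things like sudo is the interactive `sudo pam-auth-update` command. The manual equivalent is a single line in the relevant PAM service file; treat this as a hedged sketch of the usual approach, not a tested recipe for your distribution:

```
# Assumes libpam-fprintd is installed; file: /etc/pam.d/sudo
auth    sufficient    pam_fprintd.so
```

Because a mistake in PAM configuration can lock you out of sudo, prefer the `pam-auth-update` route and keep a root shell open while testing.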
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/fingerprint-login-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://www.gnome.org/ +[2]: https://kde.org/ +[3]: https://itsfoss.com/find-desktop-environment/ +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/enable-fingerprint-ubuntu.png?resize=800%2C607&ssl=1 +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/adding-fingerprint-login-ubuntu.png?resize=800%2C496&ssl=1 +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/adding-fingerprint-ubuntu-linux.png?resize=800%2C603&ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/fingerprint-added-ubuntu.png?resize=797%2C510&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/login-with-fingerprint-ubuntu.jpg?resize=800%2C320&ssl=1 +[9]: https://itsfoss.com/ubuntu-keyring/ +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/disable-fingerprint-login.png?resize=798%2C524&ssl=1 +[11]: https://tldp.org/HOWTO/User-Authentication-HOWTO/x115.html +[12]: https://itsfoss.com/face-unlock-ubuntu/ diff --git a/sources/tech/20210210 Manage your budget on Linux with this open source finance tool.md b/sources/tech/20210210 Manage your budget on Linux with this open source finance tool.md new file mode 100644 index 0000000000..d671e6cbf2 --- /dev/null +++ b/sources/tech/20210210 Manage your budget on Linux with this open source finance tool.md @@ -0,0 +1,86 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Manage your budget on Linux with this open source finance tool) +[#]: via: 
(https://opensource.com/article/21/2/linux-skrooge) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Manage your budget on Linux with this open source finance tool +====== +Make managing your finances easier with Skrooge, an open source +budgeting tool. +![2 cents penny money currency][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. This article is about personal financial management. + +Personal finances can be difficult to manage. It can be frustrating and even scary when you don't have enough money to get by without financial assistance, and it can be surprisingly overwhelming when you do have the money you need but no clear notion of where it all goes each month. To make matters worse, we're often told to "make a budget" as if declaring the amount of money you can spend each month will somehow manifest the money you need. The bottom line is that making a budget is hard, and not meeting your financial goals is discouraging. But it's still important, and Linux has several tools that can help make the task manageable. + +### Money management + +As with anything else in life, we all have our own ways of keeping track of our money. I used to take a simple and direct approach: My paycheck was deposited into an account, and I'd withdraw some percentage in cash. Once the cash was gone from my wallet, I had to wait until the next payday to spend anything. It only took one day of missing out on lunch to learn that I had to take my goals seriously, and I adjusted my spending behavior accordingly. For the simple lifestyle I had at the time, it was an effective means of keeping myself honest with my income, but it didn't translate well to online business transactions, long-term utility contracts, investments, and so on. + +As I continue to refine the way I track my finances, I've learned that personal accounting is always an evolving process. 
We each have unique financial circumstances, which inform what kind of solution we can or should use to track our income and debt. If you're out of work, then your budgeting goal is likely to spend as little as possible. If you're working but paying off a student loan, then your goal probably favors sending money to the bank. And if you're working but planning for retirement, then you're probably trying to save as much as you can. + +The thing to remember about a budget is that it's meant to compare your financial reality with your financial _goals_. You can't avoid some expenses, but after those, you get to set your own priorities. If you don't hit your goals, you can adjust your own behavior or rewrite your goals so that they better reflect reality. Adapting your financial plan doesn't mean you've failed. It just means that your initial projection wasn't accurate. During hard times, you may not be able to hit any budget goals, but if you keep up with your budget, you'll learn a lot about what it takes financially to maintain your current lifestyle (whatever it may be). Over time, you can learn to adjust settings you may never have realized were available to you. For instance, people are moving to rural towns for the lower cost of living now that remote work is a widely accepted option. It's pretty stunning to see how such a lifestyle shift can alter your budget reports. + +The point is that budgeting is an often undervalued activity, and in no small part because it's daunting. It's important to realize that you can budget, no matter your level of expertise or interest in finances. Whether you [just use a LibreOffice spreadsheet][2], or try a dedicated financial application, you can set goals, track your own behavior, and learn a lot of valuable lessons that could eventually pay dividends. 
+ +### Open source accounting + +There are several dedicated [personal finance applications for Linux][3], including [HomeBank][4], [Money Manager EX][5], [GNUCash][6], [KMyMoney][7], and [Skrooge][8]. All of these applications are essentially ledgers, a place you can retreat to at the end of each month (or whenever you look at your accounts), import data from your bank, and review how your expenditures align with whatever budget you've set for yourself. + +![Skrooge interface with financial data displayed][9] + +Skrooge + +I use Skrooge as my personal budget tracker. It's an easy application to set up, even with multiple bank accounts. Skrooge, as with most open source finance apps, can import multiple file formats, so my workflow goes something like this: + + 1. Log in to my banks. + 2. Export the month's bank statement as QIF files. + 3. Open Skrooge. + 4. Import the QIF files. Each gets assigned to their appropriate accounts automatically. + 5. Review my expenditures compared to the budget goals I've set for myself. If I've gone over, then I dock next month's goals (so that I'll ideally spend less to make up the difference). If I've come in under my goal, then I move the excess to December's budget (so I'll have more to spend at the end of the year). + + + +I only track a subset of the household budget in Skrooge. Skrooge makes that process easy through a dynamic database that allows me to categorize multiple transactions at once with custom tags. This makes it easy for me to extract my personal expenditures from general household and utility expenses, and I can leverage these categories when reviewing the autogenerated reports Skrooge provides. + +![Skrooge budget pie chart][10] + +Skrooge budget pie chart + +Most importantly, the popular Linux financial apps allow me to manage my budget the way that works best for me. 
For instance, my partner prefers to use a LibreOffice spreadsheet, but with very little effort, I can extract a CSV file from the household budget, import it into Skrooge, and use an updated set of data. There's no lock-in, no incompatibility. The system is flexible and agile, allowing us to adapt our budget and our method of tracking expenses as we learn more about effective budgeting and about what life has in store. + +### Open choice + +Money markets worldwide differ, and the way we each interact with them also defines what tools we can use. Ultimately, your choice of what to use for your finances is a decision you must make based on your own requirements. And one thing open source does particularly well is provide its users the freedom of choice. + +When setting my own financial goals, I appreciate that I can use whatever application fits in best with my style of personal computing. I get to retain control of how I process the data in my life, even when it's data I don't necessarily enjoy having to process. Linux and its amazing set of applications make it just a little less of a chore. + +Try some financial apps on Linux and see if you can inspire yourself to set some goals and save money! 
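As a sketch of that spreadsheet-to-Skrooge hop, here is a hypothetical Python converter. The CSV column names (`date`, `payee`, `amount`) and the minimal QIF dialect are assumptions about one particular spreadsheet layout, not a statement about Skrooge's importer:

```python
import csv
import io

def csv_to_qif(csv_text: str) -> str:
    """Turn rows with date,payee,amount columns into a minimal
    QIF bank statement (one record per row, '^' ends each record)."""
    lines = ["!Type:Bank"]
    for row in csv.DictReader(io.StringIO(csv_text)):
        lines.append("D" + row["date"])    # transaction date
        lines.append("T" + row["amount"])  # transaction amount
        lines.append("P" + row["payee"])   # payee
        lines.append("^")                  # end-of-record marker
    return "\n".join(lines) + "\n"

example = "date,payee,amount\n2021-02-01,Grocery store,-42.50\n"
print(csv_to_qif(example))
```

The resulting `.qif` file can then go through the same import step as a downloaded bank statement.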
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-skrooge + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Medical%20Costs%20Transparency_1.jpg?itok=CkZ_J88m (2 cents penny money currency) +[2]: https://opensource.com/article/20/3/libreoffice-templates +[3]: https://opensource.com/life/17/10/personal-finance-tools-linux +[4]: http://homebank.free.fr/en/index.php +[5]: https://www.moneymanagerex.org/download +[6]: https://opensource.com/article/20/2/gnucash +[7]: https://kmymoney.org/download.html +[8]: https://apps.kde.org/en/skrooge +[9]: https://opensource.com/sites/default/files/skrooge.jpg +[10]: https://opensource.com/sites/default/files/skrooge-pie_0.jpg diff --git a/sources/tech/20210211 31 open source text editors you need to try.md b/sources/tech/20210211 31 open source text editors you need to try.md new file mode 100644 index 0000000000..d9ca620bf4 --- /dev/null +++ b/sources/tech/20210211 31 open source text editors you need to try.md @@ -0,0 +1,182 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (31 open source text editors you need to try) +[#]: via: (https://opensource.com/article/21/2/open-source-text-editors) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +31 open source text editors you need to try +====== +Looking for a new text editor? Here are 31 options to consider. +![open source button on keyboard][1] + +Computers are text-based, so the more things you do with them, the more you find yourself needing a text-editing application. 
And the more time you spend in a text editor, the more likely you are to demand more from whatever you use. + +If you're looking for a good text editor, you'll find that Linux has plenty to offer. Whether you want to work in the terminal, on your desktop, or in the cloud, you can literally try a different editor every day for a month (or one a month for almost three years) in your relentless search for the perfect typing experience. + +### Vim-like editors + +![][2] + + * [Vi][3] ships with every Linux, BSD, Solaris, and macOS installation. It's the quintessential Unix text editor, with its unique combination of editing modes and super-efficient single-key shortcuts. The original Vi editor was an application written by Bill Joy, creator of the C shell. Modern incarnations of Vi, most notably Vim, have added many features, including multiple levels of undo, better navigation while in insert mode, line folding, syntax highlighting, plugin support, and much more. It takes practice (it even has its own tutor application, vimtutor.) + * [Kakoune][4] is a Vim-inspired application with a familiar, minimalistic interface, short keyboard shortcuts, and separate editing and insert modes. It looks and feels a lot like Vi at first, but with its own unique style, both in design and function. As a special bonus, it features an implementation of the Clippy interface. + + + +### emacs editors + +![][5] + + * The original free emacs, and one of the first official applications of the GNU project that started the Free Software movement, [GNU Emacs][6] is a wildly popular text editor. It's great for sysadmins, developers, and everyday users alike, with loads of features and seemingly endless extensions. Once you start using Emacs, you might find it difficult to think of a reason to close it because it's just that versatile! + * If you like Emacs but find GNU Emacs too bloated, then you might like [Jove][7]. Jove is a terminal-based emacs editor. 
It's easy to use, but if you're new to emacsen (the plural of emacs), Jove is also easy to learn, thanks to the teachjove command. + * Another lightweight emacs editor, [Jed][8] is a simple incarnation of a macro-based workflow. One thing that sets it apart from other editors is its use of [S-Lang][9], a C-like scripting language providing extensibility options to developers more comfortable with C than with Lisp. + + + +### Interactive editors + +![][10] + + * [GNU nano][11] takes a bold stance on terminal-based text editing: it provides a menu. Yes, this humble editor takes a cue from GUI editors by telling the user exactly which key they need to press to perform a specific function. This is a refreshing take on user experience, so it's no wonder that it's nano, not Vi, that's set as the default editor for "user-friendly" distributions. + * [JOE][12] is based on an old text-editing application called WordStar. If you're not familiar with Wordstar, JOE can also mimic Emacs or GNU nano. By default, it's a good compromise between something relatively mysterious like Emacs or Vi and the always-on verbosity of GNU Nano (for example, it tells you how to activate an onscreen help display, but it's not on by default). + * The excellent [e3][13] application is a tiny text editor with five built-in keyboard shortcut schemes to emulate Emacs, Vi, nano, NEdit, and WordStar. In other words, no matter what terminal-based editor you are used to, you're likely to feel right at home with e3. + + + +### ed and more + + * The [ed][14] line editor is part of the [POSIX][15] and Open Group's standard definition of a Unix-based operating system. You can count on it being installed on nearly every Linux or Unix system you'll ever encounter. It's tiny, terse, and tip-top. + * Building upon ed, the [Sed][16] stream editor is popular both for its functionality and its syntax. 
Most Linux users learn at least one sed command when searching for the easiest and fastest way to update a line in a config file, but it's worth taking a closer look. Sed is a powerful command with lots of useful subcommands. Get to know it better, and you may find yourself opening text editor applications a lot less frequently.
+ * You don't always need a text editor to edit text. The [heredoc][17] (or Here Doc) system, available in any POSIX terminal, allows you to type text directly into your open terminal and then pipes what you type into a text file. It's not the most robust editing experience, but it is versatile and always available.
+
+
+
+### Minimalist editors
+
+![][18]
+
+If your idea of a good text editor is a word processor except without all the processing, you're probably looking for one of these classics. These editors let you write and edit text with minimal interference and minimal assistance. What features they do offer are often centered around markup, Markdown, or code. Some have names that follow a certain pattern:
+
+ * [Gedit][19] from the GNOME team
+ * [medit][20] for a classic GNOME feel
+ * [Xedit][21] uses only the most basic X11 libraries
+ * [jEdit][22] for Java aficionados
+
+
+
+A similar experience is available for KDE users:
+
+ * [Kate][23] is an unassuming editor with all the features you need.
+ * [KWrite][24] hides a ton of useful features in a deceptively simple, easy-to-use interface.
+
+
+
+And there are a few for other platforms:
+
+ * [Notepad++][25] is a popular Windows application, while Notepadqq takes a similar approach for Linux.
+ * [Pe][26] is for Haiku OS (the reincarnation of that quirky child of the '90s, BeOS).
+ * [FeatherPad][27] is a basic editor for Linux but with some support for macOS and Haiku. If you're a Qt hacker looking to port code, take a look!
+
+
+
+### IDEs
+
+![][28]
+
+There's quite a crossover between text editors and integrated development environments (IDEs).
The latter really is just the former with lots of code-specific features added on. If you use an IDE regularly, you might find an XML or Markdown editor lurking in your extension manager: + + * [NetBeans][29] is a handy text editor for Java users. + * [Eclipse][30] offers a robust editing suite with lots of extensions to give you the tools you need. + + + +### Cloud-based editors + +![][31] + +Working in the cloud? You can write there too, you know. + + * [Etherpad][32] is a text editor app that runs on the web. There are free and independent instances for you to use, or you can set up your own. + * [Nextcloud][33] has a thriving app scene and includes both a built-in text editor and a third-party Markdown editor with live preview. + + + +### Newer editors + +![][34] + +Everybody has an idea about what makes a text editor perfect. For that reason, new editors are released each year. Some reimplement classic old ideas in a new and exciting way, some have unique takes on the user experience, and some focus on specific needs. + + * [Atom][35] is an all-purpose modern text editor from GitHub featuring lots of extensions and Git integration. + * [Brackets][36] is an editor from Adobe for web developers. + * [Focuswriter][37] seeks to help you focus on writing with helpful features like a distraction-free fullscreen mode, optional typewriter sound effects, and beautiful configuration options. + * [Howl][38] is a progressive, dynamic editor based on Lua and Moonscript. + * [Norka][39] and [KJots][40] mimic a notebook with each document representing a "page" in your "binder." You can take individual pages out of your notebook through export functions. + + + +### DIY editor + +![][41] + +As the saying does _NOT_ go: Why use somebody else's application when you can write your own? Linux has over 30 text editors available, so probably the last thing it really needs is another one. Then again, part of the fun of open source is the ability to experiment. 
+ +If you're looking for an excuse to learn how to program, making your own text editor is a great way to get started. You can achieve the basics in about 100 lines of code, and the more you use it, the more you'll be inspired to learn more so you can make improvements. Ready to get started? Go and [create your own text editor][42]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/open-source-text-editors + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx (open source button on keyboard) +[2]: https://opensource.com/sites/default/files/kakoune-screenshot.png +[3]: https://opensource.com/article/20/12/vi-text-editor +[4]: https://opensource.com/article/20/12/kakoune +[5]: https://opensource.com/sites/default/files/jed.png +[6]: https://opensource.com/article/20/12/emacs +[7]: https://opensource.com/article/20/12/jove-emacs +[8]: https://opensource.com/article/20/12/jed +[9]: https://www.jedsoft.org/slang +[10]: https://opensource.com/sites/default/files/uploads/nano-31_days-nano-opensource.png +[11]: https://opensource.com/article/20/12/gnu-nano +[12]: https://opensource.com/article/20/12/31-days-text-editors-joe +[13]: https://opensource.com/article/20/12/e3-linux +[14]: https://opensource.com/article/20/12/gnu-ed +[15]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains +[16]: https://opensource.com/article/20/12/sed +[17]: https://opensource.com/article/20/12/heredoc +[18]: https://opensource.com/sites/default/files/uploads/gedit-31_days_gedit-opensource.jpg +[19]: 
https://opensource.com/article/20/12/gedit +[20]: https://opensource.com/article/20/12/medit +[21]: https://opensource.com/article/20/12/xedit +[22]: https://opensource.com/article/20/12/jedit +[23]: https://opensource.com/article/20/12/kate-text-editor +[24]: https://opensource.com/article/20/12/kwrite-kde-plasma +[25]: https://opensource.com/article/20/12/notepad-text-editor +[26]: https://opensource.com/article/20/12/31-days-text-editors-pe +[27]: https://opensource.com/article/20/12/featherpad +[28]: https://opensource.com/sites/default/files/uploads/eclipse-31_days-eclipse-opensource.png +[29]: https://opensource.com/article/20/12/netbeans +[30]: https://opensource.com/article/20/12/eclipse +[31]: https://opensource.com/sites/default/files/uploads/etherpad_0.jpg +[32]: https://opensource.com/article/20/12/etherpad +[33]: https://opensource.com/article/20/12/31-days-text-editors-nextcloud-markdown-editor +[34]: https://opensource.com/sites/default/files/uploads/atom-31_days-atom-opensource.png +[35]: https://opensource.com/article/20/12/atom +[36]: https://opensource.com/article/20/12/brackets +[37]: https://opensource.com/article/20/12/focuswriter +[38]: https://opensource.com/article/20/12/howl +[39]: https://opensource.com/article/20/12/norka +[40]: https://opensource.com/article/20/12/kjots +[41]: https://opensource.com/sites/default/files/uploads/this-time-its-personal-31_days_yourself-opensource.png +[42]: https://opensource.com/article/20/12/31-days-text-editors-one-you-write-yourself diff --git a/sources/tech/20210211 Getting to Know the Cryptocurrency Open Patent Alliance (COPA).md b/sources/tech/20210211 Getting to Know the Cryptocurrency Open Patent Alliance (COPA).md new file mode 100644 index 0000000000..86df7cf688 --- /dev/null +++ b/sources/tech/20210211 Getting to Know the Cryptocurrency Open Patent Alliance (COPA).md @@ -0,0 +1,92 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: 
subject: (Getting to Know the Cryptocurrency Open Patent Alliance (COPA)) +[#]: via: (https://www.linux.com/news/getting-to-know-the-cryptocurrency-open-patent-alliance-copa/) +[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/) + +Getting to Know the Cryptocurrency Open Patent Alliance (COPA) +====== + +### ![][1] + +### Why is there a need for a patent protection alliance for cryptocurrency technologies? + +With the recent surge in popularity of cryptocurrencies and related technologies, Square felt an industry group was needed to protect against litigation and other threats against core cryptocurrency technology and ensure the ecosystem remains vibrant and open for developers and companies. + +The same way [Open Invention Network][2] (OIN) and [LOT Network][3] add a layer of patent protection to inter-company collaboration on open source technologies, COPA aims to protect open source cryptocurrency technology. Feeling safe from the threat of lawsuits is a precursor to good collaboration. + + * Locking up foundational cryptocurrency technologies in patents stifles innovation and adoption of cryptocurrency in novel and useful applications. + * The offensive use of patents threatens the growth and free availability of cryptocurrency technologies. Many smaller companies and developers do not own patents and cannot deter or defend threats adequately. + + + +By joining COPA, a member can feel secure it can innovate in the cryptocurrency space without fear of litigation between other members.  + +### What is Square’s involvement in COPA? + +Square’s core purpose is economic empowerment, and they see cryptocurrency as a core technological pillar. Square helped start and fund COPA with the hope that by encouraging innovation in the cryptocurrency space, more useful ideas and products would get created. COPA management has now diversified to an independent board of technology and regulatory experts, and Square maintains a minority presence. 
+ +### Do we need cryptocurrency patents to join COPA?  + +No! Anyone can join and benefit from being a member of COPA, regardless of whether they have patents or not. There is no barrier to entry – members can be individuals, start-ups, small companies, or large corporations. Here is how COPA works: + + * First, COPA members pledge never to use their crypto-technology patents against anyone, except for defensive reasons, effectively making their patents freely available for all. + * Second, members pool all of their crypto-technology patents together to form a shared patent library, which provides a forum to allow members to reasonably negotiate lending patents to one another for defensive purposes. + * The patent pledge and the shared patent library work in tandem to help drive down the incidence and threat of patent litigation, benefiting the cryptocurrency community as a whole.  + * Additionally, COPA monitors core technologies and entities that support cryptocurrency and does its best to research and help address litigation threats against community members. + + + +### What types of companies should join COPA? + + * Financial services companies and technology companies working in regulated industries that use distributed ledger or cryptocurrency technology + * Companies or individuals who are interested in collaborating on developing cryptocurrency products or who hold substantial investments in cryptocurrency + + + +### What companies have joined COPA so far? + + * Square, Inc. + * Blockchain Commons + * Carnes Validadas + * Request Network + * Foundation Devices + * ARK + * SatoshiLabs + * Transparent Systems + * Horizontal Systems + * VerifyChain + * Blockstack + * Protocol Labs + * Cloudeya Ltd. 
+ * Mercury Cash + * Bithyve + * Coinbase + * Blockstream + * Stakenet + + + +### How to join + +Please express interest and get access to our membership agreement here: + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/news/getting-to-know-the-cryptocurrency-open-patent-alliance-copa/ + +作者:[Linux.com Editorial Staff][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/author/linuxdotcom/ +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/wp-content/uploads/2021/02/copa-linuxdotcom.jpg +[2]: https://openinventionnetwork.com/ +[3]: https://lotnet.com/ diff --git a/sources/tech/20210211 Unikraft- Pushing Unikernels into the Mainstream.md b/sources/tech/20210211 Unikraft- Pushing Unikernels into the Mainstream.md new file mode 100644 index 0000000000..e9eb57d18e --- /dev/null +++ b/sources/tech/20210211 Unikraft- Pushing Unikernels into the Mainstream.md @@ -0,0 +1,115 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Unikraft: Pushing Unikernels into the Mainstream) +[#]: via: (https://www.linux.com/featured/unikraft-pushing-unikernels-into-the-mainstream/) +[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/) + +Unikraft: Pushing Unikernels into the Mainstream +====== + +![][1] + +Unikernels have been around for many years and are famous for providing excellent performance in boot times, throughput, and memory consumption, to name a few metrics [1]. Despite their apparent potential, unikernels have not yet seen a broad level of deployment due to three main drawbacks: + + * **Hard to build**: Putting a unikernel image together typically requires expert, manual work that needs redoing for each application. 
Also, many unikernel projects are not, and don’t aim to be, POSIX compliant, and so significant porting effort is required to have standard applications and frameworks run on them. + * **Hard to extract high performance**: Unikernel projects don’t typically expose high-performance APIs; extracting high performance often requires expert knowledge and modifications to the code. + * **Little or no tool ecosystem**: Assuming you have an image to run, deploying it and managing it is often a manual operation. There is little integration with major DevOps or orchestration frameworks. + + + +While not all unikernel projects suffer from all of these issues (e.g., some provide some level of POSIX compliance but the performance is lacking, others target a single programming language and so are relatively easy to build but their applicability is limited), we argue that no single project has been able to successfully address all of them, hindering any significant level of deployment. For the past three years, Unikraft ([www.unikraft.org][2]), a Linux Foundation project under the Xen Project’s auspices, has had the explicit aim to change this state of affairs to bring unikernels into the mainstream.  + +If you’re interested, read on, and please be sure to check out: + + * The [replay of our two FOSDEM talks][3] [2,3] and the [virtual stand ][4] + * Our website (unikraft.org) and source code (). + * Our upcoming source code release, 0.5 Tethys (more information at ) + * [unikraft.io][5], for industrial partners interested in Unikraft PoCs (or [info@unikraft.io][6]) + + + +### High Performance + +To provide developers with the ability to obtain high performance easily, Unikraft exposes a set of composable, performance-oriented APIs. The figure below shows Unikraft’s architecture: all components are libraries with their own **Makefile** and **Kconfig** configuration files, and so can be added to the unikernel build independently of each other. + +![][7] + +**Figure 1. 
Unikraft’s fully modular architecture showing high-performance APIs** + +APIs are also micro-libraries that can be easily enabled or disabled via a Kconfig menu; Unikraft unikernels can choose which APIs to compose to best cater to an application’s needs. For example, an RPC-style application might turn off the **uksched** API (➃ in the figure) to implement a high-performance, run-to-completion event loop; similarly, an application developer can easily select an appropriate memory allocator (➅) to obtain maximum performance, or use multiple different ones within the same unikernel (e.g., a simple, fast memory allocator for the boot code, and a standard one for the application itself).  + +![][8] | ![][9] +---|--- +**Figure 2. Unikraft memory consumption vs. other unikernel projects and Linux** | **Figure 3. Unikraft NGINX throughput versus other unikernels, Docker, and Linux/KVM.** + +  + +These APIs, coupled with the fact that all of Unikraft’s components are fully modular, result in high performance. Figure 2, for instance, shows Unikraft having lower memory consumption than other unikernel projects (HermiTux, Rump, OSv) and Linux (Alpine); and Figure 3 shows that Unikraft outperforms them in terms of NGINX requests per second, reaching 90K on a single CPU core. + +Further, we are working on (1) a performance profiler tool to quickly identify potential bottlenecks in Unikraft images and (2) a performance test tool that can automatically run a large set of performance experiments, varying different configuration options to figure out optimal configurations. + +### Ease of Use, No Porting Required + +Forcing users to port applications to a unikernel to obtain high performance is a showstopper. Arguably, a system is only as good as the applications (or programming languages, frameworks, etc.) it can run.
Unikraft aims to achieve good POSIX compatibility; one way of doing so is supporting a **libc** (e.g., **musl**), along with a large set of Linux syscalls.  + +![][10] + +**Figure 4. Only a certain percentage of syscalls are needed to support a wide range of applications** + +While there are over 300 of these, many of them are not needed to run a large set of applications, as shown in Figure 4 (taken from [5]). Supporting around 145 syscalls, for instance, is enough to cover 50% of all libraries and applications in an Ubuntu distribution (many of which are irrelevant to unikernels, such as desktop applications). As of this writing, Unikraft supports over 130 syscalls and a number of mainstream applications (e.g., SQLite, Nginx, Redis), programming languages and runtime environments such as C/C++, Go, Python, Ruby, WebAssembly, and Lua, not to mention several different hypervisors (KVM, Xen, and Solo5) and ARM64 bare-metal support. + +### Ecosystem and DevOps + +Another apparent downside of unikernel projects is the almost total lack of integration with existing, major DevOps and orchestration frameworks. Working towards the goal of integration, in the past year we created the **kraft** tool, which lets users simply choose an application and a target platform (e.g., KVM on x86_64) and takes care of building and running the image.
+ + + +In all, we believe Unikraft is getting closer to bridging the gap between unikernel promise and actual deployment. We are very excited about this year’s upcoming features and developments, so please feel free to drop us a line if you have any comments, questions, or suggestions at [**info@unikraft.io**][6]. + +_**About the author: [Dr. Felipe Huici][11] is Chief Researcher, Systems and Machine Learning Group, NEC Laboratories Europe GmbH**_ + +### References + +[1] Unikernels Rethinking Cloud Infrastructure. + +[2] Is the Time Ripe for Unikernels to Become Mainstream with Unikraft? FOSDEM 2021 Microkernel developer room. + +[3] Severely Debloating Cloud Images with Unikraft. FOSDEM 2021 Virtualization and IaaS developer room. + +[4] Welcome to the Unikraft Stand! + +[5] A study of modern Linux API usage and compatibility: what to support when you’re supporting. Eurosys 2016. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/featured/unikraft-pushing-unikernels-into-the-mainstream/ + +作者:[Linux.com Editorial Staff][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/author/linuxdotcom/ +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/wp-content/uploads/2021/02/unikraft.svg +[2]: http://www.unikraft.org +[3]: https://video.fosdem.org/2021/stands/unikraft/ +[4]: https://stands.fosdem.org/stands/unikraft/ +[5]: http://www.unikraft.io +[6]: mailto:info@unikraft.io +[7]: https://www.linux.com/wp-content/uploads/2021/02/unikraft1.png +[8]: https://www.linux.com/wp-content/uploads/2021/02/unikraft2.png +[9]: https://www.linux.com/wp-content/uploads/2021/02/unikraft3.png +[10]: https://www.linux.com/wp-content/uploads/2021/02/unikraft4.png +[11]: https://www.linkedin.com/in/felipe-huici-47a559127/ diff --git 
a/sources/tech/20210211 What-s new with ownCloud in 2021.md b/sources/tech/20210211 What-s new with ownCloud in 2021.md new file mode 100644 index 0000000000..aa6507c932 --- /dev/null +++ b/sources/tech/20210211 What-s new with ownCloud in 2021.md @@ -0,0 +1,180 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What's new with ownCloud in 2021?) +[#]: via: (https://opensource.com/article/21/2/owncloud) +[#]: author: (Martin Loschwitz https://opensource.com/users/martinloschwitzorg) + +What's new with ownCloud in 2021? +====== +The open source file sharing and syncing platform gets a total overhaul +based on Go and Vue.js and eliminates the need for a database. +![clouds in the sky with blue pattern][1] + +The newest version of ownCloud, [ownCloud Infinite Scale][2] (OCIS), is a complete rewrite of the venerable open source enterprise file sharing and syncing software stack. It features a new backend written in Go, a frontend in Vue.js, and many changes, including eliminating the need for a database. This scalable, modular approach replaces ownCloud's PHP, database, and [POSIX][3] filesystem and promises up to 10 times better performance. + +Traditionally, ownCloud was centered around the idea of having a POSIX-compatible filesystem to store data uploaded by users—different versions of the data and trash files, as well as configuration files and logs. By default, an ownCloud user's files were found in a path on their ownCloud instance, like `/var/www` or `/srv/www` (a web server's document root). + +Every admin who has maintained an ownCloud instance knows that they grow massive; today, they usually start out much larger than ownCloud was originally designed for. One of the largest ownCloud instances is Australia's Academic and Research Network (AARNet), a company that stores more than 100,000 users' data. 
+ +### Let's 'Go' for microservices + +ownCloud's developers determined that rewriting the codebase with [Go][4] could bring many advantages over PHP. Even when computer programs appear to be one monolithic piece of code, most are split into different components internally. The web servers that are usually deployed with ownCloud (such as Apache) are an excellent example. Internally, one function handles TCP/IP connections, another function might handle SSL, and yet another piece of code executes the requested PHP files and delivers the results to the end user. All of those events must happen in a certain order. + +ownCloud's developers wanted the new version to serve multiple steps concurrently so that events can happen simultaneously. Software capable of handling requests in parallel doesn't have to wait around for one process to finish before the next can begin, so they can deliver results faster. Concurrency is one of the reasons Go is so popular in containerized micro-architecture applications. + +With OCIS, ownCloud is adapting to an architecture centered around the principle of microservices. OCIS is split into three tiers: storage, core, and frontend. I'll look at each of these tiers, but the only thing that really matters to people is overall performance. Users don't think about software in tiers; they just want the software to work well and work quickly. + +### Tier 1: Storage + +The storage available to the system is ownCloud's lowest tier. Performance also brings scalability; large ownCloud instances must be able to cope with the load of thousands of clients and add additional disk space if the existing storage fills up. + +Like so many other concepts today, object stores and scalable storage weren't available when ownCloud was designed. Administrators now are used to having more choices, so ownCloud permits outsourcing physical storage device handling to an external solution. 
While S3-based object storage, Samba-based storage, and POSIX-compatible filesystem options are still supported in OCIS, the preferred way to deploy it is with [EOS][5] storage, the open source storage system developed at CERN. + +#### EOS to the rescue + +EOS is optimized for very low latency when accessing files. It provides disk-based storage to clients through the [XRootD][6] framework but also permits other protocols to access files. ownCloud uses EOS's HTTP protocol extension to talk to the storage solution (over HTTPS). EOS also allows almost "infinite" scalability. For instance, [CERN's EOS setup][7] includes more than 200PB of disk storage and continues to grow. + +By choosing EOS, ownCloud eliminated several shortcomings of traditional storage solutions: + + * EOS doesn't have a typical single point of failure. + * All relevant services are run redundantly, including the ability to scale out and add instances of all existing services. + * EOS promises to never run out of actual disk space and comes with built-in redundancy for stored data. + + + +For large environments, ownCloud expects the administrator to deploy an EOS instance with OCIS. In exchange for the burden of maintaining a separate storage system, the admin gets the benefit of not having to worry about the OCIS instance's scalability and performance. + +#### What about small setups? + +This hints at ownCloud's assumed use case for OCIS: It's no longer a small-business all-in-one server or a small home server. ownCloud's strategy with OCIS targets large data centers. For small or home office setups, EOS is likely to be excessive and overly demanding for a single admin to manage. OCIS serves small setups through the [Reva][8] framework, which enables support for S3, Samba, and even POSIX-compatible filesystems. This is possible because EOS is not hardcoded into OCIS. Reva can't provide the same feature set as EOS, but it accomplishes most of the needs of end users and small installations.
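The pluggable-backend idea that Reva enables can be pictured as a small Go interface. To be clear, this is an illustrative sketch under assumed names — `Storage`, `memStorage`, and `roundTrip` are hypothetical, not actual OCIS or Reva APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// Storage is a hypothetical abstraction over a storage backend.
// Reva plays a comparable role in OCIS: because core services talk
// only to an interface, EOS, S3, Samba, or a POSIX filesystem can
// be swapped in without touching the core.
type Storage interface {
	Put(path string, data []byte) error
	Get(path string) ([]byte, error)
}

// memStorage stands in for any concrete backend.
type memStorage struct {
	files map[string][]byte
}

func newMemStorage() *memStorage {
	return &memStorage{files: make(map[string][]byte)}
}

func (m *memStorage) Put(path string, data []byte) error {
	m.files[path] = data
	return nil
}

func (m *memStorage) Get(path string) ([]byte, error) {
	data, ok := m.files[path]
	if !ok {
		return nil, errors.New("not found: " + path)
	}
	return data, nil
}

// roundTrip writes and reads back a file purely through the
// interface; which backend is used is a deployment decision.
func roundTrip(s Storage, path, contents string) (string, error) {
	if err := s.Put(path, []byte(contents)); err != nil {
		return "", err
	}
	data, err := s.Get(path)
	return string(data), err
}

func main() {
	got, err := roundTrip(newMemStorage(), "/home/demo/notes.txt", "hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(got)
}
```

Because `roundTrip` depends only on the interface, swapping `memStorage` for an EOS- or S3-backed implementation would not require touching the calling code — the property that lets OCIS target both large data centers and small setups from one codebase.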
+ +### Tier 2: Core + +OCIS's second tier is (due to Go) more of a collection of microservices than a singular core. Each one is responsible for handling a single task in the background (e.g., scanning for viruses). Basically, all of OCIS's functionality results from a specific microservice's work, like authenticating requests using OpenID Connect against an identity provider. In the end, that makes it a simple task to connect existing user directories—such as Active Directory Federation Services (ADFS), Azure AD, or Lightweight Directory Access Protocol (LDAP)—to ownCloud. For those that do not have an existing identity provider, ownCloud ships its own instance, effectively making ownCloud maintain its own user database. + +### Tier 3: Frontend + +OCIS's third tier, the frontend, is what the vendor calls ownCloud Web. It's a complete rewrite of the user interface and is based on the Vue.js JavaScript framework. Like the OCIS core, the web frontend is written based on microservices principles and hence allows better performance and scalability. The developers also used the opportunity to give the web interface a makeover; compared to previous ownCloud versions, the OCIS web interface looks smaller and slicker. + +OCIS's developers did an impressive job complying with modern software design principles. The fundamental problem in building applications according to the microservices approach is making the environment's individual components communicate with each other. APIs can come to the rescue, but that means every micro component must have its own well-defined API interface. + +Luckily, there are existing tools to take that burden off developers' shoulders, most notably [gRPC][9]. The idea behind gRPC is to have a set of predefined APIs that trigger actions in one component from within another. + +### Other notable design changes + +#### Tackling network traffic with Traefik + +This new application design brings some challenges to the underlying network. 
OCIS's developers chose the [Traefik][10] framework to tackle them. Traefik automatically load-balances different instances of microservices, manages automated SSL encryption, and allows additional deployments of firewall rules. + +The split between the backend and the frontend adds advantages to OCIS. In fact, the user's actions triggered through ownCloud Web are completely decoupled from the ownCloud engine performing the task in the backend. If a user manually starts a virus check on files stored in ownCloud, they don't have to wait for the check to finish. Instead, the check happens in the background, and the user sees the results after the check is completed. This is the principle of concurrency at work. + +#### Extensions as microservices + +Like other web services, ownCloud supports extending its capabilities through extensions. OCIS doesn't change this, but it promises to tackle a well-known problem, especially with community apps. Apps of unknown origin can cause trouble in the server, hamper updates, and negatively impact the server's overall performance. + +OCIS's new, gRPC-based architecture makes it much easier to create extensions alongside existing microservices. Because the API is predefined by gRPC, developers merely need to create a microservice featuring the desired functionality that can be controlled by gRPC. Traefik, on a per-case basis, ensures that newly deployed add-ons are automatically added to the existing communication mesh. + +#### Goodbye, MySQL! + +ownCloud's switch to gRPC and microservices eliminates the need for a relational database. Instead, components that need to store metadata do it on their own. Due to Reva and the lack of a MySQL dependency, the complexity of running ownCloud in small environments is reduced considerably—an especially welcome bonus for admins of small installations, and still a win for maintainers of large-scale data centers.
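The decoupling described above — the user's request returning immediately while the work continues in the backend — is exactly the kind of pattern Go's concurrency primitives make cheap. The sketch below is illustrative only (the function names are invented, and a real OCIS extension would be a gRPC microservice): a slow job such as a virus check is fanned out across goroutines, and the results are gathered once all of them finish.

```go
package main

import (
	"fmt"
	"sync"
)

// scanFile stands in for a slow backend job such as a virus check.
// In a real deployment this would be a separate microservice.
func scanFile(name string) string {
	return name + ": clean"
}

// scanAll fans the work out across goroutines — no file waits for
// another to finish — and collects the results in order.
func scanAll(names []string) []string {
	results := make([]string, len(names))
	var wg sync.WaitGroup
	for i, name := range names {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			results[i] = scanFile(name)
		}(i, name)
	}
	wg.Wait()
	return results
}

func main() {
	for _, r := range scanAll([]string{"report.pdf", "photo.png"}) {
		fmt.Println(r)
	}
}
```

The caller of `scanAll` never blocks on any single file; slow items overlap instead of queuing, which is the same reason concurrent, multi-threaded designs outperform one-request-at-a-time servers.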
+ +### Getting OCIS up and running + +ownCloud published a technical preview of OCIS 1.0 in December 2020, [shipping it][11] as a Docker container and binaries. More examples of getting it running are linked in the deployment section of its [GitHub repository][12]. + +#### Install with Docker + +Getting OCIS up and running with Docker containers is easy, although things can get complicated if you're new to EOS. Docker images for OCIS are available on [Docker Hub][13]. Look for the `latest` tag for the current master branch. + +Any standard virtual machine from one of the big cloud providers or any entry-level server in a data center that uses a standard Linux distribution should be sufficient, provided the system has a container runtime installed. + +Assuming you have Docker or Podman installed, the command to start OCIS is simple: + + +``` +$ docker run --rm -ti -p 9200:9200 owncloud/ocis +``` + +That's it! OCIS is now waiting at your service on localhost port 9200. Open a web browser and navigate to `http://localhost:9200` to check it out. + +The demo accounts and passwords are `einstein:relativity`, `marie:radioactivity`, and `richard:superfluidity`. Admin accounts are `moss:vista` and `admin:admin`. If OCIS runs on a server with a resolvable hostname, it can request an SSL certificate from Let's Encrypt using Traefik. + +![OCIS contains no files at first login][14] + +(Martin Loschwitz, [CC BY-SA 4.0][15]) + +![OCIS user management interface][16] + +(Martin Loschwitz, [CC BY-SA 4.0][15]) + +#### Install with binary + +As an alternative to Docker, there also is a pre-compiled binary available. Thanks to Go, users can [download the latest binaries][17] from the master branch. + +OCIS's binary edition expects `/var/tmp/ocis` as the default storage location, but you can change that in its configuration.
You can start the OCIS server with: + + +``` +$ ./ocis server +``` + +Here are some of the subcommands available through the `ocis` binary: + + * `ocis health` runs a health check. A result greater than 0 indicates an error. + * `ocis list` prints all running OCIS extensions. + * `ocis run foo` starts a particular extension (`foo`, in this example). + * `ocis kill foo` stops a particular extension (`foo`, in this example). + * `ocis --help` prints a help message. + + + +The project's GitHub repository contains full [documentation][11]. + +### Setting up EOS (it's complicated) + +Following ownCloud's recommendations to deploy OCIS with EOS for large environments requires some additional steps. EOS not only adds required hardware and increases the whole environment's complexity, but it's also a slightly bigger task to set up. CERN provides concise [EOS documentation][18] (linked from its [GitHub repository][19]), and ownCloud offers a [step-by-step guide][20]. + +In a nutshell, users have to get and start the EOS and OCIS containers; configure LDAP support; and kill the home, users, and metadata storage services before starting them again with the EOS configuration. Last but not least, the accounts service needs to be set up to work with EOS. All of these steps are `docker-compose` commands documented in the GitHub repository. The Storage Backends page on EOS also provides information on verification, troubleshooting, and a command reference for the built-in EOS shell. + +### Weighing risks and rewards + +ownCloud Infinite Scale is easy to install, faster than ever before, and better prepared for scalability. The modular design, with microservices and APIs (even for its extensions), looks promising. ownCloud is embracing new technology and developing for the future. If you run ownCloud, or if you've been thinking of trying it, there's never been a better time.
Keep in mind that this is still a technology preview and is on a rolling release published every three weeks, so please report any bugs you find. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/owncloud + +作者:[Martin Loschwitz][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/martinloschwitzorg +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern) +[2]: https://owncloud.com/infinite-scale/ +[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains +[4]: https://golang.org/ +[5]: https://en.wikipedia.org/wiki/Earth_Observing_System +[6]: https://xrootd.slac.stanford.edu/ +[7]: https://eos-web.web.cern.ch/eos-web/ +[8]: https://reva.link/ +[9]: https://en.wikipedia.org/wiki/GRPC +[10]: https://opensource.com/article/20/3/kubernetes-traefik +[11]: https://owncloud.github.io/ocis/getting-started/ +[12]: https://github.com/owncloud/ocis +[13]: https://hub.docker.com/r/owncloud/ocis +[14]: https://opensource.com/sites/default/files/uploads/ocis5.png (OCIS contains no files at first login) +[15]: https://creativecommons.org/licenses/by-sa/4.0/ +[16]: https://opensource.com/sites/default/files/uploads/ocis2.png (OCIS user management interface) +[17]: https://download.owncloud.com/ocis/ocis/ +[18]: https://eos-docs.web.cern.ch/ +[19]: https://github.com/cern-eos/eos +[20]: https://owncloud.github.io/ocis/storage-backends/eos/ diff --git a/sources/tech/20210212 4 reasons to choose Linux for art and design.md b/sources/tech/20210212 4
reasons to choose Linux for art and design.md new file mode 100644 index 0000000000..6acb0ec2d8 --- /dev/null +++ b/sources/tech/20210212 4 reasons to choose Linux for art and design.md @@ -0,0 +1,101 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 reasons to choose Linux for art and design) +[#]: via: (https://opensource.com/article/21/2/linux-art-design) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +4 reasons to choose Linux for art and design +====== +Open source enhances creativity by breaking you out of a proprietary +mindset and opening your mind to possibilities. Explore several open +source creative programs. +![Painting art on a computer screen][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today I'll explain why Linux is an excellent choice for creative work. + +Linux gets a lot of press for its amazing server and cloud computing software. It comes as a surprise to some that Linux happens to have a great set of creative tools, too, and that they easily rival popular creative apps in user experience and quality. When I first started using open source creative software, it wasn't because I didn't have access to the other software. Quite the contrary, I started using open source tools when I had the greatest access to the proprietary tools offered by several leading companies. I chose to switch to open source because open source made more sense and produced better results. Those are some big claims, so allow me to explain. + +### High availability means high productivity + +The term _productivity_ means different things to different people. When I think of productivity, it's that when you sit down to do something, it's rewarding when you're able to meet whatever goal you've set for yourself. 
If you get interrupted or stopped by something outside your control, then your productivity goes down. + +Computers can seem unpredictable, and there are admittedly a lot of things that can go wrong. There are lots of hardware parts to a computer, and any one of them can break at any time. Software has bugs and updates to fix bugs, and then new bugs introduced by those updates. If you're not comfortable with computers, it can feel a little like a timebomb just waiting to ensnare you. With so much potentially working _against_ you in the digital world, it doesn't make sense to me to embrace software that guarantees not to work when certain requirements (like a valid license, or more often, an up-to-date subscription) aren't met. + +![Inkscape application][2] + +Inkscape + +Open source creative apps have no required subscription fee and no licensing requirements. They're available when you need them and usually on any platform. That means when you sit down at a working computer, you know you have access to your must-have software. And if you're having a rough day and you find yourself sitting in front of a computer that isn't working, the fix is to find one that does work, install your creative suite, and get to work. + +It's far harder to find a computer that _can't_ run Inkscape, for instance, than it is to find a computer that _is_ running a similar proprietary application. That's called high availability, and it's a game-changer. I've never found myself wasting hours of my day for lack of the software I want to run to get things done. + +### Open access is better for diversity + +When I was working in the creative industry, it sometimes surprised me how many of my colleagues were self-taught both in their artistic and technical disciplines. 
Some taught themselves on expensive rigs with all the latest "professional" applications, but there was always a large group of people who perfected their digital trade on free and open source software because, as kids or as poor college students, that was what they could afford and obtain easily. + +That's a different kind of high availability, but it's one that's important to me and many other users who wouldn't be in the creative industry but for open source. Even open source projects that do offer a paid subscription, like [Ardour][3], ensure that users have access to the software regardless of an ability to pay. + +![Ardour interface][4] + +Ardour + +When you don't restrict who gets to use your software, you're implicitly inviting more users. And when you do that, you enable a greater diversity of creative voices. Art loves influence, and the greater the variety of experiences and ideas you have to draw from, the better. That's what's possible with open source creative software. + +### Resolute format support is more inclusive + +We all acknowledge the value of inclusivity in basically every industry. Inviting _more people_ to the party results in a greater spectacle, in nearly every sense. Knowing this, it's painful when I see a project or initiative that invites people to collaborate, only to limit what kind of file formats are acceptable. It feels archaic, like a vestige of elitism out of the far past, and yet it's a real problem even today. + +In a surprise and unfortunate twist, it's not because of technical limitations. Proprietary software has access to open file formats because they're open source and free to integrate into any application. Integrating these formats requires no reciprocation. By stark contrast, proprietary file formats are often shrouded in secrecy, locked away for use by the select few who pay to play. It's so bad, in fact, that quite often, you can't open some files to get to your data without the proprietary software available. 
Amazingly, open source creative applications nevertheless include support for as many proprietary formats as they possibly can. Here's just a sample of Inkscape's staggering support list: + +![Available Inkscape file formats][5] + +Inkscape file formats + +And that's largely without contribution from the companies owning the file formats. + +Supporting open file formats is more inclusive, and it's better for everyone. + +### No restrictions for fresh ideas + +One of the things I've come to love about open source is the sheer diversity of how any given task is interpreted. When you're around proprietary software, you tend to start to see the world based on what's available to you. For instance, if you're thinking of manipulating some photos, then you generally frame your intent based on what you know to be possible. You choose from the three, four, or ten applications on the shelf because they're the only options presented. + +You generally have several obligatory "obvious" solutions in open source, but you also get an additional _dozen_ contenders hanging out on the fringe. These options are sometimes only half-complete, or they're hyper-focused on a specific task, or they're challenging to learn, but most importantly, they're unique and innovative. Sometimes they've been developed by someone who's never seen the way a task is "supposed to be done," and so the approach is wildly different from anything else on the market. Other times, they're developed by someone familiar with the "right way" of doing something who is trying a different tactic anyway. It's a big, dynamic brainstorm of possibility. + +These kinds of everyday innovations can lead to flashes of inspiration, moments of brilliance, or widespread common improvements. For instance, the famous GIMP filter that removes items from photographs and automatically replaces the background was so popular that it later got "borrowed" by proprietary photo editing software.
That's one metric of success, but it's the personal impact that matters most for an artist. I marvel at the creativity of new Linux users when I've shown them just one simple audio or video filter or paint application at a tech demo. Without any instruction or context, the ideas that spring out of a simple interaction with a new tool can be exciting and inspiring, and a whole new series of artwork can easily emerge from experimentation with just a few simple tools. + +There are also ways of working more efficiently, provided the right set of tools are available. While proprietary software usually isn't opposed to the idea of smarter work habits, there's rarely a direct benefit from concentrating on making it easy for users to automate tasks. Linux and open source are largely built exactly for [automation and orchestration][6], and not just for servers. Tools like [ImageMagick][7] and [GIMP scripts][8] have changed the way I work with images, both for bulk processing and idle experimentation. + +You never know what you might create, given tools that you've never imagined existed. + +### Linux artists + +There's a whole [community of artists using open source][9], from [photography][10] to [makers][11] to [musicians][12], and much much more. If you want to get creative, give Linux a go. 
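As a taste of the ImageMagick-style bulk processing mentioned earlier, here is a minimal sketch. The directory names and resize geometry are hypothetical, and the `convert` invocation is printed rather than executed so you can inspect the plan first (drop the leading `echo` once it looks right):

```shell
#!/bin/sh
# Dry-run sketch of bulk thumbnail generation with ImageMagick's convert.
# "photos" and "thumbnails" are hypothetical directory names; the stand-in
# files exist only so the loop has something to iterate over.
mkdir -p photos thumbnails
touch photos/cat.jpg photos/dog.jpg

for img in photos/*.jpg; do
    [ -e "$img" ] || continue                       # glob matched nothing
    out="thumbnails/$(basename "$img" .jpg)-small.jpg"
    # Drop the leading "echo" to actually resize; '800x800>' shrinks but never enlarges.
    echo convert "$img" -resize '800x800>' "$out"
done
```

ImageMagick's `mogrify` can do the same in a single command, but it overwrites files in place unless told otherwise, so an explicit loop like this is easier to reason about when starting out.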
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-art-design + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen) +[2]: https://opensource.com/sites/default/files/inkscape_0.jpg +[3]: https://community.ardour.org/subscribe +[4]: https://opensource.com/sites/default/files/ardour.jpg +[5]: https://opensource.com/sites/default/files/formats.jpg +[6]: https://opensource.com/article/20/11/orchestration-vs-automation +[7]: https://opensource.com/life/16/6/fun-and-semi-useless-toys-linux#imagemagick +[8]: https://opensource.com/article/21/1/gimp-scripting +[9]: https://librearts.org +[10]: https://pixls.us +[11]: https://www.redhat.com/en/blog/channel/red-hat-open-studio +[12]: https://linuxmusicians.com diff --git a/sources/tech/20210212 Network address translation part 2 - the conntrack tool.md b/sources/tech/20210212 Network address translation part 2 - the conntrack tool.md new file mode 100644 index 0000000000..b0ab9085c3 --- /dev/null +++ b/sources/tech/20210212 Network address translation part 2 - the conntrack tool.md @@ -0,0 +1,139 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Network address translation part 2 – the conntrack tool) +[#]: via: (https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/) +[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/) + +Network address translation part 2 – the conntrack tool +====== + +![][1] + +This is 
the second article in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to find the source of NAT-related connectivity problems. Part 2 introduces the “conntrack” command. conntrack allows you to inspect and modify tracked connections. + +### Introduction + +NAT configured via iptables or nftables builds on top of netfilter's connection tracking facility. The _conntrack_ command is used to inspect and alter the state table. It is part of the “conntrack-tools” package. + +### Conntrack state table + +The connection tracking subsystem keeps track of all packet flows that it has seen. Run “_sudo conntrack -L_” to see its content: + +``` +tcp 6 43184 ESTABLISHED src=192.168.2.5 dst=10.25.39.80 sport=5646 dport=443 src=10.25.39.80 dst=192.168.2.5 sport=443 dport=5646 [ASSURED] mark=0 use=1 +tcp 6 26 SYN_SENT src=192.168.2.5 dst=192.168.2.10 sport=35684 dport=443 [UNREPLIED] src=192.168.2.10 dst=192.168.2.5 sport=443 dport=35684 mark=0 use=1 +udp 17 29 src=192.168.8.1 dst=239.255.255.250 sport=48169 dport=1900 [UNREPLIED] src=239.255.255.250 dst=192.168.8.1 sport=1900 dport=48169 mark=0 use=1 +``` + +Each line shows one connection tracking entry. You might notice that each line shows the addresses and port numbers twice, even with inverted address and port pairs! This is because each entry is inserted into the state table twice. The first address quadruple (source and destination address and ports) is the one recorded in the original direction, i.e., what the initiator sent. The second quadruple is what conntrack expects to see when a reply from the peer is received. This solves two problems: + + 1. If a NAT rule matches, such as IP address masquerading, this is recorded in the reply part of the connection tracking entry and can then be automatically applied to all future packets that are part of the same flow. + 2.
A lookup in the state table will be successful even if it's a reply packet to a flow that has any form of network or port address translation applied. + + + +The original (first shown) quadruple stored never changes: it's what the initiator sent. NAT manipulation only alters the reply (second) quadruple because that is what the receiver will see. Changes to the first quadruple would be pointless: netfilter has no control over the initiator's state; it can only influence the packet as it is received/forwarded. When a packet does not map to an existing entry, conntrack may add a new state entry for it. In the case of UDP this happens automatically. In the case of TCP, conntrack can be configured to only add the new entry if the TCP packet has the [SYN bit][3] set. By default, conntrack allows mid-stream pickups so as not to cause problems for flows that existed before conntrack became active. + +### Conntrack state table and NAT + +As explained in the previous section, the reply tuple listed contains the NAT information. It's possible to filter the output to only show entries with source or destination NAT applied. This lets you see which kind of NAT transformation is active on a given flow. _“sudo conntrack -L -p tcp --src-nat_” might show something like this: + +``` +tcp 6 114 TIME_WAIT src=10.0.0.10 dst=10.8.2.12 sport=5536 dport=80 src=10.8.2.12 dst=192.168.1.2 sport=80 dport=5536 [ASSURED] +``` + +This entry shows a connection from 10.0.0.10:5536 to 10.8.2.12:80. But unlike the previous example, the reply direction is not just the inverted original direction: the source address is changed. The destination host (10.8.2.12) sends reply packets to 192.168.1.2 instead of 10.0.0.10. Whenever 10.0.0.10 sends another packet, the router with this entry replaces the source address with 192.168.1.2. When 10.8.2.12 sends a reply, it changes the destination back to 10.0.0.10.
This source NAT is due to a [nft masquerade][4] rule: + +``` +inet nat postrouting meta oifname "veth0" masquerade +``` + +Other types of NAT rules, such as “dnat to” or “redirect to” would be shown in a similar fashion, with the reply tuple's destination different from the original one. + +### Conntrack extensions + +Two useful extensions are conntrack accounting and timestamping. _“sudo sysctl net.netfilter.nf_conntrack_acct=1”_ makes _“sudo conntrack -L_” track byte and packet counters for each flow. + +_“sudo sysctl net.netfilter.nf_conntrack_timestamp=1”_ records a “start timestamp” for each connection. _“sudo conntrack -L”_ then displays the seconds elapsed since the flow was first seen. Add “_--output ktimestamp_” to see the absolute start date as well. + +### Insert and change entries + +You can add entries to the state table. For example: + +``` +sudo conntrack -I -s 192.168.7.10 -d 10.1.1.1 --protonum 17 --timeout 120 --sport 12345 --dport 80 +``` + +This is used by conntrackd for state replication. Entries of an active firewall are replicated to a standby system. The standby system can then take over without breaking connectivity even on established flows. Conntrack can also store metadata not related to the packet data sent on the wire, for example the conntrack mark and connection tracking labels. Change them with the “update” (-U) option: + +``` +sudo conntrack -U -m 42 -p tcp +``` + +This changes the connmark of all TCP flows to 42. + +### Delete entries + +In some cases, you want to delete entries from the state table. For example, changes to NAT rules have no effect on packets belonging to flows that are already in the table. For long-lived UDP sessions, such as tunneling protocols like VXLAN, it might make sense to delete the entry so the new NAT transformation can take effect. Delete entries via _“sudo conntrack -D_” followed by an optional list of address and port information.
The following example removes the given entry from the table: + +``` +sudo conntrack -D -p udp --src 10.0.12.4 --dst 10.0.0.1 --sport 1234 --dport 53 +``` + +### Conntrack error counters + +Conntrack also exports statistics: + +``` +# sudo conntrack -S +cpu=0 found=0 invalid=130 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=10 +cpu=1 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0 +cpu=2 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=1 +cpu=3 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0 +``` + +Most counters will be 0. “Found” and “insert” will always be 0; they only exist for backwards compatibility. Other errors accounted for are: + + * invalid: packet does not match an existing connection and doesn’t create a new connection. + * insert_failed: packet starts a new connection, but insertion into the state table failed. This can happen, for example, when the NAT engine happens to pick an identical source address and port when masquerading. + * drop: packet starts a new connection, but no memory is available to allocate a new state entry for it. + * early_drop: conntrack table is full. In order to accept the new connection, existing connections that did not see two-way communication were dropped. + * error: an ICMP(v6) error packet was received that did not match a known connection. + * search_restart: lookup interrupted by an insertion or deletion on another CPU. + * clash_resolve: several CPUs tried to insert an identical conntrack entry. + + + +These error conditions are harmless unless they occur frequently. Some can be mitigated by tuning the conntrack sysctls for the expected workload. _net.netfilter.nf_conntrack_buckets_ and _net.netfilter.nf_conntrack_max_ are typical candidates. See the [nf_conntrack-sysctl documentation][5] for a full list.
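Since most of these counters should stay at zero, a small script can flag the exceptions. The sketch below runs awk over a canned sample of `conntrack -S` output so it works without root; in practice you would pipe real `sudo conntrack -S` output through the same awk program:

```shell
#!/bin/sh
# Print "cpu counter value" for every non-zero counter, skipping the
# compatibility-only found/insert fields. The sample is canned output.
sample='cpu=0 found=0 invalid=130 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=10
cpu=1 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0'

flagged=$(printf '%s\n' "$sample" | awk '
{
    for (i = 2; i <= NF; i++) {              # field 1 is the cpu=N label
        split($i, kv, "=")
        if (kv[1] != "found" && kv[1] != "insert" && kv[2] + 0 > 0)
            print $1, kv[1], kv[2]
    }
}')
printf '%s\n' "$flagged"
# Prints:
# cpu=0 invalid 130
# cpu=0 search_restart 10
```

Anything this reports frequently on a live system is worth investigating with the sysctls mentioned above.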
+ +Use “_sudo sysctl_ _net.netfilter.nf_conntrack_log_invalid=255”_ to get more information when a packet is invalid. For example, conntrack logs the following when it encounters a packet with all TCP flags cleared: + +``` +nf_ct_proto_6: invalid tcp flag combination SRC=10.0.2.1 DST=10.0.96.7 LEN=1040 TOS=0x00 PREC=0x00 TTL=255 ID=0 PROTO=TCP SPT=5723 DPT=443 SEQ=1 ACK=0 +``` + +### Summary + +This article gave an introduction on how to inspect the connection tracking table and the NAT information stored in tracked flows. The next part in the series will expand on the conntrack tool and the connection tracking event framework. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/ + +作者:[Florian Westphal][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/strlen/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/network-address-translation-part-2-816x345.jpg +[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/ +[3]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure +[4]: https://wiki.nftables.org/wiki-nftables/index.php/Performing_Network_Address_Translation_(NAT)#Masquerading +[5]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/networking/nf_conntrack-sysctl.rst diff --git a/sources/tech/20210214 Why programmers love Linux packaging.md b/sources/tech/20210214 Why programmers love Linux packaging.md new file mode 100644 index 0000000000..837b4a2aed --- /dev/null +++ b/sources/tech/20210214 Why programmers love Linux packaging.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( )
+[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why programmers love Linux packaging) +[#]: via: (https://opensource.com/article/21/2/linux-packaging) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Why programmers love Linux packaging +====== +Programmers can distribute their software easily and consistently via +Flatpaks, letting them focus on their passion: Programming. +![Package wrapped with brown paper and red bow][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today, I'll talk about what makes packaging for Linux ideal for programmers. + +Programmers love to program. That probably seems like an obvious statement, but it's important to understand that developing software involves a lot more than just writing code. It includes compiling, documentation, source code management, install scripts, configuration defaults, support files, delivery format, and more. Getting from a blank screen to a deliverable software installer requires much more than just programming, but most programmers would rather program than package. + +### What is packaging? + +When food is sent to stores to be purchased, it is packaged. When buying directly from a farmer or from an eco-friendly bulk or bin store, the packaging is whatever container you've brought with you. When buying from a grocery store, packaging may be a cardboard box, plastic bag, a tin can, and so on. + +When software is made available to computer users at large, it also must be packaged. Like food, there are several ways software can be packaged. Open source software can be left unpackaged because users, having access to the raw code, can compile and package it themselves. However, there are advantages to packages, so applications are commonly delivered in some format specific to the user's platform. And that's where the problems begin, because there's not just one format for software packages. 
+ +For the user, packages make it easy to install software because all the work is done by the system's installer. The software is extracted from its package and distributed to the appropriate places within the operating system. There's little opportunity for anything to go wrong. + +For the software developer, however, packaging means that you have to learn how to create a package—and not just one package, but a unique package for every operating system you want your software to be installable on. To complicate matters, there are multiple packaging formats and options for each operating system, and sometimes even for the programming language being used. + +### Packaging on Linux + +Packaging options for Linux have traditionally seemed pretty overwhelming. Linux distributions derived from Fedora, such as Red Hat and CentOS, default to `.rpm` packages. Debian and Ubuntu (and similar) default to `.deb` packages. Other distributions may use one or the other, or neither, opting for a custom format. When asked, many Linux users say that ideally, a programmer won't package their software for Linux at all but instead rely on the package maintainers of each distribution to create the package. All software installed onto any Linux system ought to come from that distribution's official repository. However, it remains unclear how to get your software reliably packaged and included by one distribution, let alone all distributions. + +### Flatpak for Linux + +The Flatpak packaging system was introduced to unify and decentralize Linux as a delivery target for developers. With Flatpak, anyone (the developer themselves, a member of a Linux community, a different developer, a Flatpak team member, or anyone else) is free to package software. They can then submit the package to Flathub or choose to self-host the package and offer it to basically any Linux distribution. The Flatpak system is available to all Linux distributions, so targeting one is the same as targeting them all.
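As a concrete sketch of that single-target workflow, here is what installing a Flatpak from Flathub typically looks like on any distribution. The application ID is just an example, and each command is printed rather than executed by default so the steps can be reviewed safely:

```shell
#!/bin/sh
# The typical Flathub workflow: add the remote once, then install and run apps.
# org.inkscape.Inkscape is an example application ID.
DRY_RUN=1   # set to 0 to actually execute the flatpak commands

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
run flatpak install -y flathub org.inkscape.Inkscape
run flatpak run org.inkscape.Inkscape
```

The same three commands work unchanged on Fedora, Debian, Arch, or anything else with flatpak installed, which is the whole point of the single target.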
+ +### How Flatpak technology works + +The secret to Flatpak's universal appeal is a standard base. The Flatpak system allows developers to reference a common set of Software Development Kit (SDK) modules. These are packaged and managed by the maintainers of the Flatpak system. The SDKs get pulled in as needed whenever you install a Flatpak, ensuring compatibility with your system. Any given SDK is only required once because the libraries it contains can be shared across any Flatpak calling for it. + +If a developer requires a library not already included in an existing SDK, the developer can add that library in the Flatpak. + +The results speak for themselves. Users may install hundreds of packages on any Linux distribution from one central repository, called [Flathub][2]. + +### How developers use Flatpaks + +Flatpaks are designed to be reproducible, so the build process is easily integrated into a CI/CD workflow. A Flatpak is defined in a [YAML][3] or JSON manifest file. You can create your first Flatpak by following my [introductory article][4], and you can read the full documentation at [docs.flatpak.org][5]. + +### Linux makes it easy + +Creating software on Linux is easy, and packaging it up for Linux is simple and automatable. If you're a programmer, Linux makes it easy for you to forget about packaging by targeting one system and integrating that into your build process.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-packaging + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brown-package-red-bow.jpg?itok=oxZYQzH- (Package wrapped with brown paper and red bow) +[2]: https://flatpak.org/setup/ +[3]: https://www.redhat.com/sysadmin/yaml-beginners +[4]: https://opensource.com/article/19/10/how-build-flatpak-packaging +[5]: https://docs.flatpak.org/en/latest/index.html diff --git a/sources/tech/20210215 Installing Nextcloud 20 on Fedora Linux with Podman.md b/sources/tech/20210215 Installing Nextcloud 20 on Fedora Linux with Podman.md new file mode 100644 index 0000000000..ae9edc77ea --- /dev/null +++ b/sources/tech/20210215 Installing Nextcloud 20 on Fedora Linux with Podman.md @@ -0,0 +1,222 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Installing Nextcloud 20 on Fedora Linux with Podman) +[#]: via: (https://fedoramagazine.org/nextcloud-20-on-fedora-linux-with-podman/) +[#]: author: (dschier https://fedoramagazine.org/author/danielwtd/) + +Installing Nextcloud 20 on Fedora Linux with Podman +====== + +![][1] + +Nowadays, many open source projects offer container images for easy deployment. This is very handy when running a home server or lab environment. A previous Fedora Magazine article covered [installing Nextcloud from the source package][2]. This article explains how you can run Nextcloud on Fedora 33 as a container deployment with Podman. + +### What is Nextcloud? + +[Nextcloud][3] started in 2016 as a fork of Owncloud. 
Since then, it evolved into a full-fledged collaboration software offering file-, calendar-, and contact-syncing, plus much more. You can run a simple Kanban board in it or write documents collaboratively. Nextcloud is fully open source under the AGPLv3 license and can be used for private or commercial purposes alike. + +### What is Podman? + +Podman is a container engine for developing, managing, and running OCI containers on your Linux system. It offers a wide variety of features like rootless mode, cgroupv2 support, and pod management, and it can run daemonless. Furthermore, you get a Docker-compatible API for further development. It is available by default on Fedora Workstation and ready to be used. + +If you need to install Podman, run: + +``` +sudo dnf install podman +``` + +### Designing the Deployment + +Every deployment needs a bit of preparation. Sure, you can simply start a container and start using it, but that wouldn’t be so much fun. A well-thought-out deployment should be easy to understand and offer some kind of flexibility. + +#### Container / Images + +First, you need to choose the proper container images for the deployment. This is quite easy for Nextcloud, since it already offers pretty good documentation for container deployments. Nextcloud supports two variations: Nextcloud Apache httpd (which is fully self-contained) and Nextcloud php-fpm (which needs an additional nginx container). + +In both cases, you also need to provide a database, which can be MariaDB (recommended) or PostgreSQL (also supported). This article uses the Apache httpd + MariaDB installation. + +#### Volumes + +Running a container does not persist data you create during the runtime. You perform updates by recreating the container. Therefore, you will need some volumes for the database and the Nextcloud files. Nextcloud also recommends you put the “data” folder in a separate volume.
So you will end up with three volumes: + + * nextcloud-app + * nextcloud-data + * nextcloud-db + + + +#### Network + +Lastly, you need to consider networking. One of the benefits of containers is that you can re-create your deployment as it might look in production. [Network segmentation][4] is a very common practice and should be considered for a container deployment, too. This tutorial will not add advanced features like network load balancing or security segmentation. You will need only one network, which you will use to publish the ports for Nextcloud. Creating a network also provides the dnsname plugin, which will allow container communication based on container names. + +#### The picture + +Now that every single element is prepared, you can put these together and get a really nice understanding of how the deployment will look. + +![][5] + +### Run, Nextcloud, Run + +Now you have prepared all of the ingredients and you can start running the commands to deploy Nextcloud. All commands can be used for root-privileged or rootless deployments. This article will stick to rootless deployments. + +Start with the network: + +``` +# Creating a new network +$ podman network create nextcloud-net + +# Listing all networks +$ podman network ls + +# Inspecting a network +$ podman network inspect nextcloud-net +``` + +As you can see in the last command, you created a DNS zone with the name “dns.podman”. All containers created in this network are reachable via “CONTAINER_NAME.dns.podman”. + +Next, optionally prepare your volumes. This step can be skipped, since Podman will create named volumes on demand if they do not exist. Podman supports named volumes, which it creates in special locations, so you don’t need to take care of SELinux or the like.
+ +``` +# Creating the volumes +$ podman volume create nextcloud-app +$ podman volume create nextcloud-data +$ podman volume create nextcloud-db + +# Listing volumes +$ podman volume ls + +# Inspecting volumes (this also provides the full path) +$ podman volume inspect nextcloud-app +``` + +Network and volumes are done. Now provide the containers. + +First, you need the database. According to the MariaDB image documentation, you need to provide some additional environment variables. Additionally, you need to attach the created volume, connect the network, and provide a name for the container. Most of the values will be needed in the next commands again. (Note that you should replace DB_USER_PASSWORD and DB_ROOT_PASSWORD with unique passwords.) + +``` +# Deploy Mariadb +$ podman run --detach \ + --env MYSQL_DATABASE=nextcloud \ + --env MYSQL_USER=nextcloud \ + --env MYSQL_PASSWORD=DB_USER_PASSWORD \ + --env MYSQL_ROOT_PASSWORD=DB_ROOT_PASSWORD \ + --volume nextcloud-db:/var/lib/mysql \ + --network nextcloud-net \ + --restart on-failure \ + --name nextcloud-db \ + docker.io/library/mariadb:10 + +# Check running containers +$ podman container ls +``` + +After the successful start of your new MariaDB container, you can deploy Nextcloud itself. (Note that you should replace DB_USER_PASSWORD with the password you used in the previous step. Replace NC_ADMIN and NC_PASSWORD with the username and password you want to use for the Nextcloud administrator account.)
+ +``` +# Deploy Nextcloud +$ podman run --detach \ + --env MYSQL_HOST=nextcloud-db.dns.podman \ + --env MYSQL_DATABASE=nextcloud \ + --env MYSQL_USER=nextcloud \ + --env MYSQL_PASSWORD=DB_USER_PASSWORD \ + --env NEXTCLOUD_ADMIN_USER=NC_ADMIN \ + --env NEXTCLOUD_ADMIN_PASSWORD=NC_PASSWORD \ + --volume nextcloud-app:/var/www/html \ + --volume nextcloud-data:/var/www/html/data \ + --network nextcloud-net \ + --restart on-failure \ + --name nextcloud \ + --publish 8080:80 \ + docker.io/library/nextcloud:20 + +# Check running containers +$ podman container ls +``` + +Now that the two containers are running, you can configure your containers. Open your browser and point to “localhost:8080” (or another host name or IP address if it is running on a different server). + +The first load may take some time (30 seconds) or even report “unable to load”. This is coming from Nextcloud, which is preparing the first run. In that case, wait a minute or two. Nextcloud will prompt for a username and password. + +![][6] + +Enter the user name and password you used previously. + +![][7] + +Now you are ready to go and experience Nextcloud for testing, development, or your home server. + +### Update + +If you want to update one of the containers, you need to pull the new image and re-create the containers. + +``` +# Update mariadb +$ podman pull mariadb:10 +$ podman stop nextcloud-db +$ podman rm nextcloud-db +$ podman run --detach \ + --env MYSQL_DATABASE=nextcloud \ + --env MYSQL_USER=nextcloud \ + --env MYSQL_PASSWORD=DB_USER_PASSWORD \ + --env MYSQL_ROOT_PASSWORD=DB_ROOT_PASSWORD \ + --volume nextcloud-db:/var/lib/mysql \ + --network nextcloud-net \ + --restart on-failure \ + --name nextcloud-db \ + docker.io/library/mariadb:10 +```
+ +``` +# Update Nextcloud + +$ podman pull nextcloud:20 +$ podman stop nextcloud +$ podman rm nextcloud +$ podman run --detach + --env MYSQL_HOST=nextcloud-db.dns.podman + --env MYSQL_DATABASE=nextcloud + --env MYSQL_USER=nextcloud + --env MYSQL_PASSWORD=DB_USER_PASSWORD + --env NEXTCLOUD_ADMIN_USER=NC_ADMIN + --env NEXTCLOUD_ADMIN_PASSWORD=NC_PASSWORD + --volume nextcloud-app:/var/www/html + --volume nextcloud-data:/var/www/html/data + --network nextcloud-net + --restart on-failure + --name nextcloud + --publish 8080:80 + docker.io/library/nextcloud:20 +``` + +That’s it; your Nextcloud installation is up-to-date again. + +### Conclusion + +Deploying Nextcloud with Podman is quite easy. After just a couple of minutes, you will have a very handy collaboration software, offering filesync, calendar, contacts, and much more. Check out [apps.nextcloud.com][8], which will extend the features even further. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/nextcloud-20-on-fedora-linux-with-podman/ + +作者:[dschier][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/danielwtd/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/nextcloud-podman-816x345.jpg +[2]: https://fedoramagazine.org/build-your-own-cloud-with-fedora-31-and-nextcloud-server/ +[3]: https://nextcloud.com/ +[4]: https://en.wikipedia.org/wiki/Network_segmentation +[5]: https://fedoramagazine.org/wp-content/uploads/2021/01/nextcloud-podman-arch.png +[6]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-02-12-08-38-37-1024x211.png +[7]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-02-12-08-38-28-1024x377.png +[8]: https://apps.nextcloud.com diff --git 
a/sources/tech/20210215 Protect your Home Assistant with these backups.md b/sources/tech/20210215 Protect your Home Assistant with these backups.md new file mode 100644 index 0000000000..5761273b64 --- /dev/null +++ b/sources/tech/20210215 Protect your Home Assistant with these backups.md @@ -0,0 +1,160 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Protect your Home Assistant with these backups) +[#]: via: (https://opensource.com/article/21/2/home-assistant-backups) +[#]: author: (Steve Ovens https://opensource.com/users/stratusss) + +Protect your Home Assistant with these backups +====== +Make sure you can recover quickly from a home automation failure with a +solid backup and hardware plan in the seventh article in this series. +![A rack of servers, blue background][1] + +In the last two articles in this series on home automation with Home Assistant (HA), I walked through setting up a few [integrations][2] with a Zigbee Bridge and some [custom ESP8266][3] devices that I use for automation. The first four articles in the series discussed [what Home Assistant is][4], why you may want [local control][5], some of the [communication protocols][6] for smart home components, and how to [install Home Assistant][7] in a virtual machine (VM) using libvirt. + +Now that you have a basic home automation setup, it is a good time to take a baseline of your system. In this seventh article, I will talk about snapshots, backups, and backup strategies. Let's get right into it. + +### Backups vs. copies + +I'll start by clearing up some ambiguity: A copy of something is not the same as a backup. Here is a brief overview of the difference between a copy and a backup. Bear in mind that this comes from the lens of an IT professional. I work with client data day in and day out. I have seen many ways that backups can go sideways, so the following descriptions may be overkill for home use. 
You'll have to decide just how important your Home Assistant data really is.
+
+  * **Copies:** A copy is just what it sounds like. It is when you highlight something on your computer and hit **Ctrl**+**C** and paste it somewhere else with **Ctrl**+**V**. Many people may view this as backing up the source, and to some extent, that is true. However, a copy is merely a representation of a point in time. If it's taken incorrectly, the newly created file can be corrupt, leading to a false sense of security. In addition, the source may have a problem—meaning the copy will also have a problem. If you have just a single copy of a file, it's often the same as having nothing at all. When it comes to backup, the saying "one is none" is absolutely true. If you do not have files going back over time, you won't have a good idea of whether the system creating the backups has a problem.
+  * **Backups and snapshots:** In Home Assistant, it is a bit tricky to differentiate between a copy and a backup. First, Home Assistant uses the term "snapshot" to refer to what we traditionally think of as backups. In this context, a backup is very similar to a copy because you don't use any type of backup software, at least not in the traditional sense. Normally, backup software is designed specifically to get all the files that are hidden or otherwise protected. For example, backup software for a computer (such as CloneZilla) makes an exact replica (in some cases) of the hard drive to ensure no files are missed. Home Assistant knows how to create snapshots and does it for you. You just need to worry about storing the files somewhere.
+
+
+
+### Set a good backup strategy
+
+Before I get into how to deal with snapshots in Home Assistant, I want to share a brief story from a recent client. Remember when I mentioned that simply having a single copy of your files doesn't give you any indication that a problem has occurred? My client was doing all of the right things when it came to backups.
The team was using the proper methodology for backups, kept multiple files going back a certain period of time, ensured there were more than two copies of each backup, and was especially careful that backups were not being stored locally on the machine being backed up. Sounds great, doesn't it? They were doing everything right. Well, almost. The one thing they neglected was testing the backups. Most people put this off or disregard it entirely. I admit I am guilty of not testing my backups frequently. I do it when I remember, which is usually once every few months or so.
+
+In my client's case, a software upgrade created a new requirement from the backup program. This was missed. The backups continued to hum along, and the automated checks passed. There were files after every backup run, they were larger than a certain amount, and the [magic file checks][8] reported the correct file type. The problem was that the file sizes shrank significantly due to the software change. This meant the client was not backing up the data it thought it was.
+
+This story has a happy ending, which brings me to my point: Because the client was doing everything else right, we could go through the backups and identify the precise moment when something changed. From this, we linked the change to the date of an upgrade some weeks back. Fortunately, there was no emergency that precipitated the investigation. I happened to be doing a standard audit when I discovered the discrepancy.
+
+The moral of the story? Without proper backup strategies, we would have had a much harder time tracking down this problem. Also, in the event of a failure, we would have had no recovery point.
+
+A good backup strategy usually entails daily, weekly, and monthly backups. For example, you may decide to keep all your daily backups for two weeks, four weekly backups, and perhaps four monthly backups. This, in my opinion, is overkill for Home Assistant after you have a stable setup.
You'll have to choose the level of precision you need. I take a backup before I make any change to the system. This gives me a known-good situation to revert to.
+
+### Create snapshots
+
+Great, so how do you create a snapshot in Home Assistant? The **Snapshots** menu resides inside the **Supervisor** tab on the left-side panel.
+
+![Home Assistant snapshots][9]
+
+(Steve Ovens, [CC BY-SA 4.0][10])
+
+You have two options for creating a snapshot: _Full snapshot_ or _Partial snapshot_. A Full snapshot is self-explanatory. You have some options with a Partial snapshot.
+
+![Home Assistant partial snapshots][11]
+
+(Steve Ovens, [CC BY-SA 4.0][10])
+
+Any component you install in Home Assistant will populate in this menu. Choose a name for your backup and click **Create**. This will take some time, depending on the speed of the disk and the size of the backup. I recommend keeping at least four backups on your machine if you can afford the space.
+
+![Home Assistant snapshots][12]
+
+(Steve Ovens, [CC BY-SA 4.0][10])
+
+You can retrieve these files from Home Assistant with File Browser if you set up the **Samba share** extension.
+
+![Samba share][13]
+
+(Steve Ovens, [CC BY-SA 4.0][10])
+
+Save these files somewhere safe. The name you give the backup is contained in the metadata inside Home Assistant, and the file names are randomly generated. After I copy them to a different location, I usually rename them because when I test the restoration process on a different machine, the new file name does not matter.
+
+### My homelab strategy
+
+I run my Home Assistant instance on top of KVM on a Linux host. I have had a few requests to go into a little more detail on this, so feel free to skip past this section as it's not directly related to HA.
+
+Due to the nature of my job, I have a fairly large variety of hardware, which I use for a [homelab][14]. Sometimes this is because physical hosts are easier to work with than VMs for certain clustering software.
Other times, this is because I keep workloads isolated to specific hardware. Either way, this means I already have a certain amount of knowledge built up around managing and dealing with KVM. Not to mention the fact that I run _almost exclusively_ open source software (with a few exceptions). Here is a basic layout of my homelab:
+
+![KVM homelab architecture][15]
+
+(Steve Ovens, [CC BY-SA 4.0][10])
+
+The network-attached storage (NAS) has dual 10GB network cards that feed into uplink ports. Two of the KVM hosts have 10GB network interface cards (NICs), while the hosts on the right have regular 1GB network cards.
+
+For Home Assistant, this is well into overkill territory. However, this infrastructure was not designed for HA. HA runs just fine on a Raspberry Pi 4 (4GB version) at my parents' house.
+
+The VM that hosts Home Assistant has three vCPU cores of a Broadwell Core i5 CPU (circa 2015) with 8GB of RAM. The CPU tends to remain around 25% usage, and I rarely use more than 2.2GB of RAM. This is with 11 add-ons, including InfluxDB and Grafana, which are heavier applications.
+
+While I do have shared storage, I do not use it for live migration or anything similar. Instead, I use this for backend storage for specific mount points in a VM. For example, I store the `data` directory from Nextcloud on the NAS, divorcing the data from the service.
+
+At any rate, I have a few approaches to backups with this setup. Naturally, I use the Home Assistant snapshotting function to provide the first layer of protection. I tend to store only four weeks' worth of data on the VM. What do I do with the files themselves? Here is a diagram of how I try to keep my backups safe:
+
+![Home Assistant backup architecture][16]
+
+(Steve Ovens, [CC BY-SA 4.0][10])
+
+Using the Samba add-on, I pull a copy of the snapshot onto my GNOME desktop. I configure Nextcloud using GNOME's **Online Accounts** settings.
+ +![GNOME Online Accounts settings][17] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +Nextcloud takes a copy and puts it on my NAS. Both my desktop and the NAS use [SpiderOak One Backup][18] clients to ensure the backups are linked to more than one host. In the unlikely event that I delete a device from my SpiderOak account, the file is still linked to another device. I chose SpiderOak because it supports a Linux client, and it is privacy-focused and has no insight into what files it stores. The files are encrypted before being uploaded, and only the owner has the ability to decrypt them. The downside is that if you lose or forget your password, you lose your backups. + +Finally, I keep a cold copy on an external hard drive. I have a 14TB external drive that remains off and disconnected from the NAS except when backups are running. It's not on the diagram, but I also occasionally replicate to a computer at my in-laws' house. + +I can also take snapshots of the VM during critical operations (such as Home Assistant's recent upgrade from using a numbered point release to a month-based numbering system). + +I use a similar pipeline for most things that I back up, although I recognize it is a bit overkill. Also, this whole process has the flaw that it relies on me. Aside from SpiderOak and Nextcloud, I have not automated this process. I have scripts that I run, but I do not run them in a cron or anything like that. In hindsight, perhaps I should work on that. + +This setup may be considered extreme, but the built-in versioning in Nextcloud and SpiderOak, along with making copies in multiple locations, means that I am unlikely to suffer a failure that I can't recover from. At the very least, I should be able to dig up a close reference. + +As a final precaution, I make sure to keep the important information about _how_ I set things up on my private wiki on [Wiki.js][19]. I keep a section just for home automation. 
+ +![Home automation overview][20] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +When you get into creating Node-RED automations (in the next article), I suggest you keep your own notes. I take a screenshot of the flow, write a brief description, so I know what I was attempting to achieve, and dump the flow to JSON (for brevity, I omitted the JSON from this screenshot): + +![Node-RED routine][21] + +(Steve Ovens, [CC BY-SA 4.0][10]) + +### Wrapping up + +Backups are essential when you're using Home Assistant, as it is a critical component of your infrastructure that always needs to be functioning. Small downtime is acceptable, but the ability to recover from a failure quickly is crucial. Granted, I have found Home Assistant to be rock solid. It has never failed on its own; any problems I have had were external to HA. Still, if you are going to make HA a central part of your house, I strongly recommend putting a good backup strategy in place. + +In the next article, I'll take a look at setting up some simple automations with Node-RED. As always, leave a comment below if you have questions, ideas, or suggestions for topics you'd like to see covered. 
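
The copy-rename-prune routine described in this article can also be scripted. Below is a minimal Bash sketch of one way to do it; the share address, the user name, and the `ha-snapshot-` file prefix are my own placeholder assumptions, not anything Home Assistant mandates:

```
#!/usr/bin/env bash
# Minimal sketch of the manual snapshot workflow described above.
# The share URI, user name, and "ha-snapshot-" prefix are placeholder
# assumptions; adjust them for your own setup.

# Pull snapshot files from the Home Assistant Samba share add-on
# (uncomment and adjust; "ha.local" and "backup" are placeholders):
# smbclient //ha.local/backup -U homeassistant -c 'prompt OFF; mget *.tar'

# HA snapshot file names are randomly generated, so stamp a downloaded
# file with today's date to make it sortable and recognizable.
stamp_snapshot() {
  local file="$1"
  mv "$file" "$(dirname "$file")/ha-snapshot-$(date +%F)-$(basename "$file")"
}

# Keep only the newest $2 stamped snapshots in directory $1
# (the date stamp makes a lexicographic sort chronological).
prune_snapshots() {
  local dir="$1" keep="$2"
  ls -1 "$dir"/ha-snapshot-* 2>/dev/null | sort | head -n -"$keep" | xargs -r rm --
}
```

Running something like `prune_snapshots ~/ha-backups 4` after each download keeps the four most recent snapshots, in line with the "at least four backups" suggestion above.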
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/home-assistant-backups + +作者:[Steve Ovens][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/stratusss +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rack_server_sysadmin_cloud_520.png?itok=fGmwhf8I (A rack of servers, blue background) +[2]: https://opensource.com/article/21/1/home-automation-5-homeassistant-addons +[3]: https://opensource.com/article/21/1/home-assistant-6-custom-sensors +[4]: https://opensource.com/article/20/11/home-assistant +[5]: https://opensource.com/article/20/11/cloud-vs-local-home-automation +[6]: https://opensource.com/article/20/11/home-automation-part-3 +[7]: https://opensource.com/article/20/12/home-assistant +[8]: https://linux.die.net/man/5/magic +[9]: https://opensource.com/sites/default/files/uploads/ha-setup33-snapshot1_0.png (Home Assistant snapshots) +[10]: https://creativecommons.org/licenses/by-sa/4.0/ +[11]: https://opensource.com/sites/default/files/uploads/ha-setup34-snapshot2.png (Home Assistant partial snapshots) +[12]: https://opensource.com/sites/default/files/uploads/ha-setup35-snapshot3.png (Home Assistant snapshots) +[13]: https://opensource.com/sites/default/files/uploads/ha-setup36-backup-samba.png (Samba share) +[14]: https://opensource.com/article/19/3/home-lab +[15]: https://opensource.com/sites/default/files/uploads/kvm_lab.png (KVM homelab architecture) +[16]: https://opensource.com/sites/default/files/uploads/home_assistant_backups.png (Home Assistant backup architecture) +[17]: https://opensource.com/sites/default/files/uploads/gnome-online-account.png (GNOME Online Accounts settings) +[18]: https://spideroak.com/ +[19]: 
https://wiki.js.org/
+[20]: https://opensource.com/sites/default/files/uploads/confluence_home_automation_overview.png (Home automation overview)
+[21]: https://opensource.com/sites/default/files/uploads/node_red_bedtime.png (Node-RED routine)
diff --git a/sources/tech/20210215 Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy.md b/sources/tech/20210215 Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy.md
new file mode 100644
index 0000000000..88c4fc7c51
--- /dev/null
+++ b/sources/tech/20210215 Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy.md
@@ -0,0 +1,224 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy)
+[#]: via: (https://www.linux.com/news/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/)
+[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/)
+
+Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy
+======
+
+_by Matt Zand_
+
+As companies catch up in adopting blockchain technology, the choice of a private blockchain platform becomes vital. Hyperledger, whose open source projects support and power more [enterprise blockchain use cases][1] than others, is currently leading the race in private Distributed Ledger Technology (DLT) implementations. Working from the assumption that you know how blockchain works and understand the design philosophy behind [Hyperledger’s ecosystem][2], in this article we will briefly review five active Hyperledger DLTs. In addition to the DLTs discussed in this article, the Hyperledger ecosystem has more supporting tools and libraries that I will cover in more detail in my future articles.
This article mainly targets those who are relatively new to Hyperledger. It will be a great resource for those interested in providing blockchain solution architect services and doing blockchain enterprise consulting and development. The materials included in this article will help you understand Hyperledger DLTs as a whole and use this high-level overview as a guideline for making the best of each Hyperledger project.
+
+Since Hyperledger is supported by a robust open source community, new projects are being added to the Hyperledger ecosystem regularly. At the time of this writing (February 2021), it consists of six active projects and 10 others that are at the incubation stage. Each project has unique features and advantages.
+
+**1- Hyperledger Fabric**
+
+[Hyperledger Fabric][3] is the most popular Hyperledger framework. Smart contracts (also known as **chaincode**) are written in [Golang][4] or JavaScript, and run in Docker containers. Fabric is known for its extensibility and allows enterprises to build distributed ledger networks on top of an established and successful architecture. A permissioned blockchain, initially contributed by IBM and Digital Asset, Fabric is designed to be a foundation for developing applications or solutions with a modular architecture. It accepts pluggable components that provide functionality such as consensus and membership services. Like Ethereum, Hyperledger Fabric can host and execute smart contracts, which are named chaincode. A Fabric network consists of peer nodes, which execute smart contracts (chaincode), query ledger data, validate transactions, and interact with applications. User-entered transactions are channeled to an ordering service component, which serves as the consensus mechanism for Hyperledger Fabric.
Special nodes called Orderer nodes validate the transactions, ensure the consistency of the blockchain, and send the validated transactions to the peers of the network as well as to membership service provider (MSP) services.
+
+Two major highlights of Hyperledger Fabric versus Ethereum are:
+
+  * **Multi-ledger**: Each node on Ethereum has a replica of a single ledger in the network. However, Fabric nodes can carry multiple ledgers on each node, which is a great feature for enterprise applications.
+  * **Private Data**: In addition to a private channel feature, unlike with Ethereum, Fabric members within a consortium can exchange private data among themselves without disseminating it through a Fabric channel, which is very useful for enterprise applications.
+
+
+
+[Here][5] is a good article for reviewing all the Hyperledger Fabric components, like peers, channels, and chaincode, that are essential for building blockchain applications. In short, a thorough understanding of all Hyperledger Fabric components is highly recommended for building, deploying and managing enterprise-level Hyperledger Fabric applications.
+
+**2- Hyperledger Besu**
+
+Hyperledger Besu is an open source Ethereum client developed under the Apache 2.0 license and written in Java. It can be run on the Ethereum public network or on private permissioned networks, as well as test networks such as Rinkeby, Ropsten, and Gorli. Hyperledger Besu supports several consensus algorithms including PoW, PoA, and IBFT, and has comprehensive permissioning schemes designed specifically for use in a consortium environment.
+
+Hyperledger Besu implements the Enterprise Ethereum Alliance (EEA) specification. The EEA specification was established to create common interfaces amongst the various open and closed source projects within Ethereum, to ensure users do not have vendor lock-in, and to create standard interfaces for teams building applications.
Besu implements enterprise features in alignment with the EEA client specification.
+
+As a basic Ethereum client, Besu has the following features:
+
+  * It connects to the blockchain network to synchronize blockchain transaction data or emit events to the network.
+  * It processes transactions through smart contracts in an Ethereum Virtual Machine (EVM) environment.
+  * It maintains a data store of the network's blocks.
+  * It publishes client API interfaces for developers to interact with the blockchain network.
+
+
+
+Besu implements [Proof of Work][6] and [Proof of Authority][7] (PoA) consensus mechanisms. Further, Hyperledger Besu implements several PoA protocols, including Clique and IBFT 2.0.
+
+Clique is a proof-of-authority blockchain consensus protocol. A blockchain running the Clique protocol maintains a list of authorized signers. These approved signers directly seal all blocks without mining, so the transaction workload is computationally light. When creating a block, a signer collects and executes transactions, updates the network state with the calculated hash of the block, and signs the block using its private key. By using a defined period of time to create a block, Clique can limit the number of processed transactions.
+
+IBFT 2.0 (Istanbul BFT 2.0) is a PoA **Byzantine-Fault-Tolerant** (**BFT**) blockchain consensus protocol. Transactions and blocks in the network are validated by authorized accounts, known as validators. Validators collect, validate and execute transactions and create the next block. Existing validators can propose and vote to add or remove validators and maintain a dynamic validator set. The consensus can ensure immediate finality. As the name suggests, IBFT 2.0 builds upon the IBFT blockchain consensus protocol with improved safety and liveness. In an IBFT 2.0 blockchain, all valid blocks are added directly to the main chain and there are no forks.
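
To make the Clique description more concrete, here is a sketch of a minimal genesis file for a private Besu network running Clique. Treat it as illustrative only: the field names (`blockperiodseconds`, `epochlength`) reflect my reading of the Besu documentation and may need adjusting for your Besu version, the chain ID is arbitrary, and the `extraData` value is a placeholder you would replace with a real 20-byte signer address before the network could seal anything.

```
# Write a minimal Clique genesis file for a private Besu network.
# Every value here is an illustrative assumption, not a tested config.
cat > genesis.json <<'EOF'
{
  "config": {
    "chainId": 1337,
    "clique": {
      "blockperiodseconds": 15,
      "epochlength": 30000
    }
  },
  "difficulty": "0x1",
  "gasLimit": "0x1fffffffffffff",
  "extraData": "0x<32 vanity bytes><20-byte signer address><65 seal bytes>",
  "alloc": {}
}
EOF

# Launch a node against it (flag names per the Besu docs; paths are arbitrary):
# besu --data-path=node1 --genesis-file=genesis.json --rpc-http-enabled
```

With a real signer address encoded in `extraData`, the node would begin sealing a block every 15 seconds, matching the defined block period behavior described above.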
**3- Hyperledger Sawtooth**
+
+Sawtooth is the second Hyperledger project to reach 1.0 release maturity. Sawtooth-core is written in Python, while Sawtooth Raft and Sawtooth Sabre are written in Rust. It also has JavaScript and Golang components. Sawtooth supports both permissioned and permissionless deployments. It supports the EVM through a collaboration with Hyperledger Burrow. By design, Hyperledger Sawtooth was created to address performance issues. As such, one of its distinct features compared to other Hyperledger DLTs is that each node in Sawtooth can act as an orderer by validating and approving a transaction. Other notable features are:
+
+  * **Parallel Transaction Execution**: While many blockchains use serial transaction execution to ensure consistent ordering at every node on the network, Sawtooth uses an advanced parallel scheduler that classifies transactions into parallel flows, which boosts transaction processing performance.
+  * **Separation of Application from Core**: Sawtooth simplifies the development and deployment of an application by separating the application level from the core system level. It offers smart contract abstraction to allow developers to create contract logic in the programming language of their choice.
+  * **Custom Transaction Processors**: In Sawtooth, each application can define custom transaction processors to meet its unique requirements. It provides transaction families to serve as an approach for low-level functions, like storing on-chain permissions and managing chain-wide settings, and for particular applications such as saving block information and performance analysis.
+
+
+
+**4- Hyperledger Iroha**
+
+Hyperledger Iroha is designed to target the creation and management of complex digital assets and identities. It is written in C++ and is user friendly. Iroha has a powerful role-based model for access control and supports complex analytics.
While using Iroha for identity management, querying and performing commands are limited to the participants who have access to the Iroha network. A robust permissions system ensures that all transactions are secure and controlled. Some of its highlights are:
+
+  * **Ease of use:** You can easily create and manage simple, as well as complex, digital assets (e.g., cryptocurrency or personal medical data).
+  * **Built-in Smart Contracts:** You can easily integrate blockchain into a business process using built-in smart contracts called “commands.” As such, developers need not write complicated smart contracts because they are available in the form of commands.
+  * **BFT:** Iroha uses a BFT consensus algorithm, which makes it suitable for businesses that require verifiable data consistency at a low cost.
+
+
+
+**5- Hyperledger Indy**
+
+As a self-sovereign identity management platform, Hyperledger Indy is built explicitly for decentralized identity management. The server portion, Indy node, is built in Python, while the Indy SDK is written in Rust. It offers tools and reusable components to manage digital identities on blockchains or other distributed ledgers. The Hyperledger Indy architecture is well-suited for every application that requires heavy identity management work, since Indy is easily interoperable across multiple domains, organization silos, and applications. As such, identities are securely stored and shared with all parties involved. Some notable highlights of Hyperledger Indy are:
+
+●        Identity Correlation-resistant: According to the Hyperledger Indy documentation, Indy is completely identity correlation-resistant. So, you do not need to worry about connecting or mixing one ID with another. That means you cannot connect two IDs or find two similar IDs in the ledger.
+ +●        Decentralized Identifiers (DIDs): According to the Hyperledger Indy documentation, all the decentralized identifiers are globally resolvable and unique without needing any central party in the mix. That means, every decentralized identity on the Indy platform will have a unique identifier that will solely belong to you. As a result, no one can claim or even use your identity on your behalf. So, it would eliminate the chances of identity theft. + +●        Zero-Knowledge Proofs: With help from Zero-Knowledge Proof, you can disclose only the information necessary without anything else. So, when you have to prove your credentials, you can only choose to release the information that you need depending on the party that is requesting it. For instance, you may choose to share your data of birth only with one party whereas to release your driver license and financial docs to another. In short, Indy gives users great flexibility in sharing their private data whenever and wherever needed. + +**Summary** + +In this article, we briefly reviewed five popular Hyperledger DLTs. We started off by going over Hyperledger Fabric and its main components and some of its highlights compared to public blockchain platforms like Ethereum. Even though Fabric is currently used heavily for supply chain management, if you are doing lots of specific works in supply chain domain, you should explore Hyperledger Grid too. Then, we moved on to learning how to use Hyperledger Besu for building public consortium blockchain applications that support multiple consensus algorithms and how to manage Besu from EVM. Next, we covered some highlights of Hyperledger Sawtooth such as how it is designed for high performance. For instance, we learned how a single node in Sawtooth can act as an orderer by approving and validating transactions in the network. The last two DLTs (Hyperledger Iroha and Indy) are specifically geared toward digital asset management and identity . 
So if you are working on a project that heavily uses identity management, you should explore and use either Iroha or Indy instead of Fabric. + +I have included reference and resource links for those interested in exploring topics discussed in this article in depth. + +For more references on all Hyperledger projects, libraries and tools, visit the below documentation links: + + 1. [Hyperledger Indy Project][8] + 2. [Hyperledger Fabric Project][9] + 3. [Hyperledger Aries Library][10] + 4. [Hyperledger Iroha Project][11] + 5. [Hyperledger Sawtooth Project][12] + 6. [Hyperledger Besu Project][13] + 7. [Hyperledger Quilt Library][14] + 8. [Hyperledger Ursa Library][15] + 9. [Hyperledger Transact Library][16] + 10. [Hyperledger Cactus Project][17] + 11. [Hyperledger Caliper Tool][18] + 12. [Hyperledger Cello Tool][19] + 13. [Hyperledger Explorer Tool][20] + 14. [Hyperledger Grid (Domain Specific)][21] + 15. [Hyperledger Burrow Project][22] + 16. [Hyperledger Avalon Tool][23] + + + +**Resources** + + * Free Training Courses from The Linux Foundation & Hyperledger + * [Blockchain: Understanding Its Uses and Implications (LFS170)][24] + * [Introduction to Hyperledger Blockchain Technologies (LFS171)][25] + * [Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa (LFS172)][26] + * [Becoming a Hyperledger Aries Developer (LFS173)][27] + * [Hyperledger Sawtooth for Application Developers (LFS174)][28] + * eLearning Courses from The Linux Foundation & Hyperledger + * [Hyperledger Fabric Administration (LFS272)][29] + * [Hyperledger Fabric for Developers (LFD272)][30] + * Certification Exams from The Linux Foundation & Hyperledger + * [Certified Hyperledger Fabric Administrator (CHFA)][31] + * [Certified Hyperledger Fabric Developer (CHFD)][32] + * [Hands-On Smart Contract Development with Hyperledger Fabric V2][33] Book by Matt Zand and others. 
+ * [Essential Hyperledger Sawtooth Features for Enterprise Blockchain Developers][34] + * [Blockchain Developer Guide- How to Install Hyperledger Fabric on AWS][35] + * [Blockchain Developer Guide- How to Install and work with Hyperledger Sawtooth][36] + * [Intro to Blockchain Cybersecurity (Coding Bootcamps)][37] + * [Intro to Hyperledger Sawtooth for System Admins (Coding Bootcamps)][38] + * [Blockchain Developer Guide- How to Install Hyperledger Iroha on AWS][39] + * [Blockchain Developer Guide- How to Install Hyperledger Indy and Indy CLI on AWS][40] + * [Blockchain Developer Guide- How to Configure Hyperledger Sawtooth Validator and REST API on AWS][41] + * [Intro blockchain development with Hyperledger Fabric (Coding Bootcamps)][42] + * [How to build DApps with Hyperledger Fabric][43] + * [Blockchain Developer Guide- How to Build Transaction Processor as a Service and Python Egg for Hyperledger Sawtooth][44] + * [Blockchain Developer Guide- How to Create Cryptocurrency Using Hyperledger Iroha CLI][45] + * [Blockchain Developer Guide- How to Explore Hyperledger Indy Command Line Interface][46] + * [Blockchain Developer Guide- Comprehensive Blockchain Hyperledger Developer Guide from Beginner to Advance Level][47] + * [Blockchain Management in Hyperledger for System Admins][48] + * [Hyperledger Fabric for Developers (Coding Bootcamps)][49] + * [Free White Papers from Hyperledger][50] + * [Free Webinars from Hyperledger][51] + * [Hyperledger Wiki][52] + + + +**About the Author** + +**Matt Zand** is a serial entrepreneur and the founder of 3 tech startups: [DC Web Makers][53], [Coding Bootcamps][54] and [High School Technology Services][55]. He is a leading author of [Hands-on Smart Contract Development with Hyperledger Fabric][33] book by O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for Hyperledger, Ethereum and Corda R3 platforms. 
At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, angel investor, and business advisor for a few startup companies. You can connect with him on LI:
+
+The post [Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy][56] appeared first on [Linux Foundation – Training][57].
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/
+
+作者:[Dan Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/
+[b]: https://github.com/lujun9972
+[1]: https://blockchain.dcwebmakers.com/blog/comprehensive-overview-and-analysis-of-blockchain-use-cases-in-many-industries.html
+[2]: https://weg2g.com/application/touchstonewords/article-intro-to-hyperledger-family-and-hyperledger-blockchain-ecosystem.php
+[3]: https://learn.coding-bootcamps.com/blog/202224/why-build-blockchain-applications-with-hyperledger-fabric
+[4]: https://learn.coding-bootcamps.com/p/learn-go-programming-language-by-examples
+[5]: https://coding-bootcamps.com/blog/review-of-hyperledger-fabric-architecture-and-components.html
+[6]: https://coding-bootcamps.com/blog/how-proof-of-work-consensus-works-in-blockchain.html
+[7]: https://coding-bootcamps.com/blog/how-proof-of-stake-consensus-works-in-blockchain.html
+[8]: https://www.hyperledger.org/use/hyperledger-indy +[9]: https://www.hyperledger.org/use/fabric +[10]: https://www.hyperledger.org/projects/aries +[11]: https://www.hyperledger.org/projects/iroha +[12]: https://www.hyperledger.org/projects/sawtooth +[13]: https://www.hyperledger.org/projects/besu +[14]: https://www.hyperledger.org/projects/quilt +[15]: https://www.hyperledger.org/projects/ursa +[16]: https://www.hyperledger.org/projects/transact +[17]: https://www.hyperledger.org/projects/cactus +[18]: https://www.hyperledger.org/projects/caliper +[19]: https://www.hyperledger.org/projects/cello +[20]: https://www.hyperledger.org/projects/explorer +[21]: https://www.hyperledger.org/projects/grid +[22]: https://www.hyperledger.org/projects/hyperledger-burrow +[23]: https://www.hyperledger.org/projects/avalon +[24]: https://training.linuxfoundation.org/training/blockchain-understanding-its-uses-and-implications/ +[25]: https://training.linuxfoundation.org/training/blockchain-for-business-an-introduction-to-hyperledger-technologies/ +[26]: https://training.linuxfoundation.org/training/introduction-to-hyperledger-sovereign-identity-blockchain-solutions-indy-aries-and-ursa/ +[27]: https://training.linuxfoundation.org/training/becoming-a-hyperledger-aries-developer-lfs173/ +[28]: https://training.linuxfoundation.org/training/hyperledger-sawtooth-application-developers-lfs174/ +[29]: https://training.linuxfoundation.org/training/hyperledger-fabric-administration-lfs272/ +[30]: https://training.linuxfoundation.org/training/hyperledger-fabric-for-developers-lfd272/ +[31]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-administrator-chfa/ +[32]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-developer/ +[33]: https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/ +[34]: 
https://weg2g.com/application/touchstonewords/article-essential-hyperledger-sawtooth-features-for-enterprise-blockchain-developers.php +[35]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-fabric-on-amazon-web-services.php +[36]: https://myhsts.org/tutorial-learn-how-to-install-and-work-with-blockchain-hyperledger-sawtooth.php +[37]: https://learn.coding-bootcamps.com/p/learn-how-to-secure-blockchain-applications-by-examples +[38]: https://learn.coding-bootcamps.com/p/introduction-to-hyperledger-sawtooth-for-system-admins +[39]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-iroha-on-amazon-web-services.php +[40]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-indy-on-amazon-web-services.php +[41]: https://myhsts.org/tutorial-learn-how-to-configure-hyperledger-sawtooth-validator-and-rest-api-on-aws.php +[42]: https://learn.coding-bootcamps.com/p/live-and-self-paced-blockchain-development-with-hyperledger-fabric +[43]: https://learn.coding-bootcamps.com/p/live-crash-course-for-building-dapps-with-hyperledger-fabric +[44]: https://myhsts.org/tutorial-learn-how-to-build-transaction-processor-as-a-service-and-python-egg-for-hyperledger-sawtooth.php +[45]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-iroha-cli-to-create-cryptocurrency.php +[46]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-indy-command-line-interface.php +[47]: https://myhsts.org/tutorial-comprehensive-blockchain-hyperledger-developer-guide-for-all-professional-programmers.php +[48]: https://learn.coding-bootcamps.com/p/learn-blockchain-development-with-hyperledger-by-examples +[49]: https://learn.coding-bootcamps.com/p/hyperledger-blockchain-development-for-developers +[50]: https://www.hyperledger.org/learn/white-papers +[51]: https://www.hyperledger.org/learn/webinars +[52]: https://wiki.hyperledger.org/ +[53]: https://blockchain.dcwebmakers.com/ +[54]: http://coding-bootcamps.com/ +[55]: 
https://myhsts.org/ +[56]: https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/ +[57]: https://training.linuxfoundation.org/ diff --git a/sources/tech/20210216 How to install Linux in 3 steps.md b/sources/tech/20210216 How to install Linux in 3 steps.md new file mode 100644 index 0000000000..fd860505c8 --- /dev/null +++ b/sources/tech/20210216 How to install Linux in 3 steps.md @@ -0,0 +1,146 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to install Linux in 3 steps) +[#]: via: (https://opensource.com/article/21/2/linux-installation) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +How to install Linux in 3 steps +====== +Operating system installations may seem mysterious, but they are +actually pretty straightforward. Here are the steps for a successful +Linux installation. +![bash logo on green background][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Here's how to install Linux. + +Installing an operating system (OS) is always daunting. It's something of a puzzle to most people: Installing an OS can't happen from inside the OS because it either hasn't been installed, or it's about to be replaced by a different one, so how does it happen? And worse yet, it usually involves confusing questions about hard drive formats, install destinations, time zones, user names, passwords, and a bunch of other stuff that you just don't normally think about. Linux distributions know this, and so they've worked diligently over the years to reduce the time you spend in the OS installer down to the absolute minimum. 
+ +### What happens when you install + +Whether you're installing just an application or a whole operating system, the process of _installation_ is just a fancy way to copy files from one medium to another. Regardless of any user interface or animations used to disguise the procedure as something highly specialized, it all amounts to the same thing in the end: Files that were once stored on a disc or drive are copied to specific locations on your hard drive. + +When it's an application being installed, the valid locations for those files are highly restricted to your _file system_, the part of your hard drive that your operating system knows it can use. This is significant because it's possible to partition a hard drive into separate spaces (Apple used this trick back in the mid-'00s for what they called "Boot Camp", allowing users to install both macOS and Windows onto a drive, but as separate entities). When you install an operating system, some special files are installed into places on your drive that are normally off-limits. More importantly, all existing data on your drive is, at least by default, erased to make room for the new system, so creating a backup is _essential_. + +### Installers + +Technically speaking, you don't actually _need_ an installer to install applications or even operating systems. Believe it or not, some people install Linux manually, by mounting a blank hard drive, compiling code, and copying files. This is accomplished with the help of a project called [Linux From Scratch (LFS)][2]. This project aims to help enthusiasts, students, and future OS designers to learn more about how computers work and what function each component performs. This isn't the recommended method of installing Linux, but you'll find that in open source, it's usually true that _if_ something can be done, then somebody's doing it. And it's a good thing, too, because these niche interests very often lead to surprisingly useful innovations. 
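Whether an installer or the LFS route does the copying, the on-disk result is the same layout of partitions and filesystems described above. You can inspect that layout on any existing Linux system with a couple of read-only commands. Device names such as `sda` or `nvme0n1` are examples that vary from machine to machine, and nothing here modifies the disk:

```shell
# List drives, the partitions on them, and where each filesystem is mounted
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Show how much of the root filesystem the installed files currently occupy
df -h /
```

Noting down these device names before an installation makes it much easier to pick the correct target drive when an installer asks.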
+ +Assuming you're not trying to reverse engineer Linux, though, the normal way to install it is with an install disc or install image. + +### 3 easy steps to install Linux + +When you boot from a Linux install DVD or thumb drive, you're placed into a minimal operating environment designed to run one or more useful applications. The installer is the primary application, but because Linux is such a flexible system, you can usually also run standard desktop applications to get a feel for what the OS is like before you commit to installing it. + +Different Linux distributions have different installer interfaces. Here are two examples: + +Fedora Linux has a flexible installer (called **Anaconda**) capable of complex system configuration. + +![Anaconda installer screen on Fedora][3] + +The Anaconda installer on Fedora + +Elementary OS has a simple installer designed primarily for an install on a personal computer: + +![Elementary OS installer][4] + +Elementary OS installer + +#### 1\. Get an installer + +The first step toward installing Linux is to download an installer. You obtain a Linux install image from the distribution you've chosen to try. + + * [Fedora][5] is famous for being the first to update its software + * [Linux Mint][6] provides easy options to install missing drivers + * [Elementary][7] provides a beautiful desktop experience and several special, custom-built applications + + + +Linux installers are `.iso` files, which are "blueprints" for DVD media. You can burn the `.iso` file to a DVD-R if you still use optical media, or you can _flash_ it to a USB drive (make sure it's an empty USB drive, as all of its contents are erased when the image is flashed onto it). To flash an image to a USB drive, you can [use the open source Etcher application][8]. + +![Etcher used to flash a USB thumb drive][9] + +Etcher application can flash a USB thumb drive + +You're now ready to install Linux. + +#### 2\. 
Boot order + +To install an OS onto a computer, you must boot to the OS installer. This is not a computer's usual behavior, because installing an OS is so rarely done. In theory, you install an OS once, and then you update it. When you opt to install a different operating system onto a computer, you're interrupting that normal lifecycle. That's not a bad thing. It's your computer, so you have the authority to re-image it. However, it's different from the default behavior of a computer, which is to boot to whatever operating system it finds on the hard drive immediately after being powered on. + +Before installing Linux, you must back up any data you have on your target computer because it will all be erased upon installation. + +Assuming you've saved your data to an external hard drive, which you've then secreted away to somewhere safe (and not attached to your computer), then you're ready to proceed. + +First, attach the USB drive containing your Linux installer to your computer. Power on the computer and watch the screen for some indication of how to interrupt its default boot sequence. This is usually a key like **F2**, **F8**, **Esc**, or even **Del**, but it varies depending on your motherboard manufacturer. If you miss your window of opportunity, just wait for the default OS to load, and then reboot and try again. + +When you interrupt the boot sequence, your computer prompts you for boot instructions. Specifically, the firmware embedded into the motherboard needs to know what drive to look to for an operating system it can load. In this case, you want the computer to boot from the USB drive containing the Linux image. How you're prompted for this information varies, depending on the motherboard manufacturer. Sometimes, it's a very direct question complete with a menu: + +![Boot device menu][10] + +The boot device selection menu + +Other times, you're taken into a rudimentary interface you can use to set the boot order. 
Computers are usually set by default to look to the internal hard drive first. Failing that, they move on to a USB drive, a network drive, or an optical drive. You need to tell your computer to look for a USB drive _first_ so that it bypasses its own internal hard drive and instead boots the Linux image on your USB drive. + +![BIOS selection screen][11] + +BIOS selection screen + +This may seem daunting at first, but once you get familiar with the interface, it's a quick and easy task. You won't have to do this once Linux is installed because, after that, you'll want your computer to boot off the internal hard drive again. This is a great trick to get comfortable with, though, because it's the key to using Linux off of a thumb drive, testing a computer for Linux compatibility before installing, and general troubleshooting regardless of what OS is involved. + +Once you've selected your USB drive as the boot device, save your settings, let the computer reset, and boot to the Linux image. + +#### 3\. Install Linux + +Once you've booted into the Linux installer, it's just a matter of stepping through prompts. + +The Fedora installer, Anaconda, presents you with a "menu" of all the things you can customize prior to installation. Most are set to sensible defaults and probably require no interaction from you, but others are marked with alert symbols to indicate that your configurations can't safely be guessed and so need to be set. These include the location of the hard drive you want the OS installed onto and the user name you want to use for your account. Until you resolve these issues, you can't proceed with the installation. + +For the hard drive location, you must know which drive you want to erase and re-image with your Linux distribution of choice. 
This might be an obvious choice on a laptop that has only one drive to begin with: + +![Screen to select the installation drive][12] + +Select the drive to install the OS to (there's only one drive in this example) + +If you've got more than one drive in your computer, and you only want Linux on one of them, or else you want to treat both drives as one, then you must help the installer understand your goal. It's easiest to assign just one drive to Linux, letting the installer perform automatic partitioning and formatting, but there are plenty of other options for advanced users. + +Your computer must have at least one user, so create a user account for yourself. Once that's done, you can click the **Done** button at last and install Linux. + +![Anaconda options complete and ready for installation][13] + +Anaconda options are complete and you're ready to install + +Other installers can be even simpler, believe it or not, so your experience may differ from the images in this article. No matter what, the install process is one of the easiest operating system installations available outside of getting something pre-installed for you, so don't let the idea of installing an OS intimidate you. This is your computer. You can and should install an OS that you truly own. + +### Own your computer + +Ultimately, Linux is your OS. It's an operating system developed by people from all over the world, with one interest at heart: Create a computing culture of participation, mutual ownership, and co-operative stewardship. If you're interested in getting to know open source better, then take the step to know one of its shining examples and install Linux. 
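One habit worth adding to step 1 above: verify the `.iso` you downloaded before flashing it. Distributions publish SHA256 checksums alongside their images. The sketch below generates a stand-in file and checksum list so it can run anywhere; with a real image, you would download the distribution's published checksum file instead, and the filename here is only an example:

```shell
# Stand-in for a downloaded image so this sketch runs anywhere;
# substitute the real .iso you downloaded.
iso="example-distro.iso"
printf 'pretend ISO contents' > "$iso"

# In real life, this checksum list comes from the distribution's download page.
sha256sum "$iso" > SHA256SUMS

# The verification step: prints "example-distro.iso: OK" if the file is intact.
sha256sum --check SHA256SUMS
```

If the check reports FAILED, the download is corrupt (or has been tampered with), and flashing it would only produce a broken installer.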
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-installation + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background) +[2]: http://www.linuxfromscratch.org +[3]: https://opensource.com/sites/default/files/anaconda-installer.png +[4]: https://opensource.com/sites/default/files/elementary-installer.png +[5]: http://getfedora.org +[6]: http://linuxmint.com +[7]: http://elementary.io +[8]: https://opensource.com/article/18/7/getting-started-etcherio +[9]: https://opensource.com/sites/default/files/etcher_0.png +[10]: https://opensource.com/sites/default/files/boot-menu.jpg +[11]: https://opensource.com/sites/default/files/bios_1.jpg +[12]: https://opensource.com/sites/default/files/install-harddrive-chooser.png +[13]: https://opensource.com/sites/default/files/anaconda-done.png diff --git a/sources/tech/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md b/sources/tech/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md new file mode 100644 index 0000000000..deb84d80ea --- /dev/null +++ b/sources/tech/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Meet Plots: A Mathematical Graph Plotting App for Linux Desktop) +[#]: via: (https://itsfoss.com/plots-graph-app/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Meet Plots: A Mathematical Graph Plotting App for 
Linux Desktop +====== + +Plots is a graph plotting application that makes it easy to visualize mathematical formulae. You can use it for trigonometric, hyperbolic, exponential, and logarithmic functions, along with arbitrary sums and products. + +### Plot mathematical graphs with Plots on Linux + +[Plots][1] is a simple application inspired by graph plotting web apps like [Desmos][2]. It allows you to plot graphs of different math functions, which you can enter interactively, and to customize the colors of your plots. + +Written in Python, Plots takes advantage of modern hardware using [OpenGL][3]. It uses GTK 3 and thus integrates well with the GNOME desktop. + +![][4] + +Using Plots is straightforward. To add a new equation, click the plus sign. Clicking the trash icon deletes the equation. There is also the option to undo and redo. You can also zoom in and zoom out. + +![][5] + +The text box where you type is equation-friendly. The hamburger menu has a ‘help’ option to access the documentation. You’ll find useful tips on how to write various mathematical notations here. You can also copy-paste the equations. + +![][6] + +In dark mode, the sidebar equation area turns dark but the main plotting area remains white. I believe that’s by design. + +You can use multiple functions and plot them all in one graph: + +![][7] + +I found that it crashed when I tried to paste equations it could not understand. If you write something that it cannot understand or that conflicts with existing equations, all plots disappear. Removing the incorrect equation brings them back. + +Unfortunately, there is no option to export the plots or copy them to the clipboard. You can always [take screenshots in Linux][8] and use the image in your document where you have to add the graphs. 
+ +**Recommended Read:** + +![][9] + +#### [KeenWrite: An Open Source Text Editor for Data Scientists and Mathematicians][10] + +### Installing Plots on Linux + +Plots has different installation options available for various kinds of distributions. + +Ubuntu 20.04 and 20.10 users can [take advantage of the PPA][11]: + +``` +sudo add-apt-repository ppa:apandada1/plots +sudo apt update +sudo apt install plots +``` + +For other Debian-based distributions, you can [install it from the deb file][12] available [here][13]. + +I didn’t find it in the AUR package list, but as an Arch Linux user, you can either use the Flatpak package or install it using Python. + +[Plots Flatpak Package][14] + +If interested, you may check out the source code on its GitHub repository. If you like the application, please consider giving it a star on GitHub. + +[Plots Source Code on GitHub][1] + +**Conclusion** + +The primary use case for Plots is students learning math or related subjects, but it can be useful in many other scenarios. Not everyone needs a tool like this, but it is surely helpful for people in academia and schools. + +I would have liked an option to export the images, though. Perhaps the developers can add this feature in future releases. + +Do you know any similar applications for plotting graphs? How does Plots stack up against them? 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/plots-graph-app/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/alexhuntley/Plots/ +[2]: https://www.desmos.com/ +[3]: https://www.opengl.org/ +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/fourier-graph-plots.png?resize=800%2C492&ssl=1 +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/plots-app-linux-1.png?resize=800%2C518&ssl=1 +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/plots-app-linux.png?resize=800%2C527&ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/multiple-equations-plots.png?resize=800%2C492&ssl=1 +[8]: https://itsfoss.com/take-screenshot-linux/ +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/keenwrite.png?fit=800%2C450&ssl=1 +[10]: https://itsfoss.com/keenwrite/ +[11]: https://itsfoss.com/ppa-guide/ +[12]: https://itsfoss.com/install-deb-files-ubuntu/ +[13]: https://launchpad.net/~apandada1/+archive/ubuntu/plots/+packages +[14]: https://flathub.org/apps/details/com.github.alexhuntley.Plots diff --git a/sources/tech/20210217 5 reasons to use Linux package managers.md b/sources/tech/20210217 5 reasons to use Linux package managers.md new file mode 100644 index 0000000000..8f5a65c511 --- /dev/null +++ b/sources/tech/20210217 5 reasons to use Linux package managers.md @@ -0,0 +1,74 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 reasons to use Linux package managers) +[#]: via: (https://opensource.com/article/21/2/linux-package-management) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +5 reasons to 
use Linux package managers +====== +Package managers track all components of the software you install, +making updates, reinstalls, and troubleshooting much easier. +![Gift box opens with colors coming out][1] + +In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today, I'll talk about software repositories. + +Before I used Linux, I took the applications I had installed on my computer for granted. I would install applications as needed, and if I didn't end up using them, I'd forget about them, letting them languish as they took up space on my hard drive. Eventually, space on my drive would become scarce, and I'd end up frantically removing applications to make room for more important data. Inevitably, though, the applications would only free up so much space, and so I'd turn my attention to all of the other bits and pieces that got installed along with those apps, whether it was media assets or configuration files and documentation. It wasn't a great way to manage my computer. I knew that, but it didn't occur to me to imagine an alternative, because as they say, you don't know what you don't know. + +When I switched to Linux, I found that installing applications worked a little differently. On Linux, you were encouraged not to go out to websites for an application installer. Instead, you ran a command, and the application was installed on the system, with every individual file, library, configuration file, documentation, and asset recorded. + +### What is a software repository? + +The default method of installing applications on Linux is from a distribution software repository. That might sound like an app store, and that's because modern app stores have borrowed much from the concept of software repositories. [Linux has app stores, too][2], but software repositories are unique. 
You get an application from a software repository through a _package manager_, which enables your Linux system to record and track every component of what you've installed. + +Here are five reasons that knowing exactly what's on your system can be surprisingly useful. + +#### 1\. Removing old applications + +When your computer knows every file that was installed with any given application, it's really easy to uninstall applications you no longer need. On Linux, there's no problem with installing [31 different text editors][3] only to later uninstall the 30 you don't love. When you uninstall on Linux, you really uninstall. + +#### 2\. Reinstall like you mean it + +Not only is an uninstall thorough, but a _reinstall_ is meaningful. On many platforms, should something go wrong with an application, you're sometimes advised to reinstall it. Usually, nobody can say why you should reinstall an application. Still, there's often the vague suspicion that some file somewhere has become corrupt (in other words, data got written incorrectly), and so the hope is that a reinstall might overwrite the bad files and make things work again. It's not bad advice, but it's frustrating for any technician not to know what's gone wrong. Worse still, there's no guarantee, without careful tracking, that all files will be refreshed during a reinstall because there's often no way of knowing that all the files installed with an application were removed in the first place. With a package manager, you can force a complete removal of old files to ensure a fresh installation of new files. Just as significantly, you can account for every file and probably find out which one is causing problems, but that's a feature of open source and Linux rather than package management. + +#### 3\. Keep your applications updated + +Don't let anybody tell you that Linux is "more secure" than other operating systems. Computers are made of code, and we humans find ways to exploit that code in new and interesting ways every day. 
Because the vast majority of applications on Linux are open source, many exploits are filed publicly as Common Vulnerabilities and Exposures (CVEs). A flood of incoming security bug reports may seem like a bad thing, but this is definitely a case when _knowing_ is far better than _not knowing_. After all, just because nobody's told you that there's a problem doesn't mean that there's not a problem. Bug reports are good. They benefit everyone. And when developers fix security bugs, it's important for you to be able to get those fixes promptly, and preferably without having to remember to do it yourself. + +A package manager is designed to do exactly that. When applications receive updates, whether it's to patch a potential security problem or introduce an exciting new feature, your package manager application alerts you to the available update. + +#### 4\. Keep it light + +Say you have application A and application B, both of which require library C. On some operating systems, by getting A and B, you get two copies of C. That's obviously redundant, so imagine it happening several times per application. Redundant libraries add up quickly, and by having no single source of "truth" for a given library, it's nearly impossible to ensure you're using the most up-to-date or even just a consistent version of it. + +I admit I don't tend to sit around pondering software libraries all day, but I do remember the days when I did, even though I didn't know that's what was troubling me. Before I had switched to Linux, it wasn't uncommon for me to encounter errors when dealing with media files for work, or glitches when playing different video games, or quirks when reading a PDF, and so on. I spent a lot of time investigating these errors back then. I still remember learning that two major applications on my system each had bundled the same (but different) graphic backend technologies. The mismatch was causing errors when the output of one was imported into the other. 
It was meant to work, but because of a bug in an older version of the same collection of library files, a hotfix for one application didn't benefit the other. + +A package manager knows what backends (referred to as _dependencies_) are needed for each application and refrains from reinstalling software that's already on your system. + +#### 5\. Keep it simple + +As a Linux user, I appreciate a good package manager because it helps make my life simple. I don't have to think about the software I install, what I need to update, or whether something's really been uninstalled when I'm finished with it. I audition software without hesitation. And when I'm setting up a new computer, I run [a simple Ansible script][4] to automate the installation of the latest versions of all the software I rely upon. It's simple, smart, and uniquely liberating. + +### Better package management + +Linux takes a holistic view of applications and the operating system. After all, open source is built upon the work of other open source projects, so distribution maintainers understand the concept of a dependency _stack_. Package management on Linux has an awareness of your whole system, the libraries and support files on it, and the applications you install. These disparate parts work together to provide you with an efficient, optimized, and robust set of applications. 
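The record-keeping described throughout this article is easy to see for yourself with a few read-only queries against the package database. The `dpkg` commands below are specific to Debian- and Ubuntu-family systems and are shown only as an illustration; `rpm -ql`/`rpm -qf` and `pacman -Ql`/`pacman -Qo` are the rough equivalents elsewhere:

```shell
# List every file the coreutils package installed; this record is what
# makes a truly complete uninstall possible.
dpkg -L coreutils | head -n 5

# Ask which package owns a file that is already on disk.
dpkg -S /usr/bin/env
```

Because every file is accounted for, the package manager can remove an application together with everything it brought along, and can tell you exactly which package a stray file belongs to.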
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-package-management + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1 (Gift box opens with colors coming out) +[2]: http://flathub.org +[3]: https://opensource.com/article/21/1/text-editor-roundup +[4]: https://opensource.com/article/20/9/install-packages-ansible diff --git a/sources/tech/20210217 Use this bootable USB drive on Linux to rescue Windows users.md b/sources/tech/20210217 Use this bootable USB drive on Linux to rescue Windows users.md new file mode 100644 index 0000000000..9a8c541d12 --- /dev/null +++ b/sources/tech/20210217 Use this bootable USB drive on Linux to rescue Windows users.md @@ -0,0 +1,85 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Use this bootable USB drive on Linux to rescue Windows users) +[#]: via: (https://opensource.com/article/21/2/linux-woeusb) +[#]: author: (Don Watkins https://opensource.com/users/don-watkins) + +Use this bootable USB drive on Linux to rescue Windows users +====== +WoeUSB makes Windows boot disks from Linux and helps your friends unlock +their out-of-commission machines. +![Puzzle pieces coming together to form a computer screen][1] + +People regularly ask me to help them rescue Windows computers that have become locked or damaged. Sometimes, I can use a Linux USB boot drive to mount Windows partitions and then transfer and back up files from the damaged systems. 
+ +Other times, clients lose their passwords or otherwise lock their login account credentials. One way to unlock an account is to create a Windows boot disk to repair the computer. Microsoft allows you to download copies of Windows from its website and offers tools to create a USB boot device. But to use them, you need a Windows computer, which means, as a Linux user, I need another way to create a boot DVD or USB drive. I have found it difficult to create Windows USBs on Linux. My reliable tools, like [Etcher.io][2], [Popsicle][3] (for Pop!_OS), and [UNetbootin][4], and even `dd` from the command line, have not been very successful at creating bootable Windows media. + +That is, until I discovered [WoeUSB-ng][5], a [GPL 3.0][6] Linux tool that creates a bootable USB drive for Windows Vista, 7, 8, and 10. The open source software has two programs: a command-line utility and a graphical user interface (GUI) version. + +### Install WoeUSB-ng + +The GitHub repository contains instructions for [installing][7] WoeUSB-ng on Arch, Ubuntu, Fedora, or with pip3. + +If you're on a supported Linux operating system, you can install WoeUSB-ng using your package manager. Alternatively, you can use Python's package manager, [pip][8], to install the application. This method works on any Linux distribution. There's no functional difference between these methods, so use whichever's familiar to you. + +I'm running Pop!_OS, which is an Ubuntu derivative, but being comfortable with Python, I chose the pip3 install: + + +``` +$ sudo pip3 install WoeUSB-ng +``` + +### Create a boot disk + +You can use WoeUSB-ng from the command line or the GUI version. + +To create a boot disk from the command line, the syntax requires the command, a path to your Windows ISO file, and a target device (`/dev/sdX` in this example; use the `lsblk` command to determine your drive): + + +``` +$ sudo woeusb --device Windows.iso /dev/sdX +``` + +You can also launch the program for an easy-to-use interface. 
In the WoeUSB-ng application window, find the Windows.iso file and select it. Choose your USB target device—the drive you want to make into a Windows boot drive. This will ERASE all information on this drive, so choose carefully—and then double-check (and triple-check) your choice! + +Once you're sure you have the right destination drive selected, click the **Install** button. + +![WoeUSB-ng UI][9] + +(Don Watkins, [CC BY-SA 4.0][10]) + +Creating media takes five to 10 minutes, depending on your Linux computer's processor, memory, USB port speed, etc. Be patient. + +Once the process is finished and verified, you have a functional Windows USB boot device to help someone repair their Windows computer. + +### Help others + +Open source is all about helping other people. Very often, you can help Windows users by using the Linux-based [System Rescue CD][11]. But sometimes, the only way to help is directly from Windows, and WoeUSB-ng is a great open source tool to make that possible. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-woeusb + +作者:[Don Watkins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen) +[2]: https://etcher.io/ +[3]: https://github.com/pop-os/popsicle +[4]: https://github.com/unetbootin/unetbootin +[5]: https://github.com/WoeUSB/WoeUSB-ng +[6]: https://github.com/WoeUSB/WoeUSB-ng/blob/master/COPYING +[7]: https://github.com/WoeUSB/WoeUSB-ng#installation +[8]: https://opensource.com/downloads/pip-cheat-sheet +[9]: 
https://opensource.com/sites/default/files/uploads/woeusb-ng-gui.png (WoeUSB-ng UI)
+[10]: https://creativecommons.org/licenses/by-sa/4.0/
+[11]: https://www.system-rescue.org/
diff --git a/sources/tech/20210218 5 must-have Linux media players.md b/sources/tech/20210218 5 must-have Linux media players.md
new file mode 100644
index 0000000000..50d962c292
--- /dev/null
+++ b/sources/tech/20210218 5 must-have Linux media players.md
@@ -0,0 +1,100 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 must-have Linux media players)
+[#]: via: (https://opensource.com/article/21/2/linux-media-players)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+5 must-have Linux media players
+======
+Whether it's movies or music, Linux has you covered with some great media
+players.
+![An old-fashioned video camera][1]
+
+In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Playing media is one of my favorite reasons to use Linux.
+
+You may prefer vinyl and cassette tapes or VHS and Laserdisc, but it's still most likely that you consume the majority of the media you enjoy on a digital device. There's a convenience to media on a computer that can't be matched, largely because most of us are near a computer for most of the day. Many modern computer users don't give much thought to what applications are available for listening to music and watching movies because most operating systems provide a media player by default or because they subscribe to a streaming service and don't keep media files around themselves. But if your tastes go beyond the usual hit list of popular music and shows, or if you work with media for fun or profit, then you have local files you want to play. You probably also have opinions about the available user interfaces.
On Linux, _choice_ is a mandate, and so your options for media playback are endless.
+
+Here are five of my must-have media players on Linux.
+
+### 1\. mpv
+
+![mpv interface][2]
+
+The mpv interface License: Creative Commons Attribution-ShareAlike
+
+A modern, clean, and minimal media player. Thanks to its MPlayer, [ffmpeg][3], and `libmpv` backends, it can play any kind of media you're likely to throw at it. And I do mean "throw at it" because the quickest and easiest way to start a file playing is just to drag the file onto the mpv window. Should you drag more than one file, mpv creates a playlist for you.
+
+While it provides intuitive overlay controls when you mouse over it, the interface is best when operated through the keyboard. For instance, **Alt+1** causes your mpv window to become full-size, and **Alt+0** reduces it to half-size. You can use the **,** and **.** keys to step through the video frame by frame, the **[** and **]** keys to adjust playback speed, **/** and **\*** to adjust volume, **m** to mute, and so on. These master controls make for quick adjustments, and once you learn them, you can adjust playback almost as quickly as the thought occurs to you to do so. For both work and entertainment, mpv is my top choice for media playback.
+
+### 2\. Kaffeine and Rhythmbox
+
+![Kaffeine interface][4]
+
+The Kaffeine interface License: Creative Commons Attribution-ShareAlike
+
+Both the KDE Plasma and GNOME desktops provide music applications that can act as frontends to your personal music library. They invite you to establish a standard location for your music files and then scan through your music collection so you can browse according to album, artist, and so on. Both are great for those times when you just can't quite decide what you want to listen to and want an easy way to rummage through what's available.
+
+[Kaffeine][5] is actually much more than just a music player.
It can play video files, DVDs, CDs, and even digital TV (assuming you have an incoming signal). I've gone whole days without closing Kaffeine, because no matter whether I'm in the mood for music or movies, Kaffeine makes it easy to start something playing.
+
+### 3\. Audacious
+
+![Audacious interface][6]
+
+The Audacious interface License: Creative Commons Attribution-ShareAlike
+
+The [Audacious][7] media player is a lightweight application that can play your music files (even MIDI files) or stream music from the Internet. Its main appeal, for me, is its modular architecture, which encourages the development of plugins. These plugins enable playback of nearly every audio media format you can think of, adjust the sound with a graphic equalizer, apply effects, and even reskin the entire application to change its interface.
+
+It's hard to think of Audacious as just one application because it's so easy to make it into the application you want it to be. Whether you're a fan of XMMS on Linux, WinAmp on Windows, or any number of alternatives, you can probably approximate them with Audacious. Audacious also provides a terminal command, `audtool`, so you can control a running instance of Audacious from the command line, meaning it even approximates a terminal media player!
+
+### 4\. VLC
+
+![vlc interface][8]
+
+The VLC interface License: Creative Commons Attribution-ShareAlike
+
+The [VLC][9] player is probably at the top of the list of applications responsible for introducing users to open source. A tried and true player of all things multimedia, VLC can play music, video, and optical discs. It can also stream and record from a webcam or microphone, making it an easy way to capture a quick video or voice message. Like mpv, it can be controlled mostly through single-letter keyboard presses, but it also has a helpful right-click menu. It can convert media from one format to another, create playlists, track your media library, and much more.
VLC is the best of the best, and most players don't even attempt to match its capabilities. It's a must-have application no matter what platform you're on.
+
+### 5\. Music player daemon
+
+![mpd with the ncmpc interface][10]
+
+mpd and ncmpc License: Creative Commons Attribution-ShareAlike
+
+The [music player daemon (mpd)][11] is an especially useful player, because it runs on a server. That means you can fire it up on a [Raspberry Pi][12] and leave it idling so you can tap into it whenever you want to play a tune. There are many clients for mpd, but I use [ncmpc][13]. With ncmpc or a web client like [netjukebox][14], I can contact mpd from the local host or a remote machine, select an album, and play it from anywhere.
+
+### Media on Linux
+
+Playing media on Linux is easy, thanks to its excellent codec support and an amazing selection of players. I've only mentioned five of my favorites, but there are many, many more for you to explore. Try them all, find the best, and then sit back and relax.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-media-players + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_film.png?itok=aElrLLrw (An old-fashioned video camera) +[2]: https://opensource.com/sites/default/files/mpv_0.png +[3]: https://opensource.com/article/17/6/ffmpeg-convert-media-file-formats +[4]: https://opensource.com/sites/default/files/kaffeine.png +[5]: https://apps.kde.org/en/kaffeine +[6]: https://opensource.com/sites/default/files/audacious.png +[7]: https://audacious-media-player.org/ +[8]: https://opensource.com/sites/default/files/vlc_0.png +[9]: http://videolan.org +[10]: https://opensource.com/sites/default/files/mpd-ncmpc.png +[11]: https://www.musicpd.org/ +[12]: https://opensource.com/article/21/1/raspberry-pi-hifi +[13]: https://www.musicpd.org/clients/ncmpc/ +[14]: http://www.netjukebox.nl/ diff --git a/sources/tech/20210218 What is PPA Purge- How to Use it in Ubuntu and other Debian-based Distributions.md b/sources/tech/20210218 What is PPA Purge- How to Use it in Ubuntu and other Debian-based Distributions.md new file mode 100644 index 0000000000..2be48a8b15 --- /dev/null +++ b/sources/tech/20210218 What is PPA Purge- How to Use it in Ubuntu and other Debian-based Distributions.md @@ -0,0 +1,182 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What is PPA Purge? How to Use it in Ubuntu and other Debian-based Distributions?) 
+[#]: via: (https://itsfoss.com/ppa-purge/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+What is PPA Purge? How to Use it in Ubuntu and other Debian-based Distributions?
+======
+
+PPA is a popular method of installing additional applications or newer versions of software in Ubuntu.
+
+I have written a [detailed guide on PPA][1], so I will just quickly recall it here. PPA is a mechanism developed by Ubuntu to enable developers to provide their own repositories. When you add a PPA, you add an additional repository to your system, and you can then download applications from this additional repository.
+
+```
+sudo add-apt-repository ppa:ppa-address
+sudo apt update
+sudo apt install package_from_ppa
+```
+
+I have also written about [deleting PPAs from your system][2]. I briefly mentioned the PPA Purge tool in that article. In this tutorial, you’ll get more detailed information about this handy utility.
+
+### What is PPA Purge?
+
+PPA Purge is a command line tool that disables a PPA repository in your software sources list. Apart from that, it reverts the system back to official Ubuntu packages. This is different behavior from simply deleting the PPA repository.
+
+Suppose application ABC has version x available from the Ubuntu repositories. You add a PPA that provides a higher version y of the same application/package ABC. When your Linux system finds that the same package is available from multiple sources, it uses the source that provides the newer version.
+
+In this example, you’ll have version y of application ABC installed, thanks to the PPA you added.
+
+Normally, you would remove the application and then remove the PPA from the sources list. But if you use ppa-purge to disable the said PPA, your application ABC will automatically revert to the older version x provided by the Ubuntu repositories.
+
+Do you see the difference? Probably not. Let me explain it to you with real examples.
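+
+Before the real examples, the “newer version wins” rule can be pictured with a quick version sort. The snippet below is only my illustration of the ordering idea (the version strings are made up); it is not how apt itself is implemented:
+
+```
+# apt picks the highest candidate version among all enabled sources.
+# sort -V orders version strings in the same general way; prints 4.0.
+printf '3.0.11\n4.0\n' | sort -V | tail -n 1
+```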
+
+#### Reverting applications to the official version provided by Ubuntu
+
+I heard that the [upcoming VLC 4.0 version has a major UI overhaul][3]. I wanted to try it before it is officially released, and so I used the [daily build PPA of VLC][4] to get the under-development version 4.
+
+Take a look at the screenshot below. I have added the VLC PPA (videolan/master-daily), and this PPA provides the VLC version 4.0 release candidate (RC) version. Ubuntu repositories provide VLC version 3.0.11.
+
+![][5]
+
+If I use the ppa-purge command with the VLC daily build PPA, it disables the PPA and reverts the installed VLC version to 3.0.11, which is available from Ubuntu’s universe repository.
+
+![][6]
+
+You can see that it informs you that some packages are going to be downgraded.
+
+![][7]
+
+When the daily build VLC PPA is purged, the installed version reverts to what Ubuntu provides from its official repositories.
+
+![][8]
+
+You might think that VLC was downgraded because it was upgraded from version 3.0.11 to VLC 4.0 with the PPA. But here is a funny thing. Even if I had used the PPA to install the VLC 4.0 RC version afresh (instead of upgrading it), it would still be downgraded instead of being removed from the system.
+
+Does it mean the ppa-purge command cannot remove applications along with disabling the PPA? Not quite so. Let me show another example.
+
+#### PPA Purge impact on an application only available from a PPA
+
+I recently stumbled across Plots, a [nifty tool for plotting mathematical graphs][9]. Since it is a new application, it is not available in the Ubuntu repositories yet. I used [its PPA][10] to install it.
+
+If I use the ppa-purge command on this PPA, it disables the PPA first and then looks to revert the application to the original version. But there is no ‘original version’ in Ubuntu’s repositories. So, it proceeds to [uninstall the application from Ubuntu][11].
+
+The entire process is depicted in the single picture below.
Pointer 1 is for adding the PPA, pointer 2 is for installing the application named Plots. I have discarded the output of these two commands with [redirection in Linux][12].
+
+You can see that when PPA Purge is used (pointer 3), it disables the PPA (pointer 4) and then proceeds to inform you that the application Plots will be removed (pointer 5).
+
+![][13]
+
+#### Deleting a PPA vs disabling it
+
+I have repeatedly used the term ‘disabling the PPA’ with PPA Purge. There is a difference between disabling a PPA and deleting it.
+
+When you add a PPA, it adds a new file in the /etc/apt/sources.list.d directory. This file has the URL of the repository.
+
+Disabling the PPA keeps this file but comments out the repository entry in the PPA’s file. Now this repository is not considered while updating or installing software.
+
+![][14]
+
+You can see the disabled PPA repository in the Software & Updates tool:
+
+![][15]
+
+When you delete a PPA, it means deleting the PPA’s file from the /etc/apt/sources.list.d directory. You won’t see it anywhere on the system.
+
+![PPA deleted][16]
+
+Why disable a PPA instead of deleting it? Because it is easier to re-enable it. You can just check the box in the Software & Updates tool, or edit the PPA file and remove the leading # to uncomment the repository.
+
+#### Recap of what PPA Purge does
+
+If it was too much information, let me summarize the main points of what the ppa-purge script/tool does:
+
+  * PPA Purge disables a given PPA but doesn’t delete it.
+  * If a new application (which is not available from any source other than the PPA) was installed with the given PPA, it is uninstalled.
+  * If the PPA upgraded an already installed application, that application will be reverted to the version provided by the official Ubuntu repositories.
+  * If you used the PPA to install (not upgrade) a newer version of an application (which is also available from the official Ubuntu repositories), using PPA Purge will downgrade the application to the version available from the Ubuntu repositories.
+
+### Using PPA Purge
+
+Alright! Enough explanation. You might be wondering how to use PPA Purge.
+
+You need to install the ppa-purge tool first. Ensure that you have the [universe repository enabled][17] already.
+
+```
+sudo apt install ppa-purge
+```
+
+As far as using PPA Purge goes, you should provide the PPA name in a format similar to what you use for adding it:
+
+```
+sudo ppa-purge ppa:ppa-name
+```
+
+Here’s a real example:
+
+![][18]
+
+If you are not sure of the PPA name, [use the apt show command][19] to display the source repository of the package in question.
+
+```
+apt show vlc
+```
+
+![Finding PPA source URL][20]
+
+For example, the source for the VLC PPA shows groovy/main. The terms after ppa.launchpad.net and before ubuntu are part of the PPA name. So here, you get the PPA name as videolan/master-daily.
+
+If you have to purge the PPA ‘videolan/master-daily’, you use it like this, adding `ppa:` before the PPA name:
+
+```
+sudo ppa-purge ppa:videolan/master-daily
+```
+
+### Do you purge your PPAs?
+
+I wanted to keep this article short and crisp, but it seems I went into a bit more detail. As long as you learn something new, you won’t mind the additional details, will you?
+
+PPA Purge is a nifty utility that allows you to test newer or beta versions of applications and then easily revert to the original version provided by the distribution. If a PPA has more than one application, it works on all of them.
+
+Of course, you can do all of this manually: disable the PPA, remove the application, and install it again to get the version provided by the distribution. PPA Purge just makes the job easier.
+
+Do you use ppa-purge already, or will you start using it from now onwards?
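+
+One last aside before closing, connecting back to the deleting-vs-disabling distinction: the “disable” step boils down to commenting out the deb line in the PPA’s .list file. Here is a rough sketch using a throwaway file (the file name and repository line are invented for illustration):
+
+```
+# Create a throwaway stand-in for a PPA .list file (illustrative line only).
+printf 'deb http://ppa.launchpad.net/example/ppa/ubuntu focal main\n' > example-ppa.list
+
+# "Disable": comment out the deb line. apt now ignores the repository,
+# but the file (and its URL) survives for easy re-enabling.
+sed -i 's/^deb/# deb/' example-ppa.list
+cat example-ppa.list   # prints: # deb http://ppa.launchpad.net/example/ppa/ubuntu focal main
+```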
Did I miss some crucial information or do you still have some doubts on this topic? Please feel free to use the comment sections. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/ppa-purge/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/ppa-guide/ +[2]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/ +[3]: https://news.itsfoss.com/vlc-4-features/ +[4]: https://launchpad.net/~videolan/+archive/ubuntu/master-daily +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/vlc-ppa.png?resize=800%2C400&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/using-ppap-purge.png?resize=800%2C506&ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/downgrade-packages-with-ppa-purge.png?resize=800%2C506&ssl=1 +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/package-reverted-ppa-purge.png?resize=800%2C405&ssl=1 +[9]: https://itsfoss.com/plots-graph-app/ +[10]: https://launchpad.net/~apandada1/+archive/ubuntu/plots/ +[11]: https://itsfoss.com/uninstall-programs-ubuntu/ +[12]: https://linuxhandbook.com/redirection-linux/ +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/ppa-purge-deleting-apps.png?resize=800%2C625&ssl=1 +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/disabled-ppa.png?resize=800%2C295&ssl=1 +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/disabled-ppa-ubuntu.png?resize=800%2C398&ssl=1 +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/ppa-deleted.png?resize=800%2C271&ssl=1 +[17]: https://itsfoss.com/ubuntu-repositories/ +[18]: 
https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/ppa-purge-example-800x379.png?resize=800%2C379&ssl=1
+[19]: https://itsfoss.com/apt-search-command/
+[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/apt-show-find-ppa-source.png?resize=800%2C341&ssl=1
diff --git a/sources/tech/20210219 7 Ways to Customize Cinnamon Desktop in Linux -Beginner-s Guide.md b/sources/tech/20210219 7 Ways to Customize Cinnamon Desktop in Linux -Beginner-s Guide.md
new file mode 100644
index 0000000000..c6e7195bf8
--- /dev/null
+++ b/sources/tech/20210219 7 Ways to Customize Cinnamon Desktop in Linux -Beginner-s Guide.md
@@ -0,0 +1,130 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (7 Ways to Customize Cinnamon Desktop in Linux [Beginner’s Guide])
+[#]: via: (https://itsfoss.com/customize-cinnamon-desktop/)
+[#]: author: (Dimitrios https://itsfoss.com/author/dimitrios/)
+
+7 Ways to Customize Cinnamon Desktop in Linux [Beginner’s Guide]
+======
+
+Linux Mint is one of the best [Linux distributions for beginners][1]. Windows users who want to [switch to Linux][2], especially, will find its flagship Cinnamon desktop environment very familiar.
+
+Cinnamon gives a traditional desktop experience, and many users like it as it is. That doesn’t mean you have to be content with what it provides. Cinnamon offers several ways of customizing the desktop.
+
+After reading the [MATE][3] and [KDE customization][4] guides, many readers requested a similar tutorial for Linux Mint’s Cinnamon as well. Hence, I created this basic guide on tweaking the look and feel of the Cinnamon desktop.
+
+### 7 Different Ways for Customizing Cinnamon Desktop
+
+For this tutorial, I’m using [Linux Mint Debian Edition][5] (LMDE 4). You can use this on any Linux distribution that is running Cinnamon. If you are unsure, here’s [how to check which desktop environment][6] you are using.
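+
+As a quick aside, one low-tech way to check the desktop environment from a terminal is the `XDG_CURRENT_DESKTOP` variable, which most display managers set (the “unknown” fallback here is my own addition):
+
+```
+# Print the current desktop environment, or "unknown" if the
+# variable is unset (e.g., over SSH or on a bare console).
+echo "${XDG_CURRENT_DESKTOP:-unknown}"
+```
+
+On a Linux Mint Cinnamon session, this typically reports Cinnamon’s identifier.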
+
+When it comes to changing the Cinnamon desktop appearance, I find it very easy to do, as it is just two clicks away. Click on the menu icon and then on Settings, as shown below.
+
+![][7]
+
+All the appearance settings are placed at the top of the window. Everything in the “System Settings” window looks neat and tidy.
+
+![][8]
+
+#### 1\. Effects
+
+The effects options are simple, self-explanatory, and straightforward. You can turn the effects on and off for different elements of the desktop, or change the window transitions by changing the effects style. If you want to change the speed of the effects, you can do it through the Customise tab.
+
+![][9]
+
+#### 2\. Font Selection
+
+In this section, you can change the size and type of the fonts used throughout the system, and you can fine-tune their appearance through the font settings.
+
+![][10]
+
+#### 3\. Themes and icons
+
+One reason I was a Linux Mint user for a few years is that you don’t need to go all over the place to change what you want. Window manager, icon, and panel customization are all in one place!
+
+You can change your panel to a dark or light colour and the window borders to suit your changes. The default Cinnamon appearance settings look the best in my eyes, and I even applied the exact same ones when I was testing the [Ubuntu Cinnamon Remix][11], but in orange colour.
+
+![][12]
+
+#### 4\. Cinnamon Applets
+
+Cinnamon applets are all the elements included on your bottom panel, like the calendar or the keyboard layout switcher. In the Manage tab, you can add/remove the already installed applets.
+
+You should definitely explore the applets you can download; the weather and [CPU temperature][13] indicator applets were my choices from the extras.
+
+![][14]
+
+#### 5\. Cinnamon Desklets
+
+Cinnamon Desklets are applications that can be placed directly on your desktop.
Like all the other customization options, Desklets can be accessed from the settings menu, and the wide variety of choices can attract anyone’s interest. Google Calendar is a handy app to keep track of your schedule directly on your desktop.
+
+![][15]
+
+#### 6\. Desktop wallpaper
+
+To change the desktop background on the Cinnamon desktop, simply right click on the desktop and choose “Change Desktop Background”. It will open an easy-to-use window, where the available background system folders are listed on the left side and a preview of the images within each folder appears on the right pane.
+
+![][16]
+
+You can add your own folders by clicking the plus (+) symbol and navigating to their path. In the Settings tab, you can choose whether your background will be static or a slideshow and how the background is positioned on the screen.
+
+![][17]
+
+#### 7\. Customize what’s on your desktop screen
+
+The background is not the only desktop element that you can change. You can find more options if you right click on the desktop and click on “Customise”.
+
+![][18]
+
+You can change the icon size, change the placement from vertical to horizontal, and change the spacing among them on both axes. If you don’t like what you did, click on “Reset grid spacing” to go back to the default.
+
+![][19]
+
+Additionally, if you click on “Desktop Settings”, more options will be revealed. You can disable the icons on the desktop, place them on the primary or secondary monitor, or even both. As you can see, you can select which icons appear on your desktop.
+
+![][20]
+
+### Conclusion
+
+The Cinnamon desktop is one of the best to choose, especially if you are [switching from Windows to Linux][21], but also for anyone looking for a simple yet elegant desktop.
+
+The Cinnamon desktop is very stable and has never crashed on me, which is one of the main reasons it served me for so long on a variety of Linux distributions.
+
+I didn’t go into much detail but gave you enough pointers to explore the settings on your own. Your feedback to improve Cinnamon customization is welcome.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/customize-cinnamon-desktop/
+
+作者:[Dimitrios][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/dimitrios/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-linux-beginners/
+[2]: https://itsfoss.com/reasons-switch-linux-windows-xp/
+[3]: https://itsfoss.com/ubuntu-mate-customization/
+[4]: https://itsfoss.com/kde-customization/
+[5]: https://itsfoss.com/lmde-4-release/
+[6]: https://itsfoss.com/find-desktop-environment/
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/6-Cinnamon-settings.png?resize=800%2C680&ssl=1
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/7-Cinnamon-Settings.png?resize=800%2C630&ssl=1
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/8-cinnamon-effects.png?resize=800%2C630&ssl=1
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/11-font-selection.png?resize=800%2C650&ssl=1
+[11]: https://itsfoss.com/ubuntu-cinnamon-remix-review/
+[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/10-cinnamon-themes-and-icons.png?resize=800%2C630&ssl=1
+[13]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/
+[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/12-cinnamon-applets.png?resize=800%2C630&ssl=1
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/13-cinnamon-desklets.png?resize=800%2C630&ssl=1
+[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/1.-Cinnamon-change-desktop-background.png?resize=800%2C400&ssl=1
+[17]: 
https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/2-Cinnamon-change-desktop-background.png?resize=800%2C630&ssl=1 +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/1.-desktop-additional-customization.png?resize=800%2C400&ssl=1 +[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/4-desktop-additional-customization.png?resize=800%2C480&ssl=1 +[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/5-desktop-additional-customization.png?resize=800%2C630&ssl=1 +[21]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ diff --git a/translated/talk/20160921 lawyer The MIT License, Line by Line.md b/translated/talk/20160921 lawyer The MIT License, Line by Line.md new file mode 100644 index 0000000000..eb050aad1b --- /dev/null +++ b/translated/talk/20160921 lawyer The MIT License, Line by Line.md @@ -0,0 +1,290 @@ +[#]: collector: (lujun9972) +[#]: translator: (bestony) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (lawyer The MIT License, Line by Line) +[#]: via: (https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html) +[#]: author: (Kyle E. 
Mitchell https://kemitchell.com/) + +逐行解读 MIT 许可证 +====== + +[MIT 许可证][1] 是世界上最流行的开源软件许可证。以下是它的逐行解读。 + +#### 阅读协议 + +如果你涉及到开源软件,并且没有花时间从头到尾的阅读整个许可证(它只有 171 个单词),你现在就需要这样去做。尤其是当许可证不是你日常的工作内容时。把任何看起来不对劲或不清楚的地方记下来,然后继续阅读。我会把每一个单词再重复一遍,并按顺序分块,加入上下文和注释。但最重要的还是要牢记整体。 + +> The MIT License (MIT) +> +> Copyright (c) +> +> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: +> +> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. +> +> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the Software. 
+ +许可证可以分为五段,按照逻辑划分如下: + + * **头部** + * **许可证标题** : “MIT 许可证” + * **版权说明** : “Copyright (c) …” + * **许可证授权** : “特此批准 …” + * **授权范围** : “… 处理软件 …” + * **条件** : “… 服从了 …” + * **归属和通知** : “上述 … 应当被包含在内 …” + * **免责声明** : “软件按照“原状”提供 …” + * **责任限制** : “在任何情况下 …” + + +接下来详细看看 + +#### 头部 + +##### 许可证头部 + +> The MIT License (MIT) + +“MIT 许可证“不是单一的许可证,而是一系列从麻省理工学院为将要发布的语言准备的许可证衍生的许可证。多年来,无论是对于使用它的原始项目,还是作为其他项目的模型,他都经历了许多变化。Fedora 项目维护了一个纯文本的[麻省理工学院许可证其他版本]的页面,如同泡在甲醛中的解剖标本一般,平淡的追溯了无序的演变。 + +幸运的是,[OSI(开放源码倡议)][3] 和 [Software Package Data eXchange(软件数据包交换)]团体已经将一种通用的 MIT 式的许可证形式标准化为”MIT 许可证“。而 OSI 则采用了 SPDX 标准化的[字符串标志符][5],并将其中的 ”MIT“ 明确的指向标准化形式的”MIT 许可证“ + +即使你在 “LICENSE” 文件中包含 “MIT 许可证”或 “SPDX:MIT“ ,任何负责的审查员仍会将文本与标准格式进行比较,以确保安全。尽管自称为“MIT 许可证”的各种许可证形式只在细微的细节上有所不同,但“MIT 许可证”的松散性吸引了一些作者加入麻烦的“定制”。典型的糟糕、不好的例子是[JSON 许可证][6],一个 MIT 家族的许可证被加上了“这个软件应该被应用于好的,而不是恶的”。这件事情可能是“非常克罗克福特”的(译者注,JSON 格式和 JSON.org 的作者)。这绝对是一件麻烦事,也许这个笑话应该只是在律师身上,但它一直延伸到银行业。 + +这个故事的寓意是:“MIT 许可证”本身就是模棱两可的。大家可能很清楚你的意思,但你只需要把标准的 MIT 许可证文本复制到你的项目中,就可以节省每个人的时间。如果使用元数据(如包管理器中的元数据文件)来制定 “MIT 许可证”,请确保 “LICENSE” 文件和任何头部的注释都适用标准的许可证文本。所有的这些都可以[自动化完成][7]。 + + +##### 版权说明 + +> Copyright (c) + + +在 1976 年《版权法》颁布之前,美国的版权法要求采取具体的行动,即所谓的“手续”来确保创意作品的版权。如果你不遵守这些手续,你起诉他人未经授权使用你的作品的权力就会受到限制,往往完全丧失权力,其中一项手续就是 "通知"。在你的作品上打上记号,以其他方式让市场知道你拥有版权 ©是一个标准符号,用于标记受版权保护的作品,以发出版权通知。ASCII 字符集没有©符号,但`Copyright (c)`可以表达同样的意思。 + +1976 年的版权法“执行”了国际《伯尔尼公约》的许多要求, 取消了确保版权的手续。至少在美国,著作权人在起诉侵权之前,仍然需要对自己的版权作品进行登记,如果在侵权行为开始之前进行登记,可能会获得更高的赔偿。但在实践中,很多人在对某个人提起诉讼之前,都会先注册版权。你并不会因为没有在上面贴上告示、注册、向国会图书馆寄送副本等而失去版权。 + +即使版权声明不像过去那样绝对必要,但它们仍然有很多用处。说明作品的创作年份和版权属于谁,可以让人知道作品的版权何时到期,从而使作品进入公共领域。作者或作者们的身份也很有用。美国法律对个人作者和"公司"作者的版权条款的计算方式不同。特别是在商业用途中,公司在使用已知竞争对手的软件时,可能也要三思而行,即使许可条款给予了非常慷慨的许可。如果你希望别人看到你的作品并想从你这里获得许可,版权声明可以很好地起到归属作用。 + +至于"版权持有人"。并非所有标准形式的许可证都有写明这一点的空间。最新的许可证形式,如[Apache 2.0][8]和[GPL 3.0][9],公布了 "LICENSE" 文本,这些文本是要逐字复制的,并在其他地方加上标题注释和单独的文件,以表明谁拥有版权和给予许可证。这些办法巧妙地阻止了对 "标准"文本的意外或故意的修改。这还使自动许可证识别更加可靠。 + +麻省理工学院的许可证是从为机构发布代码而写的语言演变而来。对于机构发布的代码,只有一个明确的 
"版权持有人",即发布代码的机构。其他机构则抄袭了这些许可证,用他们自己的名字代替 "MIT",最终形成了我们现在的通用形式。这一过程同样适用于该时代的其他简短机构许可证,特别是加州大学伯克利分校的[最初的四条款 BSD 许可证][10],现在用于[三条款][11]和[两条款][12]变体,以及麻省理工学院的变体 — 互联网系统联盟的[ISC 许可证][13]。 + +在每一种情况下,该机构都根据版权所有权规则将自己列为版权持有人,这些规则称为“[雇佣作品][14]”规则,这些规则赋予雇主和客户对其雇员和承包商代表其从事的某些工作的版权所有权。这些规则通常不适用于自愿提交代码的分布式协作者。这给项目统筹型基金会(如 Apache 基金会和 Eclipse 基金会)带来了一个问题,它们接受来自更多不同贡献者的贡献。到目前为止,通常的基础方法是使用一个房产型许可证,它规定了一个版权持有者,如[Apache 2.0][8] 和 [EPL 1.0][15] — 并由贡献者许可协议 [Apache CLAs][16] 以及 [Eclipse CLAs][17] 支持,以从贡献者中收集权利。在像 GPL 这样的 "copyleft" 许可证下,将版权所有权收集在一个地方就更加重要了,因为 GPL 依靠版权所有者来执行许可证条件,以促进软件自由的价值。 + +如今,没有任何机构或业务统筹的大量项目都使用 MIT 风格的许可条款。 SPDX 和 OSI 通过标准化不涉及特定实体或机构版权持有人的 MIT 和 ISC 之类的许可证形式,为这些用例提供了帮助。有了这些许可证,项目作者的普遍做法是在许可证的版权声明中很早就填上自己的名字...也许还会在这里和那里填上年份。至少根据美国的版权法,由此产生的版权通知并不能说明全部情况。 + +某个软件的原始所有者保留其工作的所有权。但是,尽管 MIT 风格的许可条款赋予了他人开发和更改软件的权利,创造了法律所谓的“衍生作品”,但它们并没有赋予原始作者他人的贡献的所有权。相反,每个贡献者都以他们使用现有代码为起点进行的任何[甚至是少量创造][18]的作品来拥有版权。 + +这些项目中的大多数也对获得贡献者许可协议的想法犹豫不决,更不用说签署的版权转让了。这既幼稚又可以理解。尽管一些较新的开源开发人员假设在 GitHub 上发送 Pull Request “自动”根据项目现有许可证的条款授权分发贡献,但美国法律不承认任何此类规则。默认而强大的版本保护是不允许许可的。 + +更新:GitHub 后来修改了全站的服务条款,包括试图改变这一默认值,至少在 GitHub.com 上是这样。我在[另一篇][19]中写了一些对这一发展的并非都是正面的看法。 + +为了填补法律上有效的、有据可查的贡献权利授予与完全没有文件线索之间的差距,一些项目采用了[开发者原创证书][20],这是贡献者在 Git 提交中使用 "Signed-Off-By" 元数据标签暗示的标准声明。 开发人员原创证书是在臭名昭著的 SCO 诉讼之后,为 Linux 内核开发而开发的,该诉讼称 Linux 的大部分代码源自 SCO 拥有的 Unix 作为来源。作为创建显示 Linux 的每一行都来自贡献者的书面记录的一种方法,开发人员原创证书功能很好。尽管开发人员原产地证书不是许可证,但它确实提供了许多充分的证据,表明提交代码的人希望项目分发其代码,并让其他人根据内核现有的许可证条款使用该代码。内核还维护着一个机器可读的 “CREDITS” 文件,列出了具有名称,隶属关系,贡献区域和其他元数据的贡献者。我已经做了[一些][21][实验][22]针对不使用内核开发流程的项目进行了调整。 + +#### 许可证授权 + +> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), + +MIT 许可证的实质是许可证(你猜对了)。一般来说,许可证是一个人或法律实体"许可人"给予另一个人"被许可人"做一些法律允许他们起诉的事情的许可。MIT 许可证是一种不起诉的承诺。 + +法律有时将许可证与给予许可证的承诺区分开来。如果有人违背了提供许可证的承诺,你可以起诉他们违背了承诺,但你最终可能得不到许可证。“特此”是律师们永远摆脱不了的一个老生常谈的词。这里使用它来显示许可证文本本身提供了许可证,而不仅仅是许可证的承诺。这是合法的[IIFE][23]。 + +尽管许多许可证都授予特定的命名许可证持有人许可,但 MIT 
许可证是“公共许可证”。 公共许可证授予所有人(包括整个公众)许可。 这是开源许可中的三大创意之一。 MIT 许可证通过“向任何获得……软件副本的人”授予许可证来体现这一思想。 稍后我们将看到,获得此许可证还有一个条件,即确保其他人也可以了解他们的许可。 + +在美国式法律文件中,带引号大写的附加语(定义)是赋予术语特定含义的标准方式。当法院看到文件中其他地方使用了一个已定义的大写术语时,法院将可靠地回顾定义中的术语。 + +##### 授权范围 + +> to deal in the Software without restriction, + +从被许可人的角度来看,这是 MIT 许可证中最重要的七个字。关键的法律问题是被起诉侵犯版权和被起诉侵犯专利。无论是版权法还是专利法都没有将 "to deal in" 作为一个术语,它在法庭上没有特定的含义。因此,任何法院在裁决许可人和被许可人之间的纠纷时,都会问双方对这种语言的含义和理解。法院将看到的是,该措辞有意宽泛和开放。它使被许可人强烈反对许可人的任何主张,即他们没有许可被许可人使用该软件做特定的事情,即使在授予许可时双方都没有明显想到。 + +> including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, + +任何一篇法律文章都不是完美的,“意义上完全确定”或明确无误的。小心那些装作不一样的人。这是 MIT 许可证中最不完美的部分。主要有三个问题: + +首先,“包括但不限于”是一种法律反模式。它有多种衍生: + + * 包括但不限于 + + * 包括但不限于前述条文的一般性 + + * 包括但不限于 + + * 很多,很多毫无意义的变化 + +所有这些都有一个共同的目标,但都未能可靠地实现。从根本上说,使用它们的起草者也会尽量试探着去做。在MIT许可证中,这意味着引入“软件交易”的具体例子 — “使用、复制、修改”等等,但不意味着被许可方的行为必须与给出的例子类似,才能算作“交易”。问题是,如果你最终需要法庭来审查和解释许可证的条款,法庭将把它的工作看作是找出这些语言的含义。如果法院需要决定“交易”的含义,它不能“看不到”这些例子,即使你告诉它。我认为“不受限制的软件交易”本身对被许可方更好,也更短。 + +其次,作为 “deal in” 例子的动词是一个大杂烩。有些在版权法或专利法下有特定的含义,有些几乎有或根本没有: + + + * 使用出现在 [美国法典第 35 篇, 第 271(a)节][24], 专利法列出了专利权人可以在未经许可的情况下起诉他人的行为。 + + * 拷贝出现在 [美国法典第 17 篇, 第 106 节][25], 版权法列出了版权所有人可以在未经许可的情况下起诉他人的行为。 + + * 修改既不出现在版权法中,也不出现在专利法中。它可能最接近版权法下的“准备衍生作品”,但也可能涉及改进或其他衍生发明。 + + * 无论是在版权法还是专利法中,合并都没有出现。“合并”在版权方面有特定的含义,但这显然不是本文的意图。相反,法院可能会根据其在行业中的含义来解读“合并”,如“合并法典”。 + + * 无论是在版权法还是专利法中,都没有公布。由于“软件”是正在出版的东西,根据[版权法][25],它可能最接近于“发行”。该法令还包括“公开”表演和展示作品的权利,但这些权利只适用于特定类型的受版权保护的作品,如戏剧、录音和电影。 + + * 分发出现在[版权法][25]中。 + + * 分许可是知识产权法的总称。再许可权是指授予他人自己的许可,进行您所许可的部分或全部活动的权利。实际上,MIT 许可证的分许可证权利实际上在开放源代码许可证中并不常见。希瑟·米克(Heather Meeker)所说的规范是“直接许可”方法,在这种方法中,每个获得该软件及其许可条款副本的人都直接从所有者那里获得许可。任何可能根据 MIT 许可证获得分许可证的人都可能会得到一份许可证副本,告诉他们他们也有直接许可证。 + + + * 卖书是个混血儿。它接近于[专利法][24]中的“要约出售”和“出售”,但指的是“复制品”,一种版权概念。在版权方面,它似乎接近于“分发”,但[版权法][25]没有提到销售。 + + * 允许向其提供软件的人员这样做似乎是多余的“分许可”。这也是不必要的,因为获得拷贝的人也可以直接获得许可证。 + 
+最后,由于这种法律、行业、一般知识产权和一般使用条款的混杂,并不清楚《麻省理工学院许可证》是否包括专利许可。一般性语言 "交易 "和一些例子动词,尤其是 "使用",都指向了尽管是一个非常不明确的许可的专利许可。许可证来自于版权人,而版权人可能对软件中的发明拥有或不拥有专利权,以及大多数的例子动词和 "软件" 本身的定义,都强烈地指向版权许可证。诸如[Apache 2.0][8]之类的较新的开放源代码许可分别特别地处理了版权,专利甚至商标。 + +##### 三个许可条件 + +> subject to the following conditions: + +总有一个陷阱!麻省理工有三个! + +如果你不遵守麻省理工学院许可证的条件,你就得不到许可证提供的许可。因此,如果不按照条件所说的去做,至少在理论上会让你面临一场诉讼,很可能是一场版权诉讼。 + +开源软件的第二个好主意是,利用软件对被许可人的价值来激发对条件的遵守,即使被许可人不为许可支付任何费用。 最后一个想法,在《麻省理工学院许可证》中没有,它建立在许可证条件之上。像[GNU 通用公共许可证][9]这样的 "Copyleft "许可证,使用许可证条件来控制那些进行修改的人如何对其修改后的版本进行许可和发布。 + +##### 通知条件 + +> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +如果你给别人一份软件的副本,你需要包括许可证文本和任何版权声明。这有几个关键目的。 + + 1. 给别人一个通知,说明他们在公共许可证下对软件有许可。这是直接授权模式的一个关键部分,在这种模式下,每个用户都能直接从版权持有人那里获得授权。 + + 2. 让人们知道谁是软件的幕后推手,这样他们就可以得到赞美、荣耀和冷冰冰的现金捐赠。 + + 3. 确保保修免责声明和责任限制(下一步)跟在软件后面。每一个得到副本的人也应该得到一份这些许可人保护的副本。 + + +没有任何东西可以阻止你为提供一个没有源代码的副本,甚至是编译形式的副本而收费。但是当你这么做的时候,你不能假装 MIT 代码是你自己的专有代码,或者是在其他许可下提供的。获得“公共许可证”的人可以了解他们在“公共许可证”下的权利。 + +坦率地说,遵守这个条件是崩溃的。几乎所有的开源许可证都有这样的"归属"条件。系统和装机软件的制作者往往明白,他们需要为自己的每一个发行版本编制一个通知文件或 "许可证信息 "屏幕,并附上库和组件的许可证文本副本。项目统筹基金会在教授这些做法方面起到了重要作用。但是网络开发者作为一个整体,还没有得到备忘录。这不能用缺乏工具来解释,工具有很多,也不能用 npm 和其他资源库中的包的高度模块化来解释,它们统一了许可证信息的元数据格式。所有好的 JavaScript minifiers 都有命令行标志来保存许可证头注释。其他工具会从包树中连接`LICENSE`文件。这实在是无可厚非。 + +##### 免责声明 + +> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. 
+
+美国几乎每个州都颁布了某个版本的《统一商法典》(UCC),它是规范商业交易的示范法。UCC 的第 2 条(在加利福尼亚州称“第 2 分部”)规范商品销售合同,从批量采购的二手车,到大宗工业化学品,再到整座制造工厂,都在其列。
+
+UCC 中关于销售合同的某些规则是强制性的,无论买卖双方是否愿意,它们始终适用。其余的规则只是“默认值”:除非买卖双方以书面形式选择退出,否则 UCC 就认定他们希望以 UCC 文本中的规则作为交易的基准。默认规则中就包括默示的“保证”,即卖方就所售商品的质量和可用性向买方作出的承诺。
+
+对于 MIT 许可证这样的公共许可证究竟是合同(许可方与被许可方之间可强制执行的协议),还是单纯的许可(单方面的授权,但可能附带条件),理论上争议很大。至于软件是否算作“商品”、从而触发 UCC 的规则,争议则小一些。而在许可人中间,有一点毫无争议:如果他们免费送出的软件出了毛病、造成问题、无法工作或惹出其他麻烦,他们不想因此被起诉索要巨额赔偿。而这恰恰与“默示保证”的三条默认规则背道而驰:
+
+ 1. [UCC 第 2-314 节][26]的“适销性”默示保证,是指“商品”(即软件)的质量至少达到平均水平,包装和标记妥当,并适合其通常的使用目的。仅当提供该软件的人是此类软件的“商人”时,此保证才适用,也就是说,他们以经营这类软件为业,并表现出对它的熟练程度。
+
+ 2. 当卖方知道买方依赖其提供适用于某一特定目的的商品时,[UCC 第 2-315 节][27]的“特定目的适用性”默示担保即告生效。商品需要真的“适合”该目的才行。
+
+ 3. 默示的“非侵权”保证并不是 UCC 的一部分,而是一般合同法的常见特征。如果买方收到的商品被证明侵犯了他人的知识产权,这一默示承诺可以保护买方。如果以 MIT 许可证分发的软件实际上并不属于试图许可它的人,或者落入他人持有的专利范围,就会出现这种情况。
+
+UCC 的[第 2-316(3) 节][28]要求,选择退出或者说“排除”适销性和特定目的适用性默示保证的措辞必须“显眼”。反过来,“显眼”是指其书写或排版方式能够引起注意,与那种故意让粗心的消费者漏看的蝇头小字恰恰相反。州法律可能会对非侵权免责声明提出类似的“显眼”要求。
+
+长期以来,律师们都有一种错觉,认为把任何东西写成“全大写”就满足了“显眼”的要求。这是不正确的。法院曾批评律师界假装如此,而且大多数人都认为,全大写更多是在阻止阅读,而不是促使阅读。同样的,大多数开源许可证的文本都把担保免责声明设置为全大写,部分原因是在纯文本的 `LICENSE` 文件里,这是唯一能显眼的方式。我更喜欢使用星号或其他 ASCII 艺术,但那样的机会早已一去不返。
+
+##### 责任限制
+
+> In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the Software.
+ +麻省理工学院许可证允许 "免费 "使用软件,但法律并不认为接受免费许可证的人在出错时放弃了起诉的权利,而要责怪许可人。"责任限制",通常与 "损害赔偿排除条款 "搭配使用,其作用与许可证很像,是不起诉的承诺。但这些都是保护许可人免受被许可人起诉的保护措施。 + +一般来说,法院对责任限制和损害赔偿排除条款的解读非常谨慎,因为这些条款可以将大量的风险从一方转移到另一方。为了保护社会的重大利益,让人们有办法在法庭上纠正错误,他们 "严格解释 "限制责任的语言,在可能的情况下对受其保护的一方进行解读。责任限制必须具体才能成立。特别是在 "消费者 "合同和其他放弃起诉权的人缺乏复杂性或讨价还价能力的情况下,法院有时会拒绝尊重那些似乎被埋没在视线之外的语言。部分是出于这个原因,部分是出于习惯,律师们往往也会给责任限制以全称处理。 + +再往下看,"责任限制 "部分是对被许可人可以起诉的金额的上限。在开源许可证中,这个上限总是没有钱,0元,"不负责任"。相比之下,在商业许可证中,它通常是过去 12 个月内支付的许可证费用的倍数,尽管它通常是经过谈判的。 + +“排除”部分具体列出了各种法律主张,即请求赔偿的理由,许可人无法使用。 像许多其他法律形式一样,MIT 许可证 提到了“违反合同”的行为(即违反合同)和“侵权”的行为。 侵权规则是防止粗心或恶意伤害他人的一般规则。 如果您在发短信时在路上撞人,则表示您犯了侵权行为。 如果您的公司销售的有问题的耳机会烧伤人们的耳朵,则说明您的公司已经侵权。 如果合同没有明确排除侵权索赔,那么法院有时会在合同中使用排除语言,以仅阻止合同索赔。 出于很好的考虑, MIT 许可证抛出“或其他”字样,只是为了抓住奇怪的海事法或其他奇特的法律主张。 + +“产生于、来自或与之相关”这句话是法律起草人固有的、焦虑的不安全感的反复出现的症状。关键是,任何与软件有关的诉讼都包含在限制和排除范围内。在偶然的情况下,有些东西可以“产生”,但不能“产生”,或者“与之相关”,把这三者都放在形式上感觉更好,所以把它们打包。更不用说,任何法院被迫在表格的这一部分分头讨论的问题,都必须对每一个问题给出不同的含义,前提是专业的起草者不会在一行中使用不同的词来表示同一件事。更不用说,在实践中,如果法院对一开始不受欢迎的限制感觉不好,那么他们将更愿意狭隘地解读范围触发器。但我离题了。同样的语言出现在数以百万计的合同中。 + +#### 总结 + +所有这些诡辩都有点像在进教堂的路上吐口香糖。MIT 许可证是一个法律经典且有效。 它绝不是所有软件 IP 弊病的灵丹妙药,尤其是它早在几十年前就已经出现的软件专利灾难。但麻省理工学院风格的许可证发挥了令人钦佩的作用,实现了一个狭隘的目的,用最少的谨慎的法律工具组合扭转了版权、销售和合同法等棘手的默认规则。在计算的大背景下,它的寿命是惊人的。麻省理工学院的许可证已经和将要超过绝大多数的软件许可证。我们只能猜测,当它最终失宠时,它将提供多少年忠实的法律服务。对于那些付不起自己律师的人来说,这是特别慷慨的。 + +我们已经看到,我们今天所知道的麻省理工学院的许可证是一套具体的、标准化的条款,最终将秩序带入了一个混乱的机构特定的、随意的变化。 + +我们已经看到了它的方法,归因和版权通知通知知识产权管理的做法,学术,标准,商业和基础机构。 + +我们已经看到了麻省理工学院的许可证是如何免费授予所有人软件许可的,但前提是要保护许可人不受担保和责任的影响。 + +我们已经看到,尽管有一些刻薄的措辞和律师的矫揉造作,但一百七十一个小词可以完成大量的法律工作,通过密集的知识产权和合同丛林为开源软件扫清了一条道路。 +我非常感谢所有花时间阅读这篇相当长的文章的人,让我知道他们发现它很有用,并帮助改进它。一如既往,我欢迎您通过[e-mail][29]、[Twitter][30]和[GitHub][31]发表评论。 + + +有很多人问,他们在哪里可以读到更多的东西,或者找到其他许可证,比如 GNU 通用公共许可证或 Apache 2.0 许可证。无论你的兴趣是什么,我都会向你推荐以下书籍: + + * Andrew M. St. Laurent 的 [Understanding Open Source & Free Software Licensing][32], 来自 O’Reilly. 
+ +我先说这本,因为虽然它有些过时,但它的方法也最接近上面使用的逐行方法。O'Reilly 已经把它[放在网上][33]。 + + * Heather Meeker’s [Open (Source) for Business][34] + +在我看来,这是迄今为止关于 GN U通用公共许可证和更广泛的 copyleft 的最佳著作。这本书涵盖了历史、许可证、它们的发展,以及兼容性和合规性。这本书是我给那些考虑或处理 GPL 的客户的书。 + + * Larry Rosen’s [Open Source Licensing][35], from Prentice Hall. + +一本很棒的第一本书,也可以免费[在线阅读][36]。对于从零开始的程序员来说,这是开源许可和相关法律的最好介绍。这本在一些具体细节上也有点过时了,但 Larry 的许可证分类法和对开源商业模式的简洁总结经得起时间的考验。 + + + +所有这些都对我作为一个开源许可律师的教育至关重要。它们的作者都是我的职业英雄。请读一读吧 — K.E.M + +我将此文章基于 [Creative Commons Attribution-ShareAlike 4.0 license][37] 授权 + + +-------------------------------------------------------------------------------- + +via: https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html + +作者:[Kyle E. Mitchell][a] +选题:[lujun9972][b] +译者:[bestony](https://github.com/bestony) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://kemitchell.com/ +[b]: https://github.com/lujun9972 +[1]: http://spdx.org/licenses/MIT +[2]: https://fedoraproject.org/wiki/Licensing:MIT?rd=Licensing/MIT +[3]: https://opensource.org +[4]: https://spdx.org +[5]: http://spdx.org/licenses/ +[6]: https://spdx.org/licenses/JSON +[7]: https://www.npmjs.com/package/licensor +[8]: https://www.apache.org/licenses/LICENSE-2.0 +[9]: https://www.gnu.org/licenses/gpl-3.0.en.html +[10]: http://spdx.org/licenses/BSD-4-Clause +[11]: https://spdx.org/licenses/BSD-3-Clause +[12]: https://spdx.org/licenses/BSD-2-Clause +[13]: http://www.isc.org/downloads/software-support-policy/isc-license/ +[14]: http://worksmadeforhire.com/ +[15]: https://www.eclipse.org/legal/epl-v10.html +[16]: https://www.apache.org/licenses/#clas +[17]: https://wiki.eclipse.org/ECA +[18]: https://en.wikipedia.org/wiki/Feist_Publications,_Inc.,_v._Rural_Telephone_Service_Co. 
+[19]: https://writing.kemitchell.com/2017/02/16/Against-Legislating-the-Nonobvious.html +[20]: http://developercertificate.org/ +[21]: https://github.com/berneout/berneout-pledge +[22]: https://github.com/berneout/authors-certificate +[23]: https://en.wikipedia.org/wiki/Immediately-invoked_function_expression +[24]: https://www.govinfo.gov/app/details/USCODE-2017-title35/USCODE-2017-title35-partIII-chap28-sec271 +[25]: https://www.govinfo.gov/app/details/USCODE-2017-title17/USCODE-2017-title17-chap1-sec106 +[26]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2314.&lawCode=COM +[27]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2315.&lawCode=COM +[28]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2316.&lawCode=COM +[29]: mailto:kyle@kemitchell.com +[30]: https://twitter.com/kemitchell +[31]: https://github.com/kemitchell/writing/tree/master/_posts/2016-09-21-MIT-License-Line-by-Line.md +[32]: https://lccn.loc.gov/2006281092 +[33]: http://www.oreilly.com/openbook/osfreesoft/book/ +[34]: https://www.amazon.com/dp/1511617772 +[35]: https://lccn.loc.gov/2004050558 +[36]: http://www.rosenlaw.com/oslbook.htm +[37]: https://creativecommons.org/licenses/by-sa/4.0/legalcode \ No newline at end of file diff --git a/translated/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md b/translated/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md deleted file mode 100644 index fe8778a44a..0000000000 --- a/translated/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md +++ /dev/null @@ -1,78 +0,0 @@ -GCC:优化 Linux、互联网和一切 -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gcc-paper.jpg?itok=QFNUZWsV) - -软件如果不能被电脑运行,那么它就是无用的。而在处理 run-time 性能的问题上,即使是最有才华的开发人员也会受编译器的支配——因为如果没有可靠的编译器工具链,就无法构建任何重要的东西。GNU Compiler Collection(GCC)提供了一个健壮、成熟和高性能的工具,帮助您充分发挥你代码的潜能。经过几十年成千上万人的开发,GCC 成为了世界上最受尊敬的编译器之一。如果您正在构建应用程序而不使用 
GCC,那么您可能错过了最佳解决方案。 - -根据 LLVM.org 的说法,GCC 是“如今事实上的标准开源编译器”[1],它可以从内核开始的构建完整的系统。GCC 支持超过 60 种硬件平台,包括 ARM、Intel、AMD、IBM POWER、SPARC、HP PA-RISC 和 IBM Z,以及各种操作环境,包括 GNU、Linux、Windows、macOS、FreeBSD、NetBSD、OpenBSD、DragonFly BSD、Solaris、AIX、HP- ux 和 RTEMS。它提供了高度兼容的 C/C++ 编译器,并支持流行的 C 库,如 GNU C Library(glibc)、Newlib、musl 和各种 BSD 操作系统中包含的 C 库,以及 Fortran、Ada 和 GO 语言的前端。GCC 还可以作为一个交叉编译器,为运行编译器的平台以外的其他平台创建可执行代码。GCC 是紧密集成的 GNU 工具链的核心组件,由 GNU 项目产生,包括 glibc、Binutils 和 GNU 调试器(GDB)。 - -“一直以来我最喜欢的 GNU 工具是 GCC。在开发工具非常昂贵的时候,GCC 是第二个 GNU 工具,它使一个社区能够编写和构建所有其他工具。这个工具一手改变了行业,导致了自由软件运动的诞生,因为一个好的、自由的编译器是一个社区软件的先决条件。”—— Red Hat 开源和标准团队的 Dave Neary。[2] - -### 优化 Linux - -作为 Linux 内核源代码的默认编译器,GCC 提供了可靠、稳定的性能以及正确构建内核所需的扩展。GCC 是流行的 Linux 发行版的标准组件,比如 ArchLinux、CentOS、Debian、Fedora、openSUSE 和 Ubuntu,在这些发行版中,GCC 通常用来编译支持系统的组件。这包括 Linux 使用的默认库(如 libc、libm、libintl、libssh、libssl、libcrypto、libexpat、libpthread 和 ncurses),这些库依赖于 GCC 来提供可靠性和高性能,并且使应用程序和系统程序可以访问 Linux 内核功能。发行版中包含的许多应用程序包也是用 GCC 构建的,例如 Python、Perl、Ruby、nginx、Apache HTTP 服务器、OpenStack、Docker 和 OpenShift。各个 Linux 发行版使用 GCC 构建的大量代码组成了内核、库和应用程序软件。对于 openSUSE 发行版,几乎 100% 的本机代码是由 GCC 构建的,包括 6135 个源程序包,5705 个共享库和 38927 个可执行文件。这相当于每周编译 24540 个源代码包。[3] - -Linux 发行版中包含的 GCC 的基本版本用于创建定义系统应用程序二进制接口(ABI)的内核和库。用户空间开发者可以选择下载 GCC 的最新稳定版本,以获得高级功能、性能优化和可用性改进。Linux 发行版提供安装说明或预构建的工具链,用于部署最新版本的 GCC 以及其他 GNU 工具,这些工具有助于提高开发人员的工作效率和缩短部署时间。 - -### 优化互联网 - -GCC 是嵌入式系统中被广泛采用的核心编译器之一,支持为日益增长的物联网设备开发软件。GCC 提供了许多扩展,使其非常适合嵌入式系统软件开发,包括使用编译器内置的细粒度控制、#pragmas、内联汇编和以应用程序为中心的命令行选项。GCC 支持广泛的嵌入式体系结构,包括 ARM、AMCC、AVR、Blackfin、MIPS、RISC-V、Renesas Electronics V850、NXP 和 Freescale Power 处理器,可以生成高效、高质量的代码。GCC 提供的交叉编译功能是开源社区至关重要的功能,预构建的交叉编译工具链[4]是被广泛运用的工具。例如,GNU ARM 嵌入式工具链是集成和已被验证的软件包,具有 ARM 嵌入式 GCC 编译器、库和裸机软件开发所需的其他工具。这些工具链可用于在 Windows、Linux 和 macOS 主机操作系统上对流行的 ARM Cortex-R 和 Cortex-M 处理器进行交叉编译,这些处理器已在数百亿台支持互联网的设备中发布。[5] - -GCC 支持云计算,为需要直接管理计算资源的软件提供可靠的开发平台,如数据库和 web 服务引擎以及备份和安全软件。GCC 完全兼容 C++ 11 和 C++ 14,为 C++ 17 和 C++ 2a [6] 提供实验支持,创建具有可靠调试信息的性能目标代码。使用 GCC 的应用程序的一些例子包括:MySQL 
数据库管理系统,它依赖于 Linux 的 GCC [7];Apache HTTP 服务器,它建议使用 GCC [8];Bacula,一个企业级网络备份工具,它依赖于 GCC。[9] - -### 优化一切 - -为了研究和开发用于高性能计算(HPC)的科学代码,GCC 提供了成熟的 C、C++ 和 Fortran 前端,并支持基于指令的并行编程的 OpenMP 和 OpenACC api。因为 GCC 提供了跨计算环境的可移植性,它使得代码能够更容易地在各种新的和遗留的客户机和服务器平台上进行测试。GCC 为 C, C++ 和 Fortran 编译器提供了 OpenMP 4.0 的完整支持,为 C 和 C++ 编译器提供了 OpenMP 4.5 完整支持。对于 OpenACC, GCC 支持大多数 2.5 规范和性能优化,并且是唯一提供 [OpenACC][1] 支持的非商业、非学术编译器。 - -代码性能是这个社区的一个重要参数,GCC 提供了一个坚实的性能基础。Colfax Research 于 2017 年 11 月发表的一篇论文评估了 C++ 编译器在 OpenMP 4.x 指令下并行编译代码的速度和代码运行速度。图 1 描绘了不同编译器编译并使用单个线程运行时计算内核的相对性能。使 G++ 的性能标定为 1.0,将其他编译器性能值规范化处理。 - -![performance][3] - -图1 为由不同编译器编译的每个内核的相对性能。(单线程,越高越好)。 - -[Used with permission][4] - -论文总结道:“GNU 编译器在我们的测试中也做得很好。G++ 在六种情况中的三种情况下生成的代码速度是第二快的,并且在编译时间方面是最快的编译器之一。”[10] - -### 谁在用 GCC? - -在 JetBrains 2018 年的开发者生态状况调查中,在接受调查的 6000 名开发者中,66% 的 C++ 程序员和 73% 的 C 程序员经常使用 GCC。[11] 以下简要介绍 GCC 的优点,正是这些优点使它在开发人员社区中如此受欢迎。 - - * 对于需要为各种新的和遗留的计算平台和操作环境编写代码的开发人员,GCC 提供了对最广泛的硬件和操作环境的支持。硬件供应商提供的编译器主要侧重于对其产品的支持,而其他开源编译器在所支持的硬件和操作系统方面则受到很大限制。[12] - - * 有各种各样的基于 GCC 的预构建工具链,这对嵌入式系统开发人员特别有吸引力。这包括 GNU ARM 嵌入式工具链和 Bootlin 网站上提供的 138 个预编译交叉编译器工具链。[13] 虽然其他开源编译器(如 Clang/LLVM)可以取代现有交叉编译工具链中的 GCC,但开发人员需要完全重建这些工具链。[14] - - * GCC 通过成熟的编译器平台向应用程序开发人员提供可靠、稳定的性能。在 AMD EPYC 平台上用 GCC 8/9 与 LLVM Clang 6/7 编译器基准测试的文章提供了 49 个基准测试的结果,这些测试的编译器在三个优化级别上运行。使用 “-O3 -march=native”级别的 GCC 8.2 RC1 在 34% 的时间里排在第一位,而在相同的优化级别 LLVM Clang 6.0 在 20% 的时间里赢得了第二位。[15] - - * GCC 为编译调试 [16] 提供了改进的诊断,并为运行调试提供了准确而有用的信息。GCC 与 GDB 紧密集成,GDB 是一个成熟且功能齐全的工具,它提供“不间断”调试,可以在断点处停止单个线程。 - - * GCC 是一个受良好支持的平台,它有一个活跃的、承诺的社区,支持当前版本和以前的两个版本。对于每年发布一次的计划,这为一个版本提供了两年的支持。 - -### GCC:仍然在继续优化 Linux,互联网和所有事情 - -GCC 作为世界级的编译器继续向前发展。GCC 的最新版本是 8.2,于 2018 年 7 月发布,增加了对即将推出的 Intel CPU、更多 ARM CPU 的硬件支持,并提高了 AMD 的 ZEN CPU 的性能。初始 C17 支持已经添加到 C++ 2A 的初始工作中。诊断继续得到改进,包括更好的发射诊断,改进的定位、定位范围,并修复提示,特别是在 C++ 前端。Red Hat 的 David Malcolm 在 2018 年 3 月撰写的博客概述了 GCC 8 中的可用性改进。[17] - -新的硬件平台继续依赖 GCC 工具链进行软件开发,例如 RISC-V,这是一种对机器学习、人工智能(AI)和物联网细分市场感兴趣的免费开放 ISA。GCC 仍然是 Linux 系统持续开发的关键组件。Clear 
Linux Project for Intel Architecture 是一个针对云、客户端和物联网用例构建的新兴发行版,它提供了一个很好的示例,说明如何使用和改进 GCC 编译器技术来提高基于 Linux 的系统的性能和安全性。GCC 还被用于微软 Azure Sphere 的应用程序开发,这是一个基于 Linux 的物联网应用程序操作系统,最初支持基于 ARM 的联发科 MT3620 处理器。在开发下一代程序员方面,GCC 也是树莓派的 Windows 工具链的核心组件,树莓派是一种运行基于 Debian 的 GNU/Linux 的低成本嵌入式板,用于促进学校和发展中国家的基础计算机科学教学。 - -GCC 于 1987 年 3 月 22 日由 GNU 项目的创始人 richardstallman 首次发布,它被认为是一个重大突破,因为它是第一个作为自由软件发布的可移植的 ANSI C 优化编译器。GCC 由来自世界各地的程序员组成的社区在指导委员会的指导下维护,该指导委员会确保对项目进行广泛的、有代表性的监督。GCC 的社区方法是它的优势之一,它形成了一个由开发人员和用户组成的庞大而多样化的社区,为项目做出贡献并提供支持。根据 Open Hub,“GCC 是世界上最大的开源团队之一,在 Open Hub 上的所有项目团队中排名前 2%。”[18] - -关于 GCC 的许可问题,人们进行了大量的讨论,其中大多数是混淆而不是启发。GCC 在 GNU 通用公共许可证版本 3 或更高版本下发布,但运行时库例外。这是一个 copyleft 许可,这意味着衍生作品只能在相同的许可条款下分发。GPLv3 旨在保护 GCC 不被私有化,并要求对 GCC 代码的更改可以自由公开地进行。对于“最终用户”来说,编译器与其他编译器完全相同;使用 GCC 对您自己的代码所做的任何许可选择都没有区别。[19] - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/10/gcc-optimizing-linux-internet-and-everything - -作者:[Margaret Lewis][a] -选题:[lujun9972][b] -译者:[Chao-zhi](https://github.com/Chao-zhi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/margaret-lewis -[b]: https://github.com/lujun9972 -[1]: https://www.openacc.org/tools -[2]: /files/images/gccjpg-0 -[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gcc_0.jpg?itok=HbGnRqWX "performance" -[4]: https://www.linux.com/licenses/category/used-permission diff --git a/translated/tech/20190924 Integrate online documents editors, into a Python web app using ONLYOFFICE.md b/translated/tech/20190924 Integrate online documents editors, into a Python web app using ONLYOFFICE.md deleted file mode 100644 index d41b4534cb..0000000000 --- a/translated/tech/20190924 Integrate online documents editors, into a Python web app using ONLYOFFICE.md +++ /dev/null @@ -1,365 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: 
(stevenzdg988) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Integrate online documents editors, into a Python web app using ONLYOFFICE) -[#]: via: (https://opensourceforu.com/2019/09/integrate-online-documents-editors-into-a-python-web-app-using-onlyoffice/) -[#]: author: (Aashima Sharma https://opensourceforu.com/author/aashima-sharma/) - -利用 **ONLYOFFICE** 将在线文档编辑器集成到 Python Web 应用程序中 -====== - -[![][1]][2] - -_[ONLYOFFICE][3]是根据 GNU AGPL v.3 许可条款分发的开源协作办公套件。它包含三个用于文本文档,电子表格和演示文稿的编辑器,并具有以下功能:_ - - * 查看,编辑和协同编辑 `.docx, .xlsx , .pptx` 文件。 OOXML作为一种核心格式,可确保与 Microsoft Word,Excel 和 PowerPoint 文件的高度兼容性。 - * 通过内部转换为 OOXML,编辑其他流行格式(`.odt,.rtf,.txt,.html,.ods,.csv,.odp`)。 - * 熟悉的选项卡式界面。 - * 协作工具:两种协同编辑模式(快速和严谨),跟踪更改,评论和集成聊天。 - * 灵活的访问权限管理:完全访问权限,只读,审阅,表单填写和评论。 - * 使用 API ( 应用程序接口:Application Programming Interface ) 构建附加组件。 - * 250 种可用语言和象形字母表。 - -通过 API ( 应用程序接口:Application Programming Interface ) ,开发人员可以将 ONLYOFFICE 编辑器集成到网站和利用程序设计语言编写的应用程序中,并能配置和管理编辑器。 - -要集成 ONLYOFFICE 编辑器,我们需要一个集成应用程序来连接编辑器( ONLYOFFICE 文档服务器)和服务。 要在您的界面中使用编辑器,因该授予 ONLYOFFICE 以下权限: - - * 添加并执行自定义代码。 - * 用于下载和保存文件的匿名访问权限。这意味着编辑器仅与服务器端的服务通信,而不包括客户端的任何用户授权数据(浏览器cookies)。 - * 在 UI(用户界面) 添加新按钮(例如,“在 ONLYOFFICE 中打开”,“在 ONLYOFFICE 中编辑”)。 - * 开启一个新页面,ONLYOFFICE 可以在其中执行脚本以添加编辑器。 - * 能够指定文档服务器连接设置。 - -流行的协作解决方案的成功集成案例有很多,如 Nextcloud,ownCloud,Alfresco,Confluence和SharePoint,都是通过 ONLYOFFICE 提供的官方即用型连接器实现的。 - -实际的集成案例之一是 ONLYOFFICE 编辑器与以 C# 编写的开源协作平台的集成。 该平台具有文档和项目管理,CRM (客户关系管理) ,电子邮件聚合器,日历,用户数据库,博客,论坛,民意调查,Wiki和即时通讯程序的功能。 - -将在线编辑器与 CRM 和项目模块集成,您可以: - - * 文档关联到 CRM 时机和容器,项目任务和讨论,甚至创建一个单独的文件夹,其中包含与项目相关的文档,电子表格和演示文稿。 - * 直接在 CRM 或项目模块中创建新的文档,工作表和演示文稿。 - * 打开和编辑关联的文档,或者下载和删除。 - * 将联系人从 CSV 文件批量导入到 CRM 中,并将客户数据库导出为 CSV 文件。 - -在“邮件”模块中,您可以关联存储在“文档模块”中的文件,或者将指向所需文档的链接插入到邮件正文中。 当 ONLYOFFICE 用户收到带有附件的文档的消息时,他们可以:下载附件,在浏览器中查看文件,打开文件进行编辑或将其保存到“文档模块”。 如上所述,如果格式不同于 OOXML ,则文件将自动转换为 `.docx / .xlsx / .pptx`,并且其副本也将以原始格式保存。 - -在本文中,您将看到ONLYOFFICE 与最流行的编程语言之一的 Python 编写的文档管理系统的集成过程。 
以下步骤将向您展示如何创建所有必要的部分,以使在 DMS(Document Management System)界面内的文档中可以进行协同工作成为可能:查看,编辑,协同编辑,保存文件和用户访问管理,并可以作为服务的示例集成到 Python 应用程序中。 - -**1\. What you will need 你需要什么** - -首先,创建集成过程的关键组件:[_ONLYOFFICE 文档服务器_][4] 和用 Python 编写的文件管理系统。 - -1.1 要安装 ONLYOFFICE 文档服务器,您可以从多个安装选项中进行选择:编译 GitHub 上可用的源代码,使用 `.deb` 或 `.rpm` 软件包亦或 Docker 映像。 -我们推荐使用下面这条命令利用 Docker 映像安装文档服务器和所有必需的依赖。请注意,选择此方法,您需要安装最新的 Docker 版本。 - -``` -docker run -itd -p 80:80 onlyoffice/documentserver-de -``` - -1.2我们需要利用 Python 开发 DMS。 如果已经拥有一个,请检查它是否满足以下条件: - - * 包含需要打开以查看/编辑的保留文件 - * 允许下载文件 - -对于该应用程序,我们将使用 Bottle 框架。我们将使用以下命令将其安装在工作目录中: - -``` -pip install bottle -``` - -然后我们创建应用程序代码 *main.py* 和模板 _index.tpl_。 -我们将以下代码添加到 *main.py* 文件中: - -``` -from bottle import route, run, template, get, static_file # connecting the framework and the necessary components -@route('/') # setting up routing for requests for / -def index(): -return template('index.tpl') # showing template in response to request -run(host="localhost", port=8080) # running the application on port 8080 -``` - -一旦我们运行该应用程序,点击 就会在浏览器上呈现一个空白页面 。 -为了使文档服务器能够创建新文档,添加默认文件并在模板中生成其名称列表,我们应该创建一个文件夹 _files_ 并将3种类型文件(.docx,.xlsx 和 .pptx)放入其中。 - -要读取这些文件的名称,我们使用 _listdir_ 组件(模块)。 - -``` -from os import listdir -``` - -现在让我们为文件夹中的所有文件名创建一个变量: - -``` -sample_files = [f for f in listdir('files')] -``` - -要在模板中使用此变量,我们需要通过 _template_ 方法传递它: - -``` -def index(): -return template('index.tpl', sample_files=sample_files) - -Here’s this variable in the template: -%for file in sample_files: -
-{{file}} -
-% end -``` - -我们重新启动应用程序以查看页面上的文件名列表。 -使这些文件可用于所有应用程序用户的方法如下: - -``` -@get("/files/") -def show_sample_files(filepath): -return static_file(filepath, root="files") -``` - -**2\. 如何利用 Python 应用程序在ONLYOFFICE中查看文档** -所有组件准备就绪后,让我们添加函数以使编辑者可以利用应用接口操作。 -第一个选项使用户可以打开和查看文档。连接模板中的文档编辑器 API : -``` - -``` - -_editor_url_ 是文档编辑器的链接接口。 -打开每个文件以供查看的按钮: - -``` - -``` - -现在我们需要添加带有 _id_ 的 `div` 标签,打开文档编辑器: - -``` -
-``` - -要打开编辑器,必须调用调用一个函数: - -``` - -``` - -DocEditor 函数有两个参数:将在其中打开编辑器的元素 `id` 和带有编辑器设置的 `JSON`。 -在此示例中,使用了以下必需参数: - - * _documentType_ 由其格式标识(`.docx,.xlsx,.pptx` 用于相应的文本,电子表格和演示文稿) - * _document.url_ 是您要打开的文件链接。 - * _editorConfig.mode_。 - -我们还可以添加将在编辑器中显示的 _title_。 -接下来,我们可以在 Python 应用程序中查看文档。 - -**3\. 如何在 Python 应用中利用 ONLYOFFICE 编辑文档** -首先,添加 “Edit”(编辑) 按钮: - -``` - -``` - -然后创建一个新功能,打开文件进行编辑。类似于查看功能。 -现在创建3个函数: - -``` - -``` - -_destroyEditor_ 被调用以关闭一个打开的编辑器。 -您可能会注意到,_edit()_ 函数中缺少 _editorConfig_ 参数,因为默认情况下它的值是 **{"mode":"edit"}.* - -现在,我们拥有了打开文档以在 Python 应用程序中进行协同编辑的所有功能。 - -**4\. 如何在 Python 应用中利用 ONLYOFFICE 协同编辑文档** -通过在编辑器中设置对同一文档使用相同的 `document.key` 来实现协同编辑。 如果没有此键,则每次打开文件时,编辑器都会创建编辑会话。 - -为每个文档设置唯一键,以使用户连接到同一编辑会话时进行协同编辑。 密钥格式应为以下格式:_filename +"_key"_。下一步是将其添加到当前文档的所有配置中。 - -``` -document: { -url: "host_url" + '/' + filepath, -title: filename, -key: filename + '_key' -}, -``` - -**5\. 如何在 Python 应用中利用 ONLYOFFICE 保存文档** -每次我们更改并保存文件时,ONLYOFFICE 都会存储其所有版本。 让我们仔细看看它是如何工作的。 关闭编辑器后,文档服务器将构建要保存的文件版本并将请求发送到 `callbackUrl` 地址。 该请求包含 `document.key`和指向刚刚构建的文件的链接。 -`document.key` 用于查找文件的旧版本并将其替换为新版本。 由于这里没有任何数据库,因此仅使用 `callbackUrl` 发送文件名。 -在 _editorConfig.callbackUrl_ 的设置中指定 _callbackUrl_ 参数并将其添加到 _edit()method_ 中: - -``` -function edit(filename) { -const filepath = 'files/' + filename; -if (editor) { -editor.destroyEditor() -} -editor = new DocsAPI.DocEditor("editor", -{ -documentType: get_file_type(filepath), -document: { -url: "host_url" + '/' + filepath, -title: filename, -key: filename + '_key' -} -, -editorConfig: { -mode: 'edit', -callbackUrl: "host_url" + '/callback' + '&filename=' + filename // add file name as a request parameter -} -}); -} -``` - -编写一种方法,在获取到 POST 请求发送到 */callback* 地址后将保存文件: - -``` -@post("/callback") # processing post requests for /callback -def callback(): -if request.json['status'] == 2: -file = requests.get(request.json['url']).content -with open('files/' + request.query['filename'], 'wb') as f: -f.write(file) -return "{\"error\":0}" 
-``` - -*# status 2* 是已生成的文件 -当我们关闭编辑器时,新版本的文件将保存到存储器中。 - -**6\. 如何在 Python 应用中利用 ONLYOFFICE 管理用户** -如果应用中有用户,并且您需要查看谁在编辑文档,请在编辑器的配置中输入其标识符(`id`和`name`)。 -在界面中添加选择用户的功能: - -``` - -``` - -如果在标记 _&lt;script&gt;_ 的开头添加对函数 *pick_user()* 的调用,负责初始化函数自身 `id`和`name`变量。 - -``` -function pick_user() { -const user_selector = document.getElementById("user_selector"); -this.current_user_name = user_selector.options[user_selector.selectedIndex].text; -this.current_user_id = user_selector.options[user_selector.selectedIndex].value; -} -``` - -使用 _editorConfig.user.id_ 和 _editorConfig.user.name_ 来配置用户设置。将这些参数添加到文件编辑函数中的编辑器配置中。 - -``` -function edit(filename) { -const filepath = 'files/' + filename; -if (editor) { -editor.destroyEditor() -} -editor = new DocsAPI.DocEditor("editor", -{ -documentType: get_file_type(filepath), -document: { -url: "host_url" + '/' + filepath, -title: filename -}, -editorConfig: { -mode: 'edit', -callbackUrl: "host_url" + '/callback' + '?filename=' + filename, -user: { -id: this.current_user_id, -name: this.current_user_name -} -} -}); -} -``` - -使用这种方法,您可以将 ONLYOFFICE 编辑器集成到用 Python 编写的应用程序中,并获得用于在文档上进行协同工作的所有必要工具。有关更多集成示例(Java,Node.js,PHP,Ruby),请参考官方的 [_API文档_][5]。 - -**By: Maria Pashkina** - --------------------------------------------------------------------------------- - -via: https://opensourceforu.com/2019/09/integrate-online-documents-editors-into-a-python-web-app-using-onlyoffice/ - -作者:[Aashima Sharma][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/stevenzdg988) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensourceforu.com/author/aashima-sharma/ -[b]: https://github.com/lujun9972 -[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Typist-composing-text-in-laptop.jpg?resize=696%2C420&ssl=1 (Typist composing text in laptop) -[2]: 
https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Typist-composing-text-in-laptop.jpg?fit=900%2C543&ssl=1
-[3]: https://www.onlyoffice.com/en/
-[4]: https://www.onlyoffice.com/en/developer-edition.aspx
-[5]: https://api.onlyoffice.com/editors/basic
diff --git a/translated/tech/20191227 The importance of consistency in your Python code.md b/translated/tech/20191227 The importance of consistency in your Python code.md
deleted file mode 100644
index c48b0e379a..0000000000
--- a/translated/tech/20191227 The importance of consistency in your Python code.md
+++ /dev/null
@@ -1,70 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (stevenzdg988)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (The importance of consistency in your Python code)
-[#]: via: (https://opensource.com/article/19/12/zen-python-consistency)
-[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
-
-Python 代码前后一致的重要性
-======
-本文是 Python 之禅特别系列的一篇,聚焦于第 12、13、14 条规范:歧义与明确各自扮演的角色。
-![两台动画电脑挥舞着手臂打招呼,一台手臂脱落][1]
-
-用户界面设计的基本规范是[最小惊讶原则][2]。它说,当用户执行一个操作时,程序应该做最不会让用户感到惊讶的事情。这和孩子们喜欢一遍又一遍地读同一本书的原因是一样的:没有什么比能够预测、并且预测成真更让人欣慰的了。
-
-在 [ABC 语言][3](Python 的灵感来源)的发展过程中,一个重要的见解是:编程语言的设计也是用户界面设计,应当使用与 UI 设计师相同的工具。值得庆幸的是,从那以后,越来越多的语言采用了 UI 设计中的可用性和人体工程学概念,即使应用得并不严格。
-
-这就引出了 [Python 之禅][4] 中的三条规范。
-
-### 当存在多种可能,不要尝试去猜测。
-
-**1 + "1"** 的结果应该是什么?
**“11”** 和 **2** 都是猜测。这个表达式是有歧义的(_ambiguous_):不管作出哪种处理,都至少会让一部分人感到意外。
-
-一些语言选择去猜测。在 JavaScript 中,结果为 **“11”**;在 Perl 中,结果为 **2**;在 C 语言中,结果自然是空字符串。面对歧义,JavaScript、Perl 和 C 都选择了猜测。
-
-在 Python 中,这会引发 **TypeError**:一个不会被静默放过的错误。捕获 **TypeError** 的做法并不常见:它通常会终止程序,或至少终止当前任务(例如,在大多数 Web 框架中,它会终止对当前请求的处理)。
-
-Python 拒绝去猜测 **1 + "1"** 的含义。程序员必须把意图写明确:**1 + int("1")**,即 **2**;或者 **str(1) + "1"**,即 **"11"**;或者 **"1"[1:]**,即空字符串。通过拒绝猜测,Python 使程序更具可预测性。
-
-### 尽量找一种,最好是唯一一种明显的解决方案。
-
-预测也可以反过来看:给定一个任务,你能预知实现它的代码会是什么样子吗?当然,不可能预测得完美无缺,毕竟编程是一项创造性的工作。
-
-但是,没有必要刻意提供多种冗余的方式来实现同一个目标。从某种意义上说,有些解决方案确实“更好”,或者说“更 Pythonic”。
-
-欣赏 Python 美学的一部分,就在于可以就“哪种解决方案更好”展开健康的讨论。甚至可以各持己见、各写各的风格,甚至可以坦然地求同存异。但在这一切之下,必须有这样一种信念:正确的解决方案终将浮出水面。我们希望,通过就实现目标的最佳方式达成共识,最终达成真正的一致。
-
-### 虽然这并不容易,因为你不是 Python 之父。
-
-这是一个重要的警告:最初,完成一个任务的最佳方法往往并不明显。观念在演进,_Python_ 也在演进。逐块读取文件的最佳方法,恐怕要等到 Python 3.8,使用 [walrus 运算符][5]。
-
-逐块读取文件是一个常见的任务,但在 Python 诞生以来的近 30 年里,它都不曾有过“唯一的最佳方法”。
-
-1998 年,当我从 Python 1.5.2 开始使用 Python 时,还没有一种最佳的逐行读取文件的方法。多年来,判断字典中是否存在某个键的最佳方法是使用 **.has_key**,直到 **in** 操作符出现,才取而代之。
-
-只有认识到,找到那种(最好也是唯一一种)实现目标的方法可能需要 30 年的尝试,Python 才能不断地继续寻找这些方法。这种把 30 年视为可接受时长的历史观,对美国人来说往往颇感陌生,毕竟这个国家也不过存在了 200 多年。
-
-根据 Python 之禅的这一部分,无论是 Python 的创造者 [Guido van Rossum][6] 这位荷兰人,还是著名的计算机科学家 [Edsger W. 
Dijkstra][7],都有不同的世界观。一定程度上欧洲人对时间的重视是欣赏它的必要条件。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/12/zen-python-consistency - -作者:[Moshe Zadka][a] -选题:[lujun9972][b] -译者:[stevenzdg988](https://github.com/stevenzdg988) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/moshez -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0 (Two animated computers waving one missing an arm) -[2]: https://www.uxpassion.com/blog/the-principle-of-least-surprise/ -[3]: https://en.wikipedia.org/wiki/ABC_(programming_language) -[4]: https://www.python.org/dev/peps/pep-0020/ -[5]: https://www.python.org/dev/peps/pep-0572/#abstract -[6]: https://en.wikipedia.org/wiki/Guido_van_Rossum -[7]: http://en.wikipedia.org/wiki/Edsger_W._Dijkstra diff --git a/sources/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md b/translated/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md similarity index 57% rename from sources/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md rename to translated/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md index 93b17b0fd3..b2e80cc517 100644 --- a/sources/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md +++ b/translated/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md @@ -1,96 +1,90 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Ansible Playbooks Quick Start Guide with Examples) -[#]: via: (https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/) -[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) +[#]: collector: "lujun9972" +[#]: 
translator: "MjSeven" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " +[#]: subject: "Ansible Playbooks Quick Start Guide with Examples" +[#]: via: "https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/" +[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/" -Ansible Playbooks Quick Start Guide with Examples +Ansible 剧本快速入门指南 ====== -We have already written two articles about Ansible, this is the third article. +我们已经写了两篇关于 Ansible 的文章,这是第三篇。 -If you are new to Ansible, I advise you to read the two topics below, which will teach you the basics of Ansible and what it is. +如果你是 Ansible 新手,我建议你阅读下面这两篇文章,它会教你一些 Ansible 的基础以及它是什么。 - * **Part-1: [How to Install and Configure Ansible on Linux][1]** - * **Part-2: [Ansible ad-hoc Command Quick Start Guide][2]** + * **第一篇: [在 Linux 如何安装和配置 Ansible][1]** + * **第二篇: [Ansible ad-hoc 命令快速入门指南][2]** +如果你已经阅读过了,那么在阅读本文时你才不会感到突兀。 +### 什么是 Ansible 剧本? -If you have finished them, you will feel the continuity as you read this article. +剧本比临时命令模式更强大,而且完全不同。 -### What is the Ansible Playbook? +它使用了 **"/usr/bin/ansible-playbook"** 二进制文件,并且提供丰富的特性使得复杂的任务变得更容易。 -Playbooks are much more powerful and completely different way than ad-hoc command mode. +如果你想经常运行一个任务,剧本是非常有用的。 -It uses the **“/usr/bin/ansible-playbook”** binary. It provides rich features to make complex task easier. +此外,如果你想在服务器组上执行多个任务,它也是非常有用的。 -Playbooks are very useful if you want to run a task often. +剧本由 YAML 语言编写。YAML 代表一种标记语言,它比其它常见的数据格式(如 XML 或 JSON)更容易读写。 -Also, this is useful if you want to perform multiple tasks at the same time on the group of server. - -Playbooks are written in YAML language. YAML stands for Ain’t Markup Language, which is easier for humans to read and write than other common data formats such as XML or JSON. - -The Ansible Playbook Flow Chart below will tell you its detailed structure. 
+下面这张 Ansible 剧本流程图将告诉你它的详细结构。 ![][3] -### Understanding the Ansible Playbooks Terminology +### 理解 Ansible 剧本的术语 - * **Control Node:** The machine where Ansible is installed. It is responsible for managing client nodes. - * **Managed Nodes:** List of hosts managed by the control node - * **Playbook:** A Playbook file contains a set of procedures used to automate a task. - * **Inventory:** The inventory file contains information about the servers you manage. - * **Task:** Each play has multiple tasks, tasks that are executed one by one against a given machine (it a host or multiple host or a group of host). - * **Module:** Modules are a unit of code that is used to gather information from the client node. - * **Role:** Roles are ways to automatically load some vars_files, tasks, and handlers based on known file structure. - * **Play:** Each playbook has multiple plays, and a play is the implementation of a particular automation from beginning to end. - * **Handlers:** This helps you reduce any service restart in a play. Lists of handler tasks are not really different from regular tasks, and changes are notified by notifiers. If the handler does not receive any notification, it will not work. + * **控制节点:** Ansible 安装的机器,它负责管理客户端节点。 + * **被控节点:** 被控制节点管理的主机列表。 + * **剧本:** 一个剧本文件,包含一组自动化任务。 + * **主机清单:*** 这个文件包含有关管理的服务器的信息。 + * **任务:** 每个剧本都有大量的任务。任务在指定机器上依次执行(一个主机或多个主机)。 + * **模块:** 模块是一个代码单元,用于从客户端节点收集信息。 + * **角色:** 角色是根据已知文件结构自动加载一些变量文件、任务和处理程序的方法。 + * **Play:** 每个剧本含有大量的 play, 一个 play 从头到尾执行一个特定的自动化。 + * **Handlers:** 它可以帮助你减少在剧本中的重启任务。处理程序任务列表实际上与常规任务没有什么不同,更改由通知程序通知。如果处理程序没有收到任何通知,它将不起作用。 +### 基本的剧本是怎样的? +下面是一个剧本的模板: -### How Does the Basic Playbook looks Like? - -Here’s how the basic playbook looks. 
- -``` ---- [YAML file should begin with a three dash] -- name: [Description about a script] - hosts: group [Add a host or host group] - become: true [It requires if you want to run a task as a root user] - tasks: [What action do you want to perform under task] - - name: [Enter the module options] - module: [Enter a module, which you want to perform] - module_options-1: value [Enter the module options] +```yaml +--- [YAML 文件应该以三个破折号开头] +- name: [脚本描述] + hosts: group [添加主机或主机组] + become: true [如果你想以 root 身份运行任务,则标记它] + tasks: [你想在任务下执行什么动作] + - name: [输入模块选项] + module: [输入要执行的模块] + module_options-1: value [输入模块选项] module_options-2: value . module_options-N: value ``` -### How to Understand Ansible Output +### 如何理解 Ansible 的输出 -The Ansible Playbook output comes with 4 colors, see below for color definitions. +Ansible 剧本的输出有四种颜色,下面是具体含义: - * **Green:** **ok –** If that is correct, the associated task data already exists and configured as needed. - * **Yellow: changed –** Specific data has updated or modified according to the needs of the tasks. - * **Red: FAILED –** If there is any problem while doing a task, it returns a failure message, it may be anything and you need to fix it accordingly. - * **White:** It comes with multiple parameters + * **绿色:** **ok –** 代表成功,关联的任务数据已经存在,并且已经根据需要进行了配置。 + * **黄色: 已更改 –** 指定的数据已经根据任务的需要更新或修改。 + * **红色: 失败–** 如果在执行任务时出现任何问题,它将返回一个失败消息,它可能是任何东西,你需要相应地修复它。 + * **白色:** 表示有多个参数。 +为此,创建一个剧本目录,将它们都放在同一个地方。 - -To do so, create a playbook directory to keep them all in one place. - -``` +```bash $ sudo mkdir /etc/ansible/playbooks ``` -### Playbook-1: Ansible Playbook to Install Apache Web Server on RHEL Based Systems +### 剧本-1:在 RHEL 系统上安装 Apache Web 服务器 -This sample playbook allows you to install the Apache web server on a given target node. 
+这个示例剧本允许你在指定的目标机器上安装 Apache Web 服务器: -``` +```bash $ sudo nano /etc/ansible/playbooks/apache.yml --- @@ -108,17 +102,17 @@ $ sudo nano /etc/ansible/playbooks/apache.yml state: started ``` -``` +```bash $ ansible-playbook apache1.yml ``` ![][3] -### How to Understand Playbook Execution in Ansible +### 如何理解 Ansible 中剧本的执行 -To check the syntax error, run the following command. If it finds no error, it only shows the given file name. If it detects any error, you will get an error as follows, but the contents may differ based on your input file. +使用以下命令来查看语法错误。如果没有发现错误,它只显示剧本文件名。如果它检测到任何错误,你将得到一个如下所示的错误,但内容可能根据你的输入文件而有所不同。 -``` +```bash $ ansible-playbook apache1.yml --syntax-check ERROR! Syntax Error while loading YAML. @@ -143,11 +137,11 @@ Should be written as: # ^--- all spaces here. ``` -Alternatively, you can check your ansible-playbook content from online using the following url @ [YAML Lint][4] +或者,你可以使用这个 URL [YAML Lint][4] 在线检查 Ansible 剧本内容。 -Run the following command to perform a **“Dry Run”**. When you run a ansible-playbook with the **“–check”** option, it does not make any changes to the remote machine. Instead, it will tell you what changes they have made rather than create them. +执行以下命令进行**“演练”**。当你运行带有 **"-check"** 选项的剧本时,它不会对远程机器进行任何修改。相反,它会告诉你它将要做什么改变但不是真的执行。 -``` +```bash $ ansible-playbook apache.yml --check PLAY [Install and Configure Apache Webserver] ******************************************************************** @@ -169,9 +163,9 @@ node1.2g.lab : ok=3 changed=2 unreachable=0 failed=0 s node2.2g.lab : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` -If you want detailed information about your ansible playbook implementation, use the **“-vv”** verbose option. It shows what it really does to gather this information. 
+如果你想要知道 ansible 剧本实现的详细信息,使用 **"-vv"** 选项,它会展示如何收集这些信息。 -``` +```bash $ ansible-playbook apache.yml --check -vv ansible-playbook 2.9.2 @@ -212,11 +206,11 @@ node1.2g.lab : ok=3 changed=2 unreachable=0 failed=0 s node2.2g.lab : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` -### Playbook-2: Ansible Playbook to Install Apache Web Server on Ubuntu Based Systems +### 剧本-2:在 Ubuntu 系统上安装 Apache Web 服务器 -This sample playbook allows you to install the Apache web server on a given target node. +这个示例剧本允许你在指定的目标节点上安装 Apache Web 服务器。 -``` +```bash $ sudo nano /etc/ansible/playbooks/apache-ubuntu.yml --- @@ -250,13 +244,13 @@ $ sudo nano /etc/ansible/playbooks/apache-ubuntu.yml enabled: yes ``` -### Playbook-3: Ansible Playbook to Install a List of Packages on Red Hat Based Systems +### 剧本-3:在 Red Hat 系统上安装软件包列表 -This sample playbook allows you to install a list of packages on a given target node. +这个示例剧本允许你在指定的目标节点上安装软件包。 -**Method-1:** +**方法-1:** -``` +```bash $ sudo nano /etc/ansible/playbooks/packages-redhat.yml --- @@ -273,9 +267,9 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat.yml - htop ``` -**Method-2:** +**方法-2:** -``` +```bash $ sudo nano /etc/ansible/playbooks/packages-redhat-1.yml --- @@ -292,9 +286,9 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat-1.yml - htop ``` -**Method-3: Using Array Variable** +**方法-3: 使用数组变量** -``` +```bash $ sudo nano /etc/ansible/playbooks/packages-redhat-2.yml --- @@ -309,11 +303,11 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat-2.yml with_items: "{{ packages }}" ``` -### Playbook-4: Ansible Playbook to Install Updates on Linux Systems +### 剧本-4:在 Linux 系统上安装更新 -This sample playbook allows you to install updates on your Linux systems, running Red Hat and Debian-based client nodes. 
+这个示例剧本允许你在基于 Red Hat 或 Debian 的 Linux 系统上安装更新。 -``` +```bash $ sudo nano /etc/ansible/playbooks/security-update.yml --- @@ -336,7 +330,7 @@ via: https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/ 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[MjSeven](https://github.com/MjSeven) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20200522 A beginner-s guide to web scraping with Python.md b/translated/tech/20200522 A beginner-s guide to web scraping with Python.md deleted file mode 100644 index 0283173f68..0000000000 --- a/translated/tech/20200522 A beginner-s guide to web scraping with Python.md +++ /dev/null @@ -1,493 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (stevenzdg988) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (A beginner's guide to web scraping with Python) -[#]: via: (https://opensource.com/article/20/5/web-scraping-python) -[#]: author: (Julia Piaskowski https://opensource.com/users/julia-piaskowski) - -利用 Python 爬网站的新手指南 -====== -通过基本的 Python 工具获取爬完整 HTML 网站的实践经验。 - -![HTML代码][1] - -有很多很棒的书可以帮助您学习 Python ,但是谁真正读了这些(书名从A至Z)呢?(剧透:不是我)。 - -许多人觉得教学书籍很有用,但我通常不会从头到尾地阅读一本书来学习。我通过做一个项目,努力的,弄清楚一些内容,然后再读另一本书来学习。因此,暂时丢掉书,让我们一起学习 Python。 - -接下来是我的第一个 Python 抓取项目向导。假设在 Python 和 HTML 的知识处于很低水平。这旨在说明如何使用 Python 的 [requests][2] 库访问网页内容,如何使用 [BeatifulSoup4][3]库,以及 `JSON` 和 [pandas][4] 库解析网页内容。我将简要介绍 [Selenium][5] 库,但我不会深入研究如何使用该库——该主题应该作为它的指南。最终,我希望向您展示一些技巧和提示,以减少网络爬取过程中遇到问题而不知所措。 - -### 安装依赖 - -我的 [GitHub 存储库][6] 中提供了本指南的所有资源。如果需要安装 Python3 的帮助,请查看 [Linux][7],[Windows][8] 和 [Mac][9] 的教程。 - - -``` -$ python3 -m venv -$ source venv/bin/activate -$ pip install requests bs4 pandas -``` - -如果您喜欢使用 JupyterLab ,则可以使用 [notebook][10] 运行所有代码。[安装 JupyterLab][11] 有很多方法,这是其中一种: - - -``` -# from the same virtual environment as above, run: -$ pip install jupyterlab -``` - 
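
作为一个小补充(非原文内容,只是一个最小示意),装好依赖后可以用标准库 importlib 自查这些第三方包是否就位。脚本里列出的模块名就是上面 pip 安装的三个包:

```python
import importlib.util

def missing_modules(names):
    """返回当前环境中缺失的模块名(只做查找,不会真正导入)。"""
    return [n for n in names if importlib.util.find_spec(n) is None]

# 本教程用到的第三方依赖;开工前运行一次即可确认环境就绪
deps = ["requests", "bs4", "pandas"]
print("缺失的依赖:", missing_modules(deps))
```

如果输出的列表非空,回到上面的 `pip install` 步骤补装即可。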
-### 为网站抓取项目设定目标 - -现在我们已经安装了依赖项,但是爬取网页需要做什么? - -让我们后退一步,确保使目标清晰。下面是成功完成网站爬取项目需求列表。 - - * 收集有效的构建网站爬取的信息。 - * 基于法律和遵循道德规范的收集利用网站爬取工具下载的信息。 - * 了解如何在 HTML 代码中找到目标信息。 - * 利用恰当的工具:在此情况下,需要使用 **BeautifulSoup** 库和 **requests** 库。 - * 知道(或愿意去学习)如何解析 JSON 对象。 - * 有足够的 **pandas** 数据处理技能。 - - - -关于 HTML 的注释:HTML 是运行在 Internet 上的“猛兽”,但我们最需要了解的是标签的工作方式。标签是一对由尖括号包围关键词(一般成对出现,其内容在两个标签中间)。比如,这是一个伪标签,称为 “`pro-tip`”: - - -``` -<pro-tip> All you need to know about html is how tags work </pro-tip> -``` - -我们可以通过调用标签 “`pro-tip`” 来访问其中的信息("All you need to know…")。本教程将进一步介绍如何查找和访问标签。要进一步了解 HTML 基础知识,请查看 [本文][12]。 - -### 在网站爬取项目中查找内容 - -利用网站爬取采集数据比利用其他方法更合适。接下来的就是我的教程。 - -没有可用于数据(处理)的公共 API。通过 API 抓取结构化数据会容易得多,这将有助于阐明收集数据的合法性和道德规范。这就需要大量的采用规则的结构化数据,重复的格式可以证明这一点。爬网站可能会很痛苦。 `BeautifulSoup(bs4)`使操作更容易,但是却不可避免地需要定制。不需要格式相同的数据,但这确实使事情变得更容易。当前存在的 “边际案例”(偏离规范)越多,爬取将越复杂。 - -免责声明:我没有参加过法律培训;以下内容不打算作为正式的法律建议。 - -关于合法性,访问大量有价值信息可能令人兴奋,但正因为如此可能不意味着允许这样做。 - -值得庆幸的是,有一些公共信息可以指导我们的道德规范和网站爬取工具。大多数网站都有与该网站关联的 [robots.txt][13] 文件,指出允许哪些爬取活动,哪些不被允许。它主要用于与搜索引擎(最终的网站抓取工具)进行交互。然而,网站上的许多信息都被视为公共信息。因此,将 `robots.txt` 文件视为一组建议,不如看成是具有法律约束力的文档。 `robots.txt` 文件未涉及道德规范下的数据收集和使用等主题。 - -在开始爬取项目之前,问自己以下问题: - - * 我是否在爬取版权材料? - * 我的爬取活动会危害个人隐私吗? - * 我是否发送了大量可能会使服务器超载或损坏的请求? - * 爬取是否会暴露我不拥有的知识产权? - * 是否有管理使用网站的服务条款,我是否遵循这些条款? - * 我的爬取活动会减少原始数据的价值吗?(例如,我是否打算按原样重新打包数据,或者可能从原始来源中抽取(占用)网站流量)? 
- - - -当我爬取一个网站时,请确保可以对所有这些问题回答 “否”。 - -要深入了解法律问题,请参阅2018年出版的 [Krotov 和 Silva 撰写的Web爬虫的合法性和道德性][14] 和 [Sellars 的二十年 Web 爬虫和计算机欺诈与滥用法案][15]。 - -### 现在开始爬网站 - -经过上述评估,我想到了一个项目。 我的目标是爬取爱达荷州所有 Family Dollar 商店的地址。 这些商店在农村地区规模很大,因此我想了解有多少家这样的商店。 - -起点是 [Family Dollar 的位置页面][16] - -![爱达荷州 Family Dollar 所在地页面][17] - -首先,让我们在 Python 虚拟环境中加载先决条件。 此处的代码旨在添加到 Python 文件(如果要查找名称,则为 _scraper.py_)或在 JupyterLab 的单元中运行。 - - -``` -import requests # for making standard html requests -from bs4 import BeautifulSoup # magical tool for parsing html data -import json # for parsing data -from pandas import DataFrame as df # premier library for data organization -``` - -接下来,我们从目标 URL 中请求数据。 - - -``` -page = requests.get("") -soup = BeautifulSoup(page.text, 'html.parser') -``` - -BeautifulSoup 将 HTML 或 XML 内容转换为复杂树对象。这是我们将使用的几种常见对象类型。 - - * **BeautifulSoup** ——解析的内容 - * **Tag**——标准 HTML 标记,您将遇到 `bs4`元素的主要类型 - * **NavigableString**——标签内的文本字符串 - * **Comment**—— NavigableString 的一种特殊类型 - - - -当我们查看 **requests.get()** 输出时,还有更多要考虑的问题。我仅使用 **page.text()** 将请求的页面转换为可读的内容,但是还有其他输出类型: - - * **page.text()** 表示文本(最常见) - * **page.content()** 用于逐字节输出 - * **page.json()** 用于 JSON 对象 - * **page.raw()** 用于原始套接字响应(没了) - - - -我只在使用拉丁字母的纯英语网站上操作。 **requests** 中的默认编码设置可以很好地解决这一问题。然而,除了纯英语网站之外,就是更大的互联网世界。为了确保 **requests** 正确解析内容,您可以设置文本的编码: - - -``` -page = requests.get(URL) -page.encoding = 'ISO-885901' -soup = BeautifulSoup(page.text, 'html.parser') -``` - -仔细研究 BeautifulSoup 标签,我们看到: - - * `bs4` 元素 **tag** 正在捕获 HTML 标记 - * 它具有名称和属性,可以像字典一样访问:**tag['someAttribute']** - * 如果标签具有相同名称的多个属性,则仅访问第一个实例。 - * 可通过 **tag.contents** 访问子标签。 - * 所有标签后代都可以通过 **tag.contents** 访问。 - * 你始终可以使用以下字符串:**re.compile("your_string")** 访问作为字符串的所有内容。 - - - -### 确定如何提取相应内容 - -警告:此过程可能令人沮丧。 - -网站爬取过程中的提取可能是一个令人生畏的充满了错误过程。我认为解决此问题的最佳方法是从一个有代表性的示例开始然后进行扩展(此原理对于任何编程任务都是适用的)。查看页面的 HTML 源代码至关重要。有很多方法可以做到这一点。 - -你可以在终端中使用 Python 查看页面的整个源代码(不建议使用)。运行此代码需要您自担风险: - - -``` -print(soup.prettify()) -``` - 
-虽然打印出页面的整个源代码可能适用于某些教程中显示的消遣示例,但大多数现代网站的页面上都有大量内容。甚至404页面也可能充满了页眉,页脚等代码。 - -通常,在您喜欢的浏览器中通过 **View Page Source** 浏览源代码是最容易的(单击右键,然后选择"view page source"(查看页面源代码))。这是找到目标内容的最可靠方法(稍后我将解释原因)。 - -![Family Dollar 页面源代码][18] - -  - -在这种情况下,我需要在广阔的 HTML 海洋中找到我的目标内容——地址,城市,州和邮政编码。通常,对页面源(**ctrl+ F**)的简单搜索就会产生目标位置所在的位置。一旦我实际看到目标内容的示例(至少一个商店的地址),便会找到将该内容与其他内容区分开的属性或标签。 - -首先,我需要在爱达荷州 Family Dollar 商店中收集不同城市的网址,并访问这些网站以获取地址信息。这些网址似乎都包含在 **href** 标记中。太棒了!我将尝试使用 **find_all** 命令进行搜索: - - -``` -dollar_tree_list = soup.find_all('href') -dollar_tree_list -``` - -搜索 **href** 不会产生任何结果,该死。这可能已失败,因为 **href** 嵌套在 **itemlist** 类中。对于下一次尝试,请搜索 **item_list**。由于“`class`”是 Python 中的保留字,因此使用 **class_**来作为替代。**soup.find_all()** 原来是 `bs4` 函数的瑞士军刀。 - - -``` -dollar_tree_list = soup.find_all(class_ = 'itemlist') -for i in dollar_tree_list[:2]: -  print(i) -``` - -有趣的是,我发现搜索一个特定类的方法一般是一种成功。通过找出对象的类型和长度,我们可以了解更多有关对象的信息。 - - -``` -type(dollar_tree_list) -len(dollar_tree_list) -``` - -The content from this BeautifulSoup "ResultSet" can be extracted using **.contents**. This is also a good time to create a single representative example. 
-可以使用 **.contents** 从 BeautifulSoup “ResultSet” 中提取内容。这也是创建单个代表性示例的好时机。 - -``` -example = dollar_tree_list[2] # a representative example -example_content = example.contents -print(example_content) -``` - -使用 **.attr** 查找该对象内容中存在的属性。注意:**.contents** 通常会返回一个项目的精确的列表,因此第一步是使用方括号符号为该项目建立索引。 - - -``` -example_content = example.contents[0] -example_content.attrs -``` - -现在,我可以看到 **href** 是一个属性,可以像字典项一样提取它: - - -``` -example_href = example_content['href'] -print(example_href) -``` - -### 整合网站抓取工具 - -所有的探索为我们提供了前进的方法。这是弄清楚上面逻辑的清理版本。 - - -``` -city_hrefs = [] # initialise empty list - -for i in dollar_tree_list: -    cont = i.contents[0] -    href = cont['href'] -    city_hrefs.append(href) - -#  check to be sure all went well -for i in city_hrefs[:2]: -  print(i) -``` - -输出的内容是一个关于抓取爱达荷州 Family Dollar 商店 URL 的列表 - -也就是说,我仍然没有获得地址信息!现在,需要抓取每个城市的 URL 以获得此信息。因此,我们使用一个具有代表性的示例重新开始该过程。 - - -``` -page2 = requests.get(city_hrefs[2]) # again establish a representative example -soup2 = BeautifulSoup(page2.text, 'html.parser') -``` - -![Family Dollar 地图和代码][19] - -地址信息嵌套在 **type="application/ld+json"** 里。经过大量的地理位置抓取之后,我开始认识到这是用于存储地址信息的一般结构。幸运的是,**soup.find_all()** 开启了利用 **type** 搜索。 - - -``` -arco = soup2.find_all(type="application/ld+json") -print(arco[1]) -``` - -地址信息在第二个列表成员中!原来如此! 
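
顺带一提,如果想在不安装 bs4 的情况下验证这一解析思路,标准库的 html.parser 加 json 也能完成同样的提取。下面是一个最小示意(页面和地址数据均为虚构,仅演示 `application/ld+json` 块的定位与解析流程):

```python
import json
from html.parser import HTMLParser

class LdJsonParser(HTMLParser):
    """收集页面里所有 <script type="application/ld+json"> 块的文本。"""
    def __init__(self):
        super().__init__()
        self._grab = False   # 是否正处于目标 script 标签内部
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._grab = True

    def handle_data(self, data):
        if self._grab:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._grab:
            self.blocks.append("".join(self._buf))
            self._buf = []
            self._grab = False

# 虚构的示例页面,地址数据仅用于演示
html_page = """<html><head><script type="application/ld+json">
{"@type": "Store",
 "address": {"streetAddress": "123 Main St", "addressLocality": "Boise"}}
</script></head></html>"""

parser = LdJsonParser()
parser.feed(html_page)
store = json.loads(parser.blocks[0])
print(store["address"])  # {'streetAddress': '123 Main St', 'addressLocality': 'Boise'}
```

真实页面里往往有多个 ld+json 块,和正文中一样,需要按索引挑出包含 address 信息的那一个。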
- -使用 **.contents** 提取(从第二个列表项中)内容(这是过滤后的合适的默认操作)。同样,由于输出的内容是一个列表,因此我为该列表项建立了索引: - - -``` -arco_contents = arco[1].contents[0] -arco_contents -``` - -喔,看起来不错。此处提供的格式与 JSON 格式一致(而且,该类型的名称中确实包含 “**json**”)。 JSON对象的行为就像是带有嵌套字典的字典。一旦你熟悉利用其去工作,它实际上是一种不错的格式(当然,它比一长串 RegEx 命令更容易编程)。尽管从结构上看起来像一个 JSON 对象,但它仍然是 `bs4` 对象,需要通过编程方式转换为 JSON 对象才能对其进行访问: - - -``` -arco_json =  json.loads(arco_contents) - -[/code] [code] - -type(arco_json) -print(arco_json) -``` - -在该内容中,有一个被调用的 **address** 键,该键要求地址信息在一个比较小的嵌套字典里。可以这样检索: - - -``` -arco_address = arco_json['address'] -arco_address -``` - -好吧,请大家注意。现在我可以遍历存储爱达荷州 URL 的列表: - - -``` -locs_dict = [] # initialise empty list - -for link in city_hrefs: -  locpage = requests.get(link)   # request page info -  locsoup = BeautifulSoup(locpage.text, 'html.parser') -      # parse the page's content -  locinfo = locsoup.find_all(type="application/ld+json") -      # extract specific element -  loccont = locinfo[1].contents[0]   -      # get contents from the bs4 element set -  locjson = json.loads(loccont)  # convert to json -  locaddr = locjson['address'] # get address -  locs_dict.append(locaddr) # add address to list -``` - -### 用 **Pandas** 整理我们的网站抓取结果 - -我们在字典中装载了大量数据,但是还有一些额外的无用项,它们会使重用数据变得比需要的更为复杂。要执行最终的数据组织,我们需要将其转换为 pandas 数据框架,删除不需要的列 “**@type**” 和 “**country**”),并检查前五行以确保一切正常。 - - -``` -locs_df = df.from_records(locs_dict) -locs_df.drop(['@type', 'addressCountry'], axis = 1, inplace = True) -locs_df.head(n = 5) -``` - -确保保存结果!! 
- - -``` -df.to_csv(locs_df, "family_dollar_ID_locations.csv", sep = ",", index = False) -``` - -我们做到了!所有爱达荷州 Family Dollar 商店都有一个用逗号分隔的列表。多令人兴奋。 - -### Selenium 和数据抓取的一点说明 - -[Selenium][5] 是用于与网页自动交互的常用应用。为了解释为什么有时必须使用它,让我们来看一个使用 Walgreens 网站的示例。 **Inspect Element** 为在浏览器显示内容提供代码: - -![Walgreens 位置页面和代码][20] - -  - -虽然 **View Page Source** 提供了有关 **requests** 将获得什么内容的代码: - -![Walgreens 源代码][21] - -如果这两个不一致,则有一些插件可以修改源代码——因此,应在将页面加载到浏览器后对其进行访问。 **requests** 不能做到这一点,但是 **Selenium** 可以做到。 - -Selenium 需要 Web 驱动程序来检索内容。实际上,它会打开 Web 浏览器,并收集此页面的内容。 Selenium 功能强大——它可以通过多种方式与加载的内容进行交互(请阅读文档)。使用 **Selenium** 获取数据后,继续像以前一样使用 **BeautifulSoup**: - - -``` -url = "[https://www.walgreens.com/storelistings/storesbycity.jsp?requestType=locator\&state=ID][22]" -driver = webdriver.Firefox(executable_path = 'mypath/geckodriver.exe') -driver.get(url) -soup_ID = BeautifulSoup(driver.page_source, 'html.parser') -store_link_soup = soup_ID.find_all(class_ = 'col-xl-4 col-lg-4 col-md-4') -``` - -对于 Family Dollar 这种情形,我不需要 Selenium,但是当呈现的内容与源代码不同时,我确实会保留使用 Selenium。 - -### 小结 - -总之,使用网站抓取来完成有意义的任务时: - - * 耐心一点 - * 查阅手册(它们非常有帮助) - - - -如果您对答案感到好奇: - -![Family Dollar 位置图][23] - -美国有很多 Family Dollar 商店。 - -完整的源代码是: - - -``` -import requests -from bs4 import BeautifulSoup -import json -from pandas import DataFrame as df - -page = requests.get("") -soup = BeautifulSoup(page.text, 'html.parser') - -# find all state links -state_list = soup.find_all(class_ = 'itemlist') - -state_links = [] - -for i in state_list: -    cont = i.contents[0] -    attr = cont.attrs -    hrefs = attr['href'] -    state_links.append(hrefs) - -# find all city links -city_links = [] - -for link in state_links: -    page = requests.get(link) -    soup = BeautifulSoup(page.text, 'html.parser') -    familydollar_list = soup.find_all(class_ = 'itemlist') -    for store in familydollar_list: -        cont = store.contents[0] -        attr = cont.attrs -        city_hrefs = attr['href'] -        city_links.append(city_hrefs) -# 
to get individual store links -store_links = [] - -for link in city_links: -    locpage = requests.get(link) -    locsoup = BeautifulSoup(locpage.text, 'html.parser') -    locinfo = locsoup.find_all(type="application/ld+json") -    for i in locinfo: -        loccont = i.contents[0] -        locjson = json.loads(loccont) -        try: -            store_url = locjson['url'] -            store_links.append(store_url) -        except: -            pass - -# get address and geolocation information -stores = [] - -for store in store_links: -    storepage = requests.get(store) -    storesoup = BeautifulSoup(storepage.text, 'html.parser') -    storeinfo = storesoup.find_all(type="application/ld+json") -    for i in storeinfo: -        storecont = i.contents[0] -        storejson = json.loads(storecont) -        try: -            store_addr = storejson['address'] -            store_addr.update(storejson['geo']) -            stores.append(store_addr) -        except: -            pass - -# final data parsing -stores_df = df.from_records(stores) -stores_df.drop(['@type', 'addressCountry'], axis = 1, inplace = True) -stores_df['Store'] = "Family Dollar" - -df.to_csv(stores_df, "family_dollar_locations.csv", sep = ",", index = False) -``` - -\-- -_作者注释:本文是2020年2月9日在俄勒冈州波特兰的[我在PyCascades的演讲][24]的改编。_ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/5/web-scraping-python - -作者:[Julia Piaskowski][a] -选题:[lujun9972][b] -译者:[stevenzdg988](https://github.com/stevenzdg988) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/julia-piaskowski -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_html_code.png?itok=VjUmGsnl (HTML code) -[2]: https://requests.readthedocs.io/en/master/ -[3]: 
https://beautiful-soup-4.readthedocs.io/en/latest/ -[4]: https://pandas.pydata.org/ -[5]: https://www.selenium.dev/ -[6]: https://github.com/jpiaskowski/pycas2020_web_scraping -[7]: https://opensource.com/article/20/4/install-python-linux -[8]: https://opensource.com/article/19/8/how-install-python-windows -[9]: https://opensource.com/article/19/5/python-3-default-mac -[10]: https://github.com/jpiaskowski/pycas2020_web_scraping/blob/master/example/Familydollar_location_scrape-all-states.ipynb -[11]: https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html -[12]: https://opensource.com/article/20/4/build-websites -[13]: https://www.contentkingapp.com/academy/robotstxt/ -[14]: https://www.researchgate.net/publication/324907302_Legality_and_Ethics_of_Web_Scraping -[15]: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3221625 -[16]: https://locations.familydollar.com/id/ -[17]: https://opensource.com/sites/default/files/uploads/familydollar1.png (Family Dollar Idaho locations page) -[18]: https://opensource.com/sites/default/files/uploads/familydollar2.png (Family Dollar page source code) -[19]: https://opensource.com/sites/default/files/uploads/familydollar3.png (Family Dollar map and code) -[20]: https://opensource.com/sites/default/files/uploads/walgreens1.png (Walgreens location page and code) -[21]: https://opensource.com/sites/default/files/uploads/walgreens2.png (Walgreens source code) -[22]: https://www.walgreens.com/storelistings/storesbycity.jsp?requestType=locator\&state=ID -[23]: https://opensource.com/sites/default/files/uploads/family_dollar_locations.png (Family Dollar locations map) -[24]: https://2020.pycascades.com/talks/adventures-in-babysitting-webscraping-for-python-and-html-novices/ diff --git a/translated/tech/20200629 LaTeX typesetting part 2 (tables).md b/translated/tech/20200629 LaTeX typesetting part 2 (tables).md new file mode 100644 index 0000000000..4b91e8cc0d --- /dev/null +++ b/translated/tech/20200629 LaTeX 
typesetting part 2 (tables).md @@ -0,0 +1,349 @@ +[#]: collector: (lujun9972) +[#]: translator: (Chao-zhi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (LaTeX typesetting part 2 (tables)) +[#]: via: (https://fedoramagazine.org/latex-typesetting-part-2-tables/) +[#]: author: (Earl Ramirez https://fedoramagazine.org/author/earlramirez/) + + +LaTex 排版 (2):表格 +====== + +![][1] + +LaTeX 提供了许多工具来创建和定制表格,在本系列中,我们将使用 tabular 和 tabularx 环境来创建和定制表。 + +### 基础表格 + +要创建表,只需指定环境 `\begin{tabular}{ 列选项}` + +``` +\begin{tabular}{c|c} + Release &Codename \\ \hline + Fedora Core 1 &Yarrow \\ + Fedora Core 2 &Tettnang \\ + Fedora Core 3 &Heidelberg \\ + Fedora Core 4 &Stentz \\ +\end{tabular} +``` + +![Basic Table][2] + +在上面的示例中,花括号中的 ”{c|c}” 表示文本在列中的位置。下表总结了位置参数及其说明。 + +参数 | 位置 +|:---:|:--- +c | 将文本置于中间 +l | 将文本左对齐 +r | 将文本右对齐 +p{width} | 文本对齐单元格顶部 +m{width} | 文本对齐单元格中间 +b{width} | 文本对齐单元格底部 + +> m{width} 和 b{width} 都要求在最前面指定数组包。 + +使用上面的例子,让我们来详细讲解使用的要点,并描述您将在本系列中看到的更多选项 + +选项 | 意义 +|:-:|:-| +& | 定义每个单元格,这个符号仅用于第二列 +\ | 这将终止该行并开始一个新行 +\| | 指定表格中的垂直线(可选) +\hline | 指定表格中水平线(可选) +*{num}{form} | 当您有许多列时,可以使用这个,并且是限制重复的有效方法 +\|\| | 指定表格中垂直双线 + +### 定制表格 + +学会了这些选项,让我们使用这些选项创建一个表。 + +``` +\begin{tabular}{*{3}{|l|}} +\hline + \textbf{Version} &\textbf{Code name} &\textbf{Year released} \\ +\hline + Fedora 6 &Zod &2006 \\ \hline + Fedora 7 &Moonshine &2007 \\ \hline + Fedora 8 &Werewolf &2007 \\ +\hline +\end{tabular} +``` + +![Customise Table][3] + +### 管理长文本 + +如果列中有很多文本,那么它的格式就不好处理,看起来也不好看。 + +下面的示例显示了文本的格式长度,我们将在导言区中使用 “blindtext”,以便生成示例文本。 + +``` +\begin{tabular}{|l|l|}\hline + Summary &Description \\ \hline + Test &\blindtext \\ +\end{tabular} +``` + +![Default Formatting][4] + +正如您所看到的,文本超出了页面宽度;但是,有几个选项可以克服这个问题。 + + * 指定列宽,例如 m{5cm} + * 利用 tablarx 环境,这需要在导言区中引用 tablarx 宏包。 + + +#### 使用列宽管理长文本 + +通过指定列宽,文本将被包装为如下示例所示的宽度。 + +``` +\begin{tabular}{|l|m{14cm}|} \hline + Summary &Description \\ \hline + Test &\blindtext \\ \hline +\end{tabular}\vspace{3mm} +``` + 
+![Column width][5] + +#### 使用 tabularx 管理长文本 + +在我们利用表格之前,我们需要在导言区中加上它。TABLARX 方法见以下示例 + +`\begin{tabularx}{ 宽度}{列选项}` + + +``` +\begin{tabularx}{\textwidth}{|l|X|} \hline +Summary & Tabularx Description\\ \hline +Text &\blindtext \\ \hline +\end{tabularx} +``` + + +![Tabularx][6] + +请注意,我们需要处理长文本的列在花括号中指定了大写 “X”。 + +### 合并行合并列 + +有时需要合并行或列。本节描述了如何完成。要使用 multirow 和 multicolumn,请将 multirow 添加到导言区。 + +#### 合并行 + +Multirow 采用以下参数 `\multirow{ 行的数量}{宽度}{文本}`,让我们看看下面的示例。 + +``` +\begin{tabular}{|l|l|}\hline + Release &Codename \\ \hline + Fedora Core 4 &Stentz \\ \hline + \multirow{2}{*}{MultiRow} &Fedora 8 \\ + &Werewolf \\ \hline +\end{tabular} +``` + +![MultiRow][7] + +在上面的示例中,指定了两行,'*'告诉 LaTeX 自动管理单元格的大小。 + +#### 合并列 + +Multicolumn 参数是 `{Multicolumn{ 列的数量}{单元格选项}{位置}{文本}`,下面的示例演示 Multicolumn。 + +``` +\begin{tabular}{|l|l|l|}\hline + Release &Codename &Date \\ \hline + Fedora Core 4 &Stentz &2005 \\ \hline + \multicolumn{3}{|c|}{Mulit-Column} \\ \hline +\end{tabular} +``` + +![Multi-Column][8] + +### 使用颜色 + +可以为文本、单个单元格或整行指定颜色。此外,我们可以为每一行配置交替的颜色。 + +在给表添加颜色之前,我们需要在导言区引用 `\usepackage[table]{xcolor}`。我们还可以使用以下颜色参考 [LaTeX Color][9] 或在颜色前缀后面添加感叹号(从 0 到 100 的阴影)来定义颜色。例如,`gray!30` + +``` +\definecolor{darkblue}{rgb}{0.0, 0.0, 0.55} +\definecolor{darkgray}{rgb}{0.66, 0.66, 0.66} +``` + +下面的示例演示了一个具有各种颜色的表,`\rowcolors` 采用以下选项 `\rowcolors{ 起始行颜色}{偶数行颜色}{奇数行颜色}`。 + +``` +\rowcolors{2}{darkgray}{gray!20} +\begin{tabular}{c|c} + Release &Codename \\ \hline + Fedora Core 1 &Yarrow \\ + Fedora Core 2 &Tettnang \\ + Fedora Core 3 &Heidelberg \\ + Fedora Core 4 &Stentz \\ +\end{tabular} +``` + +![Alt colour table][10] + +除了上面的例子,`\rowcolor` 可以用来指定每一行的颜色,这个方法在有合并行时效果最好。以下示例显示将 `\rowColors` 与合并行一起使用的影响以及如何解决此问题。 + +![Impact on multi-row][11] + +你可以看到,在合并行中,只有第一行能显示颜色。想要解决这个问题,需要这样做: + +``` +\begin{tabular}{|l|l|}\hline + \rowcolor{darkblue}\textsc{\color{white}Release} &\textsc{\color{white}Codename} \\ \hline + \rowcolor{gray!10}Fedora Core 4 &Stentz \\ \hline + 
\rowcolor{gray!40}&Fedora 8 \\ + \rowcolor{gray!40}\multirow{-2}{*}{Multi-Row} &Werewolf \\ \hline +\end{tabular} +``` + +![Multi-row][12] + +让我们讲解一下为解决合并行替换颜色问题而实施的更改。 + + * 第一行从合并行上方开始 + * 行数从 2 更改为 -2,这意味着从上面的行开始读取 + * `\rowcolor` 是为每一行指定的,更重要的是,多行必须具有相同的颜色,这样才能获得所需的结果。 + +关于颜色的最后一个注意事项是,要更改列的颜色,需要创建新的列类型并定义颜色。下面的示例说明了如何定义新的列颜色。 + +``` +\newcolumntype{g}{>{\columncolor{darkblue}}l} +``` + +我们把它分解一下 + + * `\newcolumntype{g}`:将字母 _g_ 定义为新列 + * `{>{\columncolor{darkblue}}l}`:在这里我们选择我们想要的颜色,并且 `l` 告诉列左对齐,这可以用 `c` 或 `r` 代替 + +``` +\begin{tabular}{g|l} + \textsc{Release} &\textsc{Codename} \\ \hline + Fedora Core 4 &Stentz \\ + &Fedora 8 \\ + \multirow{-2}{*}{Multi-Row} &Werewolf \\ +\end{tabular}\ +``` + +![Column Colour][13] + +### 横向表 + +有时,您的表可能有许多列,纵向排列会很不好看。在导言区加入 “rotating” 包,您将能够创建一个横向表。下面的例子说明了这一点。 + +对于横向表,我们将使用 `sidewaystable` 环境并在其中添加表格环境,我们还指定了其他选项。 + + * `\centering` 可以将表格放置在页面中心 + * `\caption{}` 为表命名 + * `\label{}` 这使我们能够引用文档中的表 + +``` +\begin{sidewaystable} +\centering +\caption{Sideways Table} +\label{sidetable} +\begin{tabular}{ll} + \rowcolor{darkblue}\textsc{\color{white}Release} &\textsc{\color{white}Codename} \\ + \rowcolor{gray!10}Fedora Core 4 &Stentz \\ + \rowcolor{gray!40} &Fedora 8 \\ + \rowcolor{gray!40}\multirow{-2}{*}{Multi-Row} &Werewolf \\ +\end{tabular}\vspace{3mm} +\end{sidewaystable} +``` + +![Sideways Table][14] + +### 列表和表格 + +要将列表包含到表中,可以使用 tabularx,并将列表包含在指定的列中。另一个办法是使用表格格式,但必须指定列宽。 + +#### 用 tabularx 处理列表 + +``` +\begin{tabularx}{\textwidth}{|l|X|} \hline + Fedora Version &Editions \\ \hline + Fedora 32 &\begin{itemize}[noitemsep] + \item CoreOS + \item Silverblue + \item IoT + \end{itemize} \\ \hline +\end{tabularx}\vspace{3mm} +``` + +![List in tabularx][15] + +#### 用 tabular 处理列表 + + +``` +\begin{tabular}{|l|m{6cm}|}\hline +        Fedora Version &amp;amp;Editions \\\ \hline +    Fedora 32 &amp;amp;\begin{itemize}[noitemsep] +        \item CoreOS +        \item Silverblue +        \item IoT +    \end{itemize} \\\ \hline 
+\end{tabular} +``` + +![List in tabular][16] + +### 结论 + +LaTeX 提供了许多使用 tablar 和 tablarx 自定义表的方法,您还可以在表环境 (\begin\table) 中添加 tablar 和 tablarx 来添加表的名称和定位表。 + +### LaTeX 宏包 + +所需的宏包有如下这些: + +``` +\usepackage{fullpage} +\usepackage{blindtext} % add demo text +\usepackage{array} % used for column positions +\usepackage{tabularx} % adds tabularx which is used for text wrapping +\usepackage{multirow} % multi-row and multi-colour support +\usepackage[table]{xcolor} % add colour to the columns +\usepackage{rotating} % for landscape/sideways tables +``` + +### 额外的知识 + +这是一堂关于表的小课,有关表和 LaTex 的更多高级信息,请访问 [LaTex Wiki][17] + +![][13] + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/latex-typesetting-part-2-tables/ + +作者:[Earl Ramirez][a] +选题:[lujun9972][b] +译者:[Chao-zhi](https://github.com/Chao-zhi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/earlramirez/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/latex-series-816x345.png +[2]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-13.png +[3]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-23.png +[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-10.png +[5]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-11.png +[6]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-12.png +[7]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-15.png +[8]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-16.png +[9]: https://latexcolor.com +[10]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-17.png +[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-18.png +[12]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-19.png +[13]: 
https://fedoramagazine.org/wp-content/uploads/2020/06/image-24.png +[14]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-20.png +[15]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-21.png +[16]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-22.png +[17]: https://en.wikibooks.org/wiki/LaTeX/Tables diff --git a/translated/tech/20201230 Choose between Btrfs and LVM-ext4.md b/translated/tech/20201230 Choose between Btrfs and LVM-ext4.md deleted file mode 100644 index 0b9252c4a6..0000000000 --- a/translated/tech/20201230 Choose between Btrfs and LVM-ext4.md +++ /dev/null @@ -1,150 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (Chao-zhi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Choose between Btrfs and LVM-ext4) -[#]: via: (https://fedoramagazine.org/choose-between-btrfs-and-lvm-ext4/) -[#]: author: (Troy Curtis Jr https://fedoramagazine.org/author/troycurtisjr/) - - Btrfs 和 LVM-ext4 该如何选择? -====== - -![][1] - -图片来自 [Raul Petri][2] 发表在 [Unsplash][3] - -[Fedora 33][4] 在桌面版本 [Btrfs][5] 中引入了新的默认文件系统。多年以来,Fedora 一直在 [Logical Volume Manager(LVM)][7] 卷之上使用 [ext4][6],引入 Brtfs 对 Fedora 来说是一个很大的转变。更改默认文件系统需要[令人信服的原因 ][8]。虽然 Btrfs 是令人兴奋的下一代文件系统,但 LVM 上的 ext4 已经建立并稳定。本指南旨在探索每种功能的高级特性,并使其更容易在 Btrfs 和 LVM-ext4 之间进行选择。 - -### 先说结论 - -最简单的建议是坚持使用默认值。全新的 Fedora 33 安装默认为 Btrfs,升级之前的 Fedora 版本将继续使用最初安装的版本,通常是 lvm-ext4。对于现有的 Fedora 用户来说,获取 Btrfs 的最简单方式是全新安装。然而,全新安装比简单升级更具破坏性。除非有特殊需要,否则这种干扰可能是不必要的。Fedora 开发团队仔细考虑了这两个缺省值,因此对任何一个选择都要有信心。 - -### 那么所有其他文件系统呢? 
- -现在有很多 [Linux 系统的文件系统 ][9]。在添加卷管理器、加密方法和存储机制的组合后,这一数字呈爆炸式增长。那么,为什么要关注 btrfs 和 lvm-ext4 呢?对于 Fedora 的用户来说,这两种设置可能是最常见的。LVM 之上的 ext4 成为 Fedora 11 中的默认磁盘布局,在此之前则使用的是 ext3。 - -既然 Btrfs 是 Fedora 33 的默认设置,那么绝大多数现有用户将会考虑是应该原地踏步还是向前更新。面对全新的 Fedora 33 安装,有经验的 Linux 用户可能会想知道是使用这个新的文件系统,还是退回到他们熟悉的文件系统。因此,在众多可能的存储选项中,许多 Fedora 用户会想知道如何在 Btrfs 和 lvm-ext4 之间进行选择。 - -### 两者的共性 - -尽管两个设置之间存在核心差异,但 Btrfs 和 lvm-ext4 实际上有很多共同之处。两者都是成熟且经过充分测试的存储技术。从 Fedora Core 的早期开始,LVM 就一直在使用,ext4 在 2009 年成为 Fedora 11 的默认版本 [10]。Btrfs 在 2009 年并入主流 Linux 内核,并且 [Facebook 广泛使用该文件系统 ][11]。SUSE Linux Enterprise 12 使其成为 [2014 年的默认版本 ][12]。因此,他们在生产力工具上都有充分的运行时间。 - -这两个系统都能很好地防止因意外停电而导致的文件系统损坏,尽管它们的实现方式不同。支持的配置包括使用单个驱动器和跨多个磁盘驱动器,并且这两种配置都能够创建近乎即时的快照。通过命令行和图形界面,存在多种工具来帮助管理这两个系统。这两种解决方案在家用台式机和高端服务器上都同样有效。 - -### LVM-ext4 的优势。 - -![LVM 上 ext4 的结构 ][13]。 - -[ext4 文件系统 ][14] 专注于高性能和可伸缩性,没有太多额外的花哨之处。它在很长一段时间内有效地防止碎片化,并提供了[很好的工具 ][15] 来判断何时出现碎片化。Ext4 是坚如磐石的,因为它构建在以前的 ext3 文件系统之上,带来了多年的系统内测试和错误修复。 - -Lvm-ext4 设置中的大多数高级功能都来自 LVM 本身。LVM 位于文件系统的“下方”,这意味着它支持任何文件系统。逻辑卷 (LV) 是通用的块设备,因此[虚拟机可以直接使用它们。][16] 这种灵活性允许每个逻辑卷使用正确的文件系统和正确的选项,以应对各种情况。这种分层方法还遵循了小工具协同工作的 Unix 理念。 - -硬件的[卷组 ][17](VG) 抽象允许 LVM 创建灵活的逻辑卷。每个 LV 从同一存储池拉入,但具有自己的配置。调整卷的大小比调整物理分区的大小容易得多,因为数据的有序放置没有限制。LVM [物理卷 ][18](PV) 可以是任意数量的分区,甚至可以在系统运行时在设备之间移动。 - -LVM 支持只读和读写[快照 ][19],这使得从活动系统创建一致的备份变得很容易。每个快照都有一个定义的大小,更改源卷或快照卷将占用其中的空间。又或者,逻辑卷也可以是[精简资源调配池 ][20] 的一部分。这允许快照自动使用池中的数据,而不是使用在创建卷时定义的固定大小的区块。 - -#### 有多个磁盘驱动器的 LVM - -当有多个设备时,LVM 确实会焕发能量。它具有对大多数 [RAID 级别 ][21] 的本地支持,每个逻辑卷可以具有不同的 RAID 级别。LVM 将自动为 RAID 配置选择适当的物理设备,或者用户可以直接指定它。基本的 RAID 支持包括用于性能的数据分条 ([RAID0][22]) 和用于冗余的镜像 ([RAID1][23])。逻辑卷也可以使用 [RAID5][24],[RAID6][25] 和 [RAID10][26] 等高级设置。LVM RAID 支持已经成熟,因为 LVM 在后台使用了 [mdadm][29] 使用的相同的 [device-mapper(dm)][27] 和 [multi-device(md)][28] 内核支持。 - -对于具有快速和慢速驱动器的系统,逻辑卷也可以是[缓存的卷 ][30]。经典示例是 SSD 和传统磁盘驱动器的组合。高速缓存的卷使用更快的驱动器来存储更频繁访问的数据(或用作写缓存),而慢速的驱动器使用批量数据。 - -LVM 中大量稳定的功能以及 ext4 的可靠性在既往的使用中早已被证明了。当然,具有更多功能会带来复杂性。在配置 LVM 时,为正确的功能找到正确的选项可能具有挑战性。对于单驱动器台式机系统,LVM 的功能(例如 RAID 
和缓存卷)不适用。但是,逻辑卷比物理分区更灵活,快照也很有用。对于正常的桌面使用,LVM 的复杂性可能会成为普通用户从所遇问题中恢复的障碍。 - -### Btrfs 的优势 - -![Btrfs 结构][31] - -从前几代文件系统中学到的经验指导了 [Btrfs][5] 内置功能的设计。与 ext4 不同,它可以直接跨越多个设备,因此它具有通常仅在卷管理器中才能找到的功能。它还具有 Linux 文件系统空间中独有的功能([ZFS][32] 具有相似的功能集,但[别指望它会出现在 Linux 内核中][33])。 - -#### Btrfs 的主要功能 - -也许最重要的功能是对所有数据的校验。校验和与写时复制一起提供了[关键的手段][34],可在意外断电后确保文件系统的完整性。更独特的是,校验和可以检测数据本身中的错误。无声的数据损坏(有时也称为 [bitrot][35])比大多数人意识到的更常见。如果没有主动校验,损坏最终可能会传播到所有可用备份中,使用户失去有效的副本。通过透明地校验所有数据,Btrfs 能够立即检测到任何此类损坏。启用正确的 [dup 或 raid 选项][36],文件系统也可以透明地修复损坏。 - -[写时复制][37](COW)也是 Btrfs 的基本功能,因为它在提供文件系统完整性和即时子卷快照方面至关重要。从子卷创建快照后,快照会自动与来源子卷共享底层数据。另外,事后的[重复数据删除][38]使用相同的技术来消除相同的数据块。单个文件可以通过使用 [reflink 选项][39] 调用 _cp_ 来使用 COW 功能。reflink 副本对于复制大型文件(例如虚拟机映像)特别有用,这类文件随着时间的推移通常仍保有几乎相同的数据。 - -Btrfs 支持跨多个设备,而无需卷管理器。多设备支持可实现数据镜像以获得冗余,以及条带化以提高性能。对 [RAID 5][24] 和 [RAID 6][25] 等更高级 RAID 级别的支持尚处于实验阶段。与标准 RAID 设置不同,Btrfs 的 _raid1_ 选项实际上允许奇数个设备。例如,即使它们的大小不同,它也可以使用 3 个设备。 - -所有 RAID 和 dup 选项都在文件系统级别指定。因此,各个子卷不能使用不同的选项。请注意,对多个设备使用 RAID 1 选项意味着即使一个设备发生故障,卷中的所有数据仍然可用,并且校验和功能仍可保证数据本身的完整性。这超出了当前典型的 RAID 设置所能提供的范围。 - -#### 附加功能 - -Btrfs 还支持快速简便的远程备份。子卷快照可以[发送到远程系统][40] 进行存储。通过利用文件系统中固有的 COW 元数据,这些传输只发送相对于先前快照的增量更改,因此非常高效。诸如 [snapper][41] 之类的用户应用程序使管理这些快照变得容易。 - -另外,Btrfs 卷可以具有[透明压缩][42],并且可以用 [chattr +c][43] 标记单个文件或目录进行压缩。压缩不仅可以减少数据消耗的空间,还可以通过减少写入操作量来帮助延长 SSD 的寿命。压缩当然会带来额外的 CPU 开销,但是有很多选项可用于权衡取舍。 - -Btrfs 集成了文件系统和卷管理器功能,这意味着总体维护比 LVM-ext4 更简单。当然,这种集成的灵活性较低,但对于大多数台式机甚至服务器的设置而言,已经足够。 - -### 在 LVM 上使用 Btrfs - -Btrfs 可以[就地转换 ext3/ext4 文件系统][44]。就地转换意味着无需把数据复制出来再复制回去,数据块本身甚至都没有被修改。因此,现有 LVM-ext4 系统的一种选择是将 LVM 保留在原处,然后简单地将 ext4 转换为 Btrfs。虽然可行且受支持,但有一些原因使它不是最佳选择。 - -Btrfs 的吸引力之一是文件系统与卷管理器集成所带来的更轻松的管理,而在 LVM 之上运行,意味着进行系统维护时仍要面对另一个卷管理器。同样,LVM 设置通常具有多个具有独立文件系统的固定大小的逻辑卷。虽然 Btrfs 在给定的计算机上支持多个卷,但是许多不错的功能都希望单个卷具有多个子卷。如果每个 LVM 卷都有一个独立的 Btrfs 卷,则用户仍然需要手动管理固定大小的 LVM 卷。不过,收缩已挂载的 Btrfs 文件系统的能力,确实使处理固定大小的卷的工作变得更轻松:由于支持在线收缩,无需启动 [Live 镜像][45]。 - -使用 Btrfs 的多设备支持时,必须仔细考虑逻辑卷的物理位置。对于 Btrfs 而言,每个 LV
都是一个单独的物理设备,如果实际情况并非如此,则某些数据可用性功能可能会做出错误的决定。例如,如果单个驱动器发生故障,对数据使用 _raid1_ 通常可以提供保护。如果实际逻辑卷在同一物理设备上,则没有冗余。 - -如果强烈需要某些特定的 LVM 功能,例如原始块设备或高速缓存的逻辑卷,则在 LVM 之上运行 Btrfs 是有意义的。在这种配置下,Btrfs 仍然提供其大多数优点,例如校验和和易于发送的增量快照。尽管使用 LVM 会产生一些操作开销,但 Btrfs 的这种开销并不比任何其他文件系统大。 - -### 总结 - -当尝试在 Btrfs 和 LVM-ext4 之间进行选择时,没有一个正确的答案。每个用户都有独特的要求,并且同一用户可能拥有具有不同需求的不同系统。看一下每个配置的功能集,并确定是否有令人心动的功能。如果没有,坚持默认值没有错。选择这两种设置都有很好的理由。 - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/choose-between-btrfs-and-lvm-ext4/ - -作者:[Troy Curtis Jr][a] -选题:[lujun9972][b] -译者:[Chao-zhi](https://github.com/Chao-zhi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/troycurtisjr/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2020/12/btrfs-lvm-ext4-816x345.jpg -[2]: https://unsplash.com/@raulpetri?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://fedoramagazine.org/announcing-fedora-33/ -[5]: https://btrfs.wiki.kernel.org/index.php/Main_Page -[6]: https://ext4.wiki.kernel.org/index.php/Main_Page -[7]: https://man7.org/linux/man-pages/man8/lvm.8.html -[8]: https://fedoraproject.org/wiki/Changes/BtrfsByDefault -[9]: https://man7.org/linux/man-pages/man5/filesystems.5.html -[10]: https://docs.fedoraproject.org/en-US/Fedora/11/html/Release_Notes/index.html#sect-Release_Notes-Fedora_11_Overview -[11]: https://facebookmicrosites.github.io/btrfs/docs/btrfs-facebook.html -[12]: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12/#fate-317221 -[13]: https://fedoramagazine.org/wp-content/uploads/2020/12/ext4-on-LVM.jpg -[14]: https://opensource.com/article/18/4/ext4-filesystem -[15]: https://man7.org/linux/man-pages/man8/e4defrag.8.html -[16]: 
https://libvirt.org/storage.html#StorageBackendLogical -[17]: https://www.redhat.com/sysadmin/create-volume-group -[18]: https://www.redhat.com/sysadmin/create-physical-volume -[19]: https://tldp.org/HOWTO/LVM-HOWTO/snapshotintro.html -[20]: https://man7.org/linux/man-pages/man7/lvmthin.7.html -[21]: https://rhea.dev/articles/2018-08/LVM-RAID-on-Fedora -[22]: https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0 -[23]: https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_1 -[24]: https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5 -[25]: https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_6 -[26]: https://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10 -[27]: https://man7.org/linux/man-pages/man8/dmsetup.8.html -[28]: https://man7.org/linux/man-pages/man4/md.4.html -[29]: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/ -[30]: https://man7.org/linux/man-pages/man7/lvmcache.7.html -[31]: https://fedoramagazine.org/wp-content/uploads/2020/12/Btrfs-Volume.jpg -[32]: https://en.wikipedia.org/wiki/ZFS -[33]: https://itsfoss.com/linus-torvalds-zfs/ -[34]: https://btrfs.wiki.kernel.org/index.php/FAQ#Can_I_have_nodatacow_.28or_chattr_.2BC.29_but_still_have_checksumming.3F -[35]: https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/ -[36]: https://man7.org/linux/man-pages/man8/mkfs.btrfs.8.html#DUP_PROFILES_ON_A_SINGLE_DEVICE -[37]: https://en.wikipedia.org/wiki/Copy-on-write -[38]: https://btrfs.wiki.kernel.org/index.php/Deduplication -[39]: https://btrfs.wiki.kernel.org/index.php/UseCases#How_do_I_copy_a_large_file_and_utilize_COW_to_keep_it_from_actually_being_copied.3F -[40]: https://fedoramagazine.org/btrfs-snapshots-backup-incremental/ -[41]: http://snapper.io/ -[42]: https://btrfs.wiki.kernel.org/index.php/Compression -[43]: https://www.man7.org/linux/man-pages/man1/chattr.1.html -[44]: https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3 -[45]: 
https://fedoramagazine.org/reclaim-hard-drive-space-with-lvm/ diff --git a/translated/tech/20210113 How to get Battery status notification when a battery is full or low.md b/translated/tech/20210113 How to get Battery status notification when a battery is full or low.md deleted file mode 100644 index c2f1b9d1ff..0000000000 --- a/translated/tech/20210113 How to get Battery status notification when a battery is full or low.md +++ /dev/null @@ -1,157 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to get Battery status notification when a battery is full or low) -[#]: via: (https://www.2daygeek.com/linux-low-full-charge-discharge-battery-notification/) -[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) - -如何在电池充满或低电量时获得电池状态通知 -====== - -对于类 Unix 用户来说,Linux 笔记本是不错的选择,但它的电池经常很快就会耗尽。 - -我试过很多 Linux 操作系统,但没有一个的电池续航像 Windows 那样长。 - -长时间持续充电会对电池造成损害,所以在电池 100% 充满时要拔掉电源线。 - -电池充电或放电时没有默认的应用程序来通知,需要安装第三方应用来通知你。 - -为此,我通常会安装 **[Battery Monitor][1]**,但它已经被废弃,所以我创建了一个 shell 脚本来获取通知。 - -笔记本电池充放电状态可以通过以下两个命令来识别。 - -使用 acpi 命令。 - -``` -$ acpi -b -Battery 0: Discharging, 71%, 00:58:39 remaining -``` - -使用 upower 命令。 - -``` -$ upower -i /org/freedesktop/UPower/devices/battery_BAT0 | grep -wE 'state|percentage' | awk '{print $2}' -discharging -64% -``` - -### 方法 1:当电池电量高于 95% 或低于 20% 时,用 Shell 脚本发送警报 - -这个脚本在启动时在后台运行,每分钟检查一次电池状态,然后在电池电量超过 95% 或放电低于 20% 时发送通知。 - -警报会一直持续,直到电池电量回到 20% 与 95% 之间。 - -``` -$ sudo vi /opt/scripts/battery-status.sh - -#!/bin/bash -while true -do -battery_level=`acpi -b | grep -P -o '[0-9]+(?=%)'` -if [ $battery_level -ge 95 ]; then -notify-send "Battery Full" "Level: ${battery_level}%" -paplay /usr/share/sounds/freedesktop/stereo/suspend-error.oga -elif [ $battery_level -le 20 ]; then -notify-send --urgency=CRITICAL "Battery Low" "Level: ${battery_level}%" -paplay /usr/share/sounds/freedesktop/stereo/suspend-error.oga -fi -sleep 60 -done -``` - 
-脚本完成后,设置可执行权限。 - -``` -$ sudo chmod +x /opt/scripts/battery-status.sh -``` - -最后,将该脚本添加到用户配置文件的底部。对于全局范围来说,你需要在 /etc/profile 文件中添加该脚本。 - -``` -$ vi /home/magi/.profile - -/opt/scripts/battery-status.sh & -``` - -**[重启你的 Linux 系统][2]**来检查这点。 - -``` -$ sudo reboot -``` - -### 方法 2:当电池充电(高于 95%)或放电(低于 20%)时发送通知的 Shell 脚本 - -这个脚本与上面的脚本类似,但它根据交流适配器的接入状态来判断。 - -如果你的交流适配器插上了,而且电池的电量超过 95%,它就会发出一个带有声音的通知,但是这个通知不会停止,直到你拔掉交流适配器。 - -![][3] - -如果你拔掉交流适配器,在电池电量降到 20% 以下之前,你不会再看到通知。 - -![][3] - -``` -$ sudo vi /opt/scripts/battery-status-1.sh - -#!/bin/bash -while true -do -export DISPLAY=:0.0 -battery_level=`acpi -b | grep -P -o '[0-9]+(?=%)'` -if on_ac_power; then -if [ $battery_level -ge 95 ]; then -notify-send "Battery Full" "Level: ${battery_level}% " -paplay /usr/share/sounds/freedesktop/stereo/suspend-error.oga -fi -else -if [ $battery_level -le 20 ]; then -notify-send --urgency=CRITICAL "Battery Low" "Level: ${battery_level}%" -paplay /usr/share/sounds/freedesktop/stereo/suspend-error.oga -fi -fi -sleep 60 -done -``` - -脚本完成后,设置可执行权限。 - -``` -$ sudo chmod +x /opt/scripts/battery-status-1.sh -``` - -最后将脚本添加到用户 **profile** 文件的底部。对于全局范围来说,你需要在 /etc/profile 文件中添加该脚本。 - -``` -$ vi /home/magi/.profile - -/opt/scripts/battery-status-1.sh & -``` - -重启系统来检查。 - -``` -$ sudo reboot -``` - -**参考:** [stackexchange][4] - -------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/linux-low-full-charge-discharge-battery-notification/ - -作者:[Magesh Maruthamuthu][a] -选题:[lujun9972][b] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/magesh/ -[b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/category/battery-monitor/ -[2]: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/ -[3]: 
data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[4]: https://unix.stackexchange.com/questions/60778/how-can-i-get-an-alert-when-my-battery-is-about-to-die-in-linux-mint \ No newline at end of file diff --git a/translated/tech/20210208 Why choose Plausible for an open source alternative to Google Analytics.md b/translated/tech/20210208 Why choose Plausible for an open source alternative to Google Analytics.md new file mode 100644 index 0000000000..989e43db08 --- /dev/null +++ b/translated/tech/20210208 Why choose Plausible for an open source alternative to Google Analytics.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why choose Plausible for an open source alternative to Google Analytics) +[#]: via: (https://opensource.com/article/21/2/plausible) +[#]: author: (Ben Rometsch https://opensource.com/users/flagsmith) + +为什么选择 Plausible 作为 Google Analytics 的开源替代品? +====== +Plausible 作为 Google Analytics 的一个可行而有效的替代方案,正在引起用户的关注。 +![Analytics: Charts and Graphs][1] + +与 Google Analytics 这样的庞然大物抗衡似乎是一个巨大的挑战,实际上,你可以说这听起来不太“可能”(plausible)。但这正是 [Plausible.io][2] 取得巨大成功的原因,自 2018 年以来已注册了数千名新用户。 + +Plausible 的联合创始人 Uku Taht 和 Marko Saric 最近出现在 [The Craft of Open Source][3] 播客上,谈论了这个项目以及他们如何: + +* 创建了一个可行的替代 Google Analytics 的方案 +* 在不到两年的时间里获得了如此大的发展势头 +* 通过开源项目外包实现其目标 + + + +请继续阅读他们与播客主持人和 Flagsmith 创始人 Ben Rometsch 的对话摘要。 + +### Plausible 是如何开始的 + +2018 年冬天,Uku 开始编写一个他认为急需的项目:一个可行的、有效的 Google Analytics 替代方案,因为他对 Google 产品的发展方向感到失望,而且所有其他数据解决方案似乎都把 Google 当作“数据处理中间人”。 + +Uku 的第一直觉是利用现有的数据库解决方案,专注于分析方面的工作。但他马上就遇到了一些挑战。第一次尝试使用了 PostgreSQL,这在技术上很幼稚,因为它很快就变得不堪重负,效率很低。因此,他的目标转变为做一个可以处理大量数据点、而且性能不会明显下降的分析产品。长话短说,Uku 成功了,Plausible 现在每月可以摄取超过 8000 万条记录。 + +Plausible 的第一个版本于 2019 年夏天发布。2020 年 3 月,Marko 加入,负责项目的传播和营销方面的工作。从那时起,它的受欢迎程度以相当大的势头增长。 + +### 为什么要开源?
+ +Uku 热衷于遵循“独立黑客”的软件开发路线:创建一个产品,把它放在那里,然后看看它如何成长。开源在这方面是有意义的,因为你可以迅速发展一个社区并获得人气。 + +但 Plausible 一开始并不是开源的。Uku 最初担心软件中的敏感代码,比如计费代码,但他很快就发布了,因为这些代码对没有 API 令牌的人来说是没有用的。 + +现在,Plausible 是在 [AGPL][4] 下完全开源的,他们选择了 AGPL 而不是 MIT 许可。Uku 解释说,在 MIT 许可下,任何人都可以不受限制地对代码做任何事情。在 AGPL 下,如果有人修改代码,他们必须将他们的修改开源,并将代码回馈给社区。这意味着,大公司不能拿着原始代码,在此基础上进行构建,然后获得所有的回报,他们必须共享,这使得竞争环境更加公平。例如,如果一家公司想接入他们的计费或登录系统,他们将有法律义务发布相应代码。 + +在播客中,Uku 问我关于 Flagsmith 的授权。目前 Flagsmith 采用 BSD 3-Clause 许可,这是一个高度宽松的许可证,但我即将把一些功能移到更严格的许可之后。到目前为止,Flagsmith 社区对这一变化表示理解,因为他们意识到这将带来更多更好的功能。 + +### Plausible vs. Google Analytics + +Uku 说,在他看来,开源的精神是,代码应该是开放的,任何人都可以进行商业使用,并与社区共享,但你可以把一个闭源的 API 模块作为专有附加组件保留下来。这样一来,Plausible 和其他公司就可以通过创建和销售定制的 API 附加许可来满足不同的使用场景。 + +Marko 本职是一名开发者,但在营销方面,他努力让这个项目在 Hacker News 和 Lobster 等网站上得到报道,并建立了 Twitter 帐户以帮助扩大影响。这种宣传带来的热潮也意味着该项目在 GitHub 上起飞,从 500 颗星增长到 4300 颗星。随着流量的增长,Plausible 出现在 GitHub 的趋势列表中,这让它的人气滚起了雪球。 + +Marko 还非常注重发布和推广博客文章。这一策略得到了回报,在最初的 6 个月里,有四五篇文章获得了病毒式传播,他利用这些流量峰值来放大营销信息,加速了增长。 + +Plausible 成长过程中最大的挑战是让人们从 Google Analytics 上转换过来。这个项目的主要目标是创建一个有用、高效、准确的网络分析产品。它还需要符合法规,并为企业和网站访问者提供高度的隐私保障。 + +Plausible 现在已经在 8000 多个网站上运行。通过与客户的交谈,Uku 估计其中约 90% 的客户之前用过 Google Analytics。 + +Plausible 以标准的软件即服务(SaaS)订阅模式运行。为了让定价更公平,它按月页面浏览量收费,而不是按网站数量收费。对于季节性网站来说,这可能会有麻烦,比如说电子商务网站在节假日会流量激增,或者美国大选网站每四年激增一次。这些情况可能会给月度订阅模式下的定价带来问题,但对大多数网站来说,这种模式通常运作良好。 + +### 查看播客 + +想要了解更多关于 Uku 和 Marko 如何以惊人的速度发展开源 Plausible 项目,并使其获得商业上的成功,请[收听播客][3],并查看[其他剧集][5],了解更多关于“开源软件社区的来龙去脉”。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/plausible + +作者:[Ben Rometsch][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/flagsmith +[b]: https://github.com/lujun9972 +[1]: 
https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/analytics-graphs-charts.png?itok=sersoqbV (Analytics: Charts and Graphs) +[2]: https://plausible.io/ +[3]: https://www.flagsmith.com/podcast/02-plausible +[4]: https://www.gnu.org/licenses/agpl-3.0.en.html +[5]: https://www.flagsmith.com/podcast \ No newline at end of file diff --git a/translated/tech/20210209 Viper Browser- A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism.md b/translated/tech/20210209 Viper Browser- A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism.md new file mode 100644 index 0000000000..eda14eb676 --- /dev/null +++ b/translated/tech/20210209 Viper Browser- A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Viper Browser: A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism) +[#]: via: (https://itsfoss.com/viper-browser/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Viper 浏览器:一款基于 Qt5 的轻量级浏览器,注重隐私和简约 +====== + +**简介:Viper 浏览器是一个基于 Qt 的浏览器,它提供了简单的用户体验,同时考虑到隐私问题。** + +虽然大多数流行的浏览器都运行在 Chromium 之上,但像 [Firefox][1]、[Beaker 浏览器][2] 以及其他一些 [Chrome 替代品][3] 这样独特的选择也不应消失。 + +尤其是考虑到谷歌最近可能以遭到滥用为由,从 Chromium 中剥离[谷歌浏览器特有的功能][4]。 + +在寻找更多的 Chrome 替代品时,我在 [Mastodon][6] 上看到了一个有趣的项目“[Viper 浏览器][5]”。 + +### Viper 浏览器:一个基于 Qt5 的开源浏览器 + +_**注意**:Viper 浏览器是一个只有几个贡献者的相当新的项目。它缺乏某些功能,我将在下文提及。_ + +![][7] + +Viper 是一款有趣的网络浏览器,在利用 [QtWebEngine][8] 的同时,它专注于成为一个强大而又轻巧的选择。 + +QtWebEngine 借用了 Chromium 的代码,但它不包括连接到 Google 平台的二进制文件和服务。 + +我花了一些时间使用它并进行一些日常浏览活动,我必须说,我对它相当感兴趣。不仅仅是因为它简单易用(尽管浏览器可以做得很复杂),而且它还专注于增强你的隐私,为你提供不同的广告拦截选项以及其他一些有用的设置。 + +![][9] + +虽然我认为它并不是为所有人准备的,但还是值得一看的。在你继续尝试之前,让我简单介绍一下它的功能。 + +### Viper 浏览器的功能 + +![][10] + +我将列出一些你会发现有用的关键功能: + +* 管理 cookies 的能力 +* 多个预设选项以选择不同的广告屏蔽器网络 +* 简单且易于使用 +* 隐私友好的默认搜索引擎 - 
[Startpage][11] (你可以更改) +* 能够添加用户脚本 +* 能够添加新的 user-agent +* 禁用 JavaScript 的选项 +* 能够阻止图像加载 + + + +除了这些亮点之外,你还可以轻松地调整隐私设置,以删除你的历史记录、清理已有 cookies,以及一些更多的选项。 + +![][12] + +### 在 Linux 上安装 Viper 浏览器 + +它只是在[发布页][13]提供了一个 AppImage 文件,你可以利用它在任何 Linux 发行版上进行测试。 + +如果你需要帮助,你也可以参考我们的[在 Linux 上使用 AppImage 文件][14]指南。如果你好奇,你可以在 [GitHub][5] 上探索更多关于它的内容。 + +[Viper 浏览器][5] + +### 我对使用 Viper 浏览器的看法 + +我不认为这是一个可以立即取代你当前浏览器的东西,但如果你有兴趣测试尝试提供 Chrome 替代品的新项目,这肯定是其中之一。 + +当我试图登录我的谷歌账户时,它阻止了我,说它可能是一个不安全的或不受支持的浏览器。因此,如果你依赖你的谷歌帐户,这是一个令人失望的消息。 + +不过,其他社交媒体平台和 YouTube 都能正常使用(无需登录)。它不支持 Netflix,但总体上浏览体验相当快速、可用。 + +你可以安装用户脚本,但还不支持 Chrome 扩展。当然,考虑到它是一款隐私友好型的网络浏览器,这要么是有意为之,要么是后续开发中会特别考虑的问题。 + +### 总结 + +考虑到这是一个鲜为人知但对某些人来说很有趣的项目,你怎么看?它是一个值得关注的开源项目么? + +请在下面的评论中告诉我。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/viper-browser/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://www.mozilla.org/en-US/firefox/new/ +[2]: https://itsfoss.com/beaker-browser-1-release/ +[3]: https://itsfoss.com/open-source-browsers-linux/ +[4]: https://www.bleepingcomputer.com/news/google/google-to-kill-chrome-sync-feature-in-third-party-browsers/ +[5]: https://github.com/LeFroid/Viper-Browser +[6]: https://mastodon.social/web/accounts/199851 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser.png?resize=800%2C583&ssl=1 +[8]: https://wiki.qt.io/QtWebEngine +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser-setup.jpg?resize=793%2C600&ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-preferences.jpg?resize=800%2C660&ssl=1 +[11]: https://www.startpage.com +[12]: 
https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser-tools.jpg?resize=800%2C262&ssl=1 +[13]: https://github.com/LeFroid/Viper-Browser/releases +[14]: https://itsfoss.com/use-appimage-linux/ diff --git a/translated/tech/20210215 A practical guide to JavaScript closures.md b/translated/tech/20210215 A practical guide to JavaScript closures.md new file mode 100644 index 0000000000..094da3e089 --- /dev/null +++ b/translated/tech/20210215 A practical guide to JavaScript closures.md @@ -0,0 +1,136 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A practical guide to JavaScript closures) +[#]: via: (https://opensource.com/article/21/2/javascript-closures) +[#]: author: (Nimisha Mukherjee https://opensource.com/users/nimisha) + +JavaScript 闭包实践 +====== + +> 通过深入了解 JavaScript 的高级概念之一:闭包,更好地理解 JavaScript 代码的工作和执行方式。 + +![女性编程][1] + +在《[JavaScript 如此受欢迎的 4 个原因][2]》中,我提到了一些高级 JavaScript 概念。在本文中,我将深入探讨其中的一个概念:闭包closure。 + +根据 [Mozilla 开发者网络][3](MDN),“闭包是将一个函数和对其周围的状态(词法环境)的引用捆绑在一起(封闭)的组合。”简而言之,这意味着在一个函数内部的函数可以访问其外部(父)函数的变量。 + +为了更好地理解闭包,可以看看作用域及其执行上下文。 + +下面是一个简单的代码片段: + +``` +var hello = "Hello"; + +function sayHelloWorld() { + var world = "World"; +    function wish() { +        var year = "2021"; +        console.log(hello + " " + world + " "+ year); + } + wish(); +} +sayHelloWorld(); +``` + +下面是这段代码的执行上下文: + +![JS 代码的执行上下文][4] + +每次创建函数时(在函数创建阶段)都会创建闭包。每个闭包有三个作用域。 + + * 本地作用域(自己的作用域) + * 外部函数范围 + * 全局范围 + +我稍微修改一下上面的代码来演示一下闭包: + +``` +var hello = "Hello"; + +var sayHelloWorld = function() { + var world = "World"; +    function wish() { +        var year = "2021"; +        console.log(hello + " " + world + " "+ year); + } + return wish; +} +var callFunc = sayHelloWorld(); +callFunc(); +``` + +内部函数 `wish()` 在执行之前就从外部函数返回。这是因为 JavaScript 中的函数形成了**闭包**。 + + * 当 `sayHelloWorld` 运行时,`callFunc` 持有对函数 `wish` 的引用。 + * `wish` 保持对其周围(词法)环境的引用,其中存在变量 `world`。 + +### 私有变量和方法 + +本身,JavaScript 
不支持创建私有变量和方法。闭包的一个常见和实用的用途是模拟私有变量和方法,并允许数据隐私。在闭包范围内定义的方法是有特权的。 + +这个代码片段捕捉了 JavaScript 中闭包的常用编写和使用方式: + +``` +var resourceRecord = function(myName, myAddress) { +  var resourceName = myName; +  var resourceAddress = myAddress; +  var accessRight = "HR"; +  return { +    changeName: function(updateName, privilege) { +      // only HR can change the name +      if (privilege === accessRight ) { +        resourceName = updateName; +        return true; +      } else { +        return false; +      } +    },   +    changeAddress: function(newAddress) { +      // any associate can change the address +      resourceAddress = newAddress;           +    },   +    showResourceDetail: function() { +      console.log ("Name:" + resourceName + " ; Address:" + resourceAddress); +    } +  } +} +// Create first record +var resourceRecord1 = resourceRecord("Perry","Office"); +// Create second record +var resourceRecord2 = resourceRecord("Emma","Office"); +// Change the address on the first record +resourceRecord1.changeAddress("Home"); +resourceRecord1.changeName("Perry Berry", "Associate"); // Output is false as only an HR can change the name +resourceRecord2.changeName("Emma Freeman", "HR"); // Output is true as HR changes the name +resourceRecord1.showResourceDetail(); // Output - Name:Perry ; Address:Home +resourceRecord2.showResourceDetail(); // Output - Name:Emma Freeman ; Address:Office +``` + +资源记录(`resourceRecord1` 和 `resourceRecord2`)相互独立。每个闭包通过自己的闭包引用不同版本的 `resourceName` 和 `resourceAddress` 变量。你也可以应用特定的规则来处理私有变量,我添加了一个谁可以修改 `resourceName` 的检查。 + +### 使用闭包 + +理解闭包是很重要的,因为它可以更深入地了解变量和函数之间的关系,以及 JavaScript 代码如何工作和执行。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/javascript-closures + +作者:[Nimisha Mukherjee][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 
+ +[a]: https://opensource.com/users/nimisha +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming) +[2]: https://linux.cn/article-12830-1.html +[3]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures +[4]: https://opensource.com/sites/default/files/uploads/execution-context.png (Execution context for JS code) +[5]: https://creativecommons.org/licenses/by-sa/4.0/