From 4b57bf886d56c28a3a99d3f66bf8885814fa4ed4 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Jan 2020 21:38:33 +0800 Subject: [PATCH 01/10] PRF @alim0x --- ... Linux story- Learning Linux in the 90s.md | 28 ++++++++++--------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/translated/talk/20191108 My Linux story- Learning Linux in the 90s.md b/translated/talk/20191108 My Linux story- Learning Linux in the 90s.md index ea0847761d..8830fa539f 100644 --- a/translated/talk/20191108 My Linux story- Learning Linux in the 90s.md +++ b/translated/talk/20191108 My Linux story- Learning Linux in the 90s.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (alim0x) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (My Linux story: Learning Linux in the 90s) @@ -9,34 +9,36 @@ 我的 Linux 故事:在 90 年代学习 Linux ====== -这是一个关于我如何在 WiFi 时代之前学习 Linux 的故事,那时的发行版还以 CD 的形式出现。 -![Sky with clouds and grass][1] + +> 这是一个关于我如何在 WiFi 时代之前学习 Linux 的故事,那时的发行版还以 CD 的形式出现。 + +![](https://img.linux.net.cn/data/attachment/album/202001/29/213829t00wmwu2w0z502zg.jpg) 大部分人可能不记得 1996 年时计算产业或日常生活世界的样子。但我很清楚地记得那一年。我那时候是堪萨斯中部一所高中的二年级学生,那是我的自由与开源软件(FOSS)旅程的开端。 -我从这里开始进步。我在 1996 年之前就开始对计算机感兴趣。我出生并成长于我家的第一台 Apple ][e,然后多年之后是 IBM Personal System/2。(是的,在这过程中有一些代际的跨越。)IBM PS/2 有一个非常激动人心的特性:一个 1200 波特的 Hayes 调制解调器。 +我从这里开始进步。我在 1996 年之前就开始对计算机感兴趣。我在我家的第一台 Apple ][e 上启蒙成长,然后多年之后是 IBM Personal System/2。(是的,在这过程中有一些代际的跨越。)IBM PS/2 有一个非常激动人心的特性:一个 1200 波特的 Hayes 调制解调器。 我不记得是怎样了,但在那不久之前,我得到了一个本地 [BBS][2] 的电话号码。一旦我拨号进去,我可以得到本地的一些其他 BBS 的列表,我的网络探险就此开始了。 -在 1995 年,[足够幸运][3]的人拥有了家庭互联网连接,每月可以使用不到 30 分钟。这个互联网不像我们现代的服务那样,通过卫星、光纤、有线电视同轴电缆或任何版本的铜线提供。大多数家庭通过一个调制解调器拨号,它连接到他们的电话线上。(这时离移动电话无处不在的时代还早得很,大多数人只有一部家庭电话。)尽管这还要取决你所在的位置,但我不认为那时有很多独立的互联网服务提供商(ISP),所以大多数人从仅有的几家大公司获得服务,包括 America Online,CompuServe 以及 Prodigy。 +在 1995 年,[足够幸运][3]的人拥有了家庭互联网连接,每月可以使用不到 30 分钟。那时的互联网不像我们现代的服务那样,通过卫星、光纤、有线电视同轴电缆或任何版本的铜线提供。大多数家庭通过一个调制解调器拨号,它连接到他们的电话线上。(这时离移动电话无处不在的时代还早得很,大多数人只有一部家庭电话。)尽管这还要取决你所在的位置,但我不认为那时有很多独立的互联网服务提供商(ISP),所以大多数人从仅有的几家大公司获得服务,包括 America Online,CompuServe 以及 Prodigy。 -你获取到的服务速率非常低,甚至在拨号上网演变的顶峰 56K,你也只能期望得到最高 3.5Kbps 的速率。如果你想要尝试 Linux,下载一个 200MB 到 800MB 的 ISO 镜像或(更加切合实际的)磁盘镜像要贡献出时间,决心,以及面临电话不可用的情形。 +你能获取到的服务速率非常低,甚至在拨号上网革命性地达到了顶峰的 56K,你也只能期望得到最高 3.5Kbps 的速率。如果你想要尝试 Linux,下载一个 200MB 到 800MB 的 ISO 镜像或(更加切合实际的)一套软盘镜像要贡献出时间、决心,以及减少电话的使用。 -我走了一条简单一点的路:在 1996 年,我从一家主要的 Linux 分发商订购了一套“tri-Linux”CD。这些光盘提供了三个发行版,我的这套包含了 Debian 1.1 (Debian 的第一个稳定版本),Red Hat Linux 3.0.3 以及 Slackware 3.1(代号 Slackware '96)。据我回忆,这些光盘是从一家叫做 [Linux Systems Labs][4] 的在线商店购买的。这家在线商店如今已经不存在了,但在 90 年代和 00 年代早期,这样的分发商很常见。对于多光盘 Linux 套件也是如此。这是 1998 年的一套光盘,你可以了解到他们都包含了什么: +我走了一条简单一点的路:在 1996 年,我从一家主要的 Linux 发行商订购了一套 “tri-Linux” CD 集。这些光盘提供了三个发行版,我的这套包含了 Debian 1.1(Debian 的第一个稳定版本)、Red Hat Linux 3.0.3 以及 Slackware 3.1(代号 Slackware '96)。据我回忆,这些光盘是从一家叫做 [Linux Systems Labs][4] 的在线商店购买的。这家在线商店如今已经不存在了,但在 90 年代和 00 年代早期,这样的发行商很常见。这些是多光盘 Linux 套件。这是 1998 年的一套光盘,你可以了解到他们都包含了什么: ![A tri-linux CD set][5] ![A tri-linux CD set][6] -在 1996 年夏天一个命中注定般的日子,那时我住在堪萨斯一个新的并且相对较为乡村的城市,我做出了安装并使用 Linux 的第一次尝试。在 1996 年的整个夏天,我尝试了那套三 Linux CD 套件里的全部三个发行版。他们都在我母亲的老 Pentium 75MHz 电脑上完美运行。 +在 1996 年夏天一个命中注定般的日子,那时我住在堪萨斯一个新的并且相对较为乡村的城市,我做出了安装并使用 Linux 的第一次尝试。在 1996 年的整个夏天,我尝试了那套三张 Linux CD 套件里的全部三个发行版。他们都在我母亲的老 Pentium 75MHz 电脑上完美运行。 -我最终选择了 [Slackware][7] 3.1 作为我喜欢的发行版,相比其他发行版可能更多的是因为它的终端的外观,这是决定选择一个发行版前需要考虑的重要因素。 +我最终选择了 [Slackware][7] 3.1 作为我的首选发行版,相比其它发行版可能更多的是因为它的终端的外观,这是决定选择一个发行版前需要考虑的重要因素。 
-我将系统设置完毕并运行了起来。我连接到一家“杂牌”ISP(一家这个区域的本地服务商),通过我家的第二条电话线拨号(为了满足我的所有互联网使用而订购)。那就像在天堂一样。我有一台完美运行的双系统(Microsoft Windows 95 和 Slackware 3.1)电脑。我依然拨号进入我所知道和喜爱的 BBS,游玩在线 BBS 游戏,比如 Trade Wars,Usurper 以及 Legend of the Red Dragon。 +我将系统设置完毕并运行了起来。我连接到一家 “不太知名的” ISP(一家这个区域的本地服务商),通过我家的第二条电话线拨号(为了满足我的所有互联网使用而订购)。那就像在天堂一样。我有一台完美运行的双系统(Microsoft Windows 95 和 Slackware 3.1)电脑。我依然拨号进入我所知道和喜爱的 BBS,游玩在线 BBS 游戏,比如 Trade Wars、Usurper 以及 Legend of the Red Dragon。 -我能够记得花在 EFNet(IRC)上 #Linux 频道的一天天时光,帮助其他用户,回答他们的 Linux 问题以及和审核人员互动。 +我能够记得在 EFNet(IRC)上 #Linux 频道上渡过的日子,帮助其他用户,回答他们的 Linux 问题以及和版主们互动。 -在我第一次在家尝试使用 Linux 系统的 20 多年后,我现在正进入作为 Red Hat 顾问的第五年,仍然在使用 Linux(现在是 Fedora)作为我的日常系统,并且依然在 IRC 上帮助想要使用 Linux 的人们。 +在我第一次在家尝试使用 Linux 系统的 20 多年后,已经是我进入作为 Red Hat 顾问的第五年,我仍然在使用 Linux(现在是 Fedora)作为我的日常系统,并且依然在 IRC 上帮助想要使用 Linux 的人们。 -------------------------------------------------------------------------------- @@ -45,7 +47,7 @@ via: https://opensource.com/article/19/11/learning-linux-90s 作者:[Mike Harris][a] 选题:[lujun9972][b] 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0215d826b928946a8858d2c0dab357c15a6e4e51 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Jan 2020 21:39:53 +0800 Subject: [PATCH 02/10] PUB @alim0x https://linux.cn/article-11831-1.html --- .../20191108 My Linux story- Learning Linux in the 90s.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/talk/20191108 My Linux story- Learning Linux in the 90s.md b/translated/talk/20191108 My Linux story- Learning Linux in the 90s.md index 8830fa539f..f31ae62e4f 100644 --- a/translated/talk/20191108 My Linux story- Learning Linux in the 90s.md +++ b/translated/talk/20191108 My Linux story- Learning Linux in the 90s.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (alim0x) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11831-1.html) [#]: subject: (My Linux story: Learning Linux in the 90s) [#]: via: (https://opensource.com/article/19/11/learning-linux-90s) [#]: author: (Mike Harris https://opensource.com/users/mharris) From 3ae890212279cb49123cce302545eecc68845b23 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Jan 2020 00:53:05 +0800 Subject: [PATCH 03/10] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200129=204=20cool?= =?UTF-8?q?=20new=20projects=20to=20try=20in=20COPR=20for=20January=202020?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200129 4 cool new projects to try in COPR for January 2020.md --- ...rojects to try in COPR for January 2020.md | 101 ++++++++++++++++++ 1 file changed, 101 insertions(+) create mode 100644 sources/tech/20200129 4 cool new projects to try in COPR for January 2020.md diff --git a/sources/tech/20200129 4 cool new projects to try in COPR for January 2020.md b/sources/tech/20200129 4 cool new projects to try in COPR for January 2020.md new file mode 100644 index 0000000000..58a64cdc70 --- /dev/null +++ b/sources/tech/20200129 4 cool new projects to try in COPR for January 2020.md @@ -0,0 +1,101 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 cool new projects to try in COPR for January 2020) +[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/) +[#]: author: (Dominik Turecek 
https://fedoramagazine.org/author/dturecek/) + +4 cool new projects to try in COPR for January 2020 +====== + +![][1] + +COPR is a [collection][2] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software. + +This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the [COPR User Documentation][3] for how to get started. + +### Contrast + +[Contrast][4] is a small app used for checking contrast between two colors and to determine if it meets the requirements specified in [WCAG][5]. The colors can be selected either using their RGB hex codes or with a color picker tool. In addition to showing the contrast ratio, Contrast displays a short text on a background in selected colors to demonstrate comparison. + +![][6] + +#### Installation instructions + +The [repo][7] currently provides contrast for Fedora 31 and Rawhide. To install Contrast, use these commands: + +``` +sudo dnf copr enable atim/contrast +sudo dnf install contrast +``` + +### Pamixer + +[Pamixer][8] is a command-line tool for adjusting and monitoring volume levels of sound devices using PulseAudio. You can display the current volume of a device and either set it directly or increase/decrease it, or (un)mute it. Pamixer can list all sources and sinks. + +#### Installation instructions + +The [repo][9] currently provides Pamixer for Fedora 31 and Rawhide. To install Pamixer, use these commands: + +``` +sudo dnf copr enable opuk/pamixer +sudo dnf install pamixer +``` + +### PhotoFlare + +[PhotoFlare][10] is an image editor. It has a simple and well-arranged user interface, where most of the features are available in the toolbars. PhotoFlare provides features such as various color adjustments, image transformations, filters, brushes and automatic cropping, although it doesn’t support working with layers. Also, PhotoFlare can edit pictures in batches, applying the same filters and transformations on all pictures and storing the results in a specified directory. + +![][11] + +#### Installation instructions + +The [repo][12] currently provides PhotoFlare for Fedora 31. To install Photoflare, use these commands: + +``` +sudo dnf copr enable adriend/photoflare +sudo dnf install photoflare +``` + +### Tdiff + +[Tdiff][13] is a command-line tool for comparing two file trees. In addition to showing that some files or directories exist in one tree only, tdiff shows differences in file sizes, types and contents, owner user and group ids, permissions, modification time and more. + +#### Installation instructions + +The [repo][14] currently provides tdiff for Fedora 29-31 and Rawhide, EPEL 6-8 and other distributions. 
To install tdiff, use these commands: + +``` +sudo dnf copr enable fif/tdiff +sudo dnf install tdiff +``` + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/ + +作者:[Dominik Turecek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/dturecek/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg +[2]: https://copr.fedorainfracloud.org/ +[3]: https://docs.pagure.org/copr.copr/user_documentation.html# +[4]: https://gitlab.gnome.org/World/design/contrast +[5]: https://www.w3.org/WAI/standards-guidelines/wcag/ +[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png +[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/ +[8]: https://github.com/cdemoulins/pamixer +[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/ +[10]: https://photoflare.io/ +[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png +[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/ +[13]: https://github.com/F-i-f/tdiff +[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/ From 341610abc18f8596276da09f3491a4d52d9bacaf Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Jan 2020 00:57:30 +0800 Subject: [PATCH 04/10] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200130=20Meet=20F?= =?UTF-8?q?uryBSD:=20A=20New=20Desktop=20BSD=20Distribution?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200130 Meet FuryBSD- A New Desktop BSD Distribution.md --- ...FuryBSD- A New Desktop BSD Distribution.md | 94 +++++++++++++++++++ 1 file changed, 94 insertions(+) create mode 100644 sources/tech/20200130 Meet FuryBSD- A New Desktop BSD Distribution.md diff --git a/sources/tech/20200130 Meet FuryBSD- A New Desktop BSD Distribution.md b/sources/tech/20200130 Meet FuryBSD- A New Desktop BSD Distribution.md new file mode 100644 index 0000000000..eee1d27f9c --- /dev/null +++ b/sources/tech/20200130 Meet FuryBSD- A New Desktop BSD Distribution.md @@ -0,0 +1,94 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Meet FuryBSD: A New Desktop BSD Distribution) +[#]: via: (https://itsfoss.com/furybsd/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Meet FuryBSD: A New Desktop BSD Distribution +====== + +In the last couple of months, a few new desktop BSD have been announced. There is [HyperbolaBSD which was Hyperbola GNU/Linux][1] previously. Another new entry in the [BSD][2] world is [FuryBSD][3]. + +### FuryBSD: A new BSD distribution + +![][4] + +At its heart, FuryBSD is a very simple beast. According to [the site][5], “FuryBSD is a back to basics lightweight desktop distribution based on stock FreeBSD.” It is basically FreeBSD with a desktop environment pre-configured and several apps preinstalled. The goal is to quickly get a FreeBSD-based system running on your computer. + +You might be thinking that this sounds a lot like a couple of other BSDs that are available, such as [NomadBSD][6] and [GhostBSD][7]. The major difference between those BSDs and FuryBSD is that FuryBSD is much closer to stock FreeBSD. 
For example, FuryBSD uses the FreeBSD installer, while others have created their own installers and utilities. + +As it states on the [site][8], “Although FuryBSD may resemble past graphical BSD projects like PC-BSD and TrueOS, FuryBSD is created by a different team and takes a different approach focusing on tight integration with FreeBSD. This keeps overhead low and maintains compatibility with upstream.” The lead dev also told me that “One key focus for FuryBSD is for it to be a small live media with a few assistive tools to test drivers for hardware.” + +Currently, you can go to the [FuryBSD homepage][3] and download either an XFCE or KDE LiveCD. A GNOME version is in the works. + +### Who’s is Behind FuryBSD? + +The lead dev behind FuryBSD is [Joe Maloney][9]. Joe has been a FreeBSD user for many years. He contributed to other BSD projects, such as PC-BSD. He also worked with Eric Turgeon, the creator of GhostBSD, to rewrite the GhostBSD LiveCD. Along the way, he picked up a better understanding of BSD and started to form an idea of how he would make a distribution on his own. + +Joe is joined by several other devs who have also spent many years in the BSD world, such as Jaron Parsons, Josh Smith, and Damian Szidiropulosz. + +### The Future for FuryBSD + +At the moment, FuryBSD is nothing more than a pre-configured FreeBSD setup. However, the devs have a [list of improvements][5] that they want to make going forward. These include: + + * A sane framework for loading, 3rd party proprietary drivers graphics, wireless + * Cleanup up the LiveCD experience a bit more to continue to make it more friendly + * Printing support out of box + * A few more default applications included to provide a complete desktop experience + * Integrated [ZFS][10] replication tools for backup and restore + * Live image persistence options + * A custom pkg repo with sane defaults + * Continuous integration for applications updates + * Quality assurance for FreeBSD on the desktop + * Tailored artwork, color scheming, and theming + * Directory services integration + * Security hardening + + + +The devs make it quite clear that any changes they make will have a lot of thought and research behind them. They don’t want to compliment a feature, only to have to remove it or change it when it breaks something. + +![FuryBSD desktop][11] + +### How You Can Help FuryBSD? + +At this moment the project is still very young. Since all projects need help to survive, I asked Joe what kind of help they were looking for. He said, “We could use help [answering questions on the forums][12], [GitHub][13] tickets, help with documentation are all needed.” He also said that if people wanted to add support for other desktop environments, pull requests are welcome. + +### Final Thoughts + +Although I have not tried it yet, I have a good feeling about FuryBSD. It sounds like the project is in capable hands. Joe Maloney has been thinking about how to make the best BSD desktop experience for over a decade. Unlike majority of Linux distros that are basically a rethemed Ubuntu, the devs behind FuryBSD know what they are doing and they are choosing quality over the fancy bells and whistles. + +What are your thoughts on this new entry into the every growing desktop BSD market? Have you tried out FuryBSD or will you give it a try? Please let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][14]. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/furybsd/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/hyperbola-linux-bsd/ +[2]: https://itsfoss.com/bsd/ +[3]: https://www.furybsd.org/ +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/fury-bsd.jpg?ssl=1 +[5]: https://www.furybsd.org/manifesto/ +[6]: https://itsfoss.com/nomadbsd/ +[7]: https://ghostbsd.org/ +[8]: https://www.furybsd.org/furybsd-video-overview-at-knoxbug/ +[9]: https://github.com/pkgdemon +[10]: https://itsfoss.com/what-is-zfs/ +[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/FuryBSDS-desktop.jpg?resize=800%2C450&ssl=1 +[12]: https://forums.furybsd.org/ +[13]: https://github.com/furybsd +[14]: https://reddit.com/r/linuxusersgroup From c9371d2cdcddb60c762d76b8c4025b86345528b1 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Jan 2020 00:58:17 +0800 Subject: [PATCH 05/10] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200129=203=20less?= =?UTF-8?q?ons=20I've=20learned=20writing=20Ansible=20playbooks?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200129 3 lessons I-ve learned writing Ansible playbooks.md --- ... I-ve learned writing Ansible playbooks.md | 221 ++++++++++++++++++ 1 file changed, 221 insertions(+) create mode 100644 sources/tech/20200129 3 lessons I-ve learned writing Ansible playbooks.md diff --git a/sources/tech/20200129 3 lessons I-ve learned writing Ansible playbooks.md b/sources/tech/20200129 3 lessons I-ve learned writing Ansible playbooks.md new file mode 100644 index 0000000000..a2cfe25265 --- /dev/null +++ b/sources/tech/20200129 3 lessons I-ve learned writing Ansible playbooks.md @@ -0,0 +1,221 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (3 lessons I've learned writing Ansible playbooks) +[#]: via: (https://opensource.com/article/20/1/ansible-playbooks-lessons) +[#]: author: (Jeff Geerling https://opensource.com/users/geerlingguy) + +3 lessons I've learned writing Ansible playbooks +====== +Avoid common pitfalls and supercharge your Ansible playbook maintenance +by following these best practices. +![plastic game pieces on a board][1] + +I've used Ansible since 2013 and maintain some of my original playbooks to this day. They have evolved with Ansible from version 1.4 to the current version (as of this writing, 2.9). + +Along the way, as Ansible grew from having dozens to hundreds and now thousands of modules, I've learned a lot about how to make sure my playbooks are maintainable and scalable as my systems grow. Even for simple projects (like the [playbook I use to manage my own laptop][2]), it pays dividends to avoid common pitfalls and make decisions that will make the future you thankful instead of regretful. + +The three main takeaways from this experience are: + + 1. Stay organized + 2. Test early and often + 3. Simplify, optimize + + + +The importance of each lesson I've learned follows in that order, too; it's no use trying to optimize something (point 3) that's already poorly assembled (point 1). Each step builds on the one above, so I'll guide you through each step. 
+ +### Stay organized + +![Organized bins of equipment][3] + +At a bare minimum, you should **store your Ansible playbooks in a Git repository**. This helps with so many things: + + 1. Once you have a known working state, you can commit the work (ideally, with tags marking major versions, like 1.0.0 for the first stable version and 2.0.0 for an upgrade or rewrite). + 2. You can always walk back changes if necessary to a previous known-working state (e.g., by using `git reset` or `git checkout `). + 3. Large-scale changes (e.g., feature additions or a major upgrade) can be worked on in a branch, so you can still maintain the existing playbook and have adequate time to work on major changes. + + + +Storing playbooks in Git also helps with the second important organization technique: **run your playbooks from a build server**. + +Whether you use [Ansible Tower][4], [Jenkins][5], or some other build system, using a central interface for playbook runs gives you consistency and stability—you don't risk having one admin run a playbook one way (e.g., with the wrong version of roles or an old checkout) and someone else running it another way, breaking your servers. + +It also helps because it forces you to ensure all your playbook's resources are encapsulated in the playbook's repository and build configuration. Ideally, the entire build (including the job configuration) would be captured in the repository (e.g., through the use of a `Jenkinsfile` or its equivalent). + +Another important aspect to organization is **documentation**; at a bare minimum, I have a README in every playbook repository with the following contents: + + * The playbook's purpose + * Links to relevant resources (CI build status, external documentation, issue tracking, primary contacts) + * Instructions for local testing and development + + + +Even if you have the playbook automated through a build server, it is important to have thorough and correct documentation for how to run the playbook otherwise (e.g., locally in a test environment). I like to make sure my projects are easily approachable—not only for others who might eventually need to work with them but also myself! I often forget a nuance or dependency when running a playbook, and the README is the perfect place to outline any peculiarities. + +Finally, the _structure_ of the Ansible tasks themselves are important, and I like to ensure I have a maintainable structure by having **small, readable task files** and by extracting related sets of tasks into **Ansible roles**. + +Generally, if an individual playbook reaches around 100 lines of YAML, I'll start breaking it up into separate task files and using `include_tasks` to include those files. If I find a set of tasks that operates independently and could be broken out into its own [Ansible role][6], I'll work on extracting those tasks and related handlers, variables, and templates. + +Using roles is the best way to supercharge Ansible playbook maintenance; I often have to do similar tasks in many (if not most) playbooks, like managing user accounts or installing and configuring a web server or database. Abstracting these tasks into Ansible roles means I can maintain one set of tasks to be used among many playbooks, with variables to give flexibility where needed. + +Ansible roles can also be contributed back to the community via [Ansible Galaxy][7] if you're able to make them generic and provide the code with an open source license. 
I have contributed over a hundred roles to Galaxy, and they are made better by the fact that thousands of other playbooks (besides my own) rely on them and break if there is a bug in the role. + +One final note on roles: If you choose to use external roles (either from Galaxy or a private Git repository), I recommend committing the role to your repository (instead of adding it to a `.gitignore` file and downloading the role every time you run your playbook) because I like to avoid relying on downloads from Ansible Galaxy for every playbook run. You should still use a `requirements.yml` file to define role dependencies and define specific versions for the roles so you can choose when to upgrade your dependencies. + +### Test early and often + +![A stack of computer boards][8] + +Ansible allows you to define infrastructure as code. And like any software, it is essential to be able to verify that the code you write does what you expect. + +Like any software, it's best to _test_ your Ansible playbooks. And when I consider testing for any individual Ansible project I build, I think of a spectrum of CI testing options I can use, going in order from the easiest to hardest to implement: + + 1. `yamllint` + 2. `ansible-playbook --syntax-check` + 3. `ansible-lint` + 4. [Molecule test][9] (integration tests) + 5. `ansible-playbook --check` (testing against production) + 6. Building parallel infrastructure + + + +The first three options (linting and running a syntax check on your playbook) are essentially free; they run very fast and can help you avoid the most common problems with your playbook's task structure and formatting. + +They provide some value, but unless the playbook is extremely simple, I like to go beyond basic linting and run tests using [Molecule][9]. I usually use Molecule's built-in Docker integration to run my playbook against a local Docker instance running the same base OS as my production server. For some of my roles, which I run on different Linux distributions (e.g., CentOS and Debian), I run the Molecule test playbook once for each distro—and sometimes with extra test scenarios for more complex roles. + +If you're interested in learning how to test roles with Molecule, I wrote a blog post on the topic a couple of years ago called [Testing your Ansible roles with Molecule][10]. The process for testing full playbooks is similar, and in both cases, the tests can be run inside most CI environments (for example, my [geerlingguy.apache][11] role runs a suite of [Molecule tests via Travis CI][12]). + +The final two test options, running the playbook in `--check` mode or building parallel production infrastructure, require more setup work and often go beyond what's necessary for efficient testing processes. But in cases where playbooks manage servers critical to business revenue, they can be necessary. + +There are a few other things that are important to watch for when running tests and periodically checking or updating your playbooks: + + * Make sure you track (and fix) any `DEPRECATION WARNING`s you see in Ansible's output. Usually, you'll have a year or two before the warning leads to a failure in the latest Ansible version, so the earlier you can update your playbook code, the better. + * Every Ansible version has a [porting guide][13]) that is extremely helpful when you're updating from one version to the next. 
+ * If you see annoying `WARN` messages in playbook output when you're using a module like `command`, and you know you can safely ignore them, you can add a `warn: no` under the `args` in a task. It's better to squelch these warnings so that more actionable warnings (like deprecation warnings) will be noticed at a glance. + + + +Finally, I like to make sure my CI environments are always running the latest Ansible release (and not locked into a specific version that I know works with my playbooks), because I know if a playbook will break right after the new release comes out. My build server is locked into a specific Ansible version, which may be one or two versions behind the latest version, so this gives me the time to ensure I fix any new issues discovered in CI tests before I upgrade my build server to the latest version. + +### Simplify, optimize + +![Charging AirPods][14] + +> "YAML is not a programming language." +> — Jeff Geerling + +Simplicity in your playbooks makes maintenance and future changes a lot easier. Sometimes I'll look at a playbook and be puzzled as to what's happening because there are multiple `when` and `until` conditions with a bunch of Python mixed in with Jinja filters. + +If I start to see more than one or two chained filters or Python method calls (especially anything having to do with regular expressions), I see that as a prime candidate for rewriting the required functionality as an Ansible module. The module could be maintained in Python and tested independently and would be easier to maintain as strictly Python code rather than mixing in all the Python inline with your YAML task definitions. + +So my first point is: Stick to Ansible's modules and simple task definitions as much as possible. Try to use Jinja filters wherever possible, and avoid chaining more than one or two filters on a variable at a time. If you have a lot of complex inline Python or Jinja, it's time to consider refactoring it into a custom Ansible module. + +Another common thing I see people do, especially when building out roles the first time, is using complex dict variables where separate "flat" variables may be more flexible. + +For example, instead of having an **apache** role with many options in one giant dict, like this: + + +``` +apache: +  startservers: 2 +  maxclients: 2 +``` + +And consider using separate flat variables: + + +``` +apache_startservers: 2 +apache_maxclients: 2 +``` + +The reason for this is simple: Using flat variables allows playbooks to override one particular value easily, without having to redefine the entire dictionary. This is especially helpful when you have dozens (or in some rare cases, _hundreds_) of default variables in a role. + +Once the playbook and role code looks good, it's time to start thinking about **optimization**. + +A few of the first things I look at are: + + * Can I disable `gather_facts`? Not every playbook needs all the facts, and it adds a bit of overhead on every run, on every server. + * Can I increase the number of `forks` Ansible uses? The default is five, but if I have 50 servers, can I operate on 20 or 25 at a time to vastly reduce the amount of time Ansible takes to run a playbook on all the servers? + * In CI, can I parallelize test scenarios? Instead of running one test, then the next, if I can start all the tests at once, it will make my CI test cycle much faster. If CI is slow, you'll tend to ignore it or not wait until the test run is complete, so it's important to make sure your test cycle is short. 
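The first two of those checks usually come down to a couple of configuration lines. As a minimal sketch (the host group `webservers`, the package name, and the value `25` are only illustrative, not recommendations), the fork count lives in `ansible.cfg`:


```
# ansible.cfg
[defaults]
# Operate on up to 25 hosts in parallel instead of the default 5
forks = 25
```

and fact gathering is switched off per play when nothing in that play actually uses facts:


```
# site.yml (sketch)
- hosts: webservers
  gather_facts: no
  tasks:
    - name: Ensure the web server package is present
      package:
        name: httpd
        state: present
```

With 50 servers, a fork count of 25 lets Ansible work on half of them at a time rather than five, and skipping fact collection removes one setup round-trip per host on every run.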
+ + + +When I'm looking through tasks in a role or playbook, I also look for a few blatant performance issues that are common with certain modules: + + * When using `package` (or `apt`, `yum`, `dnf`, etc.), if there is more than one package being managed, the list should be passed directly to the `name` parameter and not via `with_items` or a `loop`—this way Ansible can efficiently operate on the whole list in one go instead of doing it package by package. + * When using `copy`, how many files are being copied? If there is a single file or even a few dozen, it might be fine, but the `copy` module is very slow if you have hundreds or thousands of files to be copied (better to use a module like `synchronize` or a different strategy like copying a tarball and expanding it on the server). + * If using `lineinfile` in a loop, it might be more efficient (and sometimes easier to maintain) to use `template` instead and control the entire file in one pass. + + + +Once I've gotten most of the low-hanging fruit out of the way, I like to profile my playbook, and Ansible has some built-in tools for this. You can configure extra callback plugins to measure role and task performance by setting the `callback_whitelist` option under `defaults` in your `ansible.cfg`: + + +``` +[defaults] +callback_whitelist = profile_roles, profile_tasks, timer +``` + +Now, when you run your playbook, you get a summary of the slowest roles and tasks at the end: + + +``` +Monday 10 September       22:31:08 -0500 (0:00:00.851)       0:01:08.824 ****** +=============================================================================== +geerlingguy.docker ------------------------------------------------------ 9.65s +geerlingguy.security ---------------------------------------------------- 9.33s +geerlingguy.nginx ------------------------------------------------------- 6.65s +geerlingguy.firewall ---------------------------------------------------- 5.39s +geerlingguy.munin-node -------------------------------------------------- 4.51s +copy -------------------------------------------------------------------- 4.34s +geerlingguy.backup ------------------------------------------------------ 4.14s +geerlingguy.htpasswd ---------------------------------------------------- 4.13s +geerlingguy.ntp --------------------------------------------------------- 3.94s +geerlingguy.swap -------------------------------------------------------- 2.71s +template ---------------------------------------------------------------- 2.64s +... +``` + +If anything takes more than a few seconds, it might be good to figure out exactly why it's taking so long. + +### Summary + +I hope you learned a few ways you can make your Ansible Playbooks more maintainable; as I said in the beginning, each of the three takeaways (stay organized, test, then simplify and optimize) builds on the previous, so start by making sure you have clean, documented code, then make sure it's well-tested, and finally look at how you can make it even better and faster! 
+ +* * * + +_This article is a follow up to Jeff's presentation, [Make your Ansible playbooks flexible, maintainable, and scalable][15], at AnsibleFest 2018, which you can [watch here][16]._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/1/ansible-playbooks-lessons + +作者:[Jeff Geerling][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/geerlingguy +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team-game-play-inclusive-diversity-collaboration.png?itok=8sUXV7W1 (plastic game pieces on a board) +[2]: https://github.com/geerlingguy/mac-dev-playbook +[3]: https://opensource.com/sites/default/files/uploads/organized.jpg (Organized bins of equipment) +[4]: https://www.ansible.com/products/tower +[5]: https://jenkins.io +[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html +[7]: https://galaxy.ansible.com +[8]: https://opensource.com/sites/default/files/uploads/test-early-often.jpg (A stack of computer boards) +[9]: https://molecule.readthedocs.io/en/stable/ +[10]: https://www.jeffgeerling.com/blog/2018/testing-your-ansible-roles-molecule +[11]: https://github.com/geerlingguy/ansible-role-apache +[12]: https://travis-ci.org/geerlingguy/ansible-role-apache +[13]: https://docs.ansible.com/ansible/latest/porting_guides/porting_guides.html +[14]: https://opensource.com/sites/default/files/uploads/simplify-optimize.jpg (Charging AirPods) +[15]: https://www.jeffgeerling.com/blog/2019/make-your-ansible-playbooks-flexible-maintainable-and-scalable-ansiblefest-austin-2018 +[16]: https://www.youtube.com/watch?v=kNDL13MJG6Y From d37116e9ad28c2742744a6f7189f9eacfd7b7b01 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Jan 2020 00:58:54 +0800 Subject: [PATCH 06/10] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200129=20Use=20Em?= =?UTF-8?q?acs=20to=20get=20social=20and=20track=20your=20todo=20list?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200129 Use Emacs to get social and track your todo list.md --- ... to get social and track your todo list.md | 162 ++++++++++++++++++ 1 file changed, 162 insertions(+) create mode 100644 sources/tech/20200129 Use Emacs to get social and track your todo list.md diff --git a/sources/tech/20200129 Use Emacs to get social and track your todo list.md b/sources/tech/20200129 Use Emacs to get social and track your todo list.md new file mode 100644 index 0000000000..3893aac377 --- /dev/null +++ b/sources/tech/20200129 Use Emacs to get social and track your todo list.md @@ -0,0 +1,162 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Use Emacs to get social and track your todo list) +[#]: via: (https://opensource.com/article/20/1/emacs-social-track-todo-list) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) + +Use Emacs to get social and track your todo list +====== +Access Twitter, Reddit, chat, email, RSS, and your todo list in the +nineteenth in our series on 20 ways to be more productive with open +source in 2020. +![Team communication, chat][1] + +Last year, I brought you 19 days of new (to you) productivity tools for 2019. 
This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using. + +### Doing (almost) all the things with Emacs, part 2 + +[Yesterday][2], I talked about how to read email, access your addresses, and show calendars in Emacs. Emacs has tons and tons of functionality, and you can also use it for Twitter, chatting, to-do lists, and more! + +![All the things with Emacs][3] + +To do all of this, you need to install some Emacs packages. As you did yesterday, open the Emacs package manager with **Meta**+**x package-manager** (Meta is **Alt** on most keyboards or **Option** on MacOS). Now select the following packages with **i**, then install them by typing **x**: + + +``` +nnreddit +todotxt +twittering-mode +``` + +Once they are installed, open **~/.emacs.d/init.el** with **Ctrl**+**x Ctrl**+**x**, and add the following before the **(custom-set-variables** line: + + +``` +;; Todo.txt +(require 'todotxt) +(setq todotxt-file (expand-file-name "~/.todo/todo.txt")) + +;; Twitter +(require 'twittering-mode) +(setq twittering-use-master-password t) +(setq twittering-icon-mode t) + +;; Python3 for nnreddit +(setq elpy-rpc-python-command "python3") +``` + +Save the file with **Ctrl**+**x Ctrl**+**a**, exit Emacs with **Ctrl**+**x Ctrl**+**c**, then restart Emacs. + +#### Tweet from Emacs with twittering-mode + +![Twitter in Emacs][4] + +[Twittering-mode][5] is one of the best Emacs interfaces for Twitter. It supports almost all the features of Twitter and has some easy-to-use keyboard shortcuts. + +To get started, type **Meta**+**x twit** to launch twittering-mode. It will give a URL to open—and prompt you to launch a browser with it if you want—so you can log in and get an authorization token. Copy and paste the token into Emacs, and your Twitter timeline should load. You can scroll with the **Arrow** keys, use **Tab** to move from item to item, and press **Enter** to view the URL the cursor is on. If the cursor is on a username, pressing **Enter** will open that timeline in a web browser. If you are on a tweet's text, pressing **Enter** will reply to that tweet. You can create a new tweet with **u**, retweet something with **Ctrl**+**c**+**Enter**, and send a direct message with **d**—the dialog it opens has instructions on how to send, cancel, and shorten URLs. + +Pressing **V** will open a prompt to get to other timelines. To open your mentions, type **:mentions**. The home timeline is **:home**, and typing a username will take you to that user's timeline. Finally, pressing **q** will quit twittering-mode and close the window. + +There is a lot more functionality available in twittering-mode, and I encourage you to read the [full list][6] on its GitHub page. + +#### Track your to-do's in Emacs with Todotxt.el + +![todo.txt in emacs][7] + +[Todotxt.el][8] is a nice interface for the [todo.txt][9] to-do list manager. It has hotkeys for just about everything. + +To start it up, type **Meta**+**x todotxt**, and it will load the todo.txt file you specified in the **todotxt-file** variable (which you set in the first part of this article). Inside the buffer (window) for todo.txt, you can press **a** to add a new task and **c** to mark it complete. You can set priorities with **r**, and add projects and context to an item with **t**. When you are ready to move everything to **done.txt**, just press **A**. And you can filter the list with **/** or refresh back to the full list with **l**. 
And again, you can press **q** to exit. + +#### Chat in Emacs with ERC + +![Chatting with erc][10] + +One of Vim's shortcomings is that trying to use chat with it is difficult (at best). Emacs, on the other hand, has the [ERC][11] client built into the default distribution. Start ERC with **Meta**+**x erc**, and you will be prompted for a server name, username, and password. You can use the same information you used a few days ago when you set up [BitlBee][12]: server **localhost**, port **6667**, and the same username with no password. It should be the same as using almost any other IRC client. Each channel will be split into a new buffer (window), and you can switch between them with **Ctrl**+**x Ctrl**+**b**, which also switches between other buffers in Emacs. The **/quit** command will exit ERC. + +#### Read email, Reddit, and RSS feeds with Gnus + +![Mail, Reddit, and RSS feeds with Gnus][13] + +I'm sure many long-time Emacs users were asking, "but what about [Gnus][14]?" yesterday when I was talking about reading mail in Emacs. And it's a valid question. Gnus is a mail and newsreader built into Emacs, although it doesn't support [Notmuch][15] as a mail reader, just as a search engine. However, if you are configuring it for Reddit and RSS feeds (as you'll do in a moment), it's smart to add in mail functionality as well. + +Gnus was created for reading Usenet News and grew from there. So, a lot of its look and feel (and terminology) seem a lot like a Usenet newsreader. + +Gnus has its own configuration file in **~/.gnus** (the configuration can also be included in the main **~/.emacs.d/init.el**). Open **~/.gnus** with **Ctrl**+**x Ctrl**+**f** and add the following: + + +``` +;; Required packages +(require 'nnir) +(require 'nnrss) + +;; Primary Mailbox +(setq gnus-select-method +      '(nnmaildir "Local" +                  (directory "~/Maildir") +                  (nnir-search-engine notmuch) +      )) +(add-to-list 'gnus-secondary-select-methods +             '(nnreddit "")) +``` + +Save the file with **Ctrl**+**x Ctrl**+**s**. This tells Gnus to read mail from the local mailbox in **~/Maildir** as the primary source (**gnus-select-method**) and add a second source (**gnus-secondary-select-methods**) using the [nnreddit][16] plugin. You can also define multiple secondary sources, including Usenet News (nntp), IMAP (nnimap), mbox (nnmbox), and virtual collections (nnvirtual). You can learn more about all the options in the [Gnus manual][17]. + +Once you save the file, start Gnus with **Meta**+**x gnus**. The first run will install [Reddit Terminal Viewer][18] in a Python virtual environment, which is how it gets Reddit articles. It will then launch your browser to log into Reddit. After that, it will scan and load your subscribed Reddit groups. You will see a list of email folders with new mail and the list of subreddits with new content. Pressing **Enter** on any of them will load the list of messages for the group. You can navigate with the **Arrow** keys and press **Enter** to load and read a message. Pressing **q** will go back to the prior view when viewing message lists, and pressing **q** from the main window will exit Gnus. When reading a Reddit group, **a** creates a new message; in a mail group, **m** creates a new email; and **r** replies to messages in either view. + +You can also add RSS feeds to the Gnus interface and read them like mail and newsgroups. To add an RSS feed, type **G**+**R** and fill in the RSS feed's URL. 
You will be prompted for the title and description of the feed, which should be auto-filled from the feed. Now type **g** to check for new messages (this checks for new messages in all groups). Reading a feed is like reading Reddit groups and mail, so it uses the same keys. + +There is a _lot_ of functionality in Gnus, and there are a whole lot more key combinations. The [Gnus Reference Card][19] lists all of them for each view (on five pages in very small type). + +#### See your position with nyan-mode + +As a final note, you might notice [Nyan cat][20] at the bottom of some of my screenshots. This is [nyan-mode][21], which indicates where you are in a buffer, so it gets longer as you get closer to the bottom of a document or buffer. You can install it with the package manager and set it up with the following code in **~/.emacs.d/init.el**: + + +``` +;; Nyan Cat +(setq nyan-wavy-trail t) +(setq nyan-bar-length 20) +(nyan-mode) +``` + +### Scratching Emacs' surface + +This is just scratching the surface of all the things you can do with Emacs. It is _very_ powerful, and it is one of my go-to tools for being productive whether I'm tracking to-dos, reading and responding to mail, editing text, or chatting with my friends and co-workers. It takes a bit of getting used to, but once you do, it can become one of the most useful tools on your desktop. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/1/emacs-social-track-todo-list + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat) +[2]: https://opensource.com/article/20/1/emacs-mail-calendar +[3]: https://opensource.com/sites/default/files/uploads/productivity_19-1.png (All the things with Emacs) +[4]: https://opensource.com/sites/default/files/uploads/productivity_19-2.png (Twitter in Emacs) +[5]: https://github.com/hayamiz/twittering-mode +[6]: https://github.com/hayamiz/twittering-mode#features +[7]: https://opensource.com/sites/default/files/uploads/productivity_19-3.png (todo.txt in emacs) +[8]: https://github.com/rpdillon/todotxt.el +[9]: http://todotxt.org/ +[10]: https://opensource.com/sites/default/files/uploads/productivity_19-4.png (Chatting with erc) +[11]: https://www.gnu.org/software/emacs/manual/html_mono/erc.html +[12]: https://opensource.com/article/20/1/open-source-chat-tool +[13]: https://opensource.com/sites/default/files/uploads/productivity_19-5.png (Mail, Reddit, and RSS feeds with Gnus) +[14]: https://www.gnus.org/ +[15]: https://opensource.com/article/20/1/organize-email-notmuch +[16]: https://github.com/dickmao/nnreddit +[17]: https://www.gnus.org/manual/gnus.html +[18]: https://pypi.org/project/rtv/ +[19]: https://www.gnu.org/software/emacs/refcards/pdf/gnus-refcard.pdf +[20]: http://www.nyan.cat/ +[21]: https://github.com/TeMPOraL/nyan-mode From a2672d1ae8dd70fcb0cb8186a296a66fe0b24290 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Jan 2020 00:59:23 +0800 Subject: [PATCH 07/10] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200129=207=20open?= =?UTF-8?q?=20source=20desktop=20tools:=20Download=20our=20new=20eBook?= MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200129 7 open source desktop tools- Download our new eBook.md --- ...e desktop tools- Download our new eBook.md | 52 +++++++++++++++++++ 1 file changed, 52 insertions(+) create mode 100644 sources/tech/20200129 7 open source desktop tools- Download our new eBook.md diff --git a/sources/tech/20200129 7 open source desktop tools- Download our new eBook.md b/sources/tech/20200129 7 open source desktop tools- Download our new eBook.md new file mode 100644 index 0000000000..303f86919c --- /dev/null +++ b/sources/tech/20200129 7 open source desktop tools- Download our new eBook.md @@ -0,0 +1,52 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (7 open source desktop tools: Download our new eBook) +[#]: via: (https://opensource.com/article/20/1/open-source-desktop-tools-guide) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +7 open source desktop tools: Download our new eBook +====== +Choice is more than a feature of Linux; it's a way of life thanks to a +wealth of open source tools. +![Browser of things][1] + +Linux users say that choice is one of the platform's strengths. On the surface, this might sound self-aggrandizing (or self-deprecating, depending on your perspective). Other operating systems offer choice, too, but once you look at the options available for nearly anything you want to do on Linux, it doesn't take long to conclude that a new word ought to be invented for what we mean by "choice." + +User choice isn't a "feature" of Linux; it's a way of life. Whether you're looking for a whole new desktop or just a new system tray, Linux hackers provide you options. You might also be able to hack some simple commands together to create a batch processor for yourself—and you might publish it online for others, thereby contributing to the array of choice. + +With so many options available, it can be a real challenge to find the solutions you prefer. One of the most effective ways to discover cool new things in the Linux world is through personal recommendation. That's one of the many reasons Opensource.com covers what might seem like random applications—through sharing your experiences with software, others can discover new applications to love without the pain of rummaging through piles of choice. + +### Sharing and open source + +Obviously, you can share software _recommendations_ with friends, whether the software is open source or not. However, in the proprietary world, you can't share the software that you're recommending, and in the world of proprietary software as a service (SaaS), part of the act of sharing is the key component to a pyramid scheme for more user data. It's not quite the same as the no-strings-attached gift of open source. + +Sharing is an integral part of free and open source software. It's one of the [four freedoms][2] defined by the Free Software Foundation, and it's the central concern of [Creative Commons][3]. + +While it's easy to fall into the trap of viewing open source sharing as something that applies only to lines of sometimes cryptic-looking code, it goes well beyond that. Sharing is almost endemic to open culture, explicitly allowing and encouraging it on every level, from code, to tutorials and tips, to physical redistribution of a wealth of common goods and services. Part of that is the simple act of telling others about a cool technology that has improved the way we work and live. 
+ +### Download the eBook + +Opensource.com contributor and productivity aficionado Kevin Sonney has shared many of his favorite desktop applications in our latest eBook, [7 open source desktop tools][4]. As is often the case in the open source world, he doesn't just share his knowledge about his favorite desktop tools, he explains how and why he chooses those tools to help you can evaluate them for yourself. Download it today! + +### [Download the 7 open source desktop tools eBook][4] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/1/open-source-desktop-tools-guide + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things) +[2]: https://www.gnu.org/philosophy/free-sw.en.html +[3]: https://creativecommons.org +[4]: https://opensource.com/downloads/desktop-tools From f6c1490ac95dd5eef58654e31002d511b1c0b16a Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Jan 2020 01:00:31 +0800 Subject: [PATCH 08/10] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200130=20Intel=20?= =?UTF-8?q?denies=20reports=20of=20Xeon=20shortage?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/talk/20200130 Intel denies reports of Xeon shortage.md --- ...0 Intel denies reports of Xeon shortage.md | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 sources/talk/20200130 Intel denies reports of Xeon shortage.md diff --git a/sources/talk/20200130 Intel denies reports of Xeon shortage.md b/sources/talk/20200130 Intel denies reports of Xeon shortage.md new file mode 100644 index 0000000000..fce436ea0a --- /dev/null +++ b/sources/talk/20200130 Intel denies reports of Xeon shortage.md @@ -0,0 +1,64 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Intel denies reports of Xeon shortage) +[#]: via: (https://www.networkworld.com/article/3516392/intel-denies-reports-of-xeon-shortage.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Intel denies reports of Xeon shortage +====== +The PC side of Intel's Xeon processor supply remains constrained, but server customers should get their orders this year. +Intel + +Intel has denied reports that its Xeon supply chain is suffering the same constraints as its PC desktop/laptop business. CEO Bob Swan said during the company's recent earnings call that its inventory was depleted but customers are getting orders. + +The issue blew up last week when HPE – one of Intel's largest server OEM partners – reportedly [told UK-based publication The Register][1] that there were supply constraints with Cascade Lake processors, the most recent generation of Xeon Scalable processors, and urged HPE customers "to consider alternative processors." HPE did not clarify if it meant Xeon processors other than Cascade Lake or AMD Epyc processors. + +AMD must have loved that. 
+ +[][2] + +BrandPost Sponsored by HPE + +[Take the Intelligent Route with Consumption-Based Storage][2] + +Combine the agility and economics of HPE storage with HPE GreenLake and run your IT department with efficiency. + +At the time, Intel was in the quiet period prior to announcing fourth quarter 2019 results, so when I initially approached them for comment, company executives could not answer. But on last week’s earnings call, Swan set the record straight. While supply of desktop CPUs remains constrained, especially on the low-end, Xeon supply is in “pretty good shape,” as he put it, even after a 19% growth in demand for the quarter. + +“When you have that kind of spike in demand, we are not perfect across all products or all SKUs. But server CPUs, we really prioritize that and try to put ourselves in a position where we are not constrained, and we are in pretty good shape. Pretty great shape, macro. Micro, a few challenges here and there. But server CPU supply is pretty good,” he [said on an earnings call][3] with Wall Street analysts. + +Intel CFO George Davis added that supply is expected to improve in the second half of this year, across the board, thanks to an expansion of production capacity. "In the second half of the year we would expect to be able to bring both our server products and, most importantly, our PC products back to a more normalized inventory level," Davis said. + +Intel’s data center group had record revenue of $7.2 billion in Q4 2019, up 19% from Q4 2018. In particular, cloud revenue was up 48% year-over-year as cloud service providers continue building out crazy levels of capacity. + +**[ Check out our [12 most powerful hyperconverged infrasctructure vendors][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]** + +Hyperscalers like Amazon and Google are building data centers the size of football stadiums and filling them with tens of thousands of servers at a time. I’ve heard concerns about this trend of a half-dozen or so companies hoovering up all of the supply of CPUs, memory, flash and traditional disk, and so on, but so far any real shortages have not come to pass. + +Perhaps not surprisingly, Intel's enterprise and government revenue was down 7% as more and more companies reduce their data center footprint, while communication and service providers' revenue grew 14% as customers continue to adopt AI-based solutions to transform their networks and transition to 5G. + +Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3516392/intel-denies-reports-of-xeon-shortage.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://www.theregister.co.uk/2020/01/20/intel_hpe_xeon_shortage/ +[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage) +[3]: https://seekingalpha.com/article/4318803-intel-corporation-intc-ceo-bob-swan-on-q4-2019-results-earnings-call-transcript?part=single +[4]: https://www.networkworld.com/article/3112622/hardware/12-most-powerful-hyperconverged-infrastructure-vendors.htmll +[5]: https://www.networkworld.com/newsletters/signup.html +[6]: https://www.facebook.com/NetworkWorld/ +[7]: https://www.linkedin.com/company/network-world From ccd604a3bb7a5a7069f382cdac1e7fe8a0753353 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Jan 2020 01:03:09 +0800 Subject: [PATCH 09/10] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200129=20You=20ca?= =?UTF-8?q?n=20now=20have=20a=20Mac=20Pro=20in=20your=20data=20center?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/talk/20200129 You can now have a Mac Pro in your data center.md --- ... now have a Mac Pro in your data center.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/talk/20200129 You can now have a Mac Pro in your data center.md diff --git a/sources/talk/20200129 You can now have a Mac Pro in your data center.md b/sources/talk/20200129 You can now have a Mac Pro in your data center.md new file mode 100644 index 0000000000..f779534e5f --- /dev/null +++ b/sources/talk/20200129 You can now have a Mac Pro in your data center.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (You can now have a Mac Pro in your data center) +[#]: via: (https://www.networkworld.com/article/3516490/you-can-now-have-a-mac-pro-in-your-data-center.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +You can now have a Mac Pro in your data center +====== +The company that once eschewed the enterprise now has a server version of the Mac Pro. Apple's rack-mountable Mac Pro starts at $6,499. +Apple + +Steve Jobs rather famously said he hated the enterprise because the people who use the product have no say in its purchase. Well, Apple's current management has adopted the enterprise, ever so slowly, and is now shipping its first server in years. Sort of. + +Apple introduced a new version of the Mac Pro in December 2019, after a six-year gap in releases, and said it would make the computer rack-mountable for data centers. But at the time, all the attention was on the computer’s aesthetics, because it looked like a cheese grater. The other bit of focus was on the price; a fully decked Mac Pro cost an astronomical $53,799. Granted, that did include specs like 1.5TB of DRAM and 8TB of SSD storage. Those are impressive specs for a server, although the price is still a little crazy. 

Earlier this month, Apple quietly delivered on the promise to make the Mac Pro rack-mountable. The Mac Pro rack configuration comes with a $500 premium over the cost of the standing tower, which means it starts at $6,499.

That gives you an 8-core Intel Xeon W CPU, 32GB of memory, a Radeon Pro 580X GPU, and 256GB of SSD storage. Most importantly, it gives you the rack mounting rails (which ship in a separate box for some reason) needed to install it in a cabinet. Once installed, the Mac Pro is roughly the size of a 4U server.

Mac Pros are primarily used in production facilities, where they are paired with other audio and video production hardware. MacStadium, a Mac cloud-hosting provider with its own data centers, has been installing and testing the servers and thus far has had high praise for both the [ease of install][2] and [performance][3].

The server-ready version features a slight difference in its case, according to people who have tested it. The twist handle on the Mac Pro case is replaced with two lock switches that allow the case to be removed to access the internal components. It comes with two Thunderbolt 3 ports and a power button.

The Mac Pro may be expensive, but you get a lot of performance for your money. Popular YouTube Mac enthusiast Marques Brownlee [tested it out][4] on an 8K resolution video encoding job. Brownlee found that a MacBook Pro took 20 minutes to render the five-minute-long video, an iMac Pro desktop took 12 minutes, and the Mac Pro processed the video in 4:20. So the Mac Pro encoded 8K resolution video faster than real time.

**[ Learn [how server disaggregation can boost data center efficiency][5] and [how Windows Server 2019 embraces hyperconverged data centers][6]. | Get regularly scheduled insights by [signing up for Network World newsletters][7]. ]**

Apple’s last server was the Xserve, killed off in 2010 after several years of neglect. Instead, it made a version of MacOS for the whole Mac line that would let the hardware be run as a server, which is exactly what the new rack-mountable version of the Mac Pro is.

MacStadium is running benchmarks such as Node.js, a JavaScript runtime. It will be interesting to see if anyone outside of audio/video encoding uses a Mac Pro in their data centers.

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind. 
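
For readers who want to see the arithmetic behind "faster than real time": a five-minute clip is 300 seconds and a 4:20 encode is 260 seconds, so the Mac Pro was working at roughly 1.15x real time. The one-liner below is only an illustrative check built from the figures quoted above; it is not part of Brownlee's test.

```
# Back-of-the-envelope check using the numbers cited in this article:
# a 5-minute (300-second) clip encoded in 4 minutes 20 seconds (260 seconds).
awk 'BEGIN { clip = 5*60; encode = 4*60 + 20; printf "%.2fx real time\n", clip/encode }'
```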
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3516490/you-can-now-have-a-mac-pro-in-your-data-center.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage) +[2]: https://twitter.com/brianstucki/status/1219299028791226368 +[3]: https://blog.macstadium.com/blog/2019-mac-pros-at-macstadium +[4]: https://www.youtube.com/watch?v=DOPswcaSsu8&t= +[5]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html +[6]: https://www.networkworld.com/article/3263718/software/windows-server-2019-embraces-hybrid-cloud-hyperconverged-data-centers-linux.html +[7]: https://www.networkworld.com/newsletters/signup.html +[8]: https://www.facebook.com/NetworkWorld/ +[9]: https://www.linkedin.com/company/network-world From 176f883711443e3ecb50417b5a080f73f14aa849 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Jan 2020 01:05:20 +0800 Subject: [PATCH 10/10] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200129=20Showing?= =?UTF-8?q?=20memory=20usage=20in=20Linux=20by=20process=20and=20user?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200129 Showing memory usage in Linux by process and user.md --- ...mory usage in Linux by process and user.md | 194 ++++++++++++++++++ 1 file changed, 194 insertions(+) create mode 100644 sources/tech/20200129 Showing memory usage in Linux by process and user.md diff --git a/sources/tech/20200129 Showing memory usage in Linux by process and user.md b/sources/tech/20200129 Showing memory usage in Linux by process and user.md new file mode 100644 index 0000000000..8e21baf042 --- /dev/null +++ b/sources/tech/20200129 Showing memory usage in Linux by process and user.md @@ -0,0 +1,194 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Showing memory usage in Linux by process and user) +[#]: via: (https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Showing memory usage in Linux by process and user +====== +There are several commands for checking up on memory usage in a Linux system, and here are some of the better ones. +[Fancycrave][1] [(CC0)][2] + +There are a lot of tools for looking at memory usage on Linux systems. Some are commonly used commands like **free** and **ps** while others are tools like **top** that allow you to display system performance stats in various ways. In this post, we’ll look at some commands that can be most helpful in identifying the users and processes that are using the most memory. + +Here are some that address memory usage by process. + +### Using top + +One of the best commands for looking at memory usage is **top**. 
One extremely easy way to see what processes are using the most memory is to start **top** and then press **shift+m** to switch the order of the processes shown to rank them by the percentage of memory each is using. Once you’ve entered **shift+m**, your top output should reorder the task entries to look something like this: + +[][3] + +BrandPost Sponsored by HPE + +[Take the Intelligent Route with Consumption-Based Storage][3] + +Combine the agility and economics of HPE storage with HPE GreenLake and run your IT department with efficiency. + +``` +$top +top - 09:39:34 up 5 days, 3 min, 3 users, load average: 4.77, 4.43, 3.72 +Tasks: 251 total, 3 running, 247 sleeping, 1 stopped, 0 zombie +%Cpu(s): 50.6 us, 35.9 sy, 0.0 ni, 13.4 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st +MiB Mem : 5944.4 total, 128.9 free, 2509.3 used, 3306.2 buff/cache +MiB Swap: 2048.0 total, 2045.7 free, 2.2 used. 3053.5 avail Mem + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 400 nemo 20 0 3309580 550188 168372 S 0.3 9.0 1:33.27 Web Content +32469 nemo 20 0 3492840 447372 163296 S 7.3 7.3 3:55.60 firefox +32542 nemo 20 0 2845732 433388 140984 S 6.0 7.1 4:11.16 Web Content + 342 nemo 20 0 2848520 352288 118972 S 10.3 5.8 4:04.89 Web Content + 2389 nemo 20 0 1774412 236700 90044 S 39.7 3.9 9:32.64 vlc +29527 nemo 20 0 2735792 225980 84744 S 9.6 3.7 3:02.35 gnome-shell +30497 nemo 30 10 1088476 159636 88884 S 0.0 2.6 0:11.99 update-manager +30058 nemo 20 0 1089464 140952 33128 S 0.0 2.3 0:04.58 gnome-software +32533 nemo 20 0 2389088 104712 79544 S 0.0 1.7 0:01.43 WebExtensions + 2256 nemo 20 0 1217884 103424 31304 T 0.0 1.7 0:00.28 vlc + 1713 nemo 20 0 2374396 79588 61452 S 0.0 1.3 0:00.49 Web Content +29306 nemo 20 0 389668 74376 54340 S 2.3 1.2 0:57.25 Xorg +32739 nemo 20 0 289528 58900 34480 S 1.0 1.0 1:04.08 RDD Process +29732 nemo 20 0 789196 57724 42428 S 0.0 0.9 0:00.38 evolution-alarm + 2373 root 20 0 150408 57000 9924 S 0.3 0.9 10:15.35 nessusd +``` + +Notice the **%MEM** ranking. The list will be limited by your window size, but the most significant processes with respect to memory usage will show up at the top of the process list. + +### Using ps + +The **ps** command includes a column that displays memory usage for each process. To get the most useful display for viewing the top memory users, however, you can pass the **ps** output from this command to the **sort** command. Here’s an example that provides a very useful display: + +``` +$ ps aux | sort -rnk 4 | head -5 +nemo 400 3.4 9.2 3309580 563336 ? Sl 08:59 1:36 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 32469 8.2 7.7 3492840 469516 ? Sl 08:54 4:15 /usr/lib/firefox/firefox -new-window +nemo 32542 8.9 7.6 2875428 462720 ? Sl 08:55 4:36 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 342 9.9 5.9 2854664 363528 ? 
Sl 08:59 4:44 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4 +``` + +In the example above (truncated for this post), sort is being used with the **-r** (reverse), the **-n** (numeric) and the **-k** (key) options which are telling the command to sort the output in reverse numeric order based on the fourth column (memory usage) in the output from **ps**. If we first display the heading for the **ps** output, this is a little easier to see. + +``` +$ ps aux | head -1; ps aux | sort -rnk 4 | head -5 +USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND +nemo 400 3.4 9.2 3309580 563336 ? Sl 08:59 1:36 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 32469 8.2 7.7 3492840 469516 ? Sl 08:54 4:15 /usr/lib/firefox/firefox -new-window +nemo 32542 8.9 7.6 2875428 462720 ? Sl 08:55 4:36 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4 +``` + +If you like this command, you can set it up as an alias with a command like the one below. Don't forget to add it to your ~/.bashrc file if you want to make it permanent. + +``` +$ alias mem-by-proc="ps aux | head -1; ps aux | sort -rnk 4" +``` + +Here are some commands that reveal memory usage by user. + +### Using top + +Examining memory usage by user is somewhat more complicated because you have to find a way to group all of a user’s processes into a single memory-usage total. + +If you want to home in on a single user, **top** can be used much in the same way that it was used above. Just add a username with the -U option as shown below and press the **shift+m** keys to order by memory usage: + +``` +$ top -U nemo +top - 10:16:33 up 5 days, 40 min, 3 users, load average: 1.91, 1.82, 2.15 +Tasks: 253 total, 2 running, 250 sleeping, 1 stopped, 0 zombie +%Cpu(s): 28.5 us, 36.8 sy, 0.0 ni, 34.4 id, 0.3 wa, 0.0 hi, 0.0 si, 0.0 st +MiB Mem : 5944.4 total, 224.1 free, 2752.9 used, 2967.4 buff/cache +MiB Swap: 2048.0 total, 2042.7 free, 5.2 used. 
2812.0 avail Mem + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 400 nemo 20 0 3315724 623748 165440 S 1.0 10.2 1:48.78 Web Content +32469 nemo 20 0 3629380 607492 161688 S 2.3 10.0 6:06.89 firefox +32542 nemo 20 0 2886700 404980 136648 S 5.6 6.7 6:50.01 Web Content + 342 nemo 20 0 2922248 375784 116096 S 19.5 6.2 8:16.07 Web Content + 2389 nemo 20 0 1762960 234644 87452 S 0.0 3.9 13:57.53 vlc +29527 nemo 20 0 2736924 227260 86092 S 0.0 3.7 4:09.11 gnome-shell +30497 nemo 30 10 1088476 156372 85620 S 0.0 2.6 0:11.99 update-manager +30058 nemo 20 0 1089464 138160 30336 S 0.0 2.3 0:04.62 gnome-software +32533 nemo 20 0 2389088 102532 76808 S 0.0 1.7 0:01.79 WebExtensions +``` + +### Using ps + +You can also use a **ps** command to rank an individual user's processes by memory usage. In this example, we do this by selecting a single user's processes with a **grep** command: + +``` +$ ps aux | head -1; ps aux | grep ^nemo| sort -rnk 4 | more +USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND +nemo 32469 7.1 11.5 3724364 701388 ? Sl 08:54 7:21 /usr/lib/firefox/firefox -new-window +nemo 400 2.0 8.9 3308556 543232 ? Sl 08:59 2:01 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni/usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 32542 7.9 7.1 2903084 436196 ? Sl 08:55 8:07 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 342 10.8 7.0 2941056 426484 ? Rl 08:59 10:45 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab +nemo 2389 16.9 3.8 1762960 234644 pts/1 Sl+ 09:15 13:57 vlc videos/edge_computing.mp4 +nemo 29527 3.9 3.7 2736924 227448 ? Ssl 08:50 4:11 /usr/bin/gnome-shell +``` + +### Using ps along with other commands + +What gets complicated is when you want to compare users' memory usages with each other. In that case, creating a by-user total and ranking them is a good technique, but it requires a little more work and uses a number of commands. In the script below, we get a list of users with the **ps aux | grep -v COMMAND | awk '{print $1}' | sort -u** command. This includes system users like **syslog**. We then collect stats for each user and total the memory usage stat for each task with **awk**. As a last step, we display each user's memory usage sum in numerical (largest first) order. + +``` +#!/bin/bash + +stats=”” +echo "% user" +echo "============" + +# collect the data +for user in `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u` +do + stats="$stats\n`ps aux | egrep ^$user | awk 'BEGIN{total=0}; \ + {total += $4};END{print total,$1}'`" +done + +# sort data numerically (largest first) +echo -e $stats | grep -v ^$ | sort -rn | head +``` + +Output from this script might look like this: + +``` +$ ./show_user_mem_usage +% user +============ +69.6 nemo +5.8 root +0.5 www-data +0.3 shs +0.2 whoopsie +0.2 systemd+ +0.2 colord +0.2 clamav +0 syslog +0 rtkit +``` + +There are a lot of ways to report on memory usage on Linux. 
Focusing on which processes and users are consuming the most memory can benefit from a few carefully crafted tools and commands. + +Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://unsplash.com/photos/37LPYOkEE2o +[2]: https://creativecommons.org/publicdomain/zero/1.0/ +[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage) +[4]: https://www.facebook.com/NetworkWorld/ +[5]: https://www.linkedin.com/company/network-world
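
A closing note on the per-user totals script above: the same ranking can be produced in a single pass, because awk can accumulate the %MEM column for every user while ps runs only once. The snippet below is a sketch of that alternative rather than something from the original write-up, so treat it as a starting point and adjust the field numbers if your ps output differs.

```
# Illustrative one-pass variant: skip the ps header (NR > 1), add up %MEM
# (field 4) per user (field 1), then sort the per-user totals, largest first.
ps aux | awk 'NR > 1 { mem[$1] += $4 } END { for (u in mem) printf "%.1f %s\n", mem[u], u }' | sort -rn | head
```

Because ps is invoked once instead of once per user, this tends to be noticeably faster on busy systems, while producing the same style of output as the loop-based script.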