Merge pull request #4 from LCTT/master

sync from origin
qianmingtian 2020-02-14 18:35:16 +08:00 committed by GitHub
commit d66dad95a5
85 changed files with 7846 additions and 2289 deletions


@ -0,0 +1,159 @@
MidnightBSD 或许是你通往 FreeBSD 的大门
======
![](https://www.linux.com/wp-content/uploads/2019/08/midnight_4_0.jpg)
[FreeBSD][1] 是一个开源操作系统,衍生自著名的 <ruby>[伯克利软件套件][2]<rt>Berkeley Software Distribution</rt></ruby>BSD。FreeBSD 的第一个版本发布于 1993 年并且仍然在继续发展。2007 年左右Lucas Holt 想要利用 [GnuStep][3]OpenStep现在是 Cocoa的 Objective-C 框架、widget 工具包和应用程序开发工具的自由实现)来创建一个 FreeBSD 的分支。为此,他开始开发 MidnightBSD 桌面发行版。
MidnightBSD以 Lucas 的猫 Midnight 命名)仍然在积极地(尽管缓慢)开发。从 2017 年 8 月开始可以获得最新的稳定发布版本0.8.6LCTT 译注:截止至本译文发布时,当前是 2019/10/31 发布的 1.2 版)。尽管 BSD 发行版算不上用户友好型发行版但上手安装是熟悉如何处理文本ncurses安装过程、以及如何通过命令行完成安装的好方法。
这样,你最终会得到一个非常可靠的 FreeBSD 分支的桌面发行版。这需要花费一点精力,但是如果你是一名正在寻找扩展你的技能的 Linux 用户……这是一个很好的起点。
我将带你走过安装 MidnightBSD 的流程,如何添加一个图形桌面环境,然后如何安装应用程序。
### 安装
正如我所提到的这是一个文本ncurses安装过程因此在这里找不到可以用鼠标点击的地方。相反你将使用你键盘的 `Tab` 键和箭头键。在你下载[最新的发布版本][4]后,将它刻录到一个 CD/DVD 或 USB 驱动器,并启动你的机器(或者在 [VirtualBox][5] 中创建一个虚拟机)。安装程序将打开并给你三个选项(图 1。使用你的键盘的箭头键选择 “Install”并敲击回车键。
![MidnightBSD installer][6]
*图 1: 启动 MidnightBSD 安装程序。*
在这里要经历相当多的屏幕。其中很多屏幕是一目了然的:
1. 设置非默认键盘映射(是/否)
2. 设置主机名称
3. 添加可选系统组件文档、游戏、32 位兼容性、系统源码代码)
4. 对硬盘分区
5. 管理员密码
6. 配置网络接口
7. 选择地区(时区)
8. 启用服务(例如 ssh
9. 添加用户(图 2
![Adding a user][7]
*图 2: 向系统添加一个用户。*
在你向系统添加用户后,你将进入一个窗口中(图 3在这里你可以处理任何你可能忘记配置或者想重新配置的东西。如果你不需要作出任何更改选择 “Exit”然后你的配置就会被应用。
![Applying your configurations][8]
*图 3: 应用你的配置。*
在接下来的窗口中,当出现提示时,选择 “No”接下来系统将重启。在 MidnightBSD 重启后,你已经为下一阶段的安装做好了准备。
### 安装后阶段
当你新安装的 MidnightBSD 启动时你会发现自己处于命令行提示符下。此刻还没有图形界面。要安装应用程序MidnightBSD 依赖于 `mport` 工具。比如说你想安装 Xfce 桌面环境。为此,登录到 MidnightBSD 中,并发出下面的命令:
```
sudo mport index
sudo mport install xorg
```
你现在已经安装好 Xorg 窗口服务器了,它允许你安装桌面环境。使用下面的命令来安装 Xfce
```
sudo mport install xfce
```
现在 Xfce 已经安装好。不过,我们还需要让它能通过命令 `startx` 启动。为此,让我们先安装 nano 编辑器。发出命令:
```
sudo mport install nano
```
随着 nano 安装好,发出命令:
```
nano ~/.xinitrc
```
这个文件仅包含一行内容:
```
exec startxfce4
```
保存并关闭这个文件。如果你现在发出命令 `startx`, Xfce 桌面环境将会启动。你应该会感到有点熟悉了吧(图 4
![ Xfce][9]
*图 4: Xfce 桌面界面已准备好服务。*
你不会总是想手动发出命令 `startx`,因此你可能希望启用登录守护进程。然而,它默认并没有安装。要安装这个子系统,发出命令:
```
sudo mport install mlogind
```
当完成安装后,通过在 `/etc/rc.conf` 文件中添加一个项目来在启动时启用 mlogind。在 `rc.conf` 文件的底部,添加以下内容:
```
mlogind_enable=”YES”
```
保存并关闭该文件。现在,当你启动(或重启)机器时,你应该会看到图形登录屏幕。在写这篇文章的时候,在登录后我最后得到一个空白屏幕和讨厌的 X 光标。不幸的是,目前似乎并没有这个问题的解决方法。所以,要访问你的桌面环境,你必须使用 `startx` 命令。
### 安装应用
默认情况下,你找不到多少可用的应用程序。如果你尝试使用 `mport` 安装应用程序,你很快就会感到沮丧,因为只能找到很少的应用程序。为解决这个问题,我们需要使用 `svnlite` 命令来检出可用的 mport 软件列表。回到终端窗口,并发出命令:
```
svnlite co http://svn.midnightbsd.org/svn/mports/trunk mports
```
在你完成这些后,你应该看到一个命名为 `~/mports` 的新目录。使用命令 `cd ~/mports` 更改到这个目录。发出 `ls` 命令,然后你应该看到许多的类别(图 5
![applications][10]
*图 5: mport 现在可用的应用程序类别。*
你想安装 Firefox 吗?如果你查看 `www` 目录,你将看到一个 `linux-firefox` 条目。发出命令:
```
sudo mport install linux-firefox
```
现在你应该会在 Xfce 桌面菜单中看到一个 Firefox 项。翻找所有的类别,并使用 `mport` 命令来安装你需要的所有软件。
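浏览这些类别时,普通的文件工具就足够了。下面是一个示意(假设软件列表检出在 `~/mports`
```
ls ~/mports                          # 列出所有软件类别
ls ~/mports/www | grep -i firefox    # 在 www 类别中查找 Firefox 相关条目
```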
### 一个悲哀的警告
一个悲哀的小警告是,通过 `mport`(借助 `svnlite`)能找到的唯一一个办公套件是 OpenOffice 3这已经非常过时了。尽管在 `~/mports/editors` 目录中能找到 Abiword但是它看起来不能安装。甚至在安装 OpenOffice 3 后,它也会输出一个执行格式错误。换句话说,你不能使用 MidnightBSD 在办公生产效率方面做很多的事情。但是,嘿嘿,如果你周围正好有一个旧的 Palm Pilot你可以安装 pilot-link。换句话说可用的软件不足以构成一个极其有用的桌面发行版……至少对普通用户不是。但是如果你想在 MidnightBSD 上开发,你将找到很多可用的工具可以安装(查看 `~/mports/devel` 目录)。你甚至可以使用命令安装 Drupal
```
sudo mport install drupal7
```
当然在此之后你将需要创建一个数据库MySQL 已经安装)、安装 Apache`sudo mport install apache24`),并配置必要的 Apache 配置。
显然地,已安装的和可以安装的是一个应用程序、系统和服务的大杂烩。但是随着足够多的工作,你最终可以得到一个能够服务于特殊目的的发行版。
### 享受 \*BSD 的优良之处
以上就是安装 MidnightBSD 并使其运行成为某种有用的桌面发行版的方法。它不像很多其它的 Linux 发行版一样快速简便,但是如果你想要一个促使你思考的发行版,这可能正是你正在寻找的。尽管大多数竞争对手都准备了很多可以安装的应用软件,但 MidnightBSD 无疑是一个 Linux 爱好者或管理员应该尝试的有趣挑战。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/5/midnightbsd-could-be-your-gateway-freebsd
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.freebsd.org/
[2]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[3]:https://en.wikipedia.org/wiki/GNUstep
[4]:http://www.midnightbsd.org/download/
[5]:https://www.virtualbox.org/
[6]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_1.jpg (MidnightBSD installer)
[7]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_2.jpg (Adding a user)
[8]:https://lcom.static.linuxfound.org/sites/lcom/files/mightnight_3.jpg (Applying your configurations)
[9]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_4.jpg (Xfce)
[10]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_5.jpg (applications)
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


@ -0,0 +1,236 @@
[#]: collector: (lujun9972)
[#]: translator: (svtter)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11889-1.html)
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
通过 Git 来管理多媒体文件
======
> 在我们有关 Git 鲜为人知的用法系列的最后一篇文章中,了解如何使用 Git 跟踪项目中的大型多媒体文件。
![](https://img.linux.net.cn/data/attachment/album/202002/13/235436mhub12qhxzmbw11p.png)
Git 是专用于源代码版本控制的工具。因此Git 很少被用于非纯文本的项目以及行业。然而,异步工作流的优点是十分诱人的,尤其是在一些日益增长的行业中,这种类型的行业把重要的计算和重要的艺术创作结合起来,这包括网页设计、视觉效果、视频游戏、出版、货币设计(是的,这是一个真实的行业)、教育……等等。还有许多行业属于这个类型。
在这个 Git 系列文章中,我们分享了六种鲜为人知的 Git 使用方法。在最后一篇文章中,我们将介绍将 Git 的优点带到管理多媒体文件的软件。
### Git 管理多媒体文件的问题
众所周知Git 用于处理非文本文件不是很好,但是这并不妨碍我们进行尝试。下面是一个使用 Git 来复制照片文件的例子:
```
$ du -hs
108K .
$ cp ~/photos/dandelion.tif .
$ git add dandelion.tif
$ git commit -m 'added a photo'
[master (root-commit) fa6caa7] two photos
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 dandelion.tif
$ du -hs
1.8M .
```
目前为止没有什么异常。增加一个 1.8MB 的照片到一个目录下,使得目录变成了 1.8 MB 的大小。所以下一步,我们尝试删除文件。
```
$ git rm dandelion.tif
$ git commit -m 'deleted a photo'
$ du -hs
828K .
```
在这里我们可以看到有些问题:删除一个已经被提交的文件,还是会使得存储库的大小扩大到原来的 8 倍(从 108K 到 828K。我们可以测试多次来得到一个更好的平均值但是这个简单的演示与我的经验一致。提交非文本文件在一开始花费空间比较少但是一个工程活跃的时间越长人们对静态内容的修改就会越多更多的零碎文件会被累积到一起。当一个 Git 存储库变得越来越大,主要的成本往往是速度。拉取和推送的时间,会从最初抿一口咖啡的工夫,变成让你觉得你可能断网了。
静态内容导致 Git 存储库的体积不断扩大的原因是什么呢?那些由文本构成的文件,允许 Git 只拉取那些修改过的部分。光栅图像以及音乐文件对 Git 而言与文本文件不同,你可以查看一下 .png 和 .wav 文件中的二进制数据。所以Git 只不过是获取了全部的数据,并且创建了一个新的副本,哪怕是一张图仅仅修改了一个像素。
### Git-portal
在实践中,许多多媒体项目不需要或者不想追踪媒体的历史记录。相对于文本或者代码的部分,项目的媒体部分一般有一个不同的生命周期。媒体资源一般按一个方向产生:一张图片从铅笔草稿开始,以数字绘画的形式抵达它的目的地。然后,尽管文本能够回滚到早期的版本,但是艺术制品只会一直向前发展。工程中的媒体很少被绑定到一个特定的版本。例外情况通常是反映数据集的图形,通常是可以用基于文本的格式(如 SVG完成的表、图形或图表。
所以在许多同时包含文本无论是叙事散文还是代码和媒体的工程中Git 是一个用于文件管理的,可接受的解决方案,只要有一个在版本控制循环之外的游乐场来给艺术家游玩就行。
![Graphic showing relationship between art assets and Git][2]
一个启用这个特性的简单方法是 [Git-portal][3],这是一个带有 Git 钩子的 Bash 脚本,它可以将静态文件从文件夹中移出 Git 的范围并用符号链接来取代它们。Git 提交的是符号链接文件有时候称作别名或快捷方式这种文件比较小所以所有的提交都是文本文件和那些代表媒体文件的链接。因为替身文件是符号链接所以工程还会像预期的那样运行因为本地机器会处理它们转换成“真实的”副本。当用符号链接替换出文件时Git-portal 维护了项目的结构,因此,如果你认为 Git-portal 不适合你的项目,或者你需要构建项目的一个没有符号链接的版本(比如用于分发),则可以轻松地逆转该过程。
Git-portal 也允许通过 `rsync` 来远程同步静态资源,所以用户可以设置一个远程存储位置,来做为一个中心的授权源。
Git-portal 对于多媒体工程是一个理想的解决方案。类似的多媒体工程包括视频游戏、桌面游戏、需要进行大型 3D 模型渲染和纹理的虚拟现实工程、[带图][4]以及 .odt 输出的书籍、协作型的[博客站点][5]、音乐项目等等。艺术家在应用程序中执行版本控制并不少见在图形世界中以图层的形式在音乐世界中以曲目的形式因此Git 不会向多媒体项目文件本身添加任何内容。Git 的功能可用于艺术项目的其他部分(例如散文和叙述、项目管理、字幕文件、致谢、营销副本、文档等),而结构化远程备份的功能则由艺术家使用。
#### 安装 Git-portal
Git-portal 的 RPM 安装包位于 <https://klaatu.fedorapeople.org/git-portal>,可用于下载和安装。
此外,用户可以从 Git-portal 的 Gitlab 主页手动安装。这仅仅是一个 Bash 脚本以及一些 Git 钩子(也是 Bash 脚本),但是需要一个快速的构建过程来让它知道安装的位置。
```
$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
$ cd git-portal.clone
$ ./configure
$ make
$ sudo make install
```
#### 使用 Git-portal
Git-portal 与 Git 一起使用。这意味着,如同 Git 的所有大型文件扩展一样,都需要记住一些额外的步骤。但是,你仅仅需要在处理你的媒体资源的时候使用 Git-portal所以很容易记住除非你把大文件都当做文本文件来进行处理对于 Git 用户很少见)。使用 Git-portal 必须做的一个安装步骤是:
```
$ mkdir bigproject.git
$ cd !$
$ git init
$ git-portal init
```
Git-portal 的 `init` 函数在 Git 存储库中创建了一个 `_portal` 文件夹并且添加到 `.gitignore` 文件中。
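此时仓库内容大致如下(示意输出,假设仓库此前为空):
```
$ ls -a
.  ..  .git  .gitignore  _portal
$ cat .gitignore
_portal
```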
在平日里使用 Git-portal 和 Git 协同十分平滑。一个较好的例子是基于 MIDI 的音乐项目:音乐工作站产生的项目文件是基于文本的,但是 MIDI 文件是二进制数据:
```
$ ls -1
_portal
song.1.qtr
song.qtr
song-Track_1-1.mid
song-Track_1-3.mid
song-Track_2-1.mid
$ git add song*qtr
$ git-portal song-Track*mid
$ git add song-Track*mid
```
如果你查看一下 `_portal` 文件夹,你会发现那里有最初的 MIDI 文件。这些文件在原本的位置被替换成了指向 `_portal` 的链接文件,使得音乐工作站像预期一样运行。
```
$ ls -lG
[...] _portal/
[...] song.1.qtr
[...] song.qtr
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
```
与 Git 相同,你也可以添加一个目录下的文件。
```
$ cp -r ~/synth-presets/yoshimi .
$ git-portal add yoshimi
Directories cannot go through the portal. Sending files instead.
$ ls -lG _portal/yoshimi
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
```
删除功能也像预期一样工作,但是当从 `_portal` 中删除一些东西时,你应该使用 `git-portal rm` 而不是 `git rm`。使用 Git-portal 可以确保文件从 `_portal` 中删除:
```
$ ls
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
$ git-portal rm song-Track_1-3.mid
rm 'song-Track_1-3.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
```
如果你忘记使用 Git-portal那么你需要手动删除 `_portal` 下的文件:
```
$ git-portal rm song-Track_1-1.mid
rm 'song-Track_1-1.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
$ trash _portal/song-Track_1-1.mid
```
Git-portal 仅有的另一个功能,是列出当前所有的链接,并找出其中可能已经损坏的符号链接。有时这种情况会因为项目文件夹中的文件被移动而发生:
```
$ mkdir foo
$ mv yoshimi foo
$ git-portal status
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
```
如果你使用 Git-portal 用于私人项目并且维护自己的备份,以上就是关于 Git-portal 你在技术方面需要知道的全部内容了。如果你想要添加一个协作者,或者你希望 Git-portal 以类似 Git 的方式来管理备份,你可以创建一个远程位置。
#### 增加 Git-portal 远程位置
为 Git-portal 增加一个远程位置是通过 Git 已有的远程功能来实现的。Git-portal 实现了 Git 钩子(隐藏在存储库 `.git` 文件夹中的脚本),来寻找你的远程位置上是否存在以 `_portal` 开头的文件夹。如果它找到一个,它会尝试使用 `rsync` 来与远程位置同步文件。Git-portal 在用户进行 Git 推送以及 Git 合并的时候(或者在进行 Git 拉取的时候,实际上是进行一次获取和自动合并),都会执行此操作。
如果你仅克隆了 Git 存储库,那么你可能永远不会自己添加一个远程位置。这是一个标准的 Git 过程:
```
$ git remote add origin git@gitdawg.com:seth/bigproject.git
$ git remote -v
origin git@gitdawg.com:seth/bigproject.git (fetch)
origin git@gitdawg.com:seth/bigproject.git (push)
```
对你的主要 Git 存储库来说,`origin` 这个名字是一个流行的惯例,将其用于 Git 数据是有意义的。然而,你的 Git-portal 数据是分开存储的,所以你必须创建第二个远程位置来让 Git-portal 了解向哪里推送和从哪里拉取。取决于你的 Git 主机,你可能需要一个单独的服务器,因为空间有限的 Git 主机不太可能接受 GB 级的媒体资产。或者,可能你的服务器仅允许你访问你的 Git 存储库而不允许访问外部的存储文件夹:
```
$ git remote add _portal seth@example.com:/home/seth/git/bigproject_portal
$ git remote -v
origin git@gitdawg.com:seth/bigproject.git (fetch)
origin git@gitdawg.com:seth/bigproject.git (push)
_portal seth@example.com:/home/seth/git/bigproject_portal (fetch)
_portal seth@example.com:/home/seth/git/bigproject_portal (push)
```
你可能不想为所有用户提供服务器上的个人帐户,也不必这样做。为了提供对托管资源库大文件资产的服务器的访问权限,你可以运行一个 Git 前端,比如 [Gitolite][8] 或者你可以使用 `rrsync` (受限的 rsync
现在,你可以推送你的 Git 数据到你的远程 Git 存储库,并将你的 Git-portal 数据推送到你的远程门户:
```
$ git push origin HEAD
master destination detected
Syncing _portal content...
sending incremental file list
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
total size is 60,358,015 speedup is 6,474.10
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
```
如果你已经安装了 Git-portal并且配置了 `_portal` 的远程位置,你的 `_portal` 文件夹将会被同步,在每一次推送的时候从服务器获取新的内容,并发送新的内容到服务器。尽管你不需要进行 Git 提交或者推送来和服务器同步(用户可以直接使用 `rsync`),但是我发现对于艺术性内容的改变,提交是有用的。这将会把艺术家及其数字资产集成到工作流的其余部分中,并提供有关项目进度和速度的有用元数据。
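如果你想绕过 Git 手动同步一次,大致可以这样做(示意,沿用文中的远程路径):
```
$ rsync -av _portal/ seth@example.com:/home/seth/git/bigproject_portal/
```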
### 其他选择
如果 Git-portal 对你而言太过简单,还有一些用于 Git 管理大型文件的其他选择。[Git 大文件存储][9]LFS是一个名为 git-media 的停工项目的分支,这个分支由 GitHub 维护和支持。它需要特殊的命令(例如 `git lfs track` 来保护大型文件不被 Git 追踪)并且需要用户维护一个 `.gitattributes` 文件来更新哪些存储库中的文件被 LFS 追踪。对于大文件而言,它**仅**支持 HTTP 和 HTTPS 远程主机。所以你必须配置 LFS 服务器,才能使得用户可以通过 HTTP 而不是 SSH 或 `rsync` 来进行鉴权。
另一个相对 LFS 更灵活的选择是 [git-annex][10]。你可以在我的文章 [管理 Git 中大二进制 blob][11] 中了解更多(忽略其中 git-media 这个已经废弃项目的章节,因为其灵活性没有被它的继任者 Git LFS 延续下来。Git-annex 是一个灵活且优雅的解决方案。它拥有一个细腻的系统来用于添加、删除、移动存储库中的大型文件。因为它灵活且强大,有很多新的命令和规则需要进行学习,所以建议看一下它的[文档][12]。
然而,如果你的需求很简单,而且你更喜欢用已有技术来完成简单、明确任务的解决方案,那么 Git-portal 可能是更适合这项工作的工具。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/manage-multimedia-files-git
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[svtter](https://github.com/svtter)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
[3]: http://gitlab.com/slackermedia/git-portal.git
[4]: https://www.apress.com/gp/book/9781484241691
[5]: http://mixedsignals.ml
[6]: mailto:git@gitdawg.com
[7]: mailto:seth@example.com
[8]: https://opensource.com/article/19/4/file-sharing-git
[9]: https://git-lfs.github.com/
[10]: https://git-annex.branchable.com/
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
[12]: https://git-annex.branchable.com/walkthrough/


@ -0,0 +1,208 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11881-1.html)
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
如何进行 Linux 启动时间优化
======
![][2]
> 对于时间要求紧迫的应用程序,快速启动嵌入式设备或电信设备是至关重要的,并且在改善用户体验方面也起着非常重要的作用。这篇文章给出了一些关于如何缩短任意设备启动时间的重要技巧。
快速启动或快速重启在各种情况下起着至关重要的作用。为了保持所有服务的高可用性和更好的性能,嵌入式设备的快速启动至关重要。设想有一台运行着没有启用快速启动的 Linux 操作系统的电信设备,所有依赖于这个特殊嵌入式设备的系统、服务和用户可能会受到影响。这些设备维持其服务的高可用性是非常重要的,为此,快速启动和重启起着至关重要的作用。
一台电信设备的一次小故障或关机,即使只是几秒钟,都可能会对无数互联网上的用户造成破坏。因此,对于很多对时间要求严格的设备和电信设备来说,在它们的设备中加入快速启动的功能以帮助它们快速恢复工作是非常重要的。让我们从图 1 中理解 Linux 启动过程。
![图 1启动过程][3]
### 监视工具和启动过程
在对机器做出更改之前,用户应注意许多因素。其中包括计算机的当前启动速度,以及占用资源并增加启动时间的服务、进程或应用程序。
#### 启动图
为了监视启动速度和在启动期间启动的各种服务,用户可以使用下面的命令来安装启动图工具:
```
sudo apt-get install pybootchartgui
```
你每次启动时,启动图会在日志中保存一个 png 文件,使用户能够查看该 png 文件来理解系统的启动过程和服务。为此,使用下面的命令:
```
cd /var/log/bootchart
```
用户可能需要一个应用程序来查看 png 文件。Feh 是一个面向控制台用户的 X11 图像查看器。不像大多数其它的图像查看器,它没有精致的图形用户界面,而只是用来显示图片。Feh 可以用于查看 png 文件。你可以使用下面的命令来安装它:
```
sudo apt-get install feh
```
你可以使用 `feh xxxx.png` 来查看 png 文件。
![图 2启动图][4]
图 2 显示了一个正在查看的启动图 png 文件。
#### systemd-analyze
但是,对于 Ubuntu 15.10 以后的版本,不再需要启动图。为获取关于启动速度的简短信息,使用下面的命令:
```
systemd-analyze
```
![图 3systemd-analyze 的输出][5]
图 3 显示命令 `systemd-analyze` 的输出。
命令 `systemd-analyze blame` 用于根据初始化所用的时间打印所有正在运行的单元的列表。这个信息是非常有用的,可用于优化启动时间。`systemd-analyze blame` 不会显示服务类型为简单(`Type=simple`)的服务,因为 systemd 认为这些服务应是立即启动的;因此,无法测量初始化的延迟。
![图 4systemd-analyze blame 的输出][6]
图 4 显示 `systemd-analyze blame` 的输出。
下面的命令打印时间关键的服务单元的树形链条:
```
systemd-analyze critical-chain
```
图 5 显示命令 `systemd-analyze critical-chain` 的输出。
![图 5systemd-analyze critical-chain 的输出][7]
### 减少启动时间的步骤
下面显示的是一些可以减少启动时间的各种步骤。
#### BUM启动管理器
BUM 是一个运行级配置编辑器允许在系统启动或重启时配置初始化服务。它显示了可以在启动时启动的每个服务的列表。用户可以打开和关闭各个服务。BUM 有一个非常清晰的图形用户界面,并且非常容易使用。
在 Ubuntu 14.04 中BUM 可以使用下面的命令安装:
```
sudo apt-get install bum
```
为在 15.10 以后的版本中安装它,从链接 http://apt.ubuntu.com/p/bum 下载软件包。
以基本的服务开始,禁用扫描仪和打印机相关的服务。如果你没有使用蓝牙和其它不想要的设备和服务,你也可以禁用它们中一些。我强烈建议你在禁用相关的服务前学习服务的基础知识,因为这可能会影响计算机或操作系统。图 6 显示 BUM 的图形用户界面。
![图 6BUM][8]
#### 编辑 rc 文件
要编辑 rc 文件,你需要转到 rc 目录。这可以使用下面的命令来做到:
```
cd /etc/init.d
```
然而,访问 `init.d` 需要 root 用户权限,该目录基本上包含的是开始/停止脚本,这些脚本用于在系统运行时或启动期间控制(开始、停止、重新加载、重启)守护进程。
`init.d` 目录中的 `rc` 文件被称为<ruby>运行控制<rt>run control</rt></ruby>脚本。在启动期间,`init` 执行 `rc` 脚本并发挥它的作用。为改善启动速度,我们可以更改 `rc` 文件。使用任意的文件编辑器打开 `rc` 文件(当你在 `init.d` 目录中时)。
例如,通过输入 `vim rc` ,你可以更改 `CONCURRENCY=none``CONCURRENCY=shell`。后者允许某些启动脚本同时执行,而不是依序执行。
在最新版本的内核中,该值应该被更改为 `CONCURRENCY=makefile`
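如果不想打开编辑器,也可以用 `sed` 直接完成这个替换(示意,假设该文件为 `/etc/init.d/rc` 且原值为 `none`
```
sudo sed -i 's/^CONCURRENCY=none$/CONCURRENCY=makefile/' /etc/init.d/rc
grep '^CONCURRENCY' /etc/init.d/rc   # 确认修改已生效
```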
图 7 和图 8 显示编辑 `rc` 文件前后的启动时间比较。可以注意到启动速度有所提高。在编辑 `rc` 文件前的启动时间是 50.98 秒,然而在对 `rc` 文件进行更改后的启动时间是 23.85 秒。
但是,上面提及的更改方法在 Ubuntu 15.10 以后的操作系统上不工作,因为使用最新内核的操作系统使用 systemd 文件,而不再是 `init.d` 文件。
![图 7对 rc 文件进行更改之前的启动速度][9]
![图 8对 rc 文件进行更改之后的启动速度][10]
#### E4rat
E4rat 代表 e4 <ruby>减少访问时间<rt>reduced access time</rt></ruby>(仅在 ext4 文件系统的情况下)。它是由 Andreas Rid 和 Gundolf Kiefer 开发的一个项目。E4rat 是一个通过碎片整理来帮助快速启动的应用程序。它还会加速应用程序的启动。E4rat 使用物理文件的重新分配来消除寻道时间和旋转延迟,因而达到较高的磁盘传输速度。
E4rat 可以 .deb 软件包形式获得,你可以从它的官方网站 http://e4rat.sourceforge.net/ 下载。
Ubuntu 默认安装的 ureadahead 软件包与 e4rat 冲突。因此,必须使用下面的命令移除这几个软件包:
```
sudo dpkg --purge ureadahead ubuntu-minimal
```
现在使用下面的命令来安装 e4rat 的依赖关系:
```
sudo apt-get install libblkid1 e2fslibs
```
打开下载的 .deb 文件,并安装它。现在需要恰当地收集启动数据来使 e4rat 工作。
遵循下面所给的步骤来使 e4rat 正确地运行并提高启动速度。
* 在启动期间访问 Grub 菜单。这可以在系统启动时通过按住 `shift` 按键来完成。
* 选择通常用于启动的选项(内核版本),并按 `e`
* 查找以 `linux /boot/vmlinuz` 开头的行,并在该行的末尾(在最后一个单词后加一个空格)添加 `init=/sbin/e4rat-collect`,或者尝试添加 `quiet splash vt.handoff=7 init=/sbin/e4rat-collect`。
* 现在,按 `Ctrl+x` 来继续启动。这可以让 e4rat 在启动后收集数据。在这台机器上工作,并在接下来的两分钟时间内打开并关闭应用程序。
* 通过转到 e4rat 文件夹,并使用下面的命令来访问日志文件:`cd /var/log/e4rat`。
* 如果你没有找到任何日志文件,重复上面的过程。一旦日志文件就绪,再次访问 Grub 菜单,并对你的选项按 `e`
* 在你之前已经编辑过的同一行的末尾输入 `single`。这可以让你访问命令行。如果出现其它菜单选择恢复正常启动Resume normal boot。如果你不知为何不能进入命令提示符请按 `Ctrl+Alt+F1` 组合键。
* 在你看到登录提示后,输入你的登录信息。
* 现在输入下面的命令:`sudo e4rat-realloc /var/lib/e4rat/startup.log`。此过程需要一段时间,具体取决于机器的磁盘速度。
* 现在使用下面的命令来重启你的机器:`sudo shutdown -r now`。
* 现在,我们需要配置 Grub 来在每次启动时运行 e4rat。
* 使用任意的编辑器访问 grub 文件。例如,`gksu gedit /etc/default/grub`。
* 查找以 `GRUB_CMDLINE_LINUX_DEFAULT=` 开头的一行,并在引号之间和任何选项之前添加下面的内容:`init=/sbin/e4rat-preload 18`。
* 它应该看起来像这样:`GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload 18 quiet splash"`。
* 保存并关闭 Grub 菜单,并使用 `sudo update-grub` 更新 Grub 。
* 重启系统,你将发现启动速度有明显变化。
图 9 和图 10 显示在安装 e4rat 前后的启动时间之间的差异。可注意到启动速度的提高。在使用 e4rat 前启动所用时间是 22.32 秒,然而在使用 e4rat 后启动所用时间是 9.065 秒。
![图 9使用 e4rat 之前的启动速度][11]
![图 10使用 e4rat 之后的启动速度][12]
### 一些易做的调整
使用很小的调整也可以达到良好的启动速度,下面列出其中两个。
#### SSD
使用固态设备而不是普通的硬盘或者其它的存储设备将肯定会改善启动速度。SSD 也有助于加快文件传输和运行应用程序方面的速度。
#### 禁用图形用户界面
图形用户界面、桌面图形和窗口动画占用大量的资源。禁用图形用户界面是获得良好的启动速度的另一个好方法。
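在使用 systemd 的发行版上,一种常见的做法是把默认启动目标切换为多用户(纯文本)模式。以下命令为通用示意,并非文中测试环境的一部分:
```
sudo systemctl set-default multi-user.target   # 下次启动进入文本模式
sudo systemctl set-default graphical.target    # 需要时恢复图形界面
```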
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
作者:[B Thangaraju][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/b-thangaraju/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?&ssl=1 (Screenshot from 2019-10-07 13-16-32)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?ssl=1


@ -1,49 +1,45 @@
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11870-1.html)
[#]: subject: (8 Commands to Check Memory Usage on Linux)
[#]: via: (https://www.2daygeek.com/linux-commands-check-memory-usage/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
检查 Linux 中内存使用情况的 8 条命令
======
![](https://img.linux.net.cn/data/attachment/album/202002/09/121112mg0jigxtcc5xr8or.jpg)
Linux 并不像 Windows你经常不会有图形界面可供使用特别是在服务器环境中。
作为一名 Linux 管理员知道如何获取当前可用的和已经使用的资源情况比如内存、CPU、磁盘等是相当重要的。如果某一应用在你的系统上占用了太多的资源导致你的系统无法达到最优状态那么你需要找到并修正它。
如果你想找到消耗内存前十名的进程,你需要去阅读这篇文章:[如何在 Linux 中找出内存消耗最大的进程][1]。
在 Linux 中,命令能做任何事,所以使用相关命令吧。在这篇教程中,我们将会给你展示 8 个有用的命令来查看在 Linux 系统中内存的使用情况,包括 RAM 和交换分区。
创建交换分区在 Linux 系统中是非常重要的,如果你想了解如何创建,可以去阅读这篇文章:[在 Linux 系统上创建交换分区][2]。
下面的命令可以帮助你以不同的方式查看 Linux 内存使用情况。
* `free` 命令
* `/proc/meminfo` 文件
* `vmstat` 命令
* `ps_mem` 命令
* `smem` 命令
* `top` 命令
* `htop` 命令
* `glances` 命令
### 1如何使用 free 命令查看 Linux 内存使用情况
[free 命令][3] 是被 Linux 管理员广泛使用的主要命令。但是它提供的信息比 `/proc/meminfo` 文件少。
`free` 命令会分别展示物理内存和交换分区内存中已使用的和未使用的数量,以及内核使用的缓冲区和缓存。
这些信息都是从 `/proc/meminfo` 文件中获取的。
```
# free -m
@ -52,24 +48,18 @@ Mem: 15867 9199 1702 3315 4965 3039
Swap: 17454 666 16788
```
* `total`:总的内存量
* `used`:被当前运行中的进程使用的内存量(`used` = `total` - `free` - `buff/cache`
* `free`:未被使用的内存量(`free` = `total` - `used` - `buff/cache`
* `shared`:在两个或多个进程之间共享的内存量
* `buffers`:内存中保留用于内核记录进程队列请求的内存量
* `cache`:在 RAM 中存储最近使用过的文件的页缓冲大小
* `buff/cache`:缓冲区和缓存总的使用内存量
* `available`:可用于启动新应用的内存量(不含交换分区)
### 2如何使用 /proc/meminfo 文件查看 Linux 内存使用情况
`/proc/meminfo` 文件是一个包含了多种内存使用的实时信息的虚拟文件。它展示内存状态,单位使用的是 kB其中大部分属性都难以理解。然而它也包含了内存使用情况的有用信息。
```
# cat /proc/meminfo
@ -126,12 +116,9 @@ DirectMap1G: 2097152 kB
```
### 3如何使用 vmstat 命令查看 Linux 内存使用情况
[vmstat 命令][4] 是另一个报告虚拟内存统计信息的有用工具。
`vmstat` 报告的信息包括:进程、内存、页面映射、块 I/O、陷阱、磁盘和 CPU 特性信息。`vmstat` 不需要特殊的权限,并且它可以帮助诊断系统瓶颈。
```
# vmstat
@ -143,54 +130,31 @@ procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
```
如果你想详细了解每一项的含义,阅读下面的描述。
* `procs`:进程
  * `r`:可以运行的进程数目(正在运行或等待运行)
  * `b`:处于不可中断睡眠中的进程数目
* `memory`:内存
  * `swpd`:使用的虚拟内存数量
  * `free`:空闲的内存数量
  * `buff`:用作缓冲区内存的数量
  * `cache`:用作缓存内存的数量
  * `inact`:不活动的内存数量(使用 `-a` 选项)
  * `active`:活动的内存数量(使用 `-a` 选项)
* `Swap`:交换分区
  * `si`:每秒从磁盘交换进的内存数量
  * `so`:每秒交换到磁盘的内存数量
* `IO`:输入输出
  * `bi`:从一个块设备中收到的块(块/秒)
  * `bo`:发送到一个块设备的块(块/秒)
* `System`:系统
  * `in`:每秒的中断次数,包括时钟
  * `cs`:每秒的上下文切换次数
* `CPU`:下面这些是在总的 CPU 时间中所占的百分比
  * `us`:花费在非内核代码上的时间占比(包括用户时间、调度时间)
  * `sy`:花费在内核代码上的时间占比(系统时间)
  * `id`:花费在闲置上的时间占比。在 Linux 2.5.41 之前,包括 I/O 等待时间
  * `wa`:花费在 I/O 等待上的时间占比。在 Linux 2.5.41 之前,包括在空闲时间中
  * `st`:被虚拟机偷走的时间占比。在 Linux 2.6.11 之前,这部分称为 unknown
运行下面的命令查看详细的信息。
@ -226,9 +190,7 @@ procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
### 4如何使用 ps_mem 命令查看 Linux 内存使用情况
[ps_mem][5] 是一个用来查看当前内存使用情况的简单的 Python 脚本。该工具可以确定每个程序使用了多少内存(不是每个进程)。
该工具采用如下的方法计算每个程序使用内存:总的使用 = 程序进程私有的内存 + 程序进程共享的内存。
@ -287,13 +249,11 @@ procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
### 5如何使用 smem 命令查看 Linux 内存使用情况
[smem][6] 是一个可以为 Linux 系统提供多种内存使用情况报告的工具。不同于现有的工具,`smem` 可以报告<ruby>比例集大小<rt>Proportional Set Size</rt></ruby>PSS<ruby>唯一集大小<rt>Unique Set Size</rt></ruby>USS<ruby>驻留集大小<rt>Resident Set Size</rt></ruby>RSS
- 比例集大小PSS库和应用在虚拟内存系统中的使用量。
- 唯一集大小USS其报告的是非共享内存。
- 驻留集大小RSS物理内存通常多进程共享使用情况其通常高于内存使用量。
```
# smem -tk
@ -338,11 +298,9 @@ procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
```
### 6如何使用 top 命令查看 Linux 内存使用情况
[top 命令][7] 是一个 Linux 系统的管理员最常使用的用于查看进程的资源使用情况的命令。
该命令展示了系统总的内存量、当前内存使用量、空闲内存量和缓冲区使用的内存总量。此外,该命令还会展示总的交换空间内存量、当前交换空间的内存使用量、空闲的交换空间内存量和缓存使用的内存总量。
```
# top -b | head -10
@ -370,25 +328,23 @@ KiB Swap: 17873388 total, 17873388 free, 0 used. 9179772 avail Mem
```
### 7如何使用 htop 命令查看 Linux 内存使用情况
[htop 命令][8] 是一个可交互的 Linux/Unix 系统进程查看器。它是一个文本模式应用,且使用它需要 Hisham 开发的 ncurses 库。
该命令的设计目的是用来代替 `top` 命令。该命令与 `top` 命令很相似,但是允许你垂直地或者水平地滚动,以便查看系统中所有的进程情况。
`htop` 命令拥有不同的颜色,这个额外的优点当你在追踪系统性能情况时十分有用。
此外你可以自由地执行与进程相关的任务比如杀死进程或者改变进程的优先级而不需要其进程号PID
![][10]
### 8如何使用 glances 命令查看 Linux 内存使用情况
[Glances][11] 是一个 Python 编写的跨平台的系统监视工具。
你可以在一个地方查看所有信息比如CPU 使用情况、内存使用情况、正在运行的进程、网络接口、磁盘 I/O、RAID、传感器、文件系统信息、Docker、系统信息、运行时间等等。
![][12]
--------------------------------------------------------------------------------
@ -397,20 +353,21 @@ via: https://www.2daygeek.com/linux-commands-check-memory-usage/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-11542-1.html
[2]: https://linux.cn/article-9579-1.html
[3]: https://linux.cn/article-8314-1.html
[4]: https://linux.cn/article-8157-1.html
[5]: https://linux.cn/article-8639-1.html
[6]: https://linux.cn/article-7681-1.html
[7]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
[8]: https://www.2daygeek.com/linux-htop-command-linux-system-performance-resource-monitoring-tool/
[10]: https://www.2daygeek.com/wp-content/uploads/2019/12/linux-commands-check-memory-usage-2.jpg
[11]: https://www.2daygeek.com/linux-glances-advanced-real-time-linux-system-performance-monitoring-tool/
[12]: https://www.2daygeek.com/wp-content/uploads/2019/12/linux-commands-check-memory-usage-3.jpg


@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11875-1.html)
[#]: subject: (Top CI/CD resources to set you up for success)
[#]: via: (https://opensource.com/article/19/12/cicd-resources)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
顶级 CI/CD 资源,助你成功
======
> 随着企业期望实现无缝、灵活和可扩展的部署,持续集成和持续部署成为 2019 年的关键主题。
![Plumbing tubes in many directions][1]
对于 CI/CD 和 DevOps 来说2019 年是非常棒的一年。Opensource.com 的作者分享了他们专注于无缝、灵活和可扩展部署时是如何朝着敏捷和 scrum 方向发展的。以下是我们 2019 年发布的 CI/CD 文章中的一些重要文章。
### 学习和提高你的 CI/CD 技能
我们最喜欢的一些文章集中在 CI/CD 的实操经验上,并涵盖了许多方面。通常以 [Jenkins][2] 管道开始Bryant Son 的文章《[用 Jenkins 构建 CI/CD 管道][3]》将为你提供足够的经验以开始构建你的第一个管道。Daniel Oh 在《[用 DevOps 管道进行自动验收测试][4]》一文中,提供了有关验收测试的重要信息,包括可用于自行测试的各种 CI/CD 应用程序。我写的《[安全扫描 DevOps 管道][5]》非常简短,其中简要介绍了如何使用 Jenkins 平台在管道中设置安全性。
### 交付工作流程
正如 Jithin Emmanuel 在《[Screwdriver一个用于持续交付的可扩展构建平台][6]》中分享的,在学习如何使用和提高你的 CI/CD 技能方面工作流程很重要特别是当涉及到管道时。Emily Burns 在《[为什么 Spinnaker 对 CI/CD 很重要][7]》中解释了灵活地使用 CI/CD 工作流程准确构建所需内容的原因。Willy-Peter Schaub 还盛赞了为所有产品创建统一管道的想法,以便《[在一个 CI/CD 管道中一致地构建每个产品][8]》。这些文章将让你很好地了解在团队成员加入工作流程后会发生什么情况。
### CI/CD 如何影响企业
2019 年也是认识到 CI/CD 的业务影响以及它是如何影响日常运营的一年。Agnieszka Gancarczyk 分享了 Red Hat 《[小型 Scrum vs. 大型 Scrum][9]》的调查结果, 包括受访者对 Scrum、敏捷运动及对团队的影响的不同看法。Will Kelly 的《[持续部署如何影响整个组织][10]》也提及了开放式沟通的重要性。Daniel Oh 也在《[DevOps 团队必备的 3 种指标仪表板][11]》中强调了指标和可观测性的重要性。最后是 Ann Marie Fred 的精彩文章《[不在生产环境中测试?要在生产环境中测试!][12]》详细说明了在验收测试前在生产环境中测试的重要性。
感谢许多贡献者在 2019 年与 Opensource 的读者分享他们的见解,我期望在 2020 年里从他们那里了解更多有关 CI/CD 发展的信息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/cicd-resources
作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions)
[2]: https://jenkins.io/
[3]: https://linux.cn/article-11546-1.html
[4]: https://opensource.com/article/19/4/devops-pipeline-acceptance-testing
[5]: https://opensource.com/article/19/7/security-scanning-your-devops-pipeline
[6]: https://opensource.com/article/19/3/screwdriver-cicd
[7]: https://opensource.com/article/19/8/why-spinnaker-matters-cicd
[8]: https://opensource.com/article/19/7/cicd-pipeline-rule-them-all
[9]: https://opensource.com/article/19/3/small-scale-scrum-vs-large-scale-scrum
[10]: https://opensource.com/article/19/7/organizational-impact-continuous-deployment
[11]: https://linux.cn/article-11183-1.html
[12]: https://opensource.com/article/19/5/dont-test-production


@ -1,22 +1,24 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (laingke) [#]: translator: (laingke)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11857-1.html)
[#]: subject: (Data streaming and functional programming in Java) [#]: subject: (Data streaming and functional programming in Java)
[#]: via: (https://opensource.com/article/20/1/javastream) [#]: via: (https://opensource.com/article/20/1/javastream)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu) [#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
Java 中的数据流和函数式编程 Java 中的数据流和函数式编程
====== ======
学习如何使用 Java 8 中的流 API 和函数式编程结构。
![computer screen ][1]
当 Java SE 8又名核心 Java 8在 2014 年被推出时,它引入了一些从根本上影响 IT 编程的更改。这些更改中有两个紧密相连的部分流API和功能编程构造。本文使用代码示例从基础到高级特性介绍每个部分并说明它们之间的相互作用。 > 学习如何使用 Java 8 中的流 API 和函数式编程结构。
![](https://img.linux.net.cn/data/attachment/album/202002/06/002505flazlb4cg4aavvb4.jpg)
当 Java SE 8又名核心 Java 8在 2014 年被推出时,它引入了一些更改,从根本上影响了用它进行的编程。这些更改中有两个紧密相连的部分:流 API 和函数式编程构造。本文使用代码示例,从基础到高级特性,介绍每个部分并说明它们之间的相互作用。
### 基础特性 ### 基础特性
流 API 是在数据序列中迭代元素的简洁而高级的方法。包 `java.util.stream``java.util.function` 包含用于流 API 和相关函数式编程构造的新库。当然,一个代码示例胜过千言万语。 流 API 是在数据序列中迭代元素的简洁而高级的方法。包 `java.util.stream``java.util.function` 包含用于流 API 和相关函数式编程构造的新库。当然,代码示例胜过千言万语。
下面的代码段用大约 2,000 个随机整数值填充了一个 `List` 下面的代码段用大约 2,000 个随机整数值填充了一个 `List`
@ -26,7 +28,9 @@ List<Integer> list = new ArrayList<Integer>(); // 空 list
for (int i = 0; i < 2048; i++) list.add(rand.nextInt()); // 填充它 for (int i = 0; i < 2048; i++) list.add(rand.nextInt()); // 填充它
``` ```
另一个 `for` 循环可用于遍历填充 list以将偶数值收集到另一个 list 中。流 API 提供了一种更简洁的方法来执行此操作: 另外用一个 `for` 循环可用于遍历填充列表,以将偶数值收集到另一个列表中。
流 API 提供了一种更简洁的方法来执行此操作:
``` ```
List <Integer> evens = list List <Integer> evens = list
@ -37,43 +41,42 @@ List <Integer> evens = list
这个例子有三个来自流 API 的函数: 这个例子有三个来自流 API 的函数:
> `stream` 函数可以将**集合**转换为流,而流是一个每次可访问一个值的传送带。流化是惰性的(因此也是高效的),因为值是根据需要产生的,而不是一次性产生的。 - `stream` 函数可以将**集合**转换为流,而流是一个每次可访问一个值的传送带。流化是惰性的(因此也是高效的),因为值是根据需要产生的,而不是一次性产生的。
- `filter` 函数确定哪些流的值(如果有的话)通过了处理管道中的下一个阶段,即 `collect` 阶段。`filter` 函数是 <ruby>高阶的<rt>higher-order</rt></ruby>,因为它的参数是一个函数 —— 在这个例子中是一个 lambda 表达式,它是一个未命名的函数,并且是 Java 新的函数式编程结构的核心。
> `filter` 函数确定哪些流的值(如果有的话)通过了处理管道中的下一个阶段,即 `collect` 阶段。`filter` 函数是 _<ruby>高阶的<rt>higher-order</rt></ruby>_,因为它的参数是一个函数 —— 在这个例子中是一个 lambda 表达式,它是一个未命名的函数,并且是 Java 新的函数式编程结构的核心。
lambda 语法与传统的 Java 完全不同: lambda 语法与传统的 Java 完全不同:
``` ```
`n -> (n & 0x1) == 0` n -> (n & 0x1) == 0
``` ```
箭头(一个减号后面紧跟着一个大于号)将左边的参数列表与右边的函数体分隔开。参数 `n` 虽未明确输入,但可以显式输入。在任何情况下,编译器都会计算出 `n` `Integer`。如果有多个参数,这些参数将被括在括号中,并用逗号分隔。 箭头(一个减号后面紧跟着一个大于号)将左边的参数列表与右边的函数体分隔开。参数 `n` 虽未明确类型,但也可以明确。在任何情况下,编译器都会发现 `n` 是个 `Integer`。如果有多个参数,这些参数将被括在括号中,并用逗号分隔。
在本例中,函数体检查一个整数的最低顺序(最右)位是否为零,以表示为偶数。过滤器应返回一个布尔值。尽管可以,但函数的主体中没有显式的 `return`。如果主体没有显式的 `return`,则主体的最后一个表达式是返回值。在这个例子中,主体按照 lambda 编程的思想编写,由一个简单的布尔表达式 `(n & 0x1) == 0` 组成。 在本例中,函数体检查一个整数的最低位(最右)是否为零,这用来表示偶数。过滤器应返回一个布尔值。尽管可以,但函数的主体中没有显式的 `return`。如果主体没有显式的 `return`,则主体的最后一个表达式是返回值。在这个例子中,主体按照 lambda 编程的思想编写,由一个简单的布尔表达式 `(n & 0x1) == 0` 组成。
> `collect` 函数将偶数值收集到引用为 `evens` 的 list 中。如下例所示,`collect` 函数是线程安全的,因此,即使在多个线程之间共享了过滤操作,该函数也可以正常工作。 - `collect` 函数将偶数值收集到引用为 `evens` 的列表中。如下例所示,`collect` 函数是线程安全的,因此,即使在多个线程之间共享了过滤操作,该函数也可以正常工作。
### 方便的功能和轻松实现多线程 ### 方便的功能和轻松实现多线程
在生产环境中,数据流的源可能是文件或网络连接。为了学习流 API, Java 提供了诸如 `IntStream` 这样的类型,它可以用各种类型的元素生成流。这里有一个 `IntStream` 的例子: 在生产环境中,数据流的源可能是文件或网络连接。为了学习流 API, Java 提供了诸如 `IntStream` 这样的类型,它可以用各种类型的元素生成流。这里有一个 `IntStream` 的例子:
``` ```
IntStream // 整型流 IntStream // 整型流
.range(1, 2048) // 生成此范围内的整型流 .range(1, 2048) // 生成此范围内的整型流
.parallel() // 为多个线程分区数据 .parallel() // 为多个线程分区数据
.filter(i -> ((i & 0x1) > 0)) // 奇偶校验 - 只允许奇数通过 .filter(i -> ((i & 0x1) > 0)) // 奇偶校验 - 只允许奇数通过
.forEach(System.out::println); // 打印每个值 .forEach(System.out::println); // 打印每个值
``` ```
`IntStream` 类型包括一个 `range` 函数,该函数在指定的范围内生成一个整数值流,在本例中,以 1 为增量,从 1 递增到 2,048。`parallel` 函数自动工作划分到多个线程中,在各个线程中进行过滤和打印。(线程数通常与主机系统上的 CPU 数量匹配。)函数 `forEach` 参数是一个_方法引用_,在本例中是对封装在 `System.out` 中的 `println` 方法的引用,方法输出类型为 `PrintStream`。方法和构造器引用的语法将在稍后讨论。 `IntStream` 类型包括一个 `range` 函数,该函数在指定的范围内生成一个整数值流,在本例中,以 1 为增量,从 1 递增到 2048。`parallel` 函数自动划分该工作到多个线程中,在各个线程中进行过滤和打印。(线程数通常与主机系统上的 CPU 数量匹配。)函数 `forEach` 参数是一个*方法引用*,在本例中是对封装在 `System.out` 中的 `println` 方法的引用,方法输出类型为 `PrintStream`。方法和构造器引用的语法将在稍后讨论。
由于具有多线程,因此整数值整体上以任意顺序打印,但在给定线程中按顺序打印。例如,如果线程 T1 打印 409 和 411那么 T1 将按照顺序 409-411 打印,但是其它某个线程可能会预先打印 2,045。`parallel` 调用后面的线程是并发执行的,因此它们的输出顺序是不确定的。 由于具有多线程,因此整数值整体上以任意顺序打印,但在给定线程中按顺序打印。例如,如果线程 T1 打印 409 和 411那么 T1 将按照顺序 409-411 打印,但是其它某个线程可能会预先打印 2045。`parallel` 调用后面的线程是并发执行的,因此它们的输出顺序是不确定的。
### map/reduce 模式 ### map/reduce 模式
_map/reduce_ 模式在处理大型数据集方面已变得很流行。一个 map/reduce 宏操作由两个微操作构成。首先将数据分散_mapped_)到各个工作程序中,然后将单独的结果收集在一起 —— 也可能收集统计起来成为一个值,即 _reduction_。Reduction 可以采用不同的形式,如以下示例所示。 *map/reduce* 模式在处理大型数据集方面变得很流行。一个 map/reduce 宏操作由两个微操作构成。首先,将数据分散(<ruby>映射<rt>mapped</rt></ruby>)到各个工作程序中,然后将单独的结果收集在一起 —— 也可能收集统计起来成为一个值,即<ruby>归约<rt>reduction</rt></ruby>。归约可以采用不同的形式,如以下示例所示。
下面 `Number` 类的实例用 **EVEN****ODD** 表示有奇偶校验的整数值: 下面 `Number` 类的实例用 `EVEN``ODD` 表示有奇偶校验的整数值:
``` ```
public class Number { public class Number {
@ -92,7 +95,7 @@ public class Number {
} }
``` ```
下面的代码演示了带有 `Number` 流进行 map/reduce 的情形,从而表明流 API 不仅可以处理 `int``float` 等基本类型,还可以处理程序员自定义的类类型。 下面的代码演示了 `Number` 流进行 map/reduce 的情形,从而表明流 API 不仅可以处理 `int``float` 等基本类型,还可以处理程序员自定义的类类型。
在下面的代码段中,使用了 `parallelStream` 而不是 `stream` 函数对随机整数值列表进行流化处理。与前面介绍的 `parallel` 函数一样,`parallelStream` 变体也可以自动执行多线程。 在下面的代码段中,使用了 `parallelStream` 而不是 `stream` 函数对随机整数值列表进行流化处理。与前面介绍的 `parallel` 函数一样,`parallelStream` 变体也可以自动执行多线程。
@ -115,14 +118,14 @@ System.out.println("The sum of the randomly generated values is: " + sum4All);
方法引用 `Number::getValue` 可以用下面的 lambda 表达式替换。参数 `n` 是流中的 `Number` 实例中的之一: 方法引用 `Number::getValue` 可以用下面的 lambda 表达式替换。参数 `n` 是流中的 `Number` 实例中的之一:
``` ```
`mapToInt(n -> n.getValue())` mapToInt(n -> n.getValue())
``` ```
通常lambdas 和方法引用是可互换的:如果像 `mapToInt` 这样的高阶函数可以采用一种形式作为参数,那么这个函数也可以采用另一种形式。这两个函数式编程结构具有相同的目的 —— 对作为参数传入的数据执行一些自定义操作。在两者之间进行选择通常是为了方便。例如lambda 可以在没有封装类的情况下编写,而方法则不能。我的习惯是使用 lambda除非已经有了适当的封装方法。 通常lambda 表达式和方法引用是可互换的:如果像 `mapToInt` 这样的高阶函数可以采用一种形式作为参数,那么这个函数也可以采用另一种形式。这两个函数式编程结构具有相同的目的 —— 对作为参数传入的数据执行一些自定义操作。在两者之间进行选择通常是为了方便。例如lambda 可以在没有封装类的情况下编写,而方法则不能。我的习惯是使用 lambda除非已经有了适当的封装方法。
当前示例末尾的 `sum` 函数通过结合来自 `parallelStream` 线程的部分和,以线程安全的方式进行归约。但是,程序员有责任确保在 `parallelStream` 调用引发的多线程过程中,程序员自己的函数调用(在本例中为 `getValue`)是线程安全的。 当前示例末尾的 `sum` 函数通过结合来自 `parallelStream` 线程的部分和,以线程安全的方式进行归约。但是,程序员有责任确保在 `parallelStream` 调用引发的多线程过程中,程序员自己的函数调用(在本例中为 `getValue`)是线程安全的。
最后一点值得强调。lambda 语法鼓励编写 _<ruby>纯函数<rt>pure function</rt></ruby>_,即函数的返回值仅取决于传入的参数(如果有);纯函数没有副作用,例如更新类中的 `static` 字段。因此,纯函数是线程安全的,并且如果传递给高阶函数的函数参数(例如 `filter``map` )是纯函数,则流 API 效果最佳。 最后一点值得强调。lambda 语法鼓励编写<ruby>纯函数<rt>pure function</rt></ruby>,即函数的返回值仅取决于传入的参数(如果有);纯函数没有副作用,例如更新一个类中的 `static` 字段。因此,纯函数是线程安全的,并且如果传递给高阶函数的函数参数(例如 `filter``map` )是纯函数,则流 API 效果最佳。
对于更细粒度的控制,有另一个流 API 函数,名为 `reduce`,可用于对 `Number` 流中的值求和: 对于更细粒度的控制,有另一个流 API 函数,名为 `reduce`,可用于对 `Number` 流中的值求和:
@ -135,11 +138,10 @@ Integer sum4AllHarder = listOfNums
此版本的 `reduce` 函数带有两个参数,第二个参数是一个函数: 此版本的 `reduce` 函数带有两个参数,第二个参数是一个函数:
> 第一个参数(在这种情况下为零)是 _特征_ 值,该值用作求和操作的初始值,并且在求和过程中流结束时用作默认值。 - 第一个参数(在这种情况下为零)是*特征*值,该值用作求和操作的初始值,并且在求和过程中流结束时用作默认值。
- 第二个参数是*累加器*,在本例中,这个 lambda 表达式有两个参数:第一个参数(`sofar`)是正在运行的和,第二个参数(`next`)是来自流的下一个值。运行的和以及下一个值相加,然后更新累加器。请记住,由于开始时调用了 `parallelStream`,因此 `map``reduce` 函数现在都在多线程上下文中执行。
> 第二个参数是 _累加器_,在本例中,这个 lambda 表达式有两个参数:第一个参数(`sofar`)是正在运行的和,第二个参数(`next`)是来自流的下一个值。运行总和以及下一个值相加,然后更新累加器。请记住,由于开始时调用了 `parallelStream`,因此 `map``reduce` 函数现在都在多线程上下文中执行。 在到目前为止的示例中,流值被收集,然后被规约,但是,通常情况下,流 API 中的 `Collectors` 可以累积值,而不需要将它们规约到单个值。正如下一个代码段所示,收集活动可以生成任意丰富的数据结构。该示例使用与前面示例相同的 `listOfNums`
在到目前为止的示例中,流值被收集,然后被归并,但是,通常情况下,流 API 中的 `Collectors` 可以累积值,而不需要将它们减少到单个值。正如下一个代码段所示,收集活动可以生成任意丰富的数据结构。该示例使用与前面示例相同的 `listOfNums`
``` ```
Map<Number.Parity, List<Number>> numMap = listOfNums Map<Number.Parity, List<Number>> numMap = listOfNums
@ -150,7 +152,7 @@ List<Number> evens = numMap.get(Number.Parity.EVEN);
List<Number> odds = numMap.get(Number.Parity.ODD); List<Number> odds = numMap.get(Number.Parity.ODD);
``` ```
第一行中的 `numMap` 指的是一个 `Map`,它的键是一个 `Number` 奇偶校验位(**ODD** 或 **EVEN**),其值是一个具有指定奇偶校验位值的 `Number` 实例的`List`。同样,通过 `parallelStream` 调用进行多线程处理,然后 `collect` 调用(以线程安全的方式)将部分结果组装到 `numMap` 引用的 `Map` 中。然后,在 `numMap` 上调用 `get` 方法两次,一次获取 `evens`,第二次获取 `odds` 第一行中的 `numMap` 指的是一个 `Map`,它的键是一个 `Number` 奇偶校验位(`ODD` 或 `EVEN`),其值是一个具有指定奇偶校验位值的 `Number` 实例的 `List`。同样,通过 `parallelStream` 调用进行多线程处理,然后 `collect` 调用(以线程安全的方式)将部分结果组装到 `numMap` 引用的 `Map` 中。然后,在 `numMap` 上调用 `get` 方法两次,一次获取 `evens`,第二次获取 `odds`
实用函数 `dumpList` 再次使用来自流 API 的高阶 `forEach` 函数: 实用函数 `dumpList` 再次使用来自流 API 的高阶 `forEach` 函数:
@ -184,7 +186,7 @@ Value: 41 (parity: odd)
### 用于代码简化的函数式结构 ### 用于代码简化的函数式结构
函数式结构(如方法引用和 lambdas)非常适合在流 API 中使用。这些构造代表了 Java 中对高阶函数的主要简化。即使在糟糕的过去Java 也通过 `Method``Constructor` 类型在技术上支持高阶函数,这些类型的实例可以作为参数传递给其它函数。由于其复杂性,这些类型在生产级 Java 中很少使用。例如,调用 `Method` 需要对象引用(如果方法是非**静态**的)或至少一个类标识符(如果方法是**静态**的)。然后,被调用的 `Method` 的参数作为**对象**实例传递给它如果没有发生多态那会出现另一种复杂性则可能需要显式向下转换。相比之下lambda 和方法引用很容易作为参数传递给其它函数。 函数式结构(如方法引用和 lambda 表达式)非常适合在流 API 中使用。这些构造代表了 Java 中对高阶函数的主要简化。即使在糟糕的过去Java 也通过 `Method``Constructor` 类型在技术上支持高阶函数,这些类型的实例可以作为参数传递给其它函数。由于其复杂性,这些类型在生产级 Java 中很少使用。例如,调用 `Method` 需要对象引用(如果方法是非**静态**的)或至少一个类标识符(如果方法是**静态**的)。然后,被调用的 `Method` 的参数作为**对象**实例传递给它如果没有发生多态那会出现另一种复杂性则可能需要显式向下转换。相比之下lambda 和方法引用很容易作为参数传递给其它函数。
但是,新的函数式结构在流 API 之外具有其它用途。考虑一个 Java GUI 程序,该程序带有一个供用户按下的按钮,例如,按下以获取当前时间。按钮按下的事件处理程序可能编写如下: 但是,新的函数式结构在流 API 之外具有其它用途。考虑一个 Java GUI 程序,该程序带有一个供用户按下的按钮,例如,按下以获取当前时间。按钮按下的事件处理程序可能编写如下:
@ -201,7 +203,7 @@ updateCurrentTime.addActionListener(new ActionListener() {
这个简短的代码段很难解释。关注第二行,其中方法 `addActionListener` 的参数开始如下: 这个简短的代码段很难解释。关注第二行,其中方法 `addActionListener` 的参数开始如下:
``` ```
`new ActionListener() {` new ActionListener() {
``` ```
这似乎是错误的,因为 `ActionListener` 是一个**抽象**接口,而**抽象**类型不能通过调用 `new` 实例化。但是,事实证明,还有其它一些实例被实例化了:一个实现此接口的未命名内部类。如果上面的代码封装在名为 `OldJava` 的类中,则该未命名的内部类将被编译为 `OldJava$1.class`。`actionPerformed` 方法在这个未命名的内部类中被重写。 这似乎是错误的,因为 `ActionListener` 是一个**抽象**接口,而**抽象**类型不能通过调用 `new` 实例化。但是,事实证明,还有其它一些实例被实例化了:一个实现此接口的未命名内部类。如果上面的代码封装在名为 `OldJava` 的类中,则该未命名的内部类将被编译为 `OldJava$1.class`。`actionPerformed` 方法在这个未命名的内部类中被重写。
@ -209,7 +211,7 @@ updateCurrentTime.addActionListener(new ActionListener() {
现在考虑使用新的函数式结构进行这个令人耳目一新的更改: 现在考虑使用新的函数式结构进行这个令人耳目一新的更改:
``` ```
`updateCurrentTime.addActionListener(e -> currentTime.setText(new Date().toString()));` updateCurrentTime.addActionListener(e -> currentTime.setText(new Date().toString()));
``` ```
lambda 表达式中的参数 `e` 是一个 `ActionEvent` 实例,而 lambda 的主体是对按钮上的 `setText` 的简单调用。 lambda 表达式中的参数 `e` 是一个 `ActionEvent` 实例,而 lambda 的主体是对按钮上的 `setText` 的简单调用。
@ -227,7 +229,7 @@ interface BinaryIntOp {
} }
``` ```
注释 `@FunctionalInterface` 适用于声明 _唯一_ 抽象方法的任何接口;在本例中,这个抽象接口是 `compute`。一些标准接口,(例如具有唯一声明方法 `run``Runnable` 接口)同样符合这个要求。在此示例中,`compute` 是已声明的方法。该接口可用作引用声明中的目标类型: 注释 `@FunctionalInterface` 适用于声明*唯一*抽象方法的任何接口;在本例中,这个抽象接口是 `compute`。一些标准接口,(例如具有唯一声明方法 `run``Runnable` 接口)同样符合这个要求。在此示例中,`compute` 是已声明的方法。该接口可用作引用声明中的目标类型:
``` ```
BinaryIntOp div = (arg1, arg2) -> arg1 / arg2; BinaryIntOp div = (arg1, arg2) -> arg1 / arg2;
@ -236,7 +238,7 @@ div.compute(12, 3); // 4
`java.util.function` 提供各种函数式接口。以下是一些示例。 `java.util.function` 提供各种函数式接口。以下是一些示例。
下面的代码段介绍了参数化的 `Predicate` 函数式接口。在此示例中,带有参数 `String``Predicate<String>` 类型可以引用具有 `String` 参数的 lambda 表达式或诸如 `isEmpty` 之类的 `String` 方法。通常情况下,_predicate_ 是一个返回布尔值的函数。 下面的代码段介绍了参数化的 `Predicate` 函数式接口。在此示例中,带有参数 `String``Predicate<String>` 类型可以引用具有 `String` 参数的 lambda 表达式或诸如 `isEmpty` 之类的 `String` 方法。通常情况下,Predicate 是一个返回布尔值的函数。
``` ```
Predicate<String> pred = String::isEmpty; // String 方法的 predicate 声明 Predicate<String> pred = String::isEmpty; // String 方法的 predicate 声明
@ -247,7 +249,7 @@ Arrays.asList(strings)
.forEach(System.out::println); // 只打印空字符串 .forEach(System.out::println); // 只打印空字符串
``` ```
在字符串长度为零的情况下,`isEmpty` predicate 判定结果为 `true`。 因此,只有空字符串才能进入管道的 `forEach` 阶段。 在字符串长度为零的情况下,`isEmpty` Predicate 判定结果为 `true`。 因此,只有空字符串才能进入管道的 `forEach` 阶段。
下一段代码将演示如何将简单的 lambda 或方法引用组合成更丰富的 lambda 或方法引用。考虑这一系列对 `IntUnaryOperator` 类型的引用的赋值,它接受一个整型参数并返回一个整型值: 下一段代码将演示如何将简单的 lambda 或方法引用组合成更丰富的 lambda 或方法引用。考虑这一系列对 `IntUnaryOperator` 类型的引用的赋值,它接受一个整型参数并返回一个整型值:
@ -308,16 +310,16 @@ Arrays.asList(arrayBR).stream().forEach(BedRocker::dump);
`Arrays.asList` 实用程序再次用于流化一个数组 `names`,然后将流的每一项传递给 `map` 函数,该函数的参数现在是构造器引用 `BedRocker::new`。这个构造器引用通过在每次调用时生成和初始化一个 `BedRocker` 实例来充当一个对象工厂。在第二行执行之后,名为 `bedrockers` 的流由五项 `BedRocker` 组成。 `Arrays.asList` 实用程序再次用于流化一个数组 `names`,然后将流的每一项传递给 `map` 函数,该函数的参数现在是构造器引用 `BedRocker::new`。这个构造器引用通过在每次调用时生成和初始化一个 `BedRocker` 实例来充当一个对象工厂。在第二行执行之后,名为 `bedrockers` 的流由五项 `BedRocker` 组成。
这个例子可以通过关注高阶 `map` 函数来进一步阐明。在通常情况下,一个映射将一个类型的值(例如,一个 `int`)转换为另一个 _相同_ 类型的值(例如,一个整数的后继): 这个例子可以通过关注高阶 `map` 函数来进一步阐明。在通常情况下,一个映射将一个类型的值(例如,一个 `int`)转换为另一个*相同*类型的值(例如,一个整数的后继):
``` ```
map(n -> n + 1) // 将 n 映射到其后继 map(n -> n + 1) // 将 n 映射到其后继
``` ```
然而,在 `BedRocker` 这个例子中,转换更加戏剧化,因为一个类型的值(代表一个名字的 `String`)被映射到一个 _不同_ 类型的值,在这个例子中,就是一个 `BedRocker` 实例,这个字符串就是它的名字。转换是通过一个构造器调用来完成的,它是由构造器引用来实现的: 然而,在 `BedRocker` 这个例子中,转换更加戏剧化,因为一个类型的值(代表一个名字的 `String`)被映射到一个*不同*类型的值,在这个例子中,就是一个 `BedRocker` 实例,这个字符串就是它的名字。转换是通过一个构造器调用来完成的,它是由构造器引用来实现的:
``` ```
`map(BedRocker::new) // 将 String 映射到 BedRocker map(BedRocker::new) // 将 String 映射到 BedRocker
``` ```
传递给构造器的值是 `names` 数组中的其中一项。 传递给构造器的值是 `names` 数组中的其中一项。
@ -325,13 +327,13 @@ map(n -> n + 1) // 将 n 映射到其后继
此代码示例的第二行还演示了一个你目前已经非常熟悉的转换:先将数组先转换成 `List`,然后再转换成 `Stream` 此代码示例的第二行还演示了一个你目前已经非常熟悉的转换:先将数组先转换成 `List`,然后再转换成 `Stream`
``` ```
`Stream<BedRocker> bedrockers = Arrays.asList(names).stream().map(BedRocker::new);` Stream<BedRocker> bedrockers = Arrays.asList(names).stream().map(BedRocker::new);
``` ```
第三行则是另一种方式 —— 流 `bedrockers` 通过使用_数组_构造器引用 `BedRocker[]::new` 调用 `toArray` 方法: 第三行则是另一种方式 —— 流 `bedrockers` 通过使用*数组*构造器引用 `BedRocker[]::new` 调用 `toArray` 方法:
``` ```
`BedRocker[ ] arrayBR = bedrockers.toArray(BedRocker[]::new);` BedRocker[ ] arrayBR = bedrockers.toArray(BedRocker[]::new);
``` ```
该构造器引用不会创建单个 `BedRocker` 实例,而是创建这些实例的整个数组:该构造器引用现在为 `BedRocker[]:new`,而不是 `BedRocker::new`。为了进行确认,将 `arrayBR` 转换为 `List`,再次对其进行流式处理,以便可以使用 `forEach` 来打印 `BedRocker` 的名字。 该构造器引用不会创建单个 `BedRocker` 实例,而是创建这些实例的整个数组:该构造器引用现在为 `BedRocker[]:new`,而不是 `BedRocker::new`。为了进行确认,将 `arrayBR` 转换为 `List`,再次对其进行流式处理,以便可以使用 `forEach` 来打印 `BedRocker` 的名字。
@ -348,7 +350,7 @@ Baby Puss
### <ruby>柯里化<rt>Currying</rt></ruby> ### <ruby>柯里化<rt>Currying</rt></ruby>
_柯里化_ 函数是指减少函数执行任何工作所需的显式参数的数量(通常减少到一个)。(该术语是为了纪念逻辑学家 Haskell Curry。一般来说函数的参数越少调用起来就越容易也更健壮。回想一下一些需要半打左右参数的噩梦般的函数因此应将柯里化视为简化函数调用的一种尝试。`java.util.function` 包中的接口类型适合于柯里化,如以下示例所示。 *柯里化*函数是指减少函数执行任何工作所需的显式参数的数量(通常减少到一个)。(该术语是为了纪念逻辑学家 Haskell Curry。一般来说函数的参数越少调用起来就越容易也更健壮。回想一下一些需要半打左右参数的噩梦般的函数因此应将柯里化视为简化函数调用的一种尝试。`java.util.function` 包中的接口类型适合于柯里化,如以下示例所示。
引用的 `IntBinaryOperator` 接口类型是为函数接受两个整型参数,并返回一个整型值: 引用的 `IntBinaryOperator` 接口类型是为函数接受两个整型参数,并返回一个整型值:
@ -368,13 +370,12 @@ mult2.applyAsInt(10, 30); // 300
arg1 -> (arg2 -> arg1 * arg2) // 括号可以省略 arg1 -> (arg2 -> arg1 * arg2) // 括号可以省略
``` ```
完整的 lambda 以 `arg1,` 开头,而该 lambda 的主体以及返回的值是另一个以 `arg2` 开头的 lambda。返回的 lambda 仅接受一个参数(`arg2`),但返回了两个数字的乘积(`arg1` 和 `arg2`)。下面的概述,再加上代码,应该可以更好地进行说明。 完整的 lambda 以 `arg1` 开头,而该 lambda 的主体以及返回的值是另一个以 `arg2` 开头的 lambda。返回的 lambda 仅接受一个参数(`arg2`),但返回了两个数字的乘积(`arg1` 和 `arg2`)。下面的概述,再加上代码,应该可以更好地进行说明。
以下是如何柯里化 `mult2` 的概述: 以下是如何柯里化 `mult2` 的概述:
> 类型为 `IntFunction<IntUnaryOperator>` 的 lambda 被写入并调用,其整型值为 10。返回的 `IntUnaryOperator` 缓存了值 10因此变成了已柯里化版本的 `mult2`,在本例中为 `curriedMult2` - 类型为 `IntFunction<IntUnaryOperator>` 的 lambda 被写入并调用,其整型值为 10。返回的 `IntUnaryOperator` 缓存了值 10因此变成了已柯里化版本的 `mult2`,在本例中为 `curriedMult2`
- 然后使用单个显式参数例如20调用 `curriedMult2` 函数,该参数与缓存的参数(在本例中为 10相乘以生成返回的乘积。。
> 然后使用单个显式参数例如20调用 `curriedMult2` 函数,该参数与缓存的参数(在本例中为 10相乘以生成返回的乘积。。
这是代码的详细信息: 这是代码的详细信息:
@ -391,7 +392,7 @@ IntFunction<IntUnaryOperator> curriedMult2Maker = n1 -> (n2 -> n1 * n2);
IntUnaryOperator curriedMult2 = curriedMult2Maker2.apply(10); IntUnaryOperator curriedMult2 = curriedMult2Maker2.apply(10);
``` ```
值 10 现在缓存在 `curriedMult2` 函数中,以便 `curriedMult2` 调用中的显式整型参数乘以 10 `10` 现在缓存在 `curriedMult2` 函数中,以便 `curriedMult2` 调用中的显式整型参数乘以 10
``` ```
curriedMult2.applyAsInt(20); // 200 = 10 * 20 curriedMult2.applyAsInt(20); // 200 = 10 * 20
@ -423,9 +424,9 @@ dataStream
.collect(...); // 或者,也可以进行归约:阶段 N .collect(...); // 或者,也可以进行归约:阶段 N
``` ```
自动多线程,以 `parallel``parallelStream` 调用为例,建立在 Java 的 fork/join 框架上,该框架支持 _<ruby>任务窃取<rt>task stealing</rt></ruby>_ 以提高效率。假设 `parallelStream` 调用后面的线程池由八个线程组成,并且 `dataStream` 被八种方式分区。某个线程例如T1可能比另一个线程例如T7工作更快这意味着应该将 T7 的某些任务移到 T1 的工作队列中。这会在运行时自动发生。 自动多线程,以 `parallel``parallelStream` 调用为例,建立在 Java 的 fork/join 框架上,该框架支持 <ruby>任务窃取<rt>task stealing</rt></ruby> 以提高效率。假设 `parallelStream` 调用后面的线程池由八个线程组成,并且 `dataStream` 被八种方式分区。某个线程例如T1可能比另一个线程例如T7工作更快这意味着应该将 T7 的某些任务移到 T1 的工作队列中。这会在运行时自动发生。
在这个简单的多线程世界中,程序员的主要职责是编写线程安全函数,这些函数作为参数传递给在流 API 中占主导地位的高阶函数。 尤其是 lambda 鼓励编写纯函数(因此是线程安全的)函数。 在这个简单的多线程世界中,程序员的主要职责是编写线程安全函数,这些函数作为参数传递给在流 API 中占主导地位的高阶函数。尤其是 lambda 鼓励编写纯函数(因此是线程安全的)函数。
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
@ -434,7 +435,7 @@ via: https://opensource.com/article/20/1/javastream
作者:[Marty Kalin][a] 作者:[Marty Kalin][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke) 译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11877-1.html)
[#]: subject: (What's HTTPS for secure computing?)
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
用于安全计算的 HTTPS 是什么?
======
> 在默认的情况下,网站的安全性还不足够。
![](https://img.linux.net.cn/data/attachment/album/202002/11/123552rqncn4c7474j44jq.jpg)
在过去的几年里,寻找一个只以 “http://...” 开头的网站变得越来越难,这是因为业界终于意识到,网络安全“是件事”,同时也是因为客户端和服务端之间建立和使用 https 连接变得更加容易了。类似的转变可能正以不同的方式发生在云计算、边缘计算、物联网、区块链,人工智能、机器学习等领域。长久以来,我们都知道我们应该对存储的静态数据和在网络中传输的数据进行加密,但是在使用和处理数据的时候对它进行加密是困难且昂贵的。可信计算(使用例如<ruby>受信任的执行环境<rt>Trusted Execution Environments</rt></ruby> TEEs 这样的硬件功能来提供数据和算法这种类型的保护)可以保护主机系统中的或者易受攻击的环境中的数据。
关于 [TEEs][2],当然,还有我和 Nathaniel McCallum 共同创立的 [Enarx 项目][3],我已经写过几次文章(参见《[给每个人的 Enarx一个任务][4]》和《[Enarx 迈向多平台][5]》。Enarx 使用 TEE 来提供独立于平台和语言的部署平台以此来让你能够安全地将敏感应用或者敏感组件例如微服务部署在你不信任的主机上。当然Enarx 是完全开源的(顺便提一下,我们使用的是 Apache 2.0 许可证)。能够在你不信任的主机上运行工作负载,这是可信计算的承诺,它扩展了使用静态敏感数据和传输中数据的常规做法:
* **存储**:你要加密你的静态数据,因为你不完全信任你的基础存储架构。
* **网络**:你要加密你正在传输中的数据,因为你不完全信任你的基础网络架构。
* **计算**:你要加密你正在使用中的数据,因为你不完全信任你的基础计算架构。
关于信任,我有非常多的话想说,而且,上述说法里的单词“**完全**”是很重要的(在重新读我写的这篇文章的时候,我加上了这个单词)。不论哪种情况,你必须在一定程度上信任你的基础设施,无论是传递你的数据包还是存储你的数据块。例如,对于计算基础架构,你必须要信任 CPU 和与之关联的固件,这是因为如果你不信任它们,你就无法真正地进行计算(现在有一些诸如<ruby>同态加密<rt>homomorphic encryption</rt></ruby>一类的技术,这些技术正在开始提供一些可能性,但是它们依然有限,还不够成熟)。
考虑到已发现的一些 CPU 安全性问题人们自然会不时产生疑问是否应该完全信任 CPU以及它们在面对针对其所在主机的物理攻击时是否具有完全的安全性。
这两个问题的回答都是“不”,但是在考虑到大规模可用性和普遍推广的成本,这已经是我们当前拥有的最好的技术了。为了解决第二个问题,没有人去假装这项技术(或者任何的其他技术)是完全安全的:我们需要做的是思考我们的[威胁模型][6]并确定这个情况下的 TEEs 是否为我们的特殊需求提供了足够的安全防护。关于第一个问题Enarx 采用的模型是在部署时就对你是否信任一个特定的 CPU 组做出决定。举个例子,如果供应商 Q 的 R 代芯片被发现有漏洞,可以很简单地说“我拒绝将我的工作内容部署到 Q 的 R 代芯片上去,但是仍然可以部署到 Q 的 S 型号、T 型号和 U 型号的芯片以及任何 P、M 和 N 供应商的任何芯片上去。”
我认为这里发生了三处改变,这些改变引起了人们现在对<ruby>机密计算<rt>confidential computing</rt></ruby>的兴趣和采用。
1. **硬件可用**:只是在过去的 6 到 12 个月里,支持 TEEs 的硬件才开始变得广泛可用,这会儿市场上的主要例子是 Intel 的 SGX 和 AMD 的 SEV。我们期望在未来可以看到支持 TEE 的硬件的其他例子。
2. **行业就绪**就像上云越来越被接受为应用程序部署的模型一样监管机构和立法机构也在提高各类组织保护其所管理数据的要求。组织开始呼吁在不受信任的主机上运行敏感程序或者是处理敏感数据的应用程序的方法更确切地说是在无法完全信任且带有敏感数据的主机上运行的方法。这不足为奇如果芯片制造商看不到这项技术的市场他们就不会投太多的钱在这项技术上。Linux 基金会的[机密计算联盟CCC][7]的成立,就是业界对如何寻找使用加密计算的通用模型、并鼓励开源项目使用这些技术感兴趣的案例。(红帽发起的 Enarx 是一个 CCC 项目。)
3. **开放源码**:就像区块链一样,机密计算是使用开源绝对明智的技术之一。如果你要运行敏感程序,你需要去信任正在为你运行的程序。不仅仅是 CPU 和固件,同样还有在 TEE 内执行你的工作负载的框架。你完全可以说“我不信任主机机器和它上面的软件栈,所以我打算使用 TEE”但是如果你不够了解 TEE 软件环境那你只是将一种不透明的软件换成了另外一种。TEE 的开源支持将允许你或者社区(实际上是你与社区一起)以一种专有软件不可能实现的方式来检查和审计你所运行的程序。这就是为什么 CCC 位于 Linux 基金会旗下(这个基金会致力于开放式开发模型),并鼓励 TEE 相关的软件项目加入且成为开源项目(如果它们还没有开源的话)。
我认为,在过去的 15 到 20 年里,硬件可用、行业就绪和开放源码已成为推动技术改变的驱动力。区块链、人工智能、云计算、<ruby>大规模计算<rt>webscale computing</rt></ruby>、大数据和互联网商务都是这三个点同时发挥作用的例子,并且在业界带来了巨大的改变。
在一般情况下,安全是我们这数十年来听到的一种承诺,并且其仍然未被实现。老实说,我不确定它未来会不会实现。但是随着新技术的到来,特定用例的安全变得越来越实用和无处不在,并且在业内受到越来越多的期待。这样看起来,机密计算似乎已准备好成为下一个重大变化,而你,我亲爱的读者,可以一起来加入到这场革命中(毕竟它是开源的)。
这篇文章最初是发布在 [Alice, Eve, and Bob][9] 上的,这是得到了作者许可的重发。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/confidential-computing
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
[3]: https://enarx.io/
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
[7]: https://confidentialcomputing.io/
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/


@ -0,0 +1,172 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11886-1.html)
[#]: subject: (Use this Python script to find bugs in your Overcloud)
[#]: via: (https://opensource.com/article/20/1/logtool-root-cause-identification)
[#]: author: (Arkady Shtempler https://opensource.com/users/ashtempl)
用 Python 脚本发现 OpenStack Overcloud 中的问题
======
> LogTool 是一组 Python 脚本,可帮助你找出 Overcloud 节点中问题的根本原因。
![](https://img.linux.net.cn/data/attachment/album/202002/12/211455woy57xx5q19cx175.jpg)
OpenStack 在其 Overcloud 节点和 Undercloud 主机上存储和管理了一堆日志文件。因此,使用 OSP 日志文件来排查遇到的问题并不是一件容易的事,尤其在你甚至都不知道是什么原因导致问题时。
如果你正处于这种情况,那么 [LogTool][2] 可以使你的生活变得更加轻松它会为你节省人工排查问题所需的时间和精力。LogTool 基于模糊字符串匹配算法,可提供过去发生的所有唯一错误和警告信息。你可以根据日志中的时间戳,导出特定时间段(例如 10 分钟前、一个小时前、一天前等)的这些信息。
LogTool 是一组 Python 脚本,其主要模块 `PyTool.py` 在 Undercloud 主机上执行。某些操作模式使用直接在 Overcloud 节点上执行的其他脚本,例如从 Overcloud 日志中导出错误和警告信息。
LogTool 支持 Python 2 和 Python 3你可以根据需要更改工作目录[LogTool_Python2][3] or [LogTool_Python3][4]。
### 操作方式
#### 1、从 Overcloud 日志中导出错误和警告信息
此模式用于从 Overcloud 节点中提取过去发生的**错误**和**警告**信息。系统将提示你提供“开始时间”和“调试级别”,以用于提取错误或警告消息。例如,如果在过去 10 分钟内出了问题,你就可以只提取该时间段内的错误和警告消息。
此操作模式将为每个 Overcloud 节点生成一个包含结果文件的目录。结果文件是经过压缩的简单文本文件(`*.gz`),以减少从 Overcloud 节点下载所需的时间。将压缩文件转换为常规文本文件,可以使用 `zcat` 或类似工具。此外Vi 的某些版本和 Emacs 的任何最新版本均支持读取压缩数据。结果文件分为几部分,并在底部包含目录。
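例如,可以不解压直接查看结果文件(文件名仅为示意):
```
zcat controller-0_Errors_Warnings.txt.gz | less
```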
LogTool 可以即时检测两种日志文件:标准和非标准。在标准文件中,每条日志行都有已知的和已定义的结构:时间戳、调试级别、信息等等。在非标准文件中,日志的结构是未知的,例如,它可能是第三方的日志。在目录中,你可以找到每个部分的“名称 --> 行号”索引,例如:
* **原始数据 - 从标准 OSP 日志中提取的错误/警告消息:** 这部分包含所有提取的错误/警告消息,没有任何修改或更改。这些消息是 LogTool 用于模糊匹配分析的原始数据。
* **统计信息 - 每个标准 OSP 日志的错误/警告信息数量:** 在此部分,你将找到每个标准日志文件的错误和警告数量。这些信息可以帮助你了解用于排查问题根本原因的潜在组件。
* **统计信息 - 每个标准 OSP 日志文件的唯一消息:** 这部分提供指定时间戳内的唯一的错误和警告消息。有关每个唯一错误或警告的更多详细信息,请在“原始数据”部分中查找相同的消息。
* **统计信息 - 每个非标准日志文件在任意时间的唯一消息:** 此部分包含非标准日志文件中的唯一消息。遗憾的是LogTool 无法像处理标准日志文件那样处理这些日志文件。因此,在你提取“特定时间”的日志信息时,这些文件会被忽略,你看到的将是过去创建的所有唯一的错误/警告消息。因此,请先向下滚动到结果文件底部的目录,使用目录中的行索引跳到相关部分,其中第 3、4 和 5 部分的信息最重要。
#### 2、从 Overcloud 节点下载所有日志
所有 Overcloud 节点的日志将被压缩并下载到 Undercloud 主机上的本地目录。
#### 3、所有 Overcloud 日志中搜索字符串
该模式在所有 Overcloud 日志中“grep”搜索用户提供的字符串。例如你可能希望查看特定请求 ID 的所有日志消息比如某个失败的“创建虚拟机”请求的 ID。
#### 4、检查 Overcloud 上当前的 CPU、RAM 和磁盘使用情况
该模式显示每个 Overcloud 节点上的当前 CPU、RAM 和磁盘信息。
#### 5、执行用户脚本
该模式使用户可以在 Overcloud 节点上运行自己的脚本。例如,假设 Overcloud 部署失败,你就需要在每个控制器节点上执行相同的过程来修复该问题。你可以实现“替代方法”脚本,并使用此模式在控制器上运行它。
#### 6、仅按给定的时间戳下载相关日志
此模式仅下载 Overcloud 上“上次修改时间”晚于“给定时间戳”的日志。例如,如果 10 分钟前出现错误,那么更旧的日志文件就无关紧要,无需下载。此外,你不能(或不应)在某些错误报告工具中附加大文件,因此此模式也可能有助于编写错误报告。
#### 7、从 Undercloud 日志中导出错误和警告信息
这与上面的模式 1 相同。
#### 8、在 Overcloud 上检查不正常的 docker
此模式用于在节点上搜索不正常的 Docker。
#### 9、下载 OSP 日志并在本地运行 LogTool
此模式允许你从 Jenkins 或 Log Storage 下载 OSP 日志(例如,`cougar11.scl.lab.tlv.redhat.com`),并在本地分析。
#### 10、在 Undercloud 上分析部署日志
此模式可以帮助你了解 Overcloud 或 Undercloud 部署过程中出了什么问题。例如,在 `overcloud_deploy.sh` 脚本中使用 `--log` 选项时会生成部署日志;此类日志的问题是“不友好”,你很难理解是什么出了问题,尤其是当详细程度设置为 `vv` 或更高时,日志中的数据会更难以阅读。此模式提供有关所有失败任务的详细信息。
#### 11、分析 GerritZuul失败的日志
此模式用于分析 GerritZuul日志文件。它会自动从远程的 Gerrit 门控gate下载所有文件HTTP 下载)并在本地进行分析。
### 安装
GitHub 上有 LogTool使用以下命令将其克隆到你的 Undercloud 主机:
```
git clone https://github.com/zahlabut/LogTool.git
```
该工具还使用了一些外部 Python 模块:
#### Paramiko
默认情况下SSH 模块通常会安装在 Undercloud 上。使用以下命令来验证是否已安装:
```
ls -a /usr/lib/python2.7/site-packages | grep paramiko
```
如果需要安装模块,请在 Undercloud 上执行以下命令:
```
sudo easy_install pip
sudo pip install paramiko==2.1.1
```
#### BeautifulSoup
此 HTML 解析器模块仅在使用 HTTP 下载日志文件的模式下使用。它用于解析 Artifacts HTML 页面以获取其中的所有链接。安装 BeautifulSoup请输入以下命令
```
pip install beautifulsoup4
```
你还可以通过执行以下命令使用 [requirements.txt][6] 文件安装所有必需的模块:
```
pip install -r requirements.txt
```
### 配置
所有必需的参数都直接在 `PyTool.py` 脚本中设置。默认值为:
```
overcloud_logs_dir = '/var/log/containers'
overcloud_ssh_user = 'heat-admin'
overcloud_ssh_key = '/home/stack/.ssh/id_rsa'
undercloud_logs_dir = '/var/log/containers'
source_rc_file_path = '/home/stack/'
```
### 用法
此工具是交互式的,因此要启动它,只需输入:
```
cd LogTool
python PyTool.py
```
### 排除 LogTool 故障
在运行时会创建两个日志文件:`Error.log` 和 `Runtime.log`。当你提交问题issue请在问题描述中附上这两个文件的内容。
### 局限性
LogTool 被硬编码为最大只能处理 500MB 的文件。
### LogTool_Python3 脚本
在 [github.com/zahlabut/LogTool][2] 获取。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/logtool-root-cause-identification
作者:[Arkady Shtempler][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ashtempl
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://github.com/zahlabut/LogTool
[3]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python2
[4]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python3
[5]: https://opensource.com/article/19/2/getting-started-cat-command
[6]: https://github.com/zahlabut/LogTool/blob/master/LogTool_Python3/requirements.txt
View File
@ -1,22 +1,24 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (geekpi) [#]: translator: (geekpi)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11856-1.html)
[#]: subject: (One open source chat tool to rule them all) [#]: subject: (One open source chat tool to rule them all)
[#]: via: (https://opensource.com/article/20/1/open-source-chat-tool) [#]: via: (https://opensource.com/article/20/1/open-source-chat-tool)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) [#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
一个管理所有聊天的开源聊天工具 一个通过 IRC 管理所有聊天的开源聊天工具
====== ======
BitlBee 将多个聊天应用集合到一个界面中。在我们的 20 个使用开源提升生产力的系列的第九篇文章中了解如何设置和使用 BitlBee。
![Person using a laptop][1] > BitlBee 将多个聊天应用集合到一个界面中。在我们的 20 个使用开源提升生产力的系列的第九篇文章中了解如何设置和使用 BitlBee。
![](https://img.linux.net.cn/data/attachment/album/202002/05/123636dw8uw34mbkqzmw84.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。 去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 将所有聊天都放到 BitlBee 中 ### 将所有聊天都放到 BitlBee 中
即时消息和聊天已经成为网络世界的主要内容。如果你像我一样,你可能打开五六个不同的应用与你的朋友、同事和其他人交谈。跟上所有聊天真的很痛苦。谢天谢地,你可以使用一个应用(好吧,是两个)将这些聊天整合到一个地方。 即时消息和聊天已经成为网络世界的主要内容。如果你像我一样,你可能打开五六个不同的应用与你的朋友、同事和其他人交谈。关注所有聊天真的很痛苦。谢天谢地,你可以使用一个应用(好吧,是两个)将这些聊天整合到一个地方。
![BitlBee on XChat][2] ![BitlBee on XChat][2]
@ -24,12 +26,11 @@ BitlBee 将多个聊天应用集合到一个界面中。在我们的 20 个使
BitlBee 几乎包含在所有 Linux 发行版中。在 Ubuntu 上安装(我选择的 Linux 桌面),类似这样: BitlBee 几乎包含在所有 Linux 发行版中。在 Ubuntu 上安装(我选择的 Linux 桌面),类似这样:
``` ```
`sudo apt install bitlbee-libpurple` sudo apt install bitlbee-libpurple
``` ```
在其他发行版上,包名可能略有不同,但搜索 _bitlbee_ 应该就能看到。 在其他发行版上,包名可能略有不同,但搜索 “bitlbee” 应该就能看到。
你会注意到我用的 libpurple 版的 BitlBee。这个版本能让我使用 [libpurple][4] 即时消息库中提供的所有协议,该库最初是为 [Pidgin][5] 开发的。 你会注意到我用的 libpurple 版的 BitlBee。这个版本能让我使用 [libpurple][4] 即时消息库中提供的所有协议,该库最初是为 [Pidgin][5] 开发的。
@ -37,47 +38,43 @@ BitlBee 几乎包含在所有 Linux 发行版中。在 Ubuntu 上安装(我选
![Initial BitlBee connection][7] ![Initial BitlBee connection][7]
你将自动连接到控制频道和 **bitlbee**。此频道对于你是独一无二的,每个人在多用户系统上都有自己的频道。在这里你可以配置服务。 你将自动连接到控制频道 &bitlbee。此频道对于你是独一无二的在多用户系统上每个人都有一个自己的。在这里你可以配置该服务。
在控制频道中输入 **help**,你可以随时获得完整的文档。浏览它,然后使用 **register** 命令在服务器上注册帐户。
在控制频道中输入 `help`,你可以随时获得完整的文档。浏览它,然后使用 `register` 命令在服务器上注册帐户。
``` ```
`register <mypassword>` register <mypassword>
``` ```
现在你在服务器上所做的任何配置更改IM 帐户、设置等)都将在输入 **save** 时保存。每当你连接时,使用 **identify &lt;mypassword&gt;** 连接到你的帐户并加载这些设置。 现在你在服务器上所做的任何配置更改IM 帐户、设置等)都将在输入 `save` 时保存。每当你连接时,使用 `identify <mypassword>` 连接到你的帐户并加载这些设置。
![purple settings][8] ![purple settings][8]
命令 **help purple** 将显示 libpurple 提供的所有可用协议。例如,我安装了 [**telegram-purple**][9] 包,它增加了连接到 Telegram 的能力。我可以使用 **account add** 命令将我的电话号码作为帐户添加。 命令 `help purple` 将显示 libpurple 提供的所有可用协议。例如,我安装了 [telegram-purple][9] 包,它增加了连接到 Telegram 的能力。我可以使用 `account add` 命令将我的电话号码作为帐户添加。
``` ```
`account add telegram +15555555` account add telegram +15555555
``` ```
BitlBee 将显示它已添加帐户。你可以使用 **account list** 列出你的帐户。因为我只有一个帐户,我可以通过 **account 0 on** 登录,它会进行 Telegram 登录,列出我所有的朋友和聊天,接下来就能正常聊天了。 BitlBee 将显示它已添加帐户。你可以使用 `account list` 列出你的帐户。因为我只有一个帐户,我可以通过 `account 0 on` 登录,它会进行 Telegram 登录,列出我所有的朋友和聊天,接下来就能正常聊天了。
但是,对于 Slack 这个最常见的聊天系统之一呢?你可以安装 [**slack-libpurple**][10] 插件,并且对 Slack 执行同样的操作。如果你不愿意编译和安装这些,这可能不适合你。 但是,对于 Slack 这个最常见的聊天系统之一呢?你可以安装 [slack-libpurple][10] 插件,并且对 Slack 执行同样的操作。如果你不愿意编译和安装这些,这可能不适合你。
按照插件页面上的说明操作,安装后重新启动 BitlBee 服务。现在,当你运行 **help purple** 时,应该会列出 Slack。像其他协议一样添加一个 Slack 帐户。
按照插件页面上的说明操作,安装后重新启动 BitlBee 服务。现在,当你运行 `help purple` 时,应该会列出 Slack。像其他协议一样添加一个 Slack 帐户。
``` ```
account add slack [ksonney@myslack.slack.com][11] account add slack ksonney@myslack.slack.com
account 1 set password my_legacy_API_token account 1 set password my_legacy_API_token
account 1 on account 1 on
``` ```
你知道么,你已经连接到 Slack 中,你可以通过 **chat add** 命令添加你感兴趣的 Slack 频道。比如: 你知道么,你已经连接到 Slack 中,你可以通过 `chat add` 命令添加你感兴趣的 Slack 频道。比如:
``` ```
`chat add 1 happyparty` chat add 1 happyparty
``` ```
将 Slack 频道 happyparty 添加为本地频道 #happyparty。现在可以使用标准 IRC **/join** 命令访问该频道。这很酷。 将 Slack 频道 happyparty 添加为本地频道 #happyparty。现在可以使用标准 IRC `/join` 命令访问该频道。这很酷。
BitlBee 和 IRC 客户端帮助我的(大部分)聊天和即时消息保存在一个地方,并减少了我的分心,因为我不再需要查找并切换到任何一个刚刚找我的应用上。 BitlBee 和 IRC 客户端帮助我的(大部分)聊天和即时消息保存在一个地方,并减少了我的分心,因为我不再需要查找并切换到任何一个刚刚找我的应用上。
@ -88,7 +85,7 @@ via: https://opensource.com/article/20/1/open-source-chat-tool
作者:[Kevin Sonney][a] 作者:[Kevin Sonney][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,40 +1,41 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (geekpi) [#]: translator: (geekpi)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11858-1.html)
[#]: subject: (Use this Twitter client for Linux to tweet from the terminal) [#]: subject: (Use this Twitter client for Linux to tweet from the terminal)
[#]: via: (https://opensource.com/article/20/1/tweet-terminal-rainbow-stream) [#]: via: (https://opensource.com/article/20/1/tweet-terminal-rainbow-stream)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) [#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用这个 Twitter 客户端在 Linux 终端中发推特 使用这个 Twitter 客户端在 Linux 终端中发推特
====== ======
在我们的 20 个使用开源提升生产力的系列的第十篇文章中使用 Rainbow Stream 跟上你的 Twitter 流而无需离开终端。
![Chat bubbles][1] > 在我们的 20 个使用开源提升生产力的系列的第十篇文章中,使用 Rainbow Stream 跟上你的 Twitter 流而无需离开终端。
![](https://img.linux.net.cn/data/attachment/album/202002/06/113720bwi55j7xcccwwwi0.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。 去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 通过 Rainbow Stream 跟上 Twitter ### 通过 Rainbow Stream 跟上 Twitter
我喜欢社交网络和微博。它快速、简单,还有我可以与世界分享我的想法。当然,缺点是几乎所有非 Windows 的桌面客户端都是网站的封装。[Twitter][2] 有很多客户端,但我真正想要的是轻量、易于使用,最重要的是吸引人的客户端。 我喜欢社交网络和微博。它快速、简单,还有我可以与世界分享我的想法。当然,缺点是几乎所有非 Windows 的桌面客户端都是网站的封装。[Twitter][2] 有很多客户端,但我真正想要的是轻量、易于使用,最重要的是吸引人的客户端。
![Rainbow Stream for Twitter][3] ![Rainbow Stream for Twitter][3]
[Rainbow Stream][4] 是好看的 Twitter 客户端之一。它简单易用,并且可以通过 **pip3 install rainbowstream** 快速安装。第一次运行时,它将打开浏览器窗口,并让你通过 Twitter 授权。完成后,你将回到命令行,你的 Twitter 时间线将开始滚动。 [Rainbow Stream][4] 是好看的 Twitter 客户端之一。它简单易用,并且可以通过 `pip3 install rainbowstream` 快速安装。第一次运行时,它将打开浏览器窗口,并让你通过 Twitter 授权。完成后,你将回到命令行,你的 Twitter 时间线将开始滚动。
![Rainbow Stream first run][5] ![Rainbow Stream first run][5]
要了解的最重要的命令是 **p** 暂停推流、**r** 继续推流、**h** 得帮助,以及 **t** 发布新的推文。例如,**h tweets** 将提供发送和回复推文的所有选项。另一个有用的帮助页面是 **h messages**,它提供了处理直接消息的命令,这是我妻子和我经常使用的东西。还有很多其他命令,我会回头获得很多帮助。 要了解的最重要的命令是 `p` 暂停推流、`r` 继续推流、`h` 得到帮助,以及 `t` 发布新的推文。例如,`h tweets` 将提供发送和回复推文的所有选项。另一个有用的帮助页面是 `h messages`,它提供了处理直接消息的命令,这是我妻子和我经常使用的东西。还有很多其他命令,我会回头获得很多帮助。
随着时间线的滚动,你可以看到它有完整的 UTF-8 支持,并以正确的字体显示推文被转推以及喜欢的次数,图标和 emoji 也能正确显示。 随着时间线的滚动,你可以看到它有完整的 UTF-8 支持,并以正确的字体显示推文被转推以及喜欢的次数,图标和 emoji 也能正确显示。
![Kill this love][6] ![Kill this love][6]
关于 Rainbow Stream 的_最好_功能之一就是你不必放弃照片和图像。默认情况下此功能是关闭的但是你可以使用 **config** 命令尝试它。 关于 Rainbow Stream 的*最好*功能之一就是你不必放弃照片和图像。默认情况下,此功能是关闭的,但是你可以使用 `config` 命令尝试它。
``` ```
`config IMAGE_ON_TERM = true` config IMAGE_ON_TERM = true
``` ```
此命令将任何图像渲染为 ASCII 艺术。如果你有大量照片流,它可能会有点多,但是我喜欢。它有非常复古的 1990 年代 BBS 感觉,我也确实喜欢 1990 年代的 BBS 场景。 此命令将任何图像渲染为 ASCII 艺术。如果你有大量照片流,它可能会有点多,但是我喜欢。它有非常复古的 1990 年代 BBS 感觉,我也确实喜欢 1990 年代的 BBS 场景。
@ -50,7 +51,7 @@ via: https://opensource.com/article/20/1/tweet-terminal-rainbow-stream
作者:[Kevin Sonney][a] 作者:[Kevin Sonney][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,16 +1,18 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (geekpi) [#]: translator: (geekpi)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11869-1.html)
[#]: subject: (Read Reddit from the Linux terminal) [#]: subject: (Read Reddit from the Linux terminal)
[#]: via: (https://opensource.com/article/20/1/open-source-reddit-client) [#]: via: (https://opensource.com/article/20/1/open-source-reddit-client)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) [#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
在 Linux 终端中阅读 Reddit 在 Linux 终端中阅读 Reddit
====== ======
在我们的 20 个使用开源提升生产力的系列的第十一篇文章中使用 Reddit 客户端 Tuir 在工作中短暂休息一下。
![Digital creative of a browser on the internet][1] > 在我们的 20 个使用开源提升生产力的系列的第十一篇文章中使用 Reddit 客户端 Tuir 在工作中短暂休息一下。
![](https://img.linux.net.cn/data/attachment/album/202002/09/104113w1ytjmlv1jly0j1t.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。 去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
@ -20,23 +22,23 @@
![/r/emacs in Tuir][3] ![/r/emacs in Tuir][3]
当我阅读 Reddit不仅仅是看动物宝宝的图片我使用 [Tuir][4],代表 Terminal UI for RedditReddit 终端 UI。Tuir 是功能齐全的 Reddit 客户端,可以在运行 Python 的任何系统上运行。安装是通过 pip 完成的,非常轻松 当我阅读 Reddit不仅仅是看动物宝宝的图片我使用 [Tuir][4]Reddit 终端 UI。Tuir 是功能齐全的 Reddit 客户端,可以在运行 Python 的任何系统上运行。安装是通过 `pip` 完成的,非常简单
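例如,安装过程大致如下(一个最简示意,假设 PyPI 上的包名就是 `tuir`

```
pip3 install tuir
```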
首次运行时Tuir 会进入 Reddit 默认文章列表。屏幕的顶部和底部有列出不同命令的栏。顶部栏显示你在 Reddit 上的位置,第二行显示根据 Reddit “Hot/New/Controversial” 等类别筛选的命令。按下筛选器前面的数字触发筛选。 首次运行时Tuir 会进入 Reddit 默认文章列表。屏幕的顶部和底部有列出不同命令的栏。顶部栏显示你在 Reddit 上的位置,第二行显示根据 Reddit “Hot/New/Controversial” 等类别筛选的命令。按下筛选器前面的数字触发筛选。
![Filtering by Reddit's "top" category][5] ![Filtering by Reddit's "top" category][5]
你可以使用箭头键或 **j**、**k**、**h** 和 **l** 键浏览列表,这与 Vi/Vim 使用的键相同。底部栏有用于应用导航的命令。如果要跳转到另一个子板,只需按 **/** 键打开提示,然后输入你要进入的子板名称。 你可以使用箭头键或 `j`、`k`、`h` 和 `l` 键浏览列表,这与 Vi/Vim 使用的键相同。底部栏有用于应用导航的命令。如果要跳转到另一个子板,只需按 `/` 键打开提示,然后输入你要进入的子板名称。
![Logging in][6] ![Logging in][6]
某些东西除非你登录,否则无法访问。如果你尝试执行需要登录的操作,那么 Tuir 就会提示你,例如发布新文章 **c** 或赞成/反对 **a** 和 **z**)。要登录,请按 **u** 键。这将打开浏览器以通过 OAuth2 登录Tuir 将保存令牌。之后,你的用户名应出现在屏幕的右上方。 某些东西除非你登录,否则无法访问。如果你尝试执行需要登录的操作,那么 Tuir 就会提示你,例如发布新文章 `c`)或赞成/反对 `a` 和 `z`)。要登录,请按 `u` 键。这将打开浏览器以通过 OAuth2 登录Tuir 将保存令牌。之后,你的用户名应出现在屏幕的右上方。
Tuir 还可以打开浏览器来查看图像、加载链接等。稍作调整,它甚至可以在终端中显示图像(尽管我没有设法使其正常工作)。 Tuir 还可以打开浏览器来查看图像、加载链接等。稍作调整,它甚至可以在终端中显示图像(尽管我没有让它可以正常工作)。
总的来说,我对 Tuir 在我需要休息时能快速跟上 Reddit 感到很满意。 总的来说,我对 Tuir 在我需要休息时能快速跟上 Reddit 感到很满意。
Tuir 是现已淘汰的 [RTV][7] 的两个分叉之一。另一个是 [TTRV][8],它还无法通过 pip 安装,但功能相同。我期待看到它们随着时间的推移脱颖而出。 Tuir 是现已淘汰的 [RTV][7] 的两个分叉之一。另一个是 [TTRV][8],它还无法通过 `pip` 安装,但功能相同。我期待看到它们随着时间的推移脱颖而出。
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
@ -45,7 +47,7 @@ via: https://opensource.com/article/20/1/open-source-reddit-client
作者:[Kevin Sonney][a] 作者:[Kevin Sonney][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,35 +1,36 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (geekpi) [#]: translator: (geekpi)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11876-1.html)
[#]: subject: (Get your RSS feeds and podcasts in one place with this open source tool) [#]: subject: (Get your RSS feeds and podcasts in one place with this open source tool)
[#]: via: (https://opensource.com/article/20/1/open-source-rss-feed-reader) [#]: via: (https://opensource.com/article/20/1/open-source-rss-feed-reader)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) [#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用此开源工具将你的RSS 订阅源和播客放在一起 使用此开源工具集中收取你的 RSS 订阅源和播客
====== ======
在我们的 20 个使用开源提升生产力的系列的第十二篇文章中使用 Newsboat 跟上你的新闻 RSS 源和播客。
![Ship captain sailing the Kubernetes seas][1] > 在我们的 20 个使用开源提升生产力的系列的第十二篇文章中使用 Newsboat 收取你的新闻 RSS 源和播客。
![](https://img.linux.net.cn/data/attachment/album/202002/10/162526wv5jdl0m12sw10md.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。 去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 Newsboat 访问你的 RSS 源和播客 ### 使用 Newsboat 访问你的 RSS 源和播客
RSS 新闻源是了解各个网站最新消息的非常方便的方法。除了 Opensource.com我还会关注 [SysAdvent][2] 年度 sysadmin 工具还有一些我最喜欢的作者以及一些网络漫画。RSS 阅读器可以让我“批处理”阅读内容,因此,我每天不会在不同的网站上花费很多时间。 RSS 新闻源是了解各个网站最新消息的非常方便的方法。除了 Opensource.com我还会关注 [SysAdvent][2] sysadmin 年度工具还有一些我最喜欢的作者以及一些网络漫画。RSS 阅读器可以让我“批处理”阅读内容,因此,我每天不会在不同的网站上花费很多时间。
![Newsboat][3] ![Newsboat][3]
[Newsboat][4] 是一个基于终端的 RSS 订阅源阅读器,外观感觉很像电子邮件程序 [Mutt][5]。它使阅读新闻变得容易,并有许多不错的功能。 [Newsboat][4] 是一个基于终端的 RSS 订阅源阅读器,外观感觉很像电子邮件程序 [Mutt][5]。它使阅读新闻变得容易,并有许多不错的功能。
安装 Newsboat 非常容易,因为它包含在大多数发行版(以及 MacOS 上的 Homebrew中。安装后只需在 **~/.newsboat/urls** 中添加订阅源。如果你是从其他阅读器迁移而来,并有导出的 OPML 文件,那么可以使用以下方式导入: 安装 Newsboat 非常容易,因为它包含在大多数发行版(以及 MacOS 上的 Homebrew中。安装后只需在 `~/.newsboat/urls` 中添加订阅源。如果你是从其他阅读器迁移而来,并有导出的 OPML 文件,那么可以使用以下方式导入:
``` ```
`newsboat -i </path/to/my/feeds.opml>` newsboat -i </path/to/my/feeds.opml>
``` ```
添加订阅源后Newsboat 的界面非常熟悉,特别是如果你使用过 Mutt。你可以使用箭头键上下滚动使用 **r** 检查某个源中是否有新项目,使用 **R** 检查所有源中是否有新项目,按**回车**打开订阅源,并选择要阅读的文章。 添加订阅源后Newsboat 的界面非常熟悉,特别是如果你使用过 Mutt。你可以使用箭头键上下滚动使用 `r` 检查某个源中是否有新项目,使用 `R` 检查所有源中是否有新项目,按回车打开订阅源,并选择要阅读的文章。
![Newsboat article list][6] ![Newsboat article list][6]
@ -39,7 +40,7 @@ RSS 新闻源是了解各个网站最新消息的非常方便的方法。除了
#### 播客 #### 播客
Newsboat 还通过 Podboat 提供了[播客支持][10]podboat 是一个附带的应用,它可帮助下载和排队播客节目。在 Newsboat 中查看播客源时,按下 **e** 将节目添加到你的下载队列中。所有信息将保存在 **~/.newsboat** 目录中的队列文件中。Podboat 读取此队列并将节目下载到本地磁盘。你可以在 Podboat 的用户界面(外观和行为类似于 Newsboat执行此操作也可以使用 ** podboat -a ** 让 Podboat 下载所有内容。作为播客人和播客听众我认为这_真的_很方便。 Newsboat 还通过 Podboat 提供了[播客支持][10]Podboat 是一个附带的应用,它可帮助下载和排队播客节目。在 Newsboat 中查看播客源时,按下 `e` 将节目添加到你的下载队列中。所有信息将保存在 `~/.newsboat` 目录中的队列文件中。Podboat 读取此队列并将节目下载到本地磁盘。你可以在 Podboat 的用户界面(外观和行为类似于 Newsboat执行此操作也可以使用 `podboat -a` 让 Podboat 下载所有内容。作为播客人和播客听众,我认为这*真的*很方便。
![Podboat][11] ![Podboat][11]
@ -52,7 +53,7 @@ via: https://opensource.com/article/20/1/open-source-rss-feed-reader
作者:[Kevin Sonney][a] 作者:[Kevin Sonney][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,29 +1,30 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (geekpi) [#]: translator: (geekpi)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11879-1.html)
[#]: subject: (Use this open source tool to get your local weather forecast) [#]: subject: (Use this open source tool to get your local weather forecast)
[#]: via: (https://opensource.com/article/20/1/open-source-weather-forecast) [#]: via: (https://opensource.com/article/20/1/open-source-weather-forecast)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) [#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用这个开源工具获取本地天气预报 使用这个开源工具获取本地天气预报
====== ======
在我们的 20 个使用开源提升生产力的系列的第十三篇文章中使用 wego 来了解出门前你是否需要外套、雨伞或者防晒霜。
![Sky with clouds and grass][1] > 在我们的 20 个使用开源提升生产力的系列的第十三篇文章中使用 wego 来了解出门前你是否需要外套、雨伞或者防晒霜。
![](https://img.linux.net.cn/data/attachment/album/202002/11/140842a8qwomfeg9mwegg8.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。 去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 wego 了解天气 ### 使用 wego 了解天气
过去十年我对我的职业最满意的地方之一是大多数时候是远程工作。尽管现实情况是我很多时候是在家里办公,但我可以在世界上任何地方工作。缺点是,离家时我会根据天气做出一些决定。在我居住的地方,”晴朗“可以表示从”酷热“、”低于零度“到”一小时内会小雨“。能够了解实际情况和快速预测非常有用。 过去十年我对我的职业最满意的地方之一是大多数时候是远程工作。尽管现实情况是我很多时候是在家里办公,但我可以在世界上任何地方工作。缺点是,离家时我会根据天气做出一些决定。在我居住的地方,“晴朗”可以表示从“酷热”、“低于零度”到“一小时内会下小雨”的各种情况。能够了解实际情况和快速预测非常有用。
![Wego][2] ![Wego][2]
[Wego][3] 是用 Go 编写的程序,可以获取并显示你的当地天气。如果你愿意,它甚至可以用闪亮的 ASCII 艺术效果进行渲染。 [Wego][3] 是用 Go 编写的程序,可以获取并显示你的当地天气。如果你愿意,它甚至可以用闪亮的 ASCII 艺术效果进行渲染。
要安装 wego你需要确保在系统上安装了[Go][4]。之后,你可以使用 **go get** 命令获取最新版本。你可能还想将 **~/go/bin** 目录添加到路径中: 要安装 `wego`,你需要确保在系统上安装了[Go][4]。之后,你可以使用 `go get` 命令获取最新版本。你可能还想将 `~/go/bin` 目录添加到路径中:
``` ```
go get -u github.com/schachmat/wego go get -u github.com/schachmat/wego
@ -31,11 +32,9 @@ export PATH=~/go/bin:$PATH
wego wego
``` ```
首次运行时wego 会报告缺失 API 密钥。现在你需要决定一个后端。默认后端是 [Forecast.io][5],它是 [Dark Sky][6]的一部分。Wego还支持 [OpenWeatherMap][7] 和 [WorldWeatherOnline][8]。我更喜欢 OpenWeatherMap因此我将在此向你展示如何设置。 首次运行时,`wego` 会报告缺失 API 密钥。现在你需要决定一个后端。默认后端是 [Forecast.io][5],它是 [Dark Sky][6]的一部分。`wego` 还支持 [OpenWeatherMap][7] 和 [WorldWeatherOnline][8]。我更喜欢 OpenWeatherMap因此我将在此向你展示如何设置。
你需要在 OpenWeatherMap 中[注册 API 密钥][9]。注册是免费的,尽管免费的 API 密钥限制了一天可以查询的数量,但这对于普通用户来说应该没问题。得到 API 密钥后,将它放到 **~/.wegorc** 文件中。现在可以填写你的位置、语言以及使用公制、英制(英国/美国还是国际单位制SI。OpenWeatherMap 可通过名称、邮政编码、坐标和 ID 确定位置,这是我喜欢它的原因之一。
你需要在 OpenWeatherMap 中[注册 API 密钥][9]。注册是免费的,尽管免费的 API 密钥限制了一天可以查询的数量,但这对于普通用户来说应该没问题。得到 API 密钥后,将它放到 `~/.wegorc` 文件中。现在可以填写你的位置、语言以及使用公制、英制(英国/美国还是国际单位制SI。OpenWeatherMap 可通过名称、邮政编码、坐标和 ID 确定位置,这是我喜欢它的原因之一。
``` ```
# wego configuration for OEM # wego configuration for OEM
@ -53,16 +52,15 @@ owm-lang=en
units=imperial units=imperial
``` ```
现在,在命令行运行 **wego** 将显示接下来三天的当地天气。 现在,在命令行运行 `wego` 将显示接下来三天的当地天气。
Wego 还可以输出 JSON 以便程序使用,还可显示 emoji。你可以使用 **-f** 参数或在 **.wegorc** 文件中指定前端。 `wego` 还可以输出 JSON 以便程序使用,还可显示 emoji。你可以使用 `-f` 参数或在 `.wegorc` 文件中指定前端。
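例如,可以这样获得便于脚本解析的输出(一个小示意,这里假设 JSON 前端的名称就是 `json`

```
# 用 -f 指定前端,以 JSON 形式输出当前天气(前端名称为假设)
wego -f json
```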
![Wego at login][10] ![Wego at login][10]
如果你想在每次打开 shell 或登录主机时查看天气,只需将 wego 添加到 **~/.bashrc**(我这里是 **~/.zshrc**)即可。 如果你想在每次打开 shell 或登录主机时查看天气,只需将 wego 添加到 `~/.bashrc`(我这里是 `~/.zshrc`)即可。
[wttr.in][11] 项目是 wego 上的基于 Web 的封装。它提供了一些其他显示选项,并且可以在同名网站上看到。关于 wttr.in 的一件很酷的事情是,你可以使用 **curl** 获取一行天气信息。我有一个名为 **get_wttr** 的 shell 函数,用于获取当前简化的预报信息。
[wttr.in][11] 项目是 wego 上的基于 Web 的封装。它提供了一些其他显示选项,并且可以在同名网站上看到。关于 wttr.in 的一件很酷的事情是,你可以使用 `curl` 获取一行天气信息。我有一个名为 `get_wttr` 的 shell 函数,用于获取当前简化的预报信息。
``` ```
get_wttr() { get_wttr() {
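  # 补全示意:原文的函数体在此处被差异截断;下面的写法是假设性的,
  # 仅演示用 curl 获取 wttr.in 的单行简化预报:
  curl -s "wttr.in/?format=3"
}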
@ -81,7 +79,7 @@ via: https://opensource.com/article/20/1/open-source-weather-forecast
作者:[Kevin Sonney][a] 作者:[Kevin Sonney][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,147 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11882-1.html)
[#]: subject: (3 handy command-line internet speed tests)
[#]: via: (https://opensource.com/article/20/1/internet-speed-tests)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
3 个方便的命令行网速测试工具
======
> 用这三个开源工具检查你的互联网和局域网速度。
![](https://img.linux.net.cn/data/attachment/album/202002/12/115915kk6hkax1vparkuvk.jpg)
能够验证网络连接速度,可以让你更好地掌控自己的计算机。Speedtest、Fast 和 iPerf 这三个开源工具,可以让你在命令行中检查互联网和网络速度。
### Speedtest
[Speedtest][2] 是一个旧宠。它用 Python 实现,并打包在 Apt 中,也可用 `pip` 安装。你可以将它作为命令行工具或在 Python 脚本中使用。
使用以下命令安装:
```
sudo apt install speedtest-cli
```
或者
```
sudo pip3 install speedtest-cli
```
然后使用命令 `speedtest` 运行它:
```
$ speedtest
Retrieving speedtest.net configuration...
Testing from CenturyLink (65.128.194.58)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by CenturyLink (Cambridge, UK) [20.49 km]: 31.566 ms
Testing download speed................................................................................
Download: 68.62 Mbit/s
Testing upload speed......................................................................................................
Upload: 10.93 Mbit/s
```
它给你提供了互联网上传和下载的网速。它快速而且可脚本调用,因此你可以定期运行它,并将输出保存到文件或数据库中,以记录一段时间内的网络速度。
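由于可以脚本化,你可以配合 cron 定期记录网速(一个最简示意;`--simple` 选项用于精简输出,日志路径是假设的):

```
# 加入 crontab -e 中的一行:每小时追加一条带时间戳的测速记录
0 * * * * date >> $HOME/speedtest.log && /usr/bin/speedtest --simple >> $HOME/speedtest.log 2>&1
```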
### Fast
[Fast][3] 是 Netflix 提供的服务。它的网址是 [Fast.com][4],同时它有一个可通过 `npm` 安装的命令行工具:
```
npm install --global fast-cli
```
网站和命令行程序都提供了相同的基本界面:它是一个尽可能简单的速度测试:
```
$ fast
     82 Mbps ↓
```
该命令返回你的网络下载速度。要获取上传速度,请使用 `-u` 标志:
```
$ fast -u
   ⠧ 80 Mbps ↓ / 8.2 Mbps ↑
```
### iPerf
[iPerf][5] 是测试局域网速度(而不是像前两个工具那样测试互联网速度)的好方法。Debian、Raspbian 和 Ubuntu 用户可以使用 `apt` 安装它:
```
sudo apt install iperf
```
它还可用于 Mac 和 Windows。
安装完成后,你需要在同一网络上的两台计算机上使用它(两台都必须安装 iPerf。指定其中一台作为服务器。
获取服务端计算机的 IP 地址:
```
ip addr show | grep inet.*brd
```
你的本地 IP 地址(假设为 IPv4 本地网络)以 `192.168``10` 开头。记下 IP 地址,以便可以在另一台计算机(指定为客户端的计算机)上使用它。
在服务端启动 `iperf`
```
iperf -s
```
它会等待来自客户端的传入连接。将另一台计算机作为客户端并运行此命令,将示例中的 IP 替换为服务端计算机的 IP
```
iperf -c 192.168.1.2
```
![iPerf][6]
只需几秒钟即可完成测试,然后返回传输大小和计算出的带宽。我使用家用服务器作为服务端,在 PC 和笔记本电脑上进行了一些测试。我最近在房屋周围安装了六类线以太网,因此我的有线连接速度达到 1Gbps但 WiFi 连接速度却低得多。
![iPerf][7]
你可能注意到它记录到了 16Gbps。那是我用服务器测试它自己因此它只是在测试写入磁盘的速度。该服务器的硬盘速度只有 16Gbps但是我的台式机达到了 46Gbps我较新的笔记本电脑则超过了 60Gbps因为它们用的都是固态硬盘。
![iPerf][8]
### 总结
通过这些工具来了解你的网络速度是一项非常简单的任务。如果你更喜欢编写脚本或在命令行中运行,上面的任何一个工具都能满足你。如果你想了解点对点的指标iPerf 也能满足你。
你还使用其他哪些工具来衡量家庭网络?请在评论中分享你的测量结果。
本文最初发表在 Ben Nuttall 的 [Tooling blog][9] 上,并获准在此使用。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/internet-speed-tests
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/train-plane-speed-big-machine.png?itok=f377dXKs (Old train)
[2]: https://github.com/sivel/speedtest-cli
[3]: https://github.com/sindresorhus/fast-cli
[4]: https://fast.com/
[5]: https://iperf.fr/
[6]: https://opensource.com/sites/default/files/uploads/iperf.png (iPerf)
[7]: https://opensource.com/sites/default/files/uploads/iperf2.png (iPerf)
[8]: https://opensource.com/sites/default/files/uploads/iperf3.png (iPerf)
[9]: https://tooling.bennuttall.com/command-line-speedtest-tools/
View File
@ -1,33 +1,32 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin) [#]: translator: (LazyWolfLin)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11867-1.html)
[#]: subject: (What's your favorite Linux distribution?) [#]: subject: (What's your favorite Linux distribution?)
[#]: via: (https://opensource.com/article/20/1/favorite-linux-distribution) [#]: via: (https://opensource.com/article/20/1/favorite-linux-distribution)
[#]: author: (Opensource.com https://opensource.com/users/admin) [#]: author: (Opensource.com https://opensource.com/users/admin)
你最喜欢哪个 Linux 发行版? 你最喜欢哪个 Linux 发行版?
====== ======
参与我们的第七届[年度调查][5],让我们了解你对 Linux 发行版的偏好。
![Hand putting a Linux file folder into a drawer][1] ![](https://img.linux.net.cn/data/attachment/album/202002/08/004438ei1y4pp44pw4xy3w.jpg)
你最喜欢哪个 Linux 发行版?参与我们的第七届[年度调查][5]。虽然有所变化,但现在仍有数百种 [Linux 发行版][2] 保持活跃且运作良好。发行版、包管理器和桌面的组合为 Linux 用户创建了无数客制化系统环境。 你最喜欢哪个 Linux 发行版?虽然有所变化,但现在仍有数百种 [Linux 发行版][2]保持活跃且运作良好。发行版、包管理器和桌面的组合为 Linux 用户创建了无数客制化系统环境。
我们询问了社区的作者们哪个是他们的最爱以及原因。尽管回答中存在一些共性由于各种原因Fedora 和 Ubuntu 是最受欢迎的选择),但我们也听到一些惊奇的回答。以下是他们的一些回答: 我们询问了社区的作者们哪个是他们的最爱以及原因。尽管回答中存在一些共性由于各种原因Fedora 和 Ubuntu 是最受欢迎的选择),但我们也听到一些惊奇的回答。以下是他们的一些回答:
“我使用 Fedora 发行版!我喜欢这样的社区,成员们共同创建一个令人敬畏的操作系统展现了开源软件世界最伟大的造物。”——Matthew Miller > “我使用 Fedora 发行版!我喜欢这样的社区,成员们共同创建一个令人惊叹的操作系统展现了开源软件世界最伟大的造物。”——Matthew Miller
“我在家中使用 Arch。作为一名游戏玩家我希望可以轻松使用最新版本的 Wine 和 GFX 驱动同时最大限度地掌控我的系统。所以我选择一个滚动升级并且每个包都保持领先的发行版。”——Aimi Hobson > “我在家中使用 Arch。作为一名游戏玩家我希望可以轻松使用最新版本的 Wine 和 GFX 驱动同时最大限度地掌控我的系统。所以我选择一个滚动升级并且每个包都保持领先的发行版。”——Aimi Hobson
“NixOS在业余爱好者市场中没有比这更合适的。”——Alexander Sosedkin > “NixOS在业余爱好者市场中没有比这更合适的。”——Alexander Sosedkin
“我用过每个 Fedora 版本作为我的工作系统。这意味着我从第一个版本开始使用。从前,我问自己是否将有一天我会忘记我用的系统版本号。而这一天已经到来了。所以,现在是什么时候”——Hugh Brock > “我用过每个 Fedora 版本作为我的工作系统。这意味着我从第一个版本开始使用。从前,我问自己是否会忘记我使用的是哪一个版本。而这一天已经到来了,是从什么时候开始忘记了的”——Hugh Brock
“通常,在我的房屋和办公室里都有运行 Ubuntu、CentOS 和 Fedora 的机器。我依赖这些发行版来完成各种工作。Fedora 用于高性能和获取最新版本的应用和库。Ubuntu 用于那些需要大型支持社区。CentOS 则当我们需要稳如磐石的服务器平台时。”——Steve Morris > “通常,在我的家里和办公室里都有运行 Ubuntu、CentOS 和 Fedora 的机器。我依赖这些发行版来完成各种工作。Fedora 速度很快而且可以获取最新版本的应用和库。Ubuntu 有大型社区支持可以轻松使用。当我们需要稳如磐石的服务器平台时则用 CentOS。”——Steve Morris
“我最喜欢?对于社区以及如何为发行版构建软件包(从源码构建而非二进制文件),我选择 Fedora。对于可用包的范围和包的定义和开发我选择 Debian。对于文档我选择 Arch。对于新手的提问我以前会推荐 Ubuntu而现在会推荐 Fedora。”——Al Stone > “我最喜欢?对于社区以及如何为发行版构建软件包(从源码构建而非二进制文件),我选择 Fedora。对于可用包的范围和包的定义和开发我选择 Debian。对于文档我选择 Arch。对于新手的提问我以前会推荐 Ubuntu而现在会推荐 Fedora。”——Al Stone
* * * * * *
@ -44,7 +43,7 @@ via: https://opensource.com/article/20/1/favorite-linux-distribution
作者:[Opensource.com][a] 作者:[Opensource.com][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin) 译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,8 +1,8 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (geekpi) [#]: translator: (geekpi)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11863-1.html)
[#]: subject: (4 cool new projects to try in COPR for January 2020) [#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/) [#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/) [#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
@ -18,13 +18,13 @@ COPR 是个人软件仓库[集合][2],它不在 Fedora 中。这是因为某
### Contrast ### Contrast
[Contrast][4]是一款小应用,用于检查两种颜色之间的对比度并确定其是否满足 [WCAG][5]中指定的要求。可以使用十六进制 RGB 代码或使用颜色选择器选择颜色。除了显示对比度之外Contrast 还以选定的颜色为背景上显示短文本来显示比较。 [Contrast][4] 是一款小应用,用于检查两种颜色之间的对比度并确定其是否满足 [WCAG][5] 中指定的要求。可以使用十六进制 RGB 代码或使用颜色选择器选择颜色。除了显示对比度之外Contrast 还以选定的颜色为背景上显示短文本来显示比较。
![][6] ![][6]
#### 安装说明 #### 安装说明
[仓库][7]当前为 Fedora 31 和 Rawhide 提供 Contrast。要安装 Contrast请使用以下命令 [仓库][7]当前为 Fedora 31 和 Rawhide 提供 Contrast。要安装 Contrast请使用以下命令
``` ```
sudo dnf copr enable atim/contrast sudo dnf copr enable atim/contrast
@ -33,11 +33,11 @@ sudo dnf install contrast
### Pamixer ### Pamixer
[Pamixer][8]是一个使用 PulseAudio 调整和监控声音设备音量的命令行工具。你可以显示设备的当前音量并直接增加/减小它,或静音/取消静音。Pamixer 可以列出所有源和接收器。 [Pamixer][8] 是一个使用 PulseAudio 调整和监控声音设备音量的命令行工具。你可以显示设备的当前音量并直接增加/减小它,或静音/取消静音。Pamixer 可以列出所有源和接收器。
#### 安装说明 #### 安装说明
[仓库][7]当前为 Fedora 31 和 Rawhide 提供 Pamixer。要安装 Pamixer请使用以下命令 [仓库][7]当前为 Fedora 31 和 Rawhide 提供 Pamixer。要安装 Pamixer请使用以下命令
``` ```
sudo dnf copr enable opuk/pamixer sudo dnf copr enable opuk/pamixer
@ -52,7 +52,7 @@ sudo dnf install pamixer
#### 安装说明 #### 安装说明
[仓库][7]当前为 Fedora 31 提供 PhotoFlare。要安装 PhotoFlare请使用以下命令 [仓库][7]当前为 Fedora 31 提供 PhotoFlare。要安装 PhotoFlare请使用以下命令
``` ```
sudo dnf copr enable adriend/photoflare sudo dnf copr enable adriend/photoflare
@ -65,7 +65,7 @@ sudo dnf install photoflare
#### 安装说明 #### 安装说明
[仓库][7]当前为 Fedora 29-31、Rawhide、EPEL 6-8 和其他发行版提供 tdiff。要安装 tdiff请使用以下命令 [仓库][7]当前为 Fedora 29-31、Rawhide、EPEL 6-8 和其他发行版提供 tdiff。要安装 tdiff请使用以下命令
``` ```
sudo dnf copr enable fif/tdiff sudo dnf copr enable fif/tdiff
@ -79,7 +79,7 @@ via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2
作者:[Dominik Turecek][a] 作者:[Dominik Turecek][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11864-1.html)
[#]: subject: (Intro to the Linux command line)
[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux 命令行简介
======
> 下面是一些针对刚开始使用 Linux 命令行的人的热身练习。警告:它可能会上瘾。
![](https://images.idgesg.net/images/article/2020/01/cmd_linux-control_linux-logo_-100828420-large.jpg)
如果你是 Linux 新手,或者从来没有花时间研究过命令行,你可能不会理解为什么这么多 Linux 爱好者坐在舒适的桌面前兴奋地输入命令来使用大量工具和应用。在这篇文章中,我们将快速浏览一下命令行的奇妙之处,看看能否让你着迷。
首先,要使用命令行,你必须打开一个命令工具(也称为“命令提示符”)。如何做到这一点将取决于你运行的 Linux 版本。例如,在 RedHat 上,你可能会在屏幕顶部看到一个 “Activities” 选项卡,它将打开一个选项列表和一个用于输入命令的小窗口(类似 “cmd” 为你打开的窗口)。在 Ubuntu 和其他一些版本中,你可能会在屏幕左侧看到一个小的终端图标。在许多系统上,你可以同时按 `Ctrl+Alt+t` 键打开命令窗口。
如果你使用 PuTTY 之类的工具登录 Linux 系统,你会发现自己已经处于命令行界面。
一旦你得到你的命令行窗口,你会发现自己坐在一个提示符面前。它可能只是一个 `$` 或者像 `user@system:~$` 这样的东西,但它意味着系统已经准备好为你运行命令了。
一旦你走到这一步,就应该开始输入命令了。下面是一些可以首先尝试的命令;此外,这里还有一份收录了特别有用的命令的 [PDF][4],以及一份适合打印、做成双面卡片的命令手册。
| 命令 | 用途 |
|---|---|
| `pwd` | 显示我在文件系统中的位置(在最初进入系统时运行将显示主目录) |
| `ls` | 列出我的文件 |
| `ls -a` | 列出我更多的文件(包括隐藏文件) |
| `ls -al` | 列出我的文件,并且包含很多详细信息(包括日期、文件大小和权限) |
| `who` | 告诉我谁登录了(如果只有你,不要失望) |
| `date` | 提醒我今天是星期几(也显示时间) |
| `ps` | 列出我正在运行的进程(可能只是你的 shell 和 `ps` 命令) |
一旦你从命令行角度习惯了 Linux 主目录之后,就可以开始探索了。也许你会准备好使用以下命令在文件系统中闲逛:
| 命令 | 用途 |
|---|---|
| `cd /tmp` | 移动到其他文件夹(本例中,打开 `/tmp` 文件夹) |
| `ls` | 列出当前位置的文件 |
| `cd` | 回到主目录(不带参数的 `cd` 总是能将你带回到主目录) |
| `cat .bashrc` | 显示文件的内容(本例中显示 `.bashrc` 文件的内容) |
| `history` | 显示最近执行的命令 |
| `echo hello` | 跟自己说 “hello” |
| `cal` | 显示当前月份的日历 |
要了解为什么高级 Linux 用户如此喜欢命令行,你将需要尝试其他一些功能,例如重定向和管道。“重定向”是当你获取命令的输出并将其放到文件中而不是在屏幕上显示时。“管道”是指你将一个命令的输出发送给另一条将以某种方式对其进行操作的命令。这是可以尝试的命令:
| 命令 | 用途 |
|---|---|
| `echo "echo hello" > tryme` | 创建一个新的文件并将 “echo hello” 写入该文件 |
| `chmod 700 tryme` | 使新建的文件可执行 |
| `./tryme` | 运行新文件(它应当运行文件中包含的命令并且显示 “hello” |
| `ps aux` | 显示所有运行中的程序 |
| `ps aux \| grep $USER` | 显示所有运行中的程序,但是限制输出的内容包含你的用户名 |
| `echo $USER` | 使用环境变量显示你的用户名 |
| `whoami` | 使用命令显示你的用户名 |
| `who \| wc -l` | 计数所有当前登录的用户数目 |
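把重定向和管道组合起来,就能初步体会命令行的威力了(一个综合小示例,其中的文件名是随意取的):

```
# 先把完整的进程列表重定向到文件,再用管道筛选出属于自己的进程并计数
ps aux > processes.txt
grep $USER processes.txt | wc -l
```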
### 总结
一旦你习惯了基本命令,就可以探索其他命令并尝试编写脚本。你可能会发现 Linux 比你想象的要强大,并且好用得多。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[qianmingtian][c]
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[c]: https://github.com/qianmingtian
[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
View File
@ -1,20 +1,20 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin) [#]: translator: (LazyWolfLin)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11853-1.html)
[#]: subject: (4 Key Changes to Look Out for in Linux Kernel 5.6) [#]: subject: (4 Key Changes to Look Out for in Linux Kernel 5.6)
[#]: via: (https://itsfoss.com/linux-kernel-5-6/) [#]: via: (https://itsfoss.com/linux-kernel-5-6/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) [#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
四大亮点带你看 Linux Kernel 5.6 四大亮点带你看 Linux 内核 5.6
====== ======
当我们体验 Linux 5.5 稳定发行版带来更好的硬件支持时Linux 5.6 已经来了。 当我们还在体验 Linux 5.5 稳定发行版带来更好的硬件支持时Linux 5.6 已经来了。
说实话Linux 5.6 比 5.5 更令人兴奋。即使即将发布的 Ubuntu 20.04 LTS 发行版将开箱集成 Linux 5.5,你也需要真正了解 Linux 5.6 kernel 为我们提供了什么。 说实话Linux 5.6 比 5.5 更令人兴奋。即使即将发布的 Ubuntu 20.04 LTS 发行版将自带 Linux 5.5,你也需要切实了解一下 Linux 5.6 内核为我们提供了什么。
我将在本文中重点介绍 Linux 5.6 发版中值得期待的关键更改和功能: 我将在本文中重点介绍 Linux 5.6 发版中值得期待的关键更改和功能:
### Linux 5.6 功能亮点 ### Linux 5.6 功能亮点
@ -22,7 +22,7 @@
当 Linux 5.6 有新消息时,我会努力更新这份功能列表。但现在让我们先看一下当前已知的内容: 当 Linux 5.6 有新消息时,我会努力更新这份功能列表。但现在让我们先看一下当前已知的内容:
#### 1\. 支持 WireGuard #### 1支持 WireGuard
WireGuard 将被添加到 Linux 5.6,出于各种原因的考虑它可能将取代 [OpenVPN][2]。 WireGuard 将被添加到 Linux 5.6,出于各种原因的考虑它可能将取代 [OpenVPN][2]。
@ -30,44 +30,44 @@ WireGuard 将被添加到 Linux 5.6,出于各种原因的考虑它可能将取
同样,[Ubuntu 20.04 LTS 将支持 WireGuard][4]。 同样,[Ubuntu 20.04 LTS 将支持 WireGuard][4]。
#### 2\. 支持 USB4 #### 2支持 USB4
Linux 5.6 也将支持 **USB4** Linux 5.6 也将支持 **USB4**
如果你不了解 USB 4.0 (USB4),你可以阅读这份[文档][5]. 如果你不了解 USB 4.0 (USB4),你可以阅读这份[文档][5]
根据文档,“_USB4 将使 USB 的最大带宽增大一倍并支持同时多数据和显示接口并行。_ 根据文档“USB4 将使 USB 的最大带宽增大一倍并支持<ruby>多并发数据和显示协议<rt>multiple simultaneous data and display protocols</rt></ruby>
另外,虽然我们都知道 USB4 基于 Thunderbolt 接口协议,但它将向后兼容 USB 2.0、USB 3.0 以及 Thunderbolt 3这将是一个好消息。 另外,虽然我们都知道 USB4 基于 Thunderbolt 接口协议,但它将向后兼容 USB 2.0、USB 3.0 以及 Thunderbolt 3这将是一个好消息。
#### 3\. 使用 LZO/LZ4 压缩 F2FS 数据 #### 3使用 LZO/LZ4 压缩 F2FS 数据
Linux 5.6 也将支持使用LZO/LZ4 算法压缩 F2FS 数据。 Linux 5.6 也将支持使用 LZO/LZ4 算法压缩 F2FS 数据。
换句话说,这只是 Linux 文件系统的一种新压缩技术,你可以选择待定的文件扩展。 换句话说,这是 Linux 文件系统F2FS的一种新压缩技术你可以选择对特定扩展名的文件进行压缩。
#### 4\. 解决 32 位系统的 2038 年问题 #### 4解决 32 位系统的 2038 年问题
Unix 和 Linux 将时间值以 32 位有符号整数格式存储,其最大值为 2147483647。时间值如果超过这个数值则将由于整数溢出而存储为负数。 Unix 和 Linux 将时间值以 32 位有符号整数格式存储,其最大值为 2147483647。时间值如果超过这个数值则将由于整数溢出而存储为负数。
这意味着对于32位系统时间值不能超过 1970 年 1 月 1 日后的 2147483647 秒。也就是说,在 UTC 时间 2038 年 1 月 19 日 03:14:07 时,由于整数溢出,时间将显示为 1901 年 12 月 13 日而不是 2038 年 1 月 19 日。 这意味着对于 32 位系统,时间值不能超过 1970 年 1 月 1 日后的 2147483647 秒。也就是说,在 UTC 时间 2038 年 1 月 19 日 03:14:07 时,由于整数溢出,时间将显示为 1901 年 12 月 13 日而不是 2038 年 1 月 19 日。
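可以用 GNU 版本的 `date` 命令直接验证这个临界时间点(一个小示例,假设系统提供 GNU coreutils 的 `date`

```
# 32 位有符号整数的最大值2^31 - 1 = 2147483647
date -u -d @2147483647
# 输出Tue Jan 19 03:14:07 UTC 2038
```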
Linux kernel 5.6 解决了这个问题,因此 32 位系统可以运行到 2038 年以后。 Linux kernel 5.6 解决了这个问题,因此 32 位系统可以运行到 2038 年以后。
#### 5\. 改进硬件支持 #### 5改进硬件支持
很显然,在下一次发布版中,硬件支持也将继续提升。而支持新式无线外设的计划也同样紧迫 很显然,在下一个发布版中,硬件支持也将继续提升。而支持新式无线外设的计划也同样是优先的
新内核中将增加对 MX Master 3 鼠标以及罗技其他无线产品的支持。 新内核中将增加对 MX Master 3 鼠标以及罗技其他无线产品的支持。
除了罗技的产品外,你还可以期待获得许多不同硬件的支持(包括对 AMD GPUs、NVIDIA GPUs 和 Intel Tiger Lake 芯片组的支持)。 除了罗技的产品外,你还可以期待获得许多不同硬件的支持(包括对 AMD GPU、NVIDIA GPU 和 Intel Tiger Lake 芯片组的支持)。
#### 6\. 其他更新 #### 6其他更新
此外Linux 5.6 中除了上述主要的新增功能或支持外,下一个内核版本也将进行其他一些改进: 此外Linux 5.6 中除了上述主要的新增功能或支持外,下一个内核版本也将进行其他一些改进:
* 改进 AMD Zen 的温度/功率报告 * 改进 AMD Zen 的温度/功率报告
* 修复华硕飞行堡垒系列笔记本中 AMD CPUs 过热 * 修复华硕飞行堡垒系列笔记本中 AMD CPU 过热
* 开源支持 NVIDIA RTX 2000 图灵系列显卡 * 开源支持 NVIDIA RTX 2000 图灵系列显卡
* 内建 FSCRYPT 加密 * 内建 FSCRYPT 加密
@ -82,7 +82,7 @@ via: https://itsfoss.com/linux-kernel-5-6/
作者:[Ankush Das][a] 作者:[Ankush Das][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin) 译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11866-1.html)
[#]: subject: (Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10)
[#]: via: (https://itsfoss.com/ubuntu-19-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 19.04 已经到期!现有用户必须升级到 Ubuntu 19.10
======
> Ubuntu 19.04 已在 2020 年 1 月 23 日到期,这意味着运行 Ubuntu 19.04 的系统将不再会接收到安全和维护更新,因此将使其容易受到攻击。
![][1]
[Ubuntu 19.04][2] 发布于 2019 年 4 月 18 日。由于它不是长期支持LTS版本因此只有 9 个月的支持。完成它的发行周期后Ubuntu 19.04 于 2020 年 1 月 23 日到期。
Ubuntu 19.04 带来了一些视觉和性能方面的改进,为时尚和美观的 Ubuntu 外观铺平了道路。与其他常规 Ubuntu 版本一样,它的生命周期为 9 个月,如今已经结束了。
### Ubuntu 19.04 终止了吗?这是什么意思?
EOLEnd of life是指在某个日期之后操作系统版本将无法获得更新。你可能已经知道 Ubuntu或其他操作系统提供了安全性和维护升级以使你的系统免受网络攻击。当发行版到期后操作系统将停止接收这些重要更新。
如果你的操作系统版本到期后继续使用该系统,那么系统将容易受到网络和恶意软件的攻击。不仅如此。在 Ubuntu 中,你使用 APT 从软件中心下载的应用也不会更新。实际上,你将不再能够[使用 apt-get 命令安装新软件][3](如果不是立即,那就是逐渐地)。
### 所有 Ubuntu 19.04 用户必须升级到 Ubuntu 19.10
从 2020 年 1 月 23 日开始Ubuntu 19.04 将停止接收更新。你必须升级到 Ubuntu 19.10该版本将被支持到 2020 年 7 月。这也适用于其他[官方 Ubuntu 衍生版][4],例如 Lubuntu、Xubuntu、Kubuntu 等。
你可以在“设置 -> 细节”中,或使用如下命令来[检查你的 Ubuntu 版本][9]
```
lsb_release -a
```
#### 如何升级到 Ubuntu 19.10
值得庆幸的是Ubuntu 提供了简单的方法来将现有系统升级到新版本。实际上Ubuntu 还会提示你有新的 Ubuntu 版本可用,你应该升级到该版本。
![Existing Ubuntu 19.04 should see a message to upgrade to Ubuntu 19.10][5]
如果你的互联网连接良好,那么可以使用[和更新 Ubuntu 一样的 Software Updater 工具][6]。在上图中,你只需单击 “Upgrade” 按钮并按照说明进行操作。我已经编写了有关使用此方法[升级到 Ubuntu 18.04][7]的文章。
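如果你更习惯终端,也可以从命令行触发升级(一个最简示意,假设系统中已安装 `update-manager-core` 软件包):

```
sudo apt update && sudo apt upgrade
sudo do-release-upgrade
```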
如果你没有良好的互联网连接,那么有一种变通方案:先在外部磁盘上备份家目录或重要数据。
然后,制作一个 Ubuntu 19.10 的 Live USB。下载 Ubuntu 19.10 ISO并使用 Ubuntu 系统上已安装的启动磁盘创建器从该 ISO 创建 Live USB。
从该 Live USB 引导,然后继续“安装” Ubuntu 19.10。在安装过程中,你应该看到一个删除 Ubuntu 19.04 并将其替换为 Ubuntu 19.10 的选项。选择此选项,然后像重新[安装 Ubuntu][8]一样进行下去。
#### 你是否仍在使用 Ubuntu 19.04、18.10、17.10 或其他不受支持的版本?
你应该注意,目前仅 Ubuntu 16.04、18.04 和 19.10(或更高版本)版本还受支持。如果你运行的不是这些 Ubuntu 版本,那么你必须升级到较新版本。
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-19-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/End-of-Life-Ubuntu-19.04.png?ssl=1
[2]: https://itsfoss.com/ubuntu-19-04-release/
[3]: https://itsfoss.com/apt-get-linux-guide/
[4]: https://itsfoss.com/which-ubuntu-install/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/ubuntu_19_04_end_of_life.jpg?ssl=1
[6]: https://itsfoss.com/update-ubuntu/
[7]: https://itsfoss.com/upgrade-ubuntu-version/
[8]: https://itsfoss.com/install-ubuntu/
[9]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
View File
@ -0,0 +1,186 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11884-1.html)
[#]: subject: (Best Open Source eCommerce Platforms to Build Online Shopping Websites)
[#]: via: (https://itsfoss.com/open-source-ecommerce/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
7 个可用于自建电商站点的开源解决方案
======
在[之前的文章][1]中,我介绍过一些开源<ruby>内容管理系统<rt>Content Management System</rt></ruby>CMS顾名思义这些 CMS 平台更适用于以内容为主的站点。
那如果想要建立自己的线上购物站点呢?我们正好还有一些优秀的开源电商解决方案,可以自行部署在自己的 Linux 服务器上。
这些电商解决方案是专为搭建线上购物站点设计的,因此都集成了库存管理、商品列表、购物车、下单、愿望清单以及支付这些必需的基础功能。
但请注意,这篇文章并不会进行深入介绍。因此,我建议最好广泛试用其中的多个产品,以便进一步的了解和比较。
### 优秀的开源电商解决方案
![][2]
开源电商解决方案种类繁多,一些缺乏维护的都会被我们忽略掉,以免搭建出来的站点因维护不及时而受到影响。
另外,以下的列表排名不分先后。
#### 1、nopCommerce
![][3]
nopCommerce 是基于 [ASP.NET Core][4] 的自由开源的电商解决方案。如果你要找的是基于 PHP 的解决方案,可以跳过这一节了。
nopCommerce 的管理面板界面具有简洁易用的特点,如果你还使用过 OpenCart就可能会感到似曾相识我不是在抱怨。在默认情况下它就已经自带了很多基本的功能同时还为移动端用户提供了响应式的设计。
你可以在其[官方商店][5]中获取到一些兼容的界面主题和应用扩展,还可以选择付费的支持服务。
在开始使用前,你可以从 nopCommerce 的[官方网站][6]下载源代码包,然后进行自定义配置和部署;也可以直接下载完整的软件包快速安装到 web 服务器上。详细信息可以查阅 nopCommerce 的 [GitHub 页面][7]或官方网站。
- [nopCommerce][8]
#### 2、OpenCart
![][9]
OpenCart 是一个基于 PHP 的非常流行的电商解决方案,就我个人而言,我曾为一个项目用过它,并且体验非常好,如果不是最好的话。
或许你会觉得它维护得不是很频繁,但实际上使用 OpenCart 的开发者并不在少数。你可以获得许多受支持的扩展并将它们的功能加入到 OpenCart 中。
OpenCart 不一定是适合所有人的“现代”电商解决方案,但如果你需要的只是一个基于 PHP 的开源解决方案OpenCart 是个值得一试的选择。在大多数具有一键式应用程序安装支持的网络托管平台中,应该可以安装 OpenCart。想要了解更多可以查阅 OpenCart 的官方网站或 [GitHub 页面][10]。
- [OpenCart][11]
#### 3、PrestaShop
![][12]
PrestaShop 也是一个可以尝试的开源电商解决方案。
PrestaShop 是一个处于积极维护中的开源解决方案,它的官方商店中也额外提供了主题和扩展。与 OpenCart 不同,在托管服务平台上,你可能找不到一键安装的 PrestaShop。但不用担心从官方网站下载下来之后它的部署过程也并不复杂。如果你需要帮助也可以参考 PrestaShop 的[安装指南][15]。
PrestaShop 的特点就是配置丰富和易于使用,我发现很多其它用户也在用它,你也不妨试用一下。
你也可以在 PrestaShop 的 [GitHub 页面][16]查阅到更多相关内容。
- [PrestaShop][17]
#### 4、WooCommerce
![][18]
如果你想用 [WordPress][19] 来搭建电商站点,不妨使用 WooCommerce。
从技术上来说,这种方式其实是搭建一个 WordPress 应用,然后把 WooCommerce 作为一个插件或扩展以实现电商站点所需要的功能。很多 web 开发者都知道如何使用 WordPress因此 WooCommerce 的学习成本不会很高。
WordPress 作为目前最好的开源站点项目之一,对大部分人来说都不会有太高的门槛。它具有易用、稳定的特点,同时还支持大量的扩展插件。
WooCommerce 的灵活性也是一大亮点,在它的线上商店提供了许多设计和扩展可供选择。你也可以到它的 [GitHub 页面][20]查看相关介绍。
- [WooCommerce][21]
#### 5、Zen Cart
![][22]
这或许是一个稍显古老的电商解决方案,但同时也是最好的开源解决方案之一。如果你喜欢老式风格的模板(主要基于 HTML而且只需要一些基础性的扩展那你也可以尝试使用 Zen Cart。
就我个人而言,我不建议把 Zen Cart 用在一个新项目当中。但考虑到它仍然是一个活跃更新中的解决方案,如果你喜欢的话,也不妨用它来进行试验。
你也可以在 [SourceForge][23] 找到 Zen Cart 这个项目。
- [Zen Cart][24]
#### 6、Magento
![Image Credits: Magestore][25]
Magento 是 Adobe 旗下的开源电商解决方案,从某种角度来说,可能比 WordPress 表现得更为优秀。
Magento 完全是作为电商应用程序而生的,因此你会发现它的很多基础功能都非常好用,甚至还提供了高级的定制。
但如果你使用的是 Magento 的开源版,可能会接触不到托管版的一些高级功能,两个版本的差异,可以在[官方文档][26]中查看到。如果你使用托管版,还可以选择相关的托管支持服务。
想要了解更多,可以查看 Magento 的 [GitHub 页面][27]。
- [Magento][28]
#### 7、Drupal
![Drupal][29]
Drupal 是一个适用于创建电商站点的开源 CMS 解决方案。
我没有使用过 Drupal因此我不太确定它用起来是否足够灵活。但从它的官方网站上来看它提供的扩展模块和主题列表足以让你轻松完成一个电商站点需要做的任何事情。
跟 WordPress 类似Drupal 在服务器上的部署并不复杂,不妨看看它的使用效果。在它的[下载页面][30]可以查看这个项目以及下载最新的版本。
- [Drupal][31]
#### 8、Odoo eCommerce
![Odoo Ecommerce Platform][32]
如果你还不知道的话Odoo 提供了一整套开源商务应用程序。他们还提供了[开源会计软件][33]和 CRM 解决方案,我们将会在单独的文章中进行介绍。
对于电子商务门户,你可以根据需要使用其在线拖放生成器自定义网站。你也可以推广该网站。除了简单的主题安装和自定义选项之外,你还可以利用 HTML/CSS 在一定程度上手动自定义外观。
你也可以查看其 [GitHub][34] 页面以进一步了解它。
- [Odoo eCommerce][35]
### 总结
我敢肯定还有更多的开源电子商务平台,但是,我现在还没有遇到比我上面列出的更好的东西。
如果你还有其它值得一提的产品,可以在评论区发表。也欢迎在评论区分享你对开源电商解决方案的经验和想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-ecommerce/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/open-source-cms/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/open-source-eCommerce.png?ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/nopCommerce.png?ssl=1
[4]: https://en.wikipedia.org/wiki/ASP.NET_Core
[5]: https://www.nopcommerce.com/marketplace
[6]: https://www.nopcommerce.com/download-nopcommerce
[7]: https://github.com/nopSolutions/nopCommerce
[8]: https://www.nopcommerce.com/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/opencart.jpg?ssl=1
[10]: https://github.com/opencart/opencart
[11]: https://www.opencart.com/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/prestashop.jpg?ssl=1
[13]: https://addons.prestashop.com/en/3-templates-prestashop
[14]: https://addons.prestashop.com/en/
[15]: http://doc.prestashop.com/display/PS17/Installing+PrestaShop
[16]: https://github.com/PrestaShop/PrestaShop
[17]: https://www.prestashop.com/en
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/woocommerce.jpg?ssl=1
[19]: https://wordpress.org/
[20]: https://github.com/woocommerce/woocommerce
[21]: https://woocommerce.com/
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/Zen-cart.jpg?ssl=1
[23]: https://sourceforge.net/projects/zencart/
[24]: https://www.zen-cart.com/
[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/magento.jpg?ssl=1
[26]: https://magento.com/compare-open-source-and-magento-commerce
[27]: https://github.com/magento
[28]: https://magento.com/
[29]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/drupal.png?ssl=1
[30]: https://www.drupal.org/project/drupal
[31]: https://www.drupal.org/industries/ecommerce
[32]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/odoo-ecommerce-platform.jpg?w=800&ssl=1
[33]: https://itsfoss.com/open-source-accounting-software/
[34]: https://github.com/odoo/odoo
[35]: https://www.odoo.com/page/open-source-ecommerce
View File
@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11888-1.html)
[#]: subject: (NVIDIAs Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux)
[#]: via: (https://itsfoss.com/geforce-now-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
NVIDIA 的云游戏服务 GeForce NOW 无耻地忽略了 Linux
======
NVIDIA 的 [GeForce NOW][1] 云游戏服务(在线串流游戏,可以在任何设备上玩)对于那些可能没有顶级硬件、但想在最新最好的游戏上获得尽可能好的游戏体验的玩家来说,是充满前景的。
该服务仅限于一些用户(以等待列表的形式)使用。然而,他们最近宣布 [GeForce NOW 面向所有人开放][2]。但实际上并不是。
有趣的是,它**并不是面向全球所有区域**。而且,更糟的是 **GeForce NOW 不支持 Linux**
![][3]
### GeForce NOW 并不是向“所有人开放”
制作一个基于订阅的云服务来玩游戏的目的是消除平台依赖性。
就像你通常可以用浏览器访问任何网站一样,你应该能够在任何平台上玩游戏。这就是云游戏的理念,对吧?
![][4]
好吧,这绝对算不上什么尖端科技,但是 NVIDIA 仍然不支持 Linux和 iOS。
### 是因为没有人使用 Linux 吗?
我非常不同意这一点,即使这真的是某些厂商不支持 Linux 的原因。如果真是这样,我就不会把 Linux 用作主要桌面操作系统来为 “It's FOSS” 写文章了。
不仅如此,如果 Linux 不值一提,你认为为何一个 Twitter 用户会提到缺少 Linux 支持?
![][5]
是的,也许用户群不够大,但是在考虑将其作为基于云的服务时,**不支持 Linux** 显得没有意义。
从技术上讲,如果 Linux 上没有游戏,那么 **Valve** 就不会在 Linux 上改进 [Steam Play][6] 来帮助更多用户在 Linux 上玩纯 Windows 的游戏。
我不想说任何不正确的说法,但台式机 Linux 游戏的发展比以往任何时候都要快(即使统计上要比 Mac 和 Windows 要低)。
### 云游戏不应该像这样
![][7]
如上所述,找到使用 Steam Play 的 Linux 玩家不难。只是你会发现 Linux 上游戏玩家的整体“市场份额”低于其他平台。
即使这是事实,云游戏也不应该依赖于特定平台。而且,考虑到 GeForce NOW 本质上是一种基于浏览器的可以玩游戏的流媒体服务,所以对于像 NVIDIA 这样的大公司来说,支持 Linux 并不困难。
来吧NVIDIA*难道你想让我们相信,支持 Linux 在技术上有困难?还是说,你只是觉得不值得支持 Linux 平台?*
### 结语
不管我为 GeForce NOW 服务发布而感到多么兴奋,当看到它根本不支持 Linux我感到非常失望。
如果像 GeForce NOW 这样的云游戏服务在不久的将来开始支持 Linux**你可能没有理由使用 Windows 了***咳嗽*)。
你怎么看待这件事?在下面的评论中让我知道你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/geforce-now-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.nvidia.com/en-us/geforce-now/
[2]: https://blogs.nvidia.com/blog/2020/02/04/geforce-now-pc-gaming/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now-linux.jpg?ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now.png?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/geforce-now-twitter-1.jpg?ssl=1
[6]: https://itsfoss.com/steam-play/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/ge-force-now.jpg?ssl=1
View File
@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news)
[#]: via: (https://opensource.com/article/20/2/linux-java-and-other-industry-news)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Need 32-bit Linux to run past 2038? When version 5.6 of the kernel pops, you're in for a treat][2]
> Arnd Bergmann, an engineer working on the thorny Y2038 problem in the Linux kernel, posted to the [mailing list][3] that, yup, Linux 5.6 "should be the first release that can serve as a base for a 32-bit system designed to run beyond year 2038."
**The impact:** Y2K didn't get fixed; it just got bigger and delayed. There is no magic in software or computers; just people trying to solve complicated problems as best they can, and sometimes introducing more complicated problems for different people to solve at some point in the future.
## [What the dev? Celebrating Java's 25th anniversary][4]
> Java is coming up on a big milestone: Its 25th anniversary! To celebrate, we take a look back over the last 25 years to see how Java has evolved over time. In this episode, Social Media and Online Editor Jenna Sargent talks to Rich Sharples, senior director of product management for middleware at Red Hat, to learn more.
**The impact:** There is something comforting about immersing yourself in a deep well of lived experience. Rich clearly lived through what he is talking about and shares insider knowledge with you (and his dog).
## [Do I need an API Gateway if I use a service mesh?][5]
> This post may not be able to break through the noise around API Gateways and Service Mesh. However, it's 2020 and there is still abundant confusion around these topics. I have chosen to write this to help bring real concrete explanation to help clarify differences, overlap, and when to use which. Feel free to [@ me on twitter (@christianposta)][6] if you feel I'm adding to the confusion, disagree, or wish to buy me a beer (and these are not mutually exclusive reasons).
**The impact:** Yes, though they use similar terms and concepts they have different concerns and scopes.
## [What Australia's AGL Energy learned about Cloud Native compliance][7]
> This is really at the heart of what open source is, enabling everybody to contribute equally. Within large enterprises, there are controls that are needed, but if we can automate the management of the majority of these controls, we can enable an amazing culture and development experience.
**The impact:** They say "software is eating the world" and "developers are the new kingmakers." The fact that compliance in an energy utility is subject to developer experience improvement basically proves both statements.
## [Monoliths are the future][8]
> And then what they end up doing is creating 50 deployables, but it's really a _distributed_ monolith. So it's actually the same thing, but instead of function calls and class instantiation, they're initiating things and throwing it over a network and hoping that it comes back. And since they can't reliably _make it_ come back, they introduce things like [Prometheus][9], [OpenTracing][10], all of this stuff. I'm like, **“What are you doing?!”**
**The impact:** Do things for real reasons with a clear-eyed understanding of what those reasons are and how they'll make your business or your organization better.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/linux-java-and-other-industry-news
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.theregister.co.uk/2020/01/30/linux_5_6_2038/
[3]: https://lkml.org/lkml/2020/1/29/355
[4]: https://whatthedev.buzzsprout.com/673192/2543290-celebrating-java-s-25th-anniversary-episode-16
[5]: https://blog.christianposta.com/microservices/do-i-need-an-api-gateway-if-i-have-a-service-mesh/ (Do I Need an API Gateway if I Use a Service Mesh?)
[6]: http://twitter.com/christianposta?lang=en
[7]: https://thenewstack.io/what-australias-agl-energy-learned-about-cloud-native-compliance/
[8]: https://changelog.com/posts/monoliths-are-the-future
[9]: https://prometheus.io/
[10]: https://opentracing.io
View File
@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Linux desktop, CERN powered by Ceph, and more industry trends)
[#]: via: (https://opensource.com/article/20/2/linux-desktop-cern-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Building a Linux desktop, CERN powered by Ceph, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Building a Linux desktop for cloud-native development][2]
> This post covers the building of my Linux Desktop PC for Cloud Native Development. I'll be covering everything from parts, to peripherals, to CLIs, to SaaS software with as many links and snippets as I can manage. I hope that you enjoy reading about my experience, learn something, and possibly go on to build your own Linux Desktop.
**The impact**: I hope the irony is not lost on anyone that step 1, when doing cloud-native software development, is to install Linux on a physical computer.
## [Enabling CERN's particle physics research with open source][3]
> Ceph is an open-source software-defined storage platform. While it's not often in the spotlight, it's working hard behind the scenes, playing a crucial role in enabling ambitious, world-renowned projects such as CERN's particle physics research, Immunity Bio's cancer research, The Human Brain Project, MeerKat radio telescope, and more. These ventures are propelling the collective understanding of our planet and the human race beyond imaginable realms, and the outcomes will forever change how we perceive our existence and potential.
**The impact**: It is not often that you get to see a straight line drawn between storage and the perception of human existence. Thanks for that, CERN!
## [2020 cloud predictions][4]
> "Serverless" as a concept provides a simplified developer experience that will become a platform feature. More platform-as-a-service providers will incorporate serverless traits into the daily activities developers perform when building cloud-native applications, becoming the default computing paradigm for the cloud.
**The impact:** All of the trends in the predictions in this post are basically about maturation as ideas like serverless, edge computing, DevOps, and other cloud-adjacent buzz words move from the early adopters into the early majority phase of the adoption curve.
## [End-of-life announcement for CoreOS Container Linux][5]
> As we've [previously announced][6], [Fedora CoreOS][7] is the official successor to CoreOS Container Linux. Fedora CoreOS is a [new Fedora Edition][8] built specifically for running containerized workloads securely and at scale. It combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host. For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][9] and the [Fedora CoreOS documentation][10].
**The impact**: Milestones like this are often bittersweet for both creators and users. The CoreOS team built something that their community loved to use, which is something to be celebrated. Hopefully, that community can find a [new home][11] in the wider [Fedora ecosystem][8].
_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/linux-desktop-cern-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://blog.alexellis.io/building-a-linux-desktop-for-cloud-native-development/
[3]: https://insidehpc.com/2020/02/how-ceph-powers-exciting-research-with-open-source/
[4]: https://www.devopsdigest.com/2020-cloud-predictions-2
[5]: https://coreos.com/os/eol/
[6]: https://groups.google.com/d/msg/coreos-user/zgqkG88DS3U/PFP9yrKbAgAJ
[7]: https://getfedora.org/coreos/
[8]: https://fedoramagazine.org/fedora-coreos-out-of-preview/
[9]: https://fedoramagazine.org/introducing-fedora-coreos/
[10]: https://docs.fedoraproject.org/en-US/fedora-coreos/
[11]: https://getfedora.org/en/coreos/

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (OpenShot Video Editor Gets a Major Update With Version 2.5 Release)
[#]: via: (https://itsfoss.com/openshot-2-5-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
OpenShot Video Editor Gets a Major Update With Version 2.5 Release
======
[OpenShot][1] is one of the [best open-source video editors][2] out there. With all the features it offered, it was already a good video editor on Linux.
Now, with a major update (**v2.5.0**), OpenShot has added a lot of new improvements and features. And, trust me, it's not just any regular release; it is a huge release packed with features that you have probably wanted for a very long time.
In this article, I will briefly mention the key changes involved in the latest release.
![][3]
### OpenShot 2.5.0 Key Features
Here are some of the major new features and improvements in OpenShot 2.5:
#### Hardware Acceleration Support
Hardware acceleration support is still an experimental addition; however, it is a useful feature to have.
Instead of relying on your CPU to do all the hard work, you can utilize your GPU to encode/decode video data when working with MP4/H.264 video files.
This should improve the performance of OpenShot in a meaningful way.
#### Support for Importing/Exporting Files From Final Cut Pro & Premiere
![][4]
[Final Cut Pro][5] and [Adobe Premiere][6] are two popular video editors for professional content creators. OpenShot 2.5 now allows you to work on projects created on these platforms. It can import (or export) files from Final Cut Pro & Premiere in EDL & XML formats.
#### Thumbnail Generation Improved
This isn't a big feature but a necessary improvement for most video editors. You don't want broken images in the thumbnails (in your timeline/library). So, with this update, OpenShot now generates thumbnails using a local HTTP server, can check multiple folder locations, and regenerates missing ones.
#### Blender 2.8+ Support
The new OpenShot release also supports the latest [Blender][7] (.blend) format, so it should come in handy if you're using Blender as well.
#### Easily Recover Previous Saves & Improved Auto-backup
![][8]
It was always a horror to lose your timeline work: you accidentally deleted it, and the auto-save then overwrote your saved project.
Now, the auto-backup feature has improved with an added ability to easily recover your previous saved version of the project.
Even though you can now recover your previous saves, only a limited number of saved versions are kept, so you still have to be careful.
#### Other Improvements
In addition to all the key highlights mentioned above, you will also notice a performance improvement when using the keyframe system.
Several other issues, like SVG compatibility, exporting & modifying keyframe data, and the resizable preview window, have been fixed in this major update. For privacy-conscious users, OpenShot no longer sends usage data unless you opt in to share it with them.
For more information, you can take a look at [OpenShot's official blog post][9] to get the release notes.
### Installing OpenShot 2.5 on Linux
You can simply download the .AppImage file from its [official download page][10] to [install the latest OpenShot version][11]. If you're new to AppImage, you should also check out [how to use AppImage][12] on Linux to easily launch OpenShot.
[Download Latest OpenShot Release][10]
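If you have not used an AppImage before, the gist is: download the file, mark it executable, and run it. A minimal sketch, assuming a hypothetical file name (use the actual name of the file you downloaded):

```
# Make the downloaded AppImage executable
# (the file name here is only an example)
chmod +x OpenShot-v2.5.0-x86_64.AppImage

# Launch OpenShot directly from the AppImage
./OpenShot-v2.5.0-x86_64.AppImage
```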
Some distributions like Arch Linux may also provide the latest OpenShot release with regular system updates.
#### PPA available for Ubuntu-based distributions
On Ubuntu-based distributions, if you don't want to use AppImage, you can [use the official PPA][13] from OpenShot:
```
sudo add-apt-repository ppa:openshot.developers/ppa
sudo apt update
sudo apt install openshot-qt
```
You may want to know how to remove a PPA in case you want to uninstall it later.
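As a rough sketch, uninstalling later would look something like this:

```
# Remove the OpenShot package installed from the PPA
sudo apt remove openshot-qt

# Remove the PPA from your software sources
sudo add-apt-repository --remove ppa:openshot.developers/ppa
```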
**Wrapping Up**
With all the latest changes/improvements considered, do you see [OpenShot][11] as your primary [video editor on Linux][14]? If not, what more do you expect to see in OpenShot? Feel free to share your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/openshot-2-5-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.openshot.org/
[2]: https://itsfoss.com/open-source-video-editors/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-2-5-0.png?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-xml-edl.png?ssl=1
[5]: https://www.apple.com/in/final-cut-pro/
[6]: https://www.adobe.com/in/products/premiere.html
[7]: https://www.blender.org/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-recovery.jpg?ssl=1
[9]: https://www.openshot.org/blog/2020/02/08/openshot-250-released-video-editing-hardware-acceleration/
[10]: https://www.openshot.org/download/
[11]: https://itsfoss.com/openshot-video-editor-release/
[12]: https://itsfoss.com/use-appimage-linux/
[13]: https://itsfoss.com/ppa-guide/
[14]: https://itsfoss.com/best-video-editing-software-linux/

View File

@ -0,0 +1,121 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (KDE Plasma 5.18 LTS Released With New Features)
[#]: via: (https://itsfoss.com/kde-plasma-5-18-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
KDE Plasma 5.18 LTS Released With New Features
======
The [KDE Plasma][1] desktop is undoubtedly one of the most impressive [Linux desktop environments][2] out there right now.
Now, with the latest release, the KDE Plasma desktop just got more awesome!
KDE Plasma 5.18 is an LTS (Long Term Support) release, i.e., it will be maintained by KDE contributors for the next two years, while regular versions are maintained for just four months.
![KDE Plasma 5.18 on KDE Neon][3]
So, if you want more stability on your KDE-powered Linux system, it would be a good idea to upgrade to KDE's Plasma 5.18 LTS release.
### KDE Plasma 5.18 LTS Features
Here are the main new features added in this release:
#### Emoji Selector
![Emoji Selector in KDE][4]
Normally, you would Google an emoji to copy it to your clipboard or simply use the good-old emoticons to express yourself.
Now, with the latest update, you get an emoji selector in the Plasma desktop. You can simply find it by searching for it in the application launcher or by pressing the Windows/Meta/Super key + **.** (period/dot).
The shortcut should come in handy when you need to use an emoji while sending an email or any other sort of message.
#### Global Edit Mode
![Global Edit Mode][5]
You probably used the old desktop toolbox at the top-right corner of the screen in the Plasma desktop, but the new release gets rid of that and instead provides a global edit mode when you right-click on the desktop and click on “**Customize Layout**”.
#### Night Color Control
![Night Color Control][6]
Now, you can easily toggle the night color mode right from the system tray. In addition to that, you can even choose to set a keyboard shortcut for both night color and the do not disturb mode.
#### Privacy Improvements For User Feedback
![Improved Privacy][7]
It is worth noting that KDE Plasma lets you control the user feedback information that you share with them.
You can either choose to disable sharing any information at all or control the level of information you share (basic, intermediate, and detailed).
#### Global Themes
![Themes][8]
You can either choose from the default global themes available or download community-crafted themes to set up on your system.
#### UI Improvements
There are several subtle improvements and changes. For instance, the look and feel of the notifications has improved.
You can also notice a couple of differences in the software center (Discover) to help you easily install apps.
Not just limited to that, but you also get the ability to mute the volume of a window from the taskbar (just like you normally do in your browser's tabs). Similarly, there are a couple of changes here and there to improve the KDE Plasma experience.
#### Other Changes
In addition to the visual changes and customization ability, the performance of KDE Plasma has improved when coupled with graphics hardware.
To know more about the changes, you can refer to the [official announcement post][9] for KDE Plasma 5.18 LTS.
[Subscribe to our YouTube channel for more Linux videos][10]
### How To Get KDE Plasma 5.18 LTS?
If you are using a rolling release distribution like Arch Linux, you might have already received it with your system updates. If you haven't performed an update yet, simply check for updates from the system settings.
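On Arch and its derivatives, for example, a full upgrade from the terminal should pull in the new Plasma packages once they reach the repositories; a minimal sketch:

```
# Refresh the package databases and upgrade all installed packages,
# which includes the Plasma 5.18 packages once they land in the repos
sudo pacman -Syu
```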
If you are using Kubuntu, you can add the Kubuntu backports PPA to update the Plasma desktop with the following commands:
```
sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo apt update && sudo apt full-upgrade
```
If you do not have KDE as your desktop environment, you can refer to our article on [how to install KDE on Ubuntu][11] to get started.
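For reference, a minimal way to try Plasma on an existing Ubuntu system is to install the Plasma desktop metapackage from the Ubuntu archives (the linked article walks through the details and alternatives):

```
# Install the core KDE Plasma desktop session on Ubuntu
sudo apt update
sudo apt install kde-plasma-desktop
```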
**Wrapping Up**
KDE Plasma 5.18 may not involve a whole lot of changes, but being an LTS release, the key new features seem helpful and should come in handy for improving the Plasma desktop experience for everyone.
What do you think about the latest Plasma desktop release? Feel free to let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/kde-plasma-5-18-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://kde.org/plasma-desktop/
[2]: https://itsfoss.com/best-linux-desktop-environments/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-5-18-info.jpg?ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-emoji-pick.jpg?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-global-editor.jpg?ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-night-color.jpg?ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/user-feedback-kde-plasma.png?ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-global-themes.jpg?ssl=1
[9]: https://kde.org/announcements/plasma-5.18.0.php
[10]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[11]: https://itsfoss.com/install-kde-on-ubuntu/

View File

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Stuck in a loop: 4 signs anxiety may be affecting your work)
[#]: via: (https://opensource.com/open-organization/20/2/working-anxiety-inaction-loop)
[#]: author: (Sam Knuth https://opensource.com/users/samfw)
Stuck in a loop: 4 signs anxiety may be affecting your work
======
Gathering feedback is an everyday practice in open organizations. Are
you collecting data to improve your products—or to alleviate your own
apprehensions?
![arrows cycle symbol for failing faster][1]
_Editor's note: This article is part of a series on working with mental health conditions. It details the author's personal experiences and is not meant to convey professional medical advice or guidance._
A few months ago, I was chatting with one of our VPs about my new role and some of the work I was hoping to do with my team. I'd decided that one of my first actions in the new position would be to interview members of the senior leadership team to get their input on strategy. I'd be leading an entirely new function for my department, and I felt it would be good to get input from a wide array of stakeholders before jumping to action. This is part of my standard practice working in an open organization.
I had several goals for these one-on-one conversations. I wanted to be transparent in my work. I wanted to validate some of my hypotheses. I wanted to confirm that what I wanted to do would be valuable to other leaders. And I wanted to get some assurance that I was on the right track.
Or so I thought.
"Hmmm," said the VP after I had shared my initial ideas. He hesitated. "It's very broad." More hesitation. I'm not sure what I had expected him to say, but this was definitely not the kind of assurance I was hoping for. "You need to be careful about tilting at windmills." I didn't know what "[tilting at windmills][2]" meant, but it sounded like a good thing to avoid doing.
After having several more of these conversations over the course of a few weeks—many of them lively and fruitful—I came to one clear conclusion: Although I was getting lots of great input, I wasn't going to find any kind of consensus about priorities among the leadership team.
So why was I asking?
Eventually I realized what was _really_ underlying my desire to seek input: not just a desire to learn from the people I was interviewing, but also a nagging question in my gut. "Am I doing the right thing?"
One manifestation of anxiety is a worry that we're doing something wrong, which is also related to [imposter syndrome][3] (worry that we're going to be "found out" as unqualified for or incapable of the work or the role we've been given).
I've [previously described][4] a positive "anxiety performance loop" that can drive high performance. I can occasionally fall into another kind of anxiety loop, an "inaction loop," which can _lower_ performance. Figure 1 (below) illustrates it.
![][5]
One challenge of this manifestation of anxiety is that it creeps up on me; I don't consciously realize that I'm stuck in it until something happens that makes it apparent.
In this case, that "something" was my coach.
During a session when my coach was asking me questions about my work, I came to the realization that I was overly worried about whether I was on the right track. My desire to get input from a large variety of stakeholders (a legitimate thing to do) was resulting in so much input that it was preventing me from moving forward.
If I hadn't been fortunate enough to be working with a coach, I may never have had that realization—or I may have had it through a much harder experience. At some point, anxiety about whether you are doing the right thing could lead to failure, not because you did the wrong thing but because you didn't do anything at all.
I've found a few signs to help me realize if I'm in an anxiety inaction loop. I may be in one of these loops if:
* I talk about what I'm _planning_ to do rather than what I _am_ doing
* I feel that I need just _one more person's_ opinion, or I just need to check in with my boss _one more time_, before moving ahead
* I am revising the same presentation repeatedly but never actually giving the presentation to anyone
* I am avoiding or delaying something (such as giving a talk, or making a decision)
Having tools for self-reflection is critical. The reality is that most of the time I'm not working with a coach, and I need to be able to recognize these symptoms on my own. That will only happen if I set aside time to reflect on how things are going and to ask myself hard questions about whether I am stuck in any of these stalling patterns. I've started to build this time into my calendar, usually on Friday afternoons, or early in the morning before my meetings start.
Recognizing the anxiety loop is the first step. To get out of it, I've developed a few techniques like:
* Setting achievement milestones in 90 day increments, reviewing them on a weekly basis, and using this as an opportunity to reflect on progress.
* Reminding myself that (in fact) I might _not_ be doing the right thing. If that's the case, I'll get feedback, correct, and keep going (it won't be the end of the world).
* Reminding myself that I am in this job for a reason; people want me to do the job. They don't want me to ask them what to do or wait for them to tell me what to do.
* When seeking input from others, saying "This is what I am planning on doing" rather than "What do you think of this?" then either hearing objections if they arise or moving ahead if not.
The fact that my anxiety can manifest as dual worries—that I am not doing the right thing _and_ that I am not doing enough—can be paradoxical. Over-correcting to get out of an anxiety inaction loop could put me right into [an anxiety performance loop][4]. Neither situation feels like a healthy one.
As with most things, the answer is balance and moderation. Finding that balance is precisely the challenge anxiety creates. In some cases I may be worried I'm not doing enough; in others I may be worried that what I'm doing isn't right, which leads me to slow down. The best approach I have found so far is awareness—taking the time to reflect and trying to correct if I'm going too far in either direction.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/2/working-anxiety-inaction-loop
作者:[Sam Knuth][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/samfw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: https://en.wikipedia.org/wiki/Don_Quixote#Tilting_at_windmills
[3]: https://en.m.wikipedia.org/wiki/Impostor_syndrome
[4]: https://opensource.com/open-organization/20/1/leading-openly-anxiety
[5]: https://opensource.com/sites/default/files/images/open-org/loop_2.png

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Drupal 8 aims to be future-proof)
[#]: via: (https://opensource.com/article/20/2/drupal-8-promises)
[#]: author: (Shefali Shetty https://opensource.com/users/shefalishetty)
How Drupal 8 aims to be future-proof
======
What you need to know about Drupal 8 updates.
![Drupal logo with gray and blue][1]
Thomas Edison famously said, "The three great essentials to achieve anything worthwhile are, first, hard work; second, stick-to-itiveness; third, common sense." This quote made me wonder if "sticking-to-it" is contradictory to innovation; does it make you resistant to change? But, the more I pondered on it, I realized that innovation is fueled by perseverance.
Before Drupal 8 was introduced, the Core committee had not just promised to innovate; they decided to be persistent. Persistent in continuous reinvention. Persistent in making Drupal easier to adopt—not only by the market but also by developers with various levels of expertise. However, to be able to make Drupal successful and relevant in the long run, a drastic change was needed—a change that would build a better future. For this, Drupal 8 had to dismantle the Drupal 7 architecture and lay a fresh foundation for a promising future. Moving on to Drupal 9 (coming soon) and subsequent versions will now be easy and straightforward.
### Freedom to innovate with open source
Innovation brings freedom, and freedom creates innovation. Open source gives you the freedom to access, learn, contribute, and, most importantly, the freedom to innovate. The ability to learn, catch up, and reinvent is extremely crucial today. Drupal began as a small internal news website and later went on to become an open source content management system (CMS) because there was a potential to make it much more compelling by attracting more contributions. It gave developers the freedom to collaborate, re-use components, and improvise on it to create something more modern, powerful, and relevant.
### Promises delivered: Drupal 8 version history
The web is always changing. To stay relevant, [Drupal][2] had to introduce changes that were revolutionary but, at the same time, not so hard to accept. Drupal 7, as a content management system, was widely welcomed. But it fell short in certain aspects, like ease of adoption for developers, easy upgrade paths, better API support, and more. Drupal 8 changed everything. The team did not choose to build upon Drupal 7, which would have been the easier choice for an open source CMS. For a more future-proof CMS that is ready to accept changes, Drupal 8 had to be rebuilt with more modern components like Symfony, Twig, and PHP 7, and with initiatives like the API-first initiative, the mobile-first initiative, the Configuration Management initiative, and so on.
Drupal 8 was released with a promise of providing more ambitious digital experiences with better UX improvements and mobile compatibilities. The goal was to continuously innovate and reinvent itself. For this to work, these practices needed to be put in place: semantic versioning (major.minor.patch), scheduled releases (two minor releases per year), and introducing experimental modules in Core. All of this while providing backward compatibility and removing deprecated code.
Let's look at some of the promises that have been delivered with each minor version of Drupal 8.
* **Drupal 8.0**
* Modern and sophisticated PHP practices, object-oriented programming, and libraries.
* Storage and management of configuration was a bit of a messy affair with Drupal 7. The Configuration Management Initiative was introduced with Drupal 8.0, which allowed for cleaner installations and config management. Configurations are now stored in easily readable YAML format files. These config files can also be readily imported. This allows for smooth and easy transitions to different deployment environments.
* Adding Symfony components drastically improved Drupal 8's flexibility, performance, and robustness. Symfony is an open source PHP framework, and it abides by the MVC (Model-View-Controller) architecture.
* Twig, a powerful template engine for PHP, replaced PHPTemplate, Drupal's template engine since 2005. With Twig, the code is now more readable, and the theme system is less complex, uses inheritance to avoid redundant code, and offers more security by sanitizing variables and functions.
* The Entity API, which was quite limited and a contributed module in Drupal 7, is now full-fledged and is in Drupal 8 Core. Since Drupal 8 treats everything as an "entity," the Entity API provides a standardized method of working with them.
* The CKEditor, which is a WYSIWYG (What You See Is What You Get) editor, was introduced. It allows for editing on the go, in-context editing, and previewing your changes before they get published.
* **Drupal 8.1**
* The alpha version of the BigPipe module got introduced to Core as an experimental module. BigPipe renders Drupal 8 pages faster using methods like caching and auto-placeholder-ing.
* A Migrate UI module suite got introduced to Core as an experimental module. It makes migrating from Drupal 7 to Drupal 8 easier.
* The CKEditor now includes spell-check functionality and the ability to add optional languages in text.
* Improved testing infrastructure and support, especially for JavaScript interactions.
* Composer is an essential tool to manage third-party dependencies of websites and modules. With Drupal 8.1, Drupal Core and all its dependencies are now managed and packaged by Composer.
* **Drupal 8.2**
* The Place Block module is now an experimental module in Core. With this module, you can easily play around with blocks right from the web UI. Configuring and editing blocks can be done effortlessly.
* A new Content Moderation module that is based on the contributed module Workbench Moderation has been introduced as an experimental module in Core. It allows for granular workflow permissions and support.
* Content authoring experiences have been enhanced with better revision history and recovery.
* Improved page caching for 404 responses.
* **Drupal 8.3**
* The BigPipe module is now stable!
* More improvements in the CKEditor. A smooth copy-paste experience from Word, drag and drop images, and an Autogrow plugin that lets you work with bigger screen sizes and more.
* Better admin status reporting for improved administrator experience.
* The Field Layout module was added as an experimental module in Core. This module replaces the Display Suite in Drupal 7 and allows for arranging and assigning layouts to different content types.
* **Drupal 8.4**
* The 8.4 version calls for many stable releases of previously experimental modules.
* Inline Form Errors module, which was introduced in Drupal 8.0, is now stable. With this module, form errors are placed next to the form element in question, and a summary of the errors is provided on the top of the form.
* Another stable release—the DateTime Range module that allows date formats to match that of the Calendar module.
* The Layout Discovery API, which was added as an experimental module in Drupal 8.3, is now stable and ready to roll. With this module, the Layout API is added to Drupal 8 Core. It has adopted the previously popular contributed modules—Panels and Panelizer—that were used extensively to create amazing layouts. Drupal 8's Layout initiative has ensured that you have a powerful layout-building tool right out of the box.
* The very popular Media module is added as an API for developers to be able to port a wide range of contributed media modules from Drupal 7, such as Media entity, media entity document, media entity browser, media entity image, and more. However, this module remains hidden from site builders until the porting and fixing of issues are complete.
* **Drupal 8.5**
* One of the top goals that Drupal 8 set out to reach was making rich images, media integration, and asset management easier and better for content authors. It has successfully achieved this goal by adding the Media module to Core (and it isn't hidden anymore).
* Content Moderation module is now stable. Defining various levels and statuses of workflow and moving them around is effortless.
* The Layout Builder module is introduced as an experimental module. It gives site builders full control and flexibility to customize and build layouts from other layout components, blocks, and regions. This has been one of the top goals for Drupal 8 site builders.
* The Migrate UI module suite that was experimental in Drupal 8.1 is now considered stable.
* The BigPipe module, which became stable in version 8.3, now comes by default in the standard installation profile. All Drupal 8 sites are now faster by default.
* PHP 7.2 is here, and Drupal 8.5 now runs on it and fully supports the new features and performance improvements that it offers.
* **Drupal 8.6**
* The very helpful oEmbed format is now supported in the Drupal 8.6 Media module. The oEmbed API helps in displaying embedded content when a URL for that resource is posted. Also included within the Media module is support for embedding YouTube and Vimeo videos.
* An experimental Media Library module is now in Core. Adding and browsing multiple media is now supported and can also be customized.
* A new demo site called Umami has been introduced that demonstrates Drupal 8's Core features. This installation profile can give a new site builder a peek into Drupal's capabilities and allows them to play around with views, fields, and pages for learning purposes. It also acts as an excellent tool for Drupal agencies to showcase Drupal 8 to their customers.
* Workspaces module is introduced as an experimental module. When you have multiple content packages that need to be reviewed (status change) and deployed, this module lets you do all of it together and saves you a lot of time.
* Installing Drupal has gotten easier with this version, which offers two new ways of doing it. One is a “quick start” command that only requires you to have PHP installed; with the other, the installer automatically detects whether there has been a previous installation and lets you install from there. (See the command sketch after this list.)
* **Drupal 8.7**
* One of the most significant additions to Drupal Core that went straight there as a stable module is the JSON:API module. It takes forward Drupal's API-first initiative and provides an easy way to build decoupled applications.
* The Layout Builder module is now stable and better than ever before. It now even lets you work with unstructured data as well as fieldable entities.
* Media Library module gets a fresh new look with this version release. Marketers and Content editors now have it much easier with the ability to search, attach, drag, and drop media files whenever and wherever they need it.
* Fully supports PHP 7.3.
* Taxonomy and Menu items are revision-able, which means that they can be used in editorial workflows and can be assigned statuses.
* **Drupal 8.8**
* This version is going to be the last minor version of Drupal 8 where you will find new features or deprecations. The next version, Drupal 8.9, will not include any new additions but will be very similar to Drupal 9.0.
* The Media Library module is now stable and ready to use.
* Workspaces module is now enhanced to include adding hierarchical workspaces. This gives more flexibility in the hands of the content editor. It also works with the Content Moderation module now.
* Composer now receives native support and no longer needs external projects to package Drupal with its dependencies. You can create new projects with just a one-line Composer command (see the sketch after this list).
* Keeping its promises on making Drupal easier to learn for newbies, a new experimental module for Help Topics has been introduced. Each module, theme, and installation profile can have task-based help topics.
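To make these installation and API improvements concrete, here is a minimal sketch of the commands involved, assuming the Drupal versions noted in the comments; the project directory, demo profile choice, and site URL are placeholders:

```
# One-line project creation via Composer (natively supported since 8.8);
# "my_site" is a placeholder directory name
composer create-project drupal/recommended-project my_site

# Spin up a throwaway demo site with only PHP installed ("quick start",
# since 8.6); run from the root of a Drupal codebase
php core/scripts/drupal quick-start demo_umami

# Fetch article content through the core JSON:API module (stable in 8.7);
# example.com is a placeholder site
curl -H 'Accept: application/vnd.api+json' https://example.com/jsonapi/node/article
```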
### Opening doors to a wider set of developers
Although Drupal was largely accepted and loved for its flexibility, resilience, and, most of all, its content management abilities, there was a nagging problem—the "deep learning curve" issue. While many Drupalers argue that the deep learning curve is part and parcel of a CMS that can build highly complex and powerful applications, finding Drupal talent is a challenge. Dries, the founder of Drupal, says, "For most people new to Drupal, Drupal 7 is really complex." He also adds that this could be because of holding on to procedural programming, large use of structured arrays, and more such "Drupalisms" (as he calls them).
This issue needed to be tackled. With Drupal 8 adopting modern platforms and standards like object-oriented programming concepts, the latest PHP standards, the Symfony framework, and design patterns, the doors are now flung wide open to a broad range of talent (site builders, themers, developers).
### Final thoughts
"The whole of science is nothing more than a refinement of everyday thinking." Albert Einstein.
Open source today is more than just free software. It is a body of collaborative knowledge and effort that is revolutionizing the digital ecosystem. The digital world is moving at a scarily rapid pace, and I believe only the innovation and perseverance of open source communities can keep up with it. The Drupal community unwaveringly reinvents and refines itself each day, which is especially evident in the latest release of Drupal 8.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/drupal-8-promises
作者:[Shefali Shetty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/shefalishetty
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/drupal_blue_gray_lead.jpeg?itok=eSkFp_ur (Drupal logo with gray and blue)
[2]: https://www.specbee.com/drupal-web-development-services

View File

@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source vs. proprietary: What's the difference?)
[#]: via: (https://opensource.com/article/20/2/open-source-vs-proprietary)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Open source vs. proprietary: What's the difference?
======
Need four good reasons to tell your friends to use open source? Here's
how to make your case.
![Doodles of the word open][1]
There's a lot to be learned from open source projects. After all, managing hundreds of disparate, asynchronous commits and bugs doesn't happen by accident. Someone or something has to coordinate releases, and keep all the code and project roadmaps organized. It's a lot like life. You have lots of tasks demanding your attention, and you have to tend to each in turn. To ensure everything gets done before its deadline, you try to stay organized and focused.
Fortunately, there are [applications out there][2] designed to help with that sort of thing, and many apply just as well to real life as they do to software.
Here are some reasons for choosing [open tools][3] when improving personal or project-based organization.
### Data ownership
It's rarely profitable for proprietary tools to provide you with [data][4] dumps. Some products, usually after a long battle with their users (and sometimes a lawsuit), provide ways to extract your data from them. But the real issue isn't whether a company lets you extract data; it's the fact that the capability to get to your data isn't guaranteed in the first place. It's your data, and when it's literally what you do each day, it is, in a way, your life. Nobody should have primary access to that but you, so why should you have to petition a company for a copy?
Using an open source tool ensures you have priority access to your own activities. When you need a copy of something, you already have it. When you need to export it from one application to another, you have complete control of how the data is exchanged. If you need to export your schedule from a calendar into your kanban board, you can manipulate and process the data to fit. You don't have to wait for functionality to be added to the app, because you own the data, the database, and the app.
### Working for yourself
When you use open source tools, you often end up improving them, sometimes whether you know it or not. You may not (or you may!) download the source and hack on code, but you probably fall into a way of using the tool that works best for you. You optimize your interaction with the tool. The unique way you interact with your tooling creates a kind of meta-tool: you haven't changed the software, but you've adapted it and yourself in ways that the project author and a dozen other users never imagined. Everyone does this with whatever software they rely upon, and it's why sitting at someone else's computer to use a familiar software (or even just looking over someone's shoulder) often feels foreign, like you're using a different version of the application than you're used to.
When you do this with proprietary software, you're either contributing to someone else's marketplace for free, or you're adjusting your own behavior based on forces outside your own control. When you optimize an open source tool, both the software and the interaction belong to you.
### The right to not upgrade
Tools change. It's the way of things.
Change can be frustrating, but it can be crippling when a service changes so severely that it breaks your workflow. A proprietary service has and maintains every right to change its product, and you explicitly accept this by using the product. If your favorite accounting software or scheduling web app changes its interface or its output options, you usually have no recourse but to adapt or stop using the service. Proprietary services reserve the right to remove features, arbitrarily and without warning, and it's not uncommon for companies to start out with an open API and strong compatibility with open source, only to drop these conveniences once their customer base has reached critical mass.
Open source changes, too. Changes in open source can be frustrating, too, and it can even drive users to alternative open source solutions. The difference is that when open source changes, you still own the unchanged code base. More importantly, lots of other people do too, and if there's enough desire for it, the project can be forked. There are several famous examples of this, but admittedly there are just as many examples where the demand was _not_ great enough, and users essentially had to adapt.
Even so, users are never truly forced to do anything in open source. If you want to hack together an old version of your mission-critical service on an old distro running ancient libraries in a virtual machine, you can do that because you own the code. When a proprietary service changes, you have no choice but to follow.
With open source, you can choose to forge your own path when necessary or follow the developers when convenient.
### Open for collaboration
Proprietary services can affect others in ways you may not realize. Closed source tools are accidentally insidious. If you use a proprietary product to manage your schedule or your recipes or your library, or you use a proprietary font in your graphic design or website, then the moment you need to coordinate with someone else, you are essentially forcing them to sign up for the same proprietary service because proprietary services usually require accounts. Of course, the same is sometimes true for an open source solution, but it's not common for open source products to collect and sell user data the way proprietary vendors do, so the stakes aren't quite the same.
### Independence
Ultimately, the open source advantage is one of independence for you and for those you want to collaborate with. Not everyone uses open source, and even if everyone did, not everyone would use the exact same tool or the same assets, so there will always be some negotiation when sharing data. However, by keeping your data and projects open, you enable everyone (your future self included) to contribute.
What steps do you take to ensure your work is open and accessible? Tell us in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/open-source-vs-proprietary
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_doodles.png?itok=W_0DOMM4 (Doodles of the word open)
[2]: https://opensource.com/article/20/1/open-source-productivity-tools
[3]: https://opensource.com/tags/tools
[4]: https://opensource.com/tags/analytics-and-metrics

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 firewall features IT pros should know about but probably dont)
[#]: via: (https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
5 firewall features IT pros should know about but probably don't
======
As a foundational network defense, firewalls continue to be enhanced with new features, so many that important ones that shouldn't be overlooked sometimes are.
[Firewalls][1] continuously evolve to remain a staple of network security by incorporating the functionality of standalone devices, embracing network-architecture changes, and integrating outside data sources to add intelligence to the decisions they make: a daunting wealth of possibilities that is difficult to keep track of.
Because of this richness of features, next-generation firewalls are difficult to master fully, and important capabilities sometimes can be, and in practice are, overlooked.
Here is a shortlist of new features IT pros should be aware of.
**[ Also see [What to consider when deploying a next generation firewall][2]. | Get regularly scheduled insights by [signing up for Network World newsletters][3]. ]**
### Also in this series:
* [Cybersecurity in 2020: From secure code to defense in depth][4] (CSO)
* [More targeted, sophisticated and costly: Why ransomware might be your biggest threat][5] (CSO)
* [How to bring security into agile development and CI/CD][6] (Infoworld)
* [UEM to marry security finally after long courtship][7] (Computerworld)
* [Security vs. innovation: IT's trickiest balancing act][8] (CIO)
## Network segmentation
Dividing a single physical network into multiple logical networks is known as [network segmentation][9], in which each segment behaves as if it runs on its own physical network. Traffic from one segment can't be seen by or passed to another segment.
This significantly reduces attack surfaces in the event of a breach. For example, a hospital could put all its medical devices into one segment and its patient records into another. Then, if hackers breach a heart pump that was not secured properly, that would not enable them to access private patient information.
It's important to note that many connected things that make up the [internet of things][10] run older operating systems, are inherently insecure, and can act as a point of entry for attackers, so the growth of IoT and its distributed nature drive up the need for network segmentation.
## Policy optimization
Firewall policies and rules are the engine that make firewalls go. Most security professionals are terrified of removing older policies because they don't know when they were put in place or why. As a result, rules keep getting added with no thought of reducing the overall number. Some enterprises say they have millions of firewall rules in place. The fact is, too many rules add complexity, can conflict with each other, and are time-consuming to manage and troubleshoot.
Policy optimization migrates legacy security policy rules to application-based rules that permit or deny traffic based on what application is being used. This improves overall security by reducing the attack surface and also provides visibility to safely enable application access. Policy optimization identifies port-based rules so they can be converted to application-based whitelist rules, or adds applications from a port-based rule to an existing application-based rule without compromising application availability. It also identifies over-provisioned application-based rules. Policy optimization helps prioritize which port-based rules to migrate first, identify application-based rules that allow applications that aren't being used, and analyze rule-usage characteristics such as hit count, which compares how often a particular rule is applied vs. how often all the rules are applied.
Converting port-based rules to application-based rules improves security posture because the organization can select the applications they want to whitelist and deny all other applications. That way unwanted and potentially malicious traffic is eliminated from the network.
## Credential-theft prevention
Historically, workers accessed corporate applications from company offices. Today they access legacy apps, SaaS apps and other cloud services from the office, home, airport and anywhere else they may be. This makes it much easier for threat actors to steal credentials. The Verizon [Data Breach Investigations Report][12] found that 81% of hacking-related breaches leveraged stolen and/or weak passwords.
Credential-theft prevention blocks employees from using corporate credentials on sites such as Facebook and Twitter.  Even though they may be sanctioned applications, using corporate credentials to access them puts the business at risk.
Credential-theft prevention works by scanning username and password submissions to websites and comparing those submissions to lists of official corporate credentials. Businesses can choose which websites to allow submitting corporate credentials to, or block them based on the URL category of the website.
When the firewall detects a user attempting to submit credentials to a site in a category that is restricted, it can display a block-response page that prevents the user from submitting credentials. Alternatively, it can present a continue page that warns users against submitting credentials to sites classified in certain URL categories, but still allows them to continue with the credential submission. Security professionals can customize these block pages to educate users against reusing corporate credentials, even on legitimate, non-phishing sites.
## DNS security
A combination of machine learning, analytics and automation can block attacks that leverage the [Domain Name System (DNS)][13]. In many enterprises, DNS servers are unsecured and completely wide open to attacks that redirect users to bad sites where they are phished and where data is stolen. Threat actors have a high degree of success with DNS-based attacks because security teams have very little visibility into how attackers use the service to maintain control of infected devices. There are some standalone DNS security services that are moderately effective but lack the volume of data to recognize all attacks.
When DNS security is integrated into firewalls, machine learning can analyze the massive amount of network data, making standalone analysis tools unnecessary. DNS security integrated into a firewall can predict and block malicious domains through automation and the real-time analysis that finds them. As the number of bad domains grows, machine learning can find them quickly and ensure they don't become problems.
Integrated DNS security can also use machine-learning analytics to neutralize DNS tunneling, which smuggles data through firewalls by hiding it within DNS requests. DNS security can also find malware command-and-control servers.  It builds on top of signature-based systems to identify advanced tunneling methods and automates the shutdown of DNS-tunneling attacks.
## Dynamic user groups
It's possible to create policies that automate the remediation of anomalous worker activity. The basic premise is that users' roles within a group mean their network behaviors should be similar to each other. For example, if a worker is phished and strange apps were installed, this would stand out and could indicate a breach.
Historically, quarantining a group of users was highly time-consuming because each member of the group had to be addressed and policies enforced individually. With dynamic user groups, when the firewall sees an anomaly, it creates policies that counter the anomaly and pushes them out to the user group. The entire group is automatically updated without having to manually create and commit policies. So, for example, all the people in accounting would receive the same policy update automatically, at once, instead of manually, one at a time. Integration with the firewall also enables it to distribute the policies for the user group to all the other infrastructure that requires them, including other firewalls, log collectors, or applications.
Firewalls have been and will continue to be the anchor of cyber security. They are the first line of defense and can thwart many attacks before they penetrate the enterprise network.  Maximizing the value of firewalls means turning on many of the advanced features, some of which have been in firewalls for years but not turned on for a variety of reasons.
Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.csoonline.com/article/3519913/cybersecurity-in-2020-from-secure-code-to-defense-in-depth.html
[5]: https://www.csoonline.com/article/3518864/more-targeted-sophisticated-and-costly-why-ransomware-might-be-your-biggest-threat.html
[6]: https://www.infoworld.com/article/3520969/how-to-bring-security-into-agile-development-and-cicd.html
[7]: https://www.computerworld.com/article/3516136/uem-to-marry-security-finally-after-long-courtship.html
[8]: https://www.cio.com/article/3521009/security-vs-innovation-its-trickiest-balancing-act.html
[9]: https://www.networkworld.com/article/3016565/how-network-segmentation-provides-a-path-to-iot-security.html
[10]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[12]: https://enterprise.verizon.com/resources/reports/dbir/
[13]: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Future smart walls key to IoT)
[#]: via: (https://www.networkworld.com/article/3519440/future-smart-walls-key-to-iot.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Future smart walls key to IoT
======
MIT researchers are developing a wallpaper-like material that's made up of simple RF switch elements and can be applied to building surfaces. Using beamforming, the antenna array could potentially improve wireless signal strength nearly tenfold.
IoT equipment designers shooting for efficiency should explore the potential for using buildings as antennas, researchers say.
Environmental surfaces such as walls can be used to intercept and beam signals, which can increase reliability and data throughput for devices, according to MIT's Computer Science and Artificial Intelligence Laboratory ([CSAIL][1]).
Researchers at CSAIL have been working on a smart-surface repeating antenna array called RFocus. The antennas, which could be applied in sheets like wallpaper, are designed to be incorporated into office spaces and factories. Radios that broadcast signals could then become smaller and less power intensive.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
“Tests showed that RFocus could improve the average signal strength by a factor of almost 10,” CSAIL's Adam Conner-Simons [writes in MIT News][3]. “The platform is also very cost-effective, with each antenna costing only a few cents.”
The prototype system CSAIL developed uses more than 3,000 antennas embedded into sheets, which are then hung on walls. In future applications, the antennas could adhere directly to the wall or be integrated during building construction.
“People have had things completely backwards this whole time,” the article claims. “Rather than focusing on the transmitters and receivers, what if we could amplify the signal by adding antennas to an external surface in the environment itself?”
RFocus relies on [beamforming][4]; multiple antennas broadcast the same signal at slightly different times, and as a result, some of the signals cancel each other and some strengthen each other. When properly executed, beamforming can focus a stronger signal in a particular direction.
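To make that concrete, here is a minimal Python sketch (my illustration, not code from the CSAIL paper; the 2.4GHz carrier and sample counts are arbitrary assumptions) showing how two copies of the same signal reinforce or cancel depending on their relative phase:
```
import numpy as np

# Illustrative values only: a 2.4 GHz carrier sampled over one microsecond.
freq = 2.4e9
t = np.linspace(0, 1e-6, 100_000)
carrier = np.sin(2 * np.pi * freq * t)

# A second copy arriving in phase reinforces the first...
in_phase = carrier + np.sin(2 * np.pi * freq * t)

# ...while a copy delayed by half a wavelength (a pi phase shift) cancels it.
out_of_phase = carrier + np.sin(2 * np.pi * freq * t + np.pi)

print(f"in-phase peak amplitude:     {in_phase.max():.2f}")     # ~2.0
print(f"out-of-phase peak amplitude: {out_of_phase.max():.2e}") # ~0
```
A beamforming controller effectively solves this in reverse: it picks per-element delays so that the copies arrive in phase at the receiver's location and out of phase elsewhere.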
"The surface does not emit any power of its own," the developers explain in their paper ([PDF][6]). The antennas, or RF switch elements, as the group describes them, either let a signal pass through or reflect it through software. Signal measurements allow the apparatus to define exactly what gets through and how its directed.
Importantly, the RFocus surface functions with no additional power requirements. The “RFocus surface can be manufactured as an inexpensive thin wallpaper requiring no wiring,” the group says.
### Antenna design
Antenna engineering is turning into a vital part of IoT development. It's one of the principal reasons data throughput and reliability keeps improving in wireless networks.
Arrays in which the antenna is made up of multiple active panel components, rather than the single passive wire used in traditional radio, are one example of these advances.
[Spray-on antennas][7] (unrelated to the CSAIL work) are another in-the-works technology I've written about. In that case, flexible substrates create the antenna, which is applied in a manner similar to spray paint. Another future direction could be anti-laser antennas: [reversing a laser][8], so that it absorbs light rather than emits it, could allow all data-carrying energy to be absorbed, making it the perfect light-based antenna.
Development of 6G wireless, which is projected to supersede 5G sometime around 2030, includes efforts to figure out how to directly [couple antennas to fiber][9]—the radio ends up being part of the cable, in other words.
"We cant get faster internet speeds without more efficient ways of delivering wireless signals," CSAILs Conner-Simons says.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519440/future-smart-walls-key-to-iot.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.csail.mit.edu/
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: http://news.mit.edu/2020/smart-surface-smart-devices-mit-csail-0203
[4]: https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html
[6]: https://drive.google.com/file/d/1TLfH-r2w1zlGBbeeM6us2sg0yq6Lm2wF/view
[7]: https://www.networkworld.com/article/3309449/spray-on-antennas-will-revolutionize-the-internet-of-things.html
[8]: https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html
[9]: https://www.networkworld.com/article/3438337/how-6g-will-work-terahertz-to-fiber-conversion.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Who should lead the push for IoT security?)
[#]: via: (https://www.networkworld.com/article/3526490/who-should-lead-the-push-for-iot-security.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Who should lead the push for IoT security?
======
Industry groups and governmental agencies have been taking a stab at rules to improve the security of the internet of things, but so far there's nothing comprehensive.
Thinkstock
The ease with which internet of things devices can be compromised, coupled with the potentially extreme consequences of breaches, has prompted action from legislatures and regulators, but which group is best placed to decide?
Both the makers of [IoT][1] devices and governments are aware of the security issues, but so far they haven't come up with standardized ways to address them.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
“The challenge of this market is that it's moving so fast that no regulation is going to be able to keep pace with the devices that are being connected,” said Forrester vice president and research director Merritt Maxim. “Regulations that are definitive are easy to enforce and helpful, but they'll quickly become outdated.”
The latest such effort by a governmental body is a proposed regulation in the U.K. that would impose three major mandates on IoT device manufacturers that would address key security concerns:
* device passwords would have to be unique, and resetting them to factory defaults would be prohibited
* device makers would have to offer a public point of contact for the disclosure of vulnerabilities
* device makers would have to “explicitly state the minimum length of time for which the device will receive security updates”
This proposal is patterned after a California law that took effect last month. Both sets of rules would likely have a global impact on the manufacture of IoT devices, even though they're being imposed on limited jurisdictions. That's because it's expensive for device makers to create separate versions of their products.
IoT-specific regulations aren't the only ones that can have an impact on the marketplace. Depending on the type of information a given device handles, it could be subject to the growing list of data-privacy laws being implemented around the world, most notably Europe's General Data Protection Regulation, as well as industry-specific regulations in the U.S. and elsewhere.
The U.S. Food and Drug Administration, noted Maxim, has been particularly active in trying to address device-security flaws. For example, last year it issued [security warnings][3] about 11 vulnerabilities that could compromise medical IoT devices that had been discovered by IoT security vendor [Armis][4]. In other cases it issued fines against healthcare providers.
But there's a broader issue with devising definitive regulations for IoT devices in general, as opposed to prescriptive ones that simply urge manufacturers to adopt best practices, he said.
Particular companies might have integrated security frameworks covering their vertically integrated products (such as an [industrial IoT][6] company providing security across factory-floor sensors), but that kind of security is incomplete in the multi-vendor world of IoT.
Perhaps the closest thing to a general IoT-security standard is currently being worked on by Underwriters Laboratories (UL), the security-testing non-profit best known for its century-old certification program for electrical equipment. UL's [IoT Security Rating Program][7] offers a five-tier system for ranking the security of connected devices: bronze, silver, gold, platinum, and diamond.
Bronze certification means that the device has addressed the most glaring security flaws, similar to those outlined in the recent U.K. and California legislation. [The higher ratings][8] include capabilities like ongoing security maintenance, improved access control, and known-threat testing.
While government regulation and voluntary industry improvements can help keep future IoT systems safe, neither addresses two key issues in the IoT security puzzle: the millions of insecure devices that have already been deployed, and user apathy around making their systems as safe as possible, according to Maxim.
“Requiring non-default passwords is good, but that doesn't stop users from setting insecure ones,” he warned. “The challenge is, do customers care? Are they willing to pay extra for products with that certification?”
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3526490/who-should-lead-the-push-for-iot-security.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.fda.gov/medical-devices/safety-communications/urgent11-cybersecurity-vulnerabilities-widely-used-third-party-software-component-may-introduce
[4]: https://www.armis.com/
[6]: https://www.networkworld.com/article/3243928/what-is-the-industrial-internet-of-things-essentials-of-iiot.html
[7]: https://ims.ul.com/iot-security-rating-levels
[8]: https://www.cnx-software.com/2019/12/30/ul-iot-security-rating-system-ranks-iot-devices-security-from-bronze-to-diamond/
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why innovation can't happen without standardization)
[#]: via: (https://opensource.com/open-organization/20/2/standardization-versus-innovation)
[#]: author: (Len Dimaggio https://opensource.com/users/ldimaggi)
Why innovation can't happen without standardization
======
Balancing standardization and innovation is critical during times of
organizational change. And it's an ongoing issue in open organizations,
where change is constant.
![and old computer and a new computer, representing migration to new software or hardware][1]
Any organization facing the prospect of change will confront an underlying tension between competing needs for standardization and innovation. Achieving the correct balance between these needs can be essential to an organization's success.
Experiencing too much of either can lead to morale and productivity problems. Over-stressing standardization, for example, can have a stifling effect on the team's ability to innovate to solve new problems. Unfettered innovation, on the other hand, can lead to time lost due to duplicated or misdirected efforts.
Finding and maintaining the correct balance between standardization and innovation is critical during times of organizational change. In this article, I'll outline various considerations your organization might make when attempting to strike this critical balance.
### The need for standardization
When North American beavers hear running water, they instinctively start building a dam. When some people see a problem, they look to build or buy a new product or tool to solve that problem. Technological advances make modeling business process solutions or setting up production or customer-facing systems much easier than in the past. The ease with which organizational actors can introduce new systems can occasionally, however, lead to problems. Duplicate, conflicting, or incompatible systems—or systems that, while useful, do not address a team's highest priorities—can find their way into organizations, complicating processes.
This is where standardization can help. By agreeing on and implementing a common set of tools and processes, teams become more efficient, as they reduce the need for new development methods, customized training, and maintenance.
Standardization has several benefits:
* **Reliability, predictability, and safety.** Think about the electricity in your own home and the history of electrical systems. In the early days of electrification, companies competed to establish individual standards for basic elements like plug configurations and safety requirements like insulation. Thanks to standardization, when you buy a light bulb today you can be sure that it will fit and not start a fire.
* **Lower costs and more dependable, repeatable processes.** Standardization frees people in organizations to focus more attention on other things—products, for instance—and not on the need to coordinate the use of potentially conflicting new tools and processes. And it can make people's skills more portable (or, in budgeting terms, more "fungible") across projects, since all projects share a common set of standards. In addition to helping project teams be more flexible, this portability of skills makes it easier for people to adopt new assignments.
* **Consistent measurements.** Creating a set of consistent metrics people can use to assess product quality across multiple products or multiple releases of individual products is possible through standardization. Without it, applying this kind of consistent measurement to product quality and maintaining any kind of history of tracking such quality can be difficult. Standardization effectively provides the organization a common language for measuring quality.
A danger of standardization arises when it becomes an all-consuming end in itself. A constant push to standardize can inadvertently stifle creativity and innovation. Policies that overemphasize standardization discourage people from finding new solutions to new problems. Taken to an extreme, this can lead to a suffocating organizational atmosphere in which people are reluctant to propose new solutions for fear of breaking with the standard. In an open organization especially focused on generating new value and solutions, an attempt to impose standardization can have a negative impact on team morale.
Viewing new challenges through the lens of former solutions is natural. Likewise, it's common (and in fact generally practical) to apply legacy tools and processes to solving new problems.
But in open organizations, change is constant. We must always adapt to it.
### The need for innovation
Digital technology changes at a rapid rate, and that rate of change is always increasing. New opportunities result in new problems that require new solutions. Any organization must be able to adapt, and its people must have the freedom to innovate. This is even more important in an open organization and with open source software, as many of the factors (e.g., restrictive licenses) that blocked innovation in the past no longer apply.
When considering the prospect of innovation in your organization, keep in mind the following:
* **Standardization doesn't have to be the end of innovation.** Even tools and processes that are firmly established and in use by an organization were once very new and untried, and they only came about through processes of organizational innovation.
* **Progress through innovation also involves failure.** It's very often the case that some innovations fail, but when they fail, they point the way forward to solutions. This progress therefore requires that an organization protect the freedom to fail. (In competitive sports, athletes and teams seldom learn lessons from easy victories; they learn lessons about how to win, including how to innovate to win, from failures and defeats.)
Freedom to innovate, however, cannot be freedom to do whatever the heck we feel like doing. The challenge for any organization is to encourage and inspire innovation while keeping innovation efforts focused on your organization's goals and on the problems you're trying to solve.
In closed organizations, leaders may be inclined to impose rigid, top-down limits on innovation. A better approach is to instead provide a direction or path forward in terms of goals and deliverables, and then enable people to find their own ways along that path. That forward path is usually not a straight line; [innovation is almost never a linear process][2]. Like a sailboat making progress into the wind, it's sometimes [necessary to "tack" or go sideways][3] in order to make forward progress.
### Blending standardization with focused innovation
Are we doomed to always think of standardization as the broccoli we _must_ eat, while innovation is the ice cream we _want_ to eat?
It doesn't have to be this way.
Perceptions play a role in the conflict between standardization and innovation. People who only want to focus on standardization must remember that even the tools and processes that they want to promote as "the standard" were once new and represented change. Likewise, people who only want to focus on innovation have to remember that in order for a tool or process to provide value to an organization, it has to be stable enough for that organization to use it over time.
An important element of any successful organization, especially an open organization where everyone is free to express their views, is empathy for other people's views. A little empathy is necessary for understanding both perceptions of impending change.
I've always thought about standardization and innovation as being two halves of one solution. A good analogy is that of a college course catalog. In many colleges, all incoming first-year students, regardless of their major, take a core set of classes. These core classes cover a wide range of subjects and give every student a standard grounding in those disciplines. Beyond the standardized core curriculum, each student is free to take specialized courses depending on his or her degree requirements and selected electives, as they work to innovate in their respective fields.
Similarly, standardization provides a foundation on which innovation can build. Think of standardization as a core set of tools and practices you might apply to _all_ products. Innovation can take the form of tools and practices that go _above and beyond_ this standard. This enables every team to extend the core set of standardized tools and processes to meet the individual needs of their own specific projects. Standardization does not mean that all forward-looking actions stop. Over time, what was an innovation can become a standard, and thereby make room for the next innovation (and the next).
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/2/standardization-versus-innovation
作者:[Len Dimaggio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ldimaggi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
[2]: https://opensource.com/open-organization/19/6/innovation-delusion
[3]: https://opensource.com/open-organization/18/5/navigating-disruption-1

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: ( ) [#]: translator: (chenmu-kk )
[#]: reviewer: ( ) [#]: reviewer: ( )
[#]: publisher: ( ) [#]: publisher: ( )
[#]: url: ( ) [#]: url: ( )

View File

@ -1,113 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 open source tools for natural language processing)
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
12 open source tools for natural language processing
======
Take a look at a dozen options for your next NLP application.
![Chat bubbles][1]
Natural language processing (NLP), the technology that powers all the chatbots, voice assistants, predictive text, and other speech/text applications that permeate our lives, has evolved significantly in the last few years. There are a wide variety of open source NLP tools out there, so I decided to survey the landscape to help you plan your next voice- or text-based application.
For this review, I focused on tools that use languages I'm familiar with, even though I'm not familiar with all the tools. (I didn't find a great selection of tools in the languages I'm not familiar with anyway.) That said, I excluded tools in three languages I am familiar with, for various reasons.
The most obvious language I didn't include might be R, but most of the libraries I found hadn't been updated in over a year. That doesn't always mean they aren't being maintained well, but I think they should be getting updates more often to compete with other tools in the same space. I also chose languages and tools that are most likely to be used in production scenarios (rather than academia and research), and I have mostly used R as a research and discovery tool.
I was also surprised to see that the Scala libraries are fairly stagnant. It has been a couple of years since I last used Scala, when it was pretty popular. Most of the libraries haven't been updated since that time—or they've only had a few updates.
Finally, I excluded C++. This is mostly because it's been many years since I last wrote in C++, and the organizations I've worked in have not used C++ for NLP or any data science work.
### Python tools
#### Natural Language Toolkit (NLTK)
It would be easy to argue that [Natural Language Toolkit (NLTK)][2] is the most full-featured tool of the ones I surveyed. It implements pretty much any component of NLP you would need, like classification, tokenization, stemming, tagging, parsing, and semantic reasoning. And there's often more than one implementation for each, so you can choose the exact algorithm or methodology you'd like to use. It also supports many languages. However, it represents all data in the form of strings, which is fine for simple constructs but makes it hard to use some advanced functionality. The documentation is also quite dense, but there is a lot of it, as well as [a great book][3]. The library is also a bit slow compared to other tools. Overall, this is a great toolkit for experimentation, exploration, and applications that need a particular combination of algorithms.
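As a quick illustration of that string-based interface, here is a minimal sketch (my example, not taken from NLTK's documentation; it assumes the `punkt` and `averaged_perceptron_tagger` models are available) that tokenizes and tags a sentence:
```
import nltk

# One-time model downloads; safe to re-run.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "Natural language processing has evolved significantly."
tokens = nltk.word_tokenize(text)  # a plain list of strings
tagged = nltk.pos_tag(tokens)      # a list of (word, part-of-speech) tuples

print(tagged)  # e.g., [('Natural', 'JJ'), ('language', 'NN'), ...]
```
Note that everything comes back as bare strings and tuples, which is exactly the simplicity (and the limitation) described above.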
#### SpaCy
[SpaCy][4] is probably the main competitor to NLTK. It is faster in most cases, but it only has a single implementation for each NLP component. Also, it represents everything as an object rather than a string, which simplifies the interface for building applications. This also helps it integrate with many other frameworks and data science tools, so you can do more once you have a better understanding of your text data. However, SpaCy doesn't support as many languages as NLTK. It does have a simple interface with a simplified set of choices and great documentation, as well as multiple neural models for various components of language processing and analysis. Overall, this is a great tool for new applications that need to be performant in production and don't require a specific algorithm.
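For contrast, a comparable sketch with SpaCy (assuming the small English model has been installed with `python -m spacy download en_core_web_sm`) shows its object-based interface:
```
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in Berlin.")

# Tokens are objects carrying their own annotations, not bare strings.
for token in doc:
    print(token.text, token.pos_, token.lemma_)

# Named entities come out of the same single pipeline run.
for ent in doc.ents:
    print(ent.text, ent.label_)
```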
#### TextBlob
[TextBlob][5] is kind of an extension of NLTK. You can access many of NLTK's functions in a simplified manner through TextBlob, and TextBlob also includes functionality from the Pattern library. If you're just starting out, this might be a good tool to use while learning, and it can be used in production for applications that don't need to be overly performant. Overall, TextBlob is used all over the place and is great for smaller projects.
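Here is a minimal sketch of TextBlob's simplified API (assuming its corpora have been fetched once with `python -m textblob.download_corpora`; the sentence is illustrative):
```
from textblob import TextBlob

blob = TextBlob("TextBlob makes simple NLP tasks pleasantly easy.")

print(blob.words)      # tokenization, via NLTK under the hood
print(blob.tags)       # part-of-speech tags
print(blob.sentiment)  # Sentiment(polarity=..., subjectivity=...)
```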
#### Textacy
This tool may have the best name of any library I've ever used. Say "[Textacy][6]" a few times while emphasizing the "ex" and drawing out the "cy." Not only is it great to say, but it's also a great tool. It uses SpaCy for its core NLP functionality, but it handles a lot of the work before and after the processing. If you were planning to use SpaCy, you might as well use Textacy so you can easily bring in many types of data without having to write extra helper code.
#### PyTorch-NLP
[PyTorch-NLP][7] has been out for just a little over a year, but it has already gained a tremendous community. It is a great tool for rapid prototyping. It's also updated often with the latest research, and top companies and researchers have released many other tools to do all sorts of amazing processing, like image transformations. Overall, PyTorch is targeted at researchers, but it can also be used for prototypes and initial production workloads with the most advanced algorithms available. The libraries being created on top of it might also be worth looking into.
### Node tools
#### Retext
[Retext][8] is part of the [unified collective][9]. Unified is an interface that allows multiple tools and plugins to integrate and work together effectively. Retext is one of three syntaxes used by the unified tool; the others are Remark for markdown and Rehype for HTML. This is a very interesting idea, and I'm excited to see this community grow. Retext doesn't expose a lot of its underlying techniques, but instead uses plugins to achieve the results you might be aiming for with NLP. It's easy to do things like checking spelling, fixing typography, detecting sentiment, or making sure text is readable with simple plugins. Overall, this is an excellent tool and community if you just need to get something done without having to understand everything in the underlying process.
#### Compromise
[Compromise][10] certainly isn't the most sophisticated tool. If you're looking for the most advanced algorithms or the most complete system, this probably isn't the right tool for you. However, if you want a performant tool that has a wide breadth of features and can function on the client side, you should take a look at Compromise. Overall, its name is accurate in that the creators compromised on functionality and accuracy by focusing on a small package with much more specific functionality that benefits from the user understanding more of the context surrounding the usage.
#### Natural
[Natural][11] includes most functions you might expect in a general NLP library. It is mostly focused on English, but some other languages have been contributed, and the community is open to additional contributions. It supports tokenizing, stemming, classification, phonetics, term frequency-inverse document frequency, WordNet, string similarity, and some inflections. It might be most comparable to NLTK, in that it tries to include everything in one package, but it is easier to use and isn't necessarily focused around research. Overall, this is a pretty full library, but it is still in active development and may require additional knowledge of underlying implementations to be fully effective.
#### Nlp.js
[Nlp.js][12] is built on top of several other NLP libraries, including Franc and Brain.js. It provides a nice interface into many components of NLP, like classification, sentiment analysis, stemming, named entity recognition, and natural language generation. It also supports quite a few languages, which is helpful if you plan to work in something other than English. Overall, this is a great general tool with a simplified interface into several other great tools. This will likely take you a long way in your applications before you need something more powerful or more flexible.
### Java tools
#### OpenNLP
[OpenNLP][13] is hosted by the Apache Foundation, so it's easy to integrate it into other Apache projects, like Apache Flink, Apache NiFi, and Apache Spark. It is a general NLP tool that covers all the common processing components of NLP, and it can be used from the command line or within an application as a library. It also has wide support for multiple languages. Overall, OpenNLP is a powerful tool with a lot of features and ready for production workloads if you're using Java.
#### StanfordNLP
[Stanford CoreNLP][14] is a set of tools that provides statistical NLP, deep learning NLP, and rule-based NLP functionality. Many other programming language bindings have been created so this tool can be used outside of Java. It is a very powerful tool created by an elite research institution, but it may not be the best thing for production workloads. This tool is dual-licensed with a special license for commercial purposes. Overall, this is a great tool for research and experimentation, but it may incur additional costs in a production system. The Python implementation might also interest many readers more than the Java version. Also, one of the best Machine Learning courses is taught by a Stanford professor on Coursera. [Check it out][15] along with other great resources.
#### CogCompNLP
[CogCompNLP][16], developed by the University of Illinois, also has a Python library with similar functionality. It can be used to process text, either locally or on remote systems, which can remove a tremendous burden from your local device. It provides processing functions such as tokenization, part-of-speech tagging, chunking, named-entity tagging, lemmatization, dependency and constituency parsing, and semantic role labeling. Overall, this is a great tool for research, and it has a lot of components that you can explore. I'm not sure it's great for production workloads, but it's worth trying if you plan to use Java.
* * *
What are your favorite open source tools and libraries for NLP? Please share in the comments—especially if there's one I didn't include.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/natural-language-processing-tools
作者:[Dan Barker (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: http://www.nltk.org/
[3]: http://www.nltk.org/book_1ed/
[4]: https://spacy.io/
[5]: https://textblob.readthedocs.io/en/dev/
[6]: https://readthedocs.org/projects/textacy/
[7]: https://pytorchnlp.readthedocs.io/en/latest/
[8]: https://www.npmjs.com/package/retext
[9]: https://unified.js.org/
[10]: https://www.npmjs.com/package/compromise
[11]: https://www.npmjs.com/package/natural
[12]: https://www.npmjs.com/package/node-nlp
[13]: https://opennlp.apache.org/
[14]: https://stanfordnlp.github.io/CoreNLP/
[15]: https://opensource.com/article/19/2/learn-data-science-ai
[16]: https://github.com/CogComp/cogcomp-nlp

View File

@ -1,247 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Manage multimedia files with Git
======
Learn how to use Git to track large multimedia files in your projects in
the final article in our series on little-known uses of Git.
![video editing dashboard][1]
Git is very specifically designed for source code version control, so it's rarely embraced by projects and industries that don't primarily work in plaintext. However, the advantages of an asynchronous workflow are appealing, especially in the ever-growing number of industries that combine serious computing with seriously artistic ventures, including web design, visual effects, video games, publishing, currency design (yes, that's a real industry), education… the list goes on and on.
In this series leading up to Git's 14th anniversary, we've shared six little-known ways to use Git. In this final article, we'll look at software that brings the advantages of Git to managing multimedia files.
### The problem with managing multimedia files with Git
It seems to be common knowledge that Git doesn't work well with non-text files, but it never hurts to challenge assumptions. Here's an example of adding and committing a photo file with Git:
```
$ du -hs
108K .
$ cp ~/photos/dandelion.tif .
$ git add dandelion.tif
$ git commit -m 'added a photo'
[master (root-commit) fa6caa7] added a photo
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 dandelion.tif
$ du -hs
1.8M .
```
Nothing unusual so far; adding a 1.8MB photo to a directory results in a directory 1.8MB in size. So, let's try removing the file:
```
$ git rm dandelion.tif
$ git commit -m 'deleted a photo'
$ du -hs
828K .
```
You can see the problem here: removing a large file after it's been committed leaves the repository roughly eight times its original, barren size (from 108K to 828K). You can perform tests to get a better average, but this simple demonstration is consistent with my experience. The cost of committing files that aren't text-based is minimal at first, but the longer a project stays active, the more changes people make to static content, and the more those fractions start to add up. When a Git repository becomes very large, the major cost is usually speed. The time to perform pulls and pushes goes from being how long it takes to take a sip of coffee to how long it takes to wonder if your computer got kicked off the network.
The reason static content causes Git to grow in size is that formats based on text allow Git to pull out just the parts that have changed. Raster images and music files make as much sense to Git as they would to you if you looked at the binary data contained in a .png or .wav file. So Git just takes all the data and makes a new copy of it, even if only one pixel changes from one photo to the next.
### Git-portal
In practice, many multimedia projects don't need or want to track the media's history. The media part of a project tends to have a different lifecycle than the text or code part of a project. Media assets generally progress in one direction: a picture starts as a pencil sketch, proceeds toward its destination as a digital painting, and, even if the text is rolled back to an earlier version, the art continues its forward progress. It's rare for media to be bound to a specific version of a project. The exceptions are usually graphics that reflect datasets—usually tables or graphs or charts—that can be done in text-based formats such as SVG.
So, on many projects that involve both media and text (whether it's narrative prose or code), Git is an acceptable solution to file management, as long as there's a playground outside the version control cycle for artists to play in.
![Graphic showing relationship between art assets and Git][2]
A simple way to enable that is [Git-portal][3], a Bash script armed with Git hooks that moves your asset files to a directory outside Git's purview and replaces them with symlinks. Git commits the symlinks (sometimes called aliases or shortcuts), which are trivially small, so all you commit are your text files and whatever symlinks represent your media assets. Because the replacement files are symlinks, your project continues to function as expected because your local machine follows the symlinks to their "real" counterparts. Git-portal maintains a project's directory structure when it swaps out a file with a symlink, so it's easy to reverse the process, should you decide that Git-portal isn't right for your project or you need to build a version of your project without symlinks (for distribution, for instance).
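The swap itself is easy to picture. Here is a toy Python sketch of the idea (Git-portal is actually a Bash script; the function name and the assumption of a repository-root working directory are mine):
```
import os
import shutil

def send_through_portal(path, portal_dir="_portal"):
    """Move a file into the portal and leave a relative symlink in its place."""
    os.makedirs(os.path.join(portal_dir, os.path.dirname(path)), exist_ok=True)
    shutil.move(path, os.path.join(portal_dir, path))  # real file now lives in _portal

    # Climb back up to the repo root, then down into _portal, so the link
    # stays valid from the file's original location.
    depth = path.count(os.sep)
    rel = os.path.join(*([".."] * depth), portal_dir, path)
    os.symlink(rel, path)  # Git will commit only this tiny symlink

send_through_portal("song-Track_1-1.mid")  # assumes the file exists here
```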
Git-portal also allows remote synchronization of assets over rsync, so you can set up a remote storage location as a centralized source of authority.
Git-portal is ideal for multimedia projects, including video game and tabletop game design, virtual reality projects with big 3D model renders and textures, [books][4] with graphics and .odt exports, collaborative [blog websites][5], music projects, and much more. It's not uncommon for an artist to perform versioning in their application—in the form of layers (in the graphics world) and tracks (in the music world)—so Git adds nothing to multimedia project files themselves. The power of Git is leveraged for other parts of artistic projects (prose and narrative, project management, subtitle files, credits, marketing copy, documentation, and so on), and the power of structured remote backups is leveraged by the artists.
#### Install Git-portal
There are RPM packages for Git-portal located at <https://klaatu.fedorapeople.org/git-portal>, which you can download and install.
Alternately, you can install Git-portal manually from its home on GitLab. It's just a Bash script and some Git hooks (which are also Bash scripts), but it requires a quick build process so that it knows where to install itself:
```
$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
$ cd git-portal.clone
$ ./configure
$ make
$ sudo make install
```
#### Use Git-portal
Git-portal is used alongside Git. This means, as with all large-file extensions to Git, there are some added steps to remember. But you only need Git-portal when dealing with your media assets, so it's pretty easy to remember unless you've acclimated yourself to treating large files the same as text files (which is rare for Git users). There's one setup step you must do to use Git-portal in a project:
```
$ mkdir bigproject.git
$ cd !$
$ git init
$ git-portal init
```
Git-portal's **init** function creates a **_portal** directory in your Git repository and adds it to your .gitignore file.
Using Git-portal in a daily routine integrates smoothly with Git. A good example is a MIDI-based music project: the project files produced by the music workstation are text-based, but the MIDI files are binary data:
```
$ ls -1
_portal
song.1.qtr
song.qtr
song-Track_1-1.mid
song-Track_1-3.mid
song-Track_2-1.mid
$ git add song*qtr
$ git-portal song-Track*mid
$ git add song-Track*mid
```
If you look into the **_portal** directory, you'll find the original MIDI files. The files in their place are symlinks to **_portal** , which keeps the music workstation working as expected:
```
$ ls -lG
[...] _portal/
[...] song.1.qtr
[...] song.qtr
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
```
As with Git, you can also add a directory of files:
```
$ cp -r ~/synth-presets/yoshimi .
$ git-portal add yoshimi
Directories cannot go through the portal. Sending files instead.
$ ls -lG yoshimi
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
```
Removal works as expected, but when removing something in **_portal** , you should use **git-portal rm** instead of **git rm**. Using Git-portal ensures that the file is removed from **_portal** :
```
$ ls
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
$ git-portal rm song-Track_1-3.mid
rm 'song-Track_1-3.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
```
If you forget to use Git-portal, then you have to remove the portal file manually:
```
$ git rm song-Track_1-1.mid
rm 'song-Track_1-1.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
$ trash _portal/song-Track_1-1.mid
```
Git-portal's only other function is to list all current symlinks and find any that may have become broken, which can sometimes happen if files move around in a project directory:
```
$ mkdir foo
$ mv yoshimi foo
$ git-portal status
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
```
If you're using Git-portal for a personal project and maintaining your own backups, this is technically all you need to know about Git-portal. If you want to add in collaborators or you want Git-portal to manage backups the way (more or less) Git does, you can add a remote.
#### Add Git-portal remotes
Adding a remote location for Git-portal is done through Git's existing remote function. Git-portal implements Git hooks, scripts hidden in your repository's .git directory, to look at your remotes for any that begin with **_portal**. If it finds one, it attempts to **rsync** to the remote location and synchronize files. Git-portal performs this action anytime you do a Git push or a Git merge (or pull, which is really just a fetch and an automatic merge).
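To picture the mechanism, here is a hypothetical pre-push hook sketched in Python (Git-portal's real hooks are Bash; the rsync flags and the assumption that _portal/ sits at the repository root are mine):
```
#!/usr/bin/env python3
"""Toy .git/hooks/pre-push: mirror _portal/ to a remote named _portal."""
import subprocess
import sys

# Ask Git whether a remote called "_portal" is configured.
result = subprocess.run(
    ["git", "remote", "get-url", "_portal"],
    capture_output=True, text=True,
)

if result.returncode == 0:
    url = result.stdout.strip()
    print(f"Syncing _portal content to {url}")
    # Trailing slash: copy the directory's contents, not the directory itself.
    sys.exit(subprocess.call(["rsync", "-av", "_portal/", url]))
```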
If you've only cloned Git repositories, then you may never have added a remote yourself. It's a standard Git procedure:
```
$ git remote add origin git@gitdawg.com:seth/bigproject.git
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
```
The name **origin** is a popular convention for your main Git repository, so it makes sense to use it for your Git data. Your Git-portal data, however, is stored separately, so you must create a second remote to tell Git-portal where to push to and pull from. Depending on your Git host, you may need a separate server because gigabytes of media assets are unlikely to be accepted by a Git host with limited space. Or maybe you're on a server that permits you to access only your Git repository and not external storage directories:
```
$ git remote add _portal seth@example.com:/home/seth/git/bigproject_portal
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
_portal seth@example.com:/home/seth/git/bigproject_portal (fetch)
_portal seth@example.com:/home/seth/git/bigproject_portal (push)
```
You may not want to give all of your users individual accounts on your server, and you don't have to. To provide access to the server hosting a repository's large file assets, you can run a Git frontend like **[Gitolite][8]** , or you can use **rrsync** (i.e., restricted rsync).
Now you can push your Git data to your remote Git repository and your Git-portal data to your remote portal:
```
$ git push origin HEAD
master destination detected
Syncing _portal content...
sending incremental file list
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
total size is 60,358,015 speedup is 6,474.10
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
```
If you have Git-portal installed and a **_portal** remote configured, your **_portal** directory will be synchronized, getting new content from the server and sending fresh content with every push. While you don't have to do a Git commit and push to sync with the server (a user could just use rsync directly), I find it useful to require commits for artistic changes. It integrates artists and their digital assets into the rest of the workflow, and it provides useful metadata about project progress and velocity.
### Other options
If Git-portal is too simple for you, there are other options for managing large files with Git. [Git Large File Storage][9] (LFS) is a fork of a defunct project called git-media and is maintained and supported by GitHub. It requires special commands (like **git lfs track** to protect large files from being tracked by Git) and requires the user to manage a .gitattributes file to update which files in the repository are tracked by LFS. It supports _only_ HTTP and HTTPS remotes for large files, so your LFS server must be configured so users can authenticate over HTTP rather than SSH or rsync.
A more flexible option than LFS is [git-annex][10], which you can learn more about in my article about [managing binary blobs in Git][11] (ignore the parts about the deprecated git-media, as its former flexibility doesn't apply to its successor, Git LFS). Git-annex is a flexible and elegant solution with a detailed system for adding, removing, and moving large files within a repository. Because it's flexible and powerful, there are lots of new commands and rules to learn, so take a look at its [documentation][12].
If, however, your needs are simple and you like a solution that utilizes existing technology to do simple and obvious tasks, Git-portal might be the tool for the job.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/manage-multimedia-files-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
[3]: http://gitlab.com/slackermedia/git-portal.git
[4]: https://www.apress.com/gp/book/9781484241691
[5]: http://mixedsignals.ml
[8]: https://opensource.com/article/19/4/file-sharing-git
[9]: https://git-lfs.github.com/
[10]: https://git-annex.branchable.com/
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
[12]: https://git-annex.branchable.com/walkthrough/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: ( ) [#]: translator: (mengxinayan)
[#]: reviewer: ( ) [#]: reviewer: ( )
[#]: publisher: ( ) [#]: publisher: ( )
[#]: url: ( ) [#]: url: ( )
@ -180,7 +180,7 @@ via: https://opensource.com/article/19/7/structure-multi-file-c-part-1
作者:[Erik O'Shaughnessy][a] 作者:[Erik O'Shaughnessy][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID) 译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[校对者ID](https://github.com/校对者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: ( ) [#]: translator: (zhangxiangping)
[#]: reviewer: ( ) [#]: reviewer: ( )
[#]: publisher: ( ) [#]: publisher: ( )
[#]: url: ( ) [#]: url: ( )
@ -264,7 +264,7 @@ via: https://opensource.com/article/19/7/python-google-natural-language-api
作者:[JR Oakes][a] 作者:[JR Oakes][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID) 译者:[zhangxiangping](https://github.com/zhangxiangping)
校对:[校对者ID](https://github.com/校对者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: ( ) [#]: translator: (mengxinayan)
[#]: reviewer: ( ) [#]: reviewer: ( )
[#]: publisher: ( ) [#]: publisher: ( )
[#]: url: ( ) [#]: url: ( )
@ -204,7 +204,7 @@ via: https://opensource.com/article/19/7/structure-multi-file-c-part-2
作者:[Erik O'Shaughnessy][a] 作者:[Erik O'Shaughnessy][a]
选题:[lujun9972][b] 选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID) 译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[校对者ID](https://github.com/校对者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,57 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top CI/CD resources to set you up for success)
[#]: via: (https://opensource.com/article/19/12/cicd-resources)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
Top CI/CD resources to set you up for success
======
Continuous integration and continuous deployment were key topics in 2019
as organizations looked to achieve seamless, flexible, and scalable
deployments.
![Plumbing tubes in many directions][1]
This has been a fantastic year for continuous integration/continuous deployment (CI/CD) and the world of DevOps. Opensource.com authors shared how they're moving toward agile and scrum as they focus on seamless, flexible, and scalable deployments. Here are some of the big themes in the CI/CD articles we published this year.
### Learning and improving your CI/CD skills
Some of our favorite articles focus on hands-on CI/CD experience and cover a lot of ground as they do. The place to start is always with [Jenkins][2] pipelines, and Bryant Son's [_Building CI/CD pipelines with Jenkins_][3] will give you enough experience to get started building your first pipelines. Daniel Oh's [_Automate user acceptance testing with your DevOps pipeline_][4] provides great information on acceptance testing, including various CI/CD applications you can use for testing in its own right. And my article on [_Security scanning your DevOps pipeline_][5] is a very short, to-the-point tutorial on how to set up security in a pipeline using the Jenkins platform.
### Delivery workflow
While learning how to use and improve your skills with CI/CD, the workflow matters, especially when it comes to pipelines, as Jithin Emmanuel shares in [_Screwdriver: A scalable build platform for continuous delivery_][6]. Emily Burns explains the value of having the flexibility to build exactly what you need with your CI/CD workflow in [_Why Spinnaker matters to CI/CD_][7]. And Willy-Peter Schaub extols the idea of creating a unified pipeline for everything to build consistently in [_One CI/CD pipeline per product to rule them all_][8]. These articles will give you a good sense of what happens after you onboard team members to the workflow process.
### How CI/CD affects organizations
2019 was also the year of recognizing CI/CD's business impact and how it affects day-to-day operations. Agnieszka Gancarczyk shares the results of Red Hat's [_Small Scale Scrum vs. Large Scale Scrum_][9] survey, including respondents' differing opinions on scrums, the agile movement, and the impact on teams. Will Kelly covers [_How continuous deployment impacts the entire organization_][10], including the importance of open communication, and Daniel Oh emphasizes the importance of metrics and observability in [_3 types of metric dashboards for DevOps teams_][11]. Last, but far from least, Ann Marie Fred's great article [_Don't test in production? Test in production!_][12] details why it's important for you to test in production—before your customers do.
We are thankful to the many contributing authors who shared their insights with Opensource.com readers in 2019, and I look forward to learning more from them about the evolution of CI/CD in 2020.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/cicd-resources
作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions)
[2]: https://jenkins.io/
[3]: https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins
[4]: https://opensource.com/article/19/4/devops-pipeline-acceptance-testing
[5]: https://opensource.com/article/19/7/security-scanning-your-devops-pipeline
[6]: https://opensource.com/article/19/3/screwdriver-cicd
[7]: https://opensource.com/article/19/8/why-spinnaker-matters-cicd
[8]: https://opensource.com/article/19/7/cicd-pipeline-rule-them-all
[9]: https://opensource.com/article/19/3/small-scale-scrum-vs-large-scale-scrum
[10]: https://opensource.com/article/19/7/organizational-impact-continuous-deployment
[11]: https://opensource.com/article/19/7/dashboards-devops-teams
[12]: https://opensource.com/article/19/5/dont-test-production

View File

@ -1,76 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's HTTPS for secure computing?)
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
What's HTTPS for secure computing?
======
Security by default hasn't arrived yet.
![Secure https browser][1]
Over the past few years, it's become difficult to find a website that is just "http://…" This is because the industry has finally realised that security on the web is "a thing," and also because it has become easy for both servers and clients to set up and use HTTPS connections. A similar shift may be on its way in computing across cloud, edge, Internet of Things, blockchain, artificial intelligence, machine learning, and beyond. We've known for a long time that we should encrypt data at rest (in storage) and in transit (on the network), but encrypting it in use (while processing) has been difficult and expensive. Confidential computing—providing this type of protection for data and algorithms in use using hardware capabilities such as trusted execution environments (TEEs)—protects data on hosted systems or vulnerable environments.
I've written several times about [TEEs][2] and, of course, the [Enarx project][3] of which I'm a co-founder with Nathaniel McCallum (see [_Enarx for everyone (a quest)_][4] and [_Enarx goes multi-platform_][5] for examples). Enarx uses TEEs and provides a platform- and language-independent deployment platform to allow you safely to deploy sensitive applications or components (such as microservices) onto hosts that you don't trust. Enarx is, of course, completely open source (we're using the Apache 2.0 licence, for those with an interest). Being able to run workloads on hosts that you don't trust is the promise of confidential computing, which extends normal practice for sensitive data at rest and in transit to data in use:
* **Storage:** You encrypt your data at rest because you don't fully trust the underlying storage infrastructure.
* **Networking:** You encrypt your data in transit because you don't fully trust the underlying network infrastructure.
* **Compute:** You encrypt your data in use because you don't fully trust the underlying compute infrastructure.
I've got a lot to say about trust, and the word "fully" in the statements above is important (I added it on re-reading what I'd written). In each case, you have to trust the underlying infrastructure to some degree, whether it's to deliver your packets or store your blocks, for instance. In the case of the compute infrastructure, you're going to have to trust the CPU and associated firmware, just because you can't really do computing without trusting them (there are techniques such as homomorphic encryption, which are beginning to offer some opportunities here, but they're limited, and the technology is still immature).
Questions sometimes come up about whether you should fully trust CPUs, given some of the security problems that have been found with them, and also about whether they are fully secure against physical attacks on the host on which they reside.
The answer to both questions is "no," but this is the best technology we currently have available at scale and at a price point to make it generally deployable. To address the second question, nobody is pretending that this (or any other technology) is fully secure: what we need to do is consider our [threat model][6] and decide whether TEEs (in this case) provide sufficient security for our specific requirements. In terms of the first question, the model that Enarx adopts is to allow decisions to be made at deployment time as to whether you trust a particular set of CPUs. So, for example, if vendor Q's generation R chips are found to contain a vulnerability, it will be easy to say "refuse to deploy my workloads to R-type CPUs from Q, but continue to deploy to S-type, T-type, and U-type chips from Q and any CPUs from vendors P, M, and N."
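To make that deployment-time decision concrete, here is a toy Python sketch of such a policy check (purely illustrative and not Enarx code; the vendor and generation names simply echo the hypothetical example above):
```
# Hypothetical policy: trust several vendors, but refuse one known-vulnerable
# CPU generation from vendor Q.
TRUSTED_VENDORS = {"P", "M", "N", "Q"}
DENIED = {("Q", "R")}

def may_deploy(vendor: str, generation: str) -> bool:
    """Return True if a workload may be deployed to this CPU type."""
    if (vendor, generation) in DENIED:
        return False
    return vendor in TRUSTED_VENDORS

assert may_deploy("Q", "S")      # other generations from Q are still fine
assert not may_deploy("Q", "R")  # the vulnerable generation is refused
```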
I think there are three changes in the landscape that are leading to the interest and adoption of confidential computing right now:
1. **Hardware availability:** It is only over the past six to 12 months that hardware supporting TEEs has started to become widely available, with the key examples in the market at the moment being Intel's SGX and AMD's SEV. We can expect to see other examples of TEE-enabled hardware coming out in the fairly near future.
  2. **Industry readiness:** Just as cloud use is increasingly accepted as a model for application deployment, regulators and legislators are increasing the requirements on organisations to protect the data they manage. Organisations are beginning to clamour for ways to run sensitive applications (or applications that handle sensitive data) on untrusted hosts—or, to be more accurate, on hosts that they cannot fully trust with that sensitive data. This should be no surprise: the chip vendors would not have invested so much money into this technology if they saw no likely market for it. The formation of the Linux Foundation's [Confidential Computing Consortium][7] (CCC) is another example of how the industry is interested in finding common models for the use of confidential computing and encouraging open source projects to employ these technologies.[1][8]
3. **Open source:** Like blockchain, confidential computing is one of those technologies where it's an absolute no-brainer to use open source. If you are going to run sensitive applications, you need to trust what's doing the running for you. That's not just the CPU and firmware but also the framework that supports the execution of your workload within the TEE. It's all very well saying, "I don't trust the host machine and its software stack, so I'm going to use a TEE," but if you don't have visibility into the TEE software environment, then you're just swapping one type of software opacity for another. Open source support for TEEs allows you or the community—in fact, you _and_ the community—to check and audit what you're running in a way that is impossible for proprietary software. This is why the CCC sits within the Linux Foundation (which is committed to the open development model) and is encouraging TEE-related software projects to join and go open source (if they weren't already).
I'd argue that this triad of hardware availability, industry readiness, and open source has become the driver for technology change over the past 15 to 20 years. Blockchain, AI, cloud computing, webscale computing, big data, and internet commerce are all examples of these three meeting at the same time and leading to extraordinary changes in our industry.
Security by default is a promise that we've been hearing for decades now, and it hasn't arrived yet. Honestly, I'm not sure it ever will. But as new technologies become available, security ubiquity for particular use cases becomes more practical and more expected within the industry. It seems that confidential computing is ready to be the next big change—and you, dear reader, can join the revolution (it's open source, after all).
* * *
1. Enarx, initiated by Red Hat, is a CCC project.
* * *
_This article was originally published on [Alice, Eve, and Bob][9] and is reprinted with the author's permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/confidential-computing
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
[3]: https://enarx.io/
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
[7]: https://confidentialcomputing.io/
[8]: tmp.VEZpFGxsLv#1
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/

View File

@ -1,105 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What I learned going from prison to Python)
[#]: via: (https://opensource.com/article/20/1/prison-to-python)
[#]: author: (Shadeed "Sha" Wallace-Stepter https://opensource.com/users/shastepter)
What I learned going from prison to Python
======
How open source programming can offer opportunities after incarceration.
![Programming books on a shelf][1]
Less than a year ago, I was in San Quentin State Prison serving a life sentence.
In my junior year in high school, I shot a man while robbing him. Now, it took a while for me to see or even admit that what I did was wrong, but after going through a jury trial and seeing the devastating consequences of my actions, I knew that I needed to make a change, and I did. And although it was a great thing that I had changed, I had still shot a man and nearly killed him. And there are consequences to doing something like that, and rightfully so. So at the age of 18, I was sentenced to life in prison.
Now prison is a terrible place; I do not recommend it. But I had to go and so I went. I'll spare you the details, but you can rest assured it's a place where there isn't much incentive to change, and many people pick up more bad habits than they went in with.
I'm one of the lucky ones. While I was in prison, something different happened. I started to imagine a future for myself beyond the prison bars where, up until that point, I had spent all of my adult life.
Now YOU think about this: I'm black, with nothing more than a high school education. I had no work history, and if I ever were to leave prison, I would be a convicted felon upon my release. And I think I'm being fair when I say that the first thought for an employer who sees this profile is not "I need to hire this person."
My options weren't clear, but my mind was made up. I needed to do something to survive that wouldn't look anything like my life before prison.
### A path to Python
Eventually, I wound up in San Quentin State Prison, and I had no idea how lucky I was to be there. San Quentin offered several self-help and education programs. These [rehabilitation opportunities][2] ensured prisoners had skills that helped them avoid being repeat offenders upon release.
As part of one of these programs, I met [Jessica McKellar][3] in 2017 through her work with the San Quentin Media Program. Jessica is an enthusiast of the programming language [Python][4], and she started to sell me on how great Python is and how it's the perfect language to learn for someone just starting out. And this is where the story becomes stranger than fiction.
 
> Thanks [@northbaypython][5] for letting [@ShaStepter][6] and me reprise our [@pycon][7] keynotes to get them recorded. I'm honored to share:
>
> From Prison to Python: <https://t.co/rcumoAgZHm>
>
> Mass Decarceration: If We Don't Hire People With Felony Convictions, Who Will? <https://t.co/fENDUFdxfX> [pic.twitter.com/Kpjo8d3ul6][8]
>
> — Jessica McKellar (@jessicamckellar) [November 5, 2019][9]
 
Jessica told me about these Python video tutorials that she did for a company called [O'Reilly Media][10], that they were online, and how great it would be if I could get access to them. Unfortunately, internet access in prison isn't a thing. But I had met this guy named Tim O'Reilly, who had recently come to San Quentin. It turns out that, after his visit, Tim had donated a ton of content from his company, O'Reilly Media, to the prison's programming class. I wound up getting my hands on a tablet that had Jessica's Python tutorials on it and learned how to code using those tutorials.
It was incredible. Total strangers with a very different background and life from my own had connected the dots in a way that led to me learning to code.
### The love of the Python community
After this point, I started meeting with Jessica pretty frequently, and she began to tell me about the open source community. What I learned is that, on a fundamental level, open source is about fellowship and collaboration. It works so well because no one is excluded.
And for me, someone who struggled to see where they fit, what I saw was a very basic form of love—love by way of collaboration and acceptance, love by way of access, love by way of inclusion. And my spirit yearned to be a part of it. So I continued my education with Python, and, unfortunately, I wasn't able to get more tutorials, but I was able to draw from the vast wealth of written knowledge that has been compiled by the open source community. I read anything that even mentioned Python, from paperback books to obscure magazine articles, and I used the tablet that I had to solve the Python problems that I read about.
My passion for Python and programming wasn't something that many of my peers shared. Aside from the very small group of people who were in the prison's programming class, no one else that I knew had ever mentioned programming; it's just not on the average prisoner's radar. I believe that this is due to the perception that programming isn't accessible to people who have experienced incarceration, especially if you are a person of color.
### Life with Python outside of prison
Then, on August 17, 2018, I got the surprise of my life. Then-Governor Jerry Brown commuted my 27-years-to-life sentence, and I was released from prison after serving almost 19 years.
But here's the reality of my situation and why I believe that programming and the open source community are so valuable. I am a 37-year-old, black, convicted felon, with no work history, who just served 18 years in prison. There aren't many professions that would prevent me from being at the mercy of the stigmas and biases that inevitably accompany my criminal past. But one of the few exceptions is programming.
The people who are now returning to society after incarceration are in desperate need of inclusion, but when the conversation turns to diversity in the workplace and how much it's needed, you really don't hear this group being mentioned or included.
 
> What else:
>
> 1\. Background checks: ask how they are used at your company.
>
> 2\. Entry-level roles: remove fake, unnecessary prerequisites that will exclude qualified people with records.
>
> 3\. Active outreach: partner with local re-entry programs to create hiring pipelines. [pic.twitter.com/WnzdEUTuxr][11]
>
> — Jessica McKellar (@jessicamckellar) [May 12, 2019][12]
 
So with that, I want to humbly challenge all of the programmers and members of the open source community to expand your thinking around inclusion and diversity. I proudly stand before you today as the representative of a demographic that most people don't think about—formerly incarcerated people. But we exist, and we are eager to prove our value, and, above all else, we are looking to be accepted. Many challenges await us upon our reentry into society, and I ask that you allow us to have the opportunity to demonstrate our worth. Welcome us, accept us, and, more than anything else, include us.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/prison-to-python
作者:[Shadeed "Sha" Wallace-Stepter][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/shastepter
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2 (Programming books on a shelf)
[2]: https://www.dailycal.org/2019/02/27/san-quentin-rehabilitation-programs-offer-inmates-education-a-voice/
[3]: https://twitter.com/jessicamckellar?lang=en
[4]: https://www.python.org/
[5]: https://twitter.com/northbaypython?ref_src=twsrc%5Etfw
[6]: https://twitter.com/ShaStepter?ref_src=twsrc%5Etfw
[7]: https://twitter.com/pycon?ref_src=twsrc%5Etfw
[8]: https://t.co/Kpjo8d3ul6
[9]: https://twitter.com/jessicamckellar/status/1191601209917837312?ref_src=twsrc%5Etfw
[10]: http://shop.oreilly.com/product/110000448.do
[11]: https://t.co/WnzdEUTuxr
[12]: https://twitter.com/jessicamckellar/status/1127640222504636416?ref_src=twsrc%5Etfw

View File

@ -1,178 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use this Python script to find bugs in your Overcloud)
[#]: via: (https://opensource.com/article/20/1/logtool-root-cause-identification)
[#]: author: (Arkady Shtempler https://opensource.com/users/ashtempl)
Use this Python script to find bugs in your Overcloud
======
LogTool is a set of Python scripts that helps you investigate root
causes for problems in Overcloud nodes.
![Searching for code][1]
OpenStack stores and manages a bunch of log files on its Overcloud nodes and Undercloud host. Therefore, it's not easy to use OSP log files to investigate a problem you're having, especially when you don't even know what could have caused the problem.
If that's your situation, [LogTool][2] makes your life much easier! It saves you the time and work it would otherwise take to investigate the root cause manually. Based on a fuzzy string matching algorithm, LogTool provides all the unique error and warning messages that have occurred in the past. You can export these messages for a particular time period, such as the last 10 minutes, the last hour, or the last day, based on the timestamps in the logs.
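LogTool's own matcher is more involved, but the core idea, collapsing near-duplicate messages with fuzzy string matching so that each error is reported once, can be sketched with Python's standard difflib module (the 0.85 threshold here is an assumption for illustration, not LogTool's actual value):
```
from difflib import SequenceMatcher

def is_near_duplicate(a, b, threshold=0.85):
    """Treat two log messages as the same error if they are similar enough."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

unique = []
for msg in [
    "ERROR Failed to create VM req-111: no valid host",
    "ERROR Failed to create VM req-222: no valid host",
    "WARNING Disk usage above 90% on /var",
]:
    if not any(is_near_duplicate(msg, seen) for seen in unique):
        unique.append(msg)

print(unique)  # the two "Failed to create VM" lines collapse into one entry
```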
LogTool is a set of Python scripts, and its main module, **PyTool.py**, is executed on the Undercloud host. Some operation modes use additional scripts that are executed directly on Overcloud nodes, such as exporting errors and warnings from Overcloud logs.
LogTool supports Python 2 and 3, and you can change the working directory according to your needs: [LogTool_Python2][3] or [LogTool_Python3][4].
### Operation modes
#### 1\. Export errors and warnings from Overcloud logs
This mode is used to extract all unique **ERROR** and **WARNING** messages from Overcloud nodes that took place in the past. As the user, you're prompted to provide the "since time" and debug level to be used for extraction of errors or warnings. For example, if something went wrong in the last 10 minutes, you'll be able to extract error and warning messages for just that time period.
This operation mode generates a directory containing a result file for each Overcloud node. A result file is a simple text file that is compressed (**\*.gz**) to reduce the time needed to download it from the Overcloud node. To convert a compressed file to a regular text file, you can use [zcat][5] or a similar tool. Also, some versions of Vi and any recent version of Emacs support reading compressed data. The result file is divided into sections and contains a table of contents at the bottom.
There are two kinds of log files LogTool detects on the fly: _Standard_ and _Not Standard_. In _Standard_, each log line has a known and defined structure: timestamp, debug level, msg, and so on. In _Not Standard_, the log's structure is unknown; it could be a third party's logs, for example (a rough sketch of this distinction follows the list below). In the table of contents, you find a "Section name --> Line number" per section, for example:
* **Raw Data - extracted Errors/Warnings from standard OSP logs since:** This section contains all extracted Error/Warning messages as-is without any modifications or changes. These messages are the raw data LogTool uses for fuzzy matching analysis.
* **Statistics - Number of Errors/Warnings per standard OSP log since:** In this section, you find the amount of Errors and Warnings per Standard log file. This may help you understand potential components used to search for the root cause of your issue.
* **Statistics - Unique messages, per STANDARD OSP log file since:** This section addresses unique Error and Warning messages since a timestamp you provide. For more details about each unique Error or Warning, search for the same message in the Raw Data section.
* **Statistics - Unique messages per NON STANDARD log file, since any time:** This section contains the unique messages in nonstandard log files. Unfortunately, LogTool cannot handle these log files in the same manner as Standard Log files; therefore, the "since time" you provide on extraction will be ignored, and you'll see all of the unique Errors/Warnings messages ever created. So first, scroll down to the table of contents at the bottom of the result file and review its sections—use the line indexes in the table of contents to jump to the relevant sections, where numbers 3, 4, and 5 are most important.
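As a rough illustration of the _Standard_/_Not Standard_ distinction, a structured OSP-style line can be recognized by its fixed prefix. The pattern below is a simplified guess at that format for illustration, not LogTool's actual detection code:
```
import re

# Assumed shape of a "Standard" line: timestamp, PID, debug level, message.
STANDARD = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<pid>\d+) (?P<level>DEBUG|INFO|WARNING|ERROR) (?P<msg>.*)"
)

line = "2020-01-31 12:00:01.123 77 ERROR nova.compute [req-1] No valid host"
m = STANDARD.match(line)
if m:
    print(m.group("level"), "->", m.group("msg"))
else:
    print("Not Standard: structure unknown")
```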
#### 2\. Download all logs from Overcloud nodes
Logs from all Overcloud nodes are compressed and downloaded to a local directory on your Undercloud host.
#### 3\. Grep for a string in all Overcloud logs
This mode "greps" (searches) a string provided by the user on all Overcloud logs. For example, you might want to see all logged messages for a specific request ID, such as the request ID for a "Create VM" that has failed.
#### 4\. Check current CPU, RAM, and disk on Overcloud
This mode displays the current CPU, RAM, and disk info on each Overcloud node.
#### 5\. Execute user's script
This enables users to run their own scripts on Overcloud nodes. For instance, say an Overcloud deployment failed, so you need to execute the same procedure on each Controller node to fix that. You can implement a workaround script and run it on the Controllers using this mode.
#### 6\. Download relevant logs only, by given timestamp
This mode downloads only the Overcloud logs with _"Last Modified" > "timestamp given by the user."_ For example, if you got an error 10 minutes ago, old log files won't be relevant, so downloading them is unnecessary. In addition, you can't (or shouldn't) attach large files in some bug-reporting tools, so this mode might help with making bug reports.
#### 7\. Export errors and warnings from Undercloud logs
This is the same as mode #1 above, but for Undercloud logs.
#### 8\. Check unhealthy Docker containers on the Overcloud
This mode is used to search for unhealthy Docker containers on the nodes.
#### 9\. Download OSP logs and run LogTool locally
This mode allows you to download OSP logs from Jenkins or Log Storage (for example, **cougar11.scl.lab.tlv.redhat.com**) and to analyze the downloaded logs locally.
#### 10\. Analyze deployment log on the Undercloud
This mode may help you understand what went wrong during Overcloud or Undercloud deployment. Deployment logs are generated when the **\--log** option is used, for example, inside the **overcloud_deploy.sh** script; the problem is that such logs are not "friendly," and it's hard to understand what went wrong, especially when verbosity is set to **vv** or more, as this makes the log unreadable with a bunch of data inside it. This mode provides some details about all failed tasks.
#### 11\. Analyze Gerrit(Zuul) failed gate logs
This mode is used to analyze Gerrit(Zuul) log files. It automatically downloads all files from a remote Gerrit gate (HTTP download) and analyzes all files locally.
### Installation
LogTool is available on GitHub. Clone it to your Undercloud host with:
```
`git clone https://github.com/zahlabut/LogTool.git`
```
Some external Python modules are also used by the tool:
#### Paramiko
This SSH module is usually installed on Undercloud by default. Use the following command to verify whether it's installed:
```
`ls -a /usr/lib/python2.7/site-packages | grep paramiko`
```
If you need to install the module, on your Undercloud, execute the following commands:
```
sudo easy_install pip
sudo pip install paramiko==2.1.1
```
#### BeautifulSoup
This HTML parser module is used only in modes where log files are downloaded using HTTP. It's used to parse the Artifacts HTML page to get all of the links in it. To install BeautifulSoup, enter this command:
```
`pip install beautifulsoup4`
```
You can also use the [requirements.txt][6] file to install all the required modules by executing:
```
`pip install -r requirements.txt`
```
### Configuration
All required parameters are set directly inside the **PyTool.py** script. The defaults are:
```
overcloud_logs_dir = '/var/log/containers'
overcloud_ssh_user = 'heat-admin'
overcloud_ssh_key = '/home/stack/.ssh/id_rsa'
undercloud_logs_dir ='/var/log/containers'
source_rc_file_path='/home/stack/'
```
### Usage
This tool is interactive, so to start it, just enter:
```
cd LogTool
python PyTool.py
```
### Troubleshooting LogTool
Two log files are created at runtime: Error.log and Runtime.log. Please add the contents of both to the description of any issue you'd like to open.
### Limitations
LogTool is hardcoded to handle files up to 500 MB.
### LogTool_Python3 script
Get it at [github.com/zahlabut/LogTool][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/logtool-root-cause-identification
作者:[Arkady Shtempler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ashtempl
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://github.com/zahlabut/LogTool
[3]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python2
[4]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python3
[5]: https://opensource.com/article/19/2/getting-started-cat-command
[6]: https://github.com/zahlabut/LogTool/blob/master/LogTool_Python3/requirements.txt

View File

@ -1,107 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to stop typosquatting attacks)
[#]: via: (https://opensource.com/article/20/1/stop-typosquatting-attacks)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
How to stop typosquatting attacks
======
Typosquatting is a way to lure users into divulging sensitive data to
cybercriminals. Learn how to protect your organization, your open source
project, and yourself.
![Gears above purple clouds][1]
Cybercriminals are turning to social engineering to try to trick unsuspecting people into divulging private information or valuable credentials. It is behind many [phishing scams][2] where the attacker poses as a reputable company or organization and uses it as a front to distribute a virus or other piece of malware.
One such risk is [typosquatting][3], a form of social engineering attack that tries to lure users into visiting malicious sites with URLs that are common misspellings of legitimate sites. These sites can cause significant damage to the reputation of organizations that are victimized by these attackers and harm users who are tricked into entering sensitive details into fake sites. Both system administrators and users need to be aware of the risks and take steps to protect themselves.
Open source software, which is developed and tested by large groups in public repositories, is often lauded for its security benefits. However, when it comes to social engineering schemes and malware implantation, even open source tools can fall victim.
This article looks at the rising trend of typosquatting and what these attacks could mean for open source software in the future.
### What is typosquatting?
Typosquatting is a very specific form of cybercrime that is often tied to a larger phishing attack. It begins with the cybercriminal buying and registering a domain name that is the misspelling of a popular site. For example, the cybercriminal might add an extra vowel or replace an "i" with a lowercase "l" character. Sometimes a cybercriminal obtains dozens of domain names, each with a different spelling variation.
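Tools like the DNS Twist scanner discussed later in this article automate that enumeration. The basic idea can be sketched in a few lines of Python; the substitutions here are a tiny, illustrative subset of what real tools check:
```
def typo_variants(domain):
    """Generate a few common typo variants of a domain name."""
    name, tld = domain.rsplit(".", 1)
    variants = set()
    # Look-alike character swaps: an "l" becomes a "1", and so on
    for src, dst in [("l", "1"), ("i", "l"), ("o", "0")]:
        if src in name:
            variants.add(name.replace(src, dst) + "." + tld)
    # Single-character omissions, e.g. "gogle.com"
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    return variants

# "wellpoint.com" yields, among others, the "we11point.com" domain used
# in the real attack described below.
print(sorted(typo_variants("wellpoint.com")))
```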
A typosquatting attack does not become dangerous until real users start visiting the site. To make that happen, the criminal runs a phishing scam, typically over email, to urge people to click a link and visit the typosquatting website. Normally these rogue pages have simple login screens bearing familiar logos that try to imitate the real company's design.
If the user does not realize they are visiting a fake website and enters sensitive details, such as their password, username, or credit card number, into the page, the cybercriminal gets full access to that data. If a user is utilizing the same password across several sites, their other online accounts are likely to be exploited as well. This is a cybercriminal's payout: identity theft, ruined credit reports, stolen records, and sometimes worse.
### Some recent attacks
From a company perspective, having a typosquatting attack connected to your domain name can be a public relations disaster, even though you played no direct role in it, because it's seen as irresponsible internet stewardship. As a domain owner, you have a responsibility to be proactive in defending against typosquatting to limit the pain caused by this type of fraud.
A few years ago, many [health insurance customers fell victim][4] to a typosquatting attack when they received a phishing email that pointed to we11point.com, with the number 1 replacing the character "l" in the URL.
When the international domain name rules were changed to allow anyone to register a URL with an extension previously tied to specific countries, it created a brand new wave of typosquatting attacks. One of the most prevalent ones seen today is when a cybercriminal registers a .om domain that matches a popular .com domain to take advantage of accidental omissions of the letter "c" when entering a web address.
### How to protect your website from typosquatting
For companies, the best strategy is to try to stay ahead of typosquatting attacks.
That means spending the money to trademark your domain and purchase all related URLs that could be easy misspellings. You don't need to buy all top-level domain variants of your site name, but at least focus on common misspellings of your primary site name.
If you need to send your users to third-party sites, do so from your official website, not in a mass email. It's important to firmly establish a policy that official communication always and only sends users to your site. That way, should a cybercriminal attempt to spoof communication from you, your users will know something's amiss when they end up on an unfamiliar page or URL structure.
Use an open source tool like [DNS Twist][5] to automatically scan your company's domain and determine whether there could already be a typosquatting attack in progress. DNS Twist runs on Linux operating systems and can be used through a series of shell commands.
Some ISPs offer typosquatting protection as part of their product offering. This functions as an extra layer of web filtering—if a user in your organization accidentally misspells a common URL, they are alerted that the page is blocked and redirected to the proper domain.
If you are a system administrator, consider running your own [DNS server][6] along with a blacklist of incorrect and forbidden domains.
Another effective way to spot a typosquatting attack in progress is to monitor your site traffic closely and set an alert for a sudden decrease in visitors from a particular region. It could be that a large number of your regular users have been redirected to a fake site.
As with almost any form of cyberattack, the key to stopping typosquatting is constant vigilance. Your users are counting on you to identify and shut down any fake sites that are operating under your name, and if you don't, you could lose your audience's trust.
### Typosquatting threats to open source software
Most major open source projects go through security and penetration testing, largely because the code is public. However, mistakes happen under even the best of conditions. Here are some things to watch for if you're involved in an open source project.
When you get a merge request or patch from an unknown source, review it carefully before merging, especially if there's a networking stack involved. Don't fall prey to the temptation of only testing your build; look at the code to ensure that nothing nefarious has been embedded into an otherwise functional enhancement.
Also, use the same rigor in protecting your project's identity as a business does for its domain. Don't let a cybercriminal create alternate download sites and offer a version of your project with additional harmful code. Use digital signatures, like the following, to create an assurance of authenticity for your software:
```
gpg --armor --detach-sig \
--output advent-gnome.sig \
example-0.0.1.tar.xz
```
You should also provide a checksum for the file you deliver:
```
`sha256sum example-0.0.1.tar.xz > example-0.0.1.txt`
```
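On the user's side, the download can be checked with `sha256sum -c example-0.0.1.txt`. As a rough sketch of what that verification does (file names taken from the example above):
```
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The .txt file contains "<digest>  <filename>", as written by sha256sum.
with open("example-0.0.1.txt") as f:
    expected = f.read().split()[0]

if sha256_of("example-0.0.1.tar.xz") == expected:
    print("Checksum OK")
else:
    print("Checksum MISMATCH: do not trust this download!")
```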
Provide these safeguards even if you don't believe your users will take advantage of them, because all it takes is one perceptive user to notice a missing signature on an alternative download to alert you that someone, somewhere is spoofing your project.
### Final thoughts
Humans are prone to making mistakes. When you have millions of people around the world typing in a common web address, it's no surprise that a certain percentage enter a typo in the URL. Cybercriminals are trying to capitalize on that trend with typosquatting.
It's hard to stop cybercriminals from registering domains that are available for purchase, so mitigate against typosquatting attacks by focusing on the ways they spread. The best protection is to build trust with your users and to be diligent in detecting typosquatting attempts. Together, as a community, we can all help ensure that typosquatting attempts are ineffective.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/stop-typosquatting-attacks
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds)
[2]: https://www.cloudberrylab.com/resources/guides/types-of-phishing/
[3]: https://en.wikipedia.org/wiki/Typosquatting
[4]: https://www.menlosecurity.com/blog/-a-new-approach-to-end-typosquatting
[5]: https://github.com/elceef/dnstwist
[6]: https://opensource.com/article/17/4/build-your-own-name-server

View File

@ -1,158 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 handy command-line internet speed tests)
[#]: via: (https://opensource.com/article/20/1/internet-speed-tests)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
3 handy command-line internet speed tests
======
Check your internet and network speeds with these three open source
tools.
![Old train][1]
Being able to validate your network connection speed puts you in control of your computer. Three open source tools that enable you to check your internet and network speeds at the command line are Speedtest, Fast, and iPerf.
### Speedtest
[Speedtest][2] is an old favorite. It's implemented in Python, packaged in Apt, and also available with pip. You can use it as a command-line tool or within a Python script.
Install it with:
```
`sudo apt install speedtest-cli`
```
or
```
`sudo pip3 install speedtest-cli`
```
Then run it with the command **speedtest**:
```
$ speedtest
Retrieving speedtest.net configuration...
Testing from CenturyLink (65.128.194.58)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by CenturyLink (Cambridge, UK) [20.49 km]: 31.566 ms
Testing download speed................................................................................
Download: 68.62 Mbit/s
Testing upload speed......................................................................................................
Upload: 10.93 Mbit/s
```
This gives you your download and upload Internet speeds. It's fast and scriptable, so you can run it regularly and save the output to a file or database for a record of your network speed over time.
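For example, because the tool is also importable as a Python module, a short script can append each run to a CSV file. This sketch assumes the module-level API documented by the speedtest-cli project (a `Speedtest` class with `get_best_server()`, `download()`, `upload()`, and a `results` object); check it against your installed version:
```
import csv
import time

import speedtest  # provided by the speedtest-cli package

# One measurement: pick the best server, then test both directions.
s = speedtest.Speedtest()
s.get_best_server()
s.download()
s.upload()
r = s.results.dict()  # "download" and "upload" are in bits per second

# Append a timestamped row, e.g. from a cron job that builds a history.
with open("speed-history.csv", "a", newline="") as f:
    csv.writer(f).writerow([
        time.strftime("%Y-%m-%d %H:%M"),
        round(r["download"] / 1e6, 2),  # Mbit/s
        round(r["upload"] / 1e6, 2),
    ])
```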
### Fast
[Fast][3] is a service provided by Netflix. Its web interface is located at [Fast.com][4], and it has a command-line interface available through npm:
```
`npm install --global fast-cli`
```
Both the website and command-line utility provide the same basic interface: it's a simple-as-possible speed test:
```
$ fast
     82 Mbps ↓
```
The command returns your Internet download speed. To get your upload speed, use the **-u** flag:
```
$ fast -u
   ⠧ 80 Mbps ↓ / 8.2 Mbps ↑
```
### iPerf
[iPerf][5] is a great way to test your LAN speed (rather than your Internet speed, as the two previous tools do). Debian, Raspbian, and Ubuntu users can install it with **apt**:
```
`sudo apt install iperf`
```
It's also available for Mac and Windows.
Once it's installed, you need two machines on the same network to use it (both must have iPerf installed). Designate one as the server.
Obtain the IP address of the server machine:
```
`ip addr show | grep inet.*brd`
```
Your local IP address (assuming an IPv4 local network) starts with either **192.168** or **10**. Take note of the IP address so you can use it on the other machine (the one designated as the client).
Start **iperf** on the server:
```
`iperf -s`
```
This waits for incoming connections from clients. Designate another machine as a client and run this command, substituting the IP address of your server machine for the sample one here:
```
`iperf -c 192.168.1.2`
```
![iPerf][6]
It only takes a few seconds to do a test, and it returns the transfer size and calculated bandwidth. I ran a few tests from my PC and my laptop, using my home server as the server machine. I recently put in Cat6 Ethernet around my house, so I get up to 1Gbps speeds from my wired connections but much lower speeds on WiFi connections.
![iPerf][7]
You may notice where it recorded 16Gbps. That was me using the server to test itself, so it's just testing how fast it can write to its own disk. The server has hard disk drives, which are only 16Gbps, but my desktop PC gets 46Gbps, and my (newer) laptop gets over 60Gbps, as they have solid-state drives.
![iPerf][8]
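If you're curious what a tool like iPerf is doing under the hood, the core measurement (open a TCP connection, push bytes for a few seconds, and divide by elapsed time) can be sketched in Python. The port and duration here are arbitrary choices, and real iPerf does far more (UDP, parallel streams, detailed reporting):
```
import socket
import time

PORT = 5001  # arbitrary choice for this sketch

def receiver():
    """Run on the 'server' machine: accept one connection and drain it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    while conn.recv(65536):
        pass

def sender(host, seconds=3.0):
    """Run on the 'client' machine: push bytes and report the rate."""
    chunk = b"x" * 65536
    sent = 0
    s = socket.create_connection((host, PORT))
    start = time.monotonic()
    while time.monotonic() - start < seconds:
        s.sendall(chunk)
        sent += len(chunk)
    s.close()
    elapsed = time.monotonic() - start
    print("~%.0f Mbit/s" % (sent * 8 / elapsed / 1e6))
```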
### Wrapping up
Knowing the speed of your network is a rather straightforward task with these tools. If you prefer to script or run these from the command line for the fun of it, any of the above projects will get you there. If you're after specific point-to-point metrics, iPerf is your go-to.
What other tools do you use to measure the network at home? Share in the comments.
* * *
_This article was originally published on Ben Nuttall's [Tooling blog][9] and is used here with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/internet-speed-tests
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/train-plane-speed-big-machine.png?itok=f377dXKs (Old train)
[2]: https://github.com/sivel/speedtest-cli
[3]: https://github.com/sindresorhus/fast-cli
[4]: https://fast.com/
[5]: https://iperf.fr/
[6]: https://opensource.com/sites/default/files/uploads/iperf.png (iPerf)
[7]: https://opensource.com/sites/default/files/uploads/iperf2.png (iPerf)
[8]: https://opensource.com/sites/default/files/uploads/iperf3.png (iPerf)
[9]: https://tooling.bennuttall.com/command-line-speedtest-tools/

View File

@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Run multiple consoles at once with this open source window environment)
[#]: via: (https://opensource.com/article/20/1/multiple-consoles-twin)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
Run multiple consoles at once with this open source window environment
======
Simulate the old-school DESQview experience with twin in the fourteenth
in our series on 20 ways to be more productive with open source in 2020.
![Digital creative of a browser on the internet][1]
Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.
### Overcome "one screen, one app" limits with twin
Who remembers [DESQview][2]? It allowed for things in DOS we take for granted now in Windows, Linux, and MacOS—namely the ability to run and have multiple programs running onscreen at once. In my early days running a dial-up BBS, DESQview was a necessity—it enabled me to have the BBS running in the background while doing other things in the foreground. For example, I could be working on new features or setting up new external programs while someone was dialed in without impacting their experience. Later, in my early days in support, I could have my work email ([DaVinci email on MHS][3]), the support ticket system, and other DOS programs running all at once. It was amazing!
![twin][4]
Running multiple console applications has come a long way since then. But applications like [tmux][5] and [Screen][6] still follow the "one screen, one app" kind of display. OK, yes, tmux has screen splitting and panes, but not like DESQview, with the ability to "float" windows over others, and I, for one, miss that.
Enter [twin][7], the text-mode window environment. This relatively young project is, in my opinion, a spiritual successor to DESQview. It supports console and graphical environments, as well as the ability to detach from and reattach to sessions. It's not as easy to set up as some things, but it will run on most modern operating systems.
Twin is installed from source (for now). But first, you need to install the required development libraries. The library names will vary by operating system. The following example shows it for my Ubuntu 19.10 installation. Once the libraries are installed, check out the twin source from Git and run **./configure** and **make**, which should auto-detect everything and build twin:
```
sudo apt install libx11-dev libxpm-dev libncurses-dev zlib1g-dev libgpm-dev
git clone git@github.com:cosmos72/twin.git
cd twin
./configure
make
sudo make install
```
Note: If you are compiling this on MacOS or BSD, you will need to comment out **#define socklen_t int** in the files **include/Tw/autoconf.h** and **include/twautoconf.h** before running **make**. This should be addressed by [twin issue number 57][9].
![twin text mode][10]
Invoking twin for the first time can be a bit of a challenge. You need to tell it what kind of display it is using with the **\--hw** parameter. For example, to launch a text-mode version of twin, you would enter **twin --hw=tty,TERM=linux**. The **TERM** variable specifies an override to the current terminal variable in your shell. To launch a graphical version, run **twin --hw=X@$DISPLAY**. On Linux, twin mostly "just works," and on MacOS, it mostly only works in terminals.
The _real_ fun comes with the ability to attach to running sessions with the **twattach** and **twdisplay** commands. They allow you to attach to a running twin session somewhere else. For example, on my Mac, I can run the following command to connect to the twin session running on my demo box:
```
`twdisplay --twin@20days2020.local:0 --hw=tty,TERM=linux`
```
![remote twin session][11]
With some extra work, you can also use it as a login shell in place of [getty][12] on consoles. This requires the gpm mouse daemon, the twdm application (included), and a little extra configuration. On systems that use systemd, start by installing and enabling gpm (if it isn't already installed). Then use systemctl to create an override for a console (I used tty6). The commands must be run as the root user; on Ubuntu, they look something like this:
```
apt install gpm
systemctl enable gpm
systemctl start gpm
systemctl edit getty@tty6
```
The **systemctl edit getty@tty6** command will open an empty file named **override.conf**. This defines systemd service settings to override the default for console 6. Update the contents to:
```
[Service]
ExecStart=
ExecStart=-/usr/local/sbin/twdm --hw=tty@/dev/tty6,TERM=linux
StandardInput=tty
StandardOutput=tty
```
Now, reload systemd and restart tty6 to get a twin login prompt:
```
systemctl daemon-reload
systemctl restart getty@tty6
```
![twin][13]
This will launch a twin session for the user who logs in. I do not recommend this for a multi-user system, but it is pretty cool for a personal desktop. And, by using **twattach** and **twdisplay**, you can access that session from the local GUI or remote desktops.
I think twin is pretty darn cool. It has some rough edges, but the basic functionality is there, and it has some pretty good documentation. Also, it scratches the itch I have for a DESQview-like experience on modern operating systems. I look forward to improvements over time, and I hope you like it as much as I do.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/multiple-consoles-twin
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://en.wikipedia.org/wiki/DESQview
[3]: https://en.wikipedia.org/wiki/Message_Handling_System
[4]: https://opensource.com/sites/default/files/uploads/productivity_14-1.png (twin)
[5]: https://github.com/tmux/tmux/wiki
[6]: https://www.gnu.org/software/screen/
[7]: https://github.com/cosmos72/twin
[8]: mailto:git@github.com
[9]: https://github.com/cosmos72/twin/issues/57
[10]: https://opensource.com/sites/default/files/uploads/productivity_14-2.png (twin text mode)
[11]: https://opensource.com/sites/default/files/uploads/productivity_14-3.png (remote twin session)
[12]: https://en.wikipedia.org/wiki/Getty_(Unix)
[13]: https://opensource.com/sites/default/files/uploads/productivity_14-4.png (twin)

View File

@ -1,112 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use Vim to manage your task list and access Reddit and Twitter)
[#]: via: (https://opensource.com/article/20/1/vim-task-list-reddit-twitter)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
Use Vim to manage your task list and access Reddit and Twitter
======
Handle your to-do list and get social from the text editor in the
seventeenth in our series on 20 ways to be more productive with open
source in 2020.
![Chat via email][1]
Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.
### Doing (almost) all the things with Vim, part 2
In [yesterday's article][2], you started reading mail and checking your calendars with Vim. Today, you're going to do even more. First, you'll take care of your task tracking, and then you'll get social, directly in the Vim text editor.
#### Track your to-do's in Vim with todo.txt-vim
![to-dos and Twitter with Vim][3]
Editing a text-based to-do file with Vim is a natural fit, and the [todo.txt-vim][4] package makes it even easier. Start by installing the todo.txt-vim package:
```
git clone https://github.com/freitass/todo.txt-vim ~/.vim/bundle/todo.txt-vim
vim ~/path/to/your/todo.txt
```
Todo.txt-vim automatically recognizes files ending in todo.txt and done.txt as [todo.txt][5] files. It adds key bindings specific to the todo.txt format. You can mark things "done" with **\x**, set them to the current date with **\d**, and change the priority with **\a**, **\b**, and **\c**. You can bump the priorities up (**\k**) or down (**\j**) and sort (**\s**) based on project (**\s+**), context (**\s@**), or date (**\sd**). And when you are finished, you can close and save the file like normal.
The todo.txt-vim package is a great addition to the [todo.sh program][6] I wrote about a few days ago, and with the [todo edit][7] add-on, it can really supercharge your to-do list tracking.
#### Read Reddit in Vim with vim-reddit
![Reddit in Vim][8]
Vim also has a nice add-on for [Reddit][9] called [vim-reddit][10]. It isn't as nice as [Tuir][11], but for a quick review of the latest posts, it works really well. Start by installing the bundle:
```
git clone https://github.com/DougBeney/vim-reddit.git ~/.vim/bundle/vim-reddit
vim
```
Now type **:Reddit** and the Reddit frontpage will load. You can load a specific subreddit with **:Reddit name**. Once the article list is onscreen, navigate with the arrow keys or scroll with the mouse. Pressing **o** will open the article in Vim (unless it is a media post, in which case it opens a browser), and pressing **c** brings up the comments. If you want to go right to the page, press **O** instead of **o**. Going back a screen is as easy as **u**. And when you are done with Reddit, type **:bd**. The only drawback is vim-reddit cannot log in or post new stories or comments. Then again, sometimes that is a good thing.
#### Tweet from Vim with twitvim
![Twitter in Vim][12]
And last, we have [twitvim][13], a Vim package for reading and posting to Twitter. This one takes a bit more to set up. Start by installing twitvim from GitHub:
```
`git clone https://github.com/twitvim/twitvim.git ~/.vim/bundle/twitvim`
```
Now you need to edit the **.vimrc** file and set some options. These help the plugin know which libraries it can use to talk to Twitter. Run **vim --version** and see what languages have a **+** next to them—those languages are supported by your copy of Vim.
![Enabled and Disabled things in vim][14]
Since mine says **+perl -python +python3**, I know I can enable Perl and Python 3, but not Python 2 (python).
```
" TwitVim Settings
let twitvim_enable_perl = 1
" let twitvim_enable_python = 1
let twitvim_enable_python3 = 1
```
Now you can start up Vim and log into Twitter by running **:SetLoginTwitter**, which launches a browser window asking you to authorize VimTwit as an application with access to your account. Once you enter the supplied PIN into Vim, you're good to go.
Twitvim's commands are not as simple as in the other packages. To load up the timeline of your friends and followers, type in **:FriendsTwitter**. To list your mentions and replies, use **:MentionsTwitter**. Posting a new tweet is **:PosttoTwitter <Your message>**. You can scroll through the list and reply to a specific tweet by typing **\r**, and you can start a direct message with someone using **\d**.
And there you have it; you're doing (almost) all the things in Vim!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/vim-task-list-reddit-twitter
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_chat_communication_message.png?itok=LKjiLnQu (Chat via email)
[2]: https://opensource.com/article/20/1/send-email-and-check-your-calendar-vim
[3]: https://opensource.com/sites/default/files/uploads/productivity_17-1.png (to-dos and Twitter with Vim)
[4]: https://github.com/freitass/todo.txt-vim
[5]: http://todotxt.org
[6]: https://opensource.com/article/20/1/open-source-to-do-list
[7]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory#edit-open-in-text-editor
[8]: https://opensource.com/sites/default/files/uploads/productivity_17-2.png (Reddit in Vim)
[9]: https://reddit.com
[10]: https://github.com/DougBeney/vim-reddit
[11]: https://opensource.com/article/20/1/open-source-reddit-client
[12]: https://opensource.com/sites/default/files/uploads/productivity_17-3.png (Twitter in Vim)
[13]: https://github.com/twitvim/twitvim
[14]: https://opensource.com/sites/default/files/uploads/productivity_17-4.png (Enabled and Disabled things in vim)

View File

@ -1,125 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Joplin: The True Open Source Evernote Alternative)
[#]: via: (https://itsfoss.com/joplin/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Joplin: The True Open Source Evernote Alternative
======
_**Brief: Joplin is an open source note taking and to-do application. You can organize notes into notebooks and tag them. Joplin also provides a web-clipper to save articles from the internet.**_
### Joplin: Open source note organizer
![][1]
If you like [Evernote][2], you won't be too uncomfortable with the open source software, [Joplin][3].
Joplin is an excellent open source note taking application with plenty of features. You can take notes, make to-do lists, and sync your notes across devices by linking it with cloud services like Dropbox and NextCloud. The synchronization is protected with end-to-end encryption.
Joplin also has a web clipper that allows you to save webpages as notes. The web clipper is available for Firefox and Chrome/Chromium browsers.
Joplin makes the switch from Evernote easier by allowing importing Evernote files in Enex format.
Since you own the data, you can export all your files either in Joplin format or in the raw format.
### Features of Joplin
![][4]
Here's a list of all the features Joplin provides:
* Save notes into notebooks and sub-notebooks for better organization
* Create to-do list
* Notes can be tagged and searched
* Offline first, so the entire data is always available on the device even without an internet connection
* Markdown notes with pictures, math notation and checkboxes support
* File attachment support
* Application available for desktop, mobile and terminal (CLI)
* [Web Clipper][5] for Firefox and Chrome
* End To End Encryption
* Keeps note history
  * Notes sorting based on name, time, etc.
  * Synchronisation with various [cloud services][6] like [Nextcloud][7], Dropbox, WebDAV and OneDrive
  * Import files from Evernote
  * Export JEX files (Joplin Export format) and raw files
  * Support for notes, to-dos, tags and notebooks
  * Goto Anything feature
  * Support for notifications in mobile and desktop applications
  * Geo-location support
* Supports multiple languages
* External editor support open notes in your favorite external editor with one click in Joplin.
### Installing Joplin on Linux and other platforms
![][10]
[Joplin][11] is a cross-platform application available for Linux, macOS and Windows. On mobile, you can [get the APK file][12] to install it on Android and Android-based ROMs. You can also [get it from the Google Play store][13].
For Linux, you can use the [AppImage][14] file for Joplin and run the application as an executable. You'll have to give execute permission to the downloaded file.
[Download Joplin][15]
### Experiencing Joplin
Notes in Joplin use markdown, but you don't have to know markdown notation to use it. The editor has a top panel that lets you graphically choose bullet points, headings, images, links, etc.
Though Joplin provides many interesting features, you have to fiddle around on your own to check things out. For example, the web clipper is not enabled by default and I had to figure out how to do it.
You have to enable the clipper from the desktop application. From the top menu, go to Tools -> Options. You'll find the Web Clipper option here:
![Enable Web Clipper from the desktop application first][16]
The web clipper is not as smart as Evernote's web clipper, which allows you to clip a portion of a web article graphically. However, you still have good enough options here.
It is open source software under active development, and I hope it keeps improving over time.
**Conclusion**
If you are looking for a good note taking application with a web-clipper feature, do give Joplin a try. And if you like it and keep using it, try to help Joplin's development by making a donation or improving its code and documentation. I made a sweet little [donation][17] of 25 Euro on behalf of It's FOSS.
If you have used Joplin in the past or are still using it, how's your experience with it? If you use some other note taking application, would you switch to Joplin? Feel free to share your views.
--------------------------------------------------------------------------------
via: https://itsfoss.com/joplin/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_logo.png?ssl=1
[2]: https://evernote.com/
[3]: https://joplinapp.org/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_featured.jpg?ssl=1
[5]: https://joplinapp.org/clipper/
[6]: https://itsfoss.com/cloud-services-linux/
[7]: https://nextcloud.com/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/02/encryptpad-text-editor-with-encryption.jpg?fit=800%2C450&ssl=1
[9]: https://itsfoss.com/encryptpad-encrypted-text-editor-linux/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_ubuntu.jpg?ssl=1
[11]: https://github.com/laurent22/joplin
[12]: https://itsfoss.com/download-apk-ubuntu/
[13]: https://play.google.com/store/apps/details?id=net.cozic.joplin&hl=en_US
[14]: https://itsfoss.com/use-appimage-linux/
[15]: https://github.com/laurent22/joplin/releases
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_web_clipper.jpg?ssl=1
[17]: https://itsfoss.com/donations-foss/

View File

@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (New zine: Become a SELECT Star!)
[#]: via: (https://jvns.ca/blog/2020/02/03/new-zine--become-a-select-star/)
[#]: author: (Julia Evans https://jvns.ca/)
New zine: Become a SELECT Star!
======
On Friday I published a zine about SQL called “Become a SELECT Star!”
You can get it for $12 at <https://wizardzines.com/zines/sql>. If you buy it, you'll get a PDF that you can either read on your computer or print out. You can also get a pack of [all 7 zines][1] so far.
Here's the cover and table of contents:
[![][2]][3] ![](https://jvns.ca/images/sql-toc.png)
### why SQL?
I got excited about writing a zine about SQL because at my old job I wrote a ton of SQL queries (mostly related to machine learning) and by doing that I learned there are a lot of weird things about SQL! For example, [SQL queries don't actually start with SELECT][4]. And the way [NULL behaves isn't really intuitive at first][5].
It's been really fun to go back and try to explain the basics of SQL from the beginning. (what's the difference between WHERE and HAVING? what's the basic idea with indexes, actually? how do you write a join?)
I think SQL is a really nice thing to know because there are SO MANY SQL databases out there, and some of them are super powerful! (like BigQuery and Redshift). So if you know SQL and have access to one of these big data warehouses you can write queries that crunch like 10 billion rows of data really quickly.
### lots of examples
I ended up spending a lot of time on the examples in this zine, more than in any previous zine. My friend [Anton][6] helped me come up with a fun way to illustrate them, where you can clearly see the query, the table it's running on, and what the query outputs. Like this:
![][7]
### experiment: include a SQL playground
All the examples in the zine are real queries that you can run. So I thought: why not provide a simple environment where people can actually run those queries (and variations on those queries) to try things out?
So I built a small [playground where you can run queries on the example tables in the zine][8]. It uses SQLite compiled to web assembly, so all the queries run in your browser. It wasn't too complicated to build; I just used my minimal JavaScript/CSS skills and vue.js.
I'd love to hear any feedback about whether this is helpful or not. The example tables in the zine are really small (you can only print out small SQL tables!), so the biggest table in the example set has 9 rows or something.
### whats next: probably containers
I think that next up is going to be a zine on containers, which is more of a normal systems-y topic for me. (for example: [namespaces][9], [cgroups][10], [why containers?][11])
Here's a link to where to [get the zine][3] again :)
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2020/02/03/new-zine--become-a-select-star/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://wizardzines.com/zines/all-the-zines/
[2]: https://jvns.ca/images/sql-cover.png
[3]: https://wizardzines.com/zines/sql
[4]: https://jvns.ca/blog/2019/10/03/sql-queries-don-t-start-with-select/
[5]: https://twitter.com/b0rk/status/1195184321818243083
[6]: http://www.cat-bus.com/
[7]: https://jvns.ca/images/sql-diagram.png
[8]: https://sql-playground.wizardzines.com
[9]: https://twitter.com/b0rk/status/1195725346970181632
[10]: https://twitter.com/b0rk/status/1214341831049252870
[11]: https://twitter.com/b0rk/status/1224500774450929664

View File

@ -0,0 +1,170 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevOps vs Agile: What's the difference?)
[#]: via: (https://opensource.com/article/20/2/devops-vs-agile)
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)
DevOps vs Agile: What's the difference?
======
The difference between the two is what happens after development.
![Pair programming][1]
Early on, software development didn't really fit under a particular management umbrella. Then along came [waterfall][2], which spoke to the idea that software development could be defined by the length of time an application took to create or build.
Back then, it often took long periods of time to create, test, and deploy software because there were no checks and balances during the development process. The results were poor software quality with defects and bugs and unmet timelines. The focus was on long, drawn-out plans for software projects.
Waterfall projects have been associated with the [triple constraint][3] model, which is also called the project management triangle. Each side of the triangle represents a component of the triple constraints of project management: **scope**, **time**, and **cost**. As [Angelo Baretta writes][4], the triple constraint model "says that cost is a function of time and scope, that these three factors are related in a defined and predictable way… [I]f we want to shorten the schedule (time), we must increase cost. It says that if we want to increase scope, we must increase cost or schedule."
### Transitioning from waterfall to agile
Waterfall came from manufacturing and engineering, where a linear process makes sense; you build the wall before you build the roof. Similarly, software development problems were viewed as something that could be solved with planning. From beginning to end, the development process was clearly defined by a roadmap that would lead to the final delivery of a product.
Eventually, waterfall was recognized as detrimental and counterintuitive to software development because, often, the value could not be determined until the very end of the project cycle, and in many cases, the projects failed. Also, the customer didn't get to see any working software until the end of the project.
Agile takes a different approach that moves away from planning the entire project, committing to estimated dates, and being accountable to a plan. Rather, agile assumes and embraces uncertainty. It is built around the idea of responding to change instead of charging past it or ignoring the need for it; change is treated as a way to fulfill the needs of the customer.
### Agile values
Agile is governed by the Agile Manifesto, which defines [12 principles][5]:
1. Satisfying the customer is the top priority
2. Welcome changing requirements, even late in development
3. Deliver working software frequently
4. Development and business must work together
5. Build projects around motivated people
6. Face-to-face communication is the most efficient and effective method of conveying information
7. The primary measure of success is working software
8. Agile processes promote sustainable development
9. Maintain continuous attention to technical excellence and good design
10. Simplicity is essential
11. The best architectures, requirements, and designs emerge from self-organizing teams
12. Regularly reflect on work, then tune and adjust behavior
Agile's four [core values][6] are:
* **Individuals and interactions** over processes and tools
* **Working software** over comprehensive documentation
* **Customer collaboration** over contract negotiation
* **Responding to change** over following a plan
This contrasts with waterfall's rigid planning style. In agile, the customer is a member of the development team rather than engaging only at the beginning, when setting business requirements, and at the end, when reviewing the final product (as in waterfall). The customer helps the team write the [acceptance criteria][7] and remains engaged throughout the process. In addition, agile requires changes and continuous improvement throughout the organization. The development team works with other teams, including the project management office and the testers. What gets done and when are led by a designated role and agreed to by the team as a whole.
### Agile software development
Agile software development requires adaptive planning, evolutionary development, and delivery. Many software development methodologies, frameworks, and practices fall under the umbrella of being agile, including:
* Scrum
* Kanban (visual workflow)
* XP (eXtreme Programming)
* Lean
* DevOps
* Feature-driven development (FDD)
* Test-driven development (TDD)
* Crystal
* Dynamic systems development method (DSDM)
* Adaptive software development (ASD)
All of these have been used on their own or in combination for developing and deploying software. The most common are [scrum, kanban][8] (or the combination called scrumban), and DevOps.
[Scrum][9] is a framework under which a team, generally consisting of a scrum master, product owner, and developers, operates cross-functionally and in a self-directed manner to increase the speed of software delivery and to bring greater business value to the customer. The focus is on faster iterations with smaller [increments][10].
[Kanban][11] is an agile framework, sometimes called a workflow management system, that helps teams visualize their work and maximize efficiency (thus being agile). Kanban is usually represented by a digital or physical board. A team's work moves across the board, for example, from not started, to in progress, testing, and finished, as it progresses. Kanban allows each team member to see the state of all work at any time.
### DevOps values
DevOps is a culture, a state of mind, a way of approaching software development and infrastructure, and a way of building and deploying software and applications. There is no wall between development and operations; they work simultaneously and without silos.
DevOps is based on two other practice areas: lean and agile. DevOps is not a title or role within a company; it's really a commitment that an organization or team makes to continuous delivery, deployment, and integration. According to [Gene Kim][12], author of _The Phoenix Project_ and _The Unicorn Project_, there are three "ways" that define the principles of DevOps:
* The First Way: Principles of flow
* The Second Way: Principles of feedback
* The Third Way: Principles of continuous learning
### DevOps software development
DevOps does not happen in a vacuum; it is a flexible practice that, in its truest form, is a shared culture and mindset around software development and IT or infrastructure implementation.
When you think of automation, cloud, and microservices, you think of DevOps. In an [interview][13], _Accelerate: Building and Scaling High Performing Technology Organizations_ authors Nicole Forsgren, Jez Humble, and Gene Kim explained:
> * Software delivery performance matters, and it has a significant impact on organizational outcomes such as profitability, market share, quality, customer satisfaction, and achieving organizational and mission goals.
> * High performers achieve high levels of throughput, stability, and quality; they're not trading off to achieve these attributes.
> * You can improve your performance by implementing practices from the lean, agile, and DevOps playbooks.
> * Implementing these practices and capabilities also has an impact on your organizational culture, which in turn has an impact on both your software delivery performance and organizational performance.
> * There's still lots of work to do to understand how to improve performance.
>
### DevOps vs. agile
Despite their similarities, DevOps and agile are not the same, and some argue that DevOps is better than agile. To eliminate the confusion, it's important to get down to the nuts and bolts.
#### Similarities
* Both are software development methodologies; there is no disputing this.
* Agile has been around for over 20 years, and DevOps came into the picture fairly recently.
* Both believe in fast software development, and their principles are based on how fast software can be developed without causing harm to the customer or operations.
#### Differences
* **The difference between the two** is what happens after development.
* Software development, testing, and deployment happen in both DevOps and agile. However, pure agile tends to stop after these three stages. In contrast, DevOps includes operations, which happen continually. Therefore, monitoring and software development are also continuous.
* In agile, separate people are responsible for developing, testing, and deploying the software. In DevOps, the DevOps engineering role is responsible for everything; development is operations, and operations is development.
* DevOps is more associated with cost-cutting, while agile is more synonymous with lean thinking and reducing waste; concepts like agile project accounting and the minimum viable product (MVP) are relevant here.
* Agile focuses on and embodies empiricism (**adaptation**, **transparency**, and **inspection**) instead of predictive measures.
Agile | DevOps
---|---
Feedback from customer | Feedback from self
Smaller release cycles | Smaller release cycles, immediate feedback
Focus on speed | Focus on speed and automation
Not the best for business | Best for business
### Wrapping up
Agile and DevOps are distinct, although their similarities lead people to think they are one and the same. This does both agile and DevOps a disservice.
In my experience as an agilist, I have found it valuable for organizations and teams to understand—from a high level—what agile and DevOps are and how they aid teams in working faster and more efficiently, delivering quality faster, and improving customer satisfaction.
Agile and DevOps are not adversarial in any way (or at least the intent is not there). They are more allies than enemies in the agile revolution. Agile and DevOps can operate exclusively and inclusively, which allows both to exist in the same space.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/devops-vs-agile
作者:[Taz Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 (Pair programming)
[2]: http://www.agilenutshell.com/agile_vs_waterfall
[3]: https://en.wikipedia.org/wiki/Project_management_triangle
[4]: https://www.pmi.org/learning/library/triple-constraint-erroneous-useless-value-8024
[5]: https://agilemanifesto.org/principles.html
[6]: https://agilemanifesto.org/
[7]: https://www.productplan.com/glossary/acceptance-criteria/
[8]: https://opensource.com/article/19/8/scrum-vs-kanban
[9]: https://www.scrum.org/
[10]: https://www.scrum.org/resources/what-is-an-increment
[11]: https://www.atlassian.com/agile/kanban
[12]: https://itrevolution.com/the-unicorn-project/
[13]: https://www.infoq.com/articles/book-review-accelerate/

View File

@ -0,0 +1,157 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing your attached hardware on Linux with systemd-udevd)
[#]: via: (https://opensource.com/article/20/2/linux-systemd-udevd)
[#]: author: (David Clinton https://opensource.com/users/dbclinton)
Managing your attached hardware on Linux with systemd-udevd
======
Manipulate how your Linux system handles physical devices with udev.
![collection of hardware on blue backround][1]
Linux does a great job automatically recognizing, loading, and exposing attached hardware devices from countless vendors. In fact, it was this feature that, many years ago, convinced me to insist that my employer convert its entire infrastructure to Linux. The pain point was the way a certain company in Redmond couldn't load drivers for the integrated network card on our Compaq desktops while Linux did it effortlessly.
In the years since then, Linux's library of recognized devices has grown enormously along with the sophistication of the process. And the star of that show is [udev][2]. Udev's job is to listen for events from the Linux kernel involving changes to the state of a device. It could be a new USB device that's plugged in or pulled out, or it might be a wireless mouse going offline as it's drowned in spilled coffee.
Udev's job is to handle all changes of state by, for instance, assigning the names or permissions through which devices are accessed. A record of those changes can be accessed through [dmesg][3]. Since dmesg typically spits out thousands of entries, it's smart to filter the results. The example below shows how Linux identifies my WiFi interface. It shows the chipset my wireless device uses (**ath9k**), the original name it was assigned early in the process (**wlan0**), and the big, ugly permanent name it's currently using (**wlxec086b1ef0b3**):
```
$ dmesg | grep wlan
[    5.396874] ath9k_htc 1-3:1.0 wlxec086b1ef0b3: renamed from wlan0
```
In this article, I'll discuss why anyone might want to use a name like that. Along the way, I'll explore the anatomy of udev configuration files and then show how to make changes to udev settings, including how to edit the way the system names devices. This article is based on a module from my new course, [Linux System Optimization][4].
### Understanding the udev configuration system
On systemd machines, udev operations are managed by the **systemd-udevd** daemon. You can check the status of the udev daemon the regular systemd way using **systemctl status systemd-udevd**.
Technically, udev works by trying to match each system event it receives against sets of rules found in either the **/lib/udev/rules.d/** or **/etc/udev/rules.d/** directories. Rules files include match keys and assignment keys. The set of available match keys includes **action**, **name**, and **subsystem**. This means that if a device with a specified name that's part of a specified subsystem is detected, then it will be assigned a preset configuration.
Then, the "assignment" key/value pairs are used to apply the desired configuration. You could, for instance, assign a new name to the device, associate it with a filesystem symlink, or restrict access to a particular owner or group. Here's an excerpt from such a rule from my workstation:
```
$ cat /lib/udev/rules.d/73-usb-net-by-mac.rules
# Use MAC based names for network interfaces which are directly or indirectly
# on USB and have an universally administered (stable) MAC address (second bit
# is 0). Don't do this when ifnames is disabled via kernel command line or
# customizing/disabling 99-default.link (or previously 80-net-setup-link.rules).
IMPORT{cmdline}="net.ifnames"
ENV{net.ifnames}=="0", GOTO="usb_net_by_mac_end"
ACTION=="add", SUBSYSTEM=="net", SUBSYSTEMS=="usb", NAME=="", \
    ATTR{address}=="?[014589cd]:*", \
    TEST!="/etc/udev/rules.d/80-net-setup-link.rules", \
    TEST!="/etc/systemd/network/99-default.link", \
    IMPORT{builtin}="net_id", NAME="$env{ID_NET_NAME_MAC}"
```
The **add** action tells udev to fire up whenever a new device is plugged in that is part of the networking subsystem _and_ is a USB device. In addition, if I understand it correctly, the rule will apply only when the device has a MAC address consisting of characters within a certain range and, in addition, only if the **80-net-setup-link.rules** and **99-default.link** files do _not_ exist.
Assuming all these conditions are met, the interface ID will be changed to match the device's MAC address. Remember the previous dmesg entry showing how my interface name was changed from **wlan0** to that nasty **wlxec086b1ef0b3** name? That was a result of this rule's execution. How do I know? Because **ec:08:6b:1e:f0:b3** is the device's MAC address (minus the colons):
```
$ ifconfig -a
wlxec086b1ef0b3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.103  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::7484:3120:c6a3:e3d1  prefixlen 64  scopeid 0x20<link>
        ether ec:08:6b:1e:f0:b3  txqueuelen 1000  (Ethernet)
        RX packets 682098  bytes 714517869 (714.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 472448  bytes 201773965 (201.7 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
```
This udev rule exists by default within Linux. I didn't have to write it myself. But why bother—especially seeing how difficult it is to work with such an interface designation? Take a second look at the comments included with the rule:
```
# Use MAC based names for network interfaces which are directly or indirectly
# on USB and have an universally administered (stable) MAC address (second bit
# is 0). Don't do this when ifnames is disabled via kernel command line or
# customizing/disabling 99-default.link (or previously 80-net-setup-link.rules).
```
Note how this rule is designed specifically for USB-based network interfaces. Unlike PCI network interface cards (NICs), USB devices are likely to be removed and replaced from time to time. This means that there's no guarantee that their ID won't change. They could be **wlan0** one day and **wlan3** the next. To avoid confusing the applications, assign devices absolute IDs—like the one given to my USB interface.
### Manipulating udev settings
For my next trick, I'm going to grab the MAC address and current ID for the Ethernet network interface on a [VirtualBox][5] virtual machine and then use that information to create a new udev rule that will change the interface ID. Why? Well, perhaps I'm planning to work with the device from the command line, and having to type that long name can be annoying. Here's how that will work.
Before I can change my ID, I'll need to disable [Netplan][6]'s current network configuration. That'll force Linux to pay attention to the new configuration. Here's my current network interface configuration file in the **/etc/netplan/** directory:
```
$ less /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        enp0s3:
            addresses: []
            dhcp4: true
    version: 2
```
The **50-cloud-init.yaml** file contains a very basic interface definition. But it also includes some important information about disabling the configuration in the comments. To do so, I'll move to the **/etc/cloud/cloud.cfg.d** directory and create a new file called **99-disable-network-config.cfg** and add the **network: {config: disabled}** string.
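If you'd rather do that from the shell, something like this should work (a sketch; the paths are the Ubuntu defaults mentioned above):

```
$ sudo mkdir -p /etc/cloud/cloud.cfg.d
$ echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
```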
While I haven't tested this method on distros other than Ubuntu, it should work on any flavor of Linux with systemd (which is nearly all of them). Whatever you're using, you'll get a good look at writing udev config files and testing them.
Next, I need to gather some system information. Running the **ip** command reports that my Ethernet interface is called **enp0s3** and its MAC address is **08:00:27:1d:28:10**:
```
$ ip a
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:1d:28:10 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.115/24 brd 192.168.0.255 scope global dynamic enp0s3
```
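As a sanity check before writing a rule, **udevadm info** can walk a device's attributes and show exactly what udev will try to match against. This is a sketch with the output trimmed to the relevant line; the sysfs path assumes the interface is still named **enp0s3**:

```
$ udevadm info -a -p /sys/class/net/enp0s3 | grep 'ATTR{address}'
    ATTR{address}=="08:00:27:1d:28:10"
```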
Now, I'll create a new file called **10-persistent-network.rules** in the **/etc/udev/rules.d** directory, deliberately starting its name with the low number 10:
```
$ cat /etc/udev/rules.d/10-persistent-network.rules
ACTION=="add", SUBSYSTEM=="net",ATTR{address}=="08:00:27:1d:28:10",NAME="eth3"
```
The lower the number, the earlier Linux will execute the file, and I want this one to go early. The file contains code that will give the name **eth3** to a network device when it's added—as long as its address matches **08:00:27:1d:28:10**, which is my interface's MAC address. 
Once I save the file and reboot the machine, my new interface name should be in play. I may need to log in directly to my virtual machine and use **dhclient** to manually get Linux to request an IP address on this newly named network. Opening SSH sessions might be impossible without doing that first:
```
$ sudo dhclient eth3
```
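As an aside, a full reboot isn't always required to test a rules change. udev can re-read its rules and replay the add events, although a busy interface may refuse to be renamed until it is brought down first (a sketch using the names from this example):

```
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger --subsystem-match=net --action=add
$ ip a show eth3
```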
Done. So you're now able to force udev to make your computer refer to a NIC the way you want. But more importantly, you've got the tools to figure out how to manage _any_ misbehaving device.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/linux-systemd-udevd
作者:[David Clinton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dbclinton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_BUS_Apple_520.png?itok=ZJu-hBV1 (collection of hardware on blue backround)
[2]: https://en.wikipedia.org/wiki/Udev
[3]: https://en.wikipedia.org/wiki/Dmesg
[4]: https://pluralsight.pxf.io/RqrJb
[5]: https://www.virtualbox.org/
[6]: https://netplan.io/

View File

@ -1,83 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10)
[#]: via: (https://itsfoss.com/ubuntu-19-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10
======
_**Brief: Ubuntu 19.04 reached the end of life on 23rd January 2020. This means that systems running Ubuntu 19.04 won't receive security and maintenance updates anymore, leaving them vulnerable.**_
![][1]
[Ubuntu 19.04][2] was released on 18th April, 2019. Since it was not a long term support (LTS) release, it was supported only for nine months.
Completing its release cycle, Ubuntu 19.04 reached end of life on 23rd January, 2020.
Ubuntu 19.04 brought a few visual and performance improvements and paved the way for a sleek and aesthetically pleasant Ubuntu look.
Like any other regular Ubuntu release, it had a life span of nine months. And that has ended now.
### End of life for Ubuntu 19.04? What does it mean?
End of life means the date after which an operating system release won't get updates.
You might already know that Ubuntu (or any other operating system for that matter) provides security and maintenance upgrades in order to keep your systems safe from cyber attacks.
Once a release reaches the end of life, the operating system stops receiving these important updates.
If you continue using a system after the end of life of your operating system release, your system will be vulnerable to cyber attacks and malware.
That's not it. In Ubuntu, the applications that you downloaded using APT from the Software Center won't be updated either. In fact, you won't be able to [install new software using the apt-get command][3] anymore (gradually, if not immediately).
### All Ubuntu 19.04 users must upgrade to Ubuntu 19.10
Starting 23rd January 2020, Ubuntu 19.04 will stop receiving updates. You must upgrade to Ubuntu 19.10, which will be supported till July 2020.
This is also applicable to other [official Ubuntu flavors][4] such as Lubuntu, Xubuntu, Kubuntu etc.
#### How to upgrade to Ubuntu 19.10?
Thankfully, Ubuntu provides easy ways to upgrade the existing system to a newer version.
In fact, Ubuntu also prompts you that a new Ubuntu version is available and that you should upgrade to it.
![Existing Ubuntu 19.04 users should see a message to upgrade to Ubuntu 19.10][5]
If you have a good internet connection, you can use the same [Software Updater tool that you use to update Ubuntu][6]. In the above image, you just need to click the Upgrade button and follow the instructions. I have written a detailed guide about [upgrading to Ubuntu 18.04][7] using this method.
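If you prefer the terminal, the same upgrade can usually be started with the **do-release-upgrade** tool. A sketch, assuming the update-manager-core package is present (it is installed by default on Ubuntu):

```
sudo apt update && sudo apt upgrade
sudo do-release-upgrade
```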
If you don't have a good internet connection, there is a workaround for you. Make a backup of your home directory or your important data on an external disk.
Then, make a live USB of Ubuntu 19.10. Download Ubuntu 19.10 ISO and use the Startup Disk Creator tool already installed on your Ubuntu system to create a live USB out of this ISO.
Boot from this live USB and go on installing Ubuntu 19.10. In the installation procedure, you should see an option to remove Ubuntu 19.04 and replace it with Ubuntu 19.10. Choose this option and proceed as if you are [installing Ubuntu][8] afresh.
#### Are you still using Ubuntu 19.04, 18.10, 17.10 or some other unsupported version?
You should note that at present only Ubuntu 16.04, 18.04 and 19.10 (or higher) versions are supported. If you are running an Ubuntu version other than these, you must upgrade to a newer version.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-19-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/End-of-Life-Ubuntu-19.04.png?ssl=1
[2]: https://itsfoss.com/ubuntu-19-04-release/
[3]: https://itsfoss.com/apt-get-linux-guide/
[4]: https://itsfoss.com/which-ubuntu-install/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/ubuntu_19_04_end_of_life.jpg?ssl=1
[6]: https://itsfoss.com/update-ubuntu/
[7]: https://itsfoss.com/upgrade-ubuntu-version/
[8]: https://itsfoss.com/install-ubuntu/

View File

@ -0,0 +1,107 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (OpenCensus to monitor your Kubernetes cluster)
[#]: via: (https://opensource.com/article/20/2/kubernetes-opencensus)
[#]: author: (Yuri Grinshteyn https://opensource.com/users/yuri-grinshteyn)
OpenCensus to monitor your Kubernetes cluster
======
Learn how to use OpenCensus, a set of open source libraries for observability instrumentation and metrics tracing.
![Ship captain sailing the Kubernetes seas][1]
In my last article in this series, I [introduced monitoring with Prometheus][2], the leading open source metric instrumentation, collection, and storage toolkit. While Prometheus has become the de facto standard for monitoring Kubernetes for many users, there may be reasons why you might choose another approach for metric telemetry.
One reason is that using Prometheus introduces another component in your cluster that needs to be maintained and updated and will require additional management to ensure data persistence over the long term. Another reason is that Prometheus collects an incredibly large set of metrics right out of the box, and this could become cost-prohibitive in situations where metric volume is an input into your overall observability costs.
This article will introduce you to [OpenCensus][3], a set of open source libraries for observability instrumentation. OpenCensus is the currently recommended library to use for instrumenting services to collect traces and metrics. The OpenTracing and OpenCensus projects have been merged into [OpenTelemetry][4], which will become the recommended library.
While OpenCensus enables both metrics and distributed tracing, this article focuses on metrics by:
* Describing the OpenCensus approach to instrumentation and its data model
* Walking through a tutorial to explain how to instrument an application, deploy a sample application, and review the metrics that you can create with OpenCensus
I will revisit tracing in a future article.
### OpenCensus basics
OpenCensus' implementation depends on three core components:
* The [instrumentation][5] to create metrics and record data points (varies by language)
* An [exporter][6] to send metric data to a storage backend (varies by language)
* The backend to store metrics and enable querying metric data (varies by database)
To use OpenCensus in your application to record custom metrics, you will need to understand these elements for your particular programming languages and infrastructure.
#### Instrumentation
To understand how to instrument your application, you need first to understand OpenCensus' primitives, which are **measurements**, **measures**, **views**, and **aggregations**.
* **Measurement:** A measurement is the most fundamental entity; it's the single data point collected that represents a value at a point in time. For example, for a latency metric measured in milliseconds (ms), a measurement of 100 could represent an event with 100ms latency.
* **Measure:** A measure represents a metric to be recorded. For example, you could use a "latency" measure to record HTTP response latency from your service. A measure is made up of a name, a description, and the units that the metric uses. For example, to measure latency, you might specify:
* Name: response_latency
* Description: latency of server response in ms
* Unit: ms
* **View:** A view is the combination of a measure, an aggregation, and optional tags. Views are the mechanism you'll use to connect to an exporter to send the captured values to a storage backend. A view includes:
* Name
* Description
* The measure that will produce measurements for this collection
* TagKeys, if you're using tags
* **Aggregations:** Each view is also required to specify an aggregation; that is, how the view will treat multiple measures. Aggregations can be one of the following:
* Count: The count of the number of measurement points in the view
* Distribution: Histogram distribution of the points in the view
* Sum: A sum of the values of the measurement points
* LastValue: Only the last recorded value in the measurement
You can also refer to OpenCensus' [source][7] for additional information about the primitives.
#### Exporters
Once you have written the instrumentation to create measures, capture measurements, and aggregate them into views, you need an exporter to send your recorded metric data to your chosen storage backend. Unlike Prometheus, where you expose a dedicated metric endpoint to be scraped, OpenCensus works on a push model—the exporter pushes your collected data to the specified backend. You need to [choose the exporter][6] based on:
* The language that your application and instrumentation are written in
* Available support for stats (metrics)
* The available backend options
Using an exporter requires instantiating it in your code, registering it, and then registering your view to have the exporter send the collected data to the backend.
### Leverage metrics in OpenCensus
Now that you understand the terminology for how OpenCensus works to collect and export metrics, the next thing to learn is how metrics in OpenCensus function. Unlike [Prometheus][8], where you have to define the metric kind upfront, OpenCensus simply requires you to _collect_ the measurements and then _aggregate_ them in the _view_ before sending them to the _exporter_. The measurements support integer and float values. From there, you can create histograms using the _distribution_ aggregation, add up the number of samples using the _count_ aggregation, or add up the collected values using the _sum_ aggregation.
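To make that flow concrete, here is a minimal Python sketch that records a latency measurement against a distribution view using the opencensus library's stats API. The metric name, bucket boundaries, and the 100ms sample are illustrative, and a real setup would also register an exporter before recording:

```
# a minimal sketch; assumes the opencensus package is installed
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module

# measure: the metric to record (name, description, unit)
m_latency = measure_module.MeasureFloat(
    "response_latency", "latency of server response in ms", "ms")

# view: measure + aggregation (a histogram distribution here)
latency_view = view_module.View(
    "response_latency_distribution",
    "distribution of response latencies",
    [],  # no tag keys in this sketch
    m_latency,
    aggregation_module.DistributionAggregation([25, 50, 100, 200, 400]))

stats = stats_module.stats
stats.view_manager.register_view(latency_view)
# a real setup would register an exporter here, e.g.:
# stats.view_manager.register_exporter(my_exporter)

# measurement: record a single 100ms data point
mmap = stats.stats_recorder.new_measurement_map()
mmap.measure_float_put(m_latency, 100.0)
mmap.record()
```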
Now you have a basic understanding of what OpenCensus is, how it works, and the kinds of data it can collect and store. Download your favorite tooling (or use [my tutorial here][9] and a [quickstart lab][10]) and take OpenCensus for a spin.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/kubernetes-opencensus
作者:[Yuri Grinshteyn][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/yuri-grinshteyn
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://opensource.com/article/19/11/introduction-monitoring-prometheus
[3]: https://opencensus.io/
[4]: https://opentelemetry.io/
[5]: https://github.com/census-instrumentation
[6]: https://opencensus.io/exporters/supported-exporters/
[7]: https://opencensus.io/stats/
[8]: https://prometheus.io/docs/concepts/metric_types/
[9]: https://github.com/yuriatgoogle/stack-doctor
[10]: https://google.qwiklabs.com/

View File

@ -0,0 +1,122 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (PaperWM, the Tiling Window Manager for GNOME)
[#]: via: (https://itsfoss.com/paperwm/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
PaperWM, the Tiling Window Manager for GNOME
======
Lately, tiling window managers have been gaining popularity even among the regular desktop Linux users. Unfortunately, it can be difficult and time-consuming for a user to install and set up a tiling window manager.
This is why projects like [Regolith][1] and PaperWM have come up to provide a tiling window experience with minimal effort.
We have already discussed [Regolith desktop][2] in detail. In this article, we'll check out PaperWM.
### What is PaperWM?
According to its GitHub repo, [PaperWM][3] is “an experimental [Gnome Shell extension][4] providing scrollable tiling of windows and per monitor workspaces. It's inspired by paper notebooks and tiling window managers.”
PaperWM puts all of your windows in a row, and you can switch between them very quickly. It's a little bit like having a long spool of paper in front of you that you can move back and forth.
This extension supports GNOME Shell 3.28 to 3.34. It also supports both X11 and Wayland. It is written in JavaScript.
![PaperWM Desktop][5]
### How to Install PaperWM?
To install the PaperWM extension, you will need to clone the GitHub repo. Use this command:
```
git clone 'https://github.com/paperwm/PaperWM.git' "${XDG_DATA_HOME:-$HOME/.local/share}/gnome-shell/extensions/paperwm@hedning:matrix.org"
```
Now all you have to do is run:
```
./install.sh
```
The installer will set up and enable PaperWM.
If you are an Ubuntu user, there are a couple of things that you will need to consider. There are currently three different versions of the Gnome desktop available with Ubuntu:
* ubuntu-desktop
* ubuntu-gnome-desktop
* vanilla-gnome-desktop
Ubuntu ships ubuntu-desktop by default and includes the _desktop-icons_ package, which causes issues with PaperWM. The PaperWM devs recommend that you turn off the desktop-icons extension [using GNOME Tweaks tool][6]. However, while this step does work in 19.10, users have reported that it does not work in 19.04.
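On GNOME 3.34 or later, you can also toggle the extension from the shell with the **gnome-extensions** tool. A sketch; the UUID shown below is the one Ubuntu's desktop-icons package uses, but verify it with the list command first:

```
gnome-extensions list | grep -i desktop
gnome-extensions disable desktop-icons@csoriano
```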
According to the PaperWM devs, using _ubuntu-gnome-desktop_ produces the best out of the box results. _vanilla-gnome-desktop_ has some keybindings that raise havoc with PaperWM.
### How to Use PaperWM?
Like most tiling window managers, PaperWM uses the keyboard to control and manage the windows. PaperWM also supports mouse and touchpad controls. For example, if you have Wayland installed, you can use a three-fingered swipe to navigate.
![PaperWM in action][8]
Here is a list of a few of the keybinding that preset in PaperWM:
* Super + , or Super + . to activate the next or previous window
* Super + Left or Super + Right to activate the window to the left or right
* Super + Up or Super + Down to activate the window above or below
* Super + Tab or Alt + Tab to cycle through the most recently used windows
* Super + C to center the active window horizontally
* Super + R to resize the window (cycles through useful widths)
* Super + Shift + R to resize the window (cycles through useful heights)
* Super + Shift + F to toggle fullscreen
* Super + Return or Super + N to create a new window from the active application
* Super + Backspace to close the active window
The Super key is the Windows key on your keyboard. You can find the full list of keybindings on the PaperWM [GitHub page][9].
### Final Thoughts on PaperWM
As I have stated previously, I don't use tiling window managers. However, this one has me thinking. I like the fact that you don't have to do a lot of configuring to get it working. Another big plus is that it is built on GNOME, which means that getting a tiling manager working on Ubuntu is fairly straightforward.
The only downside that I can see is that a system running a dedicated tiling window manager, like [Sway][10], would use fewer system resources and be faster overall.
What are your thoughts on the PaperWM GNOME extension? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][11].
--------------------------------------------------------------------------------
via: https://itsfoss.com/paperwm/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://regolith-linux.org/
[2]: https://itsfoss.com/regolith-linux-desktop/
[3]: https://github.com/paperwm/PaperWM
[4]: https://itsfoss.com/gnome-shell-extensions/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/paperwm-desktop.png?ssl=1
[6]: https://itsfoss.com/gnome-tweak-tool/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/regolith-linux.png?fit=800%2C450&ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/paperwm-desktop2.png?fit=800%2C450&ssl=1
[9]: https://github.com/paperwm/PaperWM#usage
[10]: https://itsfoss.com/sway-window-manager/
[11]: https://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,230 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 ways to use PostgreSQL commands)
[#]: via: (https://opensource.com/article/20/2/postgresql-commands)
[#]: author: (Greg Pittman https://opensource.com/users/greg-p)
3 ways to use PostgreSQL commands
======
Whether you need something simple, like a shopping list, or complex, like a color swatch generator, PostgreSQL commands make it easy.
![Team checklist and to dos][1]
In _[Getting started with PostgreSQL][2]_, I explained how to install, set up, and begin using the open source database software. But there's a lot more you can do with commands in [PostgreSQL][3].
For example, I use Postgres to keep track of my grocery shopping list. I do most of the grocery shopping in our home, and the bulk of it happens once a week. I go to several places to buy the things on my list because each store offers a particular selection or quality or maybe a better price. Initially, I made an HTML form page to manage my shopping list, but it couldn't save my entries. So, I had to wait to make my list all at once, and by then I usually forgot some items we need or I want.
Instead, with PostgreSQL, I can enter bits when I think of them as the week goes on and print out the whole thing right before I go shopping. Here's how you can do that, too.
### Create a simple shopping list
First, enter the database with the **psql** command, then create a table for your list with:
```
Create table groc (item varchar(20), comment varchar(10));
```
Type commands like the following to add items to your list:
```
insert into groc values ('milk', 'K');
insert into groc values ('bananas', 'KW');
```
There are two pieces of information (separated by a comma) inside the parentheses: the item you want to buy and letters indicating where you want to buy it and whether it's something you usually buy every week (W).
Since **psql** has a history, you can press the Up arrow and edit the data between the parentheses instead of having to type the whole line for each item.
After entering a handful of items, check what you've entered with:
```
Select * from groc order by comment;
      item      | comment
----------------+---------
 ground coffee  | H
 butter         | K
 chips          | K
 steak          | K
 milk           | K
 bananas        | KW
 raisin bran    | KW
 raclette       | L
 goat cheese    | L
 onion          | P
 oranges        | P
 potatoes       | P
 spinach        | PW
 broccoli       | PW
 asparagus      | PW
 cucumber       | PW
 sugarsnap peas | PW
 salmon         | S
(18 rows)
```
This command orders the results by the _comment_ column so that the items are grouped by where you buy them to make it easier to shop.
By using a W to indicate your weekly purchases, you can keep your weekly items on the list when you clear out the table to prepare for the next week's list. To do so, enter:
```
delete from groc where comment not like '%W';
```
Notice that in PostgreSQL, **%** is the wildcard character (instead of an asterisk). So, to save typing, you might type:
```
delete from groc where item like 'goat%';
```
You can't use **item = 'goat%'**; the **=** operator compares strings literally, so the **%** is not treated as a wildcard, and the match fails.
When you're ready to shop, output your list to print it or send it to your phone with:
```
\o groclist.txt
select * from groc order by comment;
\o
```
The last command, **\o**, with nothing afterward, resets the output to the command line. Otherwise, all output will continue to go to the groclist.txt file you created.
### Analyze complex tables
This item-by-item entry may be okay for short tables, but what about really big ones? A couple of years ago, I was helping the team at [FreieFarbe.de][4] to create a swatchbook of the free colors (freieFarbe means "free colors" in German) from its HLC color palette, where virtually any imaginable print color can be specified by its hue, luminosity (brightness), and chroma (saturation). The result was the [HLC Color Atlas][5], and here's how we did it.
The team sent me files with color specifications so I could write Python scripts that would work with Scribus to generate the swatchbooks of color patches easily. One example started like:
```
HLC, C, M, Y, K
H010_L15_C010, 0.5, 49.1, 0.1, 84.5
H010_L15_C020, 0.0, 79.7, 15.1, 78.9
H010_L25_C010, 6.1, 38.3, 0.0, 72.5
H010_L25_C020, 0.0, 61.8, 10.6, 67.9
H010_L25_C030, 0.0, 79.5, 18.5, 62.7
H010_L25_C040, 0.4, 94.2, 17.3, 56.5
H010_L25_C050, 0.0, 100.0, 15.1, 50.6
H010_L35_C010, 6.1, 32.1, 0.0, 61.8
H010_L35_C020, 0.0, 51.7, 8.4, 57.5
H010_L35_C030, 0.0, 68.5, 17.1, 52.5
H010_L35_C040, 0.0, 81.2, 22.0, 46.2
H010_L35_C050, 0.0, 91.9, 20.4, 39.3
H010_L35_C060, 0.1, 100.0, 17.3, 31.5
H010_L45_C010, 4.3, 27.4, 0.1, 51.3
```
This is slightly modified from the original, which separated the data with tabs. I transformed it into a CSV (comma-separated value) file, which I prefer to use with Python. (CSV files are also very useful because they can be imported easily into a spreadsheet program.)
In each line, the first item is the color name, and it's followed by its C, M, Y, and K color values. The file consisted of 1,793 colors, and I wanted a way to analyze the information to get a sense of the range of values. This is where PostgreSQL comes into play. I did not want to enter all of this data manually—I don't think I could without errors (and headaches). Fortunately, PostgreSQL has a command for this.
My first step was to create the database with:
```
Create table hlc_cmyk (color varchar(40), c decimal, m decimal, y decimal, k decimal);
```
Then I brought in the data with:
```
\copy hlc_cmyk from '/home/gregp/HLC_Atlas_CMYK_SampleData.csv' with (header, format CSV);
```
The backslash at the beginning is there because using the plain **copy** command is restricted to root and the Postgres superuser. In the parentheses, **header** means the first line contains headings and should be ignored, and **CSV** means the file format is CSV. Note that parentheses are not required around the color name in this method.
If the operation is successful, I see a message that says **COPY NNNN**, where the N's refer to the number of rows inserted into the table.
Finally, I can query the table with:
```
select * from hlc_cmyk;
     color     |   c   |   m   |   y   |  k  
---------------+-------+-------+-------+------
 H010_L15_C010 |   0.5 |  49.1 |   0.1 | 84.5
 H010_L15_C020 |   0.0 |  79.7 |  15.1 | 78.9
 H010_L25_C010 |   6.1 |  38.3 |   0.0 | 72.5
 H010_L25_C020 |   0.0 |  61.8 |  10.6 | 67.9
 H010_L25_C030 |   0.0 |  79.5 |  18.5 | 62.7
 H010_L25_C040 |   0.4 |  94.2 |  17.3 | 56.5
 H010_L25_C050 |   0.0 | 100.0 |  15.1 | 50.6
 H010_L35_C010 |   6.1 |  32.1 |   0.0 | 61.8
 H010_L35_C020 |   0.0 |  51.7 |   8.4 | 57.5
 H010_L35_C030 |   0.0 |  68.5 |  17.1 | 52.5
```
It goes on like this for all 1,793 rows of data. In retrospect, I can't say that this query was absolutely necessary for the HLC and Scribus task, but it allayed some of my anxieties about the project.
To generate the HLC Color Atlas, I automated creating the color charts with Scribus for the 13,000+ colors in those pages of color swatches.
I could have used the **copy** command to output my data:
```
\copy hlc_cmyk to '/home/gregp/hlc_cmyk_backup.csv' with (header, format CSV);
```
I also could restrict the output according to certain values with a **where** clause.
For example, the following command will only send the table values for the hues that begin with H10.
```
\copy hlc_cmyk to '/home/gregp/hlc_cmyk_backup.csv' with (header, format CSV) where color like 'H10%';
```
### Back up or transfer a database or table
The final command I will mention here is **pg_dump**, which is used to back up a PostgreSQL database and runs outside of the **psql** console. For example:
```
pg_dump gregp -t hlc_cmyk > hlc.out
pg_dump gregp > dball.out
```
The first line exports the **hlc_cmyk** table along with its structure. The second line dumps all the tables inside the **gregp** database. This is very useful for backing up or transferring a database or tables. 
To transfer a database or table to another computer, first, create a database on the other computer (see my "[getting started][2]" article for details), then do the reverse process:
```
psql -d gregp -f dball.out
```
This creates all the tables and enters the data in one step.
### Conclusion
In this article, we have seen how to use the **WHERE** parameter to restrict operations, along with the use of the PostgreSQL wildcard character **%**. We've also seen how to load a large amount of data into a table, then output some or all of the table data to a file, or even your entire database with all its individual tables.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/postgresql-commands
作者:[Greg Pittman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
[2]: https://opensource.com/article/19/11/getting-started-postgresql
[3]: https://www.postgresql.org/
[4]: http://freiefarbe.de
[5]: https://www.freiefarbe.de/en/thema-farbe/hlc-colour-atlas/

View File

@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How key Python projects are maintained)
[#]: via: (https://opensource.com/article/20/2/python-maintained)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
How key Python projects are maintained
======
A peek behind the scenes of the community that keeps open source Python projects running smoothly.
![and old computer and a new computer, representing migration to new software or hardware][1]
Jannis Leidel is part of the [Jazzband][2] community. Jazzband is a collaborative community that shares the responsibility of maintaining [Python][3]-based projects.
Jazzband was born out of the stress of maintaining an open source project alone for a long time. Jannis is a roadie, which means he does administrative tasks and makes sure the people in the band can play when they want.
Jazzband is not his first open source volunteer work—he is a former [Django][4] core developer, [Django Software Foundation][5] board member, has written many Django apps and Python projects, has been a [pip][6] and [virtualenv][7] core developer and release manager, co-founded the [Python Packaging Authority][8], and been a [PyPI][9] admin. On the community front, he co-founded the German Django Association, served as [DjangoCon Europe][10] 2010 co-chairperson, has spoken at several conferences, and for the last year has served as a director and co-communication chair of the [Python Software Foundation][11].
### Moshe Zadka: How did you get started with programming?
Jannis Leidel: I got started with programming as part of the regular German computer science lessons in high school, where I dabbled with Turbo Pascal and Prolog. I quickly got drawn into the world of web development and wrote small websites with PHP3, [Perl5][12], and [MySQL][13]. Later at university, I picked up programming again while working on media arts projects and found [Ruby][14], Perl, and Python to be particularly helpful. I eventually stuck with Python for its versatility and ease of use. I'm very happy to have been able to use Python and open web technologies (HTML/JS/CSS) in my career since then.
### Zadka: How did you get started with open source?
Leidel: As part of an art project at university, I needed a way to talk to various web services and interact with some electronics and found my prior PHP skills not up to the task. So I took a class about programming with Python and got interested in learning more about how frameworks work—compared to libraries—as they further enshrine best practices that I wanted to know about. In particular, the nascent Django Web Framework really appealed to me since it favored a pragmatic approach and provided lots of guidance for how to develop web applications. In 2007 I participated as a student in the Google Summer of Code for Django and later contributed more to Django and its ecosystem of reusable components—after a while as a Django core developer as well. While finishing my degree, I was able to use those skills to work as a freelancer and also spend time on many different parts of the Django community. Moving laterally to the broader Python community was only natural at that point.
### Zadka: What do you do for your day job?
Leidel: I'm a Staff Software Engineer at Mozilla, working on data tools for the Firefox data pipeline. In practice, that means I'm working in the broader Firefox Engineering team on various internal and public-facing web-based projects that help Mozilla employees and community members to make sense of the telemetry data that the Firefox web browser sends. Part of my current focus is maintaining our data analysis and visualization platform, which is based on the open source project [Redash][15], and also contributing back to it. Other projects that I contribute to are our next-gen telemetry system [Glean][16] and a tool that allows you to do data science in the browser (including the Scientific Python stack) called [Iodide][17].
### Zadka: How did you get involved with Jazzband?
Leidel: Back in 2015, I was frustrated with maintaining projects alone that a lot of people depended on and saw many of my community peers struggle with similar issues. I didn't know a good way to reach more people in the community who may also have an interest in long-term maintenance. On some occasions, I felt that the new "social coding" paradigm was rarely social and often rather isolating and sometimes even traumatic for old and new contributors. I believe the inequality in our community that I find intolerable nowadays was even more rampant at the time, which made providing a safe environment for contributors difficult—something which we now know is essential for stable project maintenance. I wondered if we were missing a more collaborative and inclusive approach to software development.
The Jazzband project was launched in an attempt to lower the barriers to entry for maintenance and simplify some of the more boring aspects of it (e.g., best practices around [CI][18]).
### Zadka: What is your favorite thing about Jazzband?
Leidel: My favorite thing about Jazzband is the fact that we've secured the maintenance of many projects that a lot of people depend on while also making sure that new contributors of any level of experience can join.
### Zadka: What is the job of a "roadie" in Jazzband?
Leidel: A "roadie" is a go-to person when it comes to all things behind the scenes for Jazzband. That means, for example, dealing with onboarding new projects, maintaining the Jazzband website that handles user management and project releases, acting as a first responder to security or Code of Conduct incidents, and much more. The term "roadies" is borrowed from the music and event industry for support personnel that takes care of almost everything that needs to be done while traveling on tour, except for the actual artistic performance. In Jazzband, they are there to make sure the members can work on the projects. That also means that some tasks are partially or fully automated, where it makes sense, and that best practices are applied to the majority of the Jazzband projects like packaging setup, documentation hosting or continuous integration.
### Zadka: What is the most challenging aspect of your job as a roadie for Jazzband?
Leidel: At the moment, the most challenging aspect of my job as a roadie is to implement improvements for Jazzband that community members have proposed without risking the workflow that they have come to rely on. In other words, scaling the project on a conceptual level has become more difficult the bigger Jazzband gets. There is a certain irony in the fact that I'm the only roadie at the moment and handle some of the tasks alone while Jazzband tries to prevent that from happening for its projects. This is a big concern for the future of Jazzband.
### Zadka: What would you say to someone who is wondering whether they should join Jazzband?
Leidel: If you're interested in joining a group of people who believe that working collaboratively is better than working alone, or if you have struggled with maintenance burden on your own and don't know how to proceed, consider joining Jazzband. It simplifies onboarding new contributors, provides a framework for disputes, and automates releases to [PyPI][19]. There are many best practices that work well for reducing the risk of projects becoming unmaintained.
### Zadka: Is there anything else you want to tell our readers?
Leidel: I encourage everyone working on open source projects to consider the people on the other side of the screen. Be empathetic and remember that your own experience may not be the experience of your peers. Understand that you are members of a global and diverse community, which requires us always to take leaps of respect for the differences between us.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/python-maintained
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
[2]: https://jazzband.co/
[3]: https://opensource.com/resources/python
[4]: https://opensource.com/article/18/8/django-framework
[5]: https://www.djangoproject.com/foundation/
[6]: https://opensource.com/article/19/11/python-pip-cheat-sheet
[7]: https://virtualenv.pypa.io/en/latest/
[8]: https://www.pypa.io/en/latest/
[9]: https://pypi.org/
[10]: https://djangocon.eu/
[11]: https://www.python.org/psf/
[12]: http://opensource.com/article/18/1/why-i-love-perl-5
[13]: https://opensource.com/life/16/10/all-things-open-interview-dave-stokes
[14]: http://opensource.com/business/16/4/save-development-time-and-effort-ruby
[15]: https://redash.io/
[16]: https://firefox-source-docs.mozilla.org/toolkit/components/telemetry/start/report-gecko-telemetry-in-glean.html
[17]: https://alpha.iodide.io/
[18]: https://opensource.com/article/19/12/cicd-resources
[19]: https://opensource.com/downloads/7-essential-pypi-libraries

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Customize your internet with an open source search engine)
[#]: via: (https://opensource.com/article/20/2/open-source-search-engine)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Customize your internet with an open source search engine
======
Get started with YaCy, an open source, P2P web indexer.
![Person using a laptop][1]
A long time ago, the internet was small enough to be indexed by a few people who gathered the names and locations of all websites and listed them each by topic on a page or in a printed book. As the World Wide Web network grew, the "web rings" convention developed, in which sites with a similar theme or topic or sensibility banded together to form a circular path to each member. A visitor to any site in the ring could click a button to proceed to the next or previous site in the ring to discover new sites relevant to their interest.
Then for a while, it seemed the internet outgrew itself. Everyone was online, there was a lot of redundancy and spam, and there was no way to find anything. Yahoo and AOL and CompuServe and similar services had unique approaches, but it wasn't until Google came along that the modern model took hold. According to Google, the internet was meant to be indexed, sorted, and ranked through a search engine.
### Why choose an open source alternative?
Search engines like Google and DuckDuckGo are demonstrably effective. You may have reached this site through a search engine. While there's a debate to be had about content falling through the cracks because a host chooses not to follow best practices for search engine optimization, the modern solution for managing the wealth of culture and knowledge and frivolity that is the internet is relentless indexing.
But maybe you prefer not to use Google or DuckDuckGo because of privacy concerns or because you're looking to contribute to an effort to make the internet more independent. If that appeals to you, then consider participating in [YaCy][2], the peer-to-peer internet indexer and search engine.
### Install YaCy
To install and try YaCy, first ensure you have Java installed. If you're on Linux, you can follow the instructions in my [_How to install Java on Linux_][3] article. If you're on Windows or macOS, obtain an installer from [AdoptOpenJDK.net][4].
Once you have Java installed, [download the installer][5] for your platform.
If you're on Linux, unarchive the tarball and move it to the **/opt** directory:
```
$ sudo tar --extract --file yacy_*z --directory /opt
```
Start YaCy according to instructions for the installer you downloaded.
On Linux, start YaCy running in the background:
```
$ /opt/startYACY.sh &
```
In a web browser, navigate to **localhost:8090** and search.
![YaCy start page][6]
### Add YaCy to your URL bar
If you're using the Firefox web browser, you can make YaCy your default search engine in the Awesome Bar (that's Mozilla's name for the URL field) with just a few clicks.
First, make the dedicated search bar visible in the Firefox toolbar, if it's not already (you don't have to keep the search bar visible; you only need it active long enough to add a custom search engine). The search bar is available in the hamburger menu in the upper-right corner of Firefox in the **Customize** menu. Once the search bar is visible in your Firefox toolbar, navigate to **localhost:8090**, and click the magnifying glass icon in the Firefox search bar you just added. Click the option to add YaCy to your Firefox search engines.
![Adding YaCy to Firefox][7]
Once this is done, you can mark it as your default in Firefox preferences, or just use it selectively in searches performed in the Firefox search bar. If you set it as your default search engine, then you may have no need for the dedicated search bar because the default engine is also used by the Awesome Bar, so you can remove it from your toolbar.
### How a P2P search engine works
YaCy is an open source and distributed search engine. It's written in [Java][8], so it runs on any platform, and it performs web crawls, indexing, and searching. It's a peer-to-peer (P2P) network, so every user running YaCy joins in the effort to track the internet as it changes from day to day. Of course, no single user possesses a full index of the entire internet because that would take a data center to house, but the index is distributed and redundant across all YaCy users. It's a lot like BitTorrent (as it uses distributed hash tables, or DHT, to reference index entries), except the data you're sharing is a matrix of words and URL associations. By mixing the results returned by the hash tables, no one can tell who has searched for what words, so all searches are functionally anonymous. It's an effective system for unbiased, ad-free, untracked, and anonymous searches, and you can join in just by using it.
### Search engines and algorithms
The act of indexing the internet refers to separating a web page into the singular words on it, then associating the page's URL with each word. Searching for one or more words in a search engine fetches all URLs associated with the query. That's one thing the YaCy client does while running.
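To make that concrete, here is a toy sketch in shell (not YaCy's actual storage format) that tokenizes a page and emits one word-to-URL association per line, the raw material of an index:

```
# Fetch a page, split it into lowercase words, and pair each word with the URL.
# Markup tokens are included too; a real indexer parses the HTML first.
$ url="http://example.com/"
$ curl -s "$url" \
    | tr -cs '[:alnum:]' '\n' \
    | tr '[:upper:]' '[:lower:]' \
    | grep . | sort -u \
    | sed "s|$| $url|"
```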
The other thing the client does is provide a search interface for your browser. Instead of navigating to Google when you want to search, you can point your web browser to **localhost:8090** to search YaCy. You may even be able to add it to your browser's search bar (depending on your browser's extensibility), so you can search from the URL bar.
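If your browser isn't handy, you can also query the local peer from a script. The endpoint name below is an assumption based on YaCy's search API; verify it against your installation:

```
# Ask the local YaCy peer for search results as JSON (default port 8090)
$ curl -s "http://localhost:8090/yacysearch.json?query=linux"
```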
### Firewall settings for YaCy
When you first start using YaCy, it's probably running in "junior" mode. This means that the sites your client crawls are available only to you because no other YaCy client can reach your index entries. To join the P2P experience, you must open port 8090 in your router's firewall and possibly your software firewall if you're running one. This is called "senior" mode.
If you're on Linux, you can find out more about your computer's firewall in [_Make Linux stronger with firewalls_][9]. On other platforms, refer to your operating system's documentation.
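As one hedged example, on a Linux distribution that ships firewalld, opening YaCy's default port might look like this:

```
# Open port 8090 permanently, then reload the firewall rules
$ sudo firewall-cmd --permanent --add-port=8090/tcp
$ sudo firewall-cmd --reload
```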
A firewall is almost always active on the router provided by your internet service provider (ISP), and there are far too many varieties of them to document accurately here. Most routers provide the option to "poke a hole" in your firewall because many popular networked games require two-way traffic.
If you know how to log into your router (it's often either 192.168.0.1 or 10.1.0.1, but can vary depending on the manufacturer's settings), then log in and look for a configuration panel controlling the _firewall_ or _port forwarding_ or _applications_.
Once you find the preferences for your router's firewall, add port 8090 to the whitelist. For example:
![Adding YaCy to an ISP router][10]
If your router is doing port forwarding, then you must forward the incoming traffic to your computer's IP address, using the same port. For example:
![Adding YaCy to an ISP router][11]
If you can't adjust your firewall settings for any reason, that's OK. YaCy will continue to run and operate as a client of the P2P search network in junior mode.
### An internet of your own
There's much more you can do with the YaCy search engine than just search passively. You can force crawls of underrepresented websites, you can request the network crawl a site, you can choose to use YaCy for just on-premises searches, and much more. You have better control over what _your_ internet looks like. The more senior users there are, the more sites indexed. The more sites indexed, the better the experience for all users. Join in!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/open-source-search-engine
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://yacy.net/
[3]: https://opensource.com/article/19/11/install-java-linux
[4]: https://adoptopenjdk.net/releases.html
[5]: https://yacy.net/download_installation/
[6]: https://opensource.com/sites/default/files/uploads/yacy-startpage.jpg (YaCy start page)
[7]: https://opensource.com/sites/default/files/uploads/yacy-add-firefox.jpg (Adding YaCy to Firefox)
[8]: https://opensource.com/resources/java
[9]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
[10]: https://opensource.com/sites/default/files/uploads/router-add-app.jpg (Adding YaCy to an ISP router)
[11]: https://opensource.com/sites/default/files/uploads/router-add-app1.jpg (Adding YaCy to an ISP router)

View File

@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introducing Zuul for improved CI/CD)
[#]: via: (https://opensource.com/article/20/2/zuul)
[#]: author: (Jeremy Stanley https://opensource.com/users/fungi)
Introducing Zuul for improved CI/CD
======
A quick history of how and why Zuul is replacing Jenkins in CI testing
in the OpenStack community.
![Plumbing tubes in many directions][1]
[Jenkins][2] is a marvelous piece of software. As an execution and automation engine, it's one of the best you're going to find. Jenkins serves as a key component in countless continuous integration (CI) systems, and this is a testament to the value of what its community has built over the years. But that's what it is—a component. Jenkins is not a CI system itself; it just runs things for you. It does that really well and has a variety of built-ins and a vibrant ecosystem of plugins to help you tell it what to run, when, and where.
CI is, at the most fundamental level, about integrating the work of multiple software development streams into a coherent whole with as much frequency and as little friction as possible. Jenkins, on its own, doesn't know about your source code or how to merge it together, nor does it know how to give constructive feedback to you and your colleagues. You can, of course, glue it together with other software that can perform these activities, and this is how many CI systems incorporate Jenkins.
It's what we did for OpenStack, too, at least at first.
### If it's not tested, it's broken
In 2010, an open source community of projects called [OpenStack][3] was forming. Some of the developers brought in to assist with the collaboration infrastructure also worked on a free database project called [Drizzle][4], and a key philosophy within that community was the idea "if it's not tested, it's broken." So OpenStack, on day one, required all proposed changes of its software to be reviewed and tested for regressions before they could be approved to merge into the trunk of any source code repositories. To do this, Hudson (which later forked to form the Jenkins project) was configured to run tests exercising every change.
A plugin was installed to interface with the [Gerrit][5] code review system, automatically triggering jobs when new changes were proposed and reporting back with review comments indicating whether they succeeded or failed. This may sound rudimentary by today's standards, but at the time, it was a revolutionary advancement for an open source collaboration. No developer on OpenStack was special in the eyes of CI, and everyone's changes had to pass this growing battery of tests before they could merge—a concept the project called "project gating."
There was, however, an emerging flaw with this gating idea: To guarantee two unrelated changes didn't alter a piece of software in functionally incompatible ways, they had to be tested one at a time in sequence before they could merge. OpenStack was complicated to install and test, even back then, and quickly grew in popularity. The rising volume of developer contributions coupled with increasing test coverage meant that, during busy periods, there was simply not enough time to test every change that passed review. Some longer-running jobs took nearly an hour to complete, so the upper bound for what could get through the gate was roughly two dozen changes in a day. The resulting merge backlog showed a new solution was required.
### Enter Zuul
During an OpenStack CI meeting in May 2012, one of the CI team members, James Blair, [announced][6] that he'd "been working on speculative execution of Jenkins jobs." **Speculative execution** is an optimization most commonly found in the pipelines of modern microprocessors. Much like the analogy with processor hardware, the theory was that by optimistically predicting positive gating results for changes recently approved but that had not yet completed their tests, subsequently approved changes could be tested concurrently and then conditionally merged as long as their predecessors also passed tests and merged. James said he had a name for this intelligent scheduler: [Zuul][7].
Within this time frame, challenges from trying to perform better revision control for Jenkins' XML job configuration led to the creation of the human-readable YAML-based [Jenkins Job Builder][8] templating engine. Limited success with the JClouds plugin for Jenkins and cumbersome attempts to use jobs for refreshing cloud images of single-use Jenkins slaves ended with the creation of the [Nodepool][9] service. Limited log-storage capabilities resulted in the team adding separate external solutions for organizing, serving, and indexing job logs and assuming maintainership of an abandoned secure copy protocol (SCP) plugin replacing the less-secure FTP option that Jenkins provided out of the box. The OpenStack infrastructure team was slowly building a fleet of services and utilities around Jenkins but began to bump up against a performance limitation.
### Multiplying Jenkins
By mid-2013, Nodepool was constantly recycling as many as 100 virtual machines registered with Jenkins as slaves, but this was no longer enough to keep up with the growing workload. Thread contention for global locks in Jenkins thwarted all attempts to push past this threshold, no matter how much processor power and memory was thrown at the master server. The project had offers to donate additional capacity for Jenkins slaves to help relieve the frequent job backlog, but this would require an additional Jenkins master. The efficient division of work between multiple masters needed a new channel of communication for dispatch and coordination of jobs. Zuul's maintainers identified the [Gearman][10] job server protocol as an ideal fit, so they outfitted Zuul with a new [geard][11] service and extended Jenkins with a custom Gearman client plugin.
Now that jobs were spread across a growing assembly of Jenkins masters, there was no longer any single dashboard with a complete view of job activity and results. In order to facilitate this new multi-master world, Zuul grew its own status API and WebUI, as well as a feature to emit metrics through the [StatsD][12] protocol. Over the next few years, Zuul steadily subsumed more of the CI features its users relied on, while Jenkins' place in the system waned accordingly, and it was becoming a liability. OpenStack made an early choice to standardize on the Python programming language; this was reflected in Zuul's development, yet Jenkins and its plugins were implemented in Java. Zuul's configuration was maintained in the same YAML serialization format that OpenStack used to template its own Jenkins jobs, while Jenkins kept everything in baroque XML. These differences complicated ongoing maintenance and led to an unnecessarily steep learning curve for new administrators from related communities that had started trying to run Zuuls.
The time was right for another revolution.
### The rise of Ansible
In early 2016, Zuul's maintainers embarked on an ambitious year-long overhaul of their growing fleet of services with the goal of eliminating Jenkins from the overall system design. By this time, Jenkins was serving only as a conduit for running jobs consisting mostly of shell scripts on slave nodes over SSH, providing real-time streaming of job output and copying resulting artifacts to longer-term storage. [Ansible][13] was found to be a great fit for that first need; purpose-built to run commands remotely over SSH, it was written in Python, just like Zuul, and also used YAML to define its tasks. It even had built-in modules for features the team had previously implemented as bespoke Jenkins plugins. Ansible provided true multi-node support right out of the box, so the same playbooks could be used for both simulating and performing complex production deployments. An ever-expanding ecosystem of third-party modules filled in any gaps, in much the same way as the Jenkins community's plugins had before.
A new Zuul executor service filled the prior role of the Jenkins master: it acted on pending requests in the scheduler's geard, dispatched them via Ansible to ephemeral servers managed by Nodepool, then collected results and artifacts for publication. It also exposed in-progress build output over the classic [RFC 742 Name/Finger protocol][14], streamed in real time from an extension of Ansible's command output module. Once it was no longer necessary to limit jobs to what Jenkins' parser could comprehend, Zuul was free to grow new features like distributed in-repository job definitions, shareable between projects with inheritance and secure handling of secrets, as well as the ability to test-drive proposed changes for the jobs themselves. Jenkins served its purpose admirably, but at least for Zuul, its usefulness was finally at an end.
### Testing the future
Zuul's community likes to say that it "tests the future" through its novel application of speculative execution. Gone are the harrowing days of wondering whether the improvement you want to make to an existing job will render it non-functional once it's applied in production. Overloaded review teams for a massive central job repository are a thing of the past. Jobs are treated as a part of the software and shipped right alongside the rest of the source code, taking advantage of Zuul's other features like cross-repository dependencies so that your change to part of a job in one project can be exercised with a proposed job change in another project. It will even comment on your job changes, highlighting specific lines with syntax problems as if it were another code reviewer giving you advice.
These were features Zuul only dreamed of before, but which required freedom from Jenkins so that it could take job parsing into its own hands. This is the future of CI, and Zuul's users are living it.
As of early 2019, the OpenStack Foundation recognized Zuul as an independent, openly governed project with its own identity and flourishing community. If you're into open source CI, consider taking a look. Development on the next evolution of Zuul is always underway, and you're welcome to help. Find out more on [Zuul's website][7].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/zuul
作者:[Jeremy Stanley][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fungi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions)
[2]: https://jenkins.io/
[3]: https://www.openstack.org/
[4]: https://en.wikipedia.org/wiki/Drizzle_(database_server)
[5]: https://www.gerritcodereview.com/
[6]: http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2012-05-22.log.html#t2012-05-22T19:42:27
[7]: https://zuul-ci.org/
[8]: https://jenkins-job-builder.readthedocs.io/
[9]: https://zuul-ci.org/docs/nodepool/
[10]: http://gearman.org/
[11]: https://docs.opendev.org/opendev/gear/#server-example
[12]: https://github.com/statsd/statsd
[13]: https://www.ansible.com/
[14]: https://tools.ietf.org/html/rfc742

View File

@ -0,0 +1,185 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Powershell to automate Linux, macOS, and Windows processes)
[#]: via: (https://opensource.com/article/20/2/devops-automation)
[#]: author: (Willy-Peter Schaub https://opensource.com/users/wpschaub)
Using Powershell to automate Linux, macOS, and Windows processes
======
Automation is pivotal to DevOps, but is everything automatable?
![CICD with gears][1]
Automation takes control of manual, laborious, and error-prone processes and replaces engineers performing manual tasks with computers running automation scripts. Everyone agrees that manual processes are a foe of a healthy DevOps mindset. Some argue that automation is not a good thing because it replaces hard-working engineers, while others realize that it boosts consistency, reliability, and efficiency, saves time, and (most importantly) enables engineers to work smart.
> "_DevOps is not just automation or infrastructure as code_" —[Donovan Brown][2].
Having used automated processes and toolchains since the early '80s, I always twitch when I hear or read the recommendation to "automate everything." While it is technically possible to automate everything, automation is complex and comes at a price in terms of development, debugging, and maintenance. If you have ever dusted off an inviable Azure Resource Manager (ARM) template or a precious maintenance script you wrote a long time ago, expecting it to execute flawlessly months or years later, you will understand that automation, like any other code, is brittle and needs continuous maintenance and nurture.
So, what and when should you automate?
* Automate processes you perform manually more than once or twice.
* Automate processes you will perform regularly and continuously.
* Automate everything automatable.
More importantly, what should you _not_ automate?
* Don't automate processes that are a one-off—it is not worth the investment unless you reuse it as reference documentation and regularly validate to ensure it remains functional.
* Don't automate highly volatile processes—it is too complex and expensive.
* Don't automate broken processes—fix them before automating.
For example, my team continuously inspects hundreds of user activities on our common collaboration and engineering system, looking for inactivity that is wasting precious dollars. If a user has been inactive for three or more months and has been assigned an expensive license, we revert the user to a less functional and free license.
As Fig. 1 shows, it is not a technically challenging process. It is a mind-numbing and error-prone process, especially when it's performed while context switching with other development and operational tasks.
![Manual process to switch user license][3]
Fig. 1 Manual process to switch user license
Incidentally, this is an example of a value stream map created in three easy steps:
1. Visualize all activities: list users, filter users, and reset licenses.
2. Identify stakeholders, namely operations and licensing teams.
3. Measure:
* Total lead time (TLT) = 13 hours
* Total cycle time (TCT) = 1.5 hours
  * Total efficiency percentage = TCT/TLT*100 = 1.5/13*100 ≈ 11.5%
If you hang a copy of these visualizations in high-traffic and high-visibility areas, such as your team's breakout area, cafeteria, or on the way to your washrooms, you will trigger lots of discussions and unsolicited feedback. For example, looking at the visual, it is evident that the manual tasks are a waste, caused primarily by long process wait times.
Let us explore a simple PowerShell script that automates the process, as shown in Fig. 2, reducing the total lead time from 13 hours to 4 hours and 60 seconds, and raising the overall efficiency from 11.5% to 12.75%.
![Semi-automated PowerShell-based process to switch user license][4]
Fig. 2 Semi-automated PowerShell-based process to switch user license
[PowerShell][5] is an open source task-based scripting language. It is found [on GitHub][6], is built on .NET, and allows you to automate Linux, macOS, and Windows processes. Users with a development background, especially C#, will enjoy the full benefits of PowerShell.
The PowerShell script example below communicates with [Azure DevOps][7] via its [REST API][8]. The script combines the manual list users and filter users tasks in Fig. 1, identifies all users in the **DEMO** organization that have not been active for two months and are using either a **Basic** or a more expensive **Basic + Test** license, and outputs each user's details to the console. Simple!
First, set up the authentication header and other variables that will be used later with this initialization script:
```
param(
  [string]   $orgName       = "DEMO",
  [int]      $months        = "-2",
  [string]   $patToken      = "<PAT>"
)
# Basic authentication header using the personal access token (PAT)
$basicAuth = ("{0}:{1}" -f "",$patToken)
$basicAuth = [System.Text.Encoding]::UTF8.GetBytes($basicAuth)
$basicAuth = [System.Convert]::ToBase64String($basicAuth)
$headers   = @{Authorization=("Basic {0}" -f $basicAuth)}
# REST API Request to get all entitlements
$request_GetEntitlements    = "https://vsaex.dev.azure.com/" + $orgName + "/_apis/userentitlements?top=10000&api-version=5.1-preview.2";
# Initialize data variables
$members              = New-Object System.Collections.ArrayList
[int] $count          = 0;
[string] $basic       = "Basic";
[string] $basicTest   = "Basic + Test Plans";
```
Next, query all the entitlements with this script to identify inactive users:
```
# Send the REST API request and initialize the members array list.
$response = Invoke-RestMethod -Uri $request_GetEntitlements -headers $headers -Method Get
$response.items | ForEach-Object { $members.add($_.id) | out-null }
# Iterate through all user entitlements
$response.items | ForEach-Object {
  $name    = [string]$_.user.displayName;
  $date    = [DateTime]$_.lastAccessedDate;
  $expired = Get-Date;
  $expired = $expired.AddMonths($months);
  $license = [string]$_.accessLevel.AccountLicenseType;
  $licenseName = [string]$_.accessLevel.LicenseDisplayName;
  $count++;
  if ( $expired -gt $date ) {
    # Users whose license was NEVER or NOT YET ACTIVATED (lastAccessedDate defaults to year 1)
    if ( $date.Year -eq 1 ) {
      Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
    }
    # Look for BASIC license
    elseif ( $licenseName -eq $basic ) {
      Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
    }
    # Look for BASIC + TEST license
    elseif ( $licenseName -eq $basicTest ) {
      Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
    }
  }
}
```
When you run the script, you get the following output, which you can forward to the licensing team to reset the user licenses:
```
**INACTIVE** Name: Demo1 Last Access: 2019/09/06 11:01:26 AM License: Basic
**INACTIVE** Name: Demo2 Last Access: 2019/06/04 08:53:15 AM License: Basic
**INACTIVE** Name: Demo3 Last Access: 2019/09/26 12:54:57 PM License: Basic
**INACTIVE** Name: Demo4 Last Access: 2019/06/07 12:03:18 PM License: Basic
**INACTIVE** Name: Demo5 Last Access: 2019/07/18 10:35:11 AM License: Basic
**INACTIVE** Name: Demo6 Last Access: 2019/10/03 09:21:20 AM License: Basic
**INACTIVE** Name: Demo7 Last Access: 2019/10/02 11:45:55 AM License: Basic
**INACTIVE** Name: Demo8 Last Access: 2019/09/20 01:36:29 PM License: Basic + Test Plans
**INACTIVE** Name: Demo9 Last Access: 2019/08/28 10:58:22 AM License: Basic
```
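For reference, here is one way to run the script from a shell, assuming you saved the snippets above as `Get-InactiveUsers.ps1` (a hypothetical filename) and have PowerShell installed as `pwsh`:

```
# Pass the organization, inactivity window, and personal access token
$ pwsh ./Get-InactiveUsers.ps1 -orgName "DEMO" -months -2 -patToken "<PAT>"
```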
If you automate the final step, automatically setting the user licenses to a free stakeholder license, as in Fig. 3, you can further reduce the overall lead time to 65 seconds and raise the overall efficiency to 77%.
![Fully automated PowerShell-based process to switch user license][9]
Fig. 3 Fully automated PowerShell-based process to switch user license
The core value of this PowerShell script is not just the ability to _automate_ but also to perform the process _regularly_, _consistently_, and _quickly_. Further improvements would trigger the script weekly or daily using a scheduler such as an Azure pipeline, but I will hold the programmatic license reset and script scheduling for a future article.
Here is a graph to visualize the progress:
![Graph to visualize progress][10]
Fig. 4 Measure, measure, measure
I hope you enjoyed this brief journey through automation, PowerShell, REST APIs, and value stream mapping. Please share your thoughts and feedback in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/devops-automation
作者:[Willy-Peter Schaub][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/wpschaub
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: http://www.donovanbrown.com/post/what-is-devops
[3]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_1.png (Manual process to switch user license)
[4]: https://opensource.com/sites/default/files/uploads/the_devops_quest_to_automate_everything_automatable_using_powershell_picture_2.png (Semi-automated PowerShell-based process to switch user license)
[5]: https://opensource.com/article/19/8/variables-powershell
[6]: https://github.com/powershell/powershell
[7]: https://docs.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops
[8]: https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-5.1
[9]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_3.png (Fully automated PowerShell-based process to switch user license)
[10]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_4.png (Graph to visualize progress)

View File

@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is WireGuard? Why Are Linux Users Going Crazy Over It?)
[#]: via: (https://itsfoss.com/wireguard/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
What is WireGuard? Why Are Linux Users Going Crazy Over It?
======
From normal Linux users to Linux creator [Linus Torvalds][1], everyone is in awe of WireGuard. What is WireGuard and what makes it so special?
### What is WireGuard?
![][2]
[WireGuard][3] is an easy-to-configure, fast, and secure open source [VPN][4] that utilizes state-of-the-art cryptography. Its aim is to provide a faster, simpler, and leaner general-purpose VPN that can be deployed on anything from low-end devices like the Raspberry Pi to high-end servers.
Most of the other solutions like [IPsec][5] and OpenVPN were developed decades ago. Security researcher and kernel developer Jason Donenfeld realized that they were slow and difficult to configure and manage properly.
This led him to create a new open source VPN protocol and solution that is faster, more secure, and easier to deploy and manage.
WireGuard was originally developed for Linux, but it is now available for Windows, macOS, BSD, iOS, and Android. It is still under heavy development.
### Why is WireGuard so popular?
![][6]
Apart from being cross-platform, one of the biggest plus points of WireGuard is its ease of deployment. Configuring and deploying WireGuard is as easy as configuring and using SSH.
Look at the [WireGuard setup guide][7]. You install WireGuard, generate public and private keys (like SSH), set up firewall rules, and start the service. Now compare that to the [OpenVPN setup guide][8]. There are way too many things to do there.
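To give a sense of that simplicity, here is a minimal sketch of the key and interface steps on Linux; the interface name `wg0` and its config file are illustrative, and the linked guide covers the full configuration:

```
# Generate a key pair, much like ssh-keygen
$ wg genkey | tee privatekey | wg pubkey > publickey

# Bring up a tunnel described in /etc/wireguard/wg0.conf
$ sudo wg-quick up wg0

# Inspect the live tunnel state
$ sudo wg show
```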
Another good thing about WireGuard is its lean codebase with just 4,000 lines of code. Compare that to the 100,000 lines of code of [OpenVPN][9] (another popular open source VPN). WireGuard is clearly easier to debug.
Don't be fooled by its simplicity. WireGuard supports all the state-of-the-art cryptography like the [Noise protocol framework][10], [Curve25519][11], [ChaCha20][12], [Poly1305][13], [BLAKE2][14], [SipHash24][15], [HKDF][16], and secure trusted constructions.
Since WireGuard runs in the [kernel space][17], it provides secure networking at a high speed.
These are some of the reasons why WireGuard has become increasingly popular. Linux creator Linus Torvalds loves WireGuard so much that he is merging it into [Linux Kernel 5.6][18]:
> Can I just once again state my love for it and hope it gets merged soon? Maybe the code isn't perfect, but I've skimmed it, and compared to the horrors that are OpenVPN and IPSec, it's a work of art.
>
> Linus Torvalds
### If WireGuard is already available, then what's the fuss about including it in the Linux kernel?
This could be confusing to new Linux users. You know that you can install and configure a WireGuard VPN server on Linux but then you also read the news that Linux Kernel 5.6 is going to include WireGuard. Let me explain it to you.
At present, you can install WireGuard on Linux as a [kernel module][19]. Regular applications like VLC, GIMP, etc. are installed on top of the Linux kernel (in [user space][20]), not inside it.
When you install WireGuard as a kernel module, you are basically modifying the Linux kernel yourself and adding some code to it. Starting with kernel 5.6, you won't need to manually add the kernel module. It will be included in the kernel by default.
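On a pre-5.6 kernel, a quick way to check whether the module route works on your system (commands shown as a sketch):

```
# Is a wireguard module available for the running kernel?
$ modinfo wireguard

# Load it and confirm
$ sudo modprobe wireguard
$ lsmod | grep wireguard
```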
The inclusion of WireGuard in Kernel 5.6 will most likely [extend the adoption of WireGuard and thus change the current VPN scene][21].
**Conclusion**
WireGuard is gaining popularity for good reasons. Some of the popular [privacy-focused VPNs][22] like [Mullvad VPN][23] are already using WireGuard, and its adoption is likely to grow in the near future.
I hope you have a slightly better understanding of WireGuard. Your feedback is welcome, as always.
--------------------------------------------------------------------------------
via: https://itsfoss.com/wireguard/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linus-torvalds-facts/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/wireguard.png?ssl=1
[3]: https://www.wireguard.com/
[4]: https://en.wikipedia.org/wiki/Virtual_private_network
[5]: https://en.wikipedia.org/wiki/IPsec
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/wireguard-logo.png?ssl=1
[7]: https://www.linode.com/docs/networking/vpn/set-up-wireguard-vpn-on-ubuntu/
[8]: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-16-04
[9]: https://openvpn.net/
[10]: https://noiseprotocol.org/
[11]: https://cr.yp.to/ecdh.html
[12]: https://cr.yp.to/chacha.html
[13]: https://cr.yp.to/mac.html
[14]: https://blake2.net/
[15]: https://131002.net/siphash/
[16]: https://eprint.iacr.org/2010/264
[17]: http://www.linfo.org/kernel_space.html
[18]: https://itsfoss.com/linux-kernel-5-6/
[19]: https://wiki.archlinux.org/index.php/Kernel_module
[20]: http://www.linfo.org/user_space.html
[21]: https://www.zdnet.com/article/vpns-will-change-forever-with-the-arrival-of-wireguard-into-linux/
[22]: https://itsfoss.com/best-vpn-linux/
[23]: https://mullvad.net/en/

View File

@ -0,0 +1,109 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Music composition with Python and Linux)
[#]: via: (https://opensource.com/article/20/2/linux-open-source-music)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Music composition with Python and Linux
======
A chat with Mr. MAGFest—Brendan Becker.
![Wires plugged into a network switch][1]
I met Brendan Becker working in a computer store in 1999. We both enjoyed building custom computers and installing Linux on them. Brendan was always involved in several technology projects at once, ranging from game coding to music composition. Fast-forwarding a few years from the days of computer stores, he went on to write [pyDance][2], an open source implementation of multiple dancing games, and then became the CEO of music and gaming event [MAGFest][3]. Sometimes referred to as "Mr. MAGFest" because he was at the helm of the event, Brendan now uses the music pseudonym "[Inverse Phase][4]" as a composer of chiptunes—music predominantly made on 8-bit computers and game consoles.
I thought it would be interesting to interview him and ask some specifics about how he has benefited from Linux and open source software throughout his career.
![Inverse Phase performance photo][5]
Copyright Nickeledge, CC BY-SA 2.0.
### Alan Formy-Duval: How did you get started in computers and software?
Brendan Becker: There's been a computer in my household almost as far back as I can remember. My dad has fervently followed technology; he brought home a Compaq Portable when they first hit the market, and when he wasn't doing work on it, I would have access to it. Since I began reading at age two, using a computer became second nature to me—just read what it said on the disk, follow the instructions, and I could play games! Some of the time I would be playing with learning and education software, and we had a few disks full of games that I could play other times. I remember a single disk with a handful of free clones of popular titles. Eventually, my dad showed me that we could call other computers (BBS'ing at age 5!), and I saw where some of the games came from. One of the games I liked to play was written in BASIC, and all bets were off when I realized that I could simply modify the game by just reading a few things and re-typing them to make my game easier.
### Formy-Duval: This was the 1980s?
Becker: The Compaq Portable dropped in 1983 to give you a frame of reference. My dad had one of the first of that model.
### Formy-Duval: How did you get into Linux and open source software?
Becker: I was heavy into MODs and demoscene stuff in the early 90s, and I noticed that Walnut Creek ([cdrom.com][6]; now defunct) ran shop on FreeBSD. I was super curious about Unix and other operating systems in general, but didn't have much firsthand exposure, and thought it might be time to finally try something. DOOM had just released, and someone told me I might even be able to get it to run. Between that and being able to run cool internet servers, I started going down the rabbit hole. Someone saw me reading about FreeBSD and suggested I check out Linux, this new OS that was written from the ground up for x86, unlike BSD, which (they said) had some issues with compatibility. So, I joined #linuxhelp on undernet IRC and asked how to get started with Linux, pointing out that I had done a modicum of research (asking "what's the difference between Red Hat and Slackware?") and probing mainly about what would be easiest to use. The only person talking in the channel said that he was 13 years old and he could figure out Slackware, so I should not have an issue. A math teacher in my school gave me a hard disk, I downloaded the "A" disk sets and a boot disk, wrote it out, installed it, and didn't spend much time looking back.
### Formy-Duval: How'd you become known as Mr. MAGFest?
Becker: Well, this one is pretty easy. I became the acting head of MAGFest almost immediately after the first event. The former chairpeople were all going their separate ways, and I demanded to the guy in charge that the event not be canceled. The solution was to run it myself, and that nickname became mine as I slowly molded the event into my own.
### Formy-Duval: I remember attending in those early days. How large did MAGFest eventually become?
Becker: The first MAGFest was 265 people. It is now a scary huge 20,000+ unique attendees.
### Formy-Duval: That is huge! Can you briefly describe the MAGFest convention?
Becker: One of my buddies, Hex, described it really well. He said, "It's like this video-game themed birthday party with all of your friends, but there happen to be a few thousand people there, and all of them can be your friends if you want, and then there are rock concerts." This was quickly adopted and shortened to "It's a four-day video game party with multiple video game rock concerts." Often the phrase "music and gaming festival" is all people need to get the idea.
### Formy-Duval: How did you make use of open source software in running MAGFest?
Becker: At the time I became the head of MAGFest, I had already written a game in Python, so I felt most comfortable also writing our registration system in Python. It was a pretty easy decision since there were no costs involved, and I already had the experience. Later on, our online registration system and rideshare interfaces were written in PHP/MySQL, and we used Kboard for our forums. Eventually, this evolved to us rolling our own registration system from scratch in Python, which we also use at the event, and running Drupal on the main website. At one point, I also wrote a system to manage the video room and challenge stations in Python. Oh, and we had a few game music listening stations that you could flip through tracks and liner notes of iconic game OSTs (Original Sound Tracks) and bands who played MAGFest.
### Formy-Duval: I understand that a few years ago you reduced your responsibilities at MAGFest to pursue new projects. What was your next endeavor?
Becker: I was always rather heavily into the game music scene and tried to bring as much of it to MAGFest as possible. As I became a part of those communities more and more, I wanted to participate. I wrote some medleys, covers, and arrangements of video game tunes using free, open source versions of the DOS and Windows demoscene tools that I had used before, which were also free but not necessarily open source. I released a few tracks in the first few years of running MAGFest, and then after some tough love and counseling from Jake Kaufman (also known as virt; Shovel Knight and Shantae are on his resume, among others), I switched gears to something I was much better at—chiptunes. Even though I had written PC speaker beeps and boops as a kid with my Compaq Portable and MOD files in the demoscene throughout the 90s, I released the first NES-spec track that I was truly proud to call my own in 2006. Several pop tributes and albums followed.
In 2010, I was approached by multiple individuals for game soundtrack work. Even though the soundtrack work didn't affect it much, I began to scale back some of my responsibilities with MAGFest more seriously, and in 2011, I decided to take a much larger step into the background. I would stay on the board as a consultant and help people learn what they needed to run their departments, but I was no longer at the helm. At the same time, my part-time job, which was paying the bills, laid off all of their workers, and I suddenly found myself with a lot of spare time. I began writing Pretty Eight Machine, a Nine Inch Nails tribute, which I spent over a year on, and between that and the game soundtrack work, I proved to myself that I could put food on the table (if only barely) with music, and this is what I wanted to do next.
![Inverse Phase CTM Tracker][7]
Copyright Inverse Phase, Used with permission.
### Formy-Duval: What is your workspace like in terms of hardware and software?
Becker: In my DOS/Windows days, I used mostly FastTracker 2. In Linux, I replaced that with SoundTracker (not Karsten Obarski's original, but a GTK rewrite; see [soundtracker.org][8]). These days, SoundTracker is in a state of flux—although I still need to try the new GTK3 version—but [MilkyTracker][9] is a good replacement when I can't use SoundTracker. Good old FastTracker 2 also runs in DOSBox, if I really need the original. This was when I started using Linux, however, so this is stuff I figured out 20-25 years ago.
Within the last ten years, I've gravitated away from sample-based music and towards chiptunes—music synthesized by old sound chips from 8- and 16-bit game systems and computers. There is a very good cross-platform tool called [Deflemask][10] to write music for a lot of these systems. A few of the systems I want to write music for aren't supported, though, and Deflemask is closed source, so I've begun building my own music composition environment from scratch with Python and [Pygame][11]. I maintain my code tree using Git and will control hardware synthesizer boards using open source [KiCad][12].
### Formy-Duval: What projects are you currently focused on?
Becker: I work on game soundtracks and music commissions off and on. While that's going on, I've also been working on starting an electronic entertainment museum called [Bloop][13]. We're doing a lot of cool things with archival and inventory, but perhaps the most exciting thing is that we've been building exhibits with Raspberry Pis. They're so versatile, and it's weird to think that, if I had tried to do this even ten years ago, I wouldn't have had small single-board computers to drive my exhibits; I probably would have bolted a laptop to the back of a flat-panel!
### Formy-Duval: There are a lot more game platforms coming to Linux now, such as Steam, Lutris, and Play-on-Linux. Do you think this trend will continue, and these are here to stay?
Becker: As someone who's been gaming on Linux for 25 years—in fact, I was brought to Linux _because_ of games—I think I find this question harder than most people would. I've been running Linux-native games for decades, and I've even had to eat my "either a Linux solution exists or can be written" words back in the day, but eventually, I did, and wrote a Linux game.
Real talk: Android's been out since 2008. If you've played a game on Android, you've played a game on Linux. Steam's been available for Linux for eight years. The Steambox/SteamOS was announced only a year after Steam. I don't hear a whole lot about Lutris or Play-on-Linux, but I know they exist and hope they succeed. I do see a pretty big following for GOG, and I think that's pretty neat. I see a lot of quality game ports coming out of people like Ryan Gordon (icculus) and Ethan Lee (flibitijibibo), and some companies even port in-house. Game engines like Unity and Unreal already support Linux. Valve has incorporated Proton into the Linux version of Steam for something like two years now, so now Linux users don't even have to search for Linux-native versions of their games.
I can say that I think most gamers expect and will continue to expect the level of support they're already receiving from the retail game market. Personally, I hope that level goes up and not down!
_Learn more about Brendan's work as [Inverse Phase][14]._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/linux-open-source-music
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other21x_cc.png?itok=JJJ5z6aB (Wires plugged into a network switch)
[2]: http://icculus.org/pyddr/
[3]: http://magfest.org/
[4]: http://www.inversephase.com/
[5]: https://opensource.com/sites/default/files/uploads/inverse_phase_performance_bw.png (Inverse Phase performance photo)
[6]: https://en.wikipedia.org/wiki/Walnut_Creek_CDROM
[7]: https://opensource.com/sites/default/files/uploads/inversephase_ctm_tracker_screenshot.png (Inverse Phase CTM Tracker)
[8]: http://soundtracker.org
[9]: http://www.milkytracker.org
[10]: http://www.deflemask.com
[11]: http://www.pygame.org
[12]: http://www.kicad-pcb.org
[13]: http://bloopmuseum.com
[14]: https://www.inversephase.com

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: (chai-yuan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Playing Music on your Fedora Terminal with MPD and ncmpcpp)
[#]: via: (https://fedoramagazine.org/playing-music-on-your-fedora-terminal-with-mpd-and-ncmpcpp/)
[#]: author: (Carmine Zaccagnino https://fedoramagazine.org/author/carzacc/)
Playing Music on your Fedora Terminal with MPD and ncmpcpp
======
![][1]
MPD, as the name implies, is a Music Playing Daemon. It can play music but, being a daemon, any piece of software can interface with it and play sounds, including some CLI clients.
One of them is called _ncmpcpp_, which is an improvement over the pre-existing _ncmpc_ tool. The name change doesn't have much to do with the language they're written in: they're both C++, but _ncmpcpp_ is called that because it's the _NCurses Music Playing Client Plus Plus_.
### Installing MPD and ncmpcpp
The _ncmpcpp_ client can be installed from the official Fedora repositories directly with DNF:
```
$ sudo dnf install ncmpcpp
```
On the other hand, MPD has to be installed from the RPMFusion _free_ repositories, which you can enable, [as per the official installation instructions][2], by running
```
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```
and then you can install MPD by running
```
$ sudo dnf install mpd
```
### Configuring and Starting MPD
The most painless way to set up MPD is to run it as a regular user. The default is to run it as the dedicated _mpd_ user, but that causes all sorts of issues with permissions.
Before we can run it, we need to create a local config file that will allow it to run as a regular user.
To do that, create a subdirectory called _mpd_ in _~/.config_:
```
$ mkdir ~/.config/mpd
```
copy the default config file into this directory:
```
$ cp /etc/mpd.conf ~/.config/mpd
```
and then edit it with a text editor like _vim_, _nano_ or _gedit_:
```
$ nano ~/.config/mpd/mpd.conf
```
I recommend you read through all of it to check if there's anything you need to do, but for most setups you can delete everything and just leave the following:
```
db_file "~/.config/mpd/mpd.db"
log_file "syslog"
```
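If MPD can't find your library with this minimal configuration, you may also need to point it at your music directory explicitly; the path below is illustrative:

```
# Illustrative path; adjust to wherever your collection lives
music_directory    "~/Music"
```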
At this point you should be able to just run
```
$ mpd
```
with no errors, which will start the MPD daemon in the background.
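You can confirm the daemon is up before moving on:

```
# List any running mpd processes
$ pgrep -a mpd
```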
### Using ncmpcpp
Simply run
```
$ ncmpcpp
```
and you'll see an ncurses-powered graphical user interface in your terminal.
Press _4_ and you should see your local music library, be able to change the selection using the arrow keys and press _Enter_ to play a song.
Doing this multiple times will create a _playlist_, which allows you to move to the next track using the _>_ button (not the right arrow; the _>_ closing angle bracket character) and go back to the previous track with _<_. The _+_ and _-_ buttons increase and decrease the volume. The _Q_ button quits ncmpcpp, but it doesn't stop the music. You can play and pause with _P_.
You can see the current playlist by pressing the _1_ button (this is the default view). From this view you can press _i_ to look at the information (tags) about the current song. You can change the tags of the currently playing (or paused) song by pressing _6_.
Pressing the \ button will add (or remove) an informative panel at the top of the view. In the top left, you should see something that looks like this:
```
[------]
```
Pressing the _r_, _z_, _y_, _R_, and _x_ buttons will respectively toggle the _repeat_, _random_, _single_, _consume_, and _crossfade_ playback modes and will replace one of the _-_ characters in that little indicator with the initial of the selected mode.
Pressing the _F1_ button will display some help text, which contains a list of keybindings, so there's no need to write a complete list here. So now go on, be geeky, and play all your music from your terminal!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/playing-music-on-your-fedora-terminal-with-mpd-and-ncmpcpp/
作者:[Carmine Zaccagnino][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/carzacc/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/02/play_music_mpd-816x346.png
[2]: https://rpmfusion.org/Configuration

View File

@ -0,0 +1,222 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Scan Kubernetes for errors with KRAWL)
[#]: via: (https://opensource.com/article/20/2/kubernetes-scanner)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
Scan Kubernetes for errors with KRAWL
======
The KRAWL script identifies errors in Kubernetes pods and containers.
![Ship captain sailing the Kubernetes seas][1]
When you're running containers with Kubernetes, you often find that they pile up. This is by design. It's one of the advantages of containers: they're cheap to start whenever a new one is needed. You can use a front-end like OpenShift or OKD to manage pods and containers. Those make it easy to visualize what you have set up, and have a rich set of commands for quick interactions.
If a platform to manage containers doesn't fit your requirements, though, you can also get that information using only a Kubernetes toolchain, but there are a lot of commands you need for a full overview of a complex environment. For that reason, I wrote [KRAWL][2], a simple script that scans pods and containers under the namespaces on Kubernetes clusters and displays the output of events, if any are found. It can also be used as a Kubernetes plugin for the same purpose. It's a quick and easy way to get a lot of useful information.
### Prerequisites
* You must have kubectl installed.
* Your cluster's kubeconfig must be either in its default location ($HOME/.kube/config) or exported (KUBECONFIG=/path/to/kubeconfig).
### Usage
```
$ ./krawl
```
![KRAWL script][3]
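To install the script as a kubectl plugin instead, you can rely on kubectl's plugin mechanism (available since kubectl 1.12): any executable named **kubectl-something** on your PATH becomes runnable as **kubectl something**. The install path below is an illustration, not part of KRAWL's documentation:
```
$ sudo install -m 0755 krawl /usr/local/bin/kubectl-krawl
$ kubectl krawl
```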
### The script
```
#!/bin/bash
# AUTHOR: Abhishek Tamrakar
# EMAIL: abhishek.tamrakar08@gmail.com
# LICENSE: Copyright (C) 2018 Abhishek Tamrakar
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#       <http://www.apache.org/licenses/LICENSE-2.0>
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.
##
#define the variables
KUBE_LOC=~/.kube/config
#define variables
KUBECTL=$(which kubectl)
GET=$(which egrep)
AWK=$(which awk)
red=$(tput setaf 1)
normal=$(tput sgr0)
# define functions
# wrapper for printing info messages
info()
{
  printf '\n\e[34m%s\e[m: %s\n' "INFO" "$@"
}
# cleanup when all done
cleanup()
{
  rm -f results.csv
}
# just check if the command we are about to call is available
checkcmd()
{
  #check if command exists
  local cmd=$1
  if [ -z "${!cmd}" ]
  then
    printf '\n\e[31m%s\e[m: %s\n' "ERROR"  "check if $1 is installed !!!"
    exit 1
  fi
}
get_namespaces()
{
  #get namespaces
  namespaces=( \
          $($KUBECTL get namespaces --ignore-not-found=true | \
          $AWK '/Active/ {print $1}' \
          ORS=" ") \
          )
#exit if namespaces are not found
if [ ${#namespaces[@]} -eq 0 ]
then
  printf '\n\e[31m%s\e[m: %s\n' "ERROR"  "No namespaces found!!"
  exit 1
fi
}
#get events for pods in errored state
get_pod_events()
{
  printf '\n'
  if [ ${#ERRORED[@]} -ne 0 ]
  then
      info "${#ERRORED[@]} errored pods found."
      for CULPRIT in ${ERRORED[@]}
      do
        info "POD: $CULPRIT"
        info
        $KUBECTL get events \
        --field-selector=involvedObject.name=$CULPRIT \
        -ocustom-columns=LASTSEEN:.lastTimestamp,REASON:.reason,MESSAGE:.message \
        --all-namespaces \
        --ignore-not-found=true
      done
  else
      info "0 pods with errored events found."
  fi
}
#define the logic
get_pod_errors()
{
  printf "%s %s %s\n" "NAMESPACE,POD_NAME,CONTAINER_NAME,ERRORS" &gt; results.csv
  printf "%s %s %s\n" "---------,--------,--------------,------" &gt;&gt; results.csv
  for NAMESPACE in ${namespaces[@]}
  do
    while IFS=' ' read -r POD CONTAINERS
    do
      for CONTAINER in ${CONTAINERS//,/ }
      do
        COUNT=$($KUBECTL logs --since=1h --tail=20 $POD -c $CONTAINER -n $NAMESPACE 2>/dev/null | \
        $GET -c '^error|Error|ERROR|Warn|WARN')
        if [ $COUNT -gt 0 ]
        then
            STATE=("${STATE[@]}" "$NAMESPACE,$POD,$CONTAINER,$COUNT")
        else
        #catch pods in errored state
            ERRORED=($($KUBECTL get pods -n $NAMESPACE --no-headers=true | \
                awk '!/Running/ {print $1}' ORS=" ") \
                )
        fi
      done
    done< <($KUBECTL get pods -n $NAMESPACE --ignore-not-found=true -o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name --no-headers=true)
  done
  printf "%s\n" ${STATE[@]:-None} &gt;&gt; results.csv
  STATE=()
}
#define usage for separate run
usage()
{
cat &lt;&lt; EOF
  USAGE: "${0##*/} &lt;/path/to/kube-config&gt;(optional)"
  This program is a free software under the terms of Apache 2.0 License.
  COPYRIGHT (C) 2018 Abhishek Tamrakar
EOF
exit 0
}
#check if basic commands are found
trap cleanup EXIT
checkcmd KUBECTL
#
#set the ground
if [ $# -lt 1 ]; then
  if [ ! -e ${KUBE_LOC} -a ! -s ${KUBE_LOC} ]
  then
    info "A readable kube config location is required!!"
    usage
  fi
elif [ $# -eq 1 ]
then
  export KUBECONFIG=$1
elif [ $# -gt 1 ]
then
  usage
fi
#play
get_namespaces
get_pod_errors
printf '\n%40s\n' 'KRAWL'
printf '%s\n' '---------------------------------------------------------------------------------'
printf '%s\n' '  Krawl is a command line utility to scan pods and prints name of errored pods   '
printf '%s\n\n' ' +and containers within. To use it as kubernetes plugin, please check their page '
printf '%s\n' '================================================================================='
cat results.csv | sed 's/,/,|/g'| column -s ',' -t
get_pod_events
```
* * *
_This was originally published as the README in [KRAWL's GitHub repository][2] and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/kubernetes-scanner
作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://github.com/abhiTamrakar/kube-plugins/tree/master/krawl
[3]: https://opensource.com/sites/default/files/uploads/krawl_0.png (KRAWL script)
[4]: mailto:abhishek.tamrakar08@gmail.com

View File

@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top hacks for the YaCy open source search engine)
[#]: via: (https://opensource.com/article/20/2/yacy-search-engine-hacks)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Top hacks for the YaCy open source search engine
======
Rather than adapting to someone else's vision, customize your search engine for the internet you want with YaCy.
![Browser of things][1]
In my article about [getting started with YaCy][2], I explained how to install and start using the [YaCy][3] peer-to-peer search engine. One of the most exciting things about YaCy, however, is the fact that it's a local client. Each user owns and operates a node in a globally distributed search engine infrastructure, which means each user is in full control of how they navigate and experience the World Wide Web.
For instance, Google used to provide the URL google.com/linux as a shortcut to filter searches for Linux-related topics. It was a small feature that many people found useful, but [topical shortcuts were dropped][4] in 2011. 
YaCy makes it possible to customize your search experience.
### Customize YaCy
Once you've installed YaCy, navigate to your search page at **localhost:8090**. To customize your search engine, click the **Administration** button in the top-right corner (it may be concealed in a menu icon on small screens).
The admin panel allows you to configure how YaCy uses your system resources and how it interacts with other YaCy clients.
![YaCy profile selector][5]
For instance, to configure an alternative port and set RAM and disk usage, use the **First steps** menu in the sidebar. To monitor YaCy activity, use the **Monitoring** panel. Most features are discoverable by clicking through the panels, but here are some of my favorites.
### Search appliance
Several companies have offered [intranet search appliances][6], but with YaCy, you can implement one for free. Whether you want to search through your own data or implement a search system for local file shares at your business, you can run YaCy as an internal indexer for files accessible over HTTP, FTP, and SMB (Samba). People on your local network can use your personalized instance of YaCy to find shared files, and none of the data is shared with users outside your network.
### Network configuration
YaCy favors isolation and privacy by default. You can adjust how you connect to the peer-to-peer network in the **Network Configuration** panel, which is revealed by clicking the link located at the top of the **Use Case & Account** configuration screen.
![YaCy network configuration][7]
### Crawl a site
Peer-to-peer indexing is user-driven. There's no mega-corporation initiating searches on every accessible page on the internet, so a site isn't indexed until someone deliberately crawls it with YaCy.
The YaCy client provides two options to help you crawl the web: you can perform a manual crawl, and you can make YaCy available for suggested crawls.
![YaCy advanced crawler][8]
#### Start a manual crawling job
A manual crawl is when you enter the URL of a site you want to index and start a YaCy crawl job. To do this, click the **Advanced Crawler** link in the **Production** sidebar. Enter one or more URLs, then scroll to the bottom of the page and enable the **Do remote indexing** option. This enables your client to broadcast the URLs it is indexing, so clients that have opted to accept requests can help you perform the crawl.
To start the crawl, click the **Start New Crawl Job** button at the bottom of the page. I use this method to index sites I use frequently or find useful.
Once the crawl job starts, YaCy indexes the URLs you enter and stores the index on your local machine. As long as you are running in senior mode (meaning your firewall permits incoming and outgoing traffic on port 8090), your index is available to YaCy users all over the globe.
#### Join in on a crawl
While some very dedicated YaCy senior users may crawl the internet compulsively, there are a _lot_ of sites out there in the world. It might seem impossible to match the resources of popular spiders and bots, but because YaCy has so many users, they can band together as a community to index more of the internet than any one user could do alone. If you activate YaCy to broadcast requests for site crawls, participating clients can work together to crawl sites you might not otherwise think to crawl manually.
To configure your client to accept jobs from others, click the **Advanced Crawler** link in the left sidebar menu. In the **Advanced Crawler** panel, click the **Remote Crawling** link under the **Network Harvesting** heading at the top of the page. Enable remote crawls by placing a tick in the checkbox next to the **Load** setting.
![YaCy remote crawling][9]
### YaCy monitoring and more
YaCy is a surprisingly robust search engine, providing you with the opportunity to theme and refine your experience in nearly any way you could want. You can monitor the activity of your YaCy client in the **Monitoring** panel, so you can get an idea of how many people are benefiting from the work of the YaCy community and also see what kind of activity it's generating for your computer and network.
![YaCy monitoring screen][10]
### Search engines make a difference
The more time you spend with the Administration screen, the more fun it becomes to ponder how the search engine you use can change your perspective. Your experience of the internet is shaped by the results you get back for even the simplest of queries. You might notice, in fact, how different one person's "internet" is from another person's when you talk to computer users from a different industry. For some people, the web is littered with ads and promoted searches and suffers from the tunnel vision of learned responses to queries. For instance, if someone consistently searches for answers about X, most commercial search engines will give weight to query responses that concern X. That's a useful feature on the one hand, but it occludes answers that require Y, even though that might be the better solution for a specific task.
As in real life, stepping outside a manufactured view of the world can be healthy and enlightening. Try YaCy, and see what you discover.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/yacy-search-engine-hacks
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things)
[2]: https://opensource.com/article/20/2/open-source-search-engine
[3]: https://yacy.net/
[4]: https://www.linuxquestions.org/questions/linux-news-59/is-there-no-more-linux-google-884306/
[5]: https://opensource.com/sites/default/files/uploads/yacy-profiles.jpg (YaCy profile selector)
[6]: https://en.wikipedia.org/wiki/Vivisimo
[7]: https://opensource.com/sites/default/files/uploads/yacy-network-config.jpg (YaCy network configuration)
[8]: https://opensource.com/sites/default/files/uploads/yacy-advanced-crawler.jpg (YaCy advanced crawler)
[9]: https://opensource.com/sites/default/files/uploads/yacy-remote-crawl-accept.jpg (YaCy remote crawling)
[10]: https://opensource.com/sites/default/files/uploads/yacy-monitor.jpg (YaCy monitoring screen)

View File

@ -0,0 +1,210 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automate your live demos with this shell script)
[#]: via: (https://opensource.com/article/20/2/live-demo-script)
[#]: author: (Lisa Seelye https://opensource.com/users/lisa)
Automate your live demos with this shell script
======
Try this script the next time you give a presentation to prevent making
typos in front of a live audience.
![Person using a laptop][1]
I gave a talk about [multi-architecture container images][2] at [LISA19][3] in October that included a lengthy live demo. Rather than writing out 30+ commands and risking typos, I decided to automate the demo with a shell script.
The script mimics what appears as input/output and runs the real commands in the background, pausing at various points so I can narrate what is going on. I'm very pleased with how the script turned out and the effect on stage. The script and supporting materials for my presentation are available on [GitHub][4] under an Apache 2.0 license.
### The script
```
#!/bin/bash
set -e
IMG=thedoh/lisa19
REGISTRY=docker.io
VERSION=19.10.1
# Plan B with GCR:
#IMG=dulcet-iterator-213018
#REGISTRY=us.gcr.io
#VERSION=19.10.1
pause() {
  local step="${1}"
  ps1
  echo -n "# Next step: ${step}"
  read
}
ps1() {
  echo -ne "\033[01;32m${USER}@$(hostname -s) \033[01;34m$(basename $(pwd)) \$ \033[00m"
}
echocmd() {
  echo "$(ps1)$@"
}
docmd() {
  echocmd $@
  $@
}
step0() {
  local registry="${1}" img="${2}" version="${3}"
  # Mindful of tokens in ~/.docker/config.json
  docmd grep experimental ~/.docker/config.json
 
  docmd cd ~/go/src/github.com/lisa/lisa19-containers
 
  pause "This is what we'll be building"
  docmd export REGISTRY=${registry}
  docmd export IMG=${img}
  docmd export VERSION=${version}
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean
}
step1() {
  local registry="${1}" img="${2}" version="${3}"
 
  docmd docker build --no-cache --platform=linux/amd64 --build-arg=GOARCH=amd64 -t ${REGISTRY}/${IMG}:amd64-${VERSION} .
  pause "ARM64 image next"
  docmd docker build --no-cache --platform=linux/arm64 --build-arg=GOARCH=arm64 -t ${REGISTRY}/${IMG}:arm64-${VERSION} .
}
step2() {
  local registry="${1}" img="${2}" version="${3}" origpwd=$(pwd) savedir=$(mktemp -d) jsontemp=$(mktemp -t XXXXX)
  chmod 700 $jsontemp $savedir
  # Set our way back home and get ready to fix our arm64 image to amd64.
  echocmd 'origpwd=$(pwd)'
  echocmd 'savedir=$(mktemp -d)'
  echocmd "mkdir -p \$savedir/change"
  mkdir -p $savedir/change &>/dev/null
  echocmd "docker save ${REGISTRY}/${IMG}:arm64-${VERSION} 2>/dev/null 1> \$savedir/image.tar"
  docker save ${REGISTRY}/${IMG}:arm64-${VERSION} 2>/dev/null 1> $savedir/image.tar
  pause "untar the image to access its metadata"
 
  echocmd "cd \$savedir/change"
  cd $savedir/change
  echocmd tar xf \$savedir/image.tar
  tar xf $savedir/image.tar
  docmd ls -l
 
  pause "find the JSON config file"
  echocmd 'jsonfile=$(jq -r ".[0].Config" manifest.json)'
  jsonfile=$(jq -r ".[0].Config" manifest.json)
 
  pause "notice the original metadata says amd64"
  echocmd jq '{architecture: .architecture, ID: .config.Image}' \$jsonfile
  jq '{architecture: .architecture, ID: .config.Image}' $jsonfile
 
  pause "Change from amd64 to arm64 using a temp file"
  echocmd "jq '.architecture = \"arm64\"' \$jsonfile &gt; \$jsontemp"
  jq '.architecture = "arm64"' $jsonfile &gt; $jsontemp
  echocmd /bin/mv -f -- \$jsontemp \$jsonfile
  /bin/mv -f -- $jsontemp $jsonfile
  pause "Check to make sure the config JSON file says arm64 now"
  echocmd jq '{architecture: .architecture, ID: .config.Image}' \$jsonfile
  jq '{architecture: .architecture, ID: .config.Image}' $jsonfile
 
  pause "delete the image with the incorrect metadata"
  docmd docker rmi ${REGISTRY}/${IMG}:arm64-${VERSION}
 
  pause "Re-compress the ARM64 image and load it back into Docker, then clean up the temp space"
  echocmd 'tar cf - * | docker load'
  tar cf - * | docker load
  docmd cd $origpwd
  echocmd "/bin/rm -rf -- \$savedir"
  /bin/rm -rf -- $savedir &>/dev/null
}
step3() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker push ${registry}/${img}:amd64-${version}
  pause "push ARM64 image to ${registry}"
  docmd docker push ${registry}/${img}:arm64-${version}
}
step4() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker manifest create ${registry}/${img}:${version} ${registry}/${img}:arm64-${version} ${registry}/${img}:amd64-${version}
 
  pause "add a reference to the amd64 image to the manifest list"
  docmd docker manifest annotate ${registry}/${img}:${version} ${registry}/${img}:amd64-${version} --os linux --arch amd64
  pause "now add arm64"
  docmd docker manifest annotate ${registry}/${img}:${version} ${registry}/${img}:arm64-${version} --os linux --arch arm64
}
step5() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker manifest push ${registry}/${img}:${version}
}
step6() {
  local registry="${1}" img="${2}" version="${3}"
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean
 
  pause "ask docker.io if ${img}:${version} has a linux/amd64 manifest, and run it"
  docmd docker pull --platform linux/amd64 ${registry}/${img}:${version}
  docmd docker run --rm -i ${registry}/${img}:${version}
 
  pause "clean slate again"
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean
 
  pause "now repeat for linux/arm64 and see what it gives us"
  docmd docker pull --platform linux/arm64 ${registry}/${img}:${version}
  set +e
  docmd docker run --rm -i ${registry}/${img}:${version}
  set -e
  if [[ $(uname -s) == "Darwin" ]]; then
    pause "note about Docker on Mac and binfmt_misc: binfmt_misc lets a mac run arm64 binaries in the Docker VM"
  fi
}
pause "initial setup"
step0 ${REGISTRY} ${IMG} ${VERSION}
pause "1 build constituent images"
step1 ${REGISTRY} ${IMG} ${VERSION}
pause "2 fix ARM64 metadata"
step2 ${REGISTRY} ${IMG} ${VERSION}
pause "3 push constituent images up to docker.io"
step3 ${REGISTRY} ${IMG} ${VERSION}
pause "4 build the manifest list for the image"
step4 ${REGISTRY} ${IMG} ${VERSION}
pause "5 Push the manifest list to docker.io"
step5 ${REGISTRY} ${IMG} ${VERSION}
pause "6 clean slate, and validate the list-based image"
step6 ${REGISTRY} ${IMG} ${VERSION}
docmd echo 'Manual steps all done!'
make REGISTRY=${REGISTRY} IMG=${IMG} VERSION=${VERSION} clean &>/dev/null
```
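The reusable core of the technique is the **ps1**/**echocmd**/**docmd**/**pause** quartet near the top of the script. As a hedged starting point for your own talks, a minimal skeleton (the two demo commands at the end are placeholders) could look like this:
```
#!/bin/bash
set -e

# Print a fake prompt so the terminal looks interactive to the audience.
ps1() {
  echo -ne "\033[01;32m${USER}@$(hostname -s) \033[01;34m$(basename $(pwd)) \$ \033[00m"
}

# Echo a command as if it had just been typed at that prompt.
echocmd() {
  echo "$(ps1)$*"
}

# Echo the command, then actually run it.
docmd() {
  echocmd "$@"
  "$@"
}

# Stop and wait for Enter so you can narrate the next step.
pause() {
  ps1
  echo -n "# Next step: ${1}"
  read
}

pause "show the running kernel"
docmd uname -r
pause "show disk usage of /var/log"
docmd du -sh /var/log
```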
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/live-demo-script
作者:[Lisa Seelye][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lisa
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://www.usenix.org/conference/lisa19/presentation/seelye
[3]: https://www.usenix.org/conference/lisa19
[4]: https://github.com/lisa/lisa19-containers

View File

@ -0,0 +1,211 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Basic kubectl and Helm commands for beginners)
[#]: via: (https://opensource.com/article/20/2/kubectl-helm-commands)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
Basic kubectl and Helm commands for beginners
======
Take a trip to the grocery store to shop for the commands you'll need to
get started with these Kubernetes tools.
![A person working.][1]
Recently, my husband was telling me about an upcoming job interview where he would have to run through some basic commands on a computer. He was anxious about the interview, but the best way for him to learn and remember things has always been to equate the thing he doesn't know to something very familiar to him. Because our conversation happened right after I was roaming the grocery store trying to decide what to cook that evening, it inspired me to write about kubectl and Helm commands by equating them to an ordinary trip to the grocer.
[Helm][2] is a tool to manage applications within Kubernetes. You can easily deploy charts with your application information, allowing them to be up and preconfigured in minutes within your Kubernetes environment. When you're learning something new, it's always helpful to look at chart examples to see how they are used, so if you have time, take a look at these stable [charts][3].
[Kubectl][4] is a command line that interfaces with Kubernetes environments, allowing you to configure and manage your cluster. It does require some configuration to work within environments, so take a look through the [documentation][5] to see what you need to do.
I'll use namespaces in the examples, which you can learn about in my article [_Kubernetes namespaces for beginners_][6].
Now that we have that settled, let's start shopping for basic kubectl and Helm commands!
### Helm list
What is the first thing you do before you go to the store? Well, if you're organized, you make a **list**. Likewise, this is the first basic Helm command I will explain.
In a Helm-deployed application, **list** provides details about an application's current release. In this example, I have one deployed application—the Jenkins CI/CD application. Running the basic **list** command always brings up the default namespace. Since I don't have anything deployed in the default namespace, nothing shows up:
```
$helm list
NAME    NAMESPACE    REVISION    UPDATED    STATUS    CHART    APP VERSION
```
However, if I run the command with an extra flag, my application and information appear:
```
$helm list --all-namespaces
NAME     NAMESPACE  REVISION  UPDATED                   STATUS      CHART           APP  VERSION
jenkins  jenkins        1         2020-01-18 16:18:07 EST   deployed    jenkins-1.9.4   lts
```
Finally, I can direct the **list** command to check only the namespace I want information from:
```
$helm list --namespace jenkins
NAME     NAMESPACE  REVISION  UPDATED                   STATUS    CHART          APP VERSION
jenkins    jenkins      1              2020-01-18 16:18:07 EST  deployed  jenkins-1.9.4  lts    
```
Now that I have a list and know what is on it, I can go and get my items with **get** commands! I'll start with the Kubernetes cluster; what can I get from it?
### Kubectl get
The **kubectl get** command gives information about many things in Kubernetes, including pods, nodes, and namespaces. Again, without a namespace flag, you'll always land in the default. First, I'll get the namespaces in the cluster to see what's running:
```
$kubectl get namespaces
NAME             STATUS   AGE
default          Active   53m
jenkins          Active   44m
kube-node-lease  Active   53m
kube-public      Active   53m
kube-system      Active   53m
```
Now that I have the namespaces running in my environment, I'll get the nodes and see how many are running:
```
$kubectl get nodes
NAME       STATUS   ROLES       AGE   VERSION
minikube   Ready    master  55m   v1.16.2
```
I have one node up and running, mainly because my Minikube is running on one small server. To get the pods running on my one node:
```
$kubectl get pods
No resources found in default namespace.
```
Oops, it's empty. I'll get what's in my Jenkins namespace with:
```
$kubectl get pods --namespace jenkins
NAME                      READY  STATUS   RESTARTS  AGE
jenkins-7fc688c874-mh7gv  1/1    Running  0         40m
```
Good news! There's one pod, it hasn't restarted, and it has been running for 40 minutes. Well, since I know the pod is up, I want to see what I can get from Helm.
### Helm get
**Helm get** is a little more complicated because this **get** command requires more than an application name, and you can request multiple things from applications. I'll begin by getting the values used to make the application, and then I'll show a snip of the **get all** action, which provides all the data related to the application.
```
$helm get values jenkins -n jenkins
USER-SUPPLIED VALUES:
null
```
Since I did a very minimal stable-only install, the configuration didn't change. If I run the **all** command, I get everything out of the chart:
```
$helm get all jenkins -n jenkins
```
![output from helm get all command][7]
This produces a ton of data, so I always recommend keeping a copy of a Helm chart so you can look over the templates in the chart. I also create my own values to see what I have in place.
Now that I have all my goodies in my shopping cart, I'll check the labels that **describe** what's in them. These examples pertain only to kubectl, and they describe what I've deployed through Helm.
### Kubectl describe
As I did with the **get** command, which can describe just about anything in Kubernetes, I'll limit my examples to namespaces, pods, and nodes. Since I know I'm working with one of each, this will be easy.
```
$kubectl describe ns jenkins
Name:           jenkins
Labels:         <none>
Annotations:    <none>
Status:         Active
No resource quota.
No resource limits.
```
I can see my namespace's name, that it is active, and that it has no resource quotas or limits.
The **describe pods** command produces a large amount of information, so I'll provide a small snip of the output. If you run the command without the pod name, it will return information for all of the pods in the namespace, which can be overwhelming. So, be sure you always include the pod name with this command. For example:
```
$kubectl describe pods jenkins-7fc688c874-mh7gv --namespace jenkins
```
![output of kubectl-describe-pods][8]
This provides (among many other things) the status of the container, how the container is managed, the label, and the image used in the pod. The data not in this abbreviated output includes resource requests and limits along with any conditions, init containers, and storage volume information applied in a Helm values file. This data is useful if your application is crashing due to inadequate resources, a configured init container that runs a prescript for configuration, or generated hidden passwords that shouldn't be in a plain text YAML file.
Finally, I'll use **describe node**, which (of course) describes the node. Since this example has just one, named Minikube, that is what I'll use; if you have multiple nodes in your environment, you must include the node name of interest.
As with pods, the node command produces an abundance of data, so I'll include just a snip of the output.
```
$kubectl describe node minikube
```
![output of kubectl describe node][9]
Note that **describe node** is one of the more important basic commands. As this image shows, the command returns statistics that indicate when the node is running out of resources, and this data is excellent for alerting you when you need to scale up (if you do not have autoscaling in your environment). Other things not in this snippet of output include the percentages of requests made for all resources and limits, as well as the age and allocation of resources (e.g., for my application).
### Checking out
With these commands, I've finished my shopping and gotten everything I was looking for. Hopefully, these basic commands can help you, too, in your day-to-day with Kubernetes.
I urge you to work with the command line often and learn the shorthand flags available in the Help sections, which you can access by running these commands:
```
$helm --help
```
and
```
$kubectl -h
```
### Peanut butter and jelly
Some things just go together like peanut butter and jelly. Helm and kubectl are a little like that.
I often use these tools in my environment. Because they have many similarities in a ton of places, after using one, I usually need to follow up with the other. For example, I can do a Helm deployment and watch it fail using kubectl. Try them together, and see what they can do for you.
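For example, here is a hedged sketch of that workflow (the chart name and flags assume Helm 3 with the old **stable** chart repository configured, matching the Jenkins example above):
```
$ helm install jenkins stable/jenkins --namespace jenkins
$ kubectl get pods --namespace jenkins --watch
```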
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/kubectl-helm-commands
作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
[2]: https://helm.sh/
[3]: https://github.com/helm/charts/tree/master/stable
[4]: https://kubernetes.io/docs/reference/kubectl/kubectl/
[5]: https://kubernetes.io/docs/reference/kubectl/overview/
[6]: https://opensource.com/article/19/12/kubernetes-namespaces
[7]: https://opensource.com/sites/default/files/uploads/helm-get-all.png (output from helm get all command)
[8]: https://opensource.com/sites/default/files/uploads/kubectl-describe-pods.png (output of kubectl-describe-pods)
[9]: https://opensource.com/sites/default/files/uploads/kubectl-describe-node.png (output of kubectl describe node)

View File

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dino is a Modern Looking Open Source XMPP Client)
[#]: via: (https://itsfoss.com/dino-xmpp-client/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Dino is a Modern Looking Open Source XMPP Client
======
_**Brief: Dino is a relatively new open-source XMPP client that tries to offer a good user experience while encouraging privacy-focused users to utilize XMPP for messaging.**_
### Dino: An Open Source XMPP Client
![][1]
[XMPP][2] (Extensible Messaging and Presence Protocol) is a decentralized network model that facilitates instant messaging and collaboration. Decentralized means there is no central server with access to your data; communication happens directly between the endpoints.
Some of us might call it “old school” tech, probably because XMPP clients usually have a very poor user experience, or simply because it takes time to get used to (or to set up).
That's when [Dino][3] comes to the rescue: a modern XMPP client that provides a clean and snappy user experience without compromising your privacy.
### The User Experience
![][4]
Dino does try to improve the user experience as an XMPP client, but it is worth noting that its look and feel will depend on your Linux distribution to some extent. Your icon theme or GNOME theme might make it look better or worse in your setup.
Technically, the user interface is quite simple and easy to use. So, I suggest you take a look at some of the [best icon themes][5] and [GNOME themes][6] for Ubuntu to tweak the look of Dino.
### Features of Dino
![Dino Screenshot][7]
You can expect to use Dino as an alternative to Slack, [Signal][8] or [Wire][9] for your business or personal usage.
It offers all of the essential features you would need in a messaging application, let us take a look at a list of things that you can expect from it:
* Decentralized Communication
  * Public XMPP servers supported, if you cannot set up your own server
  * UI similar to other popular messengers, so it's easy to use
  * Image & file sharing
* Multiple accounts supported
* Advanced message search
  * [OpenPGP][10] & [OMEMO][11] encryption supported
* Lightweight native desktop application
### Installing Dino on Linux
You may or may not find it listed in your software center. Dino provides ready-to-use binaries for Debian (deb) and Fedora (rpm) based distributions.
**For Ubuntu:**
Dino is available in the universe repository on Ubuntu and you can install it using this command:
```
sudo apt install dino-im
```
Similarly, you can find packages for other Linux distributions on their [GitHub distribution packages page][12].
If you want the latest and greatest, you can also find both **.deb** and **.rpm** files for Dino (nightly builds) on [openSUSE's software webpage][13].
In either case, head to their [GitHub page][14] or click on the link below to visit the official site.
[Download Dino][3]
**Wrapping Up**
It works quite well without any issues (at the time of writing and after some quick testing). I'll keep exploring it and hopefully cover more XMPP-centric articles to encourage users to use XMPP clients and servers for communication.
What do you think about Dino? Would you recommend another open-source XMPP client that's potentially better than Dino? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/dino-xmpp-client/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-main.png?ssl=1
[2]: https://xmpp.org/about/
[3]: https://dino.im/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-xmpp-client.jpg?ssl=1
[5]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[6]: https://itsfoss.com/best-gtk-themes/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-screenshot.png?ssl=1
[8]: https://itsfoss.com/signal-messaging-app/
[9]: https://itsfoss.com/wire-messaging-linux/
[10]: https://www.openpgp.org/
[11]: https://en.wikipedia.org/wiki/OMEMO
[12]: https://github.com/dino/dino/wiki/Distribution-Packages
[13]: https://software.opensuse.org/download.html?project=network:messaging:xmpp:dino&package=dino
[14]: https://github.com/dino/dino

View File

@ -0,0 +1,179 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Navigating man pages in Linux)
[#]: via: (https://www.networkworld.com/article/3519853/navigating-man-pages-in-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Navigating man pages in Linux
======
The man pages on a Linux system can do more than provide information on particular commands. They can help discover commands you didn't realize were available.
[Hello I'm Nik][1] [(CC0)][2]
Man pages provide essential information on Linux commands and many users refer to them often, but there's a lot more to the man pages than many of us realize.
You can always type a command like “man who” and get a nice description of how the man command works, but exploring commands that you might not know could be even more illuminating. For example, you can use the man command to help identify commands to handle some unusually challenging task or to show options that can help you use a command you already know in new and better ways.
Let's navigate through some options and see where we end up.
### Using man to identify commands
The man command can help you find commands by topic. If you're looking for a command to count the lines in a file, for example, you can provide a keyword. In the example below, we've put the keyword in quotes and added blanks so that we don't get commands that deal with “accounts” or “accounting” along with those that do some counting for us.
```
$ man -k ' count '
anvil (8postfix) - Postfix session count and request rate control
cksum (1) - checksum and count the bytes in a file
sum (1) - checksum and count the blocks in a file
timer_getoverrun (2) - get overrun count for a POSIX per-process timer
```
To show commands that relate to new user accounts, we might try a command like this:
```
$ man -k "new user"
newusers (8) - update and create new users in batch
useradd (8) - create a new user or update default new user information
zshroadmap (1) - informal introduction to the zsh manual The Zsh Manual, …
```
Just to be clear, the third item in the list above makes a reference to “new users” liking the material and is not a command for setting up, removing or configuring user accounts. The man command is simply matching words in the command description, acting very much like the apropos command. Notice the numbers in parentheses after each command listed above. These relate to the man page sections that contain the commands.
### Identifying the manual sections
The man command sections divide the commands into categories. To list these categories, type “man man” and look for descriptions like those below. You very likely won't have Section 9 commands on your system.
```
1 Executable programs or shell commands
2 System calls (functions provided by the kernel)
3 Library calls (functions within program libraries)
4 Special files (usually found in /dev)
5 File formats and conventions eg /etc/passwd
6 Games
7 Miscellaneous (including macro packages and conventions), e.g.
man(7), groff(7)
8 System administration commands (usually only for root)
9 Kernel routines [Non standard]
```
Man pages cover more than what we typically think of as “commands”. As you can see from the above descriptions, they cover system calls, library calls, special files and more.
The listing below shows where man pages are actually stored on Linux systems. The dates on these directories will vary because, with updates, some of these sections will get new content while others will not.
```
$ ls -ld /usr/share/man/man?
drwxr-xr-x 2 root root 98304 Feb 5 16:27 /usr/share/man/man1
drwxr-xr-x 2 root root 65536 Oct 23 17:39 /usr/share/man/man2
drwxr-xr-x 2 root root 270336 Nov 15 06:28 /usr/share/man/man3
drwxr-xr-x 2 root root 4096 Feb 4 10:16 /usr/share/man/man4
drwxr-xr-x 2 root root 28672 Feb 5 16:25 /usr/share/man/man5
drwxr-xr-x 2 root root 4096 Oct 23 17:40 /usr/share/man/man6
drwxr-xr-x 2 root root 20480 Feb 5 16:25 /usr/share/man/man7
drwxr-xr-x 2 root root 57344 Feb 5 16:25 /usr/share/man/man8
```
Note that the man page files are generally **gzipped** to save space. The man command unzips them as needed whenever you use the man command.
```
$ ls -l /usr/share/man/man1 | head -10
total 12632
lrwxrwxrwx 1 root root 9 Sep 5 06:38 [.1.gz -> test.1.gz
-rw-r--r-- 1 root root 563 Nov 7 05:07 2to3-2.7.1.gz
-rw-r--r-- 1 root root 592 Apr 23 2016 411toppm.1.gz
-rw-r--r-- 1 root root 2866 Aug 14 10:36 a2query.1.gz
-rw-r--r-- 1 root root 2361 Sep 9 15:13 aa-enabled.1.gz
-rw-r--r-- 1 root root 2675 Sep 9 15:13 aa-exec.1.gz
-rw-r--r-- 1 root root 1142 Apr 3 2018 aaflip.1.gz
-rw-r--r-- 1 root root 3847 Aug 14 10:36 ab.1.gz
-rw-r--r-- 1 root root 2378 Aug 23 2018 ac.1.gz
```
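If you want to see exactly which file a given page comes from, **man -w** (supported by the man-db implementation found on most Linux systems) prints its path, and **zcat** can display the compressed source directly:
```
$ man -w cksum
$ zcat $(man -w cksum) | head -5
```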
### Listing man pages by section
Even just looking at the first 10 man pages in Section 1 (as shown above), you are likely to see some commands that are new to you maybe **a2query** or **aaflip** (shown above).
An even better strategy for exploring commands is to list commands by section without looking at the files themselves but, instead, using a man command that shows you the commands and provides a brief description of each.
In the command below, the **-s 1** instructs man to display information on commands in section 1. The **-k .** makes the command work for all commands rather than specifying a particular keyword; without this, the man command would come back and ask “What manual page do you want?” So, use a keyword to select a group of related commands or a dot to show all commands in a section.
```
$ man -s 1 -k .
2to3-2.7 (1) - Python2 to Python3 converter
411toppm (1) - convert Sony Mavica .411 image to ppm
as (1) - the portable GNU assembler.
baobab (1) - A graphical tool to analyze disk usage
busybox (1) - The Swiss Army Knife of Embedded Linux
cmatrix (1) - simulates the display from "The Matrix"
expect_dislocate (1) - disconnect and reconnect processes
red (1) - line-oriented text editor
enchant (1) - a spellchecker
```
### How many man pages are there?
If you're curious about how many man pages there are in each section, you can count them by section with a command like this:
```
$ for num in {1..8}
> do
> man -s $num -k . | wc -l
> done
2382
493
2935
53
441
11
245
919
```
The exact number may vary, but most Linux systems will have a similar number of commands. If we use a command that adds these numbers together, we can see that the system this command is running on has nearly 7,500 man pages. That's a lot of commands, system calls, etc.
```
$ tot=0
$ for num in {1..8}
> do
> num=`man -s $num -k . | wc -l`
> tot=`expr $num + $tot`
> echo $tot
> done
2382
2875
5810
5863
6304
6315
6560
7479 <=== total
```
There's a lot you can learn by reading man pages, but exploring them in other ways can help you become aware of commands you may not have known were available on your system.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519853/navigating-man-pages-in-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/YiRQIglwYig
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,328 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using external libraries in Java)
[#]: via: (https://opensource.com/article/20/2/external-libraries-java)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
Using external libraries in Java
======
External libraries fill gaps in the Java core libraries.
![books in a library, stacks][1]
Java comes with a core set of libraries, including those that define commonly used data types and related behavior, like **String** or **Date**; utilities to interact with the host operating system, such as **System** or **File**; and useful subsystems to manage security, deal with network communications, and create or parse XML. Given the richness of this core set of libraries, it's often easy to find the necessary bits and pieces to reduce the amount of code a programmer must write to solve a problem.
Even so, there are a lot of interesting Java libraries created by people who find gaps in the core libraries. For example, [Apache Commons][2] "is an Apache project focused on all aspects of reusable Java components" and provides a collection of some 43 open source libraries (as of this writing) covering a range of capabilities either outside the Java core (such as [geometry][3] or [statistics][4]) or that enhance or replace capabilities in the Java core (such as [math][5] or [numbers][6]).
Another common type of Java library is an interface to a system component—for example, to a database system. This article looks at using such an interface to connect to a [PostgreSQL][7] database and get some interesting information. But first, I'll review the important bits and pieces of a library.
### What is a library?
A library, of course, must contain some useful code. But to be useful, that code needs to be organized in such a way that the Java programmer can access the components to solve the problem at hand.
I'll boldly claim that the most important part of a library is its application programming interface (API) documentation. This kind of documentation is familiar to many and is most often produced by [Javadoc][8], which reads structured comments in the code and produces HTML output that displays the API's packages in the panel in the top-left corner of the page; its classes in the bottom-left corner; and the detailed documentation at the library, package, or class level (depending on what is selected in the main panel) on the right. For example, the [top level of API documentation for Apache Commons Math][9] looks like:
![API documentation for Apache Commons Math][10]
Clicking on a package in the main panel shows the Java classes and interfaces defined in that package. For example, **[org.apache.commons.math4.analysis.solvers][11]** shows classes like **BisectionSolver** for finding zeros of univariate real functions using the bisection algorithm. And clicking on the [BisectionSolver][12] link lists all the methods of the class **BisectionSolver**.
This type of documentation is useful as reference information; it's not intended as a tutorial for learning how to use the library. For example, if you know what a univariate real function is and look at the package **org.apache.commons.math4.analysis.function**, you can imagine using that package to compose a function definition and then using the **org.apache.commons.math4.analysis.solvers** package to look for zeros of the just-created function. But really, you probably need more learning-oriented documentation to bridge to the reference documentation. Maybe even an example!
This documentation structure also helps clarify the meaning of _package_—a collection of related Java class and interface definitions—and shows what packages are bundled in a particular library.
The code for such a library is most commonly found in a [**.jar** file][13], which is basically a .zip file created by the Java **jar** command that contains some other useful information. **.jar** files are typically created as the endpoint of a build process that compiles all the **.java** files in the various packages defined.
There are two main steps to accessing the functionality provided by an external library:
1. Make sure the library is available to the Java compilation step—[**javac**][14]—and the execution step—**java**—via the classpath (either the **-cp** argument on the command line or the **CLASSPATH** environment variable).
2. Use the appropriate **import** statements to access the package and class in the program source code.
The rest is just like coding with Java core classes, such as **String**—write the code using the class and interface definitions provided by the library. Easy, eh? Well, maybe not quite that easy; first, you need to understand the intended use pattern for the library components, and then you can write code.
### An example: Connect to a PostgreSQL database
The typical use pattern for accessing data in a database system is:
1. Gain access to the code specific to the database software being used.
2. Connect to the database server.
3. Build a query string.
4. Execute the query string.
5. Do something with the results returned.
6. Disconnect from the database server.
The programmer-facing part of all of this is provided by a database-independent interface package, **[java.sql][15]**, which defines the core client-side Java Database Connectivity (JDBC) API. The **java.sql** package is part of the core Java libraries, so there is no need to supply a **.jar** file to the compile step. However, each database provider creates its own implementation of the **java.sql** interfaces—for example, the **Connection** interface—and those implementations must be provided on the run step.
Let's see how this works, using PostgreSQL.
#### Gain access to the database-specific code
The following code uses the [Java class loader][16] (the **Class.forName()** call) to bring the PostgreSQL driver code into the executing virtual machine:
```
import java.sql.*;
public class Test1 {
    public static void main(String args[]) {
        // Load the driver (jar file must be on class path) [1]
        try {
            Class.forName("org.postgresql.Driver");
            System.out.println("driver loaded");
        } catch (Exception e1) {
            System.err.println("couldn't find driver");
            System.err.println(e1);
            System.exit(1);
        }
        // If we get here all is OK
        System.out.println("done.");
    }
}
```
Because the class loader can fail, and therefore can throw an exception when failing, surround the call to **Class.forName()** in a try-catch block.
If you compile the above code with **javac** and run it with Java:
```
me@mymachine:~/Test$ javac Test1.java
me@mymachine:~/Test$ java Test1
couldn't find driver
java.lang.ClassNotFoundException: org.postgresql.Driver
me@mymachine:~/Test$
```
The class loader needs the **.jar** file containing the PostgreSQL JDBC driver implementation to be on the classpath:
```
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test1
driver loaded
done.
me@mymachine:~/Test$
```
#### Connect to the database server
The following code loads the JDBC driver and creates a connection to the PostgreSQL database:
```
import java.sql.*;
public class Test2 {
        public static void main(String args[]) {
                // Load the driver (jar file must be on class path) [1]
                try {
                        Class.forName("org.postgresql.Driver");
                        System.out.println("driver loaded");
                } catch (Exception e1) {
                        System.err.println("couldn't find driver");
                        System.err.println(e1);
                        System.exit(1);
                }
                // Set up connection properties [2]
                java.util.Properties props = new java.util.Properties();
                props.setProperty("user","me");
                props.setProperty("password","mypassword");
                String database = "jdbc:postgresql://myhost.org:5432/test";
                // Open the connection to the database [3]
                try (Connection conn = DriverManager.getConnection(database, props)) {
                        System.out.println("connection created");
                } catch (Exception e2) {
                        System.err.println("sql operations failed");
                        System.err.println(e2);
                        System.exit(2);
                }
                System.out.println("connection closed");
                // If we get here all is OK
                System.out.println("done.");
        }
}
```
Compile and run it:
```
me@mymachine:~/Test$ javac Test2.java
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test2
driver loaded
connection created
connection closed
done.
me@mymachine:~/Test$
```
Some notes on the above:
  * The code following comment [2] uses system properties to set up connection parameters—in this case, the PostgreSQL username and password. This allows for grabbing those parameters from the Java command line and passing all the parameters in as an argument bundle. There are other **DriverManager.getConnection()** options for passing in the parameters individually.
  * JDBC requires a URL for defining the database, which is declared above as **String database** and passed into the **DriverManager.getConnection()** method along with the connection parameters.
* The code uses try-with-resources, which auto-closes the connection upon completion of the code in the try-catch block. There is a lengthy discussion of this approach on [Stack Overflow][23].
* The try-with-resources provides access to the **Connection** instance and can execute SQL statements there; any errors will be caught by the same **catch** statement.
#### Do something fun with the database connection
In my day job, I often need to know what users have been defined for a given database server instance, and I use this [handy piece of SQL][24] for grabbing a list of all users:
```
import java.sql.*;
public class Test3 {
        public static void main(String args[]) {
                // Load the driver (jar file must be on class path) [1]
                try {
                        Class.forName("org.postgresql.Driver");
                        System.out.println("driver loaded");
                } catch (Exception e1) {
                        System.err.println("couldn't find driver");
                        System.err.println(e1);
                        System.exit(1);
                }
                // Set up connection properties [2]
                java.util.Properties props = new java.util.Properties();
                props.setProperty("user","me");
                props.setProperty("password","mypassword");
                String database = "jdbc:postgresql://myhost.org:5432/test";
                // Open the connection to the database [3]
                try (Connection conn = DriverManager.getConnection(database, props)) {
                        System.out.println("connection created");
                        // Create the SQL command string [4]
                        String qs = "SELECT " +
                                "       u.usename AS \"User name\", " +
                                "       u.usesysid AS \"User ID\", " +
                                "       CASE " +
                                "       WHEN u.usesuper AND u.usecreatedb THEN " +
                                "               CAST('superuser, create database' AS pg_catalog.text) " +
                                "       WHEN u.usesuper THEN " +
                                "               CAST('superuser' AS pg_catalog.text) " +
                                "       WHEN u.usecreatedb THEN " +
                                "               CAST('create database' AS pg_catalog.text) " +
                                "       ELSE " +
                                "               CAST('' AS pg_catalog.text) " +
                                "       END AS \"Attributes\" " +
                                "FROM pg_catalog.pg_user u " +
                                "ORDER BY 1";
                        // Use the connection to create a statement, execute it,
                        // analyze the results and close the result set [5]
                        Statement stat = conn.createStatement();
                        ResultSet rs = stat.executeQuery(qs);
                        System.out.println("User name;User ID;Attributes");
                        while (rs.next()) {
                                System.out.println(rs.getString("User name") + ";" +
                                                rs.getLong("User ID") + ";" +
                                                rs.getString("Attributes"));
                        }
                        rs.close();
                        stat.close();
                } catch (Exception e2) {
                        System.err.println("connecting failed");
                        System.err.println(e2);
                        System.exit(1);
                }
                System.out.println("connection closed");
                // If we get here all is OK
                System.out.println("done.");
        }
}
```
In the above, once the program has the **Connection** instance, it defines a query string (comment [4]), creates a **Statement** instance, and uses it to execute the query. It puts the results in a **ResultSet** instance, iterates through that to analyze the rows returned, and ends by closing both the **ResultSet** and **Statement** instances (comment [5]).
Compiling and executing the program produces the following output:
```
me@mymachine:~/Test$ javac Test3.java
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test3
driver loaded
connection created
User name;User ID;Attributes
fwa;16395;superuser
vax;197772;
mbe;290995;
aca;169248;
connection closed
done.
me@mymachine:~/Test$
```
This is a (very simple) example of using the PostgreSQL JDBC library in a Java application. It's worth emphasizing that the code never needed a Java import statement like **import org.postgresql.jdbc.*;** because of the way the **java.sql** library is designed. Because of that, there's no need to put the driver jar on the classpath at compile time. Instead, the Java class loader brings in the PostgreSQL code at run time.
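You can see this for yourself with a quick experiment: compile without the driver jar, then run with and without it on the classpath (the jar path below matches the one used earlier and may differ on your system):
```
# Compilation needs only the built-in java.sql interfaces; no driver jar required:
javac Test3.java

# Without the jar on the classpath, Class.forName() fails at run time:
java Test3                                    # prints "couldn't find driver"

# With the jar on the classpath, the class loader finds the driver:
java -cp ~/src/postgresql-42.2.5.jar:. Test3
```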
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/external-libraries-java
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_library_reading_list.jpg?itok=O3GvU1gH (books in a library, stacks)
[2]: https://commons.apache.org/
[3]: https://commons.apache.org/proper/commons-geometry/
[4]: https://commons.apache.org/proper/commons-statistics/
[5]: https://commons.apache.org/proper/commons-math/
[6]: https://commons.apache.org/proper/commons-numbers/
[7]: https://opensource.com/article/19/11/getting-started-postgresql
[8]: https://en.wikipedia.org/wiki/Javadoc
[9]: https://commons.apache.org/proper/commons-math/apidocs/index.html
[10]: https://opensource.com/sites/default/files/uploads/api-documentation_apachecommonsmath.png (API documentation for Apache Commons Math)
[11]: https://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math4/analysis/solvers/package-summary.html
[12]: https://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math4/analysis/solvers/BisectionSolver.html
[13]: https://en.wikipedia.org/wiki/JAR_(file_format)
[14]: https://en.wikipedia.org/wiki/Javac
[15]: https://docs.oracle.com/javase/8/docs/api/java/sql/package-summary.html
[16]: https://en.wikipedia.org/wiki/Java_Classloader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+properties
[21]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+connection
[22]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+drivermanager
[23]: https://stackoverflow.com/questions/8066501/how-should-i-use-try-with-resources-with-jdbc
[24]: https://www.postgresql.org/message-id/1121195544.8208.242.camel@state.g2switchworks.com
[25]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+statement
[26]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+resultset
[27]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+attributes

View File

@ -0,0 +1,161 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Extend the life of your SSD drive with fstrim)
[#]: via: (https://opensource.com/article/20/2/trim-solid-state-storage-linux)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Extend the life of your SSD drive with fstrim
======
A new systemd service to make your life easier.
![Linux keys on the keyboard for a desktop computer][1]
Over the past decade, solid-state drives (SSDs) have brought about a new way of managing storage. SSDs have benefits like silent and cooler operation and a faster interface spec, compared to their elder spinning ancestors. Of course, new technology brings with it new methods of maintenance and management. SSDs have a feature called TRIM. This is essentially a method for reclaiming unused blocks on the device, which may have been written previously but no longer contain valid data and can therefore be returned to the general storage pool for reuse. Opensource.com's Don Watkins first wrote about TRIM in his 2017 article ["Solid-state drives in Linux: Enabling TRIM for SSDs."][2]
If you have been using this feature on your Linux system, then you are probably familiar with the two methods described below.
### The old ways
#### Discard
I initially enabled this with the discard option to the mount command. The configuration is placed into the **/etc/fstab** file for each file system.
```
# cat /etc/fstab
UUID=3453g54-6628-2346-8123435f  /home  xfs  defaults,discard   0 0
```
The discard option enables automatic online TRIM. There has recently been debate on whether this is the best method due to possible negative performance impacts. Using this option causes a TRIM to be initiated every time new data is written to the drive. This may introduce additional activity that interferes with storage performance.
#### Cron
I removed the discard option from the **fstab** file. Then I created a cron job to call the command on a scheduled basis.
```
# crontab -l
@midnight /usr/sbin/fstrim -a
```
This is the method I used most recently on my Ubuntu Linux systems until I learned about another way.
### A new TRIM service
I recently discovered that a systemd service for TRIM exists. Fedora [introduced][3] this into their distribution in version 30, and, although it is not enabled by default in versions 30 and 31, it is planned to be in version 32. If you're working on Fedora Workstation 31 and you want to begin using this feature, you can enable it very easily. I'll also show you how to test it below. This service is not unique to Fedora; whether it exists and is enabled by default varies by distribution.
#### Test
I like to test first, to better understand what is happening behind the scenes. I do this by opening a terminal and issuing the command that the service is configured to call.
```
/usr/sbin/fstrim --fstab --verbose --quiet
```
The **--help** argument to **fstrim** describes these and other arguments.
```
$ sudo /usr/sbin/fstrim --help

Usage:
 fstrim [options] <mount point>

Discard unused blocks on a mounted filesystem.

Options:
 -a, --all           trim all supported mounted filesystems
 -A, --fstab         trim all supported mounted filesystems from /etc/fstab
 -o, --offset <num>  the offset in bytes to start discarding from
 -l, --length <num>  the number of bytes to discard
 -m, --minimum <num> the minimum extent length to discard
 -v, --verbose       print number of discarded bytes
     --quiet         suppress error messages
 -n, --dry-run       does everything, but trim
 -h, --help          display this help
 -V, --version       display version
```
So, now I can see that the systemd service is configured to run the trim on all supported mounted filesystems in my **/etc/fstab** file (**--fstab**) and print the number of discarded bytes (**--verbose**), while suppressing any error messages that might occur (**--quiet**). Knowing these options is helpful for testing. For instance, I can start with the safest one, the dry run. I'll also leave off the quiet argument so I can determine whether any errors will occur with my drive setup.
```
$ sudo /usr/sbin/fstrim --fstab --verbose --dry-run
```
This will simply show what the **fstrim** command will do based on the file systems that it finds configured in your **/etc/fstab** file.
```
$ sudo /usr/sbin/fstrim --fstab --verbose
```
This will now send the TRIM operation to the drive and report the number of discarded bytes from each file system. Below is an example after my recent fresh install of Fedora on a new NVMe SSD.
```
/home: 291.5 GiB (313011310592 bytes) trimmed on /dev/mapper/wkst-home
/boot/efi: 579.2 MiB (607301632 bytes) trimmed on /dev/nvme0n1p1
/boot: 787.5 MiB (825778176 bytes) trimmed on /dev/nvme0n1p2
/: 60.7 GiB (65154805760 bytes) trimmed on /dev/mapper/wkst-root
```
#### Enable
Fedora implements this as a systemd timer service scheduled to run on a weekly basis. To check whether it exists and see its current status, run **systemctl status**.
```
$ sudo systemctl status fstrim.timer
```
Now, enable the service.
```
$ sudo systemctl enable fstrim.timer
```
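Note that **enable** only registers the timer to start at the next boot. If you want it active immediately as well, systemd can combine enabling and starting in one step:
```
$ sudo systemctl enable --now fstrim.timer
```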
#### Verify
Then you can verify that the timer is enabled by listing all of the timers.
```
$ sudo systemctl list-timers --all
```
The following line, referring to **fstrim.timer**, will appear. Notice that the timer actually activates **fstrim.service**; that service is what actually invokes **fstrim**. The time-related fields show **n/a** because the timer has just been enabled and has not run yet.
```
NEXT   LEFT    LAST   PASSED   UNIT           ACTIVATES
n/a    n/a     n/a    n/a      fstrim.timer   fstrim.service
```
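Once the timer has fired at least once, those time-related fields will be populated, and you can review what the service actually did by checking the journal:
```
$ sudo journalctl -u fstrim.service
```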
### Conclusion
This service seems like the best way to run TRIM on your drives. It is much simpler than creating your own crontab entry to call the **fstrim** command, and safer than editing the **fstab** file by hand. It has been interesting to watch the evolution of solid-state storage technology, and it is nice to know that Linux appears to be moving toward a standard and safe way to implement it.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/trim-solid-state-storage-linux
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://opensource.com/article/17/1/solid-state-drives-linux-enabling-trim-ssds
[3]: https://fedoraproject.org/wiki/Changes/EnableFSTrimTimer (Fedora Project WIKI: Changes/EnableFSTrimTimer)

View File

@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use byobu to multiplex SSH sessions)
[#]: via: (https://opensource.com/article/20/2/byobu-ssh)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
How to use byobu to multiplex SSH sessions
======
Byobu allows you to maintain multiple terminal windows, connect via SSH,
disconnect, reconnect, and share access, all while keeping the session
alive.
![Person drinking a hat drink at the computer][1]
[Byobu][2] is a text-based window manager and terminal multiplexer. It's similar to [GNU Screen][3] but more modern and more intuitive. It also works on most Linux and BSD distributions, as well as on macOS.
Byobu allows you to maintain multiple terminal windows, connect via SSH (secure shell), disconnect, reconnect, and even let other people access it, all while keeping the session alive.
For example, say you are SSH'd into a Raspberry Pi or server, you run **sudo apt update && sudo apt upgrade**, and you lose your internet connection while it is running. Your command will be lost to the void. However, if you start a byobu session first, it will continue running, and when you reconnect, you will find it has been running happily without your eyes on it.
![The byobu logo is a fun play on screens.][4]
Byobu is named for a Japanese term for decorative, multi-panel screens that serve as folding room dividers, which I think is quite fitting.
To install byobu on Debian/Raspbian/Ubuntu:
**sudo apt install byobu**
Then enable it:
**byobu-enable**
Now drop out of your SSH session and log back in—you'll land in a byobu session. Run a command like **sudo apt update** and close the window (or enter the escape sequence ([**Enter**+**~**+**.**][5]) and log back in. You'll see the update running just as you left it.
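Here is the same workflow sketched as a terminal session (the **F6** detach binding is byobu's default, and the package commands assume a Debian-family system):
```
byobu              # start (or reattach to) a session
sudo apt update    # kick off a long-running command
# press F6 to detach cleanly, or simply lose the connection
byobu              # reconnect later; the command is still running
```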
There are a _lot_ of features I don't use regularly or at all. The most common ones I use are:
* **F2** New window
* **F3/F4** Navigate between windows
* **Ctrl**+**F2** Split pane vertically
* **Shift**+**F2** Split pane horizontally
* **Shift**+**Left arrow/Shift**+**Right arrow** Navigate between splits
* **Shift**+**F11** Zoom in (or out) on a split
### How we're using byobu
Byobu has been great for the maintenance of [piwheels][6], the convenient, pre-compiled Python packages for Raspberry Pi. We have a horizontal split showing the piwheels monitor in the top half and the syslog entries scrolling in real time in the bottom half. Then, if we want to do something else, we switch to another window. It's particularly handy when we're investigating something collaboratively, as I can see what my colleague Dave types (and correct his typos) while we chat in IRC.
I have byobu enabled on my home and work servers, so when I log into either machine, everything is as I left it—multiple jobs running, a window left in a particular directory, running a process as another user, that kind of thing.
![byobu screenshot][7]
Byobu is handy for development on Raspberry Pis, too. You can launch it on the desktop, run a command, then SSH in and attach yourself to the session where that command is running. Note that enabling byobu won't change what the terminal launcher does; to use it there, just run **byobu**.
* * *
_This article originally appeared on Ben Nuttall's [Tooling blog][8] and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/byobu-ssh
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hat drink at the computer)
[2]: https://byobu.org/
[3]: http://www.gnu.org/software/screen/
[4]: https://opensource.com/sites/default/files/uploads/byobu.png (byobu screen)
[5]: https://www.google.com/search?client=ubuntu&channel=fs&q=Enter-tilde-dot&ie=utf-8&oe=utf-8
[6]: https://opensource.com/article/20/1/piwheels
[7]: https://opensource.com/sites/default/files/uploads/byobu-screenshot.png (byobu screenshot)
[8]: https://tooling.bennuttall.com/byobu/

View File

@ -0,0 +1,685 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage your SSL certificates with the ssl-on-demand script)
[#]: via: (https://opensource.com/article/20/2/ssl-demand)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
Manage your SSL certificates with the ssl-on-demand script
======
Keep track of certificate expirations to prevent problems with the
ssl-on-demand script.
![Lock][1]
It happens all the time, to the largest of companies. An important certificate doesn't get renewed, and services become inaccessible. It happened to Microsoft Teams in early February 2020, awkwardly timed just after the launch of a major television campaign promoting it as a [Slack competitor][2]. Embarrassing as that may be, it's sure to happen to someone else in the future.
On the modern web, expired [certificates][3] can create major problems for websites, ranging from unhappy users who can't connect to a site to security threats from bad actors who take advantage of the failure to renew a certificate.
[Ssl-on-demand][4] is a set of SSL scripts to help site owners manage certificates. It is used for on-demand certificate generation and validation, and it can create certificate signing requests ([CSRs][5]) and predict the expiration of existing certificates.
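To try the scripts yourself, a minimal setup might look like this (the clone URL is the project's GitHub repository linked above; the destination directory is up to you):
```
git clone https://github.com/abhiTamrakar/ssl-on-demand.git
cd ssl-on-demand
chmod +x SSLexpiryPredictions.sh genSSLcsr.sh
```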
### Automate SSL expiry checks
```
 USAGE: SSLexpiryPredictions.sh -[cdewh]

  DESCRIPTION: This script predicts the expiring SSL certificates based on the end date.

  OPTIONS:
  -c|   sets the value for configuration file which has server:port or host:port details.
  -d|   sets the value of directory containing the certificate files in crt or pem format.
  -e|   sets the value of certificate extension, e.g. crt, pem, cert.
        crt: default [to be used with -d, if certificate file extension is other than .crt]
  -w|   sets the value for writing the script output to a file.
  -h|   prints this help and exit.
```
**Examples:**
To create a file with a list of all servers and their port numbers to make an SSL handshake, use:
```
cat > servers.list
         server1:port1
         server2:port2
         server3:port3
        (ctrl+d)

$ ./SSLexpiryPredictions.sh -c servers.list
```
Run the script by providing the certificate location and extension (in case it is not .crt):
```
$ ./SSLexpiryPredictions.sh -d /path/to/certificates/dir -e pem
```
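Because the script simply prints a report (or writes one with **-w**), it is easy to schedule. A hypothetical weekly cron entry, with placeholder paths, might look like:
```
# Run the expiry check every Monday at 06:00 and write the report to a log file
0 6 * * 1 /opt/ssl-on-demand/SSLexpiryPredictions.sh -c /etc/ssl-checks/servers.list -w /var/log/ssl-expiry.report
```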
### Automate CSR and private key creation
```
Usage:  genSSLcsr.sh [options] -[cdmshx]
  [-c (common name)]
  [-d (domain name)]
  [-s (SSL certificate subject)]
  [-p (password)]
  [-m (email address)] *(Experimental)
  [-r (remove passphrase) default:true]
  [-h (help)]
  [-x (optional)]
[OPTIONS]
  -c|   Sets the value for common name.
        A valid common name is something that ends with 'xyz.com'
  -d|   Sets the domain name.
  -s|   Sets the subject to be applied to the certificates.
        '/C=country/ST=state/L=locality/O=organization/OU=organizationalunit/emailAddress=email'
  -p|   Sets the password for private key.
  -r|   Sets the value of remove passphrase.
        true:[default] passphrase will be removed from key.
        false: passphrase will not be removed and key won't get printed.
  -m|   Sets the mailing capability to the script.
        (Experimental at this time and requires a lot of work)
  -x|   Creates the certificate request and key but do not print on screen.
        To be used when script is used just to create the key and CSR with no need
        + to generate the certificate on the go.
  -h|   Displays the usage. No further functions are performed.
  Example: genSSLcsr.sh -c mywebsite.xyz.com -m myemail@mydomain.com
```
### The scripts
#### 1. SSLexpiryPredictions.sh
```
#!/bin/bash
##############################################
#
#       PURPOSE: The script to predict expiring SSL certificates.
#
#       AUTHOR: 'Abhishek.Tamrakar'
#
#       VERSION: 0.0.1
#
#       COMPANY: Self
#
#       EMAIL: [abhishek.tamrakar08@gmail.com][7]
#
#       GENERATED: on 2018-05-20
#
#       LICENSE: Copyright (C) 2018 Abhishek Tamrakar
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#       <http://www.apache.org/licenses/LICENSE-2.0>
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.
##############################################
#your Variables go here
script=${0##*/}
exitcode=''
WRITEFILE=0
CONFIG=0
DIR=0
# functions here
usage()
{
cat <<EOF
  USAGE: $script -[cdewh]
  DESCRIPTION: This script predicts the expiring SSL certificates based on the end date.
  OPTIONS:
  -c|   sets the value for configuration file which has server:port or host:port details.
  -d|   sets the value of directory containing the certificate files in crt or pem format.
  -e|   sets the value of certificate extension, e.g. crt, pem, cert.
        crt: default
  -w|   sets the value for writing the script output to a file.
  -h|   prints this help and exit.
EOF
exit 1
}
# print info messages
info()
{
  printf '\n%s: %6s\n' "INFO" "$@"
}
# print error messages
error()
{
  printf '\n%s: %6s\n' "ERROR" "$@"
  exit 1
}
# print warning messages
warn()
{
  printf '\n%s: %6s\n' "WARN" "$@"
}
# get expiry for the certificates
getExpiry()
{
  local expdate=$1
  local certname=$2
  today=$(date +%s)
  timetoexpire=$(( ($expdate - $today)/(60*60*24) ))
  expcerts=( ${expcerts[@]} "${certname}:$timetoexpire" )
}
# print all expiry that was found, typically if there is any.
printExpiry()
{
  local args=$#
  i=0
  if [[ $args -ne 0 ]]; then
    #statements
    printf '%s\n' "---------------------------------------------"
    printf '%s\n' "List of expiring SSL certificates"
    printf '%s\n' "---------------------------------------------"
    printf '%s\n' "$@"  | \
      sort -t':' -g -k2 | \
      column -s: -t     | \
      awk '{printf "%d.\t%s\n", NR, $0}'
    printf '%s\n' "---------------------------------------------"
  fi
}
# calculate the end date for the certificates first, finally to compare and predict when they are going to expire.
calcEndDate()
{
  sslcmd=$(which openssl)
  if [[ x$sslcmd = x ]]; then
    #statements
    error "$sslcmd command not found!"
  fi
  # when cert dir is given
  if [[ $DIR -eq 1 ]]; then
    #statements
    checkcertexists=$(ls -A $TARGETDIR | egrep "\.$EXT$")
    if [[ -z ${checkcertexists} ]]; then
      #statements
      error "no certificate files at $TARGETDIR with extention $EXT"
    fi
    for file in $TARGETDIR/*.${EXT:-crt}
    do
      expdate=$($sslcmd x509 -in $file -noout -enddate)
      expepoch=$(date -d "${expdate##*=}" +%s)
      certificatename=${file##*/}
      getExpiry $expepoch ${certificatename%.*}
    done
  elif [[ $CONFIG -eq 1 ]]; then
    #statements
    while read line
    do
      if echo "$line" | \
      egrep -q '^[a-zA-Z0-9.]+:[0-9]+|^[a-zA-Z0-9]+_.*:[0-9]+';
      then
        expdate=$(echo | \
        openssl s_client -connect $line 2>/dev/null | \
        openssl x509 -noout -enddate 2>/dev/null);
        if [[ $expdate = '' ]]; then
          #statements
          warn "[error:0906D06C] Cannot fetch certificates for $line"
        else
          expepoch=$(date -d "${expdate##*=}" +%s);
          certificatename=${line%:*};
          getExpiry $expepoch ${certificatename};
        fi
      else
        warn "[format error] $line is not in required format!"
      fi
    done < $CONFIGFILE
  fi
}
# your script goes here
while getopts ":c:d:w:e:h" options
do
case $options in
c )
  CONFIG=1
  CONFIGFILE="$OPTARG"
  if [[ ! -e $CONFIGFILE ]] || [[ ! -s $CONFIGFILE ]]; then
    #statements
    error "$CONFIGFILE does not exist or empty!"
  fi
        ;;
e )
  EXT="$OPTARG"
  case $EXT in
    crt|pem|cert )
    info "Extention check complete."
    ;;
    * )
    error "invalid certificate extention $EXT!"
    ;;
  esac
  ;;
d )
  DIR=1
  TARGETDIR="$OPTARG"
  [ "$TARGETDIR" = '' ] && error "TARGETDIR empty variable!"
  ;;
w )
  WRITEFILE=1
  OUTFILE="$OPTARG"
  ;;
h )
        usage
        ;;
\? )
        usage
        ;;
: )
        fatal "Argument required !!! see \'-h\' for help"
        ;;
esac
done
shift $(($OPTIND - 1))
#
calcEndDate
#finally print the list
if [[ $WRITEFILE -eq 0 ]]; then
  #statements
  printExpiry ${expcerts[@]}
else
  printExpiry ${expcerts[@]} &gt; $OUTFILE
fi
```
#### 2. genSSLcsr.sh
```
#!/bin/bash -
#===============================================================================
#
#          FILE: genSSLcsr.sh
#
#         USAGE: ./genSSLcsr.sh [options]
#
#   DESCRIPTION: ++++version 1.0.2
#               Fixed few bugs from previous script
#               +Removing passphrase after CSR generation
#               Extended use of functions
#               Checks for valid common name
#               ++++1.0.3
#               Fixed line breaks
#               Work directory to be created at the start
#               Used getopts for better code arrangements
#   ++++1.0.4
#     Added mail feature (experimental at this time and needs
#     a mail server running locally.)
#     Added domain input and certificate subject inputs
#
#       OPTIONS: ---
#  REQUIREMENTS: openssl, mailx
#          BUGS: ---
#         NOTES: ---
#        AUTHOR: Abhishek Tamrakar (), [abhishek.tamrakar08@gmail.com][7]
#  ORGANIZATION: Self
#       CREATED: 6/24/2016
#      REVISION: 4
# COPYRIGHT AND
#       LICENSE: Copyright (C) 2016 Abhishek Tamrakar
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#       <http://www.apache.org/licenses/LICENSE-2.0>
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.
#===============================================================================
#variables go here
#set basename to scriptname
SCRIPT=${0##*/}
#set flags
TFOUND=0
CFOUND=0
MFOUND=0
XFOUND=0
SFOUND=0
logdir=/var/log
# edit these below values to replace with yours
homedir=''
yourdomain=''
country=IN
state=Maharashtra
locality=Pune
organization="your_organization"
organizationalunit="your_organizational_unit"
email=your_email@your_domain
password=your_ssl_password
# OS is declared and will be used in its next version
OS=$(egrep -io 'Redhat|centos|fedora|ubuntu' /etc/issue)
### function declarations ###
info()
{
  printf '\n%s\t%s\t' "INFO" "$@"
}
#exit on error with a custom error message
#the extra function was removed and replaced with only one.
#using FAILED\n\e<message> is a way but not necessarily required.
#
fatal()
{
 printf '\n%s\t%s\n' "ERROR" "$@"
 exit 1
}
checkperms()
{
if [[ -z ${homedir} ]]; then
homedir=$(pwd)
fi
if [[ -w ${homedir} ]]; then
info "Permissions acquired for ${SCRIPT} on ${homedir}."
else
fatal "InSufficient permissions to run the ${SCRIPT}."
fi
}
checkDomain()
{
info "Initializing Domain ${cn} check ? "
if [[ ! -z ${yourdomain} ]]; then
workdir=${homedir}/${yourdomain}
echo -e "${cn}"|grep -E -i -q "${yourdomain}$" &amp;&amp; echo -n "[OK]" || fatal "InValid domain in ${cn}"
else
workdir=${homedir}/${cn#*.}
echo -n "[NULL]"
info "WARNING: No domain declared to check."
confirmUserAction
fi
}       # end function checkDomain
usage()
{
cat << EOF
Usage:  $SCRIPT [options] -[cdmshx]
  [-c (common name)]
  [-d (domain name)]
  [-s (SSL certificate subject)]
  [-p (password)]
  [-m (email address)] *(Experimental)
  [-r (remove passphrase) default:true]
  [-h (help)]
  [-x (optional)]
[OPTIONS]
  -c|   Sets the value for common name.
        A valid common name is something that ends with 'xyz.com'
  -d|   Sets the domain name.
  -s|   Sets the subject to be applied to the certificates.
        '/C=country/ST=state/L=locality/O=organization/OU=organizationalunit/emailAddress=email'
  -p|   Sets the password for private key.
  -r|   Sets the value of remove passphrase.
        true:[default] passphrase will be removed from key.
        false: passphrase will not be removed and key won't get printed.
  -m|   Sets the mailing capability to the script.
        (Experimental at this time and requires a lot of work)
  -x|   Creates the certificate request and key but do not print on screen.
        To be used when script is used just to create the key and CSR with no need
        + to generate the certificate on the go.
  -h|   Displays the usage. No further functions are performed.
  Example: $SCRIPT -c mywebsite.xyz.com -m myemail@mydomain.com
EOF
exit 1
}       # end usage
confirmUserAction() {
while true; do
read -p "Do you wish to continue? ans: " yn
case $yn in
[Yy]* ) info "Initiating the process";
break;;
[Nn]* ) exit 1;;
* ) info "Please answer yes or no.";;
esac
done
}       # end function confirmUserAction
parseSubject()
{
  local subject="$1"
  parsedsubject=$(echo $subject|sed 's/\// /g;s/^ //g')
  for i in ${parsedsubject}; do
      case ${i%=*} in
        'C' )
        country=${i##*=}
        ;;
        'ST' )
        state=${i##*=}
        ;;
        'L' )
        locality=${i##*=}
        ;;
        'O' )
        organization=${i##*=}
        ;;
        'OU' )
        organizationalunit=${i##*=}
        ;;
        'emailAddress' )
        email=${i##*=}
      ;;
    esac
  done
}
sendMail()
{
 mailcmd=$(which mailx)
 if [[ x"$mailcmd" = "x" ]]; then
   fatal "Cannot send email! please install mailutils for linux"
 else
   echo "SSL CSR attached." | $mailcmd -s "SSL certificate request" \
   -t $email $ccemail -A ${workdir}/${cn}.csr \
   &amp;&amp; info "mail sent" \
   || fatal "error in sending mail."
 fi
}
genCSRfile()
{
info "Creating signed key request for ${cn}"
#Generate a key
openssl genrsa -des3 -passout pass:$password -out ${workdir}/${cn}.key 4096 2>/dev/null && echo -n "[DONE]" || fatal "unable to generate key"
#Create the request
info "Creating Certificate request for ${cn}"
openssl req -new -key ${workdir}/${cn}.key -passin pass:$password -sha1 -nodes \
        -subj "/C=$country/ST=$state/L=$locality/O=$organization/OU=$organizationalunit/CN=$cn/emailAddress=$email" \
        -out ${workdir}/${cn}.csr &amp;&amp; echo -n "[DONE]" || fatal "unable to create request"
if [[ "${REMOVEPASSPHRASE:-true}" = 'true' ]]; then
  #statements
  #Remove passphrase from the key. Comment the line out to keep the passphrase
  info "Removing passphrase from ${cn}.key"
  openssl rsa -in ${workdir}/${cn}.key \
  -passin pass:$password \
  -out ${workdir}/${cn}.insecure 2>/dev/null \
  && echo -n "[DONE]" || fatal "unable to remove passphrase"
  #swap the filenames
  info "Swapping the ${cn}.key to secure"
  mv ${workdir}/${cn}.key ${workdir}/${cn}.secure \
  &amp;&amp; echo -n "[DONE]" || fatal "unable to perfom move"
  info "Swapping insecure key to ${cn}.key"
  mv ${workdir}/${cn}.insecure ${workdir}/${cn}.key \
  &amp;&amp; echo -n "[DONE]" || fatal "unable to perform move"
else
  info "Flag '-r' is set, passphrase will not be removed."
fi
}
printCSR()
{
if [[ -e ${workdir}/${cn}.csr ]] && [[ -e ${workdir}/${cn}.key ]]
then
echo -e "\n\n----------------------------CSR-----------------------------"
cat ${workdir}/${cn}.csr
echo -e "\n----------------------------KEY-----------------------------"
cat ${workdir}/${cn}.key
echo -e "------------------------------------------------------------\n"
else
fatal "CSR or KEY generation failed !!"
fi
}
### END Functions ###
#Check the number of arguments. If none are passed, print help and exit.
NUMARGS=$#
if [ $NUMARGS -eq 0 ]; then
fatal "$NUMARGS Arguments provided !!!! See usage with '-h'"
fi
#Organisational details
while getopts ":c:d:sⓂp:rhx" atype
do
case $atype in
c )
        CFOUND=1
        cn="$OPTARG"
        ;;
d )
  yourdomain="$OPTARG"
  ;;
s )
  SFOUND=1
  subj="$OPTARG"
  ;;
p )
  password="$OPTARG"
  ;;
r )
  REMOVEPASSPHRASE='false'
  ;;
m )
  MFOUND=1
  ccemail="$OPTARG"
  ;;
x )
        XFOUND=1
  ;;
h )
        usage
        ;;
\? )
        usage
        ;;
: )
        fatal "Argument required !!! see \'-h\' for help"
        ;;
esac
done
shift $(($OPTIND - 1))
#### END CASE #### START MAIN ####
if [ $CFOUND -eq 1 ]
then
# take current dir as homedir by default.
checkperms ${homedir}
checkDomain
  if [[ ! -d ${workdir} ]]
  then
    mkdir ${workdir:-${cn#*.}} 2>/dev/null && info "${workdir} created."
  else
    info "${workdir} exists."
  fi # end workdir check
  parseSubject "$subj"
  genCSRfile
  if [ $XFOUND -eq 0 ]
  then
    sleep 2
    printCSR
  fi    # end x check
  if [[ $MFOUND -eq 1 ]]; then
    sendMail
  fi
else
        fatal "Nothing to do!"
fi      # end common name check
##### END MAIN #####
```
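As a usage sketch, a run that sets the common name, domain, and full certificate subject while keeping the CSR and key off the screen could look like this (all values are placeholders):
```
./genSSLcsr.sh -c www.example.com -d example.com \
  -s '/C=US/ST=California/L=SanFrancisco/O=Example/OU=Ops/emailAddress=admin@example.com' \
  -x
```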
* * *
_This was originally published as the README in [ssl-on-demand's GitHub repository][4] and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/ssl-demand
作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum (Lock)
[2]: https://opensource.com/alternatives/slack
[3]: https://opensource.com/article/19/1/what-certificate
[4]: https://github.com/abhiTamrakar/ssl-on-demand
[5]: https://en.wikipedia.org/wiki/Certificate_signing_request
[6]: mailto:myemail@mydomain.com
[7]: mailto:abhishek.tamrakar08@gmail.com

View File

@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (elementary OS is Building an App Center Where You Can Buy Open Source Apps for Your Linux Distribution)
[#]: via: (https://itsfoss.com/appcenter-for-everyone/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
elementary OS is Building an App Center Where You Can Buy Open Source Apps for Your Linux Distribution
======
_**Brief: elementary OS is building an app center ecosystem where you can buy open source applications for your Linux distribution.**_
### Crowdfunding to build an open source AppCenter for everyone
![][1]
[elementary OS][2] recently announced that it is [crowdfunding a campaign to build an app center][3] from where you can buy open source applications. The applications in the app center will be in Flatpak format.
Though it's an initiative taken by elementary OS, this new app center will be available for other distributions as well.
The campaign aims to fund a week of in-person development sprint in Denver, Colorado (USA) featuring developers from elementary OS, [Endless][4], [Flathub][5] and [GNOME][6].
The crowdfunding campaign has already crossed its goal of raising $10,000. You can still fund it as additional funds will be used for the development of elementary OS.
[Crowdfunding Campaign][3]
### What features this AppCenter brings
The focus is on providing secure applications and hence [Flatpak][7] apps are used to provide confined applications. In this format, apps will be restricted from accessing system or personal files and will be isolated from other apps on a technical level by default.
Apps will have access to operating system and personal files only if you explicitly provide your consent for it.
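That consent model can already be inspected and adjusted from Flatpak's command line; as a rough illustration with a made-up application ID:
```
# Show the permissions a Flatpak app ships with (the app ID here is hypothetical)
flatpak info --show-permissions org.example.SomeApp

# Grant it read-only access to a single folder instead of the whole home directory
flatpak override --user --filesystem=~/Documents:ro org.example.SomeApp
```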
Apart from security, [Flatpak][8] also bundles all the dependencies. This way, app developers can utilize cutting-edge technologies even if they are not available on the current Linux distribution.
AppCenter will also have the wallet feature to save your card details. This enables you to quickly pay for apps without entering the card details each time.
![][9]
This new open source app center will be available for other Linux distributions as well.
### Inspired by the success of elementary OS's own 'pay what you want' app center model
A couple of years ago, elementary OS launched its own app center. The 'pay what you want' approach was quite a hit: developers can set a minimum price for their open source apps, and users can choose to pay that amount or more.
![][10]
This helped several indie developers get paid for their open source applications. The app store now has around 160 native applications and elementary OS says that thousands of dollars have been paid to the developers through the app center.
Inspired by the success of this app center experiment in elementary OS, they now want to bring this app center approach to other distributions as well.
### If the applications are open source, how can you charge money for it?
Some people still get confused by the idea of FOSS (free and open source software): the **source** code of the software is **open**, and anyone is **free** to modify and redistribute it.
That doesn't mean open source software has to be free of cost. Some developers rely on donations, while others charge a fee for support.
Getting paid for the open source apps may encourage developers to create [applications for Linux][11].
### Let's see if it could work
![][12]
Personally, I am not a huge fan of the Flatpak or Snap packaging formats. They have their benefits, but they take relatively longer to start, and they are huge in size. If you install several such Snaps or Flatpaks, you may start running out of disk space.
There is also a need to be vigilant about fake and scam developers in this new app ecosystem. Imagine if scammers started creating Flatpak packages of obscure open source applications and putting them on the app center. I hope the developers put some sort of mechanism in place to weed out such apps.
I do hope that this new AppCenter replicates the success it has seen in elementary OS. We definitely need a better ecosystem for open source apps for desktop Linux.
What are your views on it? Is it the right approach? What suggestions do you have for the improvement of the AppCenter?
--------------------------------------------------------------------------------
via: https://itsfoss.com/appcenter-for-everyone/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter.png?ssl=1
[2]: https://elementary.io/
[3]: https://www.indiegogo.com/projects/appcenter-for-everyone/
[4]: https://itsfoss.com/endless-linux-computers/
[5]: https://flathub.org/
[6]: https://www.gnome.org/
[7]: https://flatpak.org/
[8]: https://itsfoss.com/flatpak-guide/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter-wallet.png?ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter-payment.png?ssl=1
[11]: https://itsfoss.com/essential-linux-applications/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/open_source_app_center.png?ssl=1

View File

@ -1,175 +0,0 @@
MidnightBSD 可能是你通往 FreeBSD 的大门
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/midnight_4_0.jpg?itok=T2gpLVui)
[FreeBSD][1] 是一个开源操作系统,衍生自著名的 [Berkeley Software Distribution伯克利软件套件][2]。FreeBSD 的第一个版本在 1993 年发布并且仍然很强大。2007 年左右Lucas Holt 想创建一个 FreeBSD 的分支,利用 OpenStep (现在是 Cocoa) 的 Objective-C 框架widget 工具包和应用程序开发工具的 [GnuStep][3] 的实现。为此,他开始开发 MidnightBSD 桌面发行版。
MidnightBSD (以 Lucas 的猫命名Midnight) 仍然在积极地(尽管缓慢)开发。自2017年8月可以获得最新的稳定发布版本 (0.8.6) 。尽管 BSD 发行版不是你所说的用户友好型发行版,通过命令行来加速它们的安装速度是一个你自己熟悉如何处理一个文本( ncurses )安装和最后完成安装的非常好的方法。
为此,你最终会得到一个非常可靠的 FreeBSD 分支的桌面发行版。但是如果你是一名 Linux 用户正在寻找扩展你的技能的方法,它将做一点工作,… 这是一个很好的起始地方。
我想带你走过安装 MidnightBSD 的工艺流程,如何添加一个图形桌面环境,然后如何安装应用程序。
### 安装
正如我所提到的,这是一个文本( ncurses ) 安装过程,因此在这里没有找到可用的鼠标。相反,你将使用你键盘的 Tab 和箭头按键。在你下载 [最新的发布版本][4] 后,将它刻录到一个 CD/DVD 或 USB 驱动器,并启动你的机器(或者在 [VirtualBox][5] 中创建一个虚拟机)。安装器将打开并给你三个选项(图 1)。选择安装(使用你的键盘的箭头按键),并敲击 Enter 键。
![MidnightBSD installer][6]
图 1: 启动 MidnightBSD 安装器。
在这点上,在这里要经历相当多的屏幕。其中很多屏幕是一目了然的:
1. 设置非默认键盘映射(是/否)
2. 设置主机名称
3. 添加可选系统组件(文档游戏32位兼容性系统源码代码)
4. 分区硬盘
5. 管理员密码
6. 配置网络接口
7. 选择地区(时区)
8. 启用服务(例如获得 shell)
9. 添加用户(图 2)
![Adding a user][7]
图 2: 向系统添加一个用户。
在你向系统添加用户后,你将被拖拽到一个窗口中(图 3),在这里,你可以处理任何你可能忘记的或你想重新配置的东西。如果你不需要作出任何更改,选择 Exit ,然后你的配置将被应用。
![Applying your configurations][8]
图 3: 应用你的配置。
在接下来的窗口中,当出现提示时,选择 No ,接下来系统将重启。在 MidnightBSD 重启后,你已经为下一阶段的安装做好了准备。
### Post 安装
当你最新安装的 MidnightBSD 启动时你将发现你自己在一个命令提示符中。在这一点上在这里未找到图形界面。我安装应用程序MidnightBSD 依赖于 mport 工具。比如说你想安装 Xfce 桌面环境。为此,登录到 MidnightBSD 中,并发出下面的命令:
```
sudo mport index
sudo mport install xorg
```
你现在有已经安装的 Xorg 窗口服务器,它将允许你来安装桌面环境。使用命令来安装 Xfce
```
sudo mport install xfce
```
现在已经安装 Xfce 。不过,我们需要和命令 startx 一起运行来启用它。为此,让我们先安装 nano 编辑器。发出命令:
```
sudo mport install nano
```
随着 nano 已安装,发出命令:
```
nano ~/.xinitrc
```
这个文件仅包含一行:
```
exec startxfce4
```
保存并关闭这个文件。如果你现在发出命令 startx, Xfce 桌面环境将启动。你应该会感到一点在家里的感觉(图 4).
![ Xfce][9]
图 4: Xfce桌面界面已准备好服务。
因为你不想总是必需发出命令 startx ,你希望启用登录守护进程。然而,却没有安装。要安装这个子系统,发出命令:
```
sudo mport install mlogind
```
当完成安装后,通过在 /etc/rc.conf 文件中添加一个项目来在启动时启用 mlogind 。在 rc.conf 文件的底部,添加以下内容:
```
mlogind_enable=”YES”
```
保存并关闭该文件。现在,当你启动(或重启)机器时,你应该会看到图形登录屏幕。在写这篇文章的时候,在登录后,我最后得到一个空白屏幕和不想要的 X 光标。不幸的是,目前似乎并没有这个问题的解决方法。所以,要访问你的桌面环境,你必需使用 startx 命令。
### 安装
开箱即用,你将不能找到很多能使用的应用程序。如果你尝试安装应用程序(使用 mport ),你将很快发现你自己的沮丧,因为只能找到很少的应用程序。为解决这个问题,我们需要使用 svnlite 命令来查看检查出可用的 mport 软件列表。回到终端窗口,并发出命令:
```
svnlite co http://svn.midnightbsd.org/svn/mports/trunk mports
```
在你完成这些后,你应该看到一个命名为 ~/mports 的新的命令。 更改到这个目录(使用命令 cd ~/.mports 。发出 ls 命令,然后你应该看到许多的类别(图 5)。
![applications][10]
图 5: 对于 mport 现在可用的应用程序类别。
你想安装 Firefox ?如果你查看 www 目录,你将看到一个 linux-firefox 列表。发出命令:
```
sudo mport install linux-firefox
```
现在你应该会在 Xfce 桌面菜单中看到一个 Firefox 项目。翻找所有的类别,并使用 mport 命令来安装你需要的所有软件。
### 一个悲哀的警告
一个悲哀的小警告是, mport (通过via svnlite) 仅能找到的一个 office 套件的版本是 OpenOffice 3 。那是非常过时的。尽管 在 ~/mports/editors 目录中找到 Abiword ,但是它看起来不可用于安装。甚至在安装 OpenOffice 3 后,它会输出一个 Exec 格式错误。换句话说,你将不能使用 MidnightBSD 在 office 生产效率方面做很多的事情。但是,嘿嘿,如果你周围躺有一个旧的 Palm 导航器,你也安装 pilot 链接。换句话说,可用的软件不能生成一个极其有用的桌面发行版… 至少对普通用户不是。但是,如果你想在 MidnightBSD 上开发,你将找到很多可用的工具,准备安装(查看 ~/mports/devel 目录)。你甚至可以使用命令安装 Drupal
```
sudo mport install drupal7
```
当然,在此之后,你将需要创建一个数据库( MySQL 已经安装),安装 Apache (sudo mport install apache24) ,并配置必需的 Apache 指令。
显然地,已安装的和能够安装什么是一个已经应用程序,系统和服务的大杂烩。但是随着足够多的工作,你最终可以得到一个能够服务特殊目的的发行版。
### 享受 *BSD 优良
这就是你如何使 MidnightBSD 启动,并在一个有点用的桌面发行版中运行。它不像很多其它的 Linux 发行版一样快速容易但是如果你想要一个你想要的发行版这可能正是你正在寻找的。尽管很多竞争对手有很多为安装而准备的可用的软件标题MidnightBSD 无疑是一个 Linux 爱好者或管理员应该尝试的有趣的挑战。
通过来自 Linux 基金会和 edX 的免费的[" Linux 简介" ][11]课程学习更多关于 Linux 的信息。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/5/midnightbsd-could-be-your-gateway-freebsd
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.freebsd.org/
[2]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[3]:https://en.wikipedia.org/wiki/GNUstep
[4]:http://www.midnightbsd.org/download/
[5]:https://www.virtualbox.org/
[6]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_1.jpg (MidnightBSD installer)
[7]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_2.jpg (Adding a user)
[8]:https://lcom.static.linuxfound.org/sites/lcom/files/mightnight_3.jpg (Applying your configurations)
[9]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_4.jpg (Xfce)
[10]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_5.jpg (applications)
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (zhangxiangping)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 open source tools for natural language processing)
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
12种自然语言处理的开源工具
======
看看可以用在你自己NLP应用中的十几个工具吧。
![Chat bubbles][1]
在过去的几年里自然语言处理NLP推动了聊天机器人、语音助手、文本预测等这些我们日常生活中常用的语音或文本应用技术的发展。目前有着各种各样开源的 NLP 工具,所以我决定调查一下当前开源的 NLP 工具,来帮助你制定开发下一个基于语音或文本的应用程序的计划。
我将从我所熟悉的编程语言出发来介绍这些工具,尽管我对这些工具不是很熟悉(我没有在我不熟悉的语言中找工具)。也就是说,出于各种原因,我排除了三种我熟悉的语言中的工具。
R语言是没有被包含在内的因为我发现的大多数库都有一年多没有更新了。这并不总是意味着他们没有得到很好的维护但我认为他们应该得到更多的更新以便和同一领域的其他工具竞争。我还选择了最有可能在生产场景中使用的语言和工具而不是在学术界和研究中使用虽然我主要是使用R作为研究和发现工具。
我发现Scala的很多库都没有更新了。我上次使用Scala已经有好几年了当时它非常流行。但是大多数库从那个时候就再没有更新过或者只有少数一些有更新。
最后我排除了C++。这主要是因为我在的公司很久没有使用C++来进行NLP或者任何数据科学的工作。
### Python工具
#### Natural Language Toolkit (NLTK)
[Natural Language Toolkit (NLTK)][2]是我调研的所有工具中功能最完善的一个。它完美地实现了自然语言处理中的多数功能组件,比如分类、令牌化、词干化、标注、分词和语义推理。每一种方法都有多种不同的实现方式,所以你可以选择具体的算法和方式去使用它。同时,它也支持不同语言。然而,它将所有的数据都表示为字符串的形式,对于一些简单的数据结构来说可能很方便,但是如果要使用一些高级的功能来说就可能有点困难。它的使用文档有点复杂,但也有很多其他人编写的使用文档,比如[这本很棒的书][3]。和其他的工具比起来,这个工具库的运行速度有点慢。但总的来说,这个工具包非常不错,可以用于需要具体算法组合的实验、探索和实际应用当中。
#### SpaCy
[SpaCy][4]是NLTK的主要竞争者。在大多数情况下都比NLTK的速度更快但是SpaCy对自然语言处理的功能组件只有单一实现。SpaCy把所有的东西都表示为一个对象而不是字符串这样就能够为构建应用简化接口。这也方便它能够集成多种框架和数据科学的工具使得你更容易理解你的文本数据。然而SpaCy不像NLTK那样支持多种语言。它对每个接口都有一些简单的选项和文档包括用于语言处理和分析各种组件的多种神经网络模型。总的来说如果创造一个新的应用的生产过程中不需要使用特定的算法的话这是一个很不错的工具。
#### TextBlob
[TextBlob][5]是NLTK的一个扩展库。你可以通过TextBlob用一种更简单的方式来使用NLTK的功能TextBlob也包括了Pattern库中的功能。如果你刚刚开始学习这将会是一个不错的工具可以用于生产对性能要求不太高的应用。TextBlob适用于任何场景但是对小型项目会更加合适。
#### Textacy
这个工具是我用过的名字最好听的。读"[Textacy][6]" 时先发出"ex"再发出"cy"。它不仅仅是名字好同时它本身也是一个很不错的工具。它使用SpaCy作为它自然语言处理核心功能但它在处理过程的前后做了很多工作。如果你想要使用SpaCy你可以先使用Textacy从而不用去多写额外的附加代码你就可以处理不同种类的数据。
#### PyTorch-NLP
[PyTorch-NLP][7]才出现短短的一年但它已经有一个庞大的社区了。它适用于快速原型开发。当公司或者研究人员推出很多其他工具去完成新奇的处理任务比如图像转换它就会被更新。PyTorch的目标用户是研究人员但它也能用于原型开发或在最开始的生产任务中使用最好的算法。基于此基础上的创建的库也是值得研究的。
### 节点工具
#### Retext
[Retext][8]是[unified collective][9]的一部分。Unified是一个接口能够集成不同的工具和插件以便他们能够高效的工作。Retext是unified工具集三个中的一个另外的两个分别是用于markdown编辑的Remark和用于HTML处理的Rehype。这是一个非常有趣的想法我很高兴看到这个社区的发展。Retext没有暴露过多的底层技术更多的是使用插件去完成你在NLP任务中想要做的事情。拼写检查固定排版情绪检测和可读性分析都可以用简单的插件来完成。如果你不想了解底层处理技术又想完成你的任务的话这个工具和社区是一个不错的选择。
#### Compromise
如果你在找拥有最高级的功能和最复杂的系统的工具的话,[Compromise][10]不是你的选择。 然而如果你想要一个性能好应用广泛还能在客户端运行的工具的话Compromise值得一试。实际上它的名字是准确的因为作者更关注更具体功能的小软件包而在功能性和准确性上做出了牺牲这些功能得益于用户对使用环境的理解。
#### Natural
[Natural][11]包含了一般自然语言处理库所具有的大多数功能。它主要是处理英文文本,但也包括一些其他语言,它的社区也支持额外的语言。它能够进行令牌化、词干化、分类、语音处理、词频-逆文档频率TF-IDF计算、WordNet、字符相似度计算和一些变换。它和 NLTK 有得一比,因为它想要把所有东西都包含在一个包里,使用方便,但是可能不太适合专注的研究。总的来说,这是一个不错的功能齐全的库,目前仍在开发,但可能需要对底层实现有更多的了解,才能更有效地使用。
#### Nlp.js
[Nlp.js][12]是在其他几个NLP库上开发的包括Franc和Brain.js。它提供了一个能很好支持NLP组件的接口比如分类情感分析词干化命名实体识别和自然语言生成。它也支持一些其他语言在你处理除了英语之外的语言时也能提供一些帮助。总之它是一个不错的通用工具能够提供简单的接口去调用其他工具。在你需要更强大或更灵活的工具之前这个工具可能会在你的应用程序中用上很长一段时间。
### Java工具
#### OpenNLP
[OpenNLP][13]是由Apache基金会维护的所以它可以很方便地集成到其他Apache项目中比如Apache FlinkApache NiFi和Apache Spark。这是一个通用的NLP工具包含了所有NLP组件中的通用功能可以通过命令行或者以包的形式导入到应用中来使用它。它也支持很多种语言。OpenNLP是一个很高效的工具包含了很多特性如果你用Java开发生产的话它是个很好的选择。
#### StanfordNLP
[Stanford CoreNLP][14]是一个工具集提供了基于统计、基于深度学习和基于规则的NLP功能。这个工具也有许多其他编程语言的版本所以可以脱离Java来使用。它是由高水平的研究机构创建的一个高效的工具但在生产环境中可能不是最好的选择。此工具采用双重许可其中的特殊许可可以用于商业目的。总之在研究和实验中它是一个很棒的工具但在生产系统中可能会带来一些额外的开销。比起Java版本来说读者可能对它的Python版本更感兴趣。另外斯坦福大学教授在Coursera上讲授了最好的机器学习课程之一[点此][15]访问其他不错的资源。
#### CogCompNLP
[CogCompNLP][16]是由伊利诺伊大学开发的一个工具它也有一个功能相似的Python版本。它可以用于处理文本包括本地处理和远程处理能够极大地缓解你本地设备的压力。它提供了很多处理函数比如令牌化、词性标注、断句、命名实体标注、词形还原、依存分析和语义角色标注。它是一个很好的研究工具你可以自己探索它的不同功能。我不确定它是否适合生产环境但如果你使用Java的话它值得一试。
* * *
你最喜欢的开源的NLP工具和库是什么请在评论区分享文中没有提到的工具。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/natural-language-processing-tools
作者:[Dan Barker (Community Moderator)][a]
选题:[lujun9972][b]
译者:[zxp](https://github.com/zhangxiangping)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: http://www.nltk.org/
[3]: http://www.nltk.org/book_1ed/
[4]: https://spacy.io/
[5]: https://textblob.readthedocs.io/en/dev/
[6]: https://readthedocs.org/projects/textacy/
[7]: https://pytorchnlp.readthedocs.io/en/latest/
[8]: https://www.npmjs.com/package/retext
[9]: https://unified.js.org/
[10]: https://www.npmjs.com/package/compromise
[11]: https://www.npmjs.com/package/natural
[12]: https://www.npmjs.com/package/node-nlp
[13]: https://opennlp.apache.org/
[14]: https://stanfordnlp.github.io/CoreNLP/
[15]: https://opensource.com/article/19/2/learn-data-science-ai
[16]: https://github.com/CogComp/cogcomp-nlp

View File

@ -1,232 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
如何进行 Linux 启动时间优化
======
[![][1]][2]
_快速启动一台嵌入式设备或一台电信设备对于时间要求严格的应用程序是至关重要的并且在改善用户体验方面也起着非常重要的作用。这个文件给予一些关于如何增强任意设备的启动时间的重要技巧。_
快速启动或快速重启在各种情况下起着至关重要的作用。对于一套嵌入式设备来说,开始启动是为了保持所有服务的高可用性和更好的性能。设想一台电信设备运行一套没有启用快速启动的 Linux 操作系统。依赖于这个特殊嵌入式设备的所有的系统,服务和用户可能会受到影响。这些设备在其服务中维持高可用性是非常重要的,为此,快速启动和重启起着至关重要的作用。
一台电信设备的一次小故障或关机,甚至几秒钟,都可能会对无数在因特网上工作的用户造成破坏。因此,对于很多对时间要求严格的设备和电信设备来说,在它们的服务中包含快速启动以帮助它们快速重新开始工作是非常重要的。让我们从图表 1 中理解 Linux 启动过程。
![Figure 1: Boot-up procedure][3]
![Figure 2: Boot chart][4]
**监视工具和启动过程**
用户在对机器进行更改之前,应该注意许多因素。这包括机器的当前启动速度,以及占用资源并增加启动时间的服务、进程或应用程序。
**Boot chart:** 为监视启动速度和在启动期间启动的各种服务,用户可以使用下面的命令来安装 boot chart
```
sudo apt-get install pybootchartgui.
```
你每次启动时boot chart 在日志中保存一个 _.png_ (便携式网络图片)文件,使用户能够查看 _png_ 文件来理解系统的启动过程和服务。为此,使用下面的命令:
```
cd /var/log/bootchart
```
用户可能需要一个应用程序来查看 _.png_ 文件。Feh 是一个面向控制台用户的 X11 图像查看器。不像大多数其它的图像查看器它没有一个精致的图形用户界面但是它仅仅显示图片。Feh 可以用于查看 _.png_ 文件。你可以使用下面的命令来安装它:
```
sudo apt-get install feh
```
你可以使用 _feh xxxx.png_ 来查看 _png_ 文件。
图表 2 显示查看一个 boot chart 的 _png_ 文件时的启动图表。
但是,对于 Ubuntu 15.10 以后的版本不再需要 boot chart 。 为获取关于启动速度的简短信息,使用下面的命令:
```
systemd-analyze
```
![Figure 3: Output of systemd-analyze][5]
图表 3 显示命令 _systemd-analyze_ 的输出。
命令 _systemd-analyze_ blame 用于打印所有正在运行的基于初始化所用的时间的单元。这个信息是非常有用的并且可用于优化启动时间。systemd-analyze blame 不会显示服务于使用 _Type=simple_ 的结果,因为 systemd 认为这些服务是立即启动的;因此,不能完成测量初始化的延迟。
![Figure 4: Output of systemd-analyze blame][6]
图表 4 显示 _systemd-analyze_ blame 的输出.
下面的命令打印一个单元的时间关键的链的树:
```
command systemd-analyze critical-chain
```
图表 5 显示命令_systemd-analyze critical-chain_ 的输出。
![Figure 5: Output of systemd-analyze critical-chain][7]
**减少启动时间的步骤**
下面显示的是一些可采取的用于减少启动时间的步骤。
**BUM (启动管理器):** BUM 是一个运行级配置编辑器,当系统启动或重启时,允许 _init_ 服务的配置。它显示在启动时可以启动的每个服务的一个列表。用户可以打开和关闭之间切换个别的服务。 BUM 有一个非常干净的图形用户界面,并且非常容易使用。
在 Ubuntu 14.04 中, BUM 可以使用下面的命令安装:
```
sudo apt-get install bum
```
为在 15.10 以后的版本中安装它,从链接 _<http://apt.ubuntu.com/p/bum> 13_ 下载软件包。
以基础的事开始,禁用扫描仪和打印机相关的服务。如果你没有使用蓝牙和其它不想要的设备和服务,你也可以禁用它们中一些。我强烈建议你在禁用相关的服务前学习它们的基础知识,因为它可能会影响机器或操作系统。图表 6 显示 BUM 的图形用户界面。
![Figure 6: BUM][8]
**编辑 rc 文件:** 为编辑 rc 文件,你需要转到 rc 目录。这可以使用下面的命令来做到:
```
cd /etc/init.d.
```
然而,访问 _init.d_ 需要 root 用户权限,它基本上包含了开始/停止脚本,当系统在运行时或在启动期间,控制(开始,停止,重新加载,启动启动)守护进程。
_rc_ 文件在 _init.d_ 中被称为一个运行控制脚本。在启动期间init 执行 _rc_ 脚本并发挥它的作用。为改善启动速度,我们更改 _rc_ 文件。使用任意的文件编辑器打开 _rc_ 文件(当你在 _init.d_ 目录中时)。
例如,通过输入 _vim rc_ ,你可以更改 _CONCURRENCY=none_ 的值为 _CONCURRENCY=shell_ 。后者允许同时执行某些起始阶段的脚本,而不是连续地间断地交替执行。
在最新版本的内核中,该值应该被更改为 _CONCURRENCY=makefile_
图表 7 和 8 显示编辑 rc 文件前后的启动时间的比较,可以注意到启动速度的改善。编辑 rc 文件前的启动时间是 50.98 秒,而对 rc 文件进行更改后的启动时间是 23.85 秒。
但是,上面提及的更改方法在 Ubuntu 15.10 以后的操作系统上不工作,因为使用最新内核的操作系统使用 systemd 文件,而不再是 _init.d_ 文件。
![Figure 7: Boot speed before making changes to the rc file][9]
![Figure 8: Boot speed after making changes to the rc file][10]
**E4rat:** E4rat 代表 e4 ‘减少访问时间’ (仅在 ext4 文件系统的情况下). 它是由 Andreas Rid 和 Gundolf Kiefer 开发的一个项目. E4rat 是一个在碎片整理的帮助下来达到一次快速启动的应用程序。它也加速应用程序的启动。E4rat 排除使用物理文件重新分配的寻道时间和旋转延迟。这导致一个高速的磁盘传输速度。
E4rat 作为一个可以获得的 .deb 软件包,你可以从它的官方网站 _<http://e4rat.sourceforge.net/>_ 下载它.
Ubuntu 默认的 ureadahead 软件包与 e4rat 冲突。因此不得不使用下面的命令安装几个软件包:
```
sudo dpkg purge ureadahead ubuntu-minimal
```
现在使用下面的命令来安装 e4rat 的依赖关系:
```
sudo apt-get install libblkid1 e2fslibs
```
打开下载的 _.deb_ 文件,并安装它。现在需要恰当地收集启动数据来使 e4rat 工作。
遵循下面所给的步骤来使 e4rat 正确地运行,并提高启动速度。
* 在启动期间访问 Grub 菜单。这可以在系统启动时通过按住 shift 按键来完成。
* 选择通常用于启动的选项(内核版本),并按 e
* 查找以 _linux /boot/vmlinuz_ 开头的行,并在该行的末尾添加下面的代码(在句子的最后一个字母后按空格键)
```
- init=/sbin/e4rat-collect or try - quiet splash vt.handsoff =7 init=/sbin/e4rat-collect
```
* 现在,按 _Ctrl+x_ 来继续启动。这让 e4rat 在启动后收集数据。在机器上工作,打开应用程序,并在接下来的两分钟时间内关闭应用程序。
* 通过转到 e4rat 文件夹,并使用下面的命令来访问日志文件:
```
cd /var/log/e4rat
```
* 如果你没有找到任何日志文件,重复上面的过程。一旦日志文件在这里,再次访问 Grub 菜单,并按 e 作为你的选项。
* 在你之前已经编辑过的同一行的末尾输入 single 。这将帮助你访问命令行。如果出现一个要求任何东西的不同菜单选择恢复正常启动Resume normal boot。如果你不知为何不能进入命令提示符按 Ctrl+Alt+F1 组合键。
* 在你看到登录提示后,输入你的详细信息。
* 现在输入下面的命令:
```
sudo e4rat-realloc /var/lib/e4rat/startup.log
```
这个进程需要一段时间,依赖于机器的磁盘速度。
* 现在使用下面的命令来重启你的机器:
```
sudo shutdown -r now
```
* 现在,我们需要配置 Grub 来在每次启动时运行 e4rat 。
* 使用任意的编辑器访问 grub 文件。例如, _gksu gedit /etc/default/grub 。_
* 查找以 _GRUB CMDLINE LINUX DEFAULT=_ 开头的一行,并在引号之间和任何选项之前添加下面的行:
```
init=/sbin/e4rat-preload 18
```
* 它应该看起来像这样:
```
GRUB CMDLINE LINUX DEFAULT = init=/sbin/e4rat- preload quiet splash
```
* 保存并关闭 Grub 菜单,并使用 _sudo update-grub_ 更新 Grub 。
* 重启系统,你将在启动速度方面发现显著的变化。
图表 9 和 10 显示在安装 e4rat 前后的启动时间的不同。启动速度的改善可以被注意到。在使用 e4rat 前启动所用时间是 22.32 秒,然而在使用 e4rat 后启动所用时间是 9.065 秒。
![Figure 9: Boot speed before using e4rat][11]
![Figure 10: Boot speed after using e4rat][12]
**一些易做的调整**
一个极好的启动速度也可以使用非常小的调整来实现,其中两个在下面列出。
**SSD:** 使用固态设备而不是普通的硬盘或者其它的存储设备将肯定会改善启动速度。SSD 也帮助获得在传输文件和运行应用程序方面的极好速度。
**禁用图形用户界面:** 图形用户界面,桌面图形和窗口动画占用大量的资源。禁用图形用户界面是另一个实现极好的启动速度的好方法。
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
作者:[B Thangaraju][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/b-thangaraju/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?resize=696%2C496&ssl=1 (Screenshot from 2019-10-07 13-16-32)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?resize=350%2C302&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?resize=350%2C412&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?resize=350%2C69&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?resize=350%2C535&ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?resize=350%2C206&ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?resize=350%2C449&ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?resize=350%2C85&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?resize=350%2C72&ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?resize=350%2C61&ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?resize=350%2C61&ssl=1

View File

@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What I learned going from prison to Python)
[#]: via: (https://opensource.com/article/20/1/prison-to-python)
[#]: author: (Shadeed "Sha" Wallace-Stepter https://opensource.com/users/shastepter)
从监狱到 Python我学到了什么
======
开源编程是如何为监狱中的人提供机会的。
![书架上的编程书籍][1]
不到一年前,我还在圣昆廷州立监狱服刑,我是无期徒刑。
我高三的时候,我抢劫了一个人并向他开了枪。在经历了陪审团审判、看到我的行为带来的恶果之后,我才渐渐意识到并承认自己做错了,我知道需要改变自己,我也确实做到了。尽管我对自己的行为感到懊悔,但我毕竟开枪打了一个人,并差点杀了他。做这样的事是有后果的,这是理所当然的。所以在我 18 岁的时候,我被判了终身监禁。
监狱是一个非常可怕的地方,我并不向你推荐。但是我必须去,所以我去了。具体的细节我就不说了,但你可以放心,这是一个机会匮乏的地方,许多人在这里养成的坏习惯比他们进来之前还要多。
我是幸运儿。当我在服刑的时候,发生了一些不一样的事情。我开始想象自己出狱后的未来,尽管在那之前,我的整个成年生活都是在监狱里度过的。
现在你想想:我是黑人,只受过高中教育,没有工作经历,而且即便离开监狱,我也是一个被定罪的重罪犯。我想,任何雇主看到我的简历,都不会产生“我得雇用这个人”的想法,这很正常。
我不知道我有哪些选择,但我已经下定决心:我需要做些能谋生的事情,而且它要和我入狱前的生活完全不同。
### Python 之路
最终,我被关在了圣昆廷州立监狱,而我当时并不知道自己在那里有多幸运。圣昆廷提供了几个自助式的教育编程项目,这些[改造机会][2]帮助囚犯掌握获释后避免再次犯罪所需的技能。
作为其中一个编程项目的一部分2017 年,我通过圣昆廷媒体项目认识了[杰西卡·麦凯拉][3]。杰西卡是编程语言 [Python][4] 的爱好者,她开始向我推荐 Python 有多棒,以及它对刚起步的人来说是多么完美的学习语言。这就是故事变得比小说更精彩的地方。
> 感谢 [@northbaypython][5] 让 [@ShaStepter][6] 和我再次进行 [@pycon][7] 的主题演讲,并把它们录制了下来。我很荣幸与大家分享:
>
> 从监狱到 Python: https://t.co/rcumoAgZHm
>
> 大规模去监禁化:如果我们不雇佣被判重罪的人,谁会呢? https://t.co/fENDUFdxfX [pic.Twitter.com/kpjo8d3ul6][8]
>
> —杰西卡·麦凯拉(@jessicamckellar[2019 年 11 月 5 日][9]
杰西卡告诉我,她为一家名叫 [O'Reilly Media][10] 的公司制作了一些在线的 Python 视频教程,要是我能接触到它们就好了。不幸的是,在监狱里上网是不可能的。但是,我遇到了一个叫 Tim O'Reilly 的人他最近刚来过圣昆廷。在他访问之后Tim 从他的公司 O'Reilly Media 向监狱的编程班捐赠了大量内容。最终,我拿到了一台装有杰西卡的 Python 教程的平板电脑,并通过这些教程学会了编程。
回想起来真是难以置信:一群背景和生活与我完全不同的陌生人相互接力,让我学会了编程。
### 对 Python 社区的热爱
在这之后,我开始经常和杰西卡见面,她开始告诉我关于开源社区的情况。从根本上说,开源社区就是关于伙伴关系和协作的社区。因为没有人被排除在外,所以效果很好。
对我这样一个努力寻找自己定位的人来说,我在其中看到的是一种非常根本的爱:通过合作和接纳而来的爱,通过接触而来的爱,通过包容而来的爱。我渴望成为其中的一部分。所以我继续学习 Python。不幸的是我无法获得更多的教程但我能够从开源社区汇集的大量书面知识中获益。我阅读了一切提到 Python 的东西,从平装书到晦涩难懂的杂志文章,并用平板电脑来解决我读到的 Python 问题。
我对 Python 和编程的热情不是我的许多同龄人所共有的。除了监狱编程课上的极少数人之外,我认识的其他人都没有提到过编程;一般囚犯都不知道。我认为这是因为有过监禁经历的人无法接触编程,尤其是如果你是有色人种。
### 监狱外的 Python 生活
然而,在 2018 年 8 月 17 日,我得到了生命中的惊喜。杰里·布朗州长对我 27 年至终身监禁的刑期进行了减刑,在服刑近 19 年后,我被释放出狱了。
但现实情况是,这也是为什么我认为编程和开源社区如此有价值:我是一名 37 岁的黑人重罪犯,没有工作经历,刚刚在监狱里服了 18 年刑。由于我的犯罪史和现存的偏见,适合我的职业并不多,而编程是少数例外之一。
监禁后重返社会的人们迫切需要包容,但当谈话转向工作场所的多样性以及对多样性的需求时,你真的听不到这个群体被提及或包容。
> 还有什么:
>
> 1\. 背景调查:询问他们在你的公司是如何使用的。
>
> 2\. 初级角色:删除虚假的、不必要的先决条件,这些条件将排除有记录的合格人员。
>
> 3\. 积极拓展:与当地再就业项目合作,创建招聘渠道。[11]
>
> —杰西卡·麦凯拉(@ jessicamckellar)[2019年5月12日][12]
因此,我想谦卑地挑战开源社区的所有程序员和成员,让他们围绕包容和多样性展开思考。今天,我自豪地站在你们面前,代表一个大多数人都没有想到的群体——以前被监禁的人。但是我们存在,我们渴望证明我们的价值,最重要的是,我们期待被接受。当我们重返社会时,许多挑战等待着我们,我请求你们允许我们有机会展示我们的价值。欢迎我们,接受我们,最重要的是,包容我们。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/prison-to-python
作者:[Shadeed "Sha" Wallace-Stepter][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/shastepter
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2 (Programming books on a shelf)
[2]: https://www.dailycal.org/2019/02/27/san-quentin-rehabilitation-programs-offer-inmates-education-a-voice/
[3]: https://twitter.com/jessicamckellar?lang=en
[4]: https://www.python.org/
[5]: https://twitter.com/northbaypython?ref_src=twsrc%5Etfw
[6]: https://twitter.com/ShaStepter?ref_src=twsrc%5Etfw
[7]: https://twitter.com/pycon?ref_src=twsrc%5Etfw
[8]: https://t.co/Kpjo8d3ul6
[9]: https://twitter.com/jessicamckellar/status/1191601209917837312?ref_src=twsrc%5Etfw
[10]: http://shop.oreilly.com/product/110000448.do
[11]: https://t.co/WnzdEUTuxr
[12]: https://twitter.com/jessicamckellar/status/1127640222504636416?ref_src=twsrc%5Etfw

View File

@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to stop typosquatting attacks)
[#]: via: (https://opensource.com/article/20/1/stop-typosquatting-attacks)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
如何防范误植攻击
======
> <ruby>误植<rt>Typosquatting</rt></ruby>是一种引诱用户将敏感数据泄露给不法分子的方式,针对这种攻击方式,我们很有必要了解如何保护我们的组织、我们的开源项目以及我们自己。
![Gears above purple clouds][1]
除了常规手段以外,不法分子还会利用社会工程的方式,试图让安全意识较弱的人泄露私人信息或是有价值的证书。很多[网络钓鱼骗局][2]的实质都是攻击者伪装成信誉良好的公司或组织,然后借此大规模分发病毒或恶意软件。
[误植][3]就是其中一个常用的手法。它是一种社会工程学的攻击方式,通过使用一些合法网站的错误拼写 URL 以引诱用户访问恶意网站,这样的做法既使真正的原网站遭受声誉上的损害,又诱使用户向这些恶意网站提交个人敏感信息。因此,网站的管理人员和用户双方都应该意识到这个问题带来的风险,并采取措施加以保护。
一些由广大开发者在公共代码库中维护的开源软件通常都被认为具有安全上的优势,但当面临社会工程学攻击或恶意软件植入时,开源软件也需要注意以免受到伤害。
下面就来关注一下误植攻击的发展趋势,以及这种攻击方式在未来可能对开源软件造成的影响。
### 什么是误植?
误植是一种具体的网络犯罪手段其背后通常是一个网络钓鱼骗局。不法分子首先会购买注册域名而他们注册的域名通常是一个常用网站的错误拼写形式例如在正确拼写的基础上添加一个额外的元音字母又或者是将字母“i”替换成字母“l”。对于同一个正常域名不法分子通常会注册数十个拼写错误的变体域名。
用户一旦访问这样的域名,不法分子的目的就已经成功了一半。为此,他们会通过电子邮件的方式,诱导用户访问这样的伪造域名。伪造域名指向的页面中,通常都带有一个简单的登录界面,还会附上被模仿的网站的 logo尽可能让用户认为自己访问的是正确的网站。
如果用户没有识破这一个骗局,在页面中提交了诸如银行卡号、用户名、密码等敏感信息,这些数据就会被不法分子所完全掌控。进一步来看,如果这个用户在其它网站也使用了相同的用户名和密码,那就有同样受到波及的风险。受害者最终可能会面临身份被盗、信用记录被破坏等危险。
### 最近的一些案例
从网站的所有方来看,遭到误植攻击可能会带来一场公关危机。尽管网站域名的所有者没有参与到犯罪当中,但这会被认为是一次管理上的失职,因为域名所有者有主动防御误植攻击的责任,以避免这一类欺诈事件的发生。
在几年之前就发生过[一起案件][4],很多健康保险客户收到了一封指向 we11point.com 的钓鱼电子邮件,其中 URL 里正确的字母“l”被换成了数字“1”从而导致一批用户成为了这一次攻击的受害者。
最初,特定国家的顶级域名是不允许随意注册的。但后来国际域名规则中放开这一限制之后,又兴起了一波新的误植攻击。例如最常见的一种手法就是注册一个与 .com 域名类似的 .om 域名,一旦在输入 URL 时不慎遗漏了字母 c 就会给不法分子带来可乘之机。
### 网站如何防范误植攻击
对于一个公司来说,最好的策略就是抢在误植攻击发生之前行动。
也就是说,在注册域名的时候,不仅要注册自己商标名称的域名,最好还要同时注册可能由于拼写错误产生的其它域名。当然,没有必要把所有可能的顶级域名变体都注册下来,但至少要把最容易拼错的那些域名抢注下来。
如果你有让用户跳转到一个第三方网站的需求,务必要让用户从你的官方网站上进行跳转,而不应该通过类似群发邮件的方式向用户告知 URL。因此必须明确一个策略在与用户通信交流时不将用户引导到官方网站以外的地方去。在这样的情况下如果有不法分子试图以你公司的名义发布虚假消息用户将会从带有异样的页面或 URL 上有所察觉。
你可以使用类似 [DNS Twist][5] 的开源工具来扫描公司正在使用的域名它可以确定是否有相似的域名已被注册从而暴露潜在的误植攻击。DNS Twist 可以在 Linux 系统上通过一系列的 shell 命令来运行。
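例如(一个最小示例,其中 example.com 是假设的待检查域名):
```
# 生成 example.com 的各种拼写变体,并检查它们是否已被注册
dnstwist example.com
```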
还有一些网络提供商会将防护误植攻击作为他们网络产品的一部分。这就相当于一层额外的保护,如果用户不慎输入了带有拼写错误的 URL就会被提示该页面已经被阻止并重定向到正确的域名。
如果你是系统管理员,还可以考虑运行一个自建的 [DNS 服务器][6],以便通过黑名单的机制禁止对某些域名的访问。
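以 dnsmasq 为例,一个假设性的配置片段如下(其中的误植域名是虚构的):
```
# /etc/dnsmasq.conf将常见的误植变体解析到不可用地址从而在本地网络中屏蔽它们
address=/examp1e.com/0.0.0.0
address=/exampel.com/0.0.0.0
```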
你还可以密切监控网站的访问流量,如果来自某个特定地区的用户被集体重定向到了虚假的站点,那么访问量将会发生骤降。这也是一个有效监控误植攻击的角度。
防范误植攻击与防范其它网络攻击一样,需要保持警惕。所有用户都希望网站的所有者能够清除那些与正主相似的假冒站点,如果这项工作没有做好,用户对你的信任程度就会每况愈下。
### 误植对开源软件的影响
因为开源项目的源代码是公开的,所以其中大部分项目都会进行安全和渗透测试。但错误是不可能完全避免的,如果你参与了开源项目,还是有需要注意的地方。
当你收到一个不明来源的<ruby>合并请求<rt>Merge Request</rt></ruby>或补丁时,必须在合并之前仔细检查,尤其是相关代码涉及到网络层面的时候。一定要进行严格的检查和测试,以确保没有恶意代码混入正常的代码当中。
同时,还要严格按照正确的方法使用域名,避免不法分子创建仿冒的下载站点并提供带有恶意代码的软件。可以通过如下所示的方法使用数字签名来确保你的软件没有被篡改:
```
gpg --armor --detach-sig \
    --output advent-gnome.sig \
example-0.0.1.tar.xz
```
同时给出你提供的文件的校验和:
```
sha256sum example-0.0.1.tar.xz > example-0.0.1.txt
```
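相应地,用户一侧可以这样进行校验(沿用上面示例中的文件名):
```
# 校验 GPG 签名(需要先导入发布者的公钥)
gpg --verify advent-gnome.sig example-0.0.1.tar.xz
# 校验 SHA-256 校验和
sha256sum --check example-0.0.1.txt
```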
无论你的用户会不会去用上这些安全措施,你也应该提供这些必要的信息。因为只要有那么一个人留意到签名有异样,就能为你敲响警钟。
### 总结
人类犯错在所难免。世界上数百万人输入同一个域名,总会有人拼写出错。不法分子正是抓住了这一点才得以实施误植攻击。
用抢注域名的方式去完全根治误植攻击也是不太现实的,我们更应该关注这种攻击的传播方式以减轻它对我们的影响。最好的保护就是和用户之间建立信任,并积极检测误植攻击的潜在风险。作为开源社区,我们更应该团结起来一起应对误植攻击。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/stop-typosquatting-attacks
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds)
[2]: https://www.cloudberrylab.com/resources/guides/types-of-phishing/
[3]: https://en.wikipedia.org/wiki/Typosquatting
[4]: https://www.menlosecurity.com/blog/-a-new-approach-to-end-typosquatting
[5]: https://github.com/elceef/dnstwist
[6]: https://opensource.com/article/17/4/build-your-own-name-server

View File

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Run multiple consoles at once with this open source window environment)
[#]: via: (https://opensource.com/article/20/1/multiple-consoles-twin)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用此开源窗口环境一次运行多个控制台
======
> 在我们的 20 个使用开源提升生产力的系列的第十四篇文章中用 twin 模拟了老式的 DESQview 体验。
![Digital creative of a browser on the internet][1]
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 通过 twin 克服“一个屏幕,一个应用程序”的限制
还有人记得 [DESQview][2] 吗?我们在 Windows、Linux 和 MacOS 中把“在屏幕上同时运行多个程序”视为理所当然,而 DESQview 给 DOS 带来了同样的能力。在我运营拨号 BBS 服务的初期DESQview 是必需品,它使我能够让 BBS 在后台运行,同时在前台进行其他操作。例如,当有人拨号连入时,我可能正在开发新功能或设置新的外部程序,而不会影响他们的体验。后来,在我早期做技术支持的时候,我可以同时运行我的工作电子邮件([MHS 上的 DaVinci 电子邮件][3])、支持工单系统和其他 DOS 程序。这真是令人吃惊!
![twin][4]
从那时起,运行多个控制台应用程序的功能已经发展了很多。但是 [tmux][5] 和 [Screen][6] 等应用仍然遵循“一个屏幕一个应用”的显示方式。好吧是的tmux 具有屏幕拆分和窗格,但是不像 DESQview 那样具有将窗口“浮动”在其他窗口上的功能,就我个人而言,我怀念那个功能。
让我们来看看 [twin][7]文本模式窗口环境。我认为这个相对年轻的项目是 DESQview 的精神继任者。它支持控制台和图形环境,并具有与会话分离和重新接驳的功能。设置起来并不是那么容易,但是它可以在大多数现代操作系统上运行。
Twin 是从源代码安装的(目前是这样)。但是首先,你需要安装所需的开发库,库名称将因操作系统而异。以下示例是在我的 Ubuntu 19.10 系统中的情况。安装好依赖库后,从 Git 中检出 twin 源代码,并运行 `./configure` 和 `make`,它们应能自动检测所有内容并构建 twin
```
sudo apt install libx11-dev libxpm-dev libncurses-dev zlib1g-dev libgpm-dev
git clone git@github.com:cosmos72/twin.git
cd twin
./configure
make
sudo make install
```
注意:如果要在 MacOS 或 BSD 上进行编译,则需要在文件 `include/Tw/autoconf.h` 和 `include/twautoconf.h` 中注释掉 `#define socklen_t int`。这个问题应该会通过 [twin #57][9] 得到解决。
![twin text mode][10]
第一次调用 twin 是一个小挑战,你需要通过 `--hw` 参数告诉它使用哪种显示方式。例如,要启动文本模式的 twin请输入 `twin --hw=tty,TERM=linux`,这里指定的 `TERM` 变量会覆盖你当前 shell 中的终端类型变量。要启动图形版本,运行 `twin --hw=X@$DISPLAY`。在 Linux 上twin 一般都“可以正常工作”,而在 MacOS 上twin 基本上只能在终端中使用。
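也就是说,上面提到的两种启动方式分别是:
```
# 文本模式(在控制台/tty 中)
twin --hw=tty,TERM=linux
# 图形模式(在 X 会话中)
twin --hw=X@$DISPLAY
```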
*真正*的乐趣是可以通过 `twattach``twdisplay` 命令接驳到正在运行的会话的功能。它们使你可以接驳到其他正在运行的 twin 会话。例如,在 Mac 上,我可以运行以下命令以接驳到演示机器上运行的 twin 会话:
```
twdisplay --twin@20days2020.local:0 --hw=tty,TERM=linux
```
![remote twin session][11]
通过多做一些工作,你还可以将其用作登录外壳,以代替控制台上的 [getty][12]。这需要 gpm 鼠标守护进程、twdm 应用程序(已包含在内)和一些额外的配置。在使用 systemd 的系统上,首先安装并启用 gpm如果尚未安装然后使用 `systemctl` 为某个控制台创建覆盖配置(我使用 tty6。这些命令必须以 root 用户身份运行;在 Ubuntu 上,它们看起来像这样:
```
apt install gpm
systemctl enable gpm
systemctl start gpm
systemctl edit getty@tty6
```
`systemctl edit getty@tty6` 命令将打开一个名为 `override.conf` 的空文件。它可以定义 systemd 服务设置以覆盖 6 号控制台的默认设置。将内容更新为:
```
[Service]
ExecStart=
ExecStart=-/usr/local/sbin/twdm --hw=tty@/dev/tty6,TERM=linux
StandardInput=tty
StandardOutput=tty
```
现在,重新加载 systemd 并重新启动 tty6 以获得 twin 登录提示:
```
systemctl daemon-reload
systemctl restart getty@tty6
```
![twin][13]
这将为登录的用户启动一个 twin 会话。我不建议在多用户系统中使用此会话,但是对于个人桌面来说,这是很酷的。并且,通过使用 `twattach``twdisplay`,你可以从本地 GUI 或远程桌面访问该会话。
我认为 twin 真是太酷了。它还有一些不够完善的细节,但是基本功能都已经有了,并且有一些非常好的文档。另外,它也让我得以在现代操作系统上稍稍重温 DESQview 式的体验。我希望它随着时间的推移不断改进,也希望你和我一样喜欢它。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/multiple-consoles-twin
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://en.wikipedia.org/wiki/DESQview
[3]: https://en.wikipedia.org/wiki/Message_Handling_System
[4]: https://opensource.com/sites/default/files/uploads/productivity_14-1.png (twin)
[5]: https://github.com/tmux/tmux/wiki
[6]: https://www.gnu.org/software/screen/
[7]: https://github.com/cosmos72/twin
[9]: https://github.com/cosmos72/twin/issues/57
[10]: https://opensource.com/sites/default/files/uploads/productivity_14-2.png (twin text mode)
[11]: https://opensource.com/sites/default/files/uploads/productivity_14-3.png (remote twin session)
[12]: https://en.wikipedia.org/wiki/Getty_(Unix)
[13]: https://opensource.com/sites/default/files/uploads/productivity_14-4.png (twin)

View File

@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use Vim to manage your task list and access Reddit and Twitter)
[#]: via: (https://opensource.com/article/20/1/vim-task-list-reddit-twitter)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用 Vim 管理任务列表和访问 Reddit 和 Twitter
======
在我们的 20 个使用开源提升生产力的系列的第十七篇文章中了解在编辑器中处理待办列表以及获取社交信息。
![Chat via email][1]
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 Vim 执行(几乎)所有操作,第 2 部分
在[昨天的文章][2]中,你开始用 Vim 检查邮件和日历。今天,你可以做得更多。首先,你会在 Vim 编辑器中跟踪任务,然后获取社交信息。
#### 使用 todo.txt-vim 在 Vim 中跟踪待办任务
![to-dos and Twitter with Vim][3]
用 Vim 编辑一个文本格式的待办事项文件是很自然的事,而 [todo.txt-vim][4] 包使其更加简单。首先安装 todo.txt-vim 包:
```
git clone https://github.com/freitass/todo.txt-vim ~/.vim/bundle/todo.txt-vim
vim ~/path/to/your/todo.txt
```
todo.txt-vim 自动将以 todo.txt 和 done.txt 结尾的文件识别为 [todo.txt][5] 文件,并添加专用于 todo.txt 格式的键绑定。你可以使用 **\x** 将条目标记为“已完成”,使用 **\d** 将其日期设置为当前日期,并使用 **\a**、**\b** 和 **\c** 更改优先级。你可以提升(**\k**)或降低(**\j**)优先级,并按项目(**\s+**)、上下文(**\s@**)或日期(**\sd**)进行排序(**\s**)。完成后,像平常一样保存并关闭文件即可。
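作为参考,一个符合 todo.txt 格式的文件内容大致如下(条目内容是假设的示例):
```
(A) 2020-02-14 回复客户邮件 +项目A @电脑
(B) 2020-02-14 整理发布说明 +项目A
x 2020-02-13 提交周报 @办公室
```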
todo.txt-vim 包是我几天前介绍的 [todo.sh 程序][6]的一个很好的补充,配合 [todo edit][7] 附加组件,可以进一步增强你的待办事项跟踪。
#### 使用 vim-reddit 读取 Reddit
![Reddit in Vim][8]
Vim 还有一个不错的用于 [Reddit][9] 的加载项,叫 [vim-reddit][10]。它不如 [Tuir][11] 好,但是用于快速查看最新的文章,它还是不错的。首先安装捆绑包:
```
git clone https://github.com/DougBeney/vim-reddit.git ~/.vim/bundle/vim-reddit
vim
```
现在输入 **:Reddit** 将加载 Reddit 首页。你可以使用 **:Reddit name** 加载特定的子版块。打开文章列表后,使用箭头键导航,或使用鼠标滚动。按 **o** 将在 Vim 中打开文章(除非它是多媒体文章,那样会打开浏览器),按 **c** 打开评论。如果要直接在浏览器中打开页面,请按 **O** 而不是 **o**。按 **u** 即可返回。看完 Reddit 后,输入 **:bd** 即可。vim-reddit 唯一的缺点是无法登录,也无法发布新文章和评论。话又说回来,有时这是一件好事。
#### 使用 twitvim 在 Vim 中发推
![Twitter in Vim][12]
最后,我们有 [twitvim][13],这是一个用于阅读和发布 Twitter 的 Vim 软件包。它需要多一些设置。首先从 GitHub 安装 twitvim
```
git clone https://github.com/twitvim/twitvim.git ~/.vim/bundle/twitvim
```
现在你需要编辑 **.vimrc** 文件并设置一些选项,让插件知道该使用哪些语言库与 Twitter 交互。运行 **vim --version**,前面带有 **+** 的语言就是你的 Vim 所支持的。
![Enabled and Disabled things in vim][14]
因为我的输出是 **+perl -python +python3**,所以我知道我可以启用 Perl 和 Python 3但不能启用 Python 2
```
" TwitVim Settings
let twitvim_enable_perl = 1
" let twitvim_enable_python = 1
let twitvim_enable_python3 = 1
```
现在,你可以运行 **:SetLoginTwitter**,它会打开一个浏览器窗口,要求你授权 TwitVim 访问你的帐户。在 Vim 中输入浏览器提供的 PIN 后就可以了。
Twitvim 的命令不像其他插件的那样简洁。要加载好友和关注者的时间线,请输入 **:FriendsTwitter**。要列出提及你的推文和回复请使用 **:MentionsTwitter**。发布新推文使用 **:PosttoTwitter &lt;你的消息&gt;**。你可以滚动列表并输入 **\r** 回复特定推文,也可以用 **\d** 直接给某人发私信。
就是这些了。你现在可以在 Vim 中做(几乎)所有事了!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/vim-task-list-reddit-twitter
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_chat_communication_message.png?itok=LKjiLnQu (Chat via email)
[2]: https://opensource.com/article/20/1/send-email-and-check-your-calendar-vim
[3]: https://opensource.com/sites/default/files/uploads/productivity_17-1.png (to-dos and Twitter with Vim)
[4]: https://github.com/freitass/todo.txt-vim
[5]: http://todotxt.org
[6]: https://opensource.com/article/20/1/open-source-to-do-list
[7]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory#edit-open-in-text-editor
[8]: https://opensource.com/sites/default/files/uploads/productivity_17-2.png (Reddit in Vim)
[9]: https://reddit.com
[10]: https://github.com/DougBeney/vim-reddit
[11]: https://opensource.com/article/20/1/open-source-reddit-client
[12]: https://opensource.com/sites/default/files/uploads/productivity_17-3.png (Twitter in Vim)
[13]: https://github.com/twitvim/twitvim
[14]: https://opensource.com/sites/default/files/uploads/productivity_17-4.png (Enabled and Disabled things in vim)

View File

@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Joplin: The True Open Source Evernote Alternative)
[#]: via: (https://itsfoss.com/joplin/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Joplin真正的 Evernote 开源替代品
======
_**简介Joplin 是一个开源笔记记录和待办应用。你可以将笔记组织到笔记本中并标记它们。Joplin 还提供网络剪贴板来保存来自互联网的文章。**_
### Joplin开源笔记管理器
![][1]
如果你喜欢 [Evernote][2],那么你应该能很快适应这个开源软件:[Joplin][3]。
Joplin 是一个优秀的开源笔记应用,拥有丰富的功能。你可以记笔记、写待办事项,并且通过连接 Dropbox 和 NextCloud 等云服务来跨设备同步笔记。同步过程由端到端加密保护。
Joplin 还有一个网页剪贴板,能让你将网页另存为笔记。网页剪贴板可用于 Firefox 和 Chrome/Chromium 浏览器。
Joplin 可以导入 enex 格式的 Evernote 文件,这让从 Evernote 迁移过来变得容易。
因为数据由你自己保管,你可以将所有笔记以 Joplin 格式或原始格式导出。
### Joplin 的功能
![][4]
以下是 Joplin 的所有功能列表:
* 将笔记保存到笔记本和子笔记本中,以便更好地组织
* 创建待办事项清单
* 可以给笔记添加标签并进行搜索
* 离线优先,因此即使没有互联网连接,所有数据始终在设备上可用
* Markdown 笔记,支持图片、数学符号和复选框
* 支持附件
* 应用可在桌面、移动设备和终端CLI使用
* 可在 Firefox 和 Chrome 中使用[网页剪贴板][5]
* 端到端加密
* 保留笔记历史
* 根据名称、时间等对笔记进行排序
* 与 [Nextcloud][7]、Dropbox、WebDAV 和 OneDrive 等各种[云服务][6]同步
* 从 Evernote 导入文件
* 导出 JEX 文件Joplin 导出格式)和原始文件
* 支持笔记、待办事项、标签和笔记本
* 任意跳转功能
* 支持移动设备和桌面应用通知
* 地理位置支持
* 支持多种语言
* 外部编辑器支持:在 Joplin 中一键用你最喜欢的编辑器打开笔记
### 在 Linux 和其他平台上安装 Joplin
![][10]
[Joplin][11] 是一个跨平台应用,可用于 Linux、macOS 和 Windows。在移动设备上你可以[获取 APK 文件][12]将其安装在 Android 和基于 Android 的 ROM 上。你也可以[从谷歌 Play 商店下载][13]。
在 Linux 中,你可以获取 Joplin 的 [AppImage][14] 文件,并作为可执行文件运行。你需要为下载的文件授予执行权限。
[Download Joplin][15]
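下载 AppImage 之后,操作大致如下(文件名是假设的,以你实际下载的版本为准):
```
# 赋予执行权限,然后直接运行
chmod u+x Joplin-1.0.178.AppImage
./Joplin-1.0.178.AppImage
```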
### 体验 Joplin
Joplin 中的笔记使用 markdown但你不需要了解它。编辑器的顶部面板能让你以图形方式选择项目符号、标题、图像、链接等。
虽然 Joplin 提供了许多有趣的功能,但你需要自己去尝试。例如,网页剪贴板默认是没有启用的,我得自己找出如何打开它。
你需要从桌面应用启用剪贴板功能。在顶部菜单中,进入 “Tools->Options”你可以在此处找到网页剪贴板选项
![Enable Web Clipper from the desktop application first][16]
它的网页剪贴板不如 Evernote 的智能,后者能以图形方式截取网页文章的一部分,但它也够用了。
这是一个在积极开发中的开源软件,我希望它随着时间的推移得到更多的改进。
**总结**
如果你正在寻找一个带有网页剪贴板的不错的笔记应用,你可以试试 Joplin。如果你喜欢它并会继续使用可以尝试通过捐赠或者改进代码和文档来帮助 Joplin 的开发。我以 It's FOSS 的名义[捐赠][17]了 25 欧元。
如果你曾经使用过 Joplin或者仍在使用它你对此的体验如何如果你用的是其他笔记应用你会切换到 Joplin 么?欢迎分享你的观点。
--------------------------------------------------------------------------------
via: https://itsfoss.com/joplin/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_logo.png?ssl=1
[2]: https://evernote.com/
[3]: https://joplinapp.org/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_featured.jpg?ssl=1
[5]: https://joplinapp.org/clipper/
[6]: https://itsfoss.com/cloud-services-linux/
[7]: https://nextcloud.com/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_ubuntu.jpg?ssl=1
[11]: https://github.com/laurent22/joplin
[12]: https://itsfoss.com/download-apk-ubuntu/
[13]: https://play.google.com/store/apps/details?id=net.cozic.joplin&hl=en_US
[14]: https://itsfoss.com/use-appimage-linux/
[15]: https://github.com/laurent22/joplin/releases
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_web_clipper.jpg?ssl=1
[17]: https://itsfoss.com/donations-foss/

View File

@ -1,99 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to the Linux command line)
[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux 命令行简介
======
下面是一些针对刚开始使用 Linux 命令行的人的热身练习。警告:它可能会上瘾。[Sandra Henry-Stocker / Linux][1] [(CC0)][2]
如果你是 Linux 新手,或者从来没有花时间研究过命令行,你可能不会理解为什么这么多 Linux 爱好者坐在舒适的桌面使用大量工具与应用时键入命令会产生兴奋。在这篇文章中,我们将快速浏览一下命令行的奇妙之处,看看能否让你着迷。
首先,要使用命令行,你必须打开一个命令工具(也称为“命令提示符”)。如何做到这一点将取决于你运行的 Linux 版本。例如,在 RedHat 上,你可能会在屏幕顶部看到一个 Activities 选项卡,它将打开一个选项列表和一个用于输入命令的小窗口(如 “cmd” ,它将为你打开窗口)。在 Ubuntu 和其他一些版本中,你可能会在屏幕左侧看到一个小的终端图标。在许多系统上,你可以同时按 **Ctrl+Alt+t** 键打开命令窗口。
如果你使用 PuTTY 之类的工具登录 Linux 系统,你会发现自己已经处于命令行界面。
一旦你得到你的命令行窗口,你会发现自己坐在一个提示符面前。它可能只是一个 **$** 或者像 “**user@system:~$**” 这样的东西,但它意味着系统已经准备好为你运行命令了。
一旦你走到这一步,就应该开始输入命令了。下面是一些要首先尝试的命令,以及这里是一些特别有用的命令的 [PDF][4] 和适合打印和做成卡片的双面命令手册。
```
命令 用途
pwd 显示我在文件系统中的位置(在最初进入系统时运行将显示主目录)
ls 列出我的文件
ls -a 列出我更多的文件(包括隐藏文件)
ls -al 列出我的文件,并且包含很多详细信息(包括日期、文件大小和权限)
who 告诉我谁登录了(如果只有你,不要失望)
date 提醒我今天是星期几(也显示时间)
ps 列出我正在运行的进程(可能只是你的 shell 和 "ps" 命令)
```
一旦你从命令行角度习惯了 Linux 主目录之后,就可以开始探索了。也许你会准备好使用以下命令在文件系统中闲逛:
```
命令 用途
cd /tmp 移动到其他文件夹(本例中,打开 /tmp 文件夹)
ls 列出当前位置的文件
cd 回到主目录(不带参数的 cd 总是能将你带回到主目录)
cat .bashrc 显示文件的内容(本例中显示 .bashrc 文件的内容)
history 显示最近执行的命令
echo hello 跟自己说 “hello”
cal 显示当前月份的日历
```
要了解为什么高级 Linux 用户如此喜欢命令行,你需要尝试其他一些功能,例如重定向和管道。重定向是指获取命令的输出,将其写入文件而不是显示在屏幕上;管道是指将一个命令的输出发送给另一个命令,由后者对其做进一步处理。下面是一些可以尝试的命令:
```
命令 用途
echo “echo hello” > tryme 创建一个新的文件并将 “echo hello” 写入该文件
chmod 700 tryme 使新建的文件可执行
./tryme 运行新文件(它应当运行文件中包含的命令并且显示 "hello"
ps aux 显示所有运行中的程序
ps aux | grep $USER 显示所有运行中的程序,但是限制输出的内容包含你的用户名
echo $USER 使用环境变量显示你的用户名
whoami 使用命令显示你的用户名
who | wc -l 计数所有当前登录的用户数目
```
### 总结
一旦你习惯了基本命令,就可以探索其他命令并尝试编写脚本。 你可能会发现 Linux 比你想象的要强大并且好用得多。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[qianmingtian][c]
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[c]: https://github.com/qianmingtian
[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with GnuCash)
[#]: via: (https://opensource.com/article/20/2/gnucash)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
开始使用 GnuCash
======
使用 GnuCash 管理你的个人或小型企业会计。
![A dollar sign in a network][1]
在过去的四年里,我一直在用 [GnuCash][2] 来管理我的个人财务,我对此非常满意。这个开源 GPL v3 项目自 1998 年首次发布以来一直成长和改进2019 年 12 月发布的最新版本 3.8 增加了许多改进和 bug 修复。
GnuCash 可在 Windows、MacOS 和 Linux 中使用。它实现了复式记账系统,并可以导入各种流行的开放和专有文件格式,包括 QIF、QFX、OFX、CSV 等。这使得从包括 Quicken 在内的其他财务应用迁移过来变得很容易,它正是为了取代这些应用而出现的。
借助 GnuCash你可以跟踪个人财务状况以及小型企业会计和开票。它没有一个集成的工资系统。根据文档你可以在 GnuCash 中跟踪工资支出,但你必须在软件外部计算税金和扣减。
### 安装
要在 Linux 上安装 GnuCash
* 在 Red Hat、CentOS 或 Fedora 中: **$ sudo dnf install gnucash**
* 在 Debian、Ubuntu 或 Pop_OS 中: **$ sudo apt install gnucash**
你也可以从 [Flathub][3] 安装它,我在运行 Elementary OS 的笔记本上使用它。(本文中的所有截图都来自此次安装)。
### 设置
安装并启动程序后,你将看到一个欢迎屏幕,该页面提供了创建新账户集、导入 QIF 文件或打开新用户教程的选项。
![GnuCash Welcome screen][4]
#### 个人账户
如果你选择第一个选项正如我所做的那样GnuCash 会打开一个设置向导。它会收集初始数据并设置账户首选项,例如账户类型和名称、商业数据(例如税号)和首选货币。
![GnuCash new account setup][5]
GnuCash 支持个人银行账户、商业账户、汽车贷款、CD 和货币市场账户、儿童保育账户等。
例如,首先创建一个简单的支票簿。你可以输入账户的初始余额或以多种格式导入现有账户数据。
![GnuCash import data][6]
#### 开票
GnuCash 还支持小型企业功能,包括客户、供应商和开票。要创建发票,请在 **Business -> Invoice** 中输入数据。
![GnuCash create invoice][7]
然后,你可以将发票打印在纸上,也可以将其导出到 PDF 并通过电子邮件发送给你的客户。
![GnuCash invoice][8]
### 获取帮助
如果你有任何疑问GnuCash 有一个优秀的帮助系统,你可以从菜单栏的右侧获取指导。
![GnuCash help][9]
项目的网站包含许多有用的信息的链接,例如 GnuCash [功能][10]的概述。GnuCash 还提供了[详细的文档][11],可供下载和离线阅读,它还有一个 [wiki][12],为用户和开发人员提供了有用的信息。
你可以在项目的 [GitHub][13] 仓库中找到其他文件和文档。GnuCash 项目由志愿者驱动。如果你想参与,请查看项目的 wiki 上的 [Getting involved][14] 部分。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/gnucash
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0 (A dollar sign in a network)
[2]: https://www.gnucash.org/
[3]: https://flathub.org/apps/details/org.gnucash.GnuCash
[4]: https://opensource.com/sites/default/files/images/gnucash_welcome.png (GnuCash Welcome screen)
[5]: https://opensource.com/sites/default/files/uploads/gnucash_newaccountsetup.png (GnuCash new account setup)
[6]: https://opensource.com/sites/default/files/uploads/gnucash_importdata.png (GnuCash import data)
[7]: https://opensource.com/sites/default/files/uploads/gnucash_enter-invoice.png (GnuCash create invoice)
[8]: https://opensource.com/sites/default/files/uploads/gnucash_invoice.png (GnuCash invoice)
[9]: https://opensource.com/sites/default/files/uploads/gnucash_help.png (GnuCash help)
[10]: https://www.gnucash.org/features.phtml
[11]: https://www.gnucash.org/docs/v3/C/gnucash-help.pdf
[12]: https://wiki.gnucash.org/wiki/GnuCash
[13]: https://github.com/Gnucash
[14]: https://wiki.gnucash.org/wiki/GnuCash#Getting_involved_in_the_GnuCash_project

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: (chai-yuan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Connect Fedora to your Android phone with GSConnect)
[#]: via: (https://fedoramagazine.org/connect-fedora-to-your-android-phone-with-gsconnect/)
[#]: author: (Lokesh Krishna https://fedoramagazine.org/author/lowkeyskywalker/)
使用 GSConnect 将 Android 手机连接到 Fedora 系统
======
![][1]
苹果和微软公司都提供了与移动设备集成交互的应用Fedora 则提供了类似的工具:**GSConnect**。它可以让你将你的安卓手机和你的 Fedora 桌面配对使用。请继续阅读,了解更多关于它是什么以及它是如何工作的信息。
### GSConnect 是什么?
GSConnect 是基于 KDE Connect 项目、为 GNOME 桌面定制的程序。KDE Connect 使你的设备之间能够互相通信。但是,在 Fedora 的默认 GNOME 桌面上安装它需要安装大量的 KDE 依赖。
GSConnect 是 KDE Connect 的 GNOME Shell 扩展实现。安装后GSConnect 允许你执行以下操作:
* 在电脑上接收电话通知并回复信息
* 用手机操纵你的桌面
* 在不同设备之间分享文件与链接
* 在电脑上查看手机电量
* 让手机响铃以便你能找到它
### 设置 GSConnect 扩展
设置 GSConnect 需要安装两款软件:电脑上的 GSConnect 扩展和 Android 设备上的 KDE Connect 应用。
首先,从 GNOME Shell 扩展网站安装 [GSConnect][2] 扩展。Fedora Magazine 有一篇关于[如何安装 GNOME Shell 扩展][3]的文章,可以帮助你完成这一步。)
KDE Connect 应用程序可以在 Google 的 [Play Store][4] 上找到,也可以在 FOSS Android 应用库 [F-Droid][5] 上找到。
安装好这两个组件后就可以配对两个设备了。安装扩展后它会在系统菜单中显示为“移动设备Mobile Devices。单击它会出现一个下拉菜单你可以从中访问“移动设置Mobile Settings”。
![][6]
在这里,你可以用 GSConnect 查看并管理已配对的设备。进入此界面后,需要在 Android 设备上启动应用程序。
配对的初始化可以在任意一台设备上进行,这里我们从 Android 设备连接到电脑。点击应用程序上的“刷新refresh只要两个设备都在同一个无线网络中你的 Android 设备便可以搜索到你的电脑。现在可以向桌面发送配对请求,并在桌面上接受配对请求以完成配对。
![][7]
### 使用 GSConnect
配对后,你需要授予对 Android 设备的权限,才能使用 GSConnect 提供的许多功能。单击设备列表中的已配对设备,便可以查看所有可用功能,并根据你的偏好和需要启用或禁用它们。
![][8]
请记住,你还需要在 Android 应用程序中授予相应的权限才能使用这些功能。启用权限后,你就可以在桌面上访问手机联系人、获得消息通知并回复消息,甚至同步桌面和 Android 设备的剪贴板。
### 与文件系统和浏览器的集成
GSConnect 允许你直接从桌面的文件管理器向 Android 设备发送文件。
在 Fedora 的默认 GNOME 桌面上,你需要安装 _nautilus-python_ 依赖包,以便在菜单中显示已配对的设备。安装它只需要在终端中输入:
```
$ sudo dnf install nautilus-python
```
完成后,将在“文件(Files)”的菜单中显示“发送到移动设备(Send to Mobile Device)”选项。
![][9]
同样,为你的浏览器安装相应的 WebExtension无论是 [Firefox][10] 还是 [Chrome][11],都可以将链接发送到你的 Android 设备。你可以选择让链接直接在设备的浏览器中打开,或将其作为短信发送。
### 运行命令
GSConnect 允许你定义命令,然后从远程设备在电脑上运行这些命令。这使得你可以远程截屏,或者从你的 Android 设备锁定和解锁你的桌面。
![][12]
要使用此功能,可以使用标准的 shell 命令和 GSConnect 公开的 CLI。项目的 GitHub 存储库中提供了有关此操作的文档(_CLI Scripting_)。
[KDE UserBase Wiki][13] 有一个命令示例列表,其中包括控制桌面的亮度和音量、锁定鼠标和键盘,甚至更改桌面主题。其中一些命令是针对 KDE Plasma 设计的,需要修改才能在 GNOME 桌面上运行。
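例如,下面这条标准的 shell 命令就可以在 GSConnect 的“命令”设置中注册,用来从手机锁定 GNOME 桌面(一个基于 systemd-logind 的补充示例,并非出自原文):
```
# 锁定当前登录会话(由 systemd-logind 管理)
loginctl lock-session
```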
### 探索并享受乐趣
GSConnect 能带来极大的便利和舒适。深入研究首选项,查看你可以做的所有事情,灵活地使用这些命令功能,并在下面的评论中自由分享你解锁的新玩法。
* * *
_Photo by [Pathum Danthanarayana][14] on [Unsplash][15]._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/connect-fedora-to-your-android-phone-with-gsconnect/
作者:[Lokesh Krishna][a]
选题:[lujun9972][b]
译者:[chai-yuan](https://github.com/chai-yuan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/lowkeyskywalker/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/12/gsconnect-816x345.jpg
[2]: https://extensions.gnome.org/extension/1319/gsconnect/
[3]: https://fedoramagazine.org/install-gnome-shell-extension/
[4]: https://play.google.com/store/apps/details?id=org.kde.kdeconnect_tp
[5]: https://f-droid.org/en/packages/org.kde.kdeconnect_tp/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/within-the-menu-1024x576.png
[7]: https://fedoramagazine.org/wp-content/uploads/2020/01/pair-request-1024x576.png
[8]: https://fedoramagazine.org/wp-content/uploads/2020/01/permissions-1024x576.png
[9]: https://fedoramagazine.org/wp-content/uploads/2020/01/send-to-mobile-2-1024x576.png
[10]: https://addons.mozilla.org/en-US/firefox/addon/gsconnect/
[11]: https://chrome.google.com/webstore/detail/gsconnect/jfnifeihccihocjbfcfhicmmgpjicaec
[12]: https://fedoramagazine.org/wp-content/uploads/2020/01/commands-1024x576.png
[13]: https://userbase.kde.org/KDE_Connect/Tutorials/Useful_commands
[14]: https://unsplash.com/@pathum_danthanarayana?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[15]: https://unsplash.com/s/photos/android?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Install All Essential Media Codecs in Ubuntu With This Single Command [Beginners Tip])
[#]: via: (https://itsfoss.com/install-media-codecs-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
使用此单条命令在 Ubuntu 中安装所有基本媒体编解码器(初学者技巧)
======
如果你刚刚安装了 Ubuntu 或其他 [Ubuntu 特色版本][1](如 Kubuntu、Lubuntu 等),你会注意到系统无法播放某些音频或视频文件。
对于视频文件,你可以[在 Ubuntu 上安装 VLC][2]。 [VLC][3] 是 [Linux 上的最佳视频播放器][4]之一,它几乎可以播放任何视频文件格式。但你仍然会遇到无法播放音频和 flash 的麻烦。
好消息是 [Ubuntu][5] 提供了一个软件包来安装所有基本的媒体编解码器ubuntu-restricted-extras。
![][6]
### 什么是 Ubuntu Restricted Extras
ubuntu-restricted-extras 是一个软件包,包含各种基本软件,如 Flash 插件、[unrar][7]、[gstreamer][8]、mp4 支持、用于 [Ubuntu 中的 Chromium 浏览器][9]的编解码器等。
由于这些软件不是开源软件,并且其中一些涉及软件专利,因此 Ubuntu 默认情况下不会安装它们。你必须使用 multiverse 仓库,它是 Ubuntu 专门为用户提供非开源软件而创建的仓库。
请阅读本文以[了解有关各种 Ubuntu 仓库的更多信息][10]。
### 如何安装 Ubuntu Restricted Extras
令我惊讶的是,我发现软件中心未列出 Ubuntu Restricted Extras。不管怎样你都可以使用命令行安装该软件包这非常简单。
在菜单中搜索或使用[终端键盘快捷键 Ctrl+Alt+T][11] 打开终端。
由于 ubuntu-restricted-extras 软件包在 multiverse 仓库中,因此你应验证系统上已启用 multiverse 仓库:
```
sudo add-apt-repository multiverse
```
然后你可以使用以下命令安装:
```
sudo apt install ubuntu-restricted-extras
```
按下回车后,你会被要求输入密码。_**当你输入密码时,屏幕上不会有任何显示**_,这是正常的。输入你的密码并按回车。
它将显示大量要安装的包。按回车确认选择。
你会看到 [EULA][12](最终用户许可协议),如下所示:
![Press Tab key to select OK and press Enter key][13]
浏览此页面可能会很麻烦,但是请放心。只需按 Tab 键,它将高亮选项。当高亮在正确的选项上,按下回车确认你的选择。
![Press Tab key to highlight Yes and press Enter key][14]
安装完成后,由于新安装的媒体编解码器,你应该可以播放 MP3 和其他媒体格式了。
##### 在 Kubuntu、Lubuntu、Xubuntu 上安装受限制的额外软件包
请记住Kubuntu、Lubuntu 和 Xubuntu 都有此软件包,并有各自的名称。它们本应使用相同的名字,但不幸的是并不是。
在 Kubuntu 上,使用以下命令:
```
sudo apt install kubuntu-restricted-extras
```
在 Lubuntu 上,使用:
```
sudo apt install lubuntu-restricted-extras
```
在 Xubuntu 上,你应该使用:
```
sudo apt install xubuntu-restricted-extras
```
我一直建议将 ubuntu-restricted-extras 作为[安装 Ubuntu 后要做的基本事情][15]之一。只需一个命令即可在 Ubuntu 中安装多个编解码器。
希望你喜欢 Ubuntu 初学者系列中这一技巧。以后,我将分享更多此类技巧。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-media-codecs-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/which-ubuntu-install/
[2]: https://itsfoss.com/install-latest-vlc/
[3]: https://www.videolan.org/index.html
[4]: https://itsfoss.com/video-players-linux/
[5]: https://ubuntu.com/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/Media_Codecs_in_Ubuntu.png?ssl=1
[7]: https://itsfoss.com/use-rar-ubuntu-linux/
[8]: https://gstreamer.freedesktop.org/
[9]: https://itsfoss.com/install-chromium-ubuntu/
[10]: https://itsfoss.com/ubuntu-repositories/
[11]: https://itsfoss.com/ubuntu-shortcuts/
[12]: https://en.wikipedia.org/wiki/End-user_license_agreement
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras.jpg?ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras_1.jpg?ssl=1
[15]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/

View File

@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Change the Default Terminal in Ubuntu)
[#]: via: (https://itsfoss.com/change-default-terminal-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
如何在 Ubuntu 中更改默认终端
======
终端是 Linux 系统的关键部分。它能让你通过 shell 访问 Linux 系统。Linux 上有多个终端应用(技术上称为终端仿真器)。
大多数[桌面环境][1]都有自己的终端实现。它们的外观可能有所不同,并且可能有不同的快捷键。
例如,[Guake 终端][2]对高级用户非常有用,它提供了一些可能无法在发行版默认终端中使用的功能。
你可以在系统上安装其他终端,并将其设为默认,这样就可以通过[快捷键 Ctrl+Alt+T][3] 打开它。
现在问题来了:更改默认终端并没有遵循[更改 Ubuntu 中的默认应用][4]的标准方式,那该怎么做呢?
### 更改 Ubuntu 中的默认终端
![][5]
在基于 Debian 的发行版中,有一个方便的命令行程序,称为 [update-alternatives][6],可用于处理默认应用。
你可以使用它来更改默认的命令行文本编辑器、终端等。为此,请运行以下命令:
```
sudo update-alternatives --config x-terminal-emulator
```
它将显示系统上存在的所有可作为默认值的终端仿真器。当前的默认终端标有星号。
```
$ sudo update-alternatives --config x-terminal-emulator
There are 2 choices for the alternative x-terminal-emulator (providing /usr/bin/x-terminal-emulator).
Selection Path Priority Status
------------------------------------------------------------
0 /usr/bin/gnome-terminal.wrapper 40 auto mode
1 /usr/bin/gnome-terminal.wrapper 40 manual mode
* 2 /usr/bin/st 15 manual mode
Press <enter> to keep the current choice[*], or type selection number:
```
你要做的就是输入选择编号。对我而言,我想使用 GNOME 终端,而不是来自 [Regolith 桌面][7]的终端。
```
Press <enter> to keep the current choice[*], or type selection number: 1
update-alternatives: using /usr/bin/gnome-terminal.wrapper to provide /usr/bin/x-terminal-emulator (x-terminal-emulator) in manual mode
```
##### 自动模式 vs 手动模式
你可能已经在 update-alternatives 命令的输出中注意到了自动模式和手动模式。
如果选择自动模式,那么在安装或删除软件包时,系统可能会自动决定默认应用。该决定受优先级数字的影响(如上一节中的命令输出所示)。
假设你的系统上安装了 5 个终端仿真器,并删除了默认的仿真器。现在,你的系统将检查哪些仿真器处于自动模式。如果有多个,它将选择优先级最高的一个作为默认仿真器。
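作为参考,注册一个新的终端仿真器候选项并指定优先级的用法如下(其中 alacritty 及其路径只是假设的示例):
```
# 语法update-alternatives --install <链接> <名称> <路径> <优先级>
sudo update-alternatives --install /usr/bin/x-terminal-emulator x-terminal-emulator /usr/bin/alacritty 50
```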
我希望你觉得这个小技巧有用。随时欢迎提出问题和建议。
--------------------------------------------------------------------------------
via: https://itsfoss.com/change-default-terminal-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-desktop-environments/
[2]: http://guake-project.org/
[3]: https://itsfoss.com/ubuntu-shortcuts/
[4]: https://itsfoss.com/change-default-applications-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/switch_default_terminal_ubuntu.png?ssl=1
[6]: https://manpages.ubuntu.com/manpages/trusty/man8/update-alternatives.8.html
[7]: https://itsfoss.com/regolith-linux-desktop/