mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-03-12 01:40:10 +08:00
commit
3d72ed78e7
144
published/20181123 Three SSH GUI Tools for Linux.md
Normal file
@ -0,0 +1,144 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: subject: (Three SSH GUI Tools for Linux)
[#]: via: (https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
[#]: url: (https://linux.cn/article-10559-1.html)
3 个 Linux 上的 SSH 图形界面工具
======

> 了解一下这三个用于 Linux 上的 SSH 图形界面工具。

![](https://img.linux.net.cn/data/attachment/album/201902/23/104824paau9kgaeuee0uor.jpg)
在你担任 Linux 管理员的职业生涯中,你会使用 Secure Shell(SSH)远程连接到 Linux 服务器或桌面。可能在某些情况下,你会同时 SSH 连接到多个 Linux 服务器。实际上,SSH 可能是 Linux 工具箱中最常用的工具之一。因此,你应该尽可能高效地使用它。对于许多管理员来说,没有什么比命令行更高效了。但是,有些用户更喜欢使用图形界面工具,尤其是在从桌面连接到远程服务器并在其上工作时。

如果你碰巧喜欢好的图形界面工具,你肯定乐于了解一些 Linux 上优秀的 SSH 图形界面工具。让我们来看看下面这三个工具,看看它们中的一个(或多个)是否符合你的需求。

我将在 [Elementary OS][1] 上演示这些工具,但它们都可用于大多数主要发行版。

### PuTTY
已经有一些经验的人都知道 [PuTTY][2]。实际上,从 Windows 环境通过 SSH 连接到 Linux 服务器时,PuTTY 是事实上的标准工具。但 PuTTY 不仅适用于 Windows。事实上,通过标准软件库,PuTTY 也可以安装在 Linux 上。PuTTY 的功能列表包括:

* 保存会话。
* 通过 IP 或主机名连接。
* 使用替代的 SSH 端口。
* 定义连接类型。
* 日志。
* 设置键盘、响铃、外观、连接等等。
* 配置本地和远程隧道。
* 支持代理。
* 支持 X11 隧道。
PuTTY 图形工具主要是一种保存 SSH 会话的方法,因此可以更轻松地管理所有需要不断远程进出的各种 Linux 服务器和桌面。连接成功后,PuTTY 就会打开一个到 Linux 服务器的连接窗口,你将可以在其中工作。此时,你可能会有疑问:为什么不直接在终端窗口工作呢?对于一些人来说,保存会话的便利确实使 PuTTY 值得使用。

在 Linux 上安装 PuTTY 很简单。例如,你可以在基于 Debian 的发行版上运行命令:

```
sudo apt-get install -y putty
```
安装后,你可以从桌面菜单运行 PuTTY 图形工具,或运行命令 `putty`。在 PuTTY “Configuration” 窗口(图 1)中,在 “Host Name (or IP address)” 部分键入主机名或 IP 地址,配置 “Port”(如果不是默认值 22),从 “Connection type” 中选择 SSH,然后单击 “Open”。

![PuTTY Connection][4]

*图 1:PuTTY 连接配置窗口*
建立连接后,系统将提示你输入远程服务器上的用户凭据(图 2)。

![log in][7]

*图 2:使用 PuTTY 登录到远程服务器*
要保存会话(以便你不必每次都键入远程服务器信息),请填写主机名(或 IP 地址)、端口和连接类型,然后(在单击 “Open” 之前)在 “Saved Sessions” 部分顶部的文本区域中键入会话名称,然后单击 “Save”。这将保存该会话的配置。若要连接到已保存的会话,请从 “Saved Sessions” 列表中选择它,单击 “Load”,然后单击 “Open”。系统会提示你输入远程服务器上的凭据。
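顺带一提,在 Linux 上,PuTTY 通常把已保存的会话以单个文件的形式存放在 `~/.putty/sessions` 目录下,文件名使用 URL 风格的百分号编码。下面是一个列出已保存会话名的 Python 小示例(目录位置与编码方式均为基于上述说明的假设,仅作演示,并非 PuTTY 自带的功能):

```python
from pathlib import Path
from urllib.parse import unquote

def list_putty_sessions(base):
    """列出 base 目录下保存的 PuTTY 会话名(磁盘上的文件名为百分号编码)。"""
    base = Path(base)
    if not base.is_dir():
        return []
    return sorted(unquote(p.name) for p in base.iterdir() if p.is_file())

# 假设的默认位置:~/.putty/sessions
print(list_putty_sessions(Path.home() / ".putty" / "sessions"))
```

如果该目录不存在(比如你从未保存过会话),它只会返回一个空列表。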
### EasySSH

虽然 [EasySSH][8] 没有提供像 PuTTY 那么多的配置选项,但它(顾名思义)非常容易使用。EasySSH 最好的功能之一是它提供了一个标签式界面,因此你可以打开多个 SSH 连接并在它们之间快速切换。EasySSH 的其他功能包括:

* 分组(可以对标签进行分组,以提高使用效率)。
* 保存用户名、密码。
* 外观选项。
* 支持本地和远程隧道。
在 Linux 桌面上安装 EasySSH 很简单,因为它可以通过 Flatpak 安装(这意味着你必须先在系统上安装 Flatpak)。安装 Flatpak 后,使用以下命令添加并安装 EasySSH:

```
sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
sudo flatpak install flathub com.github.muriloventuroso.easyssh
```

用如下命令运行 EasySSH:

```
flatpak run com.github.muriloventuroso.easyssh
```
EasySSH 应用程序将会打开,你可以单击左上角的 “+” 按钮。在打开的窗口(图 3)中,根据需要配置 SSH 连接。

![Adding a connection][10]

*图 3:在 EasySSH 中添加连接很简单*

添加连接后,它将显示在主窗口的左侧导航栏中(图 4)。

![EasySSH][12]

*图 4:EasySSH 主窗口*

要在 EasySSH 中连接到远程服务器,请从左侧导航栏中选择它,然后单击 “Connect” 按钮(图 5)。

![Connecting][14]

*图 5:用 EasySSH 连接到远程服务器*
使用 EasySSH 有一个需要注意的地方:你必须将用户名和密码保存在连接配置中(否则连接将失败)。这意味着任何有权访问运行 EasySSH 的桌面的人,都可以在不知道密码的情况下远程访问你的服务器。因此,你必须始终记住在离开时锁定桌面屏幕(并确保使用强密码)。否则,服务器就容易受到意外登录的影响。
### Terminator

(LCTT 译注:这个选择不符合本文主题,本节删节)

### Termius

(LCTT 译注:本节是根据网友推荐补充的)
Termius 是一个商业版的 SSH、Telnet 和 Mosh 客户端,不是开源软件。它支持包括 [Linux](https://www.termius.com/linux)、Windows、Mac、iOS 和安卓在内的各种操作系统。对于单一设备它是免费的,而支持多设备的白金账号需要按月付费。
### 选择虽少,但值得一试

Linux 上可用的 SSH 图形界面工具并不多。为什么?因为大多数管理员更喜欢直接打开终端窗口,使用标准命令行工具来远程访问服务器。但是,如果你需要图形界面工具,上面就有几个可靠的选择,可以让登录多台计算机更轻松。虽然对于寻找 SSH 图形界面工具的人来说选择不多,但可用的这些工具当然值得你花时间。挑一个试试,亲自体验一下吧。
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux

作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://elementary.io/
[2]: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
[3]: https://www.linux.com/files/images/sshguis1jpg
[4]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_1.jpg?itok=DiNTz_wO (PuTTY Connection)
[5]: https://www.linux.com/licenses/category/used-permission
[6]: https://www.linux.com/files/images/sshguis2jpg
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_2.jpg?itok=4ORsJlz3 (log in)
[8]: https://github.com/muriloventuroso/easyssh
[9]: https://www.linux.com/files/images/sshguis3jpg
[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_3.jpg?itok=bHC2zlda (Adding a connection)
[11]: https://www.linux.com/files/images/sshguis4jpg
[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_4.jpg?itok=hhJzhRIg (EasySSH)
[13]: https://www.linux.com/files/images/sshguis5jpg
[14]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_5.jpg?itok=piFEFYTQ (Connecting)
[15]: https://www.linux.com/files/images/sshguis6jpg
[16]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_6.jpg?itok=-kYl6iSE (Terminator)
@ -1,36 +1,38 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10557-1.html)
|
||||
[#]: subject: (Get started with Go For It, a flexible to-do list application)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-go-for-it)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
开始使用 Go For It,一个灵活的待办事项列表程序
开始使用 Go For It 吧,一个灵活的待办事项列表程序
======
Go For It,是我们开源工具系列中的第十个工具,它将使你在 2019 年更高效,它在 Todo.txt 系统的基础上构建,以帮助你完成更多工作。

> Go For It,是我们开源工具系列中的第十个工具,它将使你在 2019 年更高效,它在 Todo.txt 系统的基础上构建,以帮助你完成更多工作。

![](https://img.linux.net.cn/data/attachment/album/201902/23/120527zdrr8bmrrbwbbwub.jpg)
每年年初似乎都有疯狂的冲动,想方设法提高工作效率。新年的决议,开始一年的权利,当然,“与旧的,与新的”的态度都有助于实现这一目标。通常的一轮建议严重偏向封闭源和专有软件。它不一定是这样。
每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。

这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第 10 个工具来帮助你在 2019 年更有效率。

### Go For It

有时,人们要高效率需要的不是一个花哨的看板或一组笔记,而是一个简单,直接的待办事项清单。像“将项目添加到列表中,在完成后检查”一样基本的东西。为此,[纯文本 Todo.txt 系统][1]可能是最容易使用的系统之一,几乎所有系统都支持它。
有时,人们要高效率需要的不是一个花哨的看板或一组笔记,而是一个简单、直接的待办事项清单。像“将项目添加到列表中,在完成后检查”一样基本的东西。为此,[纯文本 Todo.txt 系统][1]可能是最容易使用的系统之一,几乎所有系统都支持它。
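补充一点:Todo.txt 的一行任务通常形如 `(A) Call Mom +Family @phone`,其中行首的 `(A)` 是优先级,`+` 开头的词是项目,`@` 开头的词是上下文,已完成的任务以 `x ` 开头。下面用几行 Python 演示按该格式解析一行任务(仅作格式示意,并非 Go For It 的实现,字段名均为假设):

```python
import re

def parse_todo(line):
    """按 Todo.txt 格式解析一行任务:完成标记、优先级、项目(+)与上下文(@)。"""
    task = {"priority": None, "projects": [], "contexts": [], "done": False}
    if line.startswith("x "):            # 以 "x " 开头表示已完成
        task["done"] = True
        line = line[2:]
    m = re.match(r"\(([A-Z])\) ", line)  # 行首的 (A) 是优先级
    if m:
        task["priority"] = m.group(1)
        line = line[m.end():]
    task["projects"] = re.findall(r"(?:^|\s)\+(\S+)", line)
    task["contexts"] = re.findall(r"(?:^|\s)@(\S+)", line)
    task["text"] = line.strip()
    return task
```

正因为格式如此简单,任何文本编辑器乃至几行脚本都能处理它,这也是几乎所有平台都支持 Todo.txt 的原因。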

|
||||
|
||||
[Go For It][2] 是一个简单易用的 Todo.txt 图形界面。如果你已经在使用 Todo.txt,它可以与现有文件一起使用,如果还没有,那么可以同时创建待办事项和完成事项。它允许拖放任务排序,允许用户按照他们想要执行的顺序组织待办事项。它还支持 [Todo.txt 格式指南][3]中所述的优先级,项目和上下文。而且,只需单击任务列表中的项目或者上下文就可通过它们过滤任务。
|
||||
[Go For It][2] 是一个简单易用的 Todo.txt 图形界面。如果你已经在使用 Todo.txt,它可以与现有文件一起使用,如果还没有,那么可以同时创建待办事项和完成事项。它允许拖放任务排序,允许用户按照他们想要执行的顺序组织待办事项。它还支持 [Todo.txt 格式指南][3]中所述的优先级、项目和上下文。而且,只需单击任务列表中的项目或者上下文就可通过它们过滤任务。
|
||||
|
||||

|
||||
|
||||
一开始,Go For It 可能看起来与任何其他 Todo.txt 程序相同,但外观可能是骗人的。将 Go For It 与其他真正区分开的功能是它包含一个内置的[番茄工作法][4]计时器。选择要完成的任务,切换到“计时器”选项卡,然后单击“启动”。任务完成后,只需单击“完成”,它将自动重置计时器并选择列表中的下一个任务。你可以暂停并重新启动计时器,也可以单击“跳过”跳转到下一个任务(或中断)。当当前任务剩余 60 秒时,它会发出警告。任务的默认时间设置为25分钟,中断的默认时间设置为五分钟。你可以在“设置”页面中调整,同时还能调整 Todo.txt 和 done.txt 文件的目录的位置。
|
||||
一开始,Go For It 可能看起来与任何其他 Todo.txt 程序相同,但外观可能是骗人的。将 Go For It 与其他程序真正区分开的功能是它包含一个内置的[番茄工作法][4]计时器。选择要完成的任务,切换到“计时器”选项卡,然后单击“启动”。任务完成后,只需单击“完成”,它将自动重置计时器并选择列表中的下一个任务。你可以暂停并重新启动计时器,也可以单击“跳过”跳转到下一个任务(或中断)。在当前任务剩余 60 秒时,它会发出警告。任务的默认时间设置为 25 分钟,中断的默认时间设置为 5 分钟。你可以在“设置”页面中调整,同时还能调整 Todo.txt 和 done.txt 文件的目录的位置。
|
||||
|
||||

|
||||
|
||||
Go For It 的第三个选项卡“已完成”,允许你查看已完成的任务并在需要时将其清除。能够看到你已经完成的可能是非常激励的,也是一种了解你在更长的过程中进度的好方法。
|
||||
Go For It 的第三个选项卡是“已完成”,允许你查看已完成的任务并在需要时将其清除。能够看到你已经完成的可能是非常激励的,也是一种了解你在更长的过程中进度的好方法。
|
||||
|
||||

|
||||
|
||||
@ -45,7 +47,7 @@ via: https://opensource.com/article/19/1/productivity-tool-go-for-it

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
111
published/20190201 Top 5 Linux Distributions for New Users.md
Normal file
@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10553-1.html)
[#]: subject: (Top 5 Linux Distributions for New Users)
[#]: via: (https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
5 个面向新手的 Linux 发行版
======

> 这 5 个发行版可以让新用户有宾至如归的感觉。

![](https://img.linux.net.cn/data/attachment/album/201902/22/230053nrcajjw2cccr27j7.jpg)

从最初的 Linux 到现在,Linux 已经发展了很长一段路。但是,无论你曾经多少次听说过现在使用 Linux 有多容易,仍然会有表示怀疑的人。而要真正支撑得起这个说法,桌面必须足够简单,让不熟悉 Linux 的人也能够使用它。事实上,大量的桌面发行版已经使这成为了现实。
### 无需 Linux 知识

很容易把这个清单误解为又一个“最佳用户友好型 Linux 发行版”的清单。但这不是我们这里要说的。这二者之间有什么不同?就我的目的而言,其界限在于 Linux 是否真正妨碍了使用。换句话说,你是否可以将这个桌面操作系统放在一个用户面前,让他们应用自如,而无需懂得任何 Linux 知识?

不管你相信与否,有些发行版就能做到。这里我将介绍 5 个这样的发行版。这些发行版或许你全都听说过。它们或许不是你心目中的首选,但可以向你保证,它们无需用户过多关注底层系统,而是把用户体验放在首位。

我们来看看选中的这几个。
### Elementary OS

[Elementary OS](https://elementary.io/) 的理念主要围绕人们如何实际使用他们的桌面。开发人员和设计人员不遗余力地创建尽可能简单的桌面。在这个过程中,他们致力于让 Linux 不那么像 Linux。这并不是说他们把 Linux 从这个等式中去掉了。不,恰恰相反,他们所做的是创建一个你能找到的最中立的操作系统。Elementary OS 非常流畅,一切都恰到好处。从单个 Dock 到对每个人都清晰明了的应用程序菜单,这是一个不会提醒用户“你正在使用 Linux!”的桌面。事实上,其布局本身就让人联想到 Mac,但附加了一个简单的应用程序菜单(图 1)。

![Elementary OS Juno][2]

*图 1:Elementary OS Juno 应用菜单*

将 Elementary OS 放在此列表中的另一个重要原因是它不像其他桌面发行版那样灵活。当然,有些用户会对此不以为然,但是如果桌面没有向用户抛出各种花哨的定制诱惑,那么就会形成一个非常熟悉的环境:一个既不需要也不允许大量修修补补的环境。在让新用户熟悉该平台这一方面,这一点大有帮助。

与任何现代 Linux 桌面发行版一样,Elementary OS 包括了一个应用商店,称为 AppCenter,用户可以在其中安装所需的所有应用程序,而无需接触命令行。
### 深度操作系统

[深度操作系统](https://www.deepin.org/)不仅赢得了“市场上最漂亮的桌面之一”的赞誉,它也像任何桌面操作系统一样容易上手。其桌面界面非常简单,对于毫无 Linux 经验的用户来说,上手非常快。事实上,你很难找到无法立即上手使用深度桌面的用户。这里唯一可能的障碍是其侧边栏控制中心(图 2)。

![][5]

*图 2:深度操作系统的侧边栏控制中心*

但即使是侧边栏控制中心,也像市场上的任何其他配置工具一样直观。任何使用过移动设备的人对于这种布局都会很熟悉。至于打开应用程序,深度操作系统的启动器采用了 macOS Launchpad 的方式。该按钮通常位于桌面底座最右侧的位置,因此用户马上就能会意,知道它可能类似于标准的“开始”菜单。

与 Elementary OS(以及市场上大多数 Linux 发行版)类似,深度操作系统也包含一个应用程序商店(简称为“商店”),可以轻松安装大量应用程序。
### Ubuntu

你知道它肯定会上榜。[Ubuntu](https://www.ubuntu.com/) 通常在大多数“用户友好的 Linux 发行版”榜单中占据首位,因为它是少数几个不需要懂得 Linux 就能使用的桌面之一。但在采用 GNOME(Unity 谢幕)之前,情况并非如此,因为 Unity 经常需要进行一些调整,才能达到一点 Linux 知识都不需要的程度(图 3)。现在 Ubuntu 已经采用了 GNOME,并将其调整到甚至不需要懂得 GNOME 的程度,简单性和可用性对这个桌面来说不再是问题。

![Ubuntu 18.04][7]

*图 3:Ubuntu 18.04 桌面让人马上熟悉起来*

与 Elementary OS 不同,Ubuntu 对用户毫无限制。因此,任何想从桌面获得更多功能的人都可以如愿。但是,其开箱即用的体验对于任何类型的用户都已足够。如果有哪个桌面能让用户在感受不到任何负担的情况下拥有触手可及的强大能力,那肯定就是 Ubuntu 了。
### Linux Mint

我需要首先声明,我从来都不是 [Linux Mint](https://linuxmint.com/) 的忠实粉丝。但这并不是说我不尊重开发者的工作,而更多的是一种审美观点:我更喜欢现代化的桌面环境。但是,老式的计算机桌面隐喻(可以在默认的 Cinnamon 桌面中找到)可以让几乎每个使用它的人都倍感熟悉。Linux Mint 使用任务栏、开始按钮、系统托盘和桌面图标(图 4),提供了一个零学习曲线的界面。事实上,一些用户最初甚至可能误以为他们正在使用 Windows 7 的克隆版。就连它的更新警告图标也会让用户感到非常熟悉。

![Linux Mint][9]

*图 4:Linux Mint 的 Cinnamon 桌面非常像 Windows 7*

因为 Linux Mint 受益于其所基于的 Ubuntu,它不仅会让你马上熟悉起来,而且具有很高的可用性。无论是否了解底层平台,用户都会立即有宾至如归的感觉。
### Ubuntu Budgie

我们的列表以这样一个发行版结束:它也能让用户忘记他们正在使用 Linux,并且使常用工具变得简单、美观。Ubuntu 与 Budgie 桌面的融合构成了一个令人印象深刻的易用发行版。虽然其桌面布局(图 5)可能不太常见,但毫无疑问,适应这个环境不需要花费多少时间。实际上,除了 Dock 默认位于桌面左侧,[Ubuntu Budgie](https://ubuntubudgie.org/) 看起来确实很像 Elementary OS。

![Budgie][11]

*图 5:Budgie 桌面既漂亮又简单*

Ubuntu Budgie 中的系统托盘/通知区域提供了一些不太多见的功能,比如:快速访问 Caffeine(一种保持桌面清醒的工具)、快速笔记工具(用于记录简单笔记)、Night Lite 开关、“位置”下拉菜单(用于快速访问文件夹),当然还有 Raven 小程序/通知侧边栏(与深度操作系统中的控制中心侧边栏类似,但不太优雅)。Budgie 还包括一个应用程序菜单(左上角),用户可以从中访问所有已安装的应用程序。打开一个应用程序后,其图标将出现在 Dock 中。右键单击该应用程序图标,然后选择“保留在 Dock”以便更快地访问。

Ubuntu Budgie 的一切都很直观,所以几乎没有学习曲线。这个发行版既优雅又易于使用,好得不能再好了。
### 选择一个吧

以上介绍了 5 个 Linux 发行版,它们各自以自己的方式提供了让任何用户都能马上熟悉的桌面体验。虽然这些可能不是你心目中的顶级发行版,但对于那些不熟悉 Linux 的用户来说,却不能否定它们的价值。
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users

作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/files/images/elementaryosjpg-2
[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaryos_0.jpg?itok=KxgNUvMW (Elementary OS Juno)
[3]: https://www.linux.com/licenses/category/used-permission
[4]: https://www.linux.com/files/images/deepinjpg
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/deepin.jpg?itok=VV381a9f
[6]: https://www.linux.com/files/images/ubuntujpg-1
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_1.jpg?itok=bax-_Tsg (Ubuntu 18.04)
[8]: https://www.linux.com/files/images/linuxmintjpg
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linuxmint.jpg?itok=8sPon0Cq (Linux Mint )
[10]: https://www.linux.com/files/images/budgiejpg-0
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/budgie_0.jpg?itok=zcf-AHmj (Budgie)
[12]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -1,25 +1,24 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10554-1.html)
|
||||
[#]: subject: (4 cool new projects to try in COPR for February 2019)
|
||||
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-february-2019/)
|
||||
[#]: author: (Dominik Turecek https://fedoramagazine.org)
|
||||
|
||||
2019 年 2 月 COPR 中 4 个值得尝试很酷的新项目
COPR 仓库中 4 个很酷的新软件(2019.2)
======

![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)

COPR 是个人软件仓库的[集合][1],它们不在 Fedora 中。这是因为某些软件不符合轻松打包的标准,或者尽管它是自由而开源的,但可能不符合其他 Fedora 标准。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施支持,也没有经过该项目签名。不过,这是一种尝试新软件或实验性软件的巧妙方式。

这是 COPR 中一组新的有趣项目。
### CryFS

[CryFS][2] 是一个加密文件系统。它设计用于云存储,主要用于 Dropbox,尽管它也可以与其他存储提供商一起使用。CryFS 不仅加密文件系统中的文件,还会加密元数据、文件大小和目录结构。
[CryFS][2] 是一个加密文件系统。它设计与云存储一同使用,主要是 Dropbox,尽管它也可以与其他存储提供商一起使用。CryFS 不仅加密文件系统中的文件,还会加密元数据、文件大小和目录结构。

#### 安装说明

@ -32,13 +31,13 @@ sudo dnf install cryfs
### Cheat

[Cheat][3] 是一个用于在命令行中查看各种备忘录的工具,用来提醒仅偶尔使用的程序的使用方法。对于许多 Linux 程序,cheat 提供了来自手册页的压缩后的信息,主要关注最常用的示例。除了内置的备忘录,cheat 允许你编辑现有的备忘录或从头开始创建新的备忘录。
[Cheat][3] 是一个用于在命令行中查看各种备忘录的工具,用来提醒仅偶尔使用的程序的使用方法。对于许多 Linux 程序,`cheat` 提供了来自手册页的精简后的信息,主要关注最常用的示例。除了内置的备忘录,`cheat` 允许你编辑现有的备忘录或从头开始创建新的备忘录。

![][4]

#### 安装说明

仓库目前为 Fedora 28、29 和 Rawhide 以及 EPEL 7 提供 cheat。要安装 cheat,请使用以下命令:
仓库目前为 Fedora 28、29 和 Rawhide 以及 EPEL 7 提供 `cheat`。要安装 `cheat`,请使用以下命令:

```
sudo dnf copr enable tkorbar/cheat
@ -47,20 +46,20 @@ sudo dnf install cheat

### Setconf

[setconf][5] 是一个简单的程序,作为 sed 的替代方案,用于对配置文件进行更改。setconf 唯一能做的就是找到指定文件中的密钥并更改其值。setconf 仅提供一些选项来更改其行为 - 例如,取消更改行的注释。
[setconf][5] 是一个简单的程序,作为 `sed` 的替代方案,用于对配置文件进行更改。`setconf` 唯一能做的就是找到指定文件中的键并更改其值。`setconf` 仅提供很少的选项来更改其行为 - 例如,取消更改行的注释。

#### 安装说明

仓库目前为 Fedora 27、28 和 29 提供 setconf。要安装 setconf,请使用以下命令:
仓库目前为 Fedora 27、28 和 29 提供 `setconf`。要安装 `setconf`,请使用以下命令:

```
sudo dnf copr enable jamacku/setconf
sudo dnf install setconf
```
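`setconf` 这种“找到键并替换其值”的核心行为,可以用几行 Python 勾勒出来(仅为行为示意,并非 `setconf` 的实现;函数名与分隔符处理均为假设):

```python
def set_conf(text, key, value, sep="="):
    """在配置文本中找到 key 并把它的值改为 value;未找到则原样返回。"""
    out = []
    for line in text.splitlines():
        stripped = line.lstrip()
        # 仅匹配 "key<sep>..." 形式的行,保留原有缩进
        if stripped.startswith(key) and stripped[len(key):].lstrip().startswith(sep):
            indent = line[: len(line) - len(stripped)]
            out.append(f"{indent}{key}{sep}{value}")
        else:
            out.append(line)
    return "\n".join(out)
```

当然,真正的 `setconf` 还能处理注释行等细节,这里只演示最基本的键值替换思路。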
### Reddit Terminal Viewer
### Reddit 终端查看器

[Reddit Terminal Viewer][6],或称为 rtv,提供了从终端浏览 Reddit 的界面。它提供了 Reddit 的基本功能,因此你可以登录到你的帐户,查看 subreddits、评论、点赞和发现新主题。但是,rtv 目前不支持 Reddit 标签。
[Reddit 终端查看器][6],或称为 `rtv`,提供了从终端浏览 Reddit 的界面。它提供了 Reddit 的基本功能,因此你可以登录到你的帐户,查看 subreddits、评论、点赞和发现新主题。但是,`rtv` 目前不支持 Reddit 标签。

![][7]

@ -81,7 +80,7 @@ via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-february-
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (zhs852)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10552-1.html)
[#]: subject: (How To Install And Use PuTTY On Linux)
[#]: via: (https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)
@ -12,7 +12,7 @@
![](https://img.linux.net.cn/data/attachment/album/201902/23/003411vhmhefwqhuhbbhme.jpg)

PuTTY 是一个免费、开源且支持包括 SSH、Telnet 和 Rlogin 在内的多种协议的 GUI 客户端。一般来说,Windows 管理员们会把 PuTTY 当成 SSH 或 Telnet 客户端来在本地 Windows 系统和远程 Linux 服务器之间建立连接。不过,PuTTY 可不是 Windows 的独占软件。它在 Linux 用户之中也是很流行的。本篇文章将会告诉你如何在 Linux 中安装并使用 PuTTY。
PuTTY 是一个自由开源且支持包括 SSH、Telnet 和 Rlogin 在内的多种协议的 GUI 客户端。一般来说,Windows 管理员们会把 PuTTY 当成 SSH 或 Telnet 客户端来在本地 Windows 系统和远程 Linux 服务器之间建立连接。不过,PuTTY 可不是 Windows 的独占软件。它在 Linux 用户之中也是很流行的。本篇文章将会告诉你如何在 Linux 中安装并使用 PuTTY。

### 在 Linux 中安装 PuTTY

@ -60,11 +60,11 @@ PuTTY 的默认界面长这个样子:
### 使用 PuTTY 访问远程 Linux 服务器

请在左侧面板点击 **会话** 选项卡,输入远程主机名(或 IP 地址)。然后,请选择连接类型(比如 Telnet、Rlogin 以及 SSH 等)。根据你选择的连接类型,PuTTY 会自动选择对应连接类型的默认端口号(比如 SSH 是 22、Telnet 是 23),如果你修改了默认端口号,别忘了手动把它输入到 **端口** 里。在这里,我用 SSH 连接到远程主机。在输入所有信息后,请点击 **打开**。
请在左侧面板点击 “Session” 选项卡,输入远程主机名(或 IP 地址)。然后,请选择连接类型(比如 Telnet、Rlogin 以及 SSH 等)。根据你选择的连接类型,PuTTY 会自动选择对应连接类型的默认端口号(比如 SSH 是 22、Telnet 是 23),如果你修改了默认端口号,别忘了手动把它输入到 “Port” 里。在这里,我用 SSH 连接到远程主机。在输入所有信息后,请点击 “Open”。

![](https://www.ostechnix.com/wp-content/uploads/2019/02/putty-2.png)

如果这是你首次连接到这个远程主机,PuTTY 会显示一个安全警告,问你是否信任你连接到的远程主机。点击 **接受** 即可将远程主机的密钥加入 PuTTY 的储存当中:
如果这是你首次连接到这个远程主机,PuTTY 会显示一个安全警告,问你是否信任你连接到的远程主机。点击 “Accept” 即可将远程主机的密钥加入 PuTTY 的缓存当中:

![PuTTY 安全警告][2]

@ -74,63 +74,60 @@
#### 使用密钥验证访问远程主机

一些 Linux 管理员可能在服务器上配置了密钥认证。举个例子,在用 PuTTY 访问 AMS instances 的时候,你需要指定密钥文件的位置。PuTTY 可以使用它自己的格式(**.ppk** 文件)来进行公钥验证。
一些 Linux 管理员可能在服务器上配置了密钥认证。举个例子,在用 PuTTY 访问 AMS 实例的时候,你需要指定密钥文件的位置。PuTTY 可以使用它自己的格式(`.ppk` 文件)来进行公钥验证。

首先输入主机名或 IP。之后,在 **分类** 选项卡中,展开 **连接**,再展开 **SSH**,然后选择 **认证**,之后便可选择 **.ppk** 密钥文件了。
首先输入主机名或 IP。之后,在 “Category” 选项卡中,展开 “Connection”,再展开 “SSH”,然后选择 “Auth”,之后便可选择 `.ppk` 密钥文件了。

![][3]

点击接受来关闭安全提示。然后,输入远程主机的密码片段(如果密钥被密码片段保护)来建立连接。
点击 “Accept” 来关闭安全提示。然后,输入远程主机的密码(如果密钥被密码保护)来建立连接。

#### 保存 PuTTY 会话

有些时候,你可能需要多次连接到同一个远程主机。你可以保存这些会话,并在之后无需重新输入信息即可访问它们。

请输入主机名(或 IP 地址),并提供一个会话名称,然后点击 **保存**。如果你有密钥文件,请确保你在点击保存按钮之前指定它们。
请输入主机名(或 IP 地址),并提供一个会话名称,然后点击 “Save”。如果你有密钥文件,请确保你在点击 “Save” 按钮之前指定它们。

![][4]

现在,你可以通过选择 **已保存的会话**,然后点击 **Load**,再点击 **打开** 来启动连接。
现在,你可以通过选择 “Saved sessions”,然后点击 “Load”,再点击 “Open” 来启动连接。
#### 使用 <ruby>pscp<rt>PuTTY Secure Copy Client</rt></ruby> 来将文件传输到远程主机中
#### 使用 PuTTY 安全复制客户端(pscp)来将文件传输到远程主机中

通常来说,Linux 用户和管理员会使用 **scp** 这个命令行工具来从本地往远程主机传输文件。不过 PuTTY 给我们提供了一个叫做 <ruby>PuTTY 安全拷贝客户端<rt>PuTTY Secure Copy Client</rt></ruby>(简写为 **PSCP**)的工具来干这个事情。如果你的本地主机运行的是 Windows,你可能需要这个工具。PSCP 在 Windows 和 Linux 下都是可用的。
通常来说,Linux 用户和管理员会使用 `scp` 这个命令行工具来从本地往远程主机传输文件。不过 PuTTY 给我们提供了一个叫做 <ruby>PuTTY 安全复制客户端<rt>PuTTY Secure Copy Client</rt></ruby>(简写为 `pscp`)的工具来干这个事情。如果你的本地主机运行的是 Windows,你可能需要这个工具。PSCP 在 Windows 和 Linux 下都是可用的。

使用这个命令来将 **file.txt** 从本地的 Arch Linux 拷贝到远程的 Ubuntu 上:
使用这个命令来将 `file.txt` 从本地的 Arch Linux 拷贝到远程的 Ubuntu 上:

```shell
pscp -i test.ppk file.txt sk@192.168.225.22:/home/sk/
```

让我们来拆分这个命令:
让我们来分析这个命令:

* **-i test.ppk** : 访问远程主机的密钥文件;
* **file.txt** : 要拷贝到远程主机的文件;
* **sk@192.168.225.22** : 远程主机的用户名与 IP;
* **/home/sk/** : 目标路径。
* `-i test.ppk`:访问远程主机所用的密钥文件;
* `file.txt`:要拷贝到远程主机的文件;
* `sk@192.168.225.22`:远程主机的用户名与 IP;
* `/home/sk/`:目标路径。

要拷贝一个目录,请使用 <ruby>**-r**<rt>Recursive</rt></ruby> 参数:
要拷贝一个目录,请使用 `-r`(<ruby>递归<rt>Recursive</rt></ruby>)参数:

```shell
pscp -i test.ppk -r dir/ sk@192.168.225.22:/home/sk/
```

要使用 pscp 传输文件,请执行以下命令:
要从 Windows 使用 `pscp` 传输文件,请执行以下命令:

```shell
pscp -i test.ppk c:\documents\file.txt.txt sk@192.168.225.22:/home/sk/
```

你现在应该了解了 PuTTY 是什么,知道了如何安装它和如何使用它。同时,你也学习到了如何使用 pscp 程序在本地和远程主机上传输文件。
你现在应该了解了 PuTTY 是什么,知道了如何安装它和如何使用它。同时,你也学习到了如何使用 `pscp` 程序在本地和远程主机间传输文件。

以上便是所有了,希望这篇文章对你有帮助。

干杯!
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/
@ -138,7 +135,7 @@ via: https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/

作者:[SK][a]
选题:[lujun9972][b]
译者:[zhs852](https://github.com/zhs852)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,15 +1,15 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10555-1.html)
|
||||
[#]: subject: (Drinking coffee with AWK)
|
||||
[#]: via: (https://opensource.com/article/19/2/drinking-coffee-awk)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
|
||||
|
||||
用 AWK 喝咖啡
======
> 用一个简单的 AWK 程序跟踪你的同事所喝咖啡的欠款。
> 用一个简单的 AWK 程序跟踪你的同事喝咖啡的欠款。

![](https://img.linux.net.cn/data/attachment/album/201902/23/123031zw12eisifg10iofz.jpg)

@ -63,7 +63,7 @@ $1 == "member" {
}
```
第二条规则在记录付款时减少欠账。
第二条规则在记录付款(`payment`)时减少欠账。

```
$1 == "payment" {
@ -71,7 +71,7 @@ $1 == "payment" {
}
```

还款则相反:它增加欠账。这可以优雅地支持意外地给了某人太多钱的情况。
还款(`payback`)则相反:它增加欠账。这可以优雅地支持意外地给了某人太多钱的情况。

```
$1 == "payback" {
@ -79,7 +79,7 @@ $1 == "payback" {
}
```

最复杂的部分出现在有人购买速溶咖啡供咖啡角使用时。它被视为付款,并且该人的债务减少了适当的金额。接下来,它计算每个会员的费用。它根据成员的级别对所有成员进行迭代并增加欠款。
最复杂的部分出现在有人购买(`bought`)速溶咖啡供咖啡角使用时。它被视为付款(`payment`),并且该人的债务减少了适当的金额。接下来,它计算每个会员的费用。它根据成员的级别对所有成员进行迭代并增加欠款。

```
$1 == "bought" {
@ -101,7 +101,7 @@ END {
}
```

除了一个遍历成员文件,并向人们发送提醒电子邮件以支付他们的会费(积极清账)的脚本外,这个系统管理咖啡角相当一段时间。
再配合一个遍历成员文件,并向人们发送提醒电子邮件以支付会费(积极清账)的脚本,这个系统把咖啡角管理了相当长的一段时间。
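把上面的几条规则拼在一起,可以得到一个最小的可运行示意(账本内容为假设的示例数据,`bought` 的分摊逻辑按“级别占总级别的比例”简化,并非原文的完整程序):

```shell
# 假设的账本格式,与文中规则对应:member/payment/payback/bought
printf 'member alice 2\nmember bob 1\npayment alice 10\nbought bob 9\n' |
awk '
$1 == "member"  { level[$2] = $3; total += $3; owes[$2] += 0 }  # 注册成员及其级别
$1 == "payment" { owes[$2] -= $3 }                              # 付款减少欠账
$1 == "payback" { owes[$2] += $3 }                              # 还款增加欠账
$1 == "bought"  { owes[$2] -= $3                                # 购买视为付款
                  for (m in level) owes[m] += $3 * level[m] / total }  # 按级别分摊费用
END { for (m in owes) printf "%s %.2f\n", m, owes[m] }' | sort
```

对这份示例账本:总级别为 3,alice 付了 10 元,bob 花 9 元买了咖啡,按 2:1 分摊后,alice 欠 -4.00(多付),bob 欠 -6.00。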
--------------------------------------------------------------------------------

@ -110,7 +110,7 @@ via: https://opensource.com/article/19/2/drinking-coffee-awk

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
52
sources/talk/20120911 Doug Bolden, Dunnet (IF).md
Normal file
@ -0,0 +1,52 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Doug Bolden, Dunnet (IF))
|
||||
[#]: via: (http://www.wyrmis.com/games/if/dunnet.html)
|
||||
[#]: author: (W Doug Bolden http://www.wyrmis.com)
|
||||
|
||||
Doug Bolden, Dunnet (IF)
|
||||
======
|
||||
|
||||
### Dunnet (IF)

#### Review

When I began becoming a semi-serious hobbyist of IF last year, I mostly focused on Infocom, Adventures Unlimited, other Scott Adams based games, and freeware titles. I went on to buy some from Malinche. I picked up _1893_ and _Futureboy_ and (most recently) _Treasures of a Slave Kingdom_. I downloaded a lot of free games from various sites. With all of my research and playing, I never once read anything that talked about a game being bundled with Emacs.

Partially, this is because I am a Vim guy. But I used to use Emacs. Kind of a lot. For probably my first couple of years with Linux. About as long as I have been a diehard Vim fan, now. I just never explored, it seems.

I booted up Emacs tonight, and my fonts were hosed. Still do not know exactly why. I surfed some menus to find out what was going wrong and came across a menu option called "Adventure" under Games, which I assumed (I know, I know) meant the Crowther and Woods 1977 variety. When I clicked it tonight, thinking that it had been a few months since I chased a bird around with a cage in a mine so I could fight off giant snakes or something, I was brought up text involving ends of roads and shovels. Trees that, if shaken, kill me with a coconut. This was not the game I thought it was.

I dug around (or, in purely technical terms, typed "help") and got directed to [this website][1]. Well, here was an IF I had never touched before. Brand spanking new to me. I had planned to play out some _ToaSK_ tonight, but figured that could wait. Besides, I was not quite in the mood for the jocular fun of S. John Ross's commercial IF outing. I needed something a little more direct, and this apparently was it.
|
||||
Most of the game plays out just like the _Colossal Cave Adventure_ cousins of the oldschool (generally commercial) IF days. There are items you pick. Each does a single task (well, there could be one exception to this, I guess). You collect treasures. Winning is a combination of getting to the end and turning in the treasures. The game slightly tweaks the formula by allowing multiple drop off points for the treasures. Since there is a weight limit, though, you usually have to drop them off at a particular time to avoid getting stuck. At several times, your "item cache" is flushed, so to speak, meaning you have to go back and replay earlier portions to find out how to bring things foward. Damage to items can occur to stop you from being able to play. Replaying is pretty much unavoidable, unless you guess outcomes just right.
|
||||
|
||||
It also inherits many problems from the era it came. There is a twisty maze. I'm not sure how big it is. I just cheated and looked up a walkthrough for the maze portion. I plan on going back and replaying up to the maze bit and mapping it out, though. I was just mentally and physically beat when I played and knew that I was going to have to call it quits on the game for the night or cheat through the maze. I'm glad I cheated, because there are some interesting things after the maze.
|
||||
|
||||
It also has the same sort of stilted syntax and variable levels of description that the original _Adventure_ had. Looking at one item might give you "there is nothing special about that" while looking at another might give you a sentence of flavor text. Several things mentioned in the background do not exist to the parser, which some do. Part of game play is putting up with experimenting. This includes, in cases, a tendency of room descriptions to be written from the perspective of the first time you enter. I know that the Classroom found towards the end of the game does not mention the South exit, either. There are possibly other times this occured that I didn't notice.
|
||||
|
||||
It's final issue, again coming out of the era it was designed, is random death syndrome. This is not too common, but there are a few places where things that have no initially apparent fatal outcome lead to one anyhow. In some ways, this "fatal outcome" is just the game reaching an unwinnable state. For an example of the former, type "shake trees" in the first room. For an example of the latter, send either the lamp, the key, or the shovel through the ftp without switching ftp modes first. At least with the former, there is a sense of exploration in finding out new ways to die. In IF, creative deaths is a form of victory in their own right.
|
||||
|
||||
_Dunnet_ has a couple of differences from most IF. The former difference is minor. There are little odd descriptions throughout the game. "This room is red" or "The towel has a picture of Snoopy one it" or "There is a cliff here" that do not seem to have an immediate effect on the game. Sure, you can jump over the cliff (and die, obviously) but but it still comes off as a bright spot in the standard description matrix. Towards the end, you will be forced to bring back these details. It makes a neat little diversion of looking around and exploring things. Most of the details are cute and/or add to the surreality of the game overall.
|
||||
|
||||
The other big difference, and the one that greatly increased both my annoyance with and my enjoyment of the game, revolves around the two-three computer oriented scenes in the game. You have to type commands into two different computers throughout. One is a VAX and the other is, um, something like a PC (I forget). In both cases, there are clues to be found by knowing your way around the interface. This is a game for computer folk, so most who play it will have a sense of how to type "ls" or "dir" depending on the OS. But not all, will. Beating the game requires a general sense of computer literacy. You must know what types are in ftp. You must know how to determine what type a file is. You must know how to read a text file on a DOS style prompt. You must know something about protocols and etiquette for logging into ftp servers. All this sort of thing. If you do, or are willing to learn (I looked up some of the stuff online) then you can get past this portion with no problem. But this can be like the maze to some people, requiring several replays to get things right.
|
||||
|
||||
The end result is a quirky but fun game that I wish I had known about before, because now I have the feeling that my computer is hiding other secrets from me. Glad to have played. I will likely play again to see how many ways I can die.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.wyrmis.com/games/if/dunnet.html
|
||||
|
||||
作者:[W Doug Bolden][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.wyrmis.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://www.driver-aces.com/ronnie.html
|
@ -1,74 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Book Review: Fundamentals of Linux)
|
||||
[#]: via: (https://itsfoss.com/fundamentals-of-linux-book-review)
|
||||
[#]: author: (John Paul https://itsfoss.com/author/john/)
|
||||
|
||||
Book Review: Fundamentals of Linux
|
||||
======
|
||||
|
||||
There are many great books that cover the basics of what Linux is and how it works. Today, I will be taking a look at one such book: [Fundamentals of Linux][1] by Oliver Pelz, published by [PacktPub][2].
|
||||
|
||||
[Oliver Pelz][3] has over ten years of experience as a software developer and a system administrator. He holds a degree in bioinformatics.
|
||||
|
||||
### What is the book ‘Fundamentals of Linux’ about?
|
||||
|
||||
![Fundamental of Linux books][4]
|
||||
|
||||
As can be guessed from the title, the goal of Fundamentals of Linux is to give the reader a strong foundation from which to learn about the Linux command line. The book is a little over two hundred pages long, so it only focuses on teaching the everyday tasks and problems that users commonly encounter. The book is designed for readers who want to become Linux administrators.
|
||||
|
||||
The first chapter starts out by giving an overview of virtualization. From there the author instructs how to create a virtual instance of [CentOS][5] in [VirtualBox][6], how to clone it, and how to use snapshots. You will also learn how to connect to the virtual machines via SSH.
|
||||
|
||||
The second chapter covers the basics of the Linux command line. This includes shell globbing, shell expansion, and how to work with file names that contain spaces or special characters. It also explains how to interpret a command’s manual page, how to use `sed` and `awk`, and how to navigate the Linux file system.
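To give a flavor of the material that chapter covers, here is a quick sketch of globbing, quoting, and `sed`/`awk` one-liners; the file names and paths are invented for illustration:

```shell
# Globbing: the shell expands the pattern before the command runs
mkdir -p /tmp/fol-demo && cd /tmp/fol-demo
touch report1.txt report2.txt notes.md
ls report*.txt                                  # matches report1.txt report2.txt

# Quoting protects file names that contain spaces
touch "my notes.txt"
cat "my notes.txt"                              # empty file, prints nothing

# awk prints selected fields; sed edits a stream
printf 'alice 42\nbob 7\n' | awk '{print $1}'   # first column only
printf 'hello world\n' | sed 's/world/linux/'   # substitute in the stream
```

None of this is specific to the book; these are the everyday building blocks the chapter teaches.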
|
||||
|
||||
The third chapter takes a more in-depth look at the Linux file system. You will learn how files are linked in Linux and how to search for them. You will also be given an overview of users, groups, and file permissions. Since the chapter focuses on interacting with files, it covers how to read text files from the command line and gives an overview of the Vim editor.
|
||||
|
||||
Chapter four focuses on using the command line. It covers important commands, such as `cat`, `sort`, `awk`, `tee`, `tar`, `rsync`, `nmap`, `htop`, and more. You will learn what processes are and how they communicate with each other. This chapter also includes an introduction to Bash shell scripting.
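A brief sketch of how a few of those commands chain together in a pipeline (the file paths here are invented for illustration, not taken from the book):

```shell
# sort orders the lines; tee writes them to a file while passing them along
printf 'banana\napple\ncherry\n' | sort | tee /tmp/sorted.txt

# tar bundles files into a compressed archive; -t lists its contents
tar -czf /tmp/fruit.tar.gz -C /tmp sorted.txt
tar -tzf /tmp/fruit.tar.gz        # lists: sorted.txt
```

Pipelines like this are exactly the kind of process-to-process communication the chapter builds toward.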
|
||||
|
||||
The fifth and final chapter covers networking on Linux and other advanced command line concepts. The author discusses how Linux handles networking and gives examples using multiple virtual machines. He also covers how to install new programs and how to set up a firewall.
|
||||
|
||||
### Thoughts on the book
|
||||
|
||||
Fundamentals of Linux might seem short at five chapters and a little over two hundred pages. However, quite a bit of information is covered. You are given everything that you need to get going on the command line.
|
||||
|
||||
The book’s sole focus on the command line is one thing to keep in mind. You won’t get any information on how to use a graphical user interface. That is partially because Linux has so many different desktop environments and so many similar system applications that it would be hard to write a book that could cover all of the variables. It is also partially because the book is aimed at potential Linux administrators.
|
||||
|
||||
I was somewhat surprised to see that the author used [CentOS][7] to teach Linux. I would have expected him to use a more common desktop distro, like Ubuntu, Debian, or Fedora. However, because CentOS is a distro designed for servers, very little changes over time, which makes it a very stable basis for a course on Linux basics.
|
||||
|
||||
I’ve used Linux for over half a decade. I spent most of that time using desktop Linux. I dove into the terminal when I needed to, but didn’t spend lots of time there. I have performed many of the actions covered in this book using a mouse. Now, I know how to do the same things via the terminal. It won’t change the way I do my tasks, but it will help me understand what goes on behind the curtain.
|
||||
|
||||
If you have either just started using Linux or are planning to do so in the future, I would not recommend this book. It might be a little overwhelming. If you have already spent some time with Linux or can quickly grasp the technical language, this book may very well be for you.
|
||||
|
||||
If you think this book is apt for your learning needs, you can get the book from the link below:
|
||||
|
||||
We will be trying to review more Linux books in coming months so stay tuned with us.
|
||||
|
||||
What is your favorite introductory book on Linux? Let us know in the comments below.
|
||||
|
||||
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][8].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/fundamentals-of-linux-book-review
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.packtpub.com/networking-and-servers/fundamentals-linux
|
||||
[2]: https://www.packtpub.com/
|
||||
[3]: http://www.oliverpelz.de/index.html
|
||||
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/fundamentals-of-linux-book-review.jpeg?resize=800%2C450&ssl=1
|
||||
[5]: https://centos.org/
|
||||
[6]: https://www.virtualbox.org/
|
||||
[7]: https://www.centos.org/
|
||||
[8]: http://reddit.com/r/linuxusersgroup
|
@ -0,0 +1,136 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How our non-profit works openly to make education accessible)
|
||||
[#]: via: (https://opensource.com/open-organization/19/2/building-curriculahub)
|
||||
[#]: author: (Tanner Johnson https://opensource.com/users/johnsontanner3)
|
||||
|
||||
How our non-profit works openly to make education accessible
|
||||
======
|
||||
To build an open access education hub, our team practiced the same open methods we teach our students.
|
||||

|
||||
|
||||
I'm lucky to work with a team of impressive students at Duke University who are leaders in their classrooms and beyond. As members of [CSbyUs][1], a non-profit and student-run organization based at Duke, we connect university students to middle school students, mostly from [Title I schools][2] across North Carolina's Research Triangle Park. Our mission is to fuel future change agents from under-resourced learning environments by fostering critical technology skills for thriving in the digital age.
|
||||
|
||||
The CSbyUs Tech R&D team (TRD for short) recently set an ambitious goal to build and deploy a powerful web application over the course of one fall semester. Our team of six knew we had to do something about our workflow to ship a product by winter break. In our middle school classrooms, we teach our learners to use agile methodologies and design thinking to create mobile applications. On the TRD team, we realized we needed to practice what we preach in those classrooms to ship a quality product by semester's end.
|
||||
|
||||
This is the story of how and why we utilized the principles we teach our students in order to deploy technology that will scale our mission and make our teaching resources open and accessible.
|
||||
|
||||
### Setting the scene
|
||||
|
||||
For the past two years, CSbyUs has operated "on the ground," connecting Duke undergraduates to Durham middle schools via after-school programming. After teaching and evaluating several iterations of our unique, student-centered mobile app development curriculum, we saw promising results. Our middle schoolers were creating functional mobile apps, connecting to their mentors, and leaving the class more confident in their computer science skills. Naturally, we wondered how to expand our programming.
|
||||
|
||||
We knew we should take our own advice and lean into web-based technologies to share our work, but we weren't immediately sure what problem we needed to solve. Ultimately, we decided to create a web app that serves as a centralized hub for open source and open access digital education curricula. "CurriculaHub" (name inspired by GitHub) would be the defining pillar of CSbyUs's new website, where educators could share and adapt resources.
|
||||
|
||||
But the vision and implementation didn't happen overnight.
|
||||
|
||||
Given our sense of urgency and the potential of "CurriculaHub," we wanted to start this project with a well defined plan. The stakes were (and are) high, so planning, albeit occasionally tedious, was critical to our success. Like the curriculum we teach, we scaffolded our workflow process with design thinking and agile methodology, two critical 21st century frameworks we often fail to practice in higher ed.
|
||||
|
||||
What follows is a step-wise explanation of our design thinking process, starting from inspiration and ending in a shipped prototype.
|
||||
|
||||
|
||||
|
||||
### Our Process
|
||||
|
||||
#### **Step 1: Pre-Work**
|
||||
|
||||
In order to understand the why to our what, you have to know who our team is.
|
||||
|
||||
The members of this team are busy. All of us contribute to CSbyUs beyond our TRD-related responsibilities. As an organization with lofty goals beyond creating a web-based platform, we have to reconcile our "on the ground" commitments (i.e., curriculum curation, research and evaluation, mentorship training and practice, presentations at conferences, etc.) with our "in the cloud" technological goals.
|
||||
|
||||
In addition to balancing time across our organization, we have to be flexible in the ways we communicate. As a remote member of the team, I'm writing this post from Spain, but the rest of our team is based in North Carolina, adding collaboration challenges.
|
||||
|
||||
Before diving into development (or even problem identification), we knew we had to set some clear expectations for how we'd operate as a team. We took a note from our curriculum team's book and started with some [rules of engagement][3]. This is actually [a well-documented approach][4] to setting up a team's [social contract][5] used by teams across the tech space. During a summer internship at IBM, I remember pre-project meetings where my manager and team spent more than an hour clarifying principles of interaction. Whenever we faced uncertainty in our team operations, we'd pull out the rules of engagement and clear things up almost immediately. (An aside: I've found this strategy to be wildly effective not only in my teams, but in all relationships).
|
||||
|
||||
Considering the remote nature of our team, one of our favorite tools is Slack. We use it for almost everything. We can't have sticky-note brainstorms, so we create Slack brainstorm threads. In fact, that's exactly what we did to generate our rules of engagement. One [open source principle we take to heart is transparency][6]; Slack allows us to archive and openly share our thought processes and decision-making steps with the rest of our team.
|
||||
|
||||
#### **Step 2: Empathy Research**
|
||||
|
||||
We're all here for unique reasons, but we find a common intersection: the desire to broaden equity in access to quality digital era education.
|
||||
|
||||
Each member of our team has been lucky enough to study at Duke. We know how it feels to have limitless opportunities and the support of talented peers and renowned professors. But we're mindful that this isn't normal. Across the country and beyond, these opportunities are few and far between. Where they do exist, they're confined within the guarded walls of higher institutes of learning or come with a lofty price tag.
|
||||
|
||||
While our team members' common desire to broaden access is clear, we work hard to root our decisions in research. So our team begins each semester [reviewing][7] [research][8] that justifies our existence. TRD works with CRD (curriculum research and development) and TT (teaching team), our two other CSbyUs sub-teams, to discuss current trends in digital education access, their systemic roots, and novel approaches to broaden access and make materials relevant to learners. We not only perform research collaboratively at the beginning of the semester but also implement weekly stand-up research meetings with the sub-teams. During these, CRD often presents new findings we've gleaned from interviewing current teachers and digging into the current state of access in our local community. They are our constant source of data-driven, empathy-fueling research.
|
||||
|
||||
Through this type of empathy-based research, we have found that educators interested in student-centered teaching and digital era education lack a centralized space for proven and adaptable curricula and lesson plans. The bureaucracy and rigid structures that shape classroom learning in the United States makes reshaping curricula around the personal needs of students daunting and seemingly impossible. As students, educators, and technologists, we wondered how we might unleash the creativity and agency of others by sharing our own resources and creating an online ecosystem of support.
|
||||
|
||||
#### **Step 3: Defining the Problem**
|
||||
|
||||
We wanted to avoid [scope creep][9] caused by a poorly defined mission and vision (something that happens too often in some organizations). We needed structures to define our goals and maintain clarity in scope. Before imagining our application features, we knew we'd have to start with defining our north star. We would generate a clear problem statement to which we could refer throughout development.
|
||||
This is common practice for us. Before committing to new programming, new partnerships, or new changes, the CSbyUs team always refers back to our mission and vision and asks, "Does this make sense?" (in fact, we post our mission and vision to the top of every meeting minutes document). If it fits and we have capacity to pursue it, we go for it. And if we don't, then we don't. In the case of a "no," we are always sure to document what and why because, as engineers know, [detailed logs are almost always a good decision][10]. TRD gleaned that big-picture wisdom and implemented a group-defined problem statement to guide our sub-team mission and future development decisions.
|
||||
|
||||
To formulate a single, succinct problem statement, we each began by posting our own takes on the problem. Then, during one of our weekly [30-minute-no-more-no-less stand-up meetings][11], we identified commonalities and differences, ultimately [merging all our ideas into one][12]. Boiled down, we identified that there exist massive barriers for educators, parents, and students to share, modify, and discuss open source and accessible curricula. And of course, our mission would be to break down those barriers with user-centered technology. This "north star" lives as a highly visible document in our Google Drive, which has influenced our feature prioritization and future directions.
|
||||
|
||||
#### **Step 4: Ideating a Solution**
|
||||
|
||||
With our problem defined and our rules of engagement established, we were ready to imagine a solution.
|
||||
|
||||
We believe that effective structures can ensure meritocracy and community. Sometimes, certain personalities dominate team decision-making and leave little space for collaborative input. To avoid that pitfall and maximize our equality of voice, we tend to use "offline" individual brainstorms and merge collective ideas online. It's the same process we used to create our rules of engagement and problem statement. In the case of ideating a solution, we started with "offline" brainstorms of three [S.M.A.R.T. goals][13]. Those goals would be ones we could achieve as a software development team (specifically because the CRD and TT teams offer different skill sets) and address our problem statement. Finally, we wrote these goals in a meeting minutes document, clustering common goals and ultimately identifying themes that describe our application features. In the end, we identified three: support, feedback, and open source curricula.
|
||||
|
||||
From here, we divided ourselves into sub-teams, repeating the goal-setting process with those teams—but in a way that was specific to our features. And if it's not obvious by now, we realized a web-based platform would be the most optimal and scalable solution for supporting students, educators, and parents by providing a hub for sharing and adapting proven curricula.
|
||||
|
||||
To work efficiently, we needed to be adaptive, reinforcing structures that worked and eliminating those that didn't. For example, we put a lot of effort in crafting meeting agendas. We strive to include only those subjects we must discuss in-person and table everything else for offline discussions on Slack or individually organized calls. We practice this in real time, too. During our regular meetings on Google Hangouts, if someone brings up a topic that isn't highly relevant or urgent, the current stand-up lead (a role that rotates weekly) "parking lots" it until the end of the meeting. If we have space at the end, we pull from the parking lot, and if not, we reserve that discussion for a Slack thread.
|
||||
|
||||
This prioritization structure has led to massive gains in meeting efficiency and a focus on progress updates, shared technical hurdle discussions, collective decision-making, and assigning actionable tasks (the next-steps a person has committed to taking, documented with their name attached for everyone to view).
|
||||
|
||||
#### **Step 5: Prototyping**
|
||||
|
||||
This is where the fun starts.
|
||||
|
||||
Given our requirements—like an interactive user experience, the ability to collaborate on blogs and curricula, and the ability to receive feedback from our users—we began identifying the best technologies. Ultimately, we decided to build our web app with a ReactJS frontend and a Ruby on Rails backend. We chose these due to the extensive documentation and active community for both, and the well-maintained libraries that bridge the relationship between the two (e.g., react-on-rails). Since we chose Rails for our backend, it was obvious from the start that we'd work within a Model-View-Controller framework.
|
||||
|
||||
Most of us didn't have previous experience with web development, neither on the frontend nor the backend. So, getting up and running with either technology independently presented a steep learning curve, and gluing the two together only steepened it. To centralize our work, we use an open-access GitHub repository. Given our relatively novice experience in web development, our success hinged on extremely efficient and open collaborations.
|
||||
|
||||
And to explain that, we need to revisit the idea of structures. Some of ours include peer code reviews, where we exchange best practices and reusable solutions; up-to-date tech and user documentation, which lets us look back and understand design decisions; and (my personal favorite) our questions bot on Slack, which gently reminds us to post and answer questions in a separate #questions channel.
|
||||
|
||||
We've also dabbled with other strategies, like instructional videos for generating basic React components and rendering them in Rails Views. I tried this and in my first video, [I covered a basic introduction to our repository structure][14] and best practices for generating React components. While this proved useful, our team has since realized the wealth of online resources that document various implementations of these technologies robustly. Also, we simply haven't had enough time (but we might revisit them in the future—stay tuned).
|
||||
|
||||
We're also excited about our cloud-based implementation. We use Heroku to host our application and manage data storage. In next iterations, we plan to both expand upon our current features and configure a continuous iteration/continuous development pipeline using services like Jenkins integrated with GitHub.
|
||||
|
||||
#### **Step 6: Testing**
|
||||
|
||||
Since we've [just deployed][1], we are now in a testing stage. Our goals are to collect user feedback across our feature domains and our application experience as a whole, especially as they interact with our specific audiences. Given our original constraints (namely, time and people power), this iteration is the first of many to come. For example, future iterations will allow for individual users to register accounts and post external curricula directly on our site without going through the extra steps of email. We want to scale and maximize our efficiency, and that's part of the recipe we'll deploy in future iterations. As for user testing: We collect user feedback via our contact form, via informal testing within our team, and via structured focus groups. [We welcome your constructive feedback and collaboration][15].
|
||||
|
||||
Our team was only able to unite new people with highly varied experience through the power of open principles and methodologies. Luckily enough, each one I described in this post is adaptable to virtually every team.
|
||||
|
||||
Regardless of whether you work—on a software development team, in a classroom, or, heck, [even in your family][16]—principles like transparency and community are almost always the best foundation for a successful organization.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/19/2/building-curriculahub
|
||||
|
||||
作者:[Tanner Johnson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/johnsontanner3
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://csbyus.org
|
||||
[2]: https://www2.ed.gov/programs/titleiparta/index.html
|
||||
[3]: https://docs.google.com/document/d/1tqV6B6Uk-QB7Psj1rX9tfCyW3E64_v6xDlhRZ-L2rq0/edit
|
||||
[4]: https://www.atlassian.com/team-playbook/plays/rules-of-engagement
|
||||
[5]: https://openpracticelibrary.com/practice/social-contract/
|
||||
[6]: https://opensource.com/open-organization/resources/open-org-definition
|
||||
[7]: https://services.google.com/fh/files/misc/images-of-computer-science-report.pdf
|
||||
[8]: https://drive.google.com/file/d/1_iK0ZRAXVwGX9owtjUUjNz3_2kbyYZ79/view?usp=sharing
|
||||
[9]: https://www.pmi.org/learning/library/top-five-causes-scope-creep-6675
|
||||
[10]: https://www.codeproject.com/Articles/42354/The-Art-of-Logging#what
|
||||
[11]: https://opensource.com/open-organization/16/2/6-steps-running-perfect-30-minute-meeting
|
||||
[12]: https://docs.google.com/document/d/1wdPRvFhMKPCrwOG2CGp7kP4rKOXrJKI77CgjMfaaXnk/edit?usp=sharing
|
||||
[13]: https://www.projectmanager.com/blog/how-to-create-smart-goals
|
||||
[14]: https://www.youtube.com/watch?v=52kvV0plW1E
|
||||
[15]: http://csbyus.org/
|
||||
[16]: https://opensource.com/open-organization/15/11/what-our-families-teach-us-about-organizational-life
|
@ -1,3 +1,5 @@
|
||||
translating by Cycoe
|
||||
Being translated by Cycoe
|
||||
8 KDE Plasma Tips and Tricks to Improve Your Productivity
|
||||
======
|
||||
|
||||
|
@ -1,78 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Get started with Org mode without Emacs)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-org-mode)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
Get started with Org mode without Emacs
|
||||
======
|
||||
No, you don't need Emacs to use Org, the 16th in our series on open source tools that will make you more productive in 2019.
|
||||
|
||||

|
||||
|
||||
There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
|
||||
|
||||
Here's the 16th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
|
||||
|
||||
### Org (without Emacs)
|
||||
|
||||
[Org mode][1] (or just Org) is not in the least bit new, but there are still many people who have never used it. They would love to try it out to get a feel for how Org can help them be productive. But the biggest barrier is that Org is associated with Emacs, and many people think one requires the other. Not so! Org can be used with a variety of other tools and editors once you understand the basics.
|
||||
|
||||

|
||||
|
||||
Org, at its very heart, is a structured text file. It has headers, subheaders, and keywords that allow other tools to parse files into agendas and to-do lists. Org files can be edited with any flat-text editor (e.g., [Vim][2], [Atom][3], or [Visual Studio Code][4]), and many have plugins that help create and manage Org files.
|
||||
|
||||
A basic Org file looks something like this:
|
||||
|
||||
```
|
||||
* Task List
|
||||
** TODO Write Article for Day 16 - Org w/out emacs
|
||||
DEADLINE: <2019-01-25 12:00>
|
||||
*** DONE Write sample org snippet for article
|
||||
- Include at least one TODO and one DONE item
|
||||
- Show notes
|
||||
- Show SCHEDULED and DEADLINE
|
||||
*** TODO Take Screenshots
|
||||
** Dentist Appointment
|
||||
SCHEDULED: <2019-01-31 13:30-14:30>
|
||||
```
|
||||
|
||||
Org uses an outline format in which leading asterisks (`*`) indicate an item's nesting level. Any item that begins with the word TODO (yes, in all caps) is just that—a to-do item. The word DONE indicates it is completed. SCHEDULED and DEADLINE indicate dates and times relevant to the item. If there's no time in either field, the item is considered an all-day event.
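Because the format is this regular, even a few lines of ad-hoc code can pull a to-do list out of an Org file. Here is a minimal Python sketch (purely an illustration, not one of the established Org libraries) that extracts the state and title of each TODO/DONE headline from the sample above:

```python
import re

SAMPLE = """\
* Task List
** TODO Write Article for Day 16 - Org w/out emacs
DEADLINE: <2019-01-25 12:00>
*** DONE Write sample org snippet for article
*** TODO Take Screenshots
** Dentist Appointment
SCHEDULED: <2019-01-31 13:30-14:30>
"""

# A headline is one or more '*', a state keyword, then the title
HEADLINE = re.compile(r"^(\*+)\s+(TODO|DONE)\s+(.+)$")

def org_items(text):
    """Yield (depth, state, title) for every TODO/DONE headline."""
    for line in text.splitlines():
        m = HEADLINE.match(line)
        if m:
            yield len(m.group(1)), m.group(2), m.group(3)

for depth, state, title in org_items(SAMPLE):
    print(f"{'  ' * (depth - 1)}[{state}] {title}")
```

Note that "Dentist Appointment" is skipped: it is a headline, but not a to-do item, exactly as described above.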
|
||||
|
||||
With the right plugins, your favorite text editor becomes a powerhouse of productivity and organization. For example, the [vim-orgmode][5] plugin's features include functions to create Org files, syntax highlighting, and key commands to generate agendas and comprehensive to-do lists across files.
|
||||
|
||||

|
||||
|
||||
The Atom [Organized][6] plugin adds a sidebar on the right side of the screen that shows the agenda and to-do items in Org files. It can read from multiple files by default with a path set up in the configuration options. The Todo sidebar allows you to click on a to-do item to mark it done, then automatically updates the source Org file.
|
||||
|
||||

|
||||
|
||||
There are also a whole host of tools that "speak Org" to help keep you productive. With libraries in Python, Perl, PHP, NodeJS, and more, you can develop your own scripts and tools. And, of course, there is also [Emacs][7], which has Org support within the core distribution.
|
||||
|
||||

|
||||
|
||||
Org mode is one of the best tools for keeping on track with what needs to be done and when. And, contrary to myth, it doesn't need Emacs, just a text editor.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-org-mode
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://orgmode.org/
|
||||
[2]: https://www.vim.org/
|
||||
[3]: https://atom.io/
|
||||
[4]: https://code.visualstudio.com/
|
||||
[5]: https://github.com/jceb/vim-orgmode
|
||||
[6]: https://atom.io/packages/organized
|
||||
[7]: https://www.gnu.org/software/emacs/
|
@ -1,158 +0,0 @@
|
||||
The 5 Best Linux Distributions for Development
|
||||
============================================================
|
||||
|
||||

|
||||
Jack Wallen looks at some of the best Linux distributions for development efforts.[Creative Commons Zero][6]
|
||||
|
||||
When considering Linux, there are so many variables to take into account. What package manager do you wish to use? Do you prefer a modern or old-standard desktop interface? Is ease of use your priority? How flexible do you want your distribution? What task will the distribution serve?
|
||||
|
||||
It is that last question which should often be considered first. Is the distribution going to work as a desktop or a server? Will you be doing network or system audits? Or will you be developing? If you’ve spent much time considering Linux, you know that for every task there are several well-suited distributions. This certainly holds true for developers. Even though Linux, by design, is an ideal platform for developers, certain distributions rise above the rest and serve as great operating systems for developers.
|
||||
|
||||
I want to share what I consider to be some of the best distributions for your development efforts. Although each of these five distributions can be used for general purpose development (with maybe one exception), they each serve a specific purpose. You may or may not be surprised by the selections.
|
||||
|
||||
With that said, let’s get to the choices.
|
||||
|
||||
### Debian
|
||||
|
||||
The [Debian][14] distribution winds up at the top of many a Linux list, with good reason. Debian is the distribution on which so many others are based, and that is why many developers choose it. When you develop a piece of software on Debian, chances are very good that the package will also work on [Ubuntu][15], [Linux Mint][16], [Elementary OS][17], and a vast collection of other distributions.
|
||||
|
||||
Beyond that obvious answer, Debian also has a very large amount of applications available, by way of the default repositories (Figure 1).
|
||||
|
||||

|
||||
Figure 1: Available applications from the standard Debian repositories.[Used with permission][1]
|
||||
|
||||
To make matters even more programmer-friendly, those applications (and their dependencies) are simple to install. Take, for instance, the build-essential package (which can be installed on any distribution derived from Debian). This package includes the likes of dpkg-dev, g++, gcc, hurd-dev, libc-dev, and make—all tools necessary for the development process. The build-essential package can be installed with the command `sudo apt install build-essential`.
|
||||
|
||||
There are hundreds of other developer-specific applications available from the standard repositories, tools such as:
|
||||
|
||||
* Autoconf—configure script builder
|
||||
|
||||
* Autoproject—creates a source package for a new program
|
||||
|
||||
* Bison—general purpose parser generator
|
||||
|
||||
* Bluefish—powerful GUI editor, targeted towards programmers
|
||||
|
||||
* Geany—lightweight IDE
|
||||
|
||||
* Kate—powerful text editor
|
||||
|
||||
* Eclipse—helps builders independently develop tools that integrate with other people’s tools
|
||||
|
||||
The list goes on and on.
|
||||
|
||||
Debian is also as rock-solid a distribution as you’ll find, so there’s very little concern you’ll lose precious work to a desktop crash. As a bonus, all programs included with Debian have met the [Debian Free Software Guidelines][18], which adhere to the following “social contract”:
|
||||
|
||||
* Debian will remain 100% free.
|
||||
|
||||
* We will give back to the free software community.
|
||||
|
||||
* We will not hide problems.
|
||||
|
||||
* Our priorities are our users and free software.
|
||||
|
||||
* Works that do not meet our free software standards are included in a non-free archive.
|
||||
|
||||
Also, if you’re new to developing on Linux, Debian has a handy [Programming section in their user manual][19].
|
||||
|
||||
### openSUSE Tumbleweed
|
||||
|
||||
If you’re looking to develop with a cutting-edge, rolling release distribution, [openSUSE][20] offers one of the best in [Tumbleweed][21]. Not only will you be developing with the most up-to-date software available, you’ll be doing so with the help of openSUSE’s amazing administrator tools, which include YaST. If you’re not familiar with YaST (Yet another Setup Tool), it’s an incredibly powerful piece of software that allows you to manage the whole of the platform from one convenient location. From within YaST, you can also install using RPM Groups. Open YaST, click on RPM Groups (software grouped together by purpose), and scroll down to the Development section to see the large number of groups available for installation (Figure 2).
|
||||
|
||||
|
||||

|
||||
Figure 2: Installing package groups in openSUSE Tumbleweed.[Creative Commons Zero][2]
|
||||
|
||||
openSUSE also allows you to quickly install all the necessary devtools with the simple click of a weblink. Head over to the [rpmdevtools install site][22] and click the link for Tumbleweed. This will automatically add the necessary repository and install rpmdevtools.
|
||||
|
||||
By developing with a rolling release distribution, you know you’re working with the most recent releases of installed software.
|
||||
|
||||
### CentOS
|
||||
|
||||
Let’s face it, [Red Hat Enterprise Linux][23] (RHEL) is the de facto standard for enterprise businesses. If you’re looking to develop for that particular platform, and you can’t afford a RHEL license, you cannot go wrong with [CentOS][24]—which is, effectively, a community version of RHEL. You will find many of the packages found on CentOS to be the same as in RHEL—so once you’re familiar with developing on one, you’ll be fine on the other.
|
||||
|
||||
If you’re serious about developing on an enterprise-grade platform, you cannot go wrong starting with CentOS. And because CentOS is a server-specific distribution, you can more easily develop for a web-centric platform. Instead of developing your work and then migrating it to a server (hosted on a different machine), you can easily have CentOS set up to serve as an ideal host for both developing and testing.
|
||||
|
||||
Looking for software to meet your development needs? You need only open the CentOS Application Installer, where you’ll find a Developer section that includes a dedicated sub-section for Integrated Development Environments (IDEs - Figure 3).
|
||||
|
||||

|
||||
Figure 3: Installing a powerful IDE is simple in CentOS.[Used with permission][3]
|
||||
|
||||
CentOS also includes Security Enhanced Linux (SELinux), which makes it easier for you to test your software’s ability to integrate with the same security platform found in RHEL. SELinux can often cause headaches for poorly designed software, so having it at the ready can be a real boon for ensuring your applications work on the likes of RHEL. If you’re not sure where to start with developing on CentOS 7, you can read through the [RHEL 7 Developer Guide][25].
|
||||
|
||||
### Raspbian
|
||||
|
||||
Let’s face it, embedded systems are all the rage. One easy means of working with such systems is via the Raspberry Pi—a tiny footprint computer that has become incredibly powerful and flexible. In fact, the Raspberry Pi has become the hardware used by DIYers all over the planet. Powering those devices is the [Raspbian][26] operating system. Raspbian includes tools like [BlueJ][27], [Geany][28], [Greenfoot][29], [Sense HAT Emulator][30], [Sonic Pi][31], and [Thonny Python IDE][32], [Python][33], and [Scratch][34], so you won’t want for the necessary development software. Raspbian also includes a user-friendly desktop UI (Figure 4), to make things even easier.
|
||||
|
||||

|
||||
Figure 4: The Raspbian main menu, showing pre-installed developer software.[Used with permission][4]
|
||||
|
||||
For anyone looking to develop for the Raspberry Pi platform, Raspbian is a must have. If you’d like to give Raspbian a go, without the Raspberry Pi hardware, you can always install it as a VirtualBox virtual machine, by way of the ISO image found [here][35].
|
||||
|
||||
### Pop!_OS
|
||||
|
||||
Don’t let the name fool you: [System76][36]’s [Pop!_OS][37] entry into the world of operating systems is serious. And although what System76 has done to this Ubuntu derivative may not be readily obvious, it is something special.
|
||||
|
||||
The goal of System76 is to create an operating system specific to the developer, maker, and computer science professional. With a newly-designed GNOME theme, Pop!_OS is beautiful (Figure 5) and as highly functional as you would expect from both the hardware maker and desktop designers.
|
||||
|
||||
|
||||
|
||||

|
||||
Figure 5: The Pop!_OS Desktop.[Used with permission][5]
|
||||
|
||||
But what makes Pop!_OS special is the fact that it is being developed by a company dedicated to Linux hardware. This means, when you purchase a System76 laptop, desktop, or server, you know the operating system will work seamlessly with the hardware—on a level no other company can offer. I would predict that, with Pop!_OS, System76 will become the Apple of Linux.
|
||||
|
||||
### Time for work
|
||||
|
||||
In its own way, each of these distributions fills a specific niche. You have a stable desktop (Debian), a cutting-edge desktop (openSUSE Tumbleweed), a server (CentOS), an embedded platform (Raspbian), and a distribution that seamlessly melds with hardware (Pop!_OS). With the exception of Raspbian, any one of these distributions would serve as an outstanding development platform. Get one installed and start working on your next project with confidence.
|
||||
|
||||
_Learn more about Linux through the free ["Introduction to Linux"][13] course from The Linux Foundation and edX._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/5-best-linux-distributions-development
|
||||
|
||||
作者:[JACK WALLEN ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[3]:https://www.linux.com/licenses/category/used-permission
|
||||
[4]:https://www.linux.com/licenses/category/used-permission
|
||||
[5]:https://www.linux.com/licenses/category/used-permission
|
||||
[6]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[7]:https://www.linux.com/files/images/devel1jpg
|
||||
[8]:https://www.linux.com/files/images/devel2jpg
|
||||
[9]:https://www.linux.com/files/images/devel3jpg
|
||||
[10]:https://www.linux.com/files/images/devel4jpg
|
||||
[11]:https://www.linux.com/files/images/devel5jpg
|
||||
[12]:https://www.linux.com/files/images/king-penguins1920jpg
|
||||
[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[14]:https://www.debian.org/
|
||||
[15]:https://www.ubuntu.com/
|
||||
[16]:https://linuxmint.com/
|
||||
[17]:https://elementary.io/
|
||||
[18]:https://www.debian.org/social_contract
|
||||
[19]:https://www.debian.org/doc/manuals/debian-reference/ch12.en.html
|
||||
[20]:https://www.opensuse.org/
|
||||
[21]:https://en.opensuse.org/Portal:Tumbleweed
|
||||
[22]:https://software.opensuse.org/download.html?project=devel%3Atools&package=rpmdevtools
|
||||
[23]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
|
||||
[24]:https://www.centos.org/
|
||||
[25]:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/pdf/developer_guide/Red_Hat_Enterprise_Linux-7-Developer_Guide-en-US.pdf
|
||||
[26]:https://www.raspberrypi.org/downloads/raspbian/
|
||||
[27]:https://www.bluej.org/
|
||||
[28]:https://www.geany.org/
|
||||
[29]:https://www.greenfoot.org/
|
||||
[30]:https://www.raspberrypi.org/blog/sense-hat-emulator/
|
||||
[31]:http://sonic-pi.net/
|
||||
[32]:http://thonny.org/
|
||||
[33]:https://www.python.org/
|
||||
[34]:https://scratch.mit.edu/
|
||||
[35]:http://rpf.io/x86iso
|
||||
[36]:https://system76.com/
|
||||
[37]:https://system76.com/pop
|
286
sources/tech/20180926 HTTP- Brief History of HTTP.md
Normal file
286
sources/tech/20180926 HTTP- Brief History of HTTP.md
Normal file
@ -0,0 +1,286 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (HTTP: Brief History of HTTP)
|
||||
[#]: via: (https://hpbn.co/brief-history-of-http/#http-09-the-one-line-protocol)
|
||||
[#]: author: (Ilya Grigorik https://www.igvita.com/)
|
||||
|
||||
HTTP: Brief History of HTTP
|
||||
======
|
||||
|
||||
### Introduction
|
||||
|
||||
The Hypertext Transfer Protocol (HTTP) is one of the most ubiquitous and widely adopted application protocols on the Internet: it is the common language between clients and servers, enabling the modern web. From its simple beginnings as a single keyword and document path, it has become the protocol of choice not just for browsers, but for virtually every Internet-connected software and hardware application.
|
||||
|
||||
In this chapter, we will take a brief historical tour of the evolution of the HTTP protocol. A full discussion of the varying HTTP semantics is outside the scope of this book, but an understanding of the key design changes of HTTP, and the motivations behind each, will give us the necessary background for our discussions on HTTP performance, especially in the context of the many upcoming improvements in HTTP/2.
|
||||
|
||||
### §HTTP 0.9: The One-Line Protocol
|
||||
|
||||
The original HTTP proposal by Tim Berners-Lee was designed with simplicity in mind, so as to help with the adoption of his other nascent idea: the World Wide Web. The strategy appears to have worked: aspiring protocol designers, take note.
|
||||
|
||||
In 1991, Berners-Lee outlined the motivation for the new protocol and listed several high-level design goals: file transfer functionality, ability to request an index search of a hypertext archive, format negotiation, and an ability to refer the client to another server. To prove the theory in action, a simple prototype was built, which implemented a small subset of the proposed functionality:
|
||||
|
||||
* Client request is a single ASCII character string.
|
||||
|
||||
* Client request is terminated by a carriage return and line feed (CRLF).
|
||||
|
||||
* Server response is an ASCII character stream.
|
||||
|
||||
|
||||
|
||||
* Server response is a hypertext markup language (HTML).
|
||||
|
||||
* Connection is terminated after the document transfer is complete.
|
||||
|
||||
|
||||
|
||||
|
||||
However, even that sounds a lot more complicated than it really is. What these rules enable is an extremely simple, Telnet-friendly protocol, which some web servers support to this very day:
|
||||
|
||||
```
$> telnet google.com 80
Connected to 74.125.xxx.xxx

GET /about/

(hypertext response)
(connection closed)
```
|
||||
|
||||
The request consists of a single line: `GET` method and the path of the requested document. The response is a single hypertext document—no headers or any other metadata, just the HTML. It really couldn’t get any simpler. Further, since the previous interaction is a subset of the intended protocol, it unofficially acquired the HTTP 0.9 label. The rest, as they say, is history.
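The entire exchange can be reproduced in a few lines of Python. The following is a minimal sketch that plays both sides over a loopback socket; the toy server and the `/about/` path are stand-ins for illustration, not part of any real deployment:

```python
import socket
import threading

def http09_server(sock):
    """Toy server speaking HTTP 0.9: read one request line, send raw HTML, close."""
    conn, _ = sock.accept()
    request = conn.recv(1024).decode("ascii")        # e.g. "GET /about/\r\n"
    assert request.startswith("GET ")
    conn.sendall(b"<html>About this server</html>")  # no status line, no headers
    conn.close()

# Bind a throwaway local port and serve exactly one request in the background.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=http09_server, args=(server,)).start()

# The client side of an HTTP 0.9 exchange: one ASCII line, then read until close.
client = socket.create_connection(server.getsockname())
client.sendall(b"GET /about/\r\n")
response = b""
while chunk := client.recv(1024):
    response += chunk
client.close()
print(response.decode("ascii"))
```

Note that the end of the response is signaled only by the connection closing, exactly as the protocol rules above describe.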
|
||||
|
||||
From these humble beginnings in 1991, HTTP took on a life of its own and evolved rapidly over the coming years. Let us quickly recap the features of HTTP 0.9:
|
||||
|
||||
* Client-server, request-response protocol.
|
||||
|
||||
* ASCII protocol, running over a TCP/IP link.
|
||||
|
||||
* Designed to transfer hypertext documents (HTML).
|
||||
|
||||
* The connection between server and client is closed after every request.
|
||||
|
||||
|
||||
```
|
||||
Popular web servers, such as Apache and Nginx, still support the HTTP 0.9 protocol—in part, because there is not much to it! If you are curious, open up a Telnet session and try accessing google.com, or your own favorite site, via HTTP 0.9 and inspect the behavior and the limitations of this early protocol.
|
||||
```
|
||||
|
||||
### §HTTP/1.0: Rapid Growth and Informational RFC
|
||||
|
||||
The period from 1991 to 1995 is one of rapid coevolution of the HTML specification, a new breed of software known as a "web browser," and the emergence and quick growth of the consumer-oriented public Internet infrastructure.
|
||||
|
||||
```
|
||||
##### §The Perfect Storm: Internet Boom of the Early 1990s
|
||||
|
||||
Building on Tim Berners-Lee’s initial browser prototype, a team at the National Center for Supercomputing Applications (NCSA) decided to implement their own version. With that, the first popular browser was born: NCSA Mosaic. One of the programmers on the NCSA team, Marc Andreessen, partnered with Jim Clark to found Mosaic Communications in October 1994. The company was later renamed Netscape, and it shipped Netscape Navigator 1.0 in December 1994. By this point, it was already clear that the World Wide Web was bound to be much more than just an academic curiosity.
|
||||
|
||||
In fact, that same year the first World Wide Web conference was organized in Geneva, Switzerland, which led to the creation of the World Wide Web Consortium (W3C) to help guide the evolution of HTML. Similarly, a parallel HTTP Working Group (HTTP-WG) was established within the IETF to focus on improving the HTTP protocol. Both of these groups continue to be instrumental to the evolution of the Web.
|
||||
|
||||
Finally, to create the perfect storm, CompuServe, AOL, and Prodigy began providing dial-up Internet access to the public within the same 1994–1995 time frame. Riding on this wave of rapid adoption, Netscape made history with a wildly successful IPO on August 9, 1995—the Internet boom had arrived, and everyone wanted a piece of it!
|
||||
```
|
||||
|
||||
The growing list of desired capabilities of the nascent Web and their use cases on the public Web quickly exposed many of the fundamental limitations of HTTP 0.9: we needed a protocol that could serve more than just hypertext documents, provide richer metadata about the request and the response, enable content negotiation, and more. In turn, the nascent community of web developers responded by producing a large number of experimental HTTP server and client implementations through an ad hoc process: implement, deploy, and see if other people adopt it.
|
||||
|
||||
From this period of rapid experimentation, a set of best practices and common patterns began to emerge, and in May 1996 the HTTP Working Group (HTTP-WG) published RFC 1945, which documented the "common usage" of the many HTTP/1.0 implementations found in the wild. Note that this was only an informational RFC: HTTP/1.0 as we know it is not a formal specification or an Internet standard!
|
||||
|
||||
Having said that, an example HTTP/1.0 request should look very familiar:
|
||||
|
||||
```
$> telnet website.org 80
Connected to xxx.xxx.xxx.xxx

GET /rfc/rfc1945.txt HTTP/1.0
User-Agent: CERN-LineMode/2.15 libwww/2.17b3
Accept: */*

HTTP/1.0 200 OK
Content-Type: text/plain
Content-Length: 137582
Expires: Thu, 01 Dec 1997 16:00:00 GMT
Last-Modified: Wed, 1 May 1996 12:45:26 GMT
Server: Apache 0.84

(plain-text response)
(connection closed)
```
|
||||
|
||||
1. Request line with HTTP version number, followed by request headers
|
||||
|
||||
2. Response status, followed by response headers
|
||||
|
||||
|
||||
|
||||
|
||||
The preceding exchange is not an exhaustive list of HTTP/1.0 capabilities, but it does illustrate some of the key protocol changes:
|
||||
|
||||
* Request may consist of multiple newline separated header fields.
|
||||
|
||||
* Response object is prefixed with a response status line.
|
||||
|
||||
* Response object has its own set of newline separated header fields.
|
||||
|
||||
* Response object is not limited to hypertext.
|
||||
|
||||
* The connection between server and client is closed after every request.
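The new framing is simple enough to parse by hand. Here is a rough sketch of a response parser in Python, assuming a well-formed message and ignoring edge cases such as folded header lines; the sample response bytes are invented for illustration:

```python
def parse_response(raw: bytes):
    """Split an HTTP/1.0-style response into (status, headers, body)."""
    head, _, body = raw.partition(b"\r\n\r\n")   # blank line separates headers from body
    lines = head.decode("ascii").split("\r\n")
    version, code, reason = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return (version, int(code), reason), headers, body

raw = (b"HTTP/1.0 200 OK\r\n"
       b"Content-Type: text/plain\r\n"
       b"Content-Length: 12\r\n"
       b"\r\n"
       b"Hello, 1996!")
status, headers, body = parse_response(raw)
print(status, headers["content-type"], body)
```

The body is no longer assumed to be hypertext: the `Content-Type` header tells the client what it received.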
|
||||
|
||||
|
||||
|
||||
|
||||
Both the request and response headers were kept as ASCII encoded, but the response object itself could be of any type: an HTML file, a plain text file, an image, or any other content type. Hence, the "hypertext transfer" part of HTTP became a misnomer not long after its introduction. In reality, HTTP has quickly evolved to become a hypermedia transport, but the original name stuck.
|
||||
|
||||
In addition to media type negotiation, the RFC also documented a number of other commonly implemented capabilities: content encoding, character set support, multi-part types, authorization, caching, proxy behaviors, date formats, and more.
|
||||
|
||||
```
|
||||
Almost every server on the Web today can and will still speak HTTP/1.0. Except that, by now, you should know better! Requiring a new TCP connection per request imposes a significant performance penalty on HTTP/1.0; see [Three-Way Handshake][1], followed by [Slow-Start][2].
|
||||
```
|
||||
|
||||
### §HTTP/1.1: Internet Standard
|
||||
|
||||
The work on turning HTTP into an official IETF Internet standard proceeded in parallel with the documentation effort around HTTP/1.0 and happened over a period of roughly four years: between 1995 and 1999. In fact, the first official HTTP/1.1 standard is defined in RFC 2068, which was officially released in January 1997, roughly six months after the publication of HTTP/1.0. Then, two and a half years later, in June of 1999, a number of improvements and updates were incorporated into the standard and were released as RFC 2616.
|
||||
|
||||
The HTTP/1.1 standard resolved a lot of the protocol ambiguities found in earlier versions and introduced a number of critical performance optimizations: keepalive connections, chunked encoding transfers, byte-range requests, additional caching mechanisms, transfer encodings, and request pipelining.
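Chunked encoding, for example, lets a server stream a response whose total length is not known up front: each chunk is prefixed with its size as an ASCII hexadecimal number, and a zero-length chunk terminates the stream. A minimal decoder sketch, ignoring chunk extensions and trailers:

```python
def decode_chunked(stream: bytes) -> bytes:
    """Decode a Transfer-Encoding: chunked body from a complete byte string."""
    body, pos = b"", 0
    while True:
        eol = stream.index(b"\r\n", pos)
        size = int(stream[pos:eol], 16)   # chunk size is an ASCII hex number
        if size == 0:
            return body                   # zero-length chunk ends the stream
        start = eol + 2
        body += stream[start:start + size]
        pos = start + size + 2            # skip the CRLF that ends each chunk

print(decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"))
```

The `100` markers in the session below are exactly these size prefixes: 0x100, or 256 bytes per chunk.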
|
||||
|
||||
With these capabilities in place, we can now inspect a typical HTTP/1.1 session as performed by any modern HTTP browser and client:
|
||||
|
||||
```
$> telnet website.org 80
Connected to xxx.xxx.xxx.xxx

GET /index.html HTTP/1.1
Host: website.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4)... (snip)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: __qca=P0-800083390... (snip)

HTTP/1.1 200 OK
Server: nginx/1.0.11
Connection: keep-alive
Content-Type: text/html; charset=utf-8
Via: HTTP/1.1 GWA
Date: Wed, 25 Jul 2012 20:23:35 GMT
Expires: Wed, 25 Jul 2012 20:23:35 GMT
Cache-Control: max-age=0, no-cache
Transfer-Encoding: chunked

100
<!doctype html>
(snip)

100
(snip)

0

GET /favicon.ico HTTP/1.1
Host: www.website.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4)... (snip)
Accept: */*
Referer: http://website.org/
Connection: close
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: __qca=P0-800083390... (snip)

HTTP/1.1 200 OK
Server: nginx/1.0.11
Content-Type: image/x-icon
Content-Length: 3638
Connection: close
Last-Modified: Thu, 19 Jul 2012 17:51:44 GMT
Cache-Control: max-age=315360000
Accept-Ranges: bytes
Via: HTTP/1.1 GWA
Date: Sat, 21 Jul 2012 21:35:22 GMT
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Etag: W/PSA-GAu26oXbDi

(icon data)
(connection closed)
```
|
||||
|
||||
1. Request for HTML file, with encoding, charset, and cookie metadata
|
||||
|
||||
2. Chunked response for original HTML request
|
||||
|
||||
3. Number of octets in the chunk expressed as an ASCII hexadecimal number (256 bytes)
|
||||
|
||||
4. End of chunked stream response
|
||||
|
||||
5. Request for an icon file made on same TCP connection
|
||||
|
||||
6. Inform server that the connection will not be reused
|
||||
|
||||
7. Icon response, followed by connection close
|
||||
|
||||
|
||||
|
||||
|
||||
Phew, there is a lot going on in there! The first and most obvious difference is that we have two object requests, one for an HTML page and one for an image, both delivered over a single connection. This is connection keepalive in action, which allows us to reuse the existing TCP connection for multiple requests to the same host and deliver a much faster end-user experience; see [Optimizing for TCP][3].
|
||||
|
||||
To terminate the persistent connection, notice that the second client request sends an explicit `close` token to the server via the `Connection` header. Similarly, the server can notify the client of the intent to close the current TCP connection once the response is transferred. Technically, either side can terminate the TCP connection without such a signal at any point, but clients and servers should provide it whenever possible to enable better connection reuse strategies on both sides.
|
||||
|
||||
```
|
||||
HTTP/1.1 changed the semantics of the HTTP protocol to use connection keepalive by default. Meaning, unless told otherwise (via `Connection: close` header), the server should keep the connection open by default.
|
||||
|
||||
However, this same functionality was also backported to HTTP/1.0 and enabled via the `Connection: Keep-Alive` header. Hence, if you are using HTTP/1.1, technically you don’t need the `Connection: Keep-Alive` header, but many clients choose to provide it nonetheless.
|
||||
```
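The default-keepalive rule can be captured in a few lines. The following is a simplified sketch under the assumption of a single `Connection` token; real implementations must also handle comma-separated token lists and intermediary proxies:

```python
def keep_alive(version: str, headers: dict) -> bool:
    """Decide whether to reuse the connection after a request."""
    token = headers.get("connection", "").lower()
    if version == "HTTP/1.1":
        return token != "close"   # HTTP/1.1 keeps the connection open unless told otherwise
    return token == "keep-alive"  # HTTP/1.0 closes unless Keep-Alive was negotiated

print(keep_alive("HTTP/1.1", {}))
print(keep_alive("HTTP/1.0", {"connection": "Keep-Alive"}))
print(keep_alive("HTTP/1.1", {"connection": "close"}))
```

This asymmetry is exactly why many HTTP/1.1 clients still send the redundant `Connection: Keep-Alive` header: it costs little and keeps older HTTP/1.0 intermediaries from closing the connection.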
|
||||
|
||||
Additionally, the HTTP/1.1 protocol added content, encoding, character set, and even language negotiation, transfer encoding, caching directives, client cookies, plus a dozen other capabilities that can be negotiated on each request.
|
||||
|
||||
We are not going to dwell on the semantics of every HTTP/1.1 feature. This is a subject for a dedicated book, and many great ones have been written already. Instead, the previous example serves as a good illustration of both the quick progress and evolution of HTTP, as well as the intricate and complicated dance of every client-server exchange. There is a lot going on in there!
|
||||
|
||||
```
|
||||
For a good reference on all the inner workings of the HTTP protocol, check out O’Reilly’s HTTP: The Definitive Guide by David Gourley and Brian Totty.
|
||||
```
|
||||
|
||||
### §HTTP/2: Improving Transport Performance
|
||||
|
||||
Since its publication, RFC 2616 has served as a foundation for the unprecedented growth of the Internet: billions of devices of all shapes and sizes, from desktop computers to the tiny web devices in our pockets, speak HTTP every day to deliver news, video, and millions of other web applications we have all come to depend on in our lives.
|
||||
|
||||
What began as a simple, one-line protocol for retrieving hypertext quickly evolved into a generic hypermedia transport, and now a decade later can be used to power just about any use case you can imagine. Both the ubiquity of servers that can speak the protocol and the wide availability of clients to consume it means that many applications are now designed and deployed exclusively on top of HTTP.
|
||||
|
||||
Need a protocol to control your coffee pot? RFC 2324 has you covered with the Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0)—originally an April Fools’ Day joke by IETF, and increasingly anything but a joke in our new hyper-connected world.
|
||||
|
||||
> The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless, protocol that can be used for many tasks beyond its use for hypertext, such as name servers and distributed object management systems, through extension of its request methods, error codes and headers. A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred.
|
||||
>
|
||||
> RFC 2616: HTTP/1.1, June 1999
|
||||
|
||||
The simplicity of the HTTP protocol is what enabled its original adoption and rapid growth. In fact, it is now not unusual to find embedded devices—sensors, actuators, and coffee pots alike—using HTTP as their primary control and data protocols. But under the weight of its own success and as we increasingly continue to migrate our everyday interactions to the Web—social, email, news, and video, and increasingly our entire personal and job workspaces—it has also begun to show signs of stress. Users and web developers alike are now demanding near real-time responsiveness and protocol performance from HTTP/1.1, which it simply cannot meet without some modifications.
|
||||
|
||||
To meet these new challenges, HTTP must continue to evolve, and hence the HTTPbis working group announced a new initiative for HTTP/2 in early 2012:
|
||||
|
||||
> There is emerging implementation experience and interest in a protocol that retains the semantics of HTTP without the legacy of HTTP/1.x message framing and syntax, which have been identified as hampering performance and encouraging misuse of the underlying transport.
|
||||
>
|
||||
> The working group will produce a specification of a new expression of HTTP’s current semantics in ordered, bi-directional streams. As with HTTP/1.x, the primary target transport is TCP, but it should be possible to use other transports.
|
||||
>
|
||||
> HTTP/2 charter, January 2012
|
||||
|
||||
The primary focus of HTTP/2 is on improving transport performance and enabling both lower latency and higher throughput. The major version increment sounds like a big step, which it is and will be as far as performance is concerned, but it is important to note that none of the high-level protocol semantics are affected: all HTTP headers, values, and use cases are the same.
|
||||
|
||||
Any existing website or application can and will be delivered over HTTP/2 without modification: you do not need to modify your application markup to take advantage of HTTP/2. The HTTP servers will have to speak HTTP/2, but that should be a transparent upgrade for the majority of users. The only difference, if the working group meets its goal, should be that our applications are delivered with lower latency and better utilization of the network link!
|
||||
|
||||
Having said that, let’s not get ahead of ourselves. Before we get to the new HTTP/2 protocol features, it is worth taking a step back and examining our existing deployment and performance best practices for HTTP/1.1. The HTTP/2 working group is making fast progress on the new specification, but even if the final standard was already done and ready, we would still have to support older HTTP/1.1 clients for the foreseeable future—realistically, a decade or more.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://hpbn.co/brief-history-of-http/#http-09-the-one-line-protocol
|
||||
|
||||
作者:[Ilya Grigorik][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.igvita.com/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://hpbn.co/building-blocks-of-tcp/#three-way-handshake
|
||||
[2]: https://hpbn.co/building-blocks-of-tcp/#slow-start
|
||||
[3]: https://hpbn.co/building-blocks-of-tcp/#optimizing-for-tcp
|
@ -1,176 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: subject: (Three SSH GUI Tools for Linux)
|
||||
[#]: via: (https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux)
|
||||
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
|
||||
[#]: url: ( )
|
||||
|
||||
Three SSH GUI Tools for Linux
|
||||
======
|
||||
|
||||

|
||||
|
||||
At some point in your career as a Linux administrator, you’re going to use Secure Shell (SSH) to remote into a Linux server or desktop. Chances are, you already have. In some instances, you’ll be SSH’ing into multiple Linux servers at once. In fact, Secure Shell might well be one of the most-used tools in your Linux toolbox. Because of this, you’ll want to make the experience as efficient as possible. For many admins, nothing is as efficient as the command line. However, there are users out there who do prefer a GUI tool, especially when working from a desktop machine to remote into and work on a server.
|
||||
|
||||
If you happen to prefer a good GUI tool, you’ll be happy to know there are a couple of outstanding graphical tools for SSH on Linux. Couple that with a unique terminal window that allows you to remote into multiple machines from the same window, and you have everything you need to work efficiently. Let’s take a look at these three tools and find out if one (or more) of them is perfectly apt to meet your needs.
|
||||
|
||||
I’ll be demonstrating these tools on [Elementary OS][1], but they are all available for most major distributions.
|
||||
|
||||
### PuTTY
|
||||
|
||||
Anyone who’s been around long enough knows about [PuTTY][2]. In fact, PuTTY is the de facto standard tool for connecting, via SSH, to Linux servers from the Windows environment. But PuTTY isn’t just for Windows: from within the standard repositories, PuTTY can also be installed on Linux. PuTTY’s feature list includes:
|
||||
|
||||
* Saved sessions.
* Connect via IP address or hostname.
* Define alternative SSH port.
* Connection type definition.
* Logging.
* Options for keyboard, bell, appearance, connection, and more.
* Local and remote tunnel configuration.
* Proxy support.
* X11 tunneling support.

The PuTTY GUI is mostly a way to save SSH sessions, so it’s easier to manage all of those various Linux servers and desktops you need to constantly remote into and out of. Once you’ve connected from PuTTY to the Linux server, you will have a terminal window in which to work. At this point, you may be asking yourself, why not just work from the terminal window? For some, the convenience of saving sessions does make PuTTY worth using.
|
||||
|
||||
Installing PuTTY on Linux is simple. For example, you could issue the command on a Debian-based distribution:
|
||||
|
||||
```
|
||||
sudo apt-get install -y putty
|
||||
```
|
||||
|
||||
Once installed, you can either run the PuTTY GUI from your desktop menu or issue the command `putty`. In the PuTTY Configuration window (Figure 1), type the hostname or IP address in the HostName (or IP address) field, configure the port (if not the default 22), select SSH as the connection type, and click Open.
|
||||
|
||||
![PuTTY Connection][4]
|
||||
|
||||
Figure 1: The PuTTY Connection Configuration Window.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
Once the connection is made, you’ll then be prompted for the user credentials on the remote server (Figure 2).
|
||||
|
||||
![log in][7]
|
||||
|
||||
Figure 2: Logging into a remote server with PuTTY.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
To save a session (so you don’t always have to type the remote server information), fill out the IP address (or hostname), configure the port and connection type, and then (before you click Open) type a name for the connection in the top text area of the Saved Sessions section, and click Save. This saves the configuration for the session. To connect to a saved session later, select it from the Saved Sessions list, click Load, and then click Open. You will then be prompted for the user credentials on the remote server.
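Saved sessions can also be launched without opening the configuration window at all, since the `putty` binary accepts a `-load` option on the command line. A quick sketch (the session name "web-server" here is just an example, not a value from this article):

```
putty -load "web-server"
```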
|
||||
|
||||
### EasySSH
|
||||
|
||||
Although [EasySSH][8] doesn’t offer the amount of configuration options found in PuTTY, it’s (as the name implies) incredibly easy to use. One of the best features of EasySSH is that it offers a tabbed interface, so you can have multiple SSH connections open and quickly switch between them. Other EasySSH features include:
|
||||
|
||||
* Groups (so you can group tabs for an even more efficient experience).
* Username/password saving.
* Appearance options.
* Local and remote tunnel support.

Installing EasySSH on a Linux desktop is simple, as the app can be installed via Flatpak (which does mean you must have Flatpak installed on your system). Once Flatpak is installed, add EasySSH with the commands:
|
||||
|
||||
```
|
||||
sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
|
||||
|
||||
sudo flatpak install flathub com.github.muriloventuroso.easyssh
|
||||
```
|
||||
|
||||
Run EasySSH with the command:
|
||||
|
||||
```
|
||||
flatpak run com.github.muriloventuroso.easyssh
|
||||
```
|
||||
|
||||
The EasySSH app will open, where you can click the + button in the upper left corner. In the resulting window (Figure 3), configure your SSH connection as required.
|
||||
|
||||
![Adding a connection][10]
|
||||
|
||||
Figure 3: Adding a connection in EasySSH is simple.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
Once you’ve added the connection, it will appear in the left navigation of the main window (Figure 4).
|
||||
|
||||
![EasySSH][12]
|
||||
|
||||
Figure 4: The EasySSH main window.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
To connect to a remote server in EasySSH, select it from the left navigation and then click the Connect button (Figure 5).
|
||||
|
||||
![Connecting][14]
|
||||
|
||||
Figure 5: Connecting to a remote server with EasySSH.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
The one caveat with EasySSH is that you must save the username and password in the connection configuration (otherwise the connection will fail). This means anyone with access to the desktop running EasySSH can remote into your servers without knowing the passwords. Because of this, you must always remember to lock your desktop screen any time you are away (and make sure to use a strong password). The last thing you want is to have a server vulnerable to unwanted logins.
|
||||
|
||||
### Terminator
|
||||
|
||||
Terminator is not actually an SSH GUI. Instead, Terminator functions as a single window that allows you to run multiple terminals (and even groups of terminals) at once. Effectively, you can open Terminator, split the window vertically and horizontally (until you have all the terminals you want), and then connect to all of your remote Linux servers by way of the standard SSH command (Figure 6).
|
||||
|
||||
![Terminator][16]
|
||||
|
||||
Figure 6: Terminator split into three different windows, each connecting to a different Linux server.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
To install Terminator, issue a command like:
|
||||
|
||||
```
sudo apt-get install -y terminator
```
|
||||
|
||||
Once installed, open the tool either from your desktop menu or by issuing the command `terminator`. With the window open, you can right-click inside Terminator and select either Split Horizontally or Split Vertically. Continue splitting the terminal until you have exactly the number of terminals you need, and then start remoting into those servers.
|
||||
The caveat to using Terminator is that it is not a standard SSH GUI tool, in that it won’t save your sessions or give you quick access to those servers. In other words, you will always have to manually log into your remote Linux servers. However, being able to see your remote Secure Shell sessions side by side does make administering multiple remote machines quite a bit easier.
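One way to take some of the sting out of those manual logins is the standard OpenSSH client configuration file, which any terminal (Terminator included) inherits. A minimal sketch of `~/.ssh/config` (the host aliases, addresses, and username below are examples, not values from this article):

```
Host web1
    HostName 192.168.1.20
    User jack

Host db1
    HostName 192.168.1.21
    User jack
    Port 2222
```

With entries like these in place, typing `ssh web1` in any Terminator pane is all that is required.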
|
||||
|
||||
### Few (But Worthwhile) Options
|
||||
|
||||
There aren’t a lot of SSH GUI tools available for Linux. Why? Because most administrators prefer to simply open a terminal window and use the standard command-line tools to remotely access their servers. However, if you have a need for a GUI tool, you have two solid options and one terminal that makes logging into multiple machines slightly easier. Although there are only a few options for those looking for an SSH GUI tool, those that are available are certainly worth your time. Give one of these a try and see for yourself.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://elementary.io/
|
||||
[2]: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
|
||||
[3]: https://www.linux.com/files/images/sshguis1jpg
|
||||
[4]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_1.jpg?itok=DiNTz_wO (PuTTY Connection)
|
||||
[5]: https://www.linux.com/licenses/category/used-permission
|
||||
[6]: https://www.linux.com/files/images/sshguis2jpg
|
||||
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_2.jpg?itok=4ORsJlz3 (log in)
|
||||
[8]: https://github.com/muriloventuroso/easyssh
|
||||
[9]: https://www.linux.com/files/images/sshguis3jpg
|
||||
[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_3.jpg?itok=bHC2zlda (Adding a connection)
|
||||
[11]: https://www.linux.com/files/images/sshguis4jpg
|
||||
[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_4.jpg?itok=hhJzhRIg (EasySSH)
|
||||
[13]: https://www.linux.com/files/images/sshguis5jpg
|
||||
[14]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_5.jpg?itok=piFEFYTQ (Connecting)
|
||||
[15]: https://www.linux.com/files/images/sshguis6jpg
|
||||
[16]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_6.jpg?itok=-kYl6iSE (Terminator)
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,369 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (pityonline)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 useful Vim plugins for developers)
|
||||
[#]: via: (https://opensource.com/article/19/1/vim-plugins-developers)
|
||||
[#]: author: (Ricardo Gerardi https://opensource.com/users/rgerardi)
|
||||
|
||||
5 useful Vim plugins for developers
|
||||
======
|
||||
Expand Vim's capabilities and improve your workflow with these five plugins for writing code.
|
||||

|
||||
|
||||
I have used [Vim][1] as a text editor for over 20 years, but about two years ago I decided to make it my primary text editor. I use Vim to write code, configuration files, blog articles, and pretty much everything I can do in plaintext. Vim has many great features and, once you get used to it, you become very productive.
|
||||
|
||||
I tend to use Vim's robust native capabilities for most of what I do, but there are a number of plugins developed by the open source community that extend Vim's capabilities, improve your workflow, and make you even more productive.
|
||||
|
||||
Following are five plugins that are useful when using Vim to write code in any programming language.
|
||||
|
||||
### 1. Auto Pairs
|
||||
|
||||
The [Auto Pairs][2] plugin helps insert and delete pairs of characters, such as brackets, parentheses, or quotation marks. This is very useful for writing code, since most programming languages use pairs of characters in their syntax—such as parentheses for function calls or quotation marks for string definitions.
|
||||
|
||||
In its most basic functionality, Auto Pairs inserts the corresponding closing character when you type an opening character. For example, if you enter a bracket **[**, Auto Pairs automatically inserts the closing bracket **]**. Conversely, if you use the Backspace key to delete the opening bracket, Auto Pairs deletes the corresponding closing bracket.
|
||||
|
||||
If you have automatic indentation on, Auto Pairs inserts the paired character in the proper indented position when you press Return/Enter, saving you from finding the correct position and typing the required spaces or tabs.
|
||||
|
||||
Consider this Go code block for instance:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
x := true
|
||||
items := []string{"tv", "pc", "tablet"}
|
||||
|
||||
if x {
|
||||
for _, i := range items
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Inserting an opening curly brace **{** after **items** and pressing Return/Enter produces this result:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
x := true
|
||||
items := []string{"tv", "pc", "tablet"}
|
||||
|
||||
if x {
|
||||
for _, i := range items {
|
||||
| (cursor here)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Auto Pairs offers many other options (which you can read about on [GitHub][3]), but even these basic features will save time.
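A couple of those options can be set from your ~/.vimrc. This sketch uses two settings documented in the plugin's README (adjust the values to taste):

```
" Jump to the next existing closing character instead of
" inserting a new one (Fly Mode)
let g:AutoPairsFlyMode = 1

" Remap the shortcut that toggles the plugin on and off
" (the default is <M-p>)
let g:AutoPairsShortcutToggle = '<C-p>'
```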
|
||||
|
||||
### 2. NERD Commenter
|
||||
|
||||
The [NERD Commenter][4] plugin adds code-commenting functions to Vim, similar to the ones found in an integrated development environment (IDE). With this plugin installed, you can select one or several lines of code and change them to comments with the press of a button.
|
||||
|
||||
NERD Commenter integrates with the standard Vim [filetype][5] plugin, so it understands several programming languages and uses the appropriate commenting characters for single or multi-line comments.
|
||||
|
||||
The easiest way to get started is by pressing **Leader+Space** to toggle the current line between commented and uncommented. The standard Vim Leader key is the **\** character.
|
||||
|
||||
In Visual mode, you can select multiple lines and toggle their status at the same time. NERD Commenter also understands counts, so you can provide a count n followed by the command to change n lines together.
|
||||
|
||||
Other useful features are the "Sexy Comment," triggered by **Leader+cs** , which creates a fancy comment block using the multi-line comment character. For example, consider this block of code:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
x := true
|
||||
items := []string{"tv", "pc", "tablet"}
|
||||
|
||||
if x {
|
||||
for _, i := range items {
|
||||
fmt.Println(i)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Selecting all the lines in **function main** and pressing **Leader+cs** results in the following comment block:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
/*
|
||||
* x := true
|
||||
* items := []string{"tv", "pc", "tablet"}
|
||||
*
|
||||
* if x {
|
||||
* for _, i := range items {
|
||||
* fmt.Println(i)
|
||||
* }
|
||||
* }
|
||||
*/
|
||||
}
|
||||
```
|
||||
|
||||
Since all the lines are commented in one block, you can uncomment the entire block by toggling any of the lines of the block with **Leader+Space**.
|
||||
|
||||
NERD Commenter is a must-have for any developer using Vim to write code.
|
||||
|
||||
### 3. Vim Surround
|
||||
|
||||
The [Vim Surround][6] plugin helps you "surround" existing text with pairs of characters (such as parentheses or quotation marks) or tags (such as HTML or XML tags). It's similar to Auto Pairs but, instead of working while you're inserting text, it's more useful when you're editing text.
|
||||
|
||||
For example, if you have the following sentence:
|
||||
|
||||
```
|
||||
"Vim plugins are awesome !"
|
||||
```
|
||||
|
||||
You can remove the quotation marks around the sentence by pressing the combination **ds"** while your cursor is anywhere between the quotation marks:
|
||||
|
||||
```
|
||||
Vim plugins are awesome !
|
||||
```
|
||||
|
||||
You can also change the double quotation marks to single quotation marks with the command **cs"'** :
|
||||
|
||||
```
|
||||
'Vim plugins are awesome !'
|
||||
```
|
||||
|
||||
Or replace them with brackets by pressing **cs'[** :
|
||||
|
||||
```
|
||||
[ Vim plugins are awesome ! ]
|
||||
```
|
||||
|
||||
While it's a great help for text objects, this plugin really shines when working with HTML or XML tags. Consider the following HTML line:
|
||||
|
||||
```
|
||||
<p>Vim plugins are awesome !</p>
|
||||
```
|
||||
|
||||
You can emphasize the word "awesome" by pressing the combination **ysiw <em>** while the cursor is anywhere on that word:
|
||||
|
||||
```
|
||||
<p>Vim plugins are <em>awesome</em> !</p>
|
||||
```
|
||||
|
||||
Notice that the plugin is smart enough to use the proper closing tag **</em>**.
|
||||
|
||||
Vim Surround can also indent text and add tags in their own lines using **ySS**. For example, if you have:
|
||||
|
||||
```
|
||||
<p>Vim plugins are <em>awesome</em> !</p>
|
||||
```
|
||||
|
||||
Add a **div** tag with this combination: **ySS<div class="normal">**, and notice that the paragraph line is indented automatically.
|
||||
|
||||
```
|
||||
<div class="normal">
|
||||
<p>Vim plugins are <em>awesome</em> !</p>
|
||||
</div>
|
||||
```
|
||||
|
||||
Vim Surround has many other options. Give it a try—and consult [GitHub][7] for additional information.
|
||||
|
||||
### 4. Vim Gitgutter
|
||||
|
||||
The [Vim Gitgutter][8] plugin is useful for anyone using Git for version control. It shows the output of **git diff** as symbols in the "gutter"—the sign column where Vim presents additional information, such as line numbers. For example, consider the following as the committed version in Git:
|
||||
|
||||
```
|
||||
1 package main
|
||||
2
|
||||
3 import "fmt"
|
||||
4
|
||||
5 func main() {
|
||||
6 x := true
|
||||
7 items := []string{"tv", "pc", "tablet"}
|
||||
8
|
||||
9 if x {
|
||||
10 for _, i := range items {
|
||||
11 fmt.Println(i)
|
||||
12 }
|
||||
13 }
|
||||
14 }
|
||||
```
|
||||
|
||||
After making some changes, Vim Gitgutter displays the following symbols in the gutter:
|
||||
|
||||
```
|
||||
1 package main
|
||||
2
|
||||
3 import "fmt"
|
||||
4
|
||||
_ 5 func main() {
|
||||
6 items := []string{"tv", "pc", "tablet"}
|
||||
7
|
||||
~ 8 if len(items) > 0 {
|
||||
9 for _, i := range items {
|
||||
10 fmt.Println(i)
|
||||
+ 11 fmt.Println("------")
|
||||
12 }
|
||||
13 }
|
||||
14 }
|
||||
```
|
||||
|
||||
The **_** symbol shows that a line was deleted between lines 5 and 6. The **~** symbol shows that line 8 was modified, and the **+** symbol shows that line 11 was added.
|
||||
|
||||
In addition, Vim Gitgutter allows you to navigate between "hunks"—individual changes made in the file—with **[c** and **]c** , or even stage individual hunks for commit by pressing **Leader+hs**.
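Those hunk commands can be remapped in ~/.vimrc if the defaults clash with your setup. A hedged sketch (the `<Plug>` mapping names may differ slightly between plugin versions):

```
" Turn off the default key mappings and define replacements
let g:gitgutter_map_keys = 0
nmap ]h <Plug>GitGutterNextHunk
nmap [h <Plug>GitGutterPrevHunk
nmap <Leader>hs <Plug>GitGutterStageHunk
```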
|
||||
|
||||
This plugin gives you immediate visual feedback of changes, and it's a great addition to your toolbox if you use Git.
|
||||
|
||||
### 5. Vim Fugitive
|
||||
|
||||
[Vim Fugitive][9] is another great plugin for anyone incorporating Git into the Vim workflow. It's a Git wrapper that allows you to execute Git commands directly from Vim and integrates with Vim's interface. This plugin has many features—check its [GitHub][10] page for more information.
|
||||
|
||||
Here's a basic Git workflow example using Vim Fugitive. Continuing with the changes we made to the Go code block in section 4, you can run **git blame** by typing the command **:Gblame** :
|
||||
|
||||
```
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 1 package main
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 2
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 3 import "fmt"
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 4
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│_ 5 func main() {
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 6 items := []string{"tv", "pc", "tablet"}
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 7
|
||||
00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│~ 8 if len(items) > 0 {
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 9 for _, i := range items {
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 10 fmt.Println(i)
|
||||
00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│+ 11 fmt.Println("------")
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 12 }
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 13 }
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 14 }
|
||||
```
|
||||
|
||||
You can see that lines 8 and 11 have not been committed. Check the repository status by typing **:Gstatus** :
|
||||
|
||||
```
|
||||
1 # On branch master
|
||||
2 # Your branch is up to date with 'origin/master'.
|
||||
3 #
|
||||
4 # Changes not staged for commit:
|
||||
5 # (use "git add <file>..." to update what will be committed)
|
||||
6 # (use "git checkout -- <file>..." to discard changes in working directory)
|
||||
7 #
|
||||
8 # modified: vim-5plugins/examples/test1.go
|
||||
9 #
|
||||
10 no changes added to commit (use "git add" and/or "git commit -a")
|
||||
--------------------------------------------------------------------------------------------------------
|
||||
1 package main
|
||||
2
|
||||
3 import "fmt"
|
||||
4
|
||||
_ 5 func main() {
|
||||
6 items := []string{"tv", "pc", "tablet"}
|
||||
7
|
||||
~ 8 if len(items) > 0 {
|
||||
9 for _, i := range items {
|
||||
10 fmt.Println(i)
|
||||
+ 11 fmt.Println("------")
|
||||
12 }
|
||||
13 }
|
||||
14 }
|
||||
```
|
||||
|
||||
Vim Fugitive opens a split window with the result of **git status**. You can stage a file for commit by pressing the **-** key on the line with the name of the file. You can reset the status by pressing **-** again. The message updates to reflect the new status:
|
||||
|
||||
```
|
||||
1 # On branch master
|
||||
2 # Your branch is up to date with 'origin/master'.
|
||||
3 #
|
||||
4 # Changes to be committed:
|
||||
5 # (use "git reset HEAD <file>..." to unstage)
|
||||
6 #
|
||||
7 # modified: vim-5plugins/examples/test1.go
|
||||
8 #
|
||||
--------------------------------------------------------------------------------------------------------
|
||||
1 package main
|
||||
2
|
||||
3 import "fmt"
|
||||
4
|
||||
_ 5 func main() {
|
||||
6 items := []string{"tv", "pc", "tablet"}
|
||||
7
|
||||
~ 8 if len(items) > 0 {
|
||||
9 for _, i := range items {
|
||||
10 fmt.Println(i)
|
||||
+ 11 fmt.Println("------")
|
||||
12 }
|
||||
13 }
|
||||
14 }
|
||||
```
|
||||
|
||||
Now you can use the command **:Gcommit** to commit the changes. Vim Fugitive opens another split that allows you to enter a commit message:
|
||||
|
||||
```
|
||||
1 vim-5plugins: Updated test1.go example file
|
||||
2 # Please enter the commit message for your changes. Lines starting
|
||||
3 # with '#' will be ignored, and an empty message aborts the commit.
|
||||
4 #
|
||||
5 # On branch master
|
||||
6 # Your branch is up to date with 'origin/master'.
|
||||
7 #
|
||||
8 # Changes to be committed:
|
||||
9 # modified: vim-5plugins/examples/test1.go
|
||||
10 #
|
||||
```
|
||||
|
||||
Save the file with **:wq** to complete the commit:
|
||||
|
||||
```
|
||||
[master c3bf80f] vim-5plugins: Updated test1.go example file
|
||||
1 file changed, 2 insertions(+), 2 deletions(-)
|
||||
Press ENTER or type command to continue
|
||||
```
|
||||
|
||||
You can use **:Gstatus** again to see the result and **:Gpush** to update the remote repository with the new commit.
|
||||
|
||||
```
|
||||
1 # On branch master
|
||||
2 # Your branch is ahead of 'origin/master' by 1 commit.
|
||||
3 # (use "git push" to publish your local commits)
|
||||
4 #
|
||||
5 nothing to commit, working tree clean
|
||||
```
|
||||
|
||||
If you like Vim Fugitive and want to learn more, the GitHub repository has links to screencasts showing additional functionality and workflows. Check it out!
|
||||
|
||||
### What's next?
|
||||
|
||||
These Vim plugins help developers write code in any programming language. There are two other categories of plugins to help developers: code-completion plugins and syntax-checker plugins. They are usually related to specific programming languages, so I will cover them in a follow-up article.
|
||||
|
||||
Do you have another Vim plugin you use when writing code? Please share it in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/vim-plugins-developers
|
||||
|
||||
作者:[Ricardo Gerardi][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[pityonline](https://github.com/pityonline)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rgerardi
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.vim.org/
|
||||
[2]: https://www.vim.org/scripts/script.php?script_id=3599
|
||||
[3]: https://github.com/jiangmiao/auto-pairs
|
||||
[4]: https://github.com/scrooloose/nerdcommenter
|
||||
[5]: http://vim.wikia.com/wiki/Filetype.vim
|
||||
[6]: https://www.vim.org/scripts/script.php?script_id=1697
|
||||
[7]: https://github.com/tpope/vim-surround
|
||||
[8]: https://github.com/airblade/vim-gitgutter
|
||||
[9]: https://www.vim.org/scripts/script.php?script_id=2975
|
||||
[10]: https://github.com/tpope/vim-fugitive
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,64 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Get started with gPodder, an open source podcast client)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-gpodder)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
Get started with gPodder, an open source podcast client
|
||||
======
|
||||
Keep your podcasts synced across your devices with gPodder, the 17th in our series on open source tools that will make you more productive in 2019.
|
||||
|
||||

|
||||
|
||||
There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
|
||||
|
||||
Here's the 17th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
|
||||
|
||||
### gPodder
|
||||
|
||||
I like podcasts. Heck, I like them so much I record three of them (you can find links to them in [my profile][1]). I learn a lot from podcasts and play them in the background when I'm working. But keeping them in sync between multiple desktops and mobile devices can be a bit of a challenge.
|
||||
|
||||
[gPodder][2] is a simple, cross-platform podcast downloader, player, and sync tool. It supports RSS feeds, [FeedBurner][3], [YouTube][4], and [SoundCloud][5], and it also has an open source sync service that you can run if you want. gPodder doesn't do podcast playback; instead, it uses your audio or video player of choice.
|
||||
|
||||

|
||||
|
||||
Installing gPodder is very straightforward. Installers are available for Windows and MacOS, and packages are available for major Linux distributions. If it isn't available in your distribution, you can run it directly from a Git checkout. With the "Add Podcasts via URL" menu option, you can enter a podcast's RSS feed URL or one of the "special" URLs for the other services. gPodder will fetch a list of episodes and present a dialog where you can select which episodes to download or mark old episodes on the list.
|
||||
|
||||

|
||||
|
||||
One of its nicer features is that if a URL is already in your clipboard, gPodder will automatically place it in its URL field, which makes it really easy to add a new podcast to your list. If you already have an OPML file of podcast feeds, you can upload and import it. There is also a discovery option that allows you to search for podcasts on [gPodder.net][6], the free and open source podcast listing site by the people who write and maintain gPodder.
|
||||
|
||||

|
||||
|
||||
A [mygpo][7] server synchronizes podcasts between devices. By default, gPodder uses [gPodder.net][8]'s servers, but you can change this in the configuration files if you want to run your own (be aware that you'll have to modify the configuration file directly). Syncing allows you to keep your lists consistent between desktops and mobile devices. This is very useful if you listen to podcasts on multiple devices (for example, I listen on my work computer, home computer, and mobile phone), as it means no matter where you are, you have the most recent lists of podcasts and episodes without having to set things up again and again.
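For the curious, the sync settings live in gPodder's JSON configuration file (~/.config/gpodder/Settings.json on Linux). A rough sketch of the relevant section (the exact keys, and the server name shown here, are assumptions that may vary by gPodder version):

```
{
  "mygpo": {
    "enabled": true,
    "server": "sync.example.com",
    "username": "kevin",
    "password": "secret"
  }
}
```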
|
||||
|
||||

|
||||
|
||||
Clicking on a podcast episode will bring up the text post associated with it, and clicking "Play" will launch your device's default audio or video player. If you want to use something other than the default, you can change this in gPodder's configuration settings.
|
||||
|
||||
gPodder makes it simple to find, download, and listen to podcasts, synchronize them across devices, and access a lot of other features in an easy-to-use interface.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-gpodder
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/users/ksonney
|
||||
[2]: https://gpodder.github.io/
|
||||
[3]: https://feedburner.google.com/
|
||||
[4]: https://youtube.com
|
||||
[5]: https://soundcloud.com/
|
||||
[6]: http://gpodder.net
|
||||
[7]: https://github.com/gpodder/mygpo
|
||||
[8]: http://gPodder.net
|
@ -0,0 +1,78 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (19 days of productivity in 2019: The fails)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-wish-list)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
19 days of productivity in 2019: The fails
|
||||
======
|
||||
Here are some tools the open source world doesn't do as well as it could.
|
||||

|
||||
|
||||
There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
|
||||
|
||||
Part of being productive is accepting that failure happens. I am a big proponent of [Howard Tayler's][1] Maxim 70: "Failure is not an option—it is mandatory. The option is whether or not to let failure be the last thing you do." And there were many things I wanted to talk about in this series that I failed to find good answers for.
|
||||
|
||||
So, for the final edition of my 19 new (or new-to-you) open source tools to help you be more productive in 2019, I present the tools I wanted but didn't find. I am hopeful that you, the reader, will be able to help me find some good solutions to the items below. If you do, please share them in the comments.
|
||||
|
||||
### Calendaring
|
||||
|
||||

|
||||
|
||||
If there is one thing the open source world is weak on, it is calendaring. I've tried about as many calendar programs as I've tried email programs. There are basically three good options for shared calendaring: [Evolution][2], the [Lightning add-on to Thunderbird][3], or [KOrganizer][4]. All the other applications I've tried (including [Orage][5], [Osmo][6], and almost all of the [Org mode][7] add-ons) seem to reliably support only read-only access to remote calendars. If the shared calendar uses either [Google Calendar][8] or [Microsoft Exchange][9] as the server, the first three are the only easily configured options (and even then, additional add-ons are often required).
|
||||
|
||||
### Linux on the inside
|
||||
|
||||

|
||||
|
||||
I love [Chrome OS][10], with its simplicity and lightweight requirements. I have owned several Chromebooks, including the latest models from Google. I find it to be reasonably distraction-free, lightweight, and easy to use. With the addition of Android apps and a Linux container, it's easy to be productive almost anywhere.
|
||||
|
||||
I'd like to carry that over to some of the older laptops I have hanging around, but unless I do a full compile of Chromium OS, it is hard to find that same experience. The desktop [Android][11] projects like [Bliss OS][12], [Phoenix OS][13], and [Android-x86][14] are getting close, and I'm keeping an eye on them for the future.
|
||||
|
||||
### Help desks
|
||||
|
||||

|
||||
|
||||
Customer service is a big deal for companies big and small. And with the added focus on DevOps these days, it is important to have tools to help bridge the gap. Almost every company I've worked with uses either [Jira][15], [GitHub][16], or [GitLab][17] for code issues, but none of these tools are very good at customer support tickets (without a lot of work). While there are many applications designed around customer support tickets and issues, most (if not all) of them are silos that don't play nice with other systems, again without a lot of work.
|
||||
|
||||
On my wishlist is an open source solution that allows customers, support, and developers to work together without an unwieldy pile of code to glue multiple systems together.
|
||||
|
||||
### Your turn
|
||||
|
||||

|
||||
|
||||
I'm sure there are a lot of options I missed during this series. I try new applications regularly, in the hopes that they will help me be more productive. I encourage everyone to do the same, because when it comes to being productive with open source tools, there is always something new to try. And, if you have favorite open source productivity apps that didn't make it into this series, please make sure to share them in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-wish-list
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.schlockmercenary.com/
|
||||
[2]: https://wiki.gnome.org/Apps/Evolution
|
||||
[3]: https://www.thunderbird.net/en-US/calendar/
|
||||
[4]: https://userbase.kde.org/KOrganizer
|
||||
[5]: https://github.com/xfce-mirror/orage
|
||||
[6]: http://clayo.org/osmo/
|
||||
[7]: https://orgmode.org/
|
||||
[8]: https://calendar.google.com
|
||||
[9]: https://products.office.com/
|
||||
[10]: https://en.wikipedia.org/wiki/Chrome_OS
|
||||
[11]: https://www.android.com/
|
||||
[12]: https://blissroms.com/
|
||||
[13]: http://www.phoenixos.com/
|
||||
[14]: http://www.android-x86.org/
|
||||
[15]: https://www.atlassian.com/software/jira
|
||||
[16]: https://github.com
|
||||
[17]: https://about.gitlab.com/
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (SPEED TEST: x86 vs. ARM for Web Crawling in Python)
|
||||
[#]: via: (https://blog.dxmtechsupport.com.au/speed-test-x86-vs-arm-for-web-crawling-in-python/)
|
||||
[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/)
|
||||
|
||||
SPEED TEST: x86 vs. ARM for Web Crawling in Python
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Can you imagine if your job was to trawl competitor websites and jot prices down by hand, again and again and again? You’d burn your whole office down by lunchtime.
|
||||
|
||||
So, little wonder web crawlers are huge these days. They can keep track of customer sentiment and trending topics, monitor job openings, real estate transactions, UFC results, all sorts of stuff.
|
||||
|
||||
For those of a certain bent, this is fascinating stuff. Which is how I found myself playing around with [Scrapy][2], an open source web crawling framework written in Python.
|
||||
|
||||
Being wary of the potential to do something catastrophic to my computer while poking at things I didn’t understand, I decided to install it not on my main machine but on a Raspberry Pi.
|
||||
|
||||
And wouldn’t you know it? It actually didn’t run too shabby on the little tacker. Maybe this is a good use case for an ARM server?
|
||||
|
||||
Google had no solid answer. The nearest thing I found was [this Drupal hosting drag race][3], which showed an ARM server outperforming a much more expensive x86 based account.
|
||||
|
||||
That was definitely interesting. I mean, isn’t a web server kind of like a crawler in reverse? But with one operating on a LAMP stack and the other on a Python interpreter, it’s hardly the exact same thing.
|
||||
|
||||
So what could I do? Only one thing. Get some VPS accounts and make them race each other.
|
||||
|
||||
### What’s the Deal With ARM Processors?
|
||||
|
||||
ARM is now the most popular CPU architecture in the world.
|
||||
|
||||
But it’s generally seen as something you’d opt for to save money and battery life, rather than a serious workhorse.
|
||||
|
||||
It wasn’t always that way: this CPU was designed in Cambridge, England to power the fiendishly expensive [Acorn Archimedes][4]. This was the most powerful desktop computer in the world, and by a long way too: it was multiple times the speed of the fastest 386.
|
||||
|
||||
Acorn, like Commodore and Atari, somewhat ignorantly believed that the making of a great computer company was in the making of great computers. Bill Gates had a better idea. He got DOS on as many x86 machines – of the most widely varying quality and expense – as he could.
|
||||
|
||||
Having the best user base made you the obvious platform for third party developers to write software for; having all the software support made yours the most useful computer.
|
||||
|
||||
Even Apple nearly bit the dust. All the $$$$ went into building a better x86 chip, so x86 ended up being the architecture developed for serious computing.
|
||||
|
||||
That wasn’t the end for ARM though. Their chips weren’t just fast, they could run well without drawing much power or emitting much heat. That made them a preferred technology in set top boxes, PDAs, digital cameras, MP3 players, and basically anything that either used a battery or where you’d just rather avoid the noise of a large fan.
|
||||
|
||||
So it was that Acorn spun off ARM, which began the idiosyncratic business model that continues to this day: ARM doesn’t actually manufacture any chips; it licenses its intellectual property to others who do.
|
||||
|
||||
Which is more or less how they ended up in so many phones and tablets. When Linux was ported to the architecture, the door opened to other open source technologies, which is how we can run a web crawler on these chips today.
|
||||
|
||||
#### ARM in the Server Room
|
||||
|
||||
Some big names, like [Microsoft][5] and [Cloudflare][6], have placed heavy bets on the British Bulldog for their infrastructure. But for those of us with more modest budgets, the options are fairly sparse.
|
||||
|
||||
In fact, when it comes to cheap and cheerful VPS accounts that you can stick on the credit card for a few bucks a month, for years the only option was [Scaleway][7].
|
||||
|
||||
This changed a few months ago when public cloud heavyweight [AWS][8] launched its own ARM processor: the [AWS Graviton][9].
|
||||
|
||||
I decided to grab one of each, and race them against the most similar Intel offering from the same provider.
|
||||
|
||||
### Looking Under the Hood
|
||||
|
||||
So what are we actually racing here? Let’s jump right in.
|
||||
|
||||
#### Scaleway
|
||||
|
||||
Scaleway positions itself as “designed for developers”. And you know what? I think that’s fair enough: it’s definitely been a good little sandbox for developing and prototyping.
|
||||
|
||||
The dirt simple product offering and clean, easy dashboard guides you from home page to bash shell in minutes. That makes it a strong option for small businesses, freelancers and consultants who just want to get straight into a good VPS at a great price to run some crawls.
|
||||
|
||||
The ARM account we will be using is their [ARM64-2GB][10], which costs 3 euros a month and has 4 Cavium ThunderX cores. This launched in 2014 as the first server-class ARMv8 processor, but is now looking a bit middle-aged, having been superseded by the younger, prettier ThunderX2.
|
||||
|
||||
The x86 account we will be comparing it to is the [1-S][11], which costs a more princely 4 euros a month and has 2 Intel Atom C3995 cores. Intel’s Atom range is a low power single-threaded system on chip design, first built for laptops and then adapted for server use.
|
||||
|
||||
These accounts are otherwise fairly similar: they each have 2 gigabytes of memory, 50 gigabytes of SSD storage and 200 Mbit/s bandwidth. The disk drives possibly differ, but with the crawls we’re going to run here, this won’t come into play: we’re going to be doing everything in memory.
|
||||
|
||||
When I can’t use a package manager I’m familiar with, I become angry and confused, entirely beyond reasoning or consolation, so both of these accounts will use Debian Stretch.
|
||||
|
||||
#### Amazon Web Services
|
||||
|
||||
In the same length of time as it takes you to give Scaleway your credit card details, launch a VPS, add a sudo user and start installing dependencies, you won’t even have gotten as far as registering your AWS account. You’ll still be reading through the product pages trying to figure out what’s going on.
|
||||
|
||||
There’s a serious breadth and depth here aimed at enterprises and others with complicated or specialised needs.
|
||||
|
||||
The AWS Graviton we wanna drag race is part of AWS’s “Elastic Compute Cloud” or EC2 range. I’ll be running it as an on-demand instance, which is the most convenient and expensive way to use EC2. AWS also operates a [spot market][12], where you get the server much cheaper if you can be flexible about when it runs. There’s also a [mid-priced option][13] if you want to run it 24/7.
|
||||
|
||||
Did I mention that AWS is complicated? Anyhoo..
|
||||
|
||||
The two accounts we’re comparing are [a1.medium][14] and [t2.small][15]. They both offer 2GB of RAM. Which begs the question: WTF is a vCPU? Confusingly, it’s a different thing on each account.
|
||||
|
||||
On the a1.medium account, a vCPU is a single core of the new AWS Graviton chip. It was built by Annapurna Labs, an Israeli chip maker bought by Amazon in 2015, and is a single-threaded 64-bit ARMv8 core exclusive to AWS. It has an on-demand price of 0.0255 US dollars per hour.
|
||||
|
||||
Our t2.small account runs on an Intel Xeon – though exactly which Xeon chip it is, I couldn’t really figure out. This has two threads per core – though we’re not really getting the whole core, or even the whole thread.
|
||||
|
||||
Instead we’re getting a “baseline performance of 20%, with the ability to burst above that baseline using CPU credits”. Which makes sense in principle, though it’s completely unclear to me what to actually expect from this. The on-demand price for this account is 0.023 US dollars per hour.
|
||||
|
||||
I couldn’t find Debian in the image library here, so both of these accounts will run Ubuntu 18.04.
|
||||
|
||||
### Beavis and Butthead Do Moz’s Top 500
|
||||
|
||||
To test these VPS accounts, I need a crawler to run – one that will let the CPU stretch its legs a bit. One way to do this would be to just hammer a few websites with as many requests as possible, as fast as possible, but that’s not very polite. What we’ll do instead is a broad crawl of many websites at once.
|
||||
|
||||
So it’s in tribute to my favourite physicist turned filmmaker, Mike Judge, that I wrote beavis.py. This crawls Moz’s Top 500 Websites to a depth of 3 pages to count how many times the words “wood” and “ass” occur anywhere within the HTML source.
|
||||
|
||||
Not all 500 websites will actually get crawled here – some will be excluded by robots.txt and others will require javascript to follow links and so on. But it’s a wide enough crawl to keep the CPU busy.
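The counting itself is the simple part. Here is a minimal sketch of the kind of case-insensitive substring count the scripts at the end of the article do with `response.text.casefold().count()` (the helper name is mine):

```python
def count_terms(html):
    # Case-insensitive substring counts over the raw HTML source.
    # Note that str.count matches substrings: the "ass" inside "class"
    # counts too, which helps explain the eye-watering final tally.
    text = html.casefold()
    return text.count("ass"), text.count("wood")

# "class" contributes one "ass", plus the literal "ASS";
# "Woodwork" contributes one "wood".
print(count_terms('<div class="Woodwork">ASS</div>'))  # → (2, 1)
```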
|
||||
|
||||
Python’s [global interpreter lock][16] means that beavis.py can only make use of a single CPU thread. To test multi-threaded performance, we’re going to have to launch multiple spiders as separate processes.
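A minimal sketch of that idea, stripped of Scrapy (the function and variable names here are hypothetical, not taken from butthead.py): each process gets its own interpreter, and so its own GIL, and reports its results back over a queue.

```python
from multiprocessing import Process, Queue, cpu_count

def crawl_chunk(chunk, results):
    # Stand-in for launching one spider over its own slice of domains;
    # a real worker would run a Scrapy crawl here.
    results.put(sum(len(domain) for domain in chunk))

def split_round_robin(items, n):
    # Deal the domains out across n workers.
    return [items[i::n] for i in range(n)]

if __name__ == "__main__":
    domains = ["example.com", "example.org", "example.net", "example.edu"]
    results = Queue()
    n = min(cpu_count(), len(domains))
    procs = [Process(target=crawl_chunk, args=(chunk, results))
             for chunk in split_round_robin(domains, n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(sum(results.get() for _ in procs))
```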
|
||||
|
||||
This is why I wrote butthead.py. Any true fan of the show knows that, as crude as Butthead was, he was always slightly more sophisticated than Beavis.
|
||||
|
||||
Splitting the crawl into multiple lists of start pages and allowed domains might slightly impact what gets crawled – fewer external links to other websites in the top 500 will get followed. But every crawl will be different anyway, so we will count how many pages are scraped as well as how long they take.
|
||||
|
||||
### Installing Scrapy on an ARM Server
|
||||
|
||||
Installing Scrapy is basically the same on each architecture. You install pip and various other dependencies, then install Scrapy from pip.
|
||||
|
||||
Installing Scrapy from pip to an ARM device does take noticeably longer though. I’m guessing this is because it has to compile the binary parts from source.
|
||||
|
||||
Once Scrapy is installed, I ran it from the shell to check that it’s fetching pages.
|
||||
|
||||
On Scaleway’s ARM account, there seemed to be a hitch with the service_identity module: it was installed but not working. This issue had come up on the Raspberry Pi as well, but not the AWS Graviton.
|
||||
|
||||
Not to worry, this was easily fixed with the following command:
|
||||
|
||||
```
sudo pip3 install service_identity --force --upgrade
```
|
||||
|
||||
Then we were off and racing!
|
||||
|
||||
### Single Threaded Crawls
|
||||
|
||||
The Scrapy docs say to try to [keep your crawls running between 80-90% CPU usage][17]. In practice, it’s hard – at least it is with the script I’ve written. What tends to happen is that the CPU gets very busy early in the crawl, drops a little bit and then rallies again.
|
||||
|
||||
The last part of the crawl, where most of the domains have been finished, can go on for quite a few minutes, which is frustrating, because at that point it feels like more a measure of how big the last website is than anything to do with the processor.
|
||||
|
||||
So please take this for what it is: not a state of the art benchmarking tool, but a short and slightly balding Australian in his underpants running some scripts and watching what happens.
|
||||
|
||||
So let’s get down to brass tacks. We’ll start with the Scaleway crawls.
|
||||
|
||||
| VPS Account | Time | Pages Scraped | Pages/Hour | €/Million Pages |
| ----------- | ---- | ------------- | ---------- | --------------- |
| Scaleway ARM64-2GB | 108m 59.27s | 38,205 | 21,032.623 | 0.28527 |
| Scaleway 1-S | 97m 44.067s | 39,476 | 24,324.648 | 0.33011 |
|
||||
|
||||
I kept an eye on the CPU use of both of these crawls using [top][18]. Both crawls hit 100% CPU use at the beginning, but the ThunderX chip was definitely redlining a lot more. That means these figures understate how much faster the Atom core crawls than the ThunderX.
|
||||
|
||||
While I was watching CPU use in top, I could also see how much RAM was in use – this increased as the crawl continued. The ARM account used 14.7% at the end of the crawl, while the x86 was at 15%.
|
||||
|
||||
Watching the logs of these crawls, I also noticed a lot more pages timing out and going missing when the processor was maxed out. That makes sense – if the CPU’s too busy to respond to everything then something’s gonna go missing.
|
||||
|
||||
That’s not such a big deal when you’re just racing the things to see which is fastest. But in a real-world situation, with business outcomes at stake in the quality of your data, it’s probably worth having a little bit of headroom.
|
||||
|
||||
And what about AWS?
|
||||
|
||||
| VPS Account | Time | Pages Scraped | Pages/Hour | $/Million Pages |
| ----------- | ---- | ------------- | ---------- | --------------- |
| a1.medium | 100m 39.900s | 41,294 | 24,612.725 | 1.03605 |
| t2.small | 78m 53.171s | 41,200 | 31,336.286 | 0.73397 |
|
||||
|
||||
I’ve included these results for sake of comparison with the Scaleway crawls, but these crawls were kind of a bust. Monitoring the CPU use – this time through the AWS dashboard rather than through top – showed that the script wasn’t making good use of the available processing power on either account.
|
||||
|
||||
This was clearest with the a1.medium account – it hardly even got out of bed. It peaked at about 45% near the beginning and then bounced around between 20% and 30% for the rest.
|
||||
|
||||
What’s intriguing to me about this is that the exact same script ran much slower on the ARM processor – and that’s not because it hit a limit of the Graviton’s CPU power. It had oodles of headroom left. Even the Intel Atom core managed to finish, and that was maxing out for some of the crawl. The settings were the same in the code; they were just being handled differently on the different architectures.
|
||||
|
||||
It’s a bit of a black box to me whether that’s something inherent to the processor itself, the way the binaries were compiled, or some interaction between the two. I’m going to speculate that we might have seen the same thing on the Scaleway ARM VPS, if we hadn’t hit the limit of the CPU core’s processing power first.
|
||||
|
||||
It was harder to know how the t2.small account was doing. The crawl sat at about 20%, sometimes going as high as 35%. Was that what it meant by “baseline performance of 20%, with the ability to burst to a higher level”? I had no idea. But I could see on the dashboard I wasn’t burning through any CPU credits.
|
||||
|
||||
Just to make extra sure, I installed [stress][19] and ran it for a few minutes; sure enough, this thing could do 100% if you pushed it.
|
||||
|
||||
Clearly, I was going to need to crank the settings up on both these processors to make them sweat a bit, so I set CONCURRENT_REQUESTS to 5000 and REACTOR_THREADPOOL_MAXSIZE to 120 and ran some more crawls.
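Those were the only two values changed from the spider settings shown in the scripts at the end of the article. As a sketch (the dict merge here is illustrative, not how Scrapy itself applies settings):

```python
# base_settings mirrors the relevant entries of beavis.py's custom_settings;
# the overrides are the two values cranked up in the text.
base_settings = {
    'CONCURRENT_REQUESTS': 1500,
    'REACTOR_THREADPOOL_MAXSIZE': 60,
    'DOWNLOAD_TIMEOUT': 30,
}

cranked = {**base_settings,
           'CONCURRENT_REQUESTS': 5000,
           'REACTOR_THREADPOOL_MAXSIZE': 120}

print(cranked['CONCURRENT_REQUESTS'],
      cranked['REACTOR_THREADPOOL_MAXSIZE'])  # → 5000 120
```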
|
||||
|
||||
| VPS Account | Time | Pages Scraped | Pages/Hour | $/Million Pages |
| ----------- | ---- | ------------- | ---------- | --------------- |
| a1.medium | 46m 13.619s | 40,283 | 52,285.047 | 0.48771 |
| t2.small | 41m 7.619s | 36,241 | 52,871.857 | 0.43501 |
| t2.small (no CPU credits) | 73m 8.133s | 34,298 | 28,137.889 | 0.81740 |
|
||||
|
||||
The a1 instance hit 100% usage about 5 minutes into the crawl, before dropping back to 80% use for another 20 minutes, climbing up to 96% again and then dropping down again as it was wrapping things up. That was probably about as well-tuned as I was going to get it.
|
||||
|
||||
The t2 instance hit 50% early in the crawl and stayed there until it was nearly done. With 2 threads per core, 50% CPU use is one thread maxed out.
|
||||
|
||||
Here we see both accounts produce similar speeds. But the Xeon thread was redlining for most of the crawl, and the Graviton was not. I’m going to chalk this up as a slight win for the Graviton.
|
||||
|
||||
But what about once you’ve burnt through all your CPU credits? That’s probably the fairer comparison – to only use them as you earn them. I wanted to test that as well. So I ran stress until all the CPU credits were exhausted and ran the crawl again.
|
||||
|
||||
With no credits in the bank, the CPU usage maxed out at 27% and stayed there. So many pages ended up going missing that it actually performed worse than when on the lower settings.
|
||||
|
||||
### Multi Threaded Crawls
|
||||
|
||||
Dividing our crawl up between multiple spiders in separate processes offers a few more options to make use of the available cores.
|
||||
|
||||
I first tried dividing everything up between 10 processes and launching them all at once. This turned out to be slower than just dividing them up into 1 process per core.
|
||||
|
||||
I got the best result by combining these methods – dividing the crawl up into 10 processes and then launching 1 process per core at the start and then the rest as these crawls began to wind down.
|
||||
|
||||
To make this even better, you could try to minimise the problem of the last lingering crawler by making sure the longest crawls start first. I actually attempted to do this.
|
||||
|
||||
Figuring that the number of links on the home page might be a rough proxy for how large the crawl would be, I built a second spider to count them and then sort them in descending order of number of outgoing links. This preprocessing worked well and added a little over a minute.
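A sketch of that ordering step (the helper name is mine, not from the preprocessing spider):

```python
def order_by_links(link_counts):
    # link_counts maps domain -> number of outgoing links on its home page,
    # used as a rough proxy for how big the crawl of that domain will be.
    # Sort descending so the biggest crawls start first.
    return sorted(link_counts, key=link_counts.get, reverse=True)

print(order_by_links({"a.example": 12, "b.example": 80, "c.example": 3}))
# → ['b.example', 'a.example', 'c.example']
```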
|
||||
|
||||
It turned out, though, that this blew the crawling time out beyond two hours! Putting all the most link-heavy websites together in the same process wasn’t a great idea after all.
|
||||
|
||||
You might effectively deal with this by tweaking the number of domains per process as well – or by shuffling the list after it’s ordered. That’s a bit much for Beavis and Butthead though.
|
||||
|
||||
So I went back to my earlier method that had worked somewhat well:
|
||||
|
||||
| VPS Account | Time | Pages Scraped | Pages/Hour | €/Million Pages |
| ----------- | ---- | ------------- | ---------- | --------------- |
| Scaleway ARM64-2GB | 62m 10.078s | 36,158 | 34,897.072 | 0.17193 |
| Scaleway 1-S | 60m 56.902s | 36,725 | 36,153.553 | 0.22128 |
|
||||
|
||||
After all that, using more cores did speed up the crawl. But it’s hardly a matter of just halving or quartering the time taken.
|
||||
|
||||
I’m certain that a more experienced coder could better optimise this to take advantage of all the cores. But, as far as “out of the box” Scrapy performance goes, it seems to be a lot easier to speed up a crawl by using faster threads rather than by throwing more cores at it.
|
||||
|
||||
As it is, the Atom has scraped slightly more pages in slightly less time. On a value for money metric, you could possibly say that the ThunderX is ahead. Either way, there’s not a lot of difference here.
|
||||
|
||||
### Everything You Always Wanted to Know About Ass and Wood (But Were Afraid to Ask)
|
||||
|
||||
After scraping 38,205 pages, our crawler found 24,170,435 mentions of ass and 54,368 mentions of wood.
|
||||
|
||||
![][20]
|
||||
|
||||
Considered on its own, this is a respectable amount of wood.
|
||||
|
||||
But when you set it against the sheer quantity of ass we’re dealing with here, the wood looks minuscule.
|
||||
|
||||
### The Verdict
|
||||
|
||||
From what’s visible to me at the moment, it looks like the CPU architecture you use is actually less important than how old the processor is. The AWS Graviton from 2018 was the winner here in single-threaded performance.
|
||||
|
||||
You could of course argue that the Xeon still wins, core for core. But then you’re not really going dollar for dollar anymore, or even thread for thread.
|
||||
|
||||
The Atom from 2017, on the other hand, comfortably bested the ThunderX from 2014. Though, on the value for money metric, the ThunderX might be the clear winner. Then again, if you can run your crawls on Amazon’s spot market, the Graviton is still ahead.
|
||||
|
||||
All in all, I think this shows that, yes, you can crawl the web with an ARM device, and it can compete on both performance and price.
|
||||
|
||||
Whether the difference is significant enough for you to turn what you’re doing upside down is a whole other question of course. Certainly, if you’re already on the AWS cloud – and your code is portable enough – then it might be worthwhile testing out their a1 instances.
|
||||
|
||||
Hopefully we will see more ARM options on the public cloud in the near future.
|
||||
|
||||
### The Scripts
|
||||
|
||||
This is my first real go at doing anything in either Python or Scrapy. So this might not be great code to learn from. Some of what I’ve done here – such as using global variables – is definitely a bit kludgey.
|
||||
|
||||
Still, I want to be transparent about my methods, so here are my scripts.
|
||||
|
||||
To run them, you’ll need Scrapy installed and you will need the CSV file of [Moz’s top 500 domains][21]. To run butthead.py you will also need [psutil][22].
|
||||
|
||||
##### beavis.py
|
||||
|
||||
```
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess

# Totals, filled in by the item pipeline when the spider closes.
ass = 0
wood = 0
totalpages = 0

def getdomains():
    # Pull the quoted domain column out of Moz's top 500 CSV.
    moz500file = open('top500.domains.05.18.csv')

    domains = []
    moz500csv = moz500file.readlines()

    del moz500csv[0]  # drop the header row

    for csvline in moz500csv:
        leftquote = csvline.find('"')
        rightquote = leftquote + csvline[leftquote + 1:].find('"')
        domains.append(csvline[leftquote + 1:rightquote])

    return domains

def getstartpages(domains):
    startpages = []

    for domain in domains:
        startpages.append('http://' + domain)

    return startpages

class AssWoodItem(scrapy.Item):
    ass = scrapy.Field()
    wood = scrapy.Field()
    url = scrapy.Field()

class AssWoodPipeline(object):
    def __init__(self):
        self.asswoodstats = []

    def process_item(self, item, spider):
        self.asswoodstats.append((item.get('url'), item.get('ass'), item.get('wood')))

    def close_spider(self, spider):
        asstally, woodtally = 0, 0

        for asswoodcount in self.asswoodstats:
            asstally += asswoodcount[1]
            woodtally += asswoodcount[2]

        global ass, wood, totalpages
        ass = asstally
        wood = woodtally
        totalpages = len(self.asswoodstats)

class BeavisSpider(CrawlSpider):
    name = "Beavis"
    allowed_domains = getdomains()
    start_urls = getstartpages(allowed_domains)
    #start_urls = [ 'http://medium.com' ]
    custom_settings = {
        'DEPTH_LIMIT': 3,
        'DOWNLOAD_DELAY': 3,
        'CONCURRENT_REQUESTS': 1500,
        'REACTOR_THREADPOOL_MAXSIZE': 60,
        'ITEM_PIPELINES': { '__main__.AssWoodPipeline': 10 },
        'LOG_LEVEL': 'INFO',
        'RETRY_ENABLED': False,
        'DOWNLOAD_TIMEOUT': 30,
        'COOKIES_ENABLED': False,
        'AJAXCRAWL_ENABLED': True
    }

    rules = ( Rule(LinkExtractor(), callback='parse_asswood'), )

    def parse_asswood(self, response):
        # Only count on text responses; binary responses have no .text.
        if isinstance(response, scrapy.http.TextResponse):
            item = AssWoodItem()
            item['ass'] = response.text.casefold().count('ass')
            item['wood'] = response.text.casefold().count('wood')
            item['url'] = response.url
            yield item


if __name__ == '__main__':

    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    })

    process.crawl(BeavisSpider)
    process.start()

    print('Uhh, that was, like, ' + str(totalpages) + ' pages crawled.')
    print('Uh huhuhuhuh. It said ass ' + str(ass) + ' times.')
    print('Uh huhuhuhuh. It said wood ' + str(wood) + ' times.')
```
|
||||
|
||||
##### butthead.py
|
||||
|
||||
```
import scrapy, time, psutil
from scrapy.spiders import CrawlSpider, Rule, Spider
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess
from multiprocessing import Process, Queue, cpu_count

# Per-process totals; each child process reports its own via the queue.
ass = 0
wood = 0
totalpages = 0
linkcounttuples = []

def getdomains():
    # Pull the quoted domain column out of Moz's top 500 CSV.
    moz500file = open('top500.domains.05.18.csv')

    domains = []
    moz500csv = moz500file.readlines()

    del moz500csv[0]  # drop the header row

    for csvline in moz500csv:
        leftquote = csvline.find('"')
        rightquote = leftquote + csvline[leftquote + 1:].find('"')
        domains.append(csvline[leftquote + 1:rightquote])

    return domains

def getstartpages(domains):
    startpages = []

    for domain in domains:
        startpages.append('http://' + domain)

    return startpages

class AssWoodItem(scrapy.Item):
    ass = scrapy.Field()
    wood = scrapy.Field()
    url = scrapy.Field()

class AssWoodPipeline(object):
    def __init__(self):
        self.asswoodstats = []

    def process_item(self, item, spider):
        self.asswoodstats.append((item.get('url'), item.get('ass'), item.get('wood')))

    def close_spider(self, spider):
        asstally, woodtally = 0, 0

        for asswoodcount in self.asswoodstats:
            asstally += asswoodcount[1]
            woodtally += asswoodcount[2]

        global ass, wood, totalpages
        ass = asstally
        wood = woodtally
        totalpages = len(self.asswoodstats)


class ButtheadSpider(CrawlSpider):
    name = "Butthead"
    custom_settings = {
        'DEPTH_LIMIT': 3,
        'DOWNLOAD_DELAY': 3,
        'CONCURRENT_REQUESTS': 250,
        'REACTOR_THREADPOOL_MAXSIZE': 30,
        'ITEM_PIPELINES': { '__main__.AssWoodPipeline': 10 },
        'LOG_LEVEL': 'INFO',
        'RETRY_ENABLED': False,
        'DOWNLOAD_TIMEOUT': 30,
        'COOKIES_ENABLED': False,
        'AJAXCRAWL_ENABLED': True
    }

    rules = ( Rule(LinkExtractor(), callback='parse_asswood'), )

    def parse_asswood(self, response):
        if isinstance(response, scrapy.http.TextResponse):
            item = AssWoodItem()
            item['ass'] = response.text.casefold().count('ass')
            item['wood'] = response.text.casefold().count('wood')
            item['url'] = response.url
            yield item

def startButthead(domainslist, urlslist, asswoodqueue):
    # Runs in a child process: crawl one slice of the top 500,
    # then push this process's totals back to the parent.
    crawlprocess = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    })

    crawlprocess.crawl(ButtheadSpider, allowed_domains = domainslist, start_urls = urlslist)
    crawlprocess.start()
    asswoodqueue.put( (ass, wood, totalpages) )


if __name__ == '__main__':
    asswoodqueue = Queue()
    domains = getdomains()
    startpages = getstartpages(domains)
    processlist = []
    cores = cpu_count()

    # Split the top 500 into 10 chunks of 50 domains, one spider per chunk.
    for i in range(10):
        domainsublist = domains[i * 50:(i + 1) * 50]
        pagesublist = startpages[i * 50:(i + 1) * 50]
        p = Process(target = startButthead, args = (domainsublist, pagesublist, asswoodqueue))
        processlist.append(p)

    # Start one crawl per core first...
    for i in range(cores):
        processlist[i].start()

    time.sleep(180)

    i = cores

    # ...then feed in the rest as CPU use drops while crawls wind down.
    while i != 10:
        time.sleep(60)
        if psutil.cpu_percent() < 66.7:
            processlist[i].start()
            i += 1

    for i in range(10):
        processlist[i].join()

    for i in range(10):
        asswoodtuple = asswoodqueue.get()
        ass += asswoodtuple[0]
        wood += asswoodtuple[1]
        totalpages += asswoodtuple[2]

    print('Uhh, that was, like, ' + str(totalpages) + ' pages crawled.')
    print('Uh huhuhuhuh. It said ass ' + str(ass) + ' times.')
    print('Uh huhuhuhuh. It said wood ' + str(wood) + ' times.')
```
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://blog.dxmtechsupport.com.au/speed-test-x86-vs-arm-for-web-crawling-in-python/

作者:[James Mawson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://blog.dxmtechsupport.com.au/author/james-mawson/
[b]: https://github.com/lujun9972
[1]: https://blog.dxmtechsupport.com.au/wp-content/uploads/2019/02/quadbike-1024x683.jpg
[2]: https://scrapy.org/
[3]: https://www.info2007.net/blog/2018/review-scaleway-arm-based-cloud-server.html
[4]: https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/
[5]: https://www.computerworld.com/article/3178544/microsoft-windows/microsoft-and-arm-look-to-topple-intel-in-servers.html
[6]: https://www.datacenterknowledge.com/design/cloudflare-bets-arm-servers-it-expands-its-data-center-network
[7]: https://www.scaleway.com/
[8]: https://aws.amazon.com/
[9]: https://www.theregister.co.uk/2018/11/27/amazon_aws_graviton_specs/
[10]: https://www.scaleway.com/virtual-cloud-servers/#anchor_arm
[11]: https://www.scaleway.com/virtual-cloud-servers/#anchor_starter
[12]: https://aws.amazon.com/ec2/spot/pricing/
[13]: https://aws.amazon.com/ec2/pricing/reserved-instances/
[14]: https://aws.amazon.com/ec2/instance-types/a1/
[15]: https://aws.amazon.com/ec2/instance-types/t2/
[16]: https://wiki.python.org/moin/GlobalInterpreterLock
[17]: https://docs.scrapy.org/en/latest/topics/broad-crawls.html
[18]: https://linux.die.net/man/1/top
[19]: https://linux.die.net/man/1/stress
[20]: https://blog.dxmtechsupport.com.au/wp-content/uploads/2019/02/Screenshot-from-2019-02-16-17-01-08.png
[21]: https://moz.com/top500
[22]: https://pypi.org/project/psutil/
@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 tools for viewing files at the command line)
[#]: via: (https://opensource.com/article/19/2/view-files-command-line)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

3 tools for viewing files at the command line
======
Take a look at less, Antiword, and odt2txt, three utilities for viewing files in the terminal.

I always say you don't need to use the command line to use Linux effectively—I know many Linux users who never crack open a terminal window and are quite happy. However, even though I don't consider myself a techie, I spend about 20% of my computing time at the command line, manipulating files, processing text, and using utilities.

One thing I often do in a terminal window is view files, whether text or word processor files. Sometimes it's just easier to use a command line utility than to fire up a text editor or a word processor.

Here are three of the utilities I use to view files at the command line.

### less

The beauty of [less][1] is that it's easy to use and it breaks the files you're viewing down into discrete chunks (or pages), which makes them easier to read. You use it to view text files at the command line, such as a README, an HTML file, a LaTeX file, or anything else in plaintext. I took a look at less in a [previous article][2].

To use less, just type:

```
less file_name
```

Scroll down through the file by pressing the spacebar or PgDn key on your keyboard. You can move up through a file by pressing the PgUp key. To stop viewing the file, press the Q key on your keyboard.

### Antiword

[Antiword][3] is a great little utility that converts Word documents to plaintext. If you want, you can also convert them to [PostScript][4] or [PDF][5]. For this article, let's just stick with the conversion to text.

Antiword can read and convert files created with versions of Word from 2.0 to 2003. It doesn't read DOCX files—if you try, Antiword displays an error message that what you're trying to read is a ZIP file. That's technically correct, but it's still frustrating.

To view a Word document using Antiword, type the following command:

```
antiword file_name.doc
```

Antiword converts the document to text and displays it in the terminal window. Unfortunately, it doesn't break the document into pages in the terminal. You can, though, redirect Antiword's output to a utility like less or [more][6] to paginate it. Do that by typing the following command:

```
antiword file_name.doc | less
```

If you're new to the command line, the | is called a pipe. That's what does the redirection.

### odt2txt

Being a good open source citizen, you'll want to use as many open formats as possible. For your word processing needs, you might deal with [ODT][7] files (used by such word processors as LibreOffice Writer and AbiWord) instead of Word files. Even if you don't, you might run into ODT files. And they're easy to view at the command line, even if you don't have Writer or AbiWord installed on your computer.

How? With a little utility called [odt2txt][8]. As you've probably guessed, odt2txt converts an ODT file to plaintext. To use it, run the command:

```
odt2txt file_name.odt
```

Like Antiword, odt2txt converts the document to text and displays it in the terminal window. And, like Antiword, it doesn't page the document. Once again, though, you can pipe the output from odt2txt to a utility like less or more using the following command:

```
odt2txt file_name.odt | more
```

Do you have a favorite utility for viewing files at the command line? Feel free to share it with the community by leaving a comment.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/2/view-files-command-line

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://www.gnu.org/software/less/
[2]: https://opensource.com/article/18/4/using-less-view-text-files-command-line
[3]: http://www.winfield.demon.nl/
[4]: http://en.wikipedia.org/wiki/PostScript
[5]: http://en.wikipedia.org/wiki/Portable_Document_Format
[6]: https://opensource.com/article/19/1/more-text-files-linux
[7]: http://en.wikipedia.org/wiki/OpenDocument
[8]: https://github.com/dstosberg/odt2txt
@ -0,0 +1,131 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Good Open Source Speech Recognition/Speech-to-Text Systems)
[#]: via: (https://fosspost.org/lists/open-source-speech-recognition-speech-to-text)
[#]: author: (Simon James https://fosspost.org/author/simonjames)

5 Good Open Source Speech Recognition/Speech-to-Text Systems
======

A speech-to-text (STT) system is just what its name implies: a way of transforming spoken words into text files that can be used later for any purpose.

Speech-to-text technology is extremely useful. It can be used for a lot of applications, such as automating transcription, writing books/texts using only your voice, and enabling complicated analyses of the generated text files, among many other things.

In the past, speech-to-text technology was dominated by proprietary software and libraries; open source alternatives either didn't exist or came with extreme limitations and no community around them. This is changing: today there are a lot of open source speech-to-text tools and libraries that you can use right now.

Here we list 5 of them.

### Open Source Speech Recognition Libraries

#### Project DeepSpeech

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 15 open source speech recognition][1]

This project is made by Mozilla, the organization behind the Firefox browser. It's a 100% free and open source speech-to-text library that also applies machine learning, using the TensorFlow framework, to fulfill its mission.

In other words, you can use it to build training models yourself to enhance the underlying speech-to-text technology and get better results, or even to bring it to other languages if you want. You can also easily integrate it into your other TensorFlow machine learning projects. Sadly, the project currently supports only English by default.

It's also available in many languages, such as Python (3.6), which lets you have it working in seconds:

```
pip3 install deepspeech
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
```

You can also install it using npm:

```
npm install deepspeech
```

For more information, refer to the [project's homepage][2].

#### Kaldi

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 17 open source speech recognition][3]

Kaldi is open source speech recognition software written in C++ and released under the Apache license. It works on Windows, macOS, and Linux. Its development started back in 2009.

Kaldi's main advantage over some other speech recognition software is that it's extensible and modular: the community provides tons of third-party modules that you can use for your tasks. Kaldi also supports deep neural networks and offers [excellent documentation on its website][4].

While the code is mainly written in C++, it's "wrapped" by Bash and Python scripts. So if you are looking just for basic speech-to-text conversion, you'll find it easy to accomplish via either Python or Bash.

[Project's homepage][5].

#### Julius

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 19 open source speech recognition][6]

Julius is probably one of the oldest speech recognition programs ever: its development started in 1991 at Kyoto University, and its ownership was transferred to an independent project team in 2005.

Julius's main features include the ability to perform real-time STT processing, low memory usage (less than 64 MB for 20,000 words), the ability to produce N-best/word-graph output, the ability to work as a server unit, and a lot more. The software was mainly built for academic and research purposes. It is written in C and works on Linux, Windows, macOS, and even Android (on smartphones).

It currently supports only English and Japanese. The software is probably available to install easily from your Linux distribution's repository; just search for the julius package in your package manager. The latest version was [released][7] around a month and a half ago.

[Project's homepage][8].

#### Wav2Letter++

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 21 open source speech recognition][9]

If you are looking for something modern, then this one is for you. Wav2Letter++ is open source speech recognition software that was released by Facebook's AI Research Team just 2 months ago. The code is released under the BSD license.

Facebook [describes][10] its library as "the fastest state-of-the-art speech recognition system available". The concepts on which this tool is built make it optimized for performance by default; Facebook's also-new machine learning library [FlashLight][11] serves as the underlying core of Wav2Letter++.

Wav2Letter++ requires you to first build a training model for the language you want in order to train the algorithms on it. No pre-built support for any language (including English) is available; it's just a machine-learning-driven tool for converting speech to text. It is written in C++, hence the name (Wav2Letter++).

[Project's homepage][12].

#### DeepSpeech2

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 23 open source speech recognition][13]

Researchers at the Chinese giant Baidu are also working on their own speech-to-text engine, called DeepSpeech2. It's an end-to-end open source engine that uses the PaddlePaddle deep learning framework to convert both English and Mandarin Chinese speech into text. The code is released under the BSD license.

The engine can be trained on any model and for any language you want. The models are not released with the code; you'll have to build them yourself, just like with the other software. DeepSpeech2's source code is written in Python, so it should be easy for you to get familiar with it if that's a language you use.

[Project's homepage][14].

### Conclusion

The speech recognition category is still dominated mainly by proprietary giants like Google and IBM (which provide their own closed-source commercial services for this), but the open source alternatives are promising. These 5 open source speech recognition engines should get you going in building your application; all of them are still under heavy development. In a few years, we expect open source to become the norm for these technologies, just as it has in other industries.

If you have any other recommendations for this list, or comments in general, we'd love to hear them below!

--------------------------------------------------------------------------------

via: https://fosspost.org/lists/open-source-speech-recognition-speech-to-text

作者:[Simon James][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fosspost.org/author/simonjames
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/hero_speech-machine-learning2.png?resize=820%2C280&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 16 open source speech recognition)
[2]: https://github.com/mozilla/DeepSpeech
[3]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Screenshot-at-2019-02-19-1134.png?resize=591%2C138&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 18 open source speech recognition)
[4]: http://kaldi-asr.org/doc/index.html
[5]: http://kaldi-asr.org
[6]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/mic_web.png?resize=385%2C100&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 20 open source speech recognition)
[7]: https://github.com/julius-speech/julius/releases
[8]: https://github.com/julius-speech/julius
[9]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/fully_convolutional_ASR.png?resize=850%2C177&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 22 open source speech recognition)
[10]: https://code.fb.com/ai-research/wav2letter/
[11]: https://github.com/facebookresearch/flashlight
[12]: https://github.com/facebookresearch/wav2letter
[13]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/ds2.png?resize=850%2C313&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 24 open source speech recognition)
[14]: https://github.com/PaddlePaddle/DeepSpeech
227
sources/tech/20190219 Logical - in Bash.md
Normal file
@ -0,0 +1,227 @@
[#]: collector: (lujun9972)
[#]: translator: (zero-mk)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Logical & in Bash)
[#]: via: (https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)

Logical & in Bash
======

One would think you could dispatch `&` in two articles. Turns out you can't. While [the first article dealt with using `&` at the end of commands to push them into the background][1] and then diverged into explaining process management, the second article saw [`&` being used as a way to refer to file descriptors][2], which led us to seeing how, combined with `<` and `>`, you can route inputs and outputs from and to different places.

This means we haven't even touched on `&` as an AND operator, so let's do that now.

### & is a Bitwise Operator

If you are at all familiar with binary operations, you will have heard of AND and OR. These are bitwise operations that operate on individual bits of a binary number. In Bash, you use `&` as the AND operator and `|` as the OR operator:

**AND**

```
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
```

**OR**

```
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1
```

You can test this by ANDing any two numbers and outputting the result with `echo`:

```
$ echo $(( 2 & 3 )) # 00000010 AND 00000011 = 00000010
2
$ echo $(( 120 & 97 )) # 01111000 AND 01100001 = 01100000
96
```

The same goes for OR (`|`):

```
$ echo $(( 2 | 3 )) # 00000010 OR 00000011 = 00000011
3
$ echo $(( 120 | 97 )) # 01111000 OR 01100001 = 01111001
121
```

Four things about this:

  1. You use `(( ... ))` to tell Bash that what goes between the double brackets is some sort of arithmetic or logical operation. `(( 2 + 2 ))`, `(( 5 % 2 ))` (`%` being the [modulo][3] operator) and `(( (5 % 2) + 1 ))` (equals 2) will all work.
  2. [Like with variables][4], `$` extracts the value so you can use it.
  3. For once, spaces don't matter: `((2+3))` will work the same as `(( 2+3 ))` and `(( 2 + 3 ))`.
  4. Bash only operates with integers. Trying to do something like `(( 5 / 2 ))` will give you "2", and trying to do something like `(( 2.5 & 7 ))` will result in an error. Then again, using anything but integers in a bitwise operation (which is what we are talking about now) is generally something you wouldn't do anyway.
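The integer-only behavior in point 4 is easy to verify with a quick, throwaway script (a minimal sketch; the numbers are arbitrary):

```shell
#!/bin/sh
# $(( ... )) arithmetic is integer-only: division truncates toward zero,
# and % gives the remainder.
echo $(( 5 / 2 ))         # truncated division: prints 2, not 2.5
echo $(( 5 % 2 ))         # remainder: prints 1
echo $(( (5 % 2) + 1 ))   # prints 2
```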

**TIP:** If you want to check what your decimal number would look like in binary, you can use _bc_, the command-line calculator that comes preinstalled with most Linux distros. For example:

```
bc <<< "obase=2; 97"
```

will convert `97` to binary (the _o_ in `obase` stands for _output_), and ...

```
bc <<< "ibase=2; 11001011"
```

will convert `11001011` to decimal (the _i_ in `ibase` stands for _input_).

### && is a Logical Operator

Although it uses the same logic principles as its bitwise cousin, Bash's `&&` operator can only render two results: 1 ("true") and 0 ("false"). For Bash, any number not 0 is "true" and anything that equals 0 is "false." What is also false is anything that is not a number:

```
$ echo $(( 4 && 5 )) # Both non-zero numbers, both true = true
1
$ echo $(( 0 && 5 )) # One zero number, one is false = false
0
$ echo $(( b && 5 )) # One of them is not a number, one is false = false
0
```

The OR counterpart for `&&` is `||` and works exactly as you would expect.

All of this is simple enough... until it comes to a command's exit status.

### && is a Logical Operator for Command Exit Status

[As we have seen in previous articles][2], as a command runs, it outputs error messages. But, more importantly for today's discussion, it also outputs a number when it ends. This number is called an _exit code_, and if it is 0, it means the command did not encounter any problem during its execution. If it is any other number, it means something, somewhere, went wrong, even if the command completed.

So 0 is good, any other number is bad, and, in the context of exit codes, 0/good means "true" and everything else means "false." Yes, this is **the exact contrary of what you saw in the logical operations above**, but what are you gonna do? Different contexts, different rules. The usefulness of this will become apparent soon enough.

Moving on.

Exit codes are stored _temporarily_ in the [special variable][5] `?` -- yes, I know: another confusing choice. Be that as it may, [remember from our article about variables][4] that you read the value in a variable using the `$` symbol. So, if you want to know if a command has run without a hitch, you have to read `?` as soon as the command finishes and before running anything else.

Try it with:

```
$ find /etc -iname "*.service"
find: '/etc/audisp/plugins.d': Permission denied
/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service
/etc/systemd/system/dbus-org.freedesktop.ModemManager1.service

[etcetera]
```

[As you saw in the previous article][2], running `find` over _/etc_ as a regular user will normally throw some errors when it tries to read subdirectories for which you do not have access rights.

So, if you execute...

```
echo $?
```

... right after `find`, it will print a `1`, indicating that there were some errors.

(Notice that if you were to run `echo $?` a second time in a row, you'd get a `0`. This is because `$?` would then contain the exit code of `echo $?` itself, which, supposedly, will have executed correctly. So the first lesson when using `$?` is: **use `$?` straight away** or store it somewhere safe -- like in another variable -- or you will lose it.)
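A minimal sketch of that first lesson — the path below is made up and deliberately nonexistent; the point is saving `$?` into another variable straight away, before anything overwrites it:

```shell
#!/bin/sh
# Try to list a file that does not exist; ls will fail.
ls /no/such/file 2>/dev/null

status=$?   # capture the exit code immediately, before any other command runs

echo "ls exited with code $status"
if [ "$status" -ne 0 ]; then
    echo "something went wrong, but we still know the original code"
fi
```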

One immediate use of `?` is to fold it into a list of chained commands and bork the whole thing if anything fails as Bash runs through it. For example, you may be familiar with the process of building and compiling the source code of an application. You can run the steps one after another by hand like this:

```
$ configure
.
.
.
$ make
.
.
.
$ make install
.
.
.
```

You can also put all three on one line...

```
$ configure; make; make install
```

... and hope for the best.

The disadvantage of this is that if, say, `configure` fails, Bash will still try and run `make` and `make install`, even if there is nothing to make or, indeed, install.

The smarter way of doing it is like this:

```
$ configure && make && make install
```

This takes the exit code from each command and uses it as an operand in a chained `&&` operation.

But, and here's the kicker, Bash knows the whole thing is going to fail if `configure` returns a non-zero result. If that happens, it doesn't have to run `make` to check its exit code, since the result is going to be false no matter what. So, it forgoes `make` and just passes a non-zero result onto the next step of the operation. And, as `configure && make` delivers false, Bash doesn't have to run `make install` either. This means that, in a long chain of commands, you can join them with `&&`, and, as soon as one fails, you can save time as the rest of the commands get canceled immediately.

You can do something similar with `||`, the OR logical operator, and make Bash continue processing chained commands if only one of a pair completes.
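For instance — a hedged sketch, not from the original article, with a deliberately nonexistent file path — the right-hand side of `||` only runs when the left-hand side fails:

```shell
#!/bin/sh
# || runs its right-hand side only when the left-hand side fails.
false || echo "left side failed, so this prints"
true  || echo "this never prints"

# A common pattern: try to read a file, fall back to a default.
cat /no/such/config 2>/dev/null || echo "using built-in defaults"
```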

In view of all this (along with the stuff we covered earlier), you should now have a clearer idea of what the command line we set at the beginning of [this article does][1]:

```
mkdir test_dir 2>/dev/null || touch backup/dir/images.txt && find . -iname "*jpg" > backup/dir/images.txt &
```

So, assuming you are running the above from a directory for which you have read and write privileges, what does it do and how does it do it? How does it avoid unseemly and potentially execution-breaking errors? Next week, apart from giving you the solution, we'll be dealing with brackets: curly, curvy and straight. Don't miss it!

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash

作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
[2]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
[3]: https://en.wikipedia.org/wiki/Modulo_operation
[4]: https://www.linux.com/blog/learn/2018/12/bash-variables-environmental-and-otherwise
[5]: https://www.gnu.org/software/bash/manual/html_node/Special-Parameters.html
@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: (mySoul8012)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Book Review: Fundamentals of Linux)
[#]: via: (https://itsfoss.com/fundamentals-of-linux-book-review)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Book Review: Fundamentals of Linux
======
There are many great books that cover the basics of Linux and how it works. Today we will review one such book: [Fundamentals of Linux][1] by Oliver Pelz, published by [PacktPub][2].

[Oliver Pelz][3] has more than ten years of experience as a software developer and system administrator, and holds a degree in bioinformatics.

### What is the book "Fundamentals of Linux" about?

![Fundamental of Linux books][4]
As you can guess from the title, the goal of Fundamentals of Linux is to give the reader a solid foundation in the Linux command line. The book is a little over two hundred pages long, so it focuses on the tasks and problems users encounter every day. It is written for readers who want to become Linux administrators.

The first chapter starts with an overview of virtualization. The author walks the reader through creating a [CentOS][5] instance in [VirtualBox][6], cloning instances, and using snapshots. You will also learn how to connect to a virtual machine over SSH.

The second chapter covers the basics of the Linux command line, including shell globbing patterns, shell expansion, working with file names that contain spaces or special characters, getting help from a command's manual pages, using `sed` and `awk`, and navigating the Linux file system.

The third chapter takes a deeper look at the Linux file system. You will learn how to link files in Linux and how to search for them. You will also get an overview of users, groups, and file permissions. Since the chapter focuses on interacting with files, it also covers how to read text files from the command line and how to use the vim editor.

Chapter four focuses on using the command line and covers important commands such as `cat`, `sort`, `awk`, `tee`, `tar`, `rsync`, `nmap`, `htop`, and more. You will learn how these commands run as processes and how to combine them, and you will get an introduction to Bash shell scripting.

Chapter five, the final chapter, covers Linux networking and other advanced commands and concepts. The author discusses how Linux handles networking and gives examples using multiple virtual machines. It also covers how to install new programs and how to set up a firewall.

### My thoughts on the book

Fundamentals of Linux may seem short, but it covers quite a lot of information—everything you need to get going with the command line.

One thing to keep in mind about this book: it focuses purely on the command line, with no tutorials on using any graphical user interface. That is partly because there are so many different desktop environments for Linux, and so many similar-but-different distributions, that it would be hard to write a book covering all the variations. It is also partly because the book is aimed at prospective Linux administrators.

I was a little surprised to see the author using [CentOS][7] to teach Linux. I had expected him to use a more common distribution, such as Ubuntu, Debian, or Fedora. The reason is that CentOS is a distribution designed for servers that changes very little over time, which makes it a very solid base for learning the fundamentals of Linux.

I have been using Linux for about five years myself. I have spent most of that time on desktop Linux, only occasionally dipping into the command line. I performed many of the operations in this book with a mouse; now I also know how to do the same things from the terminal. It won't change the way I get my tasks done, but it does help me understand what is happening behind the scenes.

If you are brand new to Linux, or planning to make the switch, I would not recommend this book as your starting point. That may sound a bit absolute, but if you have already spent some time with Linux, or can quickly pick up technical jargon, this book may well be for you.

If you think this book fits your learning needs, you can get it from the link below.

We will try to review more Linux books in the coming months, so stay tuned.

What is your favorite introductory book about Linux? Let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News, or [Reddit][8].

--------------------------------------------------------------------------------

via: https://itsfoss.com/fundamentals-of-linux-book-review

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://www.packtpub.com/networking-and-servers/fundamentals-linux
[2]: https://www.packtpub.com/
[3]: http://www.oliverpelz.de/index.html
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/fundamentals-of-linux-book-review.jpeg?resize=800%2C450&ssl=1
[5]: https://centos.org/
[6]: https://www.virtualbox.org/
[7]: https://www.centos.org/
[8]: http://reddit.com/r/linuxusersgroup
@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with Org mode without Emacs)
[#]: via: (https://opensource.com/article/19/1/productivity-tool-org-mode)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))

Get started with Org mode without Emacs
======
No, you don't need Emacs to use Org. This is the 16th in my series of open source tools that will make you more productive in 2019.

There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and a "look forward" attitude are all expressions of this impulse. And software recommendations usually favor closed source and proprietary software. It doesn't have to be that way.

This is the 16th of my 19 new tools to improve your productivity in 2019.

### Org (without Emacs)

[Org mode][1] (or just Org) is not remotely new, but there are still many people who have never used it. They would happily give it a try to see how Org can improve their productivity. But the biggest obstacle is the perception that Org is tied to Emacs, and that you can't have one without the other. Not so! Once you understand the basics, Org can be used with a variety of other tools and editors.

Org, at its heart, is a structured text file. It has headers, subheaders, and keywords that other tools can parse into schedules and to-do lists. Org files can be edited with any plain-text editor (e.g., [Vim][2], [Atom][3], or [Visual Studio Code][4]), and many editors have plugins that help you create and manage Org files.

A basic Org file looks something like this:

```
* Task List
** TODO Write Article for Day 16 - Org w/out emacs
DEADLINE: <2019-01-25 12:00>
*** DONE Write sample org snippet for article
- Include at least one TODO and one DONE item
- Show notes
- Show SCHEDULED and DEADLINE
*** TODO Take Screenshots
** Dentist Appointment
SCHEDULED: <2019-01-31 13:30-14:30>
```

Org is an outline format that uses asterisks (`*`) to indicate an item's level. Any item that begins with TODO (yes, in all caps) is a to-do item. Work marked DONE is completed. SCHEDULED and DEADLINE indicate dates and times relevant to the item. If there's no time in either field, the item is considered an all-day event.
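Because an Org file is just structured plain text, even ordinary shell tools can query it. A minimal sketch (the file name `tasks.org` is a hypothetical example) that pulls out every headline still marked TODO:

```shell
#!/bin/sh
# Create a small Org file like the example above, then list
# every headline still marked TODO.
cat > tasks.org <<'EOF'
* Task List
** TODO Write Article for Day 16 - Org w/out emacs
*** DONE Write sample org snippet for article
*** TODO Take Screenshots
EOF

grep -E '^\*+ TODO' tasks.org   # prints the two TODO headlines
```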
|
||||
|
||||
使用正确的插件,你喜欢的文本编辑器就能成为一个高效而有条理的强大工具。例如,[vim-orgmode][5] 插件提供了创建 Org 文件、语法高亮的功能,以及各种用来生成跨文件日程和汇总待办事项列表的关键命令。
|
||||
|
||||

|
||||
|
||||
Atom 的 [Organized][6] 插件会在屏幕右边添加一个侧边栏,用来显示 Org 文件中的日程和待办事项。默认情况下,它从配置项中设置的路径里读取多个 Org 文件。Todo 侧边栏允许你通过点击未完成事项来将其标记为已完成,它会自动更新源 Org 文件。
|
||||
|
||||

|
||||
|
||||
还有一大堆 Org 工具可以帮助你保持高效。借助 Python、Perl、PHP、NodeJS 等语言的库,你可以开发自己的脚本和工具。当然,还少不了 [Emacs][7],它的核心功能就包括对 Org 的支持。
|
||||
|
||||

|
||||
|
||||
Org mode 是跟踪需要完成的工作和时间的最好工具之一。而且,与传闻相反,它无需 Emacs,任何一个文本编辑器都行。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-org-mode
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://orgmode.org/
|
||||
[2]: https://www.vim.org/
|
||||
[3]: https://atom.io/
|
||||
[4]: https://code.visualstudio.com/
|
||||
[5]: https://github.com/jceb/vim-orgmode
|
||||
[6]: https://atom.io/packages/organized
|
||||
[7]: https://www.gnu.org/software/emacs/
|
371
translated/tech/20190110 5 useful Vim plugins for developers.md
Normal file
@ -0,0 +1,371 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (pityonline)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 useful Vim plugins for developers)
|
||||
[#]: via: (https://opensource.com/article/19/1/vim-plugins-developers)
|
||||
[#]: author: (Ricardo Gerardi https://opensource.com/users/rgerardi)
|
||||
|
||||
5 个好用的 Vim 插件
|
||||
======
|
||||
|
||||
通过这 5 个插件扩展 Vim 功能来提升你的编码效率。
|
||||
|
||||

|
||||
|
||||
我用 Vim 已经超过 20 年了,两年前我决定把它作为我的首选文本编辑器。我用 Vim 编写代码、配置文件、博客文章,以及几乎任何可以用纯文本表达的东西。Vim 有很多超级棒的功能,一旦你适应了它,工作会变得非常高效。
|
||||
|
||||
在日常编辑工作中,我更倾向于使用 Vim 稳定的原生功能,但开源社区为 Vim 开发了大量可以提升工作效率的插件。
|
||||
|
||||
以下列举 5 个非常好用的可以用于编写任意编程语言的插件。
|
||||
|
||||
### 1. Auto Pairs
|
||||
|
||||
[Auto Pairs][2] 插件可以帮助你插入和删除成对的字符,如花括号、圆括号或引号。这在编写代码时非常有用,因为很多编程语言都有成对标记的语法,比如圆括号用于函数调用,引号用于字符串定义。
|
||||
|
||||
Auto Pairs 最基本的功能是在你输入一个左括号时会自动补全对应的另一半括号。比如,你输入了一个 `[`,它会自动帮你补充另一半 `]`。相反,如果你用退格键删除开头的一半括号,Auto Pairs 会删除另一半。
|
||||
|
||||
如果你设置了自动缩进,当你按下回车键时 Auto Pairs 会在恰当的缩进位置补全另一半括号,这比你找到放置另一半的位置并选择一个正确的括号要省劲多了。
|
||||
|
||||
例如下面这段代码:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
x := true
|
||||
items := []string{"tv", "pc", "tablet"}
|
||||
|
||||
if x {
|
||||
for _, i := range items
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
在 `items` 后面输入一个左花括号并按下回车,会产生下面的结果:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
x := true
|
||||
items := []string{"tv", "pc", "tablet"}
|
||||
|
||||
if x {
|
||||
for _, i := range items {
|
||||
| (cursor here)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Auto Pairs 提供了大量其它选项(你可以在 [GitHub][3] 上找到),但最基本的功能已经很让人省时间了。
|
||||
|
||||
### 2. NERD Commenter
|
||||
|
||||
[NERD Commenter][4] 插件增加了方便注释的功能,类似于 <ruby>IDE<rt>integrated development environment</rt></ruby> 中的注释功能。有了这个插件,你可以一键注释单行或多行代码。
|
||||
|
||||
NERD Commenter 使用了标准的 Vim [filetype][5],所以它能理解一些编程语言并使用合适的方式来注释代码。
|
||||
|
||||
最易上手的方法是按 `Leader+Space` 组合键来开关当前行的注释。Vim 默认的 Leader 键是 `\`。
|
||||
|
||||
在<ruby>可视化模式<rt>Visual mode</rt></ruby>中,你可以选择多行一并注释。NERD Commenter 也可以按计数注释,所以你可以加个数量 n 来注释 n 行。
|
||||
|
||||
还有个有用的特性 Sexy Comment 可以用 `Leader+cs` 来触发,它的块注释风格更漂亮一些。例如下面这段代码:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
x := true
|
||||
items := []string{"tv", "pc", "tablet"}
|
||||
|
||||
if x {
|
||||
for _, i := range items {
|
||||
fmt.Println(i)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
选择 `main` 函数中的所有行然后按下 `Leader+cs` 会出来以下注释效果:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
/*
|
||||
* x := true
|
||||
* items := []string{"tv", "pc", "tablet"}
|
||||
*
|
||||
* if x {
|
||||
* for _, i := range items {
|
||||
* fmt.Println(i)
|
||||
* }
|
||||
* }
|
||||
*/
|
||||
}
|
||||
```
|
||||
|
||||
因为这些行都是在一个块中注释的,你可以用 `Leader+Space` 组合键一次去掉这里所有的注释。
|
||||
|
||||
NERD Commenter 是任何使用 Vim 写代码的开发者都必装的插件。
|
||||
|
||||
### 3. VIM Surround
|
||||
|
||||
[Vim Surround][6] 插件可以帮你给现有文本包裹上成对的符号(如括号或引号)或标签(如 HTML 或 XML 标签)。它和 Auto Pairs 有点类似,但在编辑已有文本时更有用。
|
||||
|
||||
比如你有以下一个句子:
|
||||
|
||||
```
|
||||
"Vim plugins are awesome !"
|
||||
```
|
||||
|
||||
当你的光标处于句中任何位置时,你可以用 `ds"` 组合键删除句子两端的双引号。
|
||||
|
||||
```
|
||||
Vim plugins are awesome !
|
||||
```
|
||||
|
||||
你也可以用 `cs"'` 把两端的双引号换成单引号:
|
||||
|
||||
```
|
||||
'Vim plugins are awesome !'
|
||||
```
|
||||
|
||||
或者用 `cs'[` 替换成中括号:
|
||||
|
||||
```
|
||||
[ Vim plugins are awesome ! ]
|
||||
```
|
||||
|
||||
它对编辑 HTML 或 XML 文本中的<ruby>标签<rt>tag</rt></ruby>尤其好用。假如你有以下一行 HTML 代码:
|
||||
|
||||
```
|
||||
<p>Vim plugins are awesome !</p>
|
||||
```
|
||||
|
||||
当光标在 awesome 这个单词的任何位置时,你可以按 `ysiw <em>` 直接给它加上着重标签:
|
||||
|
||||
```
|
||||
<p>Vim plugins are <em>awesome</em> !</p>
|
||||
```
|
||||
|
||||
注意它聪明地加上了 `</em>` 闭合标签。
|
||||
|
||||
Vim Surround 还可以用 `ySS` 包裹并缩进整行文本。比如你有以下文本:
|
||||
|
||||
```
|
||||
<p>Vim plugins are <em>awesome</em> !</p>
|
||||
```
|
||||
|
||||
你可以用 `ySS <div class="normal">` 加上 `div` 标签,注意生成的段落是自动缩进的。
|
||||
|
||||
```
|
||||
<div class="normal">
|
||||
<p>Vim plugins are <em>awesome</em> !</p>
|
||||
</div>
|
||||
```
|
||||
|
||||
Vim Surround 有很多其它选项,你可以参照 [GitHub][7] 上的说明尝试它们。
|
||||
|
||||
### 4. Vim Gitgutter
|
||||
|
||||
[Vim Gitgutter][8] 插件对使用 Git 作为版本控制工具的人来说非常有用。它会在 Vim 显示行号的列旁展示 `git diff` 的差异标记。假设你有如下已提交过的代码:
|
||||
|
||||
```
|
||||
1 package main
|
||||
2
|
||||
3 import "fmt"
|
||||
4
|
||||
5 func main() {
|
||||
6 x := true
|
||||
7 items := []string{"tv", "pc", "tablet"}
|
||||
8
|
||||
9 if x {
|
||||
10 for _, i := range items {
|
||||
11 fmt.Println(i)
|
||||
12 }
|
||||
13 }
|
||||
14 }
|
||||
```
|
||||
|
||||
当你做出一些修改后,Vim Gitgutter 会显示如下标记:
|
||||
|
||||
```
|
||||
1 package main
|
||||
2
|
||||
3 import "fmt"
|
||||
4
|
||||
_ 5 func main() {
|
||||
6 items := []string{"tv", "pc", "tablet"}
|
||||
7
|
||||
~ 8 if len(items) > 0 {
|
||||
9 for _, i := range items {
|
||||
10 fmt.Println(i)
|
||||
+ 11 fmt.Println("------")
|
||||
12 }
|
||||
13 }
|
||||
14 }
|
||||
```
|
||||
|
||||
`_` 标记表示在第 5 行和第 6 行之间删除了一行,`~` 表示第 8 行有修改,`+` 表示新增了第 11 行。
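这类行标记本质上来自对文件两个版本的逐行比较。下面是一个示意性的 Python 草稿(与 Vim Gitgutter 的实际实现无关),用标准库 difflib 计算类似的 `+`、`~`、`_` 标记:

```python
import difflib

# 示意:对比新旧两个版本的行列表,得出类似 gitgutter 的行标记
# '+' 表示新增行,'~' 表示修改行,'_' 表示此行之前有删除(均为本示例的约定)
def gutter_signs(old, new):
    sm = difflib.SequenceMatcher(None, old, new)
    signs = {}  # 新文件行号(从 1 开始)-> 标记
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "replace":
            for j in range(j1, j2):
                signs[j + 1] = "~"
        elif tag == "insert":
            for j in range(j1, j2):
                signs[j + 1] = "+"
        elif tag == "delete":
            # 删除发生在新文件第 j1 行之后,标记下一行
            signs[j1 + 1] = signs.get(j1 + 1, "") or "_"
    return signs

old = ["func main() {", "    x := true", "    if x {"]
new = ["func main() {", "    if len(items) > 0 {", "    println(1)"]
print(gutter_signs(old, new))
```

实际插件直接读取 `git diff` 的输出,但标记的含义与这里的约定是一致的。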
|
||||
|
||||
另外,Vim Gitgutter 允许你用 `[c` 和 `]c` 在多个变更块之间跳转,甚至可以用 `Leader+hs` 来暂存某个变更块。
|
||||
|
||||
这个插件提供了对变更的即时视觉反馈,如果你用 Git 的话,有了它简直是如虎添翼。
|
||||
|
||||
### 5. VIM Fugitive
|
||||
|
||||
[Vim Fugitive][9] 是另一个超棒的将 Git 工作流集成到 Vim 中的插件。它对 Git 做了一些封装,可以让你在 Vim 里直接执行 Git 命令并将结果集成在 Vim 界面里。这个插件有超多的特性,更多信息请访问它的 [GitHub][10] 项目页面。
|
||||
|
||||
这里有一个使用 Vim Fugitive 的基础 Git 工作流示例。设想我们已经对下面的 Go 代码做出修改,你可以用 `:Gblame` 调用 `git blame` 来查看每行最后的提交信息:
|
||||
|
||||
```
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 1 package main
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 2
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 3 import "fmt"
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 4
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│_ 5 func main() {
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 6 items := []string{"tv", "pc", "tablet"}
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 7
|
||||
00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│~ 8 if len(items) > 0 {
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 9 for _, i := range items {
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 10 fmt.Println(i)
|
||||
00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│+ 11 fmt.Println("------")
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 12 }
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 13 }
|
||||
e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 14 }
|
||||
```
|
||||
|
||||
可以看到第 8 行和第 11 行显示还未提交。用 `:Gstatus` 命令检查仓库当前的状态:
|
||||
|
||||
```
|
||||
1 # On branch master
|
||||
2 # Your branch is up to date with 'origin/master'.
|
||||
3 #
|
||||
4 # Changes not staged for commit:
|
||||
5 # (use "git add <file>..." to update what will be committed)
|
||||
6 # (use "git checkout -- <file>..." to discard changes in working directory)
|
||||
7 #
|
||||
8 # modified: vim-5plugins/examples/test1.go
|
||||
9 #
|
||||
10 no changes added to commit (use "git add" and/or "git commit -a")
|
||||
--------------------------------------------------------------------------------------------------------
|
||||
1 package main
|
||||
2
|
||||
3 import "fmt"
|
||||
4
|
||||
_ 5 func main() {
|
||||
6 items := []string{"tv", "pc", "tablet"}
|
||||
7
|
||||
~ 8 if len(items) > 0 {
|
||||
9 for _, i := range items {
|
||||
10 fmt.Println(i)
|
||||
+ 11 fmt.Println("------")
|
||||
12 }
|
||||
13 }
|
||||
14 }
|
||||
```
|
||||
|
||||
Vim Fugitive 在分割的窗口里显示 `git status` 的输出结果。你可以在某文件名所在的行用 `-` 键暂存这个文件,再按一次 `-` 可以取消暂存。这个信息会随着你的操作自动更新:
|
||||
|
||||
```
|
||||
1 # On branch master
|
||||
2 # Your branch is up to date with 'origin/master'.
|
||||
3 #
|
||||
4 # Changes to be committed:
|
||||
5 # (use "git reset HEAD <file>..." to unstage)
|
||||
6 #
|
||||
7 # modified: vim-5plugins/examples/test1.go
|
||||
8 #
|
||||
--------------------------------------------------------------------------------------------------------
|
||||
1 package main
|
||||
2
|
||||
3 import "fmt"
|
||||
4
|
||||
_ 5 func main() {
|
||||
6 items := []string{"tv", "pc", "tablet"}
|
||||
7
|
||||
~ 8 if len(items) > 0 {
|
||||
9 for _, i := range items {
|
||||
10 fmt.Println(i)
|
||||
+ 11 fmt.Println("------")
|
||||
12 }
|
||||
13 }
|
||||
14 }
|
||||
```
|
||||
|
||||
现在你可以用 `:Gcommit` 来提交修改了。Vim Fugitive 会打开另一个分割窗口让你输入提交信息:
|
||||
|
||||
```
|
||||
1 vim-5plugins: Updated test1.go example file
|
||||
2 # Please enter the commit message for your changes. Lines starting
|
||||
3 # with '#' will be ignored, and an empty message aborts the commit.
|
||||
4 #
|
||||
5 # On branch master
|
||||
6 # Your branch is up to date with 'origin/master'.
|
||||
7 #
|
||||
8 # Changes to be committed:
|
||||
9 # modified: vim-5plugins/examples/test1.go
|
||||
10 #
|
||||
```
|
||||
|
||||
按 `:wq` 保存文件完成提交:
|
||||
|
||||
```
|
||||
[master c3bf80f] vim-5plugins: Updated test1.go example file
|
||||
1 file changed, 2 insertions(+), 2 deletions(-)
|
||||
Press ENTER or type command to continue
|
||||
```
|
||||
|
||||
然后你可以再用 `:Gstatus` 检查结果并用 `:Gpush` 把新的提交推送到远程。
|
||||
|
||||
```
|
||||
1 # On branch master
|
||||
2 # Your branch is ahead of 'origin/master' by 1 commit.
|
||||
3 # (use "git push" to publish your local commits)
|
||||
4 #
|
||||
5 nothing to commit, working tree clean
|
||||
```
|
||||
|
||||
Vim Fugitive 的 GitHub 项目主页有很多屏幕录像展示了它的更多功能和工作流,如果你喜欢它并想多学一些,快去看看吧。
|
||||
|
||||
### 接下来?
|
||||
|
||||
这些 Vim 插件都是程序开发者的神器!还有其它几类开发者常用的插件:自动补全插件和语法检查插件。它们大都和具体的编程语言相关,以后我会在其他文章中介绍它们。
|
||||
|
||||
你在写代码时是否用到一些其它 Vim 插件?请在评论区留言分享。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/vim-plugins-developers
|
||||
|
||||
作者:[Ricardo Gerardi][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[pityonline](https://github.com/pityonline)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rgerardi
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.vim.org/
|
||||
[2]: https://www.vim.org/scripts/script.php?script_id=3599
|
||||
[3]: https://github.com/jiangmiao/auto-pairs
|
||||
[4]: https://github.com/scrooloose/nerdcommenter
|
||||
[5]: http://vim.wikia.com/wiki/Filetype.vim
|
||||
[6]: https://www.vim.org/scripts/script.php?script_id=1697
|
||||
[7]: https://github.com/tpope/vim-surround
|
||||
[8]: https://github.com/airblade/vim-gitgutter
|
||||
[9]: https://www.vim.org/scripts/script.php?script_id=2975
|
||||
[10]: https://github.com/tpope/vim-fugitive
|
@ -0,0 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Get started with gPodder, an open source podcast client)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-gpodder)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
开始使用 gPodder,一个开源播客客户端
|
||||
======
|
||||
使用 gPodder 将你的播客同步到你的设备上,gPodder 是我们开源工具系列中的第 17 个工具,它将在 2019 年提高你的工作效率。
|
||||
|
||||

|
||||
|
||||
每年年初,似乎总有一股疯狂的冲动,想方设法提高工作效率。新年决心、想让一年有个好开端的愿望,当然还有“辞旧迎新”的态度,都助长了这种冲动。而通常的一轮推荐都严重偏向闭源和专有软件,其实不必如此。
|
||||
|
||||
这是我挑选出来帮助你在 2019 年提高效率的 19 个新的(或者对你而言是新的)开源工具中的第 17 个。
|
||||
|
||||
### gPodder
|
||||
|
||||
我喜欢播客。说真的,我非常喜欢它们,甚至自己录了三个(你可以在[我的个人资料][1]中找到它们的链接)。我从播客中学到了很多东西,工作时也会把它们当作背景音播放。但是,在多台桌面和移动设备之间保持同步可能会带来一些挑战。
|
||||
|
||||
[gPodder][2] 是一个简单的跨平台播客下载器、播放器和同步工具。它支持 RSS feed、[FeedBurner][3]、[YouTube][4] 和 [SoundCloud][5],它还有一个开源同步服务,你可以根据需要运行它。gPodder 不直接播放播客。相反, 它使用你选择的音频或视频播放器。
|
||||
|
||||

|
||||
|
||||
安装 gPodder 非常简单。Windows 和 MacOS 有安装程序,主要的 Linux 发行版也提供了软件包。如果你的发行版没有收录它,你可以直接从 Git 下载运行。通过 “Add Podcasts via URL” 菜单,你可以输入播客的 RSS 源 URL 或其他服务的“特殊” URL。gPodder 会获取节目列表并显示一个对话框,你可以在其中选择要下载的节目,或在列表中标记旧节目。
|
||||
|
||||

|
||||
|
||||
它还有一个更好的功能:如果 URL 已经在你的剪贴板中,gPodder 会自动将它填入 URL 输入框,这样你就可以很容易地把新的播客添加到列表中。如果你已有播客 feed 的 OPML 文件,可以上传并导入它。还有一个发现选项,让你可以搜索 [gPodder.net][6] 上的播客,这是由编写和维护 gPodder 的人员提供的自由开源的播客列表网站。
|
||||
|
||||

|
||||
|
||||
[mygpo][7] 服务器在设备之间同步播客。gPodder 默认使用 [gPodder.net][8] 的服务器,但是如果你想要运行自己的服务器,那么可以在配置文件中更改它(请注意,你需要直接修改配置文件)。同步能让你在桌面和移动设备之间保持列表一致。如果你在多个设备上收听播客(例如,我在我的工作电脑、家用电脑和手机上收听),这会非常有用,因为这意味着无论你身在何处,你都拥有最近的播客和节目列表而无需一次又一次地设置。
|
||||
|
||||

|
||||
|
||||
单击播客节目将显示与其关联的节目说明,单击“播放”将启动设备的默认音频或视频播放器。如果要使用默认之外的其他播放器,可以在 gPodder 的配置设置中更改。
|
||||
|
||||
通过 gPodder,你可以轻松查找、下载和收听播客,在设备之间同步这些播客,在易于使用的界面中访问许多其他功能。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-gpodder
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/users/ksonney
|
||||
[2]: https://gpodder.github.io/
|
||||
[3]: https://feedburner.google.com/
|
||||
[4]: https://youtube.com
|
||||
[5]: https://soundcloud.com/
|
||||
[6]: http://gpodder.net
|
||||
[7]: https://github.com/gpodder/mygpo
|
||||
[8]: http://gPodder.net
|
@ -1,111 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Top 5 Linux Distributions for New Users)
|
||||
[#]: via: (https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users)
|
||||
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
|
||||
|
||||
5 个面向新手的 Linux 发行版
|
||||
======
|
||||
|
||||
> 5 个能让新用户宾至如归的发行版。
|
||||
|
||||

|
||||
|
||||
从诞生至今,Linux 已经走过了很长的路。但是,无论你听过多少次“现在用 Linux 有多容易”,仍然会有人表示怀疑。要真正支撑得起这种说法,桌面必须足够简单,让不熟悉 Linux 的人也能够使用。事实上,大量的桌面发行版已经让这成为了现实。
|
||||
|
||||
### 无需 Linux 知识
|
||||
|
||||
人们很容易把这个清单误解为又一个“最佳用户友好型 Linux 发行版”的清单。但这不是我们要在这里讨论的。这二者有什么不同?就我的目的而言,界定的标准是用户是否真的需要意识到 Linux 的存在。换句话说,你是否可以把这个桌面操作系统放到一个用户面前,让他们在没有任何 Linux 知识的情况下应用自如?
|
||||
|
||||
不管你相信与否,有些发行版就能做到。这里我将介绍 5 个这样的发行版。它们或许你全都听说过,或许不是你所选择的发行版,但可以肯定的是,它们把 Linux 放到了幕后,把用户放在了台前。
|
||||
|
||||
我们来看看选中的几个。
|
||||
|
||||
### Elementary OS
|
||||
|
||||
Elementary OS 的理念主要围绕人们实际使用桌面的方式。开发人员和设计人员不遗余力地创建尽可能简单的桌面。在这个过程中,他们致力于将 Linux “去 Linux 化”。这并不是说他们把 Linux 从等式中删除了。不,恰恰相反,他们所做的是创建一个尽可能中立的操作系统。Elementary OS 竭尽全力确保一切都顺畅合理。从单一的 Dock 到对每个人都清晰明了的应用程序菜单,这个桌面不会时时提醒用户“你正在使用 Linux!”事实上,其布局本身就让人联想到 Mac,但多了一个简单的应用程序菜单(图 1)。
|
||||
|
||||
![Elementary OS Juno][2]
|
||||
|
||||
*图 1:Elementary OS Juno 应用菜单*
|
||||
|
||||
把 Elementary OS 放在此列表中的另一个重要原因是,它不像其他桌面发行版那样可以随意定制。当然,有些用户会对此不以为然,但是如果桌面没有向用户抛出各种花哨的定制诱惑,就会形成一个非常熟悉的环境:一个既不需要也不允许大量修修补补的环境。这对让新用户快速熟悉该平台大有帮助。
|
||||
|
||||
与任何现代 Linux 桌面发行版一样,Elementary OS 包括 App Store,也称为 AppCenter,用户可以在其中安装所需的所有应用程序,而无需触及命令行。
|
||||
|
||||
### Deepin
|
||||
|
||||
Deepin 不仅被誉为市场上最漂亮的桌面之一,它也和任何桌面操作系统一样容易上手。其桌面界面非常简单,对于毫无 Linux 经验的用户来说,上手速度非常快。事实上,你很难找到无法立即用起 Deepin 桌面的用户。这里唯一可能的障碍是其侧边栏控制中心(图 2)。
|
||||
|
||||
![][5]
|
||||
|
||||
*图 2:Deepin 的侧边栏控制中心*
|
||||
|
||||
但即使是侧边栏控制面板,也像市场上的任何其他配置工具一样直观。使用过移动设备的人对这种布局都会很熟悉。至于打开应用程序,Deepin 的启动器采用了类似 macOS Launchpad 的方式。这个按钮位于桌面 Dock 上通常最右侧的位置,因此用户会立即明白,它大概类似于标准的“开始”菜单。
|
||||
|
||||
与 Elementary OS(以及市场上大多数 Linux 发行版)类似,Deepin 也包含一个应用程序商店(简称为“商店”),可以轻松安装大量应用程序。
|
||||
|
||||
### Ubuntu
|
||||
|
||||
你知道它肯定会上榜。Ubuntu 通常在大多数“用户友好的 Linux”清单中排名第一。为什么?因为它是少数几个不需要了解 Linux 就能使用的桌面之一。但在采用 GNOME(以及 Unity 谢幕)之前,情况并非如此。为什么?因为 Unity 经常需要进行一些调整,才能达到不需要一点 Linux 知识就能使用的程度(图 3)。现在 Ubuntu 已经采用了 GNOME,并将其调整到甚至不需要懂 GNOME 的程度,简单性和可用性已经融入了这个桌面。
|
||||
|
||||
![Ubuntu 18.04][7]
|
||||
|
||||
*图 3:Ubuntu 18.04 桌面让人马上就能熟悉起来*
|
||||
|
||||
与 Elementary OS 不同,Ubuntu 不会束缚用户。因此,任何想从桌面获得更多功能的人都能如愿。不过,开箱即用的体验已经足以满足任何类型的用户。在让用户不必了解其强大内在也能轻松使用这一点上,很少有桌面能比 Ubuntu 做得更好。
|
||||
|
||||
### Linux Mint
|
||||
|
||||
我需要首先声明,我从来都不是 Linux Mint 的忠实粉丝。这并不是说我不尊重开发者的工作,这更多是一种审美观点:我更喜欢现代的桌面环境。但是,老式的计算机桌面布局(默认的 Cinnamon 桌面就是如此)让几乎每个使用过电脑的人都倍感熟悉。Linux Mint 使用任务栏、开始按钮、系统托盘和桌面图标(图 4),提供了一个学习曲线为零的界面。事实上,一些用户最初甚至可能误以为自己正在使用 Windows 7 的克隆。就连它的更新警告图标也会让用户感到非常熟悉。
|
||||
|
||||
![Linux Mint ][9]
|
||||
|
||||
*图 4:Linux Mint 的 Cinnamon 桌面非常像 Windows 7*
|
||||
|
||||
因为 Linux Mint 受益于其所基于的 Ubuntu,它不仅会让你马上熟悉起来,而且具有很高的可用性。无论你是否对底层平台有所了解,用户都会立即感受到宾至如归的感觉。
|
||||
|
||||
### Ubuntu Budgie
|
||||
|
||||
我们的列表将以这样一个发行版收尾:它同样能让用户忘记自己正在使用 Linux,并且让使用常用工具变得简单、美观。将 Budgie 桌面与 Ubuntu 融合,造就了一个易用性令人印象深刻的发行版。虽然其桌面布局(图 5)可能不太寻常,但毫无疑问,适应这个环境不需要花什么时间。实际上,除了 Dock 默认位于桌面左侧之外,Ubuntu Budgie 看起来确实有点像 Elementary OS。
|
||||
|
||||
![Budgie][11]
|
||||
|
||||
*图 5:Budgie 桌面既漂亮又简单*
|
||||
|
||||
Ubuntu Budgie 的系统托盘/通知区域提供了一些不太寻常的功能,比如:快速访问 Caffeine(一种保持桌面清醒的工具)、快速笔记工具(用于记录简单笔记)、Night Lite 开关、Places 下拉菜单(用于快速访问文件夹),当然还有 Raven 小程序/通知侧边栏(与 Deepin 中的控制中心侧边栏类似,但不太优雅)。Budgie 还包括一个应用程序菜单(左上角),用户可以从中访问所有已安装的应用程序。打开一个应用程序后,其图标会出现在 Dock 中。右键单击该应用程序图标,然后选择“保留在 Dock”以便更快地访问。
|
||||
|
||||
Ubuntu Budgie 的一切都很直观,所以几乎没有学习曲线。这种发行版既优雅又易于使用,这并没有什么坏处。
|
||||
|
||||
### 选择一个吧
|
||||
|
||||
至此,我们介绍了 5 个 Linux 发行版,它们各自以自己的方式提供了让任何用户都能即刻熟悉的桌面体验。虽然它们可能不是你心目中的顶级发行版,但对于不熟悉 Linux 的用户来说,很难否认它们的价值。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/files/images/elementaryosjpg-2
|
||||
[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaryos_0.jpg?itok=KxgNUvMW (Elementary OS Juno)
|
||||
[3]: https://www.linux.com/licenses/category/used-permission
|
||||
[4]: https://www.linux.com/files/images/deepinjpg
|
||||
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/deepin.jpg?itok=VV381a9f
|
||||
[6]: https://www.linux.com/files/images/ubuntujpg-1
|
||||
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_1.jpg?itok=bax-_Tsg (Ubuntu 18.04)
|
||||
[8]: https://www.linux.com/files/images/linuxmintjpg
|
||||
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linuxmint.jpg?itok=8sPon0Cq (Linux Mint )
|
||||
[10]: https://www.linux.com/files/images/budgiejpg-0
|
||||
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/budgie_0.jpg?itok=zcf-AHmj (Budgie)
|
||||
[12]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -0,0 +1,180 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (guevaraya)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to List Installed Packages on Ubuntu and Debian [Quick Tip])
|
||||
[#]: via: (https://itsfoss.com/list-installed-packages-ubuntu)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
如何列出 Ubuntu 和 Debian 上已安装的软件包 [快速提示]
|
||||
======
|
||||
|
||||
当你[安装了 Ubuntu 并想好好用一用][1]之后,将来某个时候,你肯定会忘记自己曾经安装过哪些软件包。
|
||||
|
||||
这完全正常,没有人要求你记住系统里安装过的所有软件包。但问题是:如何才能知道已经安装了哪些软件包?如何查看安装过的软件包呢?
|
||||
|
||||
### 列出 Ubuntu 和 Debian 上已安装的软件包
|
||||
|
||||
![列出已安装的软件包][2]
|
||||
|
||||
如果你经常用 [apt 命令][3],你可能会想到用 apt 来列出已安装的软件包。没错,可以这样做。
|
||||
|
||||
|
||||
[apt-get 命令][4] 没有列出已安装软件包的简单选项,但是 apt 有这样一个命令:
|
||||
```
|
||||
apt list --installed
|
||||
```
|
||||
这会显示用 apt 命令安装的所有软件包,其中也包含作为依赖而被安装的软件包。也就是说,不仅有你主动安装的程序,还有大量库文件和间接安装的软件包。
|
||||
|
||||
![用 apt 命令列出已安装的软件包][5] 用 apt 命令列出已安装的软件包
|
||||
|
||||
由于索引出来的已安装的软件包太多,用 grep 过滤特定的软件包是一个比较好的办法。
|
||||
```
|
||||
apt list --installed | grep program_name
|
||||
```
|
||||
|
||||
上面的命令也能检索出通过 [.deb 文件][6]安装的软件包。很酷,不是吗?
|
||||
|
||||
如果你读过 [apt 与 apt-get 对比][7]的文章,你可能已经知道 apt 和 apt-get 命令都基于 [dpkg][8]。也就是说,用 dpkg 命令可以列出 Debian 系统中所有已安装的软件包。
|
||||
|
||||
```
|
||||
dpkg-query -l
|
||||
```
|
||||
你可以用 grep 命令检索指定的软件包。
|
||||
|
||||
![用 dpkg 命令列出已安装的软件包][9] 用 dpkg 命令列出已安装的软件包
|
||||
|
||||
|
||||
现在你已经可以列出 Debian 软件包管理器安装的应用了。那 Snap 和 Flatpak 这两种应用呢?它们无法通过 apt 和 dpkg 访问,又该如何列出呢?
|
||||
|
||||
显示系统里所有已安装的 [Snap 软件包][10],可以用这个命令:
|
||||
|
||||
```
|
||||
snap list
|
||||
```
|
||||
Snap 应用列表会用绿色勾号标示经过认证的发布者。
|
||||
![列出已安装的 Snap 软件包][11] 列出已安装的 Snap 软件包
|
||||
|
||||
显示系统里所有已安装的 [Flatpak 软件包][12],可以用这个命令:
|
||||
|
||||
```
|
||||
flatpak list
|
||||
```
|
||||
|
||||
让我来个汇总:
|
||||
|
||||
|
||||
用 apt 命令显示已安装软件包:
|
||||
|
||||
**apt list --installed**
|
||||
|
||||
用 dpkg 命令显示已安装软件包:
|
||||
|
||||
**dpkg-query -l**
|
||||
|
||||
列出系统里已安装的 Snap 软件包:
|
||||
|
||||
**snap list**
|
||||
|
||||
列出系统里已安装的 Flatpak 软件包:
|
||||
|
||||
**flatpak list**
|
||||
|
||||
### 显示最近安装的软件包
|
||||
|
||||
现在你已经看到了按字母顺序列出的已安装软件包。那如何显示最近安装的软件包呢?
|
||||
|
||||
幸运的是,Linux 系统保存了所有发生事件的日志。你可以参考最近安装软件包的日志。
|
||||
|
||||
有两个方法:查看 dpkg 的日志,或者查看 apt 的日志。
|
||||
|
||||
你仅仅需要用 grep 命令过滤已经安装的软件包日志。
|
||||
|
||||
```
|
||||
grep " install " /var/log/dpkg.log
|
||||
```
|
||||
|
||||
这会显示所有安装过的软件包,其中包括作为依赖被安装的软件包。
|
||||
|
||||
```
|
||||
2019-02-12 12:41:42 install ubuntu-make:all 16.11.1ubuntu1
|
||||
2019-02-13 21:03:02 install xdg-desktop-portal:amd64 0.11-1
|
||||
2019-02-13 21:03:02 install libostree-1-1:amd64 2018.8-0ubuntu0.1
|
||||
2019-02-13 21:03:02 install flatpak:amd64 1.0.6-0ubuntu0.1
|
||||
2019-02-13 21:03:02 install xdg-desktop-portal-gtk:amd64 0.11-1
|
||||
2019-02-14 11:49:10 install qml-module-qtquick-window2:amd64 5.9.5-0ubuntu1.1
|
||||
2019-02-14 11:49:10 install qml-module-qtquick2:amd64 5.9.5-0ubuntu1.1
|
||||
2019-02-14 11:49:10 install qml-module-qtgraphicaleffects:amd64 5.9.5-0ubuntu1
|
||||
```
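这种日志行的格式很规整,用脚本汇总也很方便。下面是一个示意性的 Python 草稿,按上面展示的行格式解析 `dpkg.log` 中的 install 记录(实际系统的日志字段可能略有不同,以你机器上的日志为准):

```python
# 示意:解析 dpkg.log 中的 install 记录(字段格式以实际日志为准)
def parse_dpkg_installs(log_text):
    installs = []
    for line in log_text.splitlines():
        parts = line.split()
        # 形如: 2019-02-13 21:03:02 install flatpak:amd64 1.0.6-0ubuntu0.1
        if len(parts) >= 5 and parts[2] == "install":
            name, _, arch = parts[3].partition(":")
            installs.append({
                "date": parts[0],
                "time": parts[1],
                "package": name,
                "arch": arch,
                "version": parts[4],
            })
    return installs

sample = """2019-02-13 21:03:02 install flatpak:amd64 1.0.6-0ubuntu0.1
2019-02-13 21:03:02 status installed flatpak:amd64 1.0.6-0ubuntu0.1
2019-02-12 12:41:42 install ubuntu-make:all 16.11.1ubuntu1
"""

for rec in parse_dpkg_installs(sample):
    print(rec["date"], rec["package"], rec["version"])
```

把 `sample` 换成 `open("/var/log/dpkg.log").read()`,就能得到本机最近安装记录的结构化列表。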
|
||||
|
||||
你也可以查看 apt 命令的历史日志。它只会显示用 apt 命令安装的程序,不会显示作为依赖被安装的软件包。有时这正是你想看的,对吧?
|
||||
|
||||
```
|
||||
grep " install " /var/log/apt/history.log
|
||||
```
|
||||
|
||||
具体的显示如下:
|
||||
|
||||
```
|
||||
Commandline: apt install pinta
|
||||
Commandline: apt install pinta
|
||||
Commandline: apt install tmux
|
||||
Commandline: apt install terminator
|
||||
Commandline: apt install moreutils
|
||||
Commandline: apt install ubuntu-make
|
||||
Commandline: apt install flatpak
|
||||
Commandline: apt install cool-retro-term
|
||||
Commandline: apt install ubuntu-software
|
||||
```
|
||||
|
||||
![显示最近已安装的软件包][13] 显示最近已安装的软件包
|
||||
|
||||
apt 的历史日志非常有用,因为它显示了什么时候执行了 apt 命令、哪个用户执行了命令,以及安装的软件包名。
|
||||
|
||||
### 小贴士: 在软件中心显示已安装的程序包名
|
||||
|
||||
如果你觉得终端和命令行不够友好,还有一个方法可以查看系统里安装了哪些程序。
|
||||
|
||||
打开软件中心,点击“已安装”标签,就可以看到系统上已经安装的软件包。
|
||||
|
||||
![Ubuntu 软件中心显示已安装的软件包][14] 在软件中心显示已安装的软件包
|
||||
|
||||
软件中心不会显示库和其他命令行相关的软件包,不过既然你更习惯图形界面,或许也不想看到它们。要不然,你也可以使用 Synaptic 软件包管理器。
|
||||
|
||||
**结束语**
|
||||
|
||||
我希望这个简短的教程能帮你查看 Ubuntu 和基于 Debian 的发行版中已安装的软件包。
|
||||
|
||||
如果你对本文有什么问题或建议,请在下面留言。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/list-installed-packages-ubuntu
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[guevaraya](https://github.com/guevaraya)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/getting-started-with-ubuntu/
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-packages.png?resize=800%2C450&ssl=1
|
||||
[3]: https://itsfoss.com/apt-command-guide/
|
||||
[4]: https://itsfoss.com/apt-get-linux-guide/
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-packages-in-ubuntu-with-apt.png?resize=800%2C407&ssl=1
|
||||
[6]: https://itsfoss.com/install-deb-files-ubuntu/
|
||||
[7]: https://itsfoss.com/apt-vs-apt-get-difference/
|
||||
[8]: https://wiki.debian.org/dpkg
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-packages-with-dpkg.png?ssl=1
|
||||
[10]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/
|
||||
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-snap-packages.png?ssl=1
|
||||
[12]: https://itsfoss.com/flatpak-guide/
|
||||
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/apt-list-recently-installed-packages.png?resize=800%2C187&ssl=1
|
||||
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/installed-software-ubuntu.png?ssl=1
|
||||
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-packages.png?fit=800%2C450&ssl=1
|