Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2018-10-10 10:45:21 +08:00
commit 8e8f4a4401
55 changed files with 5889 additions and 1990 deletions

View File

@ -0,0 +1,49 @@
Migrating to Linux from an outdated Windows machine
======
> This is the story of deciding to migrate to Linux when our aging Windows machines were retired.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK-)
Every day at my job in ONLYOFFICE's marketing department, I see Linux users discussing our office software on the web. Our products are popular among Linux users, which made me curious about using Linux as an everyday work tool. My old Windows XP machine performed terribly, so I decided to learn about Linux (Ubuntu in particular) and give it a try. Two of my colleagues joined in.
### Why Linux?
We had to make a change. First of all, our old systems simply weren't good enough in terms of performance: we experienced frequent crashes, the machines became overloaded whenever more than two applications were running, there was a fifty-fifty chance of freezing on shutdown, and so on. That was distracting, and it meant we were far less productive than we could have been.
Upgrading to a newer version of Windows was an option too, but that could bring extra costs, and our own software competes with Microsoft's office suite, so there was an ideological issue for us as well.
Second, as I mentioned before, ONLYOFFICE products are very popular in the Linux community. By reading about Linux users' experience with our software, we became interested in joining them.
A week after we asked to switch to Linux, we got brand-new machines with [Kubuntu][1] installed. We chose version 16.04, which ships with KDE Plasma 5.5 and many KDE applications, including Dolphin, as well as LibreOffice 5.1 and Firefox 45.
### What we like about Linux
I believe Linux's biggest advantage is its speed; for instance, it takes only seconds from pressing the power button to being ready for work. From the very start, everything seemed extraordinarily fast: overall responsiveness, the graphical interface, even system updates.
Another thing that surprised me, compared with Windows, is that Linux lets you configure almost anything, including the entire look of your desktop. In the settings I found out how to change the color and shape of the various bars, buttons, and fonts; reposition any desktop component; and combine desktop widgets (even comics and color pickers). I believe I have only scratched the surface of the basic options and have yet to explore most of the customization possibilities this system is famous for.
Linux distributions are generally a very secure environment. People rarely use antivirus software on Linux, because so few viruses are written to attack Linux systems. You get great system speed while saving time and money.
In short, Linux has changed our daily lives and amazed us with a whole set of new options and capabilities. Even after just a short time using it, we would describe it as:
* Fast and smooth to operate
* Highly customizable
* Friendly to newcomers
* Challenging when it comes to learning the basic components, but rewarding
* Secure and reliable
* A great experience for everyone who wants to change their workplace
Have you switched from Windows or MacOS to Kubuntu or another Linux variant? Or are you considering making the change? Please share your reasons for adopting Linux, along with your impressions of open source, in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/move-to-linux-old-windows
Author: [Michael Korotaev][a]
Translator: [bookug](https://github.com/bookug)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/michaelk
[1]:https://kubuntu.org/

View File

@ -1,57 +1,52 @@
Five essential tools for Linux development
======
> There are so many development tools for Linux that you might worry about not finding the ones that suit you best.
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev-tools.png?itok=kkDNylRg)
Linux has become a pillar of work, entertainment, and personal life, and people depend on it more and more. With the help of Linux, technology is changing faster than people ever imagined, and the pace of Linux development is growing exponentially. As a result, ever more developers are joining the wave of open source and learning Linux development. In that process, the right tools are essential; happily, as Linux has grown, a large number of development tools for Linux have matured along with it. One might even say the choice is a little overwhelming.
To pick the tools that suit you best, it helps to narrow the field. This article will not insist that you use any particular tool; instead, it narrows things down to five tool categories and offers one example for each. For most categories, though, there is more than one option. Let's take a look.
### Containers
Let's face it: these days it's the era of containers. Containers are extremely easy to deploy and make it convenient to build development environments. If you develop for a specific platform, building all the tools your development workflow needs into a container image is a great approach; with that one image, you can quickly spin up any number of instances running the services you need.
One of the best examples of container use is [Docker][1]. Using containers (or Docker) brings these benefits:
* Your development environment stays consistent
* Containers are ready to run as soon as they are deployed
* They are easy to deploy across platforms
* Docker images are available for many development environments and languages
* Deploying a single container or a cluster of containers is painless
Through [Docker Hub][2], you can find images for nearly any platform, development environment, server, or service, to satisfy almost any need. Using images from Docker Hub means you can skip setting up a development environment and get straight to developing your application, server, API, or service.
Docker is easy to install on any Linux platform. For example, you can install Docker on Ubuntu by entering the following in a terminal:
```
sudo apt-get install docker.io
```
Once Docker is installed, you can pull images from the Docker repositories and start developing and deploying (see the figure below).
![Docker images][4]
*Figure 1: Docker images ready for deployment*
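For example, once the daemon is running, a minimal first session might look like this (the `nginx` image is just an illustration; any image from Docker Hub works the same way):

```
sudo docker pull nginx
sudo docker run -d -p 8080:80 nginx
```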
### Version control
If you are working on a large project, or collaborating with a team, a version control tool is essential; you use it to record code changes, commit code, and merge code. Without such a tool, a project can hardly be managed properly. On Linux, the ease of use and popularity of [Git][6] and [GitHub][7] are unmatched by other version control tools. If you are not yet familiar with Git and GitHub, think of it this way: Git is the version control system installed on your local machine, while GitHub is the remote repository used to upload and manage your projects. Git can be installed on most Linux distributions. For example, on Debian-based systems, installing it takes just one simple command:
```
sudo apt-get install git
```
Once installed, you can use Git for version control (see the figure below).
![Git installed][9]
*Figure 2: Git is installed and ready for many important tasks*
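A typical first session in a new project directory might look like this (the remote URL is a placeholder for your own repository):

```
git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/USER/REPO.git
git push -u origin master
```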
GitHub requires you to create an account. You can use GitHub free of charge for non-commercial projects, or choose one of GitHub's paid plans (for more information, see the [pricing matrix][10]).
@ -63,23 +58,23 @@ GitHub requires you to create an account. You can use GitHub free of charge
![Bluefish][13]
*Figure 3: Bluefish running on Ubuntu 18.04*
### IDE
集成开发环境Integrated Development Environment, IDE是包含一整套全面的工具、可以实现一站式功能的开发环境。 开发者除了可以使用 IDE 编写代码,还可以编写文档和构建软件。在 Linux 上也有很多适用的 IDE其中 [Geany][14] 就包含在标准软件库中,它对用户非常友好,功能也相当强大。 Geany 具有语法高亮、代码折叠、自动完成,构建代码片段、自动关闭 XML 和 HTML 标签、调用提示、支持多种文件类型、符号列表、代码导航、构建编译,简单的项目管理和内置的插件系统等强大功能。
<ruby>集成开发环境<rt>Integrated Development Environment</rt></ruby>IDE是包含一整套全面的工具、可以实现一站式功能的开发环境。 开发者除了可以使用 IDE 编写代码,还可以编写文档和构建软件。在 Linux 上也有很多适用的 IDE其中 [Geany][14] 就包含在标准软件库中,它对用户非常友好,功能也相当强大。 Geany 具有语法高亮、代码折叠、自动完成,构建代码片段、自动关闭 XML 和 HTML 标签、调用提示、支持多种文件类型、符号列表、代码导航、构建编译,简单的项目管理和内置的插件系统等强大功能。
Geany is also easy to install. For example, run the following command to install Geany on a Debian-based Linux distribution:
```
sudo apt-get install geany
```
Once installed, you are ready to get started with this easy-to-use and powerful IDE (see the figure below).
![Geany][16]
*Figure 4: Geany can serve as your IDE*
### Text comparison
@ -89,19 +84,18 @@ Meld can open two files for comparison and highlight the differences between them
![Comparing two files][19]
*Figure 5: Comparing two files in simple diff mode*
Meld can also be installed from most standard repositories. On a Debian-based system, the following command installs it:
```
sudo apt-get install meld
```
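Once installed, comparing two files is as simple as running (the file names here are placeholders):

```
meld file_a.txt file_b.txt
```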
### Working efficiently
The five tools above not only help you get your work done, they also help you work more efficiently. Although there are many more tools for Linux developers, you would do well to pick one from each of the categories above.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development
@ -109,7 +103,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-d
Author: [Jack Wallen][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [HankChow](https://github.com/HankChow)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

View File

@ -1,21 +1,17 @@
How to use Netplan, the network configuration tool, on Linux
======
> Netplan is a command-line utility for configuring networking on certain Linux distributions.
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan.jpg?itok=Gu_ZfNGa)
For many years, Linux administrators and users configured their network interfaces in the same way. For example, if you are an Ubuntu user, you can configure a network connection with the desktop GUI or in the `/etc/network/interfaces` file. The configuration is fairly simple and it works. In that file, a configuration looks like this:
```
auto enp10s0
iface enp10s0 inet static
address 192.168.1.162
netmask 255.255.255.0
gateway 192.168.1.100
dns-nameservers 1.0.0.1,1.1.1.1
```
@ -25,7 +21,7 @@ dns-nameservers 1.0.0.1,1.1.1.1
sudo systemctl restart networking
```
Or, if your distribution does not use systemd, you can restart networking the old-fashioned way:
```
sudo /etc/init.d/networking restart
@ -33,13 +29,13 @@ sudo /etc/init.d/networking restart
Your network will restart and the new configuration will take effect.
That is how it has been done for years. But now, on certain distributions (such as Ubuntu Linux 18.04), the configuration and control of networking has changed considerably. Instead of the `interfaces` file and the `/etc/init.d/networking` script, we now turn to [Netplan][1]. Netplan is a command-line utility for configuring networking on certain Linux distributions. It uses YAML description files to configure network interfaces and, from those descriptions, generates the necessary configuration options for any given renderer tool.
I will show you how to use Netplan to configure a static IP address and a DHCP address on Linux, demonstrating on Ubuntu Server 18.04. One word of warning: the indentation in the .yaml files you create must be consistent, or they will fail. You don't have to use a specific indentation width on every line; it just has to be consistent.
### The new configuration files
Open a terminal window (or log in to your Ubuntu server via SSH). You will find the new Netplan configuration files in the `/etc/netplan` directory. Change into that directory with the `cd /etc/netplan` command. Once there, you will probably see a file:
```
01-netcfg.yaml
@ -55,13 +51,11 @@ sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak
### Network device names
Before you start configuring a static IP, you need to know the device name. To find it, use the `ip a` command and see which device is in use (Figure 1).
![netplan][3]
图 1使用 ip a 命令找出设备名称
[Used with permission][4] (译注:这是什么鬼?)
*图 1使用 ip a 命令找出设备名称*
I will configure a static IP for ens5.
@ -75,67 +69,46 @@ sudo nano /etc/netplan/01-netcfg.yaml
The layout of the file looks like this:
```
network:
  version: 2
  renderer: networkd
  ethernets:
    DEVICE_NAME:
      dhcp4: yes/no
      addresses: [IP/NETMASK]
      gateway4: GATEWAY
      nameservers:
        addresses: [NAMESERVER, NAMESERVER]
```
Where:
* `DEVICE_NAME` is the actual name of the device to be configured.
* `yes`/`no` indicates whether dhcp4 is enabled.
* `IP` is the IP address of the device.
* `NETMASK` is the netmask for the IP address.
* `GATEWAY` is the address of the gateway.
* `NAMESERVER` is a comma-separated list of DNS servers.
Here is a sample .yaml file:
```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens5:
      dhcp4: no
      addresses: [192.168.1.230/24]
      gateway4: 192.168.1.254
      nameservers:
        addresses: [8.8.4.4,8.8.8.8]
```
Edit the file above to suit your needs, then save and close it.
Note that the netmask is no longer configured as `255.255.255.0`. Instead, the netmask is appended to the IP address.
### Testing the configuration
@ -165,20 +138,13 @@ sudo netplan apply
```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens5:
      addresses: []
      dhcp4: true
      optional: true
```
Save and exit. Test the file with the following command:
@ -187,15 +153,15 @@ network:
sudo netplan try
```
Netplan should successfully configure the DHCP service. You can then use the `ip a` command to get the dynamically assigned address and reconfigure a static address with it. Or you can simply keep the DHCP-assigned address (but seeing as this is a server, you probably don't want to do that).
You may have more than one network interface; in that case, name the second .yaml file `02-netcfg.yaml`. Netplan applies the configuration files in numerical order, so 01 is applied before 02. Create as many configuration files as you need, as in the sketch below.
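For example, a second file for an interface that should simply use DHCP might look like this (a minimal sketch; the `ens6` device name is only an example):

```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens6:
      dhcp4: true
```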
### That's all there is to it
In any case, that's everything there is to using Netplan. Although it is quite a change from the way we have habitually configured network addresses, and not everyone is used to it yet, this style of configuration is here to stay... so you will adapt.
Learn more about Linux through the free [“Introduction to Linux”][5] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
@ -204,7 +170,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-c
Author: [Jack Wallen][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [LuuMing](https://github.com/LuuMing)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

View File

@ -0,0 +1,66 @@
IssueHunt: A New Bounty Hunting Platform for Open Source Software
======
One of the issues that many open-source developers and companies struggle with is funding. There is an assumption, an expectation even, among the community that Free and Open Source Software must be provided free of cost. But even FOSS needs funding for continued development. How can we keep expecting better quality software if we don't create systems that enable continued development?
We already wrote an article about [open source funding platforms][1] out there that try to tackle this shortcoming. As of this July, there is a new contender in the market that aims to help fill this gap: [IssueHunt][2].
### IssueHunt: A Bounty Hunting platform for Open Source Software
![IssueHunt website][3]
IssueHunt offers a service that pays freelance developers for contributing to open-source code. It does so through what are called bounties: financial rewards granted to whoever solves a given problem. The funding for these bounties comes from anyone who is willing to donate to have any given bug fixed or feature added.
If there is a problem with a piece of open-source software that you want fixed, you can offer up a reward amount of your choosing to whoever fixes it.
Do you want your own product snapped? Offer a bounty on IssueHunt to whoever snaps it. It's as simple as that.
And if you are a programmer, you can browse through open issues. Fix the issue (if you can), submit a pull request on the GitHub repository, and if your pull request is merged, you get the money.
#### IssueHunt was originally an internal project for Boostnote
![IssueHunt][4]
The product came to be when the developers behind the note-taking app [Boostnote][5] reached out to the community for contributions to their own product.
In the first two years of utilizing IssueHunt, Boostnote received over 8,400 GitHub stars through hundreds of contributors and overwhelming donations.
The product was so successful that the team decided to open it up to the rest of the community.
Today, [a whole list of projects utilizes this service][6], offering thousands of dollars in bounties among them.
Boostnote boasts [$2,800 in total bounties][7], while Settings Sync, previously known as Visual Studio Code Settings Sync, offers [more than $1,600 in bounties][8].
There are other services that provide something similar to what IssueHunt is offering here. Perhaps the most notable is [Bountysource][9], which offers a similar bounty service to IssueHunt, while also offering subscription payment processing similar to [Librepay][10].
#### What do you think of IssueHunt?
At the time of writing this article, IssueHunt is in its infancy, but I am incredibly excited to see where this project ends up in the coming years.
I don't know about you, but I am more than happy paying for FOSS. If the product is high quality and adds value to my life, then I will happily pay the developer for the product. Especially since FOSS developers are creating products that respect my freedom in the process.
That being said, I will definitely keep my eye on IssueHunt moving forward for ways I can support the community either with my own money or by spreading the word where contribution is needed.
But what do you think? Do you agree with me, or do you think software should be gratis (free of charge) and that contributions should be made on a volunteer basis? Let us know what you think in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/issuehunt/
Author: [Phillip Prado][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/phillip/
[1]: https://itsfoss.com/open-source-funding-platforms/
[2]: https://issuehunt.io
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/issuehunt-website.png
[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/issuehunt.jpg
[5]: https://itsfoss.com/boostnote-linux-review/
[6]: https://issuehunt.io/repos
[7]: https://issuehunt.io/repos/53266139
[8]: https://issuehunt.io/repos/47984369
[9]: https://www.bountysource.com/
[10]: https://liberapay.com/

View File

@ -0,0 +1,75 @@
Troubleshooting Node.js Issues with llnode
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/node_1920.jpg?itok=Cwd2YtPd)
The llnode plugin lets you inspect Node.js processes and core dumps; it adds the ability to inspect JavaScript stack frames, objects, source code and more. At [Node+JS Interactive][1], Matheus Marchini, Node.js Collaborator and Lead Software Engineer at Sthima, will host a [workshop][2] on how to use llnode to find and fix issues quickly and reliably, without bloating your application with logs or compromising performance. He explains more in this interview.
**Linux.com: What are some common issues that happen with a Node.js application in production?**
**Matheus Marchini:** One of the most common issues Node.js developers might experience -- either in production or during development -- are unhandled exceptions. They happen when your code throws an error, and this error is not properly handled. There's a variation of this issue with Promises, although in this case, the problem is worse: if a Promise is rejected but there's no handler for that rejection, the application might enter into an undefined state and it can start to misbehave.
The application might also crash when it's using too much memory. This usually happens when there's a memory leak in the application, although we usually don't have classic memory leaks in Node.js. Instead of unreferenced objects, we might have objects that are not used anymore but are still retained by another object, leading the Garbage Collector to ignore them. If this happens with several objects, we can quickly exhaust our available memory.
Memory is not the only resource that might get exhausted. Given the asynchronous nature of Node.js and how it scales for a large number of requests, the application might start to run out of other resources, such as open file descriptors and the number of concurrent connections to a database.
Infinite loops are not that common because we usually catch those during development, but every once in a while one manages to slip through our tests and get into our production servers. These are pretty catastrophic because they will block the main thread, rendering the entire application unresponsive.
The last issues I'd like to point out are performance issues. Those can happen for a variety of reasons, ranging from unoptimized functions to I/O latency.
**Linux.com: Are there any quick tests you can do to determine what might be happening with your Node.js application?**
**Marchini:** Node.js and V8 have several tools and features built-in which developers can use to find issues faster. For example, if you're facing performance issues, you might want to use the built-in [V8 CpuProfiler][3]. Memory issues can be tracked down with [V8 Sampling Heap Profiler][4]. All of these options are interesting because you can open their results in Chrome DevTools and get some nice graphical visualizations by default.
If you are using native modules on your project, V8 built-in tools might not give you enough insights, since they focus only on JavaScript metrics. As an alternative to V8 CpuProfiler, you can use system profiler tools, such as [perf for Linux][5] and Dtrace for FreeBSD / OS X. You can grab the result from these tools and turn them into flamegraphs, making it easier to find which functions are taking more time to process.
You can use third-party tools as well: [node-report][6] is an amazing first failure data capture which doesn't introduce a significant overhead. When your application crashes, it will generate a report with detailed information about the state of the system, including environment variables, flags used, operating system details, etc. You can also generate this report on demand, and it is extremely useful when asking for help in forums, for example. The best part is that, after installing it through npm, you can enable it with a flag -- no need to make changes in your code!
But one of the tools I'm most amazed by is [llnode][7].
**Linux.com: When would you want to use something like llnode; and what exactly is it?**
**Marchini:** llnode is useful when debugging infinite loops, uncaught exceptions or out of memory issues since it allows you to inspect the state of your application when it crashed. How does llnode do this? You can tell Node.js and your operating system to take a core dump of your application when it crashes and load it into llnode. llnode will analyze this core dump and give you useful information such as how many objects were allocated in the heap, the complete stack trace for the process (including native calls and V8 internals), pending requests and handlers in the event loop queue, etc.
The most impressive feature llnode has is its ability to inspect objects and functions: you can see which variables are available for a given function, look at the function's code and inspect which properties your objects have with their respective values. For example, you can look up which variables are available for your HTTP handler function and which parameters it received. You can also look at headers and the payload of a given request.
llnode is a plugin for [lldb][8], and it uses lldb features alongside hints provided by V8 and Node.js to recreate the process heap. It uses a few heuristics, too, so results might not be entirely correct sometimes. But most of the time the results are good enough -- and way better than not using any tool.
This technique -- which is called post-mortem debugging -- is not something new, though, and it has been part of the Node.js project since 2012. This is a common technique used by C and C++ developers, but not many dynamic runtimes support it. I'm happy we can say Node.js is one of those runtimes.
**Linux.com: What are some key items folks should know before adding llnode to their environment?**
**Marchini:** To install and use llnode you'll need to have lldb installed on your system. If you're on OS X, lldb is installed as part of Xcode. On Linux, you can install it from your distribution's repository. We recommend using LLDB 3.9 or later.
You'll also have to set up your environment to generate core dumps. First, remember to set the flag --abort-on-uncaught-exception when running a Node.js application, otherwise, Node.js won't generate a core dump when an uncaught exception happens. You'll also need to tell your operating system to generate core dumps when an application crashes. The most common way to do that is by running `ulimit -c unlimited`, but this will only apply to your current shell session. If you're using a process manager such as systemd I suggest looking at the process manager docs. You can also generate on-demand core dumps of a running process with tools such as gcore.
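Putting those pieces together, a minimal session might look like this (the app name and core file path are illustrative; llnode itself is installable from npm):

```
# Allow core dumps in the current shell session
ulimit -c unlimited

# Run the app so that an uncaught exception aborts and dumps core
node --abort-on-uncaught-exception app.js

# Install the lldb plugin and open the resulting core dump
npm install -g llnode
llnode node -c ./core
```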
**Linux.com: What can we expect from llnode in the future?**
**Marchini:** llnode collaborators are working on several features and improvements to make the project more accessible for developers less familiar with native debugging tools. To accomplish that, we're improving the overall user experience as well as the project's documentation and installation process. Future versions will include colorized output, more reliable output for some commands and a simplified mode focused on JavaScript information. We are also working on a JavaScript API which can be used to automate some analysis, create graphical user interfaces, etc.
If this project sounds interesting to you, and you would like to get involved, feel free to join the conversation in [our issues tracker][9] or contact me on social media [@mmarkini][10]. I would love to help you get started!
Learn more at [Node+JS Interactive][1], coming up October 10-12, 2018 in Vancouver, Canada.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/9/troubleshooting-nodejs-issues-llnode
Author: [The Linux Foundation][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.linux.com/users/ericstephenbrown
[1]: https://events.linuxfoundation.org/events/node-js-interactive-2018/?utm_source=Linux.com&utm_medium=article&utm_campaign=jsint18
[2]: http://sched.co/G285
[3]: https://nodejs.org/api/inspector.html#inspector_cpu_profiler
[4]: https://github.com/v8/sampling-heap-profiler
[5]: http://www.brendangregg.com/blog/2014-09-17/node-flame-graphs-on-linux.html
[6]: https://github.com/nodejs/node-report
[7]: https://github.com/nodejs/llnode
[8]: https://lldb.llvm.org/
[9]: https://github.com/nodejs/llnode/issues
[10]: https://twitter.com/mmarkini

View File

@ -0,0 +1,44 @@
Creator of the World Wide Web is Creating a New Decentralized Web
======
**Creator of the world wide web, Tim Berners-Lee has unveiled his plans to create a new decentralized web where the data will be controlled by the users.**
[Tim Berners-Lee][1] is known for creating the world wide web, i.e., the web you know today. More than two decades later, Tim is working to free the web from the clutches of corporate giants and give the power back to the people via a decentralized web.
Berners-Lee was unhappy with the way powerful forces of the internet handle data of the users for their own agenda. So he [started working on his own open source project][2] Solid “to restore the power and agency of individuals on the web.”
> Solid changes the current model where users have to hand over personal data to digital giants in exchange for perceived value. As we've all discovered, this hasn't been in our best interests. Solid is how we evolve the web in order to restore balance — by giving every one of us complete control over data, personal or not, in a revolutionary way.
![Tim Berners-Lee is creating a decentralized web with open source project Solid][3]
Basically, [Solid][4] is a platform built using the existing web where you create your own pods (personal data stores). You decide where each pod will be hosted, who can access which data elements, and how the data will be shared through that pod.
Berners-Lee believes that Solid “will empower individuals, developers and businesses with entirely new ways to conceive, build and find innovative, trusted and beneficial applications and services.”
Developers need to integrate Solid into their apps and sites. Solid is still in the early stages so there are no apps for now but the project website claims that “the first wave of Solid apps are being created now.”
Berners-Lee has created a startup called [Inrupt][5] and has taken a sabbatical from MIT to work full-time on Solid and to take it “from the vision of a few to the reality of many.”
If you are interested in Solid, [learn how to create apps][6] or [contribute to the project][7] in your own way. Of course, it will take a lot of effort to build and drive the broad adoption of Solid so every bit of contribution will count to the success of a decentralized web.
Do you think a [decentralized web][8] will be a reality? What do you think of decentralized web in general and project Solid in particular?
--------------------------------------------------------------------------------
via: https://itsfoss.com/solid-decentralized-web/
Author: [Abhishek Prakash][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[1]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[2]: https://medium.com/@timberners_lee/one-small-step-for-the-web-87f92217d085
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/tim-berners-lee-solid-project.jpeg
[4]: https://solid.inrupt.com/
[5]: https://www.inrupt.com/
[6]: https://solid.inrupt.com/docs/getting-started
[7]: https://solid.inrupt.com/community
[8]: https://tech.co/decentralized-internet-guide-2018-02

View File

@ -0,0 +1,84 @@
13 tools to measure DevOps success
======
How's your DevOps initiative really going? Find out with open source tools
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-)
In today's enterprise, business disruption is all about agility with quality. Traditional processes and methods of developing software are challenged to keep up with the complexities that come with these new environments. Modern DevOps initiatives aim to help organizations use collaborations among different IT teams to increase agility and accelerate software application deployment.
How is the DevOps initiative going in your organization? Whether or not it's going as well as you expected, you need to do assessments to verify your impressions. Measuring DevOps success is very important because these initiatives target the very processes that determine how IT works. DevOps also values measuring behavior, although measurements are more about your business processes and less about your development and IT systems.
A metrics-oriented mindset is critical to ensuring DevOps initiatives deliver the intended results. Data-driven decisions and focused improvement activities lead to increased quality and efficiency. Also, the use of feedback to accelerate delivery is one reason DevOps creates a successful IT culture.
With DevOps, as with any IT initiative, knowing what to measure is always the first step. Let's examine how to use continuous delivery improvement and open source tools to assess your DevOps program on three key metrics: team efficiency, business agility, and security. These will also help you identify what challenges your organization has and what problems you are trying to solve with DevOps.
### 3 tools for measuring team efficiency
Measuring team efficiency—in terms of how the DevOps initiative fits into your organization and how well it works for cultural innovation—is the hardest area to measure. The key metrics that enable the DevOps team to work more effectively on culture and organization are all about agile software development, such as knowledge sharing, prioritizing tasks, resource utilization, issue tracking, cross-functional teams, and collaboration. The following open source tools can help you improve and measure team efficiency:
* [FunRetro][1] is a simple, intuitive tool that helps you collaborate across teams and improve what you do.
* [Kanboard][2] is a [kanban][3] board that helps you visualize your work in progress to focus on your goal.
* [Bugzilla][4] is a popular development tool with issue-tracking capabilities.
### 6 tools for measuring business agility
Speed is all that matters for accelerating business agility. Because DevOps gives organizations capabilities to deliver software faster with fewer failures, it's fast gaining acceptance. The key metrics are deployment time, change lead time, release frequency, and failover time. Puppet's [2017 State of DevOps Report][5] shows that high-performing DevOps practitioners deploy code updates 46x more frequently and high performers experience change lead times of under an hour, or 440x faster than average. Following are some open source tools to help you measure business agility:
* [Kubernetes][6] is a container-orchestration system for automating deployment, scaling, and management of containerized applications. (Read more about [Kubernetes][7] on Opensource.com.)
* [CRI-O][8] is a Kubernetes orchestrator used to manage and launch containerized workloads without relying on a traditional container engine.
* [Ansible][9] is a popular automation engine used to automate apps and IT infrastructure and run tasks including installing and configuring applications.
* [Jenkins][10] is an automation tool used to automate the software development process with continuous integration. It facilitates the technical aspects of continuous delivery.
* [Spinnaker][11] is a multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It combines a powerful and flexible pipeline management system with integrations to the major cloud providers.
* [Istio][12] is a service mesh that helps reduce the complexity of deployments and eases the strain on your development teams.
### 4 tools for measuring security
Security is always the last phase of measuring your DevOps initiative's success. Enterprises that have combined development and operations teams under a DevOps model are generally successful in releasing code at a much faster rate. But this has increased the need for integrating security in the DevOps process (this is known as DevSecOps), because the faster you release code, the faster you release any vulnerabilities in it.
Measuring security vulnerabilities early ensures that builds are stable before they pass to the next stage in the release pipeline. In addition, measuring security can help overcome resistance to DevOps adoption. You need tools that can help your dev and ops teams identify and prioritize vulnerabilities as they are using software, and teams must ensure they don't introduce vulnerabilities when making changes. These open source tools can help you measure security:
* [Gauntlt][13] is a ruggedization framework that enables security testing by devs, ops, and security.
* [Vault][14] securely manages secrets and encrypts data in transit, including storing credentials and API keys and encrypting passwords for user signups.
* [Clair][15] is a project for static analysis of vulnerabilities in appc and Docker containers.
* [SonarQube][16] is a platform for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities.
**[See our related security article, [7 open source tools for rugged DevOps][17].]**
Many DevOps initiatives start small. DevOps requires a commitment to a new culture and process rather than new technologies. That's why organizations looking to implement DevOps will likely need to adopt open source tools for collecting data and using it to optimize business success. In that case, highly visible, useful measurements will become an essential part of every DevOps initiative's success.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/devops-measurement-tools
Author: [Daniel Oh][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/daniel-oh
[1]: https://funretro.io/
[2]: http://kanboard.net/
[3]: https://en.wikipedia.org/wiki/Kanban
[4]: https://www.bugzilla.org/
[5]: https://puppet.com/resources/whitepaper/state-of-devops-report
[6]: https://kubernetes.io/
[7]: https://opensource.com/resources/what-is-kubernetes
[8]: https://github.com/kubernetes-incubator/cri-o
[9]: https://github.com/ansible
[10]: https://jenkins.io/
[11]: https://www.spinnaker.io/
[12]: https://istio.io/
[13]: http://gauntlt.org/
[14]: https://www.hashicorp.com/blog/vault.html
[15]: https://github.com/coreos/clair
[16]: https://www.sonarqube.org/
[17]: https://opensource.com/article/18/9/open-source-tools-rugged-devops

View File

@ -0,0 +1,97 @@
Interview With Peter Ganten, CEO of Univention GmbH
======
I have been asking the Univention team to share the behind-the-scenes story of [**Univention**][1] for a couple of months. Finally, today we got the interview of **Mr. Peter H. Ganten**, CEO of Univention GmbH. Despite his busy schedule, in this interview he shares what he thinks of the Univention project and its impact on the open source ecosystem, what open source developers and companies will need to do to keep thriving, and what the biggest challenges for open source projects are.
**OSTechNix: What's your background and why did you found Univention?**
**Peter Ganten:** I studied physics and psychology. In psychology I was a research assistant and coded evaluation software. I realized how important it is that results have to be disclosed in order to verify or falsify them. The same goes for the code that leads to the results. This brought me into contact with Open Source Software (OSS) and Linux.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/peter-ganten-interview.jpg)
I was a kind of technical lab manager and I had the opportunity to try out a lot, which led to my book about Debian. That was still in the New Economy era, when the first business models for making money with Open Source emerged. When the bubble burst, I had the plan to make OSS a solid business model without venture capital but with a Hanseatic business style: seriously, steadily, no bling-bling.
**What were the biggest challenges at the beginning?**
When I came from the university, the biggest challenge clearly was to gain entrepreneurial and business management knowledge. I quickly learned that it's not about Open Source software as an end in itself but always about customer value, and the benefits OSS offers its customers. We all had to learn a lot.
In the beginning, we expected that Linux on the desktop would become established in a similar way as Linux on the server. However, this has not yet been proven true. The replacement has happened with Android and the iPhone. Our conclusion then was to change our offerings towards ID management and enterprise servers.
**Why does UCS matter? And for whom makes it sense to use it?**
There is cool OSS in all areas, but many organizations are not capable of combining it all and making it manageable. For the basic infrastructure (Windows desktops, users, user rights, roles, ID management, apps) we need a central instance to which groupware, CRM, etc. are connected. Without Univention, this would have to be laboriously assembled and maintained manually. This is possible for very large companies, but far too complex for many other organizations.
[**UCS**][2] can be used out of the box and is scalable. That's why it's becoming more and more popular: more than 10,000 organizations are already using UCS today.
**Who are your users and most important clients? What do they love most about UCS?**
The Core Edition is free of charge and used by organizations from all sectors and industries such as associations, micro-enterprises, universities or large organizations with thousands of users. In the enterprise environment, where Long Term Servicing (LTS) and professional support are particularly important, we have organizations ranging in size from 30-50 users to several thousand users. One of the target groups is the education system in Germany. In many large cities and within their school administrations UCS is used, for example, in Cologne, Hannover, Bremen, Kassel and in several federal states. They are looking for manageable IT and apps for schools. That's what we offer, because we can guarantee these authorities full control over their users' identities.
Also, more and more cloud service providers and MSPs want to take UCS to deliver a selection of cloud-based app solutions.
**Is UCS 100% Open Source? If so, how can you run a profitable business selling it?**
Yes, UCS is 100% Open Source, every line, the whole code is OSS. You can download and use UCS Core Edition for **FREE!**
We know that in large, complex organizations, vendor support and liability are needed for LTS and SLAs, and we offer that with our Enterprise subscriptions and consulting services. We don't offer these in the Core Edition.
**And what are you giving back to the OS community?**
A lot. We are involved in the Debian team and co-finance the LTS maintenance for Debian. For important OS components in UCS like [**OpenLDAP**][3], Samba or KVM we co-finance the development or have co-developed them ourselves. We make it all freely available.
We are also involved on the political level in ensuring that OSS is used. We are engaged, for example, in the [**Free Software Foundation Europe (FSFE)**][4] and the [**German Open Source Business Alliance**][5], of which I am the chairman. We are working hard to make OSS more successful.
**How can I get started with UCS?**
It's easy to get started with the Core Edition, which, like the Enterprise Edition, has an App Center and can be easily installed on your own hardware or as an appliance in a virtual machine. Just [**download the Univention ISO**][6] and install it as described in the link below.
Alternatively, you can try the [**UCS Online Demo**][7] to get a first impression of Univention Corporate Server without actually installing it on your system.
**What do you think are the biggest challenges for Open Source?**
There is a certain attitude you can see over and over again, even in bigger projects: OSS alone is viewed as an almost mandatory prerequisite for a good, sustainable, secure and trustworthy IT solution, but just having decided to use OSS is no guarantee for success. You have to carry out projects professionally and cooperate with the manufacturers. A danger is that in complex projects people think: “Oh, OSS is free, I'll just put it all together by myself.” But normally you do not have the know-how to successfully implement complex software solutions. You would never proceed like this with Closed Source. There people think: “Oh, the software costs $3 million, so it's okay if I have to spend another $300,000 on consultants.”
With OSS this is different. If such projects fail and leave burnt ground behind, we have to explain again and again that the failure of such projects is not due to the nature of OSS but to its poor implementation and organization in a specific project: you have to conclude reasonable contracts and involve partners as in the proprietary world, but you'll gain a better solution.
Another challenge: we must stay innovative, move forward, and attract new people who are enthusiastic about working on projects. That's sometimes a challenge. For example, there are a number of proprietary cloud services that are good but lead to extremely high dependency. There are approaches to alternatives in OSS, but no suitable business models yet. So it's hard to find and fund developers. For example, I can think of Evernote and OneNote, for which there is no reasonable OSS alternative.
**And what will the future bring for Univention?**
I don't have a crystal ball, but we are extremely optimistic. We see very high growth potential in the education market. More OSS is being adopted in the public sector, because we have repeatedly experienced the dead ends that can be reached if we rely solely on Closed Source.
Overall, we will continue our organic growth at double-digit rates year after year.
UCS and its core functionalities of identity management, infrastructure management and app center will increasingly be offered and used from the cloud as a managed service. We will support our technology in this direction, e.g., through containers, so that a hypervisor or bare metal is not always necessary for operation.
**You have been the CEO of Univention for a long time. What keeps you motivated?**
I have been the CEO of Univention for more than 16 years now. My biggest motivation is to realize that something is moving. That we offer the better way for IT. That the people who go this way with us are excited to work with us. I go home satisfied in the evening (of course not every evening). It's totally cool to work with the team I have. It motivates and pushes me whenever I need it.
I'm a techie and nerd at heart; I enjoy dealing with technology. So I'm totally happy in this place, and I'm grateful to the world that I can do whatever I want every day. Not everyone can say that.
**Who gives you inspiration?**
My employees, the customers and the Open Source projects. The exchange with other people.
The motivation behind everything is that we want to make sure that mankind will be able to influence and change the IT that surrounds us, today and in the future, just the way we want it and think is good. We want to make a contribution to this. That is why Univention is here. That is important to us every day.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/interview-with-peter-ganten-ceo-of-univention-gmbh/
Author: [SK][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/introduction-univention-corporate-server/
[2]: https://www.univention.com/products/ucs/
[3]: https://www.ostechnix.com/redhat-and-suse-announced-to-withdraw-support-for-openldap/
[4]: https://fsfe.org/
[5]: https://osb-alliance.de/
[6]: https://www.univention.com/downloads/download-ucs/
[7]: https://www.univention.com/downloads/ucs-online-demo/

View File

@ -0,0 +1,637 @@
# Compiling Lisp to JavaScript From Scratch in 350 LOC
In this article we will look at a from-scratch implementation of a compiler from a simple LISP-like calculator language to JavaScript. The complete source code can be found [here][7].
We will:
1. Define our language and write a simple program in it
2. Implement a simple parser combinator library
3. Implement a parser for our language
4. Implement a pretty printer for our language
5. Define a subset of JavaScript for our usage
6. Implement a code translator to the JavaScript subset we defined
7. Glue it all together
Let's start!
### 1\. Defining the language
The main attraction of lisps is that their syntax already represents a tree, which is why they are so easy to parse. We'll see that soon. But first, let's define our language. Here's a BNF description of our language's syntax:
```
program ::= expr
expr ::= <integer> | <name> | ([<expr>])
```
Basically, our language lets us define one expression at the top level, which it will evaluate. An expression is either an integer, for example `5`, a variable, for example `x`, or a list of expressions, for example `(add x 1)`.
An integer evaluates to itself, a variable evaluates to whatever it is bound to in the current environment, and a list evaluates to a function call where the first element is the function and the rest are the arguments to it.
We have some built-in special forms in our language so we can do more interesting stuff:
* A let expression lets us introduce new variables in the environment of the let's body. The syntax is:
```
let ::= (let ([<letargs>]) <body>)
letargs ::= (<name> <expr>)
body ::= <expr>
```
* A lambda expression evaluates to an anonymous function definition. The syntax is:
```
lambda ::= (lambda ([<name>]) <body>)
```
We also have a few built-in functions: `add`, `mul`, `sub`, `div` and `print`.
Let's see a quick example of a program written in our language:
```
(let
  ((compose
     (lambda (f g)
       (lambda (x) (f (g x)))))
   (square
     (lambda (x) (mul x x)))
   (add1
     (lambda (x) (add x 1))))
  (print ((compose square add1) 5)))
```
This program defines three functions, `compose`, `square` and `add1`, and then prints the result of the computation `((compose square add1) 5)`.
I hope this is enough information about the language. Let's start implementing it!
We can define the language in Haskell like this:
```
type Name = String

data Expr
  = ATOM Atom
  | LIST [Expr]
  deriving (Eq, Read, Show)

data Atom
  = Int Int
  | Symbol Name
  deriving (Eq, Read, Show)
```
We can parse programs in the language we defined into an `Expr`. Also, we are giving the new data types `Eq`, `Read` and `Show` instances to aid in testing and debugging. You'll be able to use those in the REPL, for example, to verify that all this actually works.
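For instance, once the parser is in place, the expression `(add x 1)` will be represented by a value like the following (written out by hand here to illustrate the encoding):

```
-- (add x 1), encoded as an Expr:
example :: Expr
example = LIST [ATOM (Symbol "add"), ATOM (Symbol "x"), ATOM (Int 1)]
```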
The reason we did not define `lambda`, `let` and the other built-in functions as part of the syntax is because we can get away with it in this case. These functions are just a more specific case of a `LIST`. So I decided to leave this to a later phase.
Usually, you would like to define these special cases in the abstract syntax - to improve error messages, to enable static analysis and optimizations, and such - but we won't do that here, so this is enough for us.
Another thing you would like to do usually is add some annotation to the syntax. For example the location: Which file did this `Expr` come from and which row and col in the file. You can use this in later stages to print the location of errors, even if they are not in the parser stage.
* _Exercise 1_ : Add a `Program` data type to include multiple `Expr` sequentially
* _Exercise 2_ : Add location annotation to the syntax tree.
### 2\. Implement a simple parser combinator library
First thing we are going to do is define an Embedded Domain Specific Language (or EDSL) which we will use to define our language's parser. This is often referred to as a parser combinator library. The reason we are doing it is strictly for learning purposes; Haskell has great parsing libraries and you should definitely use them when building real software, or even when just experimenting. One such library is [megaparsec][8].
First, let's talk about the idea behind our parser library implementation. In its essence, our parser is a function that takes some input, might consume some or all of that input, and returns the value it managed to parse along with the rest of the input it didn't parse yet, or throws an error if it fails. Let's write that down.
```
newtype Parser a
  = Parser (ParseString -> Either ParseError (a, ParseString))

data ParseString
  = ParseString Name (Int, Int) String

data ParseError
  = ParseError ParseString Error

type Error = String
```
Here we defined three main new types.
First, `Parser a`, is the parsing function we described before.
Second, `ParseString` is our input or state we carry along. It has three significant parts:
* `Name`: This is the name of the source
* `(Int, Int)`: This is the current location in the source
* `String`: This is the remaining string left to parse
Third, `ParseError` contains the current state of the parser and an error message.
Now we want our parser to be flexible, so we will define a few instances for common type classes for it. These instances will allow us to combine small parsers to make bigger parsers (hence the name 'parser combinators').
The first one is a `Functor` instance. We want a `Functor` instance because we want to be able to define a parser using another parser simply by applying a function on the parsed value. We will see an example of this when we define the parser for our language.
```
instance Functor Parser where
  fmap f (Parser parser) =
    Parser (\str -> first f <$> parser str)
```
The second instance is an `Applicative` instance. One common use case for this instance is to lift a pure function over multiple parsers.
```
instance Applicative Parser where
  pure x = Parser (\str -> Right (x, str))
  (Parser p1) <*> (Parser p2) =
    Parser $
      \str -> do
        (f, rest) <- p1 str
        (x, rest') <- p2 rest
        pure (f x, rest')
```
(Note: _We will also implement a Monad instance so we can use do notation here._)
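As a tiny illustration of what the `Applicative` instance buys us, here is a hypothetical combinator (not part of the article's code) that runs two parsers in sequence and pairs their results:

```
-- Pair the results of two parsers by lifting (,) over them.
both :: Parser a -> Parser b -> Parser (a, b)
both p q = (,) <$> p <*> q
```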
The third instance is an `Alternative` instance. We want to be able to supply an alternative parser in case one fails.
```
instance Alternative Parser where
  empty = Parser (`throwErr` "Failed consuming input")
  (Parser p1) <|> (Parser p2) =
    Parser $
      \pstr -> case p1 pstr of
        Right result -> Right result
        Left _ -> p2 pstr
```
The fourth instance is a `Monad` instance, so we'll be able to chain parsers.
```
instance Monad Parser where
  (Parser p1) >>= f =
    Parser $
      \str -> case p1 str of
        Left err -> Left err
        Right (rs, rest) ->
          case f rs of
            Parser parser -> parser rest
```
Next, let's define a way to run a parser and a utility function for failure:
```
runParser :: String -> String -> Parser a -> Either ParseError (a, ParseString)
runParser name str (Parser parser) = parser $ ParseString name (0,0) str

throwErr :: ParseString -> String -> Either ParseError a
throwErr ps@(ParseString name (row,col) _) errMsg =
  Left $ ParseError ps $ unlines
    [ "*** " ++ name ++ ": " ++ errMsg
    , "* On row " ++ show row ++ ", column " ++ show col ++ "."
    ]
```
Now we'll start implementing the combinators which are the API and heart of the EDSL.
First, we'll define `oneOf`. `oneOf` will succeed if one of the characters in the list supplied to it is the next character of the input and will fail otherwise.
```
oneOf :: [Char] -> Parser Char
oneOf chars =
  Parser $ \case
    ps@(ParseString name (row, col) str) ->
      case str of
        [] -> throwErr ps "Cannot read character of empty string"
        (c:cs) ->
          if c `elem` chars
          then Right (c, ParseString name (row, col+1) cs)
          else throwErr ps $ unlines ["Unexpected character " ++ [c], "Expecting one of: " ++ show chars]
```
`optional` will stop a parser from throwing an error. It will just return `Nothing` on failure.
```
optional :: Parser a -> Parser (Maybe a)
optional (Parser parser) =
  Parser $
    \pstr -> case parser pstr of
      Left _ -> Right (Nothing, pstr)
      Right (x, rest) -> Right (Just x, rest)
```
`many` will try to run a parser repeatedly until it fails. When it does, it'll return a list of successful parses. `many1` will do the same, but will throw an error if it fails to parse at least once.
```
many :: Parser a -> Parser [a]
many parser = go []
  where go cs = (parser >>= \c -> go (c:cs)) <|> pure (reverse cs)

many1 :: Parser a -> Parser [a]
many1 parser =
  (:) <$> parser <*> many parser
```
These next few parsers use the combinators we defined to make more specific parsers:
```
char :: Char -> Parser Char
char c = oneOf [c]

string :: String -> Parser String
string = traverse char

space :: Parser Char
space = oneOf " \n"

spaces :: Parser String
spaces = many space

spaces1 :: Parser String
spaces1 = many1 space

withSpaces :: Parser a -> Parser a
withSpaces parser =
  spaces *> parser <* spaces

parens :: Parser a -> Parser a
parens parser =
  (withSpaces $ char '(')
    *> withSpaces parser
    <* (spaces *> char ')')

sepBy :: Parser a -> Parser b -> Parser [b]
sepBy sep parser = do
  frst <- optional parser
  rest <- many (sep *> parser)
  pure $ maybe rest (:rest) frst
```
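To see how these compose, here is a hypothetical parser, built only from the combinators above, for a parenthesized, space-separated list of lowercase words:

```
-- e.g. "(foo bar baz)" ==> ["foo", "bar", "baz"]
wordList :: Parser [String]
wordList = parens (sepBy spaces1 (many1 (oneOf ['a'..'z'])))
```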
Now we have everything we need to start defining a parser for our language.
* _Exercise_ : implement an EOF (end of file/input) parser combinator.
### 3\. Implementing a parser for our language
To define our parser, we'll use the top-down method.
```
parseExpr :: Parser Expr
parseExpr = fmap ATOM parseAtom <|> fmap LIST parseList

parseList :: Parser [Expr]
parseList = parens $ sepBy spaces1 parseExpr

parseAtom :: Parser Atom
parseAtom = parseSymbol <|> parseInt

parseSymbol :: Parser Atom
parseSymbol = fmap Symbol parseName
```
Notice that these four functions are a very high-level description of our language. This demonstrates why Haskell is so nice for parsing. Still, after defining the high-level parts, we still need to define the lower-level `parseName` and `parseInt`.
What characters can we use as names in our language? Let's decide to use lowercase letters, digits and underscores, where the first character must be a letter.
```
parseName :: Parser Name
parseName = do
  c <- oneOf ['a'..'z']
  cs <- many $ oneOf $ ['a'..'z'] ++ "0123456789" ++ "_"
  pure (c:cs)
```
For integers, we want a sequence of digits, optionally preceded by '-':
```
parseInt :: Parser Atom
parseInt = do
  sign <- optional $ char '-'
  num <- many1 $ oneOf "0123456789"
  let result = read $ maybe num (:num) sign
  pure $ Int result
```
Lastly, we'll define a function to run a parser and get back an `Expr` or an error message.
```
runExprParser :: Name -> String -> Either String Expr
runExprParser name str =
  case runParser name str (withSpaces parseExpr) of
    Left (ParseError _ errMsg) -> Left errMsg
    Right (result, _) -> Right result
```
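A quick sanity check in GHCi might look like this (output reformatted for readability):

```
-- runExprParser "example" "(add 1 (mul 2 3))"
-- ==> Right (LIST [ ATOM (Symbol "add")
--                 , ATOM (Int 1)
--                 , LIST [ATOM (Symbol "mul"), ATOM (Int 2), ATOM (Int 3)]])
```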
* _Exercise 1_ : Write a parser for the `Program` type you defined in the first section
* _Exercise 2_ : Rewrite `parseName` in Applicative style
* _Exercise 3_ : Find a way to handle the overflow case in `parseInt` instead of using `read`.
### 4\. Implement a pretty printer for our language
One more thing we'd like to do is be able to print our programs as source code. This is useful for better error messages.
```
printExpr :: Expr -> String
printExpr = printExpr' False 0

printAtom :: Atom -> String
printAtom = \case
  Symbol s -> s
  Int i -> show i

printExpr' :: Bool -> Int -> Expr -> String
printExpr' doindent level = \case
  ATOM a -> indent (bool 0 level doindent) (printAtom a)
  LIST (e:es) ->
    indent (bool 0 level doindent) $
      concat
        [ "("
        , printExpr' False (level + 1) e
        , bool "\n" "" (null es)
        , intercalate "\n" $ map (printExpr' True (level + 1)) es
        , ")"
        ]

indent :: Int -> String -> String
indent tabs e = concat (replicate tabs " ") ++ e
```
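With the one-space `indent` defined above, the expression from the earlier example prints like this (the exact width depends on what `indent` pads with):

```
-- putStrLn $ printExpr $
--   LIST [ATOM (Symbol "add"), ATOM (Int 1),
--         LIST [ATOM (Symbol "mul"), ATOM (Int 2), ATOM (Int 3)]]
--
-- (add
--  1
--  (mul
--   2
--   3))
```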
* _Exercise_ : Write a pretty printer for the `Program` type you defined in the first section
Okay, we wrote around 200 lines so far of what's typically called the front-end of the compiler. We have around 150 more lines to go and three more tasks: We need to define a subset of JS for our usage, define the translator from our language to that subset, and glue the whole thing together. Let's go!
### 5\. Define a subset of JavaScript for our usage
First, we'll define the subset of JavaScript we are going to use:
```
data JSExpr
  = JSInt Int
  | JSSymbol Name
  | JSBinOp JSBinOp JSExpr JSExpr
  | JSLambda [Name] JSExpr
  | JSFunCall JSExpr [JSExpr]
  | JSReturn JSExpr
  deriving (Eq, Show, Read)

type JSBinOp = String
```
This data type represents a JavaScript expression. We have two atoms, `JSInt` and `JSSymbol`, to which we'll translate our language's `Atom`; we have `JSBinOp` to represent a binary operation such as `+` or `*`; we have `JSLambda` for anonymous functions, same as our lambda expressions; we have `JSFunCall`, which we'll use both for calling functions and for introducing new names as in `let`; and we have `JSReturn` to return values from functions, as that's required in JavaScript.
This `JSExpr` type is an **abstract representation** of a JavaScript expression. We will translate our own `Expr` (which is an abstract representation of our language's expressions) to `JSExpr`, and from there to JavaScript. But in order to do that we need to take a `JSExpr` and produce JavaScript code from it. We'll do that by pattern matching on `JSExpr` recursively and emitting JS code as a `String`. This is basically the same thing we did in `printExpr`. We'll also track the scoping of elements so we can indent the generated code in a nice way.
```
printJSOp :: JSBinOp -> String
printJSOp op = op

printJSExpr :: Bool -> Int -> JSExpr -> String
printJSExpr doindent tabs = \case
  JSInt i -> show i
  JSSymbol name -> name
  JSLambda vars expr -> (if doindent then indent tabs else id) $ unlines
    ["function(" ++ intercalate ", " vars ++ ") {"
    ,indent (tabs+1) $ printJSExpr False (tabs+1) expr
    ] ++ indent tabs "}"
  JSBinOp op e1 e2 -> "(" ++ printJSExpr False tabs e1 ++ " " ++ printJSOp op ++ " " ++ printJSExpr False tabs e2 ++ ")"
  JSFunCall f exprs -> "(" ++ printJSExpr False tabs f ++ ")(" ++ intercalate ", " (fmap (printJSExpr False tabs) exprs) ++ ")"
  JSReturn expr -> (if doindent then indent tabs else id) $ "return " ++ printJSExpr False tabs expr ++ ";"
```
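A couple of quick output examples (my own illustration, assuming the definitions above):
```
-- >>> putStrLn $ printJSExpr False 0 (JSBinOp "+" (JSInt 1) (JSInt 2))
-- (1 + 2)
--
-- >>> putStrLn $ printJSExpr False 0 (JSFunCall (JSSymbol "console.log") [JSInt 42])
-- (console.log)(42)
```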
* _Exercise 1_ : Add a `JSProgram` type that will hold multiple `JSExpr` and create a function `printJSExprProgram` to generate code for it.
* _Exercise 2_ : Add a new type of `JSExpr` - `JSIf`, and generate code for it.
### 6. Implement a code translator to the JavaScript subset we defined
We are almost there. In this section we'll create a function to translate `Expr` to `JSExpr`.
The basic idea is simple: we'll translate `ATOM` to `JSSymbol` or `JSInt`, and `LIST` either to a function call or to one of the special cases we'll translate below.
```
type TransError = String

translateToJS :: Expr -> Either TransError JSExpr
translateToJS = \case
  ATOM (Symbol s) -> pure $ JSSymbol s
  ATOM (Int i) -> pure $ JSInt i
  LIST xs -> translateList xs

translateList :: [Expr] -> Either TransError JSExpr
translateList = \case
  [] -> Left "translating empty list"
  ATOM (Symbol s):xs
    | Just f <- lookup s builtins ->
      f xs
  f:xs ->
    JSFunCall <$> translateToJS f <*> traverse translateToJS xs
```
`builtins` is a list of special cases to translate, like `lambda` and `let`. Every case gets the list of arguments for it, verifies that it's syntactically valid, and translates it to the equivalent `JSExpr`.
```
type Builtin = [Expr] -> Either TransError JSExpr
type Builtins = [(Name, Builtin)]

builtins :: Builtins
builtins =
  [("lambda", transLambda)
  ,("let", transLet)
  ,("add", transBinOp "add" "+")
  ,("mul", transBinOp "mul" "*")
  ,("sub", transBinOp "sub" "-")
  ,("div", transBinOp "div" "/")
  ,("print", transPrint)
  ]
```
In our case, we treat built-in special forms as special and not first class, so we will not be able to use them as first-class functions and such.
We'll translate a Lambda to an anonymous function:
```
transLambda :: [Expr] -> Either TransError JSExpr
transLambda = \case
  [LIST vars, body] -> do
    vars' <- traverse fromSymbol vars
    JSLambda vars' <$> (JSReturn <$> translateToJS body)

  vars ->
    Left $ unlines
      ["Syntax error: unexpected arguments for lambda."
      ,"expecting 2 arguments, the first is the list of vars and the second is the body of the lambda."
      ,"In expression: " ++ show (LIST $ ATOM (Symbol "lambda") : vars)
      ]

fromSymbol :: Expr -> Either String Name
fromSymbol (ATOM (Symbol s)) = Right s
fromSymbol e = Left $ "cannot bind value to non symbol type: " ++ show e
```
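For example (my own sketch, combining the parser from section 3 with the translator), a lambda translates like this:
```
-- >>> translateToJS =<< runExprParser "demo" "(lambda (x) (add x 1))"
-- Right (JSLambda ["x"] (JSReturn (JSBinOp "+" (JSSymbol "x") (JSInt 1))))
```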
We'll translate `let` to a definition of a function with the relevant named arguments and call it with the values, thus introducing the variables in that scope:
```
transLet :: [Expr] -> Either TransError JSExpr
transLet = \case
  [LIST binds, body] -> do
    (vars, vals) <- letParams binds
    vars' <- traverse fromSymbol vars
    JSFunCall . JSLambda vars' <$> (JSReturn <$> translateToJS body) <*> traverse translateToJS vals
    where
      letParams :: [Expr] -> Either Error ([Expr],[Expr])
      letParams = \case
        [] -> pure ([],[])
        LIST [x,y] : rest -> ((x:) *** (y:)) <$> letParams rest
        x : _ -> Left ("Unexpected argument in let list in expression:\n" ++ printExpr x)

  vars ->
    Left $ unlines
      ["Syntax error: unexpected arguments for let."
      ,"expecting 2 arguments, the first is the list of var/val pairs and the second is the let body."
      ,"In expression:\n" ++ printExpr (LIST $ ATOM (Symbol "let") : vars)
      ]
```
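To see the function-call trick concretely, here is a sketch (my own) using the `compile` function defined in section 7 below; the bound value simply becomes the argument of an immediately invoked function:
```
-- >>> either putStrLn putStrLn $ compile "(let ((x 1)) (add x 1))"
-- (function(x) {
--   return (x + 1);
-- })(1)
```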
We'll translate an operation that can work on multiple arguments to a chain of binary operations. For example: `(add 1 2 3)` will become `(1 + 2) + 3`, since `foldl1` nests to the left.
```
transBinOp :: Name -> Name -> [Expr] -> Either TransError JSExpr
transBinOp f _ [] = Left $ "Syntax error: '" ++ f ++ "' expected at least 1 argument, got: 0"
transBinOp _ _ [x] = translateToJS x
transBinOp _ f list = foldl1 (JSBinOp f) <$> traverse translateToJS list
```
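A quick check of that left-nesting (my own example, assuming the definitions above):
```
-- >>> translateToJS =<< runExprParser "demo" "(add 1 2 3)"
-- Right (JSBinOp "+" (JSBinOp "+" (JSInt 1) (JSInt 2)) (JSInt 3))
```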
And we'll translate a `print` as a call to `console.log`:
```
transPrint :: [Expr] -> Either TransError JSExpr
transPrint [expr] = JSFunCall (JSSymbol "console.log") . (:[]) <$> translateToJS expr
transPrint xs = Left $ "Syntax error. print expected 1 argument, got: " ++ show (length xs)
```
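For example (my own sketch, assuming the definitions above):
```
-- >>> translateToJS =<< runExprParser "demo" "(print 42)"
-- Right (JSFunCall (JSSymbol "console.log") [JSInt 42])
```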
Notice that we could have skipped verifying the syntax here if we'd parsed those as special cases of `Expr`.
* _Exercise 1_ : Translate `Program` to `JSProgram`
* _Exercise 2_ : add a special case for `if Expr Expr Expr` and translate it to the `JSIf` case you implemented in the last exercise
### 7. Glue it all together
Finally, we are going to glue this all together. We'll:
1. Read a file
2. Parse it to `Expr`
3. Translate it to `JSExpr`
4. Emit JavaScript code to the standard output
We'll also enable a few flags for testing:
* `--e` will parse and print the abstract representation of the expression (`Expr`)
* `--pp` will parse and pretty print
* `--jse` will parse, translate and print the abstract representation of the resulting JS (`JSExpr`)
* `--ppc` will parse, pretty print and compile
```
main :: IO ()
main = getArgs >>= \case
  [file] ->
    printCompile =<< readFile file

  ["--e",file] ->
    either putStrLn print . runExprParser "--e" =<< readFile file

  ["--pp",file] ->
    either putStrLn (putStrLn . printExpr) . runExprParser "--pp" =<< readFile file

  ["--jse",file] ->
    either print (either putStrLn print . translateToJS) . runExprParser "--jse" =<< readFile file

  ["--ppc",file] ->
    either putStrLn (either putStrLn putStrLn) . fmap (compile . printExpr) . runExprParser "--ppc" =<< readFile file

  _ ->
    putStrLn $ unlines
      ["Usage: runghc Main.hs [ --e, --pp, --jse, --ppc ] <filename>"
      ,"--e   print the Expr"
      ,"--pp  pretty print Expr"
      ,"--jse print the JSExpr"
      ,"--ppc pretty print Expr and then compile"
      ]

printCompile :: String -> IO ()
printCompile = either putStrLn putStrLn . compile

compile :: String -> Either Error String
compile str = printJSExpr False 0 <$> (translateToJS =<< runExprParser "compile" str)
```
That's it. We have a compiler from our language to JS. Again, you can view the full source file [here][9].
Running our compiler with the example from the first section yields this JavaScript code:
```
$ runhaskell Lisp.hs example.lsp
(function(compose, square, add1) {
  return (console.log)(((compose)(square, add1))(5));
})(function(f, g) {
  return function(x) {
    return (f)((g)(x));
  };
}, function(x) {
  return (x * x);
}, function(x) {
  return (x + 1);
})
```
If you have node.js installed on your computer, you can run this code by running:
```
$ runhaskell Lisp.hs example.lsp | node -p
36
undefined
```
* _Final exercise_ : instead of compiling an expression, compile a program of multiple expressions.
--------------------------------------------------------------------------------
via: https://gilmi.me/blog/post/2016/10/14/lisp-to-js
作者:[ Gil Mizrahi ][a]
选题:[oska874][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://gilmi.me/home
[b]:https://github.com/oska874
[1]:https://gilmi.me/blog/authors/Gil
[2]:https://gilmi.me/blog/tags/compilers
[3]:https://gilmi.me/blog/tags/fp
[4]:https://gilmi.me/blog/tags/haskell
[5]:https://gilmi.me/blog/tags/lisp
[6]:https://gilmi.me/blog/tags/parsing
[7]:https://gist.github.com/soupi/d4ff0727ccb739045fad6cdf533ca7dd
[8]:https://mrkkrp.github.io/megaparsec/
[9]:https://gist.github.com/soupi/d4ff0727ccb739045fad6cdf533ca7dd
[10]:https://gilmi.me/blog/post/2016/10/14/lisp-to-js

View File

@ -1,140 +0,0 @@
[translating by dianbanjiu] The Best Linux Distributions for 2018
============================================================
![Linux distros 2018](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-distros-2018.jpg?itok=Z8sdx4Zu "Linux distros 2018")
Jack Wallen shares his picks for the best Linux distributions for 2018. [Creative Commons Zero][6] Pixabay
Its a new year and the landscape of possibility is limitless for Linux. Whereas 2017 brought about some big changes to a number of Linux distributions, I believe 2018 will bring serious stability and market share growth—for both the server and the desktop.
For those who might be looking to migrate to the open source platform (or those looking to switch it up), what are the best choices for the coming year? If you hop over to [Distrowatch][14], youll find a dizzying array of possibilities, some of which are on the rise, and some that are seeing quite the opposite effect.
So, which Linux distributions will 2018 favor? I have my thoughts. In fact, Im going to share them with you now.
Similar to what I did for [last year's list][15], I'm going to make this task easier and break down the list, as follows: sysadmin, lightweight distribution, desktop, distro with more to prove, IoT, and server. These categories should cover the needs of any type of Linux user.
With that said, lets get to the list of best Linux distributions for 2018.
### Best distribution for sysadmins
[Debian][16] isnt often seen on “best of” lists. It should be. Why? If you consider that Debian is the foundation for Ubuntu (which is, in turn, the foundation for so many distributions), its pretty easy to understand why this distribution should find its way on many a list. But why for administrators? Ive considered this for two very important reasons:
* Ease of use
* Extreme stability
Because Debian uses the dpkg and apt package managers, it makes for an incredibly easy to use environment. And because Debian offers one of the most stable Linux platforms, it makes for an ideal environment for so many things: desktops, servers, testing, development. Although Debian may not include the plethora of applications found in last year's winner (for this category), [Parrot Linux][17], it is very easy to add any/all the necessary applications you need to get the job done. And because Debian can be installed with your choice of desktop (Cinnamon, GNOME, KDE, LXDE, Mate, or Xfce), you can be sure the interface will meet your needs.
![debian](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/debian.jpg?itok=XkHHG692 "debian")
Figure 1: The GNOME desktop running on top of Debian 9.3. [Used with permission][1]
At the moment, Debian is listed at #2 on Distrowatch. Download it, install it, and then make it serve a specific purpose. It may not be flashy, but Debian is a sysadmin dream come true.
### Best lightweight distribution
Lightweight distributions serve a very specific purpose—giving new life to older, lesser-powered machines. But that doesn't mean these particular distributions should only be considered for your older hardware. If speed is your ultimate need, you might want to see just how fast this category of distribution will run on your modern machine.
Topping the list of lightweight distributions for 2018 is [Lubuntu][18]. Although there are plenty of options in this category, few come even close to the next-to-zero learning curve found on this distribution. And although Lubuntu's footprint isn't quite as small as Puppy Linux, thanks to it being a member of the Ubuntu family, the ease of use gained with this distribution makes up for it. But fear not, Lubuntu won't bog down your older hardware. The requirements are:
* CPU: Pentium 4 or Pentium M or AMD K8
* For local applications, Lubuntu can function with 512MB of RAM. For online usage (YouTube, Google+, Google Drive, and Facebook), 1GB of RAM is recommended.
Lubuntu makes use of the LXDE desktop (Figure 2), which means users new to Linux wont have the slightest problem working with this distribution. The short list of included apps (such as Abiword, Gnumeric, and Firefox) are all lightning fast and user-friendly.
![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/lubuntu_2.jpg?itok=BkTnh7hU "Lubuntu")
Figure 2: The Lubuntu LXDE desktop in action. [Used with permission][2]
Lubuntu can make short and easy work of breathing life into hardware that is up to ten years old.
### Best desktop distribution
For the second year in a row, [Elementary OS][19] tops my list of best Desktop distribution. For many, the leader on the Desktop is [Linux Mint][20] (which is a very fine flavor). However, for my money, its hard to beat the ease of use and stability of Elementary OS. Case in point, I was certain the release of [Ubuntu][21] 17.10 would have me migrating back to Canonicals distribution. Very soon after migrating to the new GNOME-Friendly Ubuntu, I found myself missing the look, feel, and reliability of Elementary OS (Figure 3). After two weeks with Ubuntu, I was back to Elementary OS.
![Elementary OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaros.jpg?itok=SRZC2vkg "Elementary OS")
Figure 3: The Pantheon desktop is a work of art as a desktop. [Used with permission][3]
Anyone that has given Elementary OS a go immediately feels right at home. The Pantheon desktop is a perfect combination of slickness and user-friendliness. And with each update, it only gets better.
Although Elementary OS stands at #6 on the Distrowatch page hit ranking, I predict it will find itself climbing to at least the third spot by the end of 2018. The Elementary developers are very much in tune with what users want. They listen and they evolve. However, the current state of this distribution is so good, it seems all they could do to better it is a bit of polish here and there. For anyone looking for a desktop that offers a unified look and feel throughout the UI, Elementary OS is hard to beat. If you need a desktop that offers an outstanding ratio of reliability and ease of use, Elementary OS is your distribution.
### Best distro for those with something to prove
For the longest time [Gentoo][22] sat on top of the “show us your skills” distribution list. However, I think its time Gentoo took a backseat to the true leader of “something to prove”: [Linux From Scratch][23]. You may not think this fair, as LFS isnt actually a distribution, but a project that helps users create their own Linux distribution. But, seriously, if you want to go a very long way to proving your Linux knowledge, what better way than to create your own distribution? From the LFS project, you can build a custom Linux system, from the ground up... entirely from source code. So, if you really have something to prove, download the [Linux From Scratch Book][24] and start building.
### Best distribution for IoT
For the second year in a row [Ubuntu Core][25] wins, hands down. Ubuntu Core is a tiny, transactional version of Ubuntu, built specifically for embedded and IoT devices. What makes Ubuntu Core so perfect for IoT is that it places the focus on snap packages—universal packages that can be installed onto a platform, without interfering with the base system. These snap packages contain everything they need to run (including dependencies), so there is no worry the installation will break the operating system (or any other installed software). Also, snaps are very easy to upgrade and run in an isolated sandbox, making them a great solution for IoT.
Another area of security built into Ubuntu Core is the login mechanism. Ubuntu Core works with Ubuntu One ssh keys, such that the only way to log into the system is via uploaded ssh keys to a [Ubuntu One account][26] (Figure 4). This makes for a heightened security for your IoT devices.
![ Ubuntu Core](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntucore.jpg?itok=Ydfq8NKH " Ubuntu Core")
Figure 4: The Ubuntu Core screen indicating remote access enabled via an Ubuntu One user. [Used with permission][4]
### Best server distribution
This where things get a bit confusing. The primary reason is support. If you need commercial support your best choice might be, at first blush, [Red Hat Enterprise Linux][27]. Red Hat has proved itself, year after year, to not only be one of the strongest enterprise server platforms on the planet, but the single most profitable open source businesses (with over $2 billion in annual revenue).
However, Red Hat isnt far and away the only server distribution. In fact, Red Hat doesnt even dominate every aspect of Enterprise server computing. If you look at cloud statistics on Amazons Elastic Compute Cloud alone, Ubuntu blows away Red Hat Enterprise Linux. According to [The Cloud Market][28], EC2 statistics show RHEL at under 100k deployments, whereas Ubuntu is over 200k deployments. Thats significant.
The end result is that Ubuntu has pretty much taken over as the leader in the cloud. And if you combine that with Ubuntus ease of working with and managing containers, it starts to become clear that Ubuntu Server is the clear winner for the Server category. And, if you need commercial support, Canonical has you covered, with [Ubuntu Advantage][29].
The one caveat to Ubuntu Server is that it defaults to a text-only interface (Figure 5). You can install a GUI, if needed, but working with the Ubuntu Server command line is pretty straightforward (and something every Linux administrator should know).
![Ubuntu server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntuserver_1.jpg?itok=qtFSUlee "Ubuntu server")
Figure 5: The Ubuntu server login, informing of updates. [Used with permission][5]
### The choice is yours
As I said before, these choices are all very subjective … but if youre looking for a great place to start, give these distributions a try. Each one can serve a very specific purpose and do it better than most. Although you may not agree with my particular picks, chances are youll agree that Linux offers amazing possibilities on every front. And, stay tuned for more “best distro” picks next week.
_Learn more about Linux through the free ["Introduction to Linux" ][13]course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/best-linux-distributions-2018
作者:[JACK WALLEN ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/creative-commons-zero
[7]:https://www.linux.com/files/images/debianjpg
[8]:https://www.linux.com/files/images/lubuntujpg-2
[9]:https://www.linux.com/files/images/elementarosjpg
[10]:https://www.linux.com/files/images/ubuntucorejpg
[11]:https://www.linux.com/files/images/ubuntuserverjpg-1
[12]:https://www.linux.com/files/images/linux-distros-2018jpg
[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[14]:https://distrowatch.com/
[15]:https://www.linux.com/news/learn/sysadmin/best-linux-distributions-2017
[16]:https://www.debian.org/
[17]:https://www.parrotsec.org/
[18]:http://lubuntu.me/
[19]:https://elementary.io/
[20]:https://linuxmint.com/
[21]:https://www.ubuntu.com/
[22]:https://www.gentoo.org/
[23]:http://www.linuxfromscratch.org/
[24]:http://www.linuxfromscratch.org/lfs/download.html
[25]:https://www.ubuntu.com/core
[26]:https://login.ubuntu.com/
[27]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[28]:http://thecloudmarket.com/stats#/by_platform_definition
[29]:https://buy.ubuntu.com/?_ga=2.177313893.113132429.1514825043-1939188204.1510782993

View File

@ -1,3 +1,5 @@
translating---geekpi
A Desktop GUI Application For NPM
======

View File

@ -1,3 +1,4 @@
Translating by qhwdw
Complete Sed Command Guide [Explained with Practical Examples]
======
In a previous article, I showed the [basic usage of Sed][1], the stream editor, on a practical use case. Today, be prepared to gain more insight about Sed as we will take an in-depth tour of the sed execution model. This will be also an opportunity to make an exhaustive review of all Sed commands and to dive into their details and subtleties. So, if you are ready, launch a terminal, [download the test files][2] and sit comfortably before your keyboard: we will start our exploration right now!

View File

@ -1,320 +0,0 @@
Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server
======
![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png)
This step-by-step tutorial walks you through how to install **Oracle VirtualBox** on an Ubuntu 18.04 LTS headless server. This guide also describes how to manage VirtualBox headless instances using **phpVirtualBox**, a web-based front-end tool for VirtualBox. The steps described below might also work on Debian and other Ubuntu derivatives such as Linux Mint. Let us get started.
### Prerequisites
Before installing Oracle VirtualBox, we need to do the following prerequisites in our Ubuntu 18.04 LTS server.
First of all, update the Ubuntu server by running the following commands one by one.
```
$ sudo apt update
$ sudo apt upgrade
$ sudo apt dist-upgrade
```
Next, install the following necessary packages:
```
$ sudo apt install build-essential dkms unzip wget
```
After installing all updates and necessary prerequisites, restart the Ubuntu server.
```
$ sudo reboot
```
### Install Oracle VirtualBox on Ubuntu 18.04 LTS server
Add Oracle VirtualBox official repository. To do so, edit **/etc/apt/sources.list** file:
```
$ sudo nano /etc/apt/sources.list
```
Add the following lines.
Here, I will be using Ubuntu 18.04 LTS, so I have added the following repository.
```
deb http://download.virtualbox.org/virtualbox/debian bionic contrib
```
![][2]
Replace the word **bionic** with your Ubuntu distribution's code name, such as xenial, vivid, utopic, trusty, raring, quantal, precise, lucid, jessie, wheezy, or squeeze.
Then, run the following command to add the Oracle public key:
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
```
For VirtualBox older versions, add the following key:
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
```
Next, update the software sources using command:
```
$ sudo apt update
```
Finally, install the latest Oracle VirtualBox version using the command:
```
$ sudo apt install virtualbox-5.2
```
### Adding users to VirtualBox group
We need to create and add our system user to the **vboxusers** group. You can either create a separate user and assign it to vboxusers group or use the existing user. I dont want to create a new user, so I added my existing user to this group. Please note that if you use a separate user for virtualbox, you must log out and log in to that particular user and do the rest of the steps.
I am going to use my username named **sk** , so, I ran the following command to add it to the vboxusers group.
```
$ sudo usermod -aG vboxusers sk
```
Now, run the following command to check if virtualbox kernel modules are loaded or not.
```
$ sudo systemctl status vboxdrv
```
![][3]
As you can see in the above screenshot, the vboxdrv module is loaded and running!
For older Ubuntu versions, run:
```
$ sudo /etc/init.d/vboxdrv status
```
If the virtualbox module doesnt start, run the following command to start it.
```
$ sudo /etc/init.d/vboxdrv setup
```
Great! We have successfully installed VirtualBox and started virtualbox module. Now, let us go ahead and install Oracle VirtualBox extension pack.
### Install VirtualBox Extension pack
The VirtualBox Extension pack provides the following functionalities to the VirtualBox guests.
* The virtual USB 2.0 (EHCI) device
* VirtualBox Remote Desktop Protocol (VRDP) support
* Host webcam passthrough
* Intel PXE boot ROM
* Experimental support for PCI passthrough on Linux hosts
Download the latest Extension pack for VirtualBox 5.2.x from [**here**][4].
```
$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```
Install Extension pack using command:
```
$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```
Congratulations! We have successfully installed Oracle VirtualBox with the extension pack on the Ubuntu 18.04 LTS server. It is time to deploy virtual machines. Refer to the [**virtualbox official guide**][5] to start creating and managing virtual machines on the command line.
Not everyone is command line expert. Some of you might want to create and use virtual machines graphically. No worries! Here is where **phpVirtualBox** comes in handy!!
### About phpVirtualBox
**phpVirtualBox** is a free, web-based front-end to Oracle VirtualBox. It is written using PHP language. Using phpVirtualBox, we can easily create, delete, manage and administer virtual machines via a web browser from any remote system on the network.
### Install phpVirtualBox in Ubuntu 18.04 LTS
Since it is a web-based tool, we need to install Apache web server, PHP and some php modules.
To do so, run:
```
$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml
```
Then, download the phpVirtualBox 5.2.x version from the [**releases page**][6]. Please note that we have installed VirtualBox 5.2, so we must install phpVirtualBox version 5.2 as well.
To download it, run:
```
$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip
```
Extract the downloaded archive with command:
```
$ unzip 5.2-0.zip
```
This command will extract the contents of the 5.2-0.zip file into a folder named “phpvirtualbox-5.2-0”. Now, copy or move the contents of this folder to your Apache web server root folder.
```
$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox
```
Assign the proper permissions to the phpvirtualbox folder.
```
$ sudo chmod 777 /var/www/html/phpvirtualbox/
```
Next, let us configure phpVirtualBox.
Copy the sample config file as shown below.
```
$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php
```
Edit phpVirtualBox **config.php** file:
```
$ sudo nano /var/www/html/phpvirtualbox/config.php
```
Find the following lines and replace the username and password with your system user (The same username that we used in “Adding users to VirtualBox group” section).
In my case, my Ubuntu system username is **sk** , and its password is **ubuntu**.
```
var $username = 'sk';
var $password = 'ubuntu';
```
![][7]
Save and close the file.
Next, create a new file called **/etc/default/virtualbox** :
```
$ sudo nano /etc/default/virtualbox
```
Add the following line. Replace sk with your own username.
```
VBOXWEB_USER=sk
```
Finally, Reboot your system or simply restart the following services to complete the configuration.
```
$ sudo systemctl restart vboxweb-service
$ sudo systemctl restart vboxdrv
$ sudo systemctl restart apache2
```
### Adjust firewall to allow Apache web server
By default, the Apache web server can't be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow HTTP and HTTPS traffic via UFW by following the below steps.
First, let us view which applications have installed a profile using command:
```
$ sudo ufw app list
Available applications:
Apache
Apache Full
Apache Secure
OpenSSH
```
As you can see, Apache and OpenSSH applications have installed UFW profiles.
If you look into the **“Apache Full”** profile, you will see that it enables traffic to the ports **80** and **443** :
```
$ sudo ufw app info "Apache Full"
Profile: Apache Full
Title: Web Server (HTTP,HTTPS)
Description: Apache v2 is the next generation of the omnipresent Apache web
server.
Ports:
80,443/tcp
```
Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile:
```
$ sudo ufw allow in "Apache Full"
Rules updated
Rules updated (v6)
```
If you want to allow only HTTP (80) traffic and not HTTPS, run:
```
$ sudo ufw allow in "Apache"
```
### Access phpVirtualBox Web console
Now, go to any remote system that has graphical web browser.
In the address bar, type: **<http://IP-address-of-virtualbox-headless-server/phpvirtualbox>**.
In my case, I navigated to this link **<http://192.168.225.22/phpvirtualbox>**
You should see the following screen. Enter the phpVirtualBox administrative user credentials.
The default username and password of phpVirtualBox are **admin** / **admin**.
![][8]
Congratulations! You will now be greeted with phpVirtualBox dashboard.
![][9]
Now, start creating your VMs and manage them from phpvirtualbox dashboard. As I mentioned earlier, You can access the phpVirtualBox from any system in the same network. All you need is a web browser and the username and password of phpVirtualBox.
If you haven't enabled virtualization support in the BIOS of the host system (not the guest), phpVirtualBox allows you to create 32-bit guests only. To install 64-bit guest systems, you must enable virtualization in your host system's BIOS. Look for an option that is something like "virtualization" or "hypervisor" in your BIOS and make sure it is enabled.
Thats it. Hope this helps. If you find this guide useful, please share it on your social networks and support us.
More good stuffs to come. Stay tuned!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png
[4]:https://www.virtualbox.org/wiki/Downloads
[5]:http://www.virtualbox.org/manual/ch08.html
[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases
[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png
[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png
[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png

View File

@ -1,3 +1,4 @@
Translating by qhwdw
Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS
======

View File

@ -0,0 +1,205 @@
Why is Python so slow?
============================================================
Python is booming in popularity. It is used in DevOps, Data Science, Web Development and Security.
It does not, however, win any medals for speed.
![](https://cdn-images-1.medium.com/max/1200/0*M2qZQsVnDS-4i5zc.jpg)
> How does Java compare in terms of speed to C or C++ or C# or Python? The answer depends greatly on the type of application youre running. No benchmark is perfect, but The Computer Language Benchmarks Game is [a good starting point][5].
Ive been referring to the Computer Language Benchmarks Game for over a decade; compared with other languages like Java, C#, Go, JavaScript, C++, Python is [one of the slowest][6]. This includes [JIT][7] (C#, Java) and [AOT][8] (C, C++) compilers, as well as interpreted languages like JavaScript.
_NB: When I say “Python”, Im talking about the reference implementation of the language, CPython. I will refer to other runtimes in this article._
> I want to answer this question: When Python completes a comparable application 2-10x slower than another language, _why is it slow_ and can't we _make it faster_?
Here are the top theories:
* “ _Its the GIL (Global Interpreter Lock)_
* “ _Its because its interpreted and not compiled_
* “ _Its because its a dynamically typed language_
Which one of these reasons has the biggest impact on performance?
### “Its the GIL”
Modern computers come with CPUs that have multiple cores, and sometimes multiple processors. In order to utilise all this extra processing power, the Operating System defines a low-level structure called a thread, where a process (e.g. Chrome Browser) can spawn multiple threads and have instructions for the system inside. That way if one process is particularly CPU-intensive, that load can be shared across the cores and this effectively makes most applications complete tasks faster.
My Chrome Browser, as Im writing this article, has 44 threads open. Keep in mind that the structure and API of threading are different between POSIX-based (e.g. Mac OS and Linux) and Windows OS. The operating system also handles the scheduling of threads.
If you haven't done multi-threaded programming before, a concept you'll need to quickly become familiar with is locks. Unlike a single-threaded process, you need to ensure that when changing variables in memory, multiple threads don't try to access/change the same memory address at the same time.
When CPython creates variables, it allocates the memory and then counts how many references to that variable exist, this is a concept known as reference counting. If the number of references is 0, then it frees that piece of memory from the system. This is why creating a “temporary” variable within say, the scope of a for loop, doesnt blow up the memory consumption of your application.
The challenge then becomes when variables are shared within multiple threads, how CPython locks the reference count. There is a “global interpreter lock” that carefully controls thread execution. The interpreter can only execute one operation at a time, regardless of how many threads it has.
#### What does this mean to the performance of Python application?
If you have a single-threaded, single-interpreter application, the GIL makes no difference to the speed: removing it would have no impact on the performance of your code.
If you wanted to implement concurrency within a single interpreter (Python process) by using threading, and your threads were IO intensive (e.g. Network IO or Disk IO), you would see the consequences of GIL-contention.
![](https://cdn-images-1.medium.com/max/1600/0*S_iSksY5oM5H1Qf_.png)
From David Beazleys GIL visualised post [http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1]
If you have a web application (e.g. Django) and you're using WSGI, then each request to your web app is handled by a separate Python interpreter, so there is only 1 lock _per_ request. Because the Python interpreter is slow to start, some WSGI implementations have a "Daemon Mode" [which keeps Python process(es) on the go for you.][9]
#### What about other Python runtimes?
[PyPy has a GIL][10] and it is typically >3x faster than CPython.
[Jython does not have a GIL][11] because a Python thread in Jython is represented by a Java thread and benefits from the JVM memory-management system.
#### How does JavaScript do this?
Well, firstly all Javascript engines [use mark-and-sweep Garbage Collection][12]. As stated, the primary need for the GIL is CPythons memory-management algorithm.
JavaScript does not have a GIL, but its also single-threaded so it doesnt require one. JavaScripts event-loop and Promise/Callback pattern are how asynchronous-programming is achieved in place of concurrency. Python has a similar thing with the asyncio event-loop.
### “Its because its an interpreted language”
I hear this a lot and I find it a gross-simplification of the way CPython actually works. If at a terminal you wrote `python myscript.py` then CPython would start a long sequence of reading, lexing, parsing, compiling, interpreting and executing that code.
If youre interested in how that process works, Ive written about it before:
[Modifying the Python language in 6 minutes][13]
An important point in that process is the creation of a `.pyc` file: at the compiler stage, the bytecode sequence is written to a file inside `__pycache__/` on Python 3, or in the same directory in Python 2. This doesn't just apply to your script, but to all of the code you imported, including 3rd party modules.
So most of the time (unless you write code which you only ever run once?), Python is interpreting bytecode and executing it locally. Compare that with Java and C#.NET:
> Java compiles to an “Intermediate Language” and the Java Virtual Machine reads the bytecode and just-in-time compiles it to machine code. The .NET CIL is the same, the .NET Common-Language-Runtime, CLR, uses just-in-time compilation to machine code.
So, why is Python so much slower than both Java and C# in the benchmarks if they all use a virtual machine and some sort of Bytecode? Firstly, .NET and Java are JIT-Compiled.
JIT or Just-in-time compilation requires an intermediate language to allow the code to be split into chunks (or frames). Ahead of time (AOT) compilers are designed to ensure that the CPU can understand every line in the code before any interaction takes place.
The JIT itself does not make the execution any faster, because it is still executing the same bytecode sequences. However, JIT enables optimizations to be made at runtime. A good JIT optimizer will see which parts of the application are being executed a lot and call these "hot spots". It will then make optimizations to those bits of code, by replacing them with more efficient versions.
This means that when your application does the same thing again and again, it can be significantly faster. Also, keep in mind that Java and C# are strongly-typed languages so the optimiser can make many more assumptions about the code.
PyPy has a JIT and as mentioned in the previous section, is significantly faster than CPython. This performance benchmark article goes into more detail —
[Which is the fastest version of Python?][15]
#### So why doesnt CPython use a JIT?
There are downsides to JITs: one of those is startup time. CPython's startup time is already comparatively slow, and PyPy is 2-3x slower to start than CPython. The Java Virtual Machine is notoriously slow to boot. The .NET CLR gets around this by starting at system startup, but the developers of the CLR also develop the operating system on which the CLR runs.
If you have a single Python process running for a long time, with code that can be optimized because it contains “hot spots”, then a JIT makes a lot of sense.
However, CPython is a general-purpose implementation. So if you were developing command-line applications using Python, having to wait for a JIT to start every time the CLI was called would be horribly slow.
CPython has to try and serve as many use cases as possible. There was the possibility of [plugging a JIT into CPython][17] but this project has largely stalled.
> If you want the benefits of a JIT and you have a workload that suits it, use PyPy.
### “Its because its a dynamically typed language”
In a “Statically-Typed” language, you have to specify the type of a variable when it is declared. Those would include C, C++, Java, C#, Go.
In a dynamically-typed language, there is still a concept of types, but the type of a variable is dynamic:
```
a = 1
a = "foo"
```
In this toy example, Python creates a second variable with the same name and a type of `str`, and deallocates the memory created for the first instance of `a`.
Statically-typed languages arent designed as such to make your life hard, they are designed that way because of the way the CPU operates. If everything eventually needs to equate to a simple binary operation, you have to convert objects and types down to a low-level data structure.
Python does this for you, you just never see it, nor do you need to care.
Not having to declare the type isnt what makes Python slow, the design of the Python language enables you to make almost anything dynamic. You can replace the methods on objects at runtime, you can monkey-patch low-level system calls to a value declared at runtime. Almost anything is possible.
Its this design that makes it incredibly hard to optimise Python.
To illustrate my point, I'm going to use a syscall tracing tool that works in Mac OS called DTrace. CPython distributions do not come with DTrace built-in, so you have to recompile CPython. I'm using 3.6.6 for my demo:
```
wget https://github.com/python/cpython/archive/v3.6.6.zip
unzip v3.6.6.zip
cd v3.6.6
./configure --with-dtrace
make
```
Now `python.exe` will have Dtrace tracers throughout the code. [Paul Ross wrote an awesome Lightning Talk on Dtrace][19]. You can [download DTrace starter files][20] for Python to measure function calls, execution time, CPU time, syscalls, all sorts of fun. e.g.
`sudo dtrace -s toolkit/<tracer>.d -c ../cpython/python.exe script.py`
The `py_callflow` tracer shows all the function calls in your application
![](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif)
So, does Pythons dynamic typing make it slow?
* Comparing and converting types is costly, every time a variable is read, written to or referenced the type is checked
* It is hard to optimise a language that is so dynamic. The reason many alternatives to Python are so much faster is that they make compromises to flexibility in the name of performance
* Looking at [Cython][2], which combines C static types and Python to optimise code where the types are known, [can provide][3] an 84x performance improvement.
### Conclusion
> Python is primarily slow because of its dynamic nature and versatility. It can be used as a tool for all sorts of problems, where more optimised and faster alternatives are probably available.
There are, however, ways of optimising your Python applications: leverage async, understand the profiling tools, and consider using multiple interpreters.
For applications where startup time is unimportant and the code would benefit from a JIT, consider PyPy.
For parts of your code where performance is critical and you have more statically-typed variables, consider using [Cython][4].
#### Further reading
Jake VDPs excellent article (although slightly dated) [https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/][21]
Dave Beazleys talk on the GIL [http://www.dabeaz.com/python/GIL.pdf][22]
All about JIT compilers [https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/][23]
--------------------------------------------------------------------------------
via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b
作者:[Anthony Shaw][a]
选题:[oska874][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://hackernoon.com/@anthonypjshaw?source=post_header_lockup
[b]:https://github.com/oska874
[1]:http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html
[2]:http://cython.org/
[3]:http://notes-on-cython.readthedocs.io/en/latest/std_dev.html
[4]:http://cython.org/
[5]:http://algs4.cs.princeton.edu/faq/
[6]:https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html
[7]:https://en.wikipedia.org/wiki/Just-in-time_compilation
[8]:https://en.wikipedia.org/wiki/Ahead-of-time_compilation
[9]:https://www.slideshare.net/GrahamDumpleton/secrets-of-a-wsgi-master
[10]:http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why
[11]:http://www.jython.org/jythonbook/en/1.0/Concurrency.html#no-global-interpreter-lock
[12]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management
[13]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14
[14]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14
[15]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
[16]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
[17]:https://www.slideshare.net/AnthonyShaw5/pyjion-a-jit-extension-system-for-cpython
[18]:https://github.com/python/cpython/archive/v3.6.6.zip
[19]:https://github.com/paulross/dtrace-py#the-lightning-talk
[20]:https://github.com/paulross/dtrace-py/tree/master/toolkit
[21]:https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/
[22]:http://www.dabeaz.com/python/GIL.pdf
[23]:https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/

View File

@ -1,284 +0,0 @@
Building a network attached storage device with a Raspberry Pi
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl)
In this three-part series, I'll explain how to set up a simple, useful NAS (network attached storage) system. I use this kind of setup to store my files on a central system, creating incremental backups automatically every night. To mount the disk on devices that are located in the same network, NFS is installed. To access files offline and share them with friends, I use [Nextcloud][1].
This article will cover the basic setup of software and hardware to mount the data disk on a remote device. In the second article, I will discuss a backup strategy and set up a cron job to create daily backups. In the third and last article, we will install Nextcloud, a tool for easy file access to devices synced offline as well as online using a web interface. It supports multiple users and public file-sharing so you can share pictures with friends, for example, by sending a password-protected link.
The target architecture of our system looks like this:
![](https://opensource.com/sites/default/files/uploads/nas_part1.png)
### Hardware
Let's get started with the hardware you need. You might come up with a different shopping list, so consider this one an example.
The computing power is delivered by a [Raspberry Pi 3][2], which comes with a quad-core CPU, a gigabyte of RAM, and (somewhat) fast ethernet. Data will be stored on two USB hard drives (I use 1-TB disks); one is used for the everyday traffic, the other is used to store backups. Be sure to use either active USB hard drives or a USB hub with an additional power supply, as the Raspberry Pi will not be able to power two USB drives.
### Software
The operating system with the highest visibility in the community is [Raspbian][3] , which is excellent for custom projects. There are plenty of [guides][4] that explain how to install Raspbian on a Raspberry Pi, so I won't go into details here. The latest official supported version at the time of this writing is [Raspbian Stretch][5] , which worked fine for me.
At this point, I will assume you have configured your basic Raspbian and are able to connect to the Raspberry Pi by `ssh`.
### Prepare the USB drives
To achieve good performance reading from and writing to the USB hard drives, I recommend formatting them with ext4. To do so, you must first find out which disks are attached to the Raspberry Pi. You can find the disk devices in `/dev/sd<x>`. Using the command `fdisk -l`, you can find out which two USB drives you just attached. Please note that all data on the USB drives will be lost as soon as you follow these steps.
```
pi@raspberrypi:~ $ sudo fdisk -l
<...>
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe8900690
Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6aa4f598
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *     2048 1953521663 1953519616 931.5G  83 Linux
```
As those devices are the only 1TB disks attached to the Raspberry Pi, we can easily see that `/dev/sda` and `/dev/sdb` are the two USB drives. The partition table at the end of each disk shows how it should look after the following steps, which create the partition table and format the disks. To do this, repeat the following steps for each of the two devices by replacing `sda` with `sdb` the second time (assuming your devices are also listed as `/dev/sda` and `/dev/sdb` in `fdisk`).
First, delete the partition table of the disk and create a new one containing only one partition. In `fdisk`, you can use interactive one-letter commands to tell the program what to do. Simply insert them after the prompt `Command (m for help):` as follows (you can also use the `m` command anytime to get more information):
```
pi@raspberrypi:~ $ sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): o
Created a new DOS disklabel with disk identifier 0x9c310964.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1953525167, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167):
Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
Command (m for help): p
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9c310964
Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux
Command (m for help): w
The partition table has been altered.
Syncing disks.
```
Now we will format the newly created partition `/dev/sda1` using the ext4 filesystem:
```
pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
<...>
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
```
After repeating the above steps, let's label the new partitions according to their usage in your system:
```
pi@raspberrypi:~ $ sudo e2label /dev/sda1 data
pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup
```
Now let's get those disks mounted to store some data. My experience, based on running this setup for over a year now, is that USB drives are not always available to get mounted when the Raspberry Pi boots up (for example, after a power outage), so I recommend using autofs to mount them when needed.
First install autofs and create the mount point for the storage:
```
pi@raspberrypi:~ $ sudo apt install autofs
pi@raspberrypi:~ $ sudo mkdir /nas
```
Then mount the devices by adding the following line to `/etc/auto.master`:
```
/nas    /etc/auto.usb
```
Create the file `/etc/auto.usb` if not existing with the following content, and restart the autofs service:
```
data -fstype=ext4,rw :/dev/disk/by-label/data
backup -fstype=ext4,rw :/dev/disk/by-label/backup
pi@raspberrypi3:~ $ sudo service autofs restart
```
Now you should be able to access the disks at `/nas/data` and `/nas/backup`, respectively. Clearly, the content will not be too thrilling, as you just erased all the data from the disks. Nevertheless, you should be able to verify the devices are mounted by executing the following commands:
```
pi@raspberrypi3:~ $ cd /nas/data
pi@raspberrypi3:/nas/data $ cd /nas/backup
pi@raspberrypi3:/nas/backup $ mount
<...>
/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
<...>
/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
```
First move into the directories to make sure autofs mounts the devices. Autofs tracks access to the filesystems and mounts the needed devices on the go. Then the `mount` command shows that the two devices are actually mounted where we wanted them.
Setting up autofs is a bit fault-prone, so do not get frustrated if mounting doesn't work on the first try. Give it another chance, search for more detailed resources (there is plenty of documentation online), or leave a comment.
### Mount network storage
Now that you have set up the basic network storage, we want it to be mounted on a remote Linux machine. We will use the network file system (NFS) for this. First, install the NFS server on the Raspberry Pi:
```
pi@raspberrypi:~ $ sudo apt install nfs-kernel-server
```
Next we need to tell the NFS server to expose the `/nas/data` directory, which will be the only device accessible from outside the Raspberry Pi (the other one will be used for backups only). To export the directory, edit the file `/etc/exports` and add the following line to allow all devices with access to the NAS to mount your storage:
```
/nas/data *(rw,sync,no_subtree_check)
```
For more information about restricting the mount to single devices and so on, refer to `man exports`. In the configuration above, anyone will be able to mount your data as long as they have access to the ports needed by NFS: `111` and `2049`. I use the configuration above and allow access to my home network only for ports 22 and 443 using the routers firewall. That way, only devices in the home network can reach the NFS server.
To mount the storage on a Linux computer, run the commands:
```
you@desktop:~ $ sudo mkdir /nas/data
you@desktop:~ $ sudo mount -t nfs <raspberry-pi-hostname-or-ip>:/nas/data /nas/data
```
Again, I recommend using autofs to mount this network device. For extra help, check out [How to use autofs to mount NFS shares][6].
Now you are able to access files stored on your own RaspberryPi-powered NAS from remote devices using the NFS mount. In the next part of this series, I will cover how to automatically back up your data to the second hard drive using `rsync`. To save space on the device while still doing daily backups, you will learn how to create incremental backups with `rsync`.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ntlx
[1]:https://nextcloud.com/
[2]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
[3]:https://www.raspbian.org/
[4]:https://www.raspberrypi.org/documentation/installation/installing-images/
[5]:https://www.raspberrypi.org/blog/raspbian-stretch/
[6]:https://opensource.com/article/18/6/using-autofs-mount-nfs-shares

View File

@ -1,170 +0,0 @@
A checklist for submitting your first Linux kernel patch
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22)
One of the biggest—and the fastest moving—open source projects, the Linux kernel, is composed of about 53,600 files and nearly 20 million lines of code. With more than 15,600 programmers contributing to the project worldwide, the Linux kernel follows a maintainer model for collaboration.
![](https://opensource.com/sites/default/files/karnik_figure1.png)
In this article, I'll provide a quick checklist of steps involved with making your first kernel contribution, and look at what you should know before submitting a patch. For a more in-depth look at the submission process for contributing your first patch, read the [KernelNewbies First Kernel Patch tutorial][1].
### Contributing to the kernel
#### Step 1: Prepare your system.
Steps in this article assume you have the following tools on your system:
+ Text editor
+ Email client
+ Version control system (e.g., git)
#### Step 2: Download the Linux kernel code repository:
```
git clone -b staging-testing git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
```
Copy your current config:
```
cp /boot/config-`uname -r`* .config
```
#### Step 3: Build/install your kernel.
```
make -jX    # replace X with the number of parallel build jobs, e.g., your CPU core count
sudo make modules_install install
```
#### Step 4: Make a branch and switch to it.
```
git checkout -b first-patch
```
#### Step 5: Update your kernel to point to the latest code base.
```
git fetch origin
git rebase origin/staging-testing
```
#### Step 6: Make a change to the code base.
Recompile using the `make` command to ensure that your change does not produce errors.
#### Step 7: Commit your changes and create a patch.
```
git add <file>
git commit -s -v
git format-patch -o /tmp/ HEAD^
```
![](https://opensource.com/sites/default/files/karnik_figure2.png)
The subject consists of the path to the file, with its components separated by colons, followed by what the patch does, written in the imperative mood. After a blank line comes the description of the patch and the mandatory Signed-off-by tag and, lastly, a diff of your patch.
Here is another example of a simple patch:
![](https://opensource.com/sites/default/files/karnik_figure3.png)
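In plain text, a commit message following these conventions might look roughly like this (the subsystem path, change description, and author are hypothetical):
```
staging: rtl8723bs: remove unnecessary parentheses

Remove parentheses around the right-hand side of an assignment,
as reported by checkpatch.pl. No functional change.

Signed-off-by: Jane Doe <jane@example.com>
```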
Next, send the patch [using email from the command line][2] (in this case, Mutt):
```
mutt -H /tmp/0001-<whatever your filename is>
```
To know the list of maintainers to whom to send the patch, use the [get_maintainer.pl script][11].
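For example, you can run the script against the patch file generated in step 7 (a sketch; substitute your actual file name):
```
perl scripts/get_maintainer.pl /tmp/0001-<whatever your filename is>
```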
### What to know before submitting your first patch
* [Greg Kroah-Hartman][3]'s [staging tree][4] is a good place to submit your [first patch][1] as he accepts easy patches from new contributors. When you get familiar with the patch-sending process, you could send subsystem-specific patches with increased complexity.
* You also could start with correcting coding style issues in the code. To learn more, read the [Linux kernel coding style documentation][5].
* The script [checkpatch.pl][6] detects coding style errors for you. For example, run:
```
perl scripts/checkpatch.pl -f drivers/staging/android/* | less
```
* You could complete TODOs left incomplete by developers:
```
find drivers/staging -name TODO
```
* [Coccinelle][7] is a helpful tool for pattern matching.
* Read the [kernel mailing archives][8].
* Go through the [linux.git log][9] to see commits by previous authors for inspiration.
* Note: Do not top-post to communicate with the reviewer of your patch! Here's an example:
**Wrong way:**
Chris,
_Yes lets schedule the meeting tomorrow, on the second floor._
> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
> Hey John, I had some questions:
> 1\. Do you want to schedule the meeting tomorrow?
> 2\. On which floor in the office?
> 3\. What time is suitable to you?
(Notice that the last question was unintentionally left unanswered in the reply.)
**Correct way:**
Chris,
See my answers below...
> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
> Hey John, I had some questions:
> 1\. Do you want to schedule the meeting tomorrow?
_Yes tomorrow is fine._
> 2\. On which floor in the office?
_Let's keep it on the second floor._
> 3\. What time is suitable to you?
_09:00 am would be alright._
(All questions were answered, and this way saves reading time.)
* The [Eudyptula challenge][10] is a great way to learn kernel basics.
To learn more, read the [KernelNewbies First Kernel Patch tutorial][1]. After that, if you still have any questions, ask on the [kernelnewbies mailing list][12] or in the [#kernelnewbies IRC channel][13].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/first-linux-kernel-patch
作者:[Sayli Karnik][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sayli
[1]:https://kernelnewbies.org/FirstKernelPatch
[2]:https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients
[3]:https://twitter.com/gregkh
[4]:https://www.kernel.org/doc/html/v4.15/process/2.Process.html
[5]:https://www.kernel.org/doc/html/v4.10/process/coding-style.html
[6]:https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl
[7]:http://coccinelle.lip6.fr/
[8]:linux-kernel@vger.kernel.org
[9]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/
[10]:http://eudyptula-challenge.org/
[11]:https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl
[12]:https://kernelnewbies.org/MailingList
[13]:https://kernelnewbies.org/IRC

View File

@ -1,3 +1,4 @@
Translating by qhwdw
What Stable Kernel Should I Use?
======
I get a lot of questions from people asking me what stable kernel they should use for their product/device/laptop/server/etc. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn't always an obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use whatever kernel version you want, but here's what I recommend.

View File

@ -1,114 +0,0 @@
translating by Flowsnow
A Simple, Beautiful And Cross-platform Podcast App
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png)
Podcasts have blown up in popularity in the last few years. Podcasts are what's called “infotainment”: they are generally light-hearted, but they usually give you valuable information, and if you like something, chances are there is a podcast about it. There are a lot of podcast players out there for the Linux desktop, but if you want something that is visually beautiful, has slick animations, and works on every platform, there aren't a lot of alternatives to **CPod**. CPod (formerly known as **Cumulonimbus**) is an open source podcast app, one of the slickest around, that works on Linux, MacOS and Windows.
CPod runs on **Electron**, a tool that allows developers to build cross-platform (e.g., Windows, MacOS and Linux) desktop GUI applications. In this brief guide, we will discuss how to install and use the CPod podcast app in Linux.
### Installing CPod
Go to the [**releases page**][1] of CPod. Download and install the binary for your platform of choice. If you use Ubuntu/Debian, you can just download and install the .deb file from the releases page as shown below.
```
$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb
$ sudo apt update
$ sudo apt install gdebi
$ sudo gdebi CPod_1.25.7_amd64.deb
```
If you use any other distribution, you should probably use the **AppImage** from the releases page.
Download the AppImage file from the releases page.
Open your terminal, and go to the directory where the AppImage file has been stored. Change the permissions to allow execution:
```
$ chmod +x CPod-1.25.7-x86_64.AppImage
```
Execute the AppImage File:
```
$ ./CPod-1.25.7-x86_64.AppImage
```
You'll be presented with a dialog asking whether to integrate the app with the system. Click **Yes** if you want to do so.
### Features
**Explore Tab**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png)
CPod uses the Apple iTunes database to find podcasts. This is good, because the iTunes database is the biggest one out there. If there is a podcast out there, chances are it's on iTunes. To find podcasts, just use the top search bar in the Explore section. The Explore section also shows a few popular podcasts.
**Home Tab**
![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png)
The Home Tab is the tab that opens by default when you open the app. The Home Tab shows a chronological list of all the episodes of all the podcasts that you have subscribed to.
From the home tab, you can:
1. Mark episodes as read.
2. Download them for offline playing.
3. Add them to the queue.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png)
**Subscriptions Tab**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png)
You can, of course, subscribe to podcasts that you like. A few other things you can do in the Subscriptions tab are:
1. Refresh podcast artwork
2. Export and import subscriptions to/from an .OPML file.
**The Player**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png)
The player is perhaps the most beautiful part of CPod. The app changes the overall look and feel according to the banner of the podcast. There's a sound visualiser at the bottom. To the right, you can see and search for other episodes of this podcast.
**Cons/Missing Features**
While I love this app, it does have a few missing features and disadvantages:
1. Poor MPRIS integration: you can play/pause the podcast from the media player dialog of your desktop environment, but not much more. The name of the podcast is not shown, and you can't go to the next/previous episode.
2. No support for chapters.
3. No auto-downloading: you have to download episodes manually.
4. CPU usage during use is pretty high (even for an Electron app).
### Verdict
While it does have its cons, CPod is clearly the most aesthetically pleasing podcast player app out there, and it has most of the basic features down. If you love using visually beautiful apps and don't need the advanced features, this is the perfect app for you. I know for a fact that I'm going to use it.
Do you like CPod? Please share your opinions in the comments below!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://github.com/z-------------/CPod/releases

View File

@ -1,80 +0,0 @@
translating---geekpi
Hegemon A Modular System Monitor Application Written In Rust
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png)
When it comes to monitoring running processes on Unix-like systems, the most commonly used applications are **top** and **htop**, the latter being an enhanced version of top. My personal favorite is htop. However, developers release new alternatives to these applications every now and then. One such alternative to the top and htop utilities is **Hegemon**, a modular system monitor application written in the **Rust** programming language.
As for the features of Hegemon, we can list the following:
* Hegemon monitors the usage of CPU, memory and swap.
* It monitors the system's temperature and fan speed.
* The update interval is adjustable; the default is 3 seconds.
* Data streams can be expanded to reveal more detailed graphs and additional information.
* Unit tests
* Clean interface
* Free and open source.
### Installing Hegemon
Make sure you have installed **Rust 1.26** or a later version. To install Rust in your Linux distribution, refer to the following guide:
[Install Rust Programming Language In Linux][2]
Also, install the [libsensors][1] library. It is available in the default repositories of most Linux distributions. For example, you can install it on RPM-based systems such as Fedora using the following command:
```
$ sudo dnf install lm_sensors-devel
```
On Debian-based systems like Ubuntu, Linux Mint, it can be installed using command:
```
$ sudo apt-get install libsensors4-dev
```
Once you have installed Rust and libsensors, install Hegemon with the command:
```
$ cargo install hegemon
```
Once Hegemon is installed, start monitoring the running processes on your Linux system with the command:
```
$ hegemon
```
Here is the sample output from my Arch Linux desktop.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif)
To exit, press **Q**.
Please be mindful that Hegemon is still in its early development stage and is not a complete replacement for the **top** command. There might be bugs and missing features. If you come across any bugs, report them on the project's GitHub page. The developer is planning to bring more features in upcoming versions, so keep an eye on this project.
And, that's all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://github.com/lm-sensors/lm-sensors
[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/

View File

@ -1,3 +1,5 @@
HankChow translating
How to Replace one Linux Distro With Another in Dual Boot [Guide]
======
**If you have a Linux distribution installed, you can replace it with another distribution in the dual boot. You can also keep your personal documents while switching the distribution.**

View File

@ -1,302 +0,0 @@
heguangzhi Translating
An introduction to swap space on Linux systems
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh)
Swap space is a common aspect of computing today, regardless of operating system. Linux uses swap space to increase the amount of virtual memory available to a host. It can use one or more dedicated swap partitions or a swap file on a regular filesystem or logical volume.
There are two basic types of memory in a typical computer. The first type, random access memory (RAM), is used to store data and programs while they are being actively used by the computer. Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost if the computer is turned off.
Hard drives are magnetic media used for long-term storage of data and programs. Magnetic media is nonvolatile; the data stored on a disk remains even when power is removed from the computer. The CPU (central processing unit) cannot directly access the programs and data on the hard drive; they must be copied into RAM first, and that is where the CPU can access its programming instructions and the data to be operated on by those instructions. During the boot process, a computer copies specific operating system programs, such as the kernel and init or systemd, and data from the hard drive into RAM, where it is accessed directly by the computer's processor, the CPU.
### Swap space
Swap space is the second type of memory in modern Linux systems. The primary function of swap space is to substitute disk space for RAM memory when real RAM fills up and more space is needed.
For example, assume you have a computer system with 8GB of RAM. If you start up programs that don't fill that RAM, everything is fine and no swapping is required. But suppose the spreadsheet you are working on grows when you add more rows, and that, plus everything else that's running, now fills all of RAM. Without swap space available, you would have to stop working on the spreadsheet until you could free up some of your limited RAM by closing down some other programs.
The kernel uses a memory management program that detects blocks, aka pages, of memory in which the contents have not been used recently. The memory management program swaps enough of these relatively infrequently used pages of memory out to a special partition on the hard drive specifically designated for “paging,” or swapping. This frees up RAM and makes room for more data to be entered into your spreadsheet. Those pages of memory swapped out to the hard drive are tracked by the kernels memory management code and can be paged back into RAM if they are needed.
The total amount of memory in a Linux computer is the RAM plus swap space and is referred to as virtual memory.
### Types of Linux swap
Linux provides for two types of swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is just what its name implies—a standard disk partition that is designated as swap space by the `mkswap` command.
A swap file can be used if there is no free disk space in which to create a new swap partition or space in a volume group where a logical volume can be created for swap space. This is just a regular file that is created and preallocated to a specified size. Then the `mkswap` command is run to configure it as swap space. I dont recommend using a file for swap space unless absolutely necessary.
### Thrashing
Thrashing can occur when total virtual memory, both RAM and swap space, becomes nearly full. The system spends so much time paging blocks of memory between swap space and RAM and back that little time is left for real work. The typical symptoms of this are obvious: The system becomes slow or completely unresponsive, and the hard drive activity light is on almost constantly.
If you can manage to issue a command like `top` that shows CPU load and memory usage, you will see that the CPU load is very high, perhaps as much as 30 to 40 times the number of CPU cores in the system. Another symptom is that both RAM and swap space are almost completely allocated.
After the fact, looking at SAR (system activity report) data can also show these symptoms. I install SAR on every system I work on and use it for post-repair forensic analysis.
### What is the right amount of swap space?
Many years ago, the rule of thumb for the amount of swap space that should be allocated on the hard drive was 2X the amount of RAM installed in the computer (of course, that was when most computers' RAM was measured in KB or MB). So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule took into account the facts that RAM sizes were typically quite small at that time and that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work.
RAM has become an inexpensive commodity and most computers these days have amounts of RAM that extend into tens of gigabytes. Most of my newer computers have at least 8GB of RAM, one has 32GB, and my main workstation has 64GB. My older computers have from 4 to 8 GB of RAM.
When dealing with computers having huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. The Fedora 28 Installation Guide, which can be found online at [Fedora Installation Guide][1], defines current thinking about swap space allocation. I have included below some discussion and the table of recommendations from that document.
The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage.
_Table 1: Recommended system swap space in Fedora 28 documentation_
| **Amount of system RAM** | **Recommended swap space** | **Recommended swap with hibernation** |
|--------------------------|-----------------------------|---------------------------------------|
| less than 2 GB | 2 times the amount of RAM | 3 times the amount of RAM |
| 2 GB - 8 GB | Equal to the amount of RAM | 2 times the amount of RAM |
| 8 GB - 64 GB | 0.5 times the amount of RAM | 1.5 times the amount of RAM |
| more than 64 GB | workload dependent | hibernation not recommended |
At the border between each range listed above (for example, a system with 2 GB, 8 GB, or 64 GB of system RAM), use discretion with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance.
Of course, most Linux administrators have their own ideas about the appropriate amount of swap space—as well as pretty much everything else. Table 2, below, contains my recommendations based on my personal experiences in multiple environments. These may not work for you, but as with Table 1, they may help you get started.
_Table 2: Recommended system swap space per the author_
| Amount of RAM | Recommended swap space |
|---------------|------------------------|
| ≤ 2GB | 2X RAM |
| 2GB 8GB | = RAM |
| >8GB | 8GB |
One consideration in both tables is that as the amount of RAM increases, beyond a certain point adding more swap space simply leads to thrashing well before the swap space even comes close to being filled. If you have too little virtual memory while following these recommendations, you should add more RAM, if possible, rather than more swap space. As with all recommendations that affect system performance, use what works best for your specific environment. This will take time and effort to experiment and make changes based on the conditions in your Linux environment.
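Before resizing anything, it is worth checking how much swap the system currently has. Two standard commands for this (`swapon --show` requires a reasonably recent util-linux; output will differ on your system):
```
swapon --show
free -h
```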
#### Adding more swap space to a non-LVM disk environment
Due to changing requirements for swap space on hosts with Linux already installed, it may become necessary to modify the amount of swap space defined for the system. This procedure can be used for any general case where the amount of swap space needs to be increased. It assumes sufficient disk space is available. This procedure also assumes that the disks are partitioned in “raw” EXT4 and swap partitions and do not use logical volume management (LVM).
The basic steps to take are simple:
1. Turn off the existing swap space.
2. Create a new swap partition of the desired size.
3. Reread the partition table.
4. Configure the partition as swap space.
5. Add the new partition to /etc/fstab.
6. Turn on swap.
A reboot should not be necessary.
For safety's sake, before turning off swap, at the very least you should ensure that no applications are running and that no swap space is in use. The `free` or `top` commands can tell you whether swap space is in use. To be even safer, you could revert to run level 1 or single-user mode.
Turn off swap with the following command, which disables all swap space:
```
swapoff -a
```
Now display the existing partitions on the hard drive.
```
fdisk -l
```
This displays the current partition tables on each drive. Identify the current swap partition by number.
Start `fdisk` in interactive mode with the command:
```
fdisk /dev/<device name>
```
For example:
```
fdisk /dev/sda
```
At this point, `fdisk` is now interactive and will operate only on the specified disk drive.
Use the fdisk `p` sub-command to verify that there is enough free space on the disk to create the new swap partition. The space on the hard drive is shown in terms of 512-byte blocks and starting and ending cylinder numbers, so you may have to do some math to determine the available space between and at the end of allocated partitions.
Use the `n` sub-command to create a new swap partition. fdisk will ask you the starting cylinder. By default, it chooses the lowest-numbered available cylinder. If you wish to change that, type in the number of the starting cylinder.
The `fdisk` command now allows you to enter the size of the partition in a number of formats, including the last cylinder number or the size in bytes, KB, or MB. For example, type in 4000M, which will give about 4GB of space on the new partition, and press Enter.
Use the `p` sub-command to verify that the partition was created as you specified it. Note that the partition will probably not be exactly what you specified unless you used the ending cylinder number. The `fdisk` command can only allocate disk space in increments on whole cylinders, so your partition may be a little smaller or larger than you specified. If the partition is not what you want, you can delete it and create it again.
Now it is necessary to specify that the new partition is to be a swap partition. The sub-command `t` allows you to specify the type of partition. So enter `t`, specify the partition number, and when it asks for the hex code partition type, type 82, which is the Linux swap partition type, and press Enter.
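Condensed, this part of the interactive session might look roughly like the following (prompts are abbreviated and vary between `fdisk` versions; the responses mirror the illustrative values above):
```
Command (m for help): n                      <- create a new partition
First cylinder (default shown): <Enter>      <- accept the default start
Last cylinder or +size{K,M,G}: +4000M        <- about 4GB for the new partition
Command (m for help): p                      <- verify the new partition
Command (m for help): t                      <- change the partition type
Hex code (type L to list codes): 82          <- 82 = Linux swap
```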
When you are satisfied with the partition you have created, use the `w` sub-command to write the new partition table to the disk. The `fdisk` program will exit and return you to the command prompt after it completes writing the revised partition table. You will probably receive the following message as `fdisk` completes writing the new partition table:
```
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
```
At this point, you use the `partprobe` command to force the kernel to re-read the partition table so that it is not necessary to perform a reboot.
```
partprobe
```
Now use the command `fdisk -l` to list the partitions and the new swap partition should be among those listed. Be sure that the new partition type is “Linux swap”.
It will be necessary to modify the /etc/fstab file to point to the new swap partition. The existing line may look like this:
```
LABEL=SWAP-sdaX   swap        swap    defaults        0 0
```
where `X` is the partition number. Add a new line that looks similar to this, depending upon the location of your new swap partition:
```
/dev/sdaY         swap        swap    defaults        0 0
```
Be sure to use the correct partition number. Now you can perform the final step in creating the swap partition. Use the `mkswap` command to define the partition as a swap partition.
```
mkswap /dev/sdaY
```
The final step is to turn swap on using the command:
```
swapon -a
```
Your new swap partition is now online along with the previously existing swap partition. You can use the `free` or `top` commands to verify this.
#### Adding swap to an LVM disk environment
If your disk setup uses LVM, changing swap space will be fairly easy. Again, this assumes that space is available in the volume group in which the current swap volume is located. By default, the installation procedures for Fedora Linux in an LVM environment create the swap partition as a logical volume. This makes it easy because you can simply increase the size of the swap volume.
Here are the steps required to increase the amount of swap space in an LVM environment:
1. Turn off all swap.
2. Increase the size of the logical volume designated for swap.
3. Configure the resized volume as swap space.
4. Turn on swap.
First, let's verify that swap exists and is a logical volume, using the `lvs` command (list logical volumes).
```
[root@studentvm1 ~]# lvs
  LV     VG                Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home   fedora_studentvm1 -wi-ao----  2.00g                                                      
  pool00 fedora_studentvm1 twi-aotz--  2.00g               8.17   2.93                            
  root   fedora_studentvm1 Vwi-aotz--  2.00g pool00        8.17                                  
  swap   fedora_studentvm1 -wi-ao----  8.00g                                                      
  tmp    fedora_studentvm1 -wi-ao----  5.00g                                                      
  usr    fedora_studentvm1 -wi-ao---- 15.00g                                                      
  var    fedora_studentvm1 -wi-ao---- 10.00g                                                      
[root@studentvm1 ~]#
```
You can see that the current swap size is 8GB. In this case, we want to add 2GB to this swap volume. First, stop existing swap. You may have to terminate running programs if swap space is in use.
```
swapoff -a
```
Now increase the size of the logical volume.
```
[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap
  Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents).
  Logical volume fedora_studentvm1/swap successfully resized.
[root@studentvm1 ~]#
```
Run the `mkswap` command to make this entire 10GB partition into swap space.
```
[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap
mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 10 GiB (10737414144 bytes)
no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a
[root@studentvm1 ~]#
```
Turn swap back on.
```
[root@studentvm1 ~]# swapon -a
[root@studentvm1 ~]#
```
Now verify the new swap space is present with the list block devices command. Again, a reboot is not required.
```
[root@studentvm1 ~]# lsblk
NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                    8:0    0   60G  0 disk
|-sda1                                 8:1    0    1G  0 part /boot
`-sda2                                 8:2    0   59G  0 part
  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm  
  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm  
  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm  
  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm  
  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm  
  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm  
  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP]
  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr
  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home
  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var
  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp
sr0                                   11:0    1 1024M  0 rom  
[root@studentvm1 ~]#
```
You can also use the `swapon -s` command, or `top`, `free`, or any of several other commands to verify this.
```
[root@studentvm1 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        4038808      382404     2754072        4152      902332     3404184
Swap:      10485756           0    10485756
[root@studentvm1 ~]#
```
Note that the different commands display or require as input the device special file in different forms. There are a number of ways in which specific devices are accessed in the /dev directory. My article, [Managing Devices in Linux][2], includes more information about the /dev directory and its contents.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/swap-space-linux-systems
作者:[David Both][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/
[2]: https://opensource.com/article/16/11/managing-devices-linux

View File

@ -1,260 +0,0 @@
translating by Flowsnow
How to use the Scikit-learn Python library for data science projects
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
The Scikit-learn Python library, initially released in 2007, is commonly used in solving machine learning and data science problems—from the beginning to the end. The versatile library offers an uncluttered, consistent, and efficient API and thorough online documentation.
### What is Scikit-learn?
[Scikit-learn][1] is an open source Python library that has powerful tools for data analysis and data mining. It's available under the BSD license and is built on the following machine learning libraries:
* **NumPy** , a library for manipulating multi-dimensional arrays and matrices. It also has an extensive compilation of mathematical functions for performing various calculations.
* **SciPy** , an ecosystem consisting of various libraries for completing technical computing tasks.
* **Matplotlib** , a library for plotting various charts and graphs.
Scikit-learn offers an extensive range of built-in algorithms that make the most of data science projects.
Here are the main ways the Scikit-learn library is used.
#### 1. Classification
The [classification][2] tools identify the category associated with provided data. For example, they can be used to categorize email messages as either spam or not.
Classification algorithms in Scikit-learn include:
* Support vector machines (SVMs)
* Nearest neighbors
* Random forest
#### 2. Regression
Regression involves creating a model that tries to comprehend the relationship between input and output data. For example, regression tools can be used to understand the behavior of stock prices.
Regression algorithms include:
* SVMs
* Ridge regression
* Lasso
#### 3. Clustering
The Scikit-learn clustering tools are used to automatically group data with the same characteristics into sets. For example, customer data can be segmented based on their localities.
Clustering algorithms include:
* K-means
* Spectral clustering
* Mean-shift
#### 4. Dimensionality reduction
Dimensionality reduction lowers the number of random variables for analysis. For example, to increase the efficiency of visualizations, outlying data may not be considered.
Dimensionality reduction algorithms include:
* Principal component analysis (PCA)
* Feature selection
* Non-negative matrix factorization
#### 5. Model selection
Model selection algorithms offer tools to compare, validate, and select the best parameters and models to use in your data science projects.
Model selection modules that can deliver enhanced accuracy through parameter tuning include:
* Grid search
* Cross-validation
* Metrics
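As a minimal illustration (not from the original article) of the cross-validation tools listed above, the sketch below scores a classifier with 5-fold cross-validation; the choice of k-nearest neighbors is just an example:
```
# Hypothetical sketch: 5-fold cross-validation on the Iris dataset.
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()
scores = cross_val_score(KNeighborsClassifier(), iris.data, iris.target, cv=5)
print(scores.mean())  # average accuracy across the five folds
```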
#### 6. Preprocessing
The Scikit-learn preprocessing tools are important in feature extraction and normalization during data analysis. For example, you can use these tools to transform input data—such as text—and apply their features in your analysis.
Preprocessing modules include:
* Preprocessing
* Feature extraction
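A minimal sketch (not from the original article) of the preprocessing tools, standardizing each feature to zero mean and unit variance:
```
# Hypothetical sketch: standardize the Iris features before analysis.
from sklearn import datasets
from sklearn.preprocessing import StandardScaler

iris = datasets.load_iris()
scaled = StandardScaler().fit_transform(iris.data)
print(scaled.mean(axis=0))  # approximately zero for every feature
```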
### A Scikit-learn library example
Let's use a simple example to illustrate how you can use the Scikit-learn library in your data science projects.
We'll use the [Iris flower dataset][3], which is incorporated in the Scikit-learn library. The Iris flower dataset contains 150 samples covering three flower species:
* Setosa—labeled 0
* Versicolor—labeled 1
* Virginica—labeled 2
The dataset includes the following characteristics of each flower species (in centimeters):
* Sepal length
* Sepal width
* Petal length
* Petal width
#### Step 1: Importing the library
Since the Iris dataset is included in the Scikit-learn data science library, we can load it into our workspace as follows:
```
from sklearn import datasets
iris = datasets.load_iris()
```
These commands import the **datasets** module from **sklearn**, then use the **load_iris()** method from **datasets** to load the data into the workspace.
#### Step 2: Getting dataset characteristics
The **datasets** module contains several methods that make it easier to get acquainted with handling data.
In Scikit-learn, a dataset refers to a dictionary-like object that has all the details about the data. The data is stored under the **.data** key as an array.
For instance, we can utilize **iris.data** to output information about the Iris flower dataset.
```
print(iris.data)
```
Here is the output (the results have been truncated):
```
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.  3.6 1.4 0.2]
 [5.4 3.9 1.7 0.4]
 [4.6 3.4 1.4 0.3]
 [5.  3.4 1.5 0.2]
 [4.4 2.9 1.4 0.2]
 [4.9 3.1 1.5 0.1]
 [5.4 3.7 1.5 0.2]
 [4.8 3.4 1.6 0.2]
 [4.8 3.  1.4 0.1]
 [4.3 3.  1.1 0.1]
 [5.8 4.  1.2 0.2]
 [5.7 4.4 1.5 0.4]
 [5.4 3.9 1.3 0.4]
 [5.1 3.5 1.4 0.3]
```
Let's also use **iris.target** to give us information about the different labels of the flowers.
```
print(iris.target)
```
Here is the output:
```
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
```
If we use **iris.target_names** , we'll output an array of the names of the labels found in the dataset.
```
print(iris.target_names)
```
Here is the result after running the Python code:
```
['setosa' 'versicolor' 'virginica']
```
#### Step 3: Visualizing the dataset
We can use the [box plot][4] to produce a visual depiction of the Iris flower dataset. The box plot illustrates how the data is distributed over the plane through its quartiles.
Here's how to achieve this:
```
import seaborn as sns
box_data = iris.data #variable representing the data array
box_target = iris.target #variable representing the labels array
sns.boxplot(data = box_data,width=0.5,fliersize=5)
sns.set(rc={'figure.figsize':(2,15)})
```
Let's see the result:
![](https://opensource.com/sites/default/files/uploads/scikit_boxplot.png)
On the horizontal axis:
* 0 is sepal length
* 1 is sepal width
* 2 is petal length
* 3 is petal width
The vertical axis is dimensions in centimeters.
### Wrapping up
Here is the entire code for this simple Scikit-learn data science tutorial.
```
from sklearn import datasets
iris = datasets.load_iris()
print(iris.data)
print(iris.target)
print(iris.target_names)
import seaborn as sns
box_data = iris.data #variable representing the data array
box_target = iris.target #variable representing the labels array
sns.boxplot(data = box_data,width=0.5,fliersize=5)
sns.set(rc={'figure.figsize':(2,15)})
```
Scikit-learn is a versatile Python library you can use to efficiently complete data science projects.
If you want to learn more, check out the tutorials on [LiveEdu][5], such as Andrey Bulezyuk's video on using the Scikit-learn library to create a [machine learning application][6].
Do you have any questions or comments? Feel free to share them below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects
作者:[Dr.Michael J.Garbade][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/drmjg
[1]: http://scikit-learn.org/stable/index.html
[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/
[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set
[4]: https://en.wikipedia.org/wiki/Box_plot
[5]: https://www.liveedu.tv/guides/data-science/
[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/

View File

@ -0,0 +1,87 @@
5 cool tiling window managers
======
![](https://fedoramagazine.org/wp-content/uploads/2018/09/tilingwindowmanagers-816x345.jpg)
The Linux desktop ecosystem offers multiple window managers (WMs). Some are developed as part of a desktop environment. Others are meant to be used as standalone applications. This is the case with tiling WMs, which offer a more lightweight, customized environment. This article presents five such tiling WMs for you to try out.
### i3
[i3][1] is one of the most popular tiling window managers. Like most other such WMs, i3 focuses on low resource consumption and customizability by the user.
You can refer to [this previous article in the Magazine][2] to get started with i3 installation details and how to configure it.
### sway
[sway][3] is a tiling Wayland compositor. It has the advantage of compatibility with an existing i3 configuration, so you can use it to replace i3 and use Wayland as the display protocol.
You can use dnf to install sway from Fedora repository:
```
$ sudo dnf install sway
```
If you want to migrate from i3 to sway, there's a small [migration guide][4] available.
### Qtile
[Qtile][5] is another tiling manager that also happens to be written in Python. By default, you configure Qtile in a Python script located under ~/.config/qtile/config.py. When this script is not available, Qtile uses a default [configuration][6].
One of the benefits of Qtile being in Python is that you can write scripts to control the WM. For example, the following script prints the screen details:
```
>>> from libqtile.command import Client
>>> c = Client()
>>> print(c.screen.info)
{'index': 0, 'width': 1920, 'height': 1006, 'x': 0, 'y': 0}
```
To install Qtile on Fedora, use the following command:
```
$ sudo dnf install qtile
```
### dwm
The [dwm][7] window manager focuses more on being lightweight. One goal of the project is to keep dwm minimal and small. For example, the entire code base never exceeded 2000 lines of code. On the other hand, dwm isn't as easy to customize and configure. Indeed, the only way to change dwm's default configuration is to [edit the source code and recompile the application][8].
If you want to try the default configuration, you can install dwm in Fedora using dnf:
```
$ sudo dnf install dwm
```
For those who want to change their dwm configuration, the dwm-user package is available in Fedora. This package automatically recompiles dwm using the configuration stored in the user home directory at ~/.dwm/config.h.
### awesome
[awesome][9] originally started as a fork of dwm, to provide configuration of the WM using an external configuration file. The configuration is done via Lua scripts, which allow you to write scripts to automate tasks or create widgets.
You can check out awesome on Fedora by installing it like this:
```
$ sudo dnf install awesome
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/5-cool-tiling-window-managers/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org
[1]: https://i3wm.org/
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
[3]: https://swaywm.org/
[4]: https://github.com/swaywm/sway/wiki/i3-Migration-Guide
[5]: http://www.qtile.org/
[6]: https://github.com/qtile/qtile/blob/develop/libqtile/resources/default_config.py
[7]: https://dwm.suckless.org/
[8]: https://dwm.suckless.org/customisation/
[9]: https://awesomewm.org/

View File

@ -1,118 +0,0 @@
translating---geekpi
10 handy Bash aliases for Linux
======
Get more efficient by using condensed versions of long Bash commands.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U)
How many times have you repeatedly typed out a long command on the command line and wished there was a way to save it for later? This is where Bash aliases come in handy. They allow you to condense long, cryptic commands down to something easy to remember and use. Need some examples to get you started? No problem!
To use a Bash alias you've created, you need to add it to your .bash_profile file, which is located in your home folder. Note that this file is hidden and accessible only from the command line. The easiest way to work with this file is to use something like Vi or Nano.
### 10 handy Bash aliases
1. How many times have you needed to unpack a .tar file and couldn't remember the exact arguments needed? Aliases to the rescue! Just add the following to your .bash_profile file and then use **untar FileName** to unpack any .tar file.
```
alias untar='tar -zxvf '
```
2. Want to download something but be able to resume if something goes wrong?
```
alias wget='wget -c '
```
3. Need to generate a random, 20-character password for a new online account? No problem.
```
alias getpass="openssl rand -base64 20"
```
4. Downloaded a file and need to test the checksum? We've got that covered too.
```
alias sha='shasum -a 256 '
```
5. A normal ping will go on forever. We don't want that. Instead, let's limit that to just five pings.
```
alias ping='ping -c 5'
```
6. Start a web server in any folder you'd like.
```
alias www='python -m SimpleHTTPServer 8000'
```
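Note that `SimpleHTTPServer` is a Python 2 module; on systems where only Python 3 is available, the equivalent alias would be:
```
alias www='python3 -m http.server 8000'
```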
7. Want to know how fast your network is? Just download Speedtest-cli and use this alias. You can choose a server closer to your location by using the **speedtest-cli --list** command.
```
alias speed='speedtest-cli --server 2406 --simple'
```
8. How many times have you needed to know your external IP address and had no idea how to get that info? Yeah, me too.
```
alias ipe='curl ipinfo.io/ip'
```
9. Need to know your local IP address?
```
alias ipi='ipconfig getifaddr en0'
```
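Be aware that `ipconfig getifaddr en0` is a macOS command. On Linux, one common alternative is:
```
alias ipi='hostname -I'
```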
10. Finally, let's clear the screen.
```
alias c='clear'
```
As you can see, Bash aliases are a super-easy way to simplify your life on the command line. Want more info? I recommend a quick Google search for "Bash aliases" or a trip to GitHub.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/handy-bash-aliases
作者:[Patrick H.Mullins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pmullins

View File

@ -0,0 +1,261 @@
16 iptables tips and tricks for sysadmins
======
Iptables provides powerful capabilities to control traffic coming in and out of your system.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg)
Modern Linux kernels come with a packet-filtering framework named [Netfilter][1]. Netfilter enables you to allow, drop, and modify traffic coming in and going out of a system. The **iptables** userspace command-line tool builds upon this functionality to provide a powerful firewall, which you can configure by adding rules to form a firewall policy. [iptables][2] can be very daunting with its rich set of capabilities and baroque command syntax. Let's explore some of them and develop a set of iptables tips and tricks for many situations a system administrator might encounter.
### Avoid locking yourself out
Scenario: You are going to make changes to the iptables policy rules on your company's primary server. You want to avoid locking yourself—and potentially everybody else—out. (This costs time and money and causes your phone to ring off the wall.)
#### Tip #1: Take a backup of your iptables configuration before you start working on it.
Back up your configuration with the command:
```
/sbin/iptables-save > /root/iptables-works
```
#### Tip #2: Even better, include a timestamp in the filename.
Add the timestamp with the command:
```
/sbin/iptables-save > /root/iptables-works-`date +%F`
```
You get a file with a name like:
```
/root/iptables-works-2018-09-11
```
If you do something that prevents your system from working, you can quickly restore it:
```
/sbin/iptables-restore < /root/iptables-works-2018-09-11
```
#### Tip #3: Every time you create a backup copy of the iptables policy, create a link to the file with 'latest' in the name.
```
ln -s /root/iptables-works-`date +%F` /root/iptables-works-latest
```
#### Tip #4: Put specific rules at the top of the policy and generic rules at the bottom.
Avoid generic rules like this at the top of the policy rules:
```
iptables -A INPUT -p tcp --dport 22 -j DROP
```
The more criteria you specify in the rule, the less chance you will have of locking yourself out. Instead of the very generic rule above, use something like this:
```
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP
```
This rule appends ( **-A** ) to the **INPUT** chain a rule that will **DROP** any packets originating from the CIDR block **10.0.0.0/8** on TCP ( **-p tcp** ) port 22 ( **\--dport 22** ) destined for IP address 192.168.100.101 ( **-d 192.168.100.101** ).
There are plenty of ways you can be more specific. For example, using **-i eth0** will limit the processing to a single NIC in your server. This way, the filtering actions will not apply the rule to **eth1**.
#### Tip #5: Whitelist your IP address at the top of your policy rules.
This is a very effective method of not locking yourself out. Everybody else, not so much.
```
iptables -I INPUT -s <your IP> -j ACCEPT
```
You need to put this as the first rule for it to work properly. Remember, **-I** inserts it as the first rule; **-A** appends it to the end of the list.
#### Tip #6: Know and understand all the rules in your current policy.
Not making a mistake in the first place is half the battle. If you understand the inner workings behind your iptables policy, it will make your life easier. Draw a flowchart if you must. Also remember: What the policy does and what it is supposed to do can be two different things.
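One convenient way to review the current rules is to dump them in the same command syntax used to create them; the standard `-S` option does exactly that:
```
# Print all rules as iptables commands; add a chain name to narrow the output
iptables -S
iptables -S INPUT
```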
### Set up a workstation firewall policy
Scenario: You want to set up a workstation with a restrictive firewall policy.
#### Tip #1: Set the default policy as DROP.
```
# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
```
#### Tip #2: Allow users the minimum amount of services needed to get their work done.
The iptables rules need to allow the workstation to get an IP address, netmask, and other important information via DHCP ( **-p udp --dport 67:68 --sport 67:68** ). For remote management, the rules need to allow inbound SSH ( **\--dport 22** ), outbound mail ( **\--dport 25** ), DNS ( **\--dport 53** ), outbound ping ( **-p icmp** ), Network Time Protocol ( **\--dport 123 --sport 123** ), and outbound HTTP ( **\--dport 80** ) and HTTPS ( **\--dport 443** ).
```
# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
# Accept any related or established connections
-I INPUT  1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow all traffic on the loopback interface
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
# Allow outbound DHCP request
-A OUTPUT -o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT
# Allow inbound SSH
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW  -j ACCEPT
# Allow outbound email
-A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW  -j ACCEPT
# Outbound DNS lookups
-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT
# Outbound PING requests
-A OUTPUT -o eth0 -p icmp -j ACCEPT
# Outbound Network Time Protocol (NTP) requests
-A OUTPUT -o eth0 -p udp --dport 123 --sport 123 -j ACCEPT
# Outbound HTTP
-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
COMMIT
```
### Restrict an IP address range
Scenario: The CEO of your company thinks the employees are spending too much time on Facebook and not getting any work done. The CEO tells the CIO to do something about the employees wasting time on Facebook. The CIO tells the CISO to do something about employees wasting time on Facebook. Eventually, you are told the employees are wasting too much time on Facebook, and you have to do something about it. You decide to block all access to Facebook. First, find out Facebook's IP address by using the **host** and **whois** commands.
```
host -t a www.facebook.com
www.facebook.com is an alias for star.c10r.facebook.com.
star.c10r.facebook.com has address 31.13.65.17
whois 31.13.65.17 | grep inetnum
inetnum:        31.13.64.0 - 31.13.127.255
```
Then convert that range to CIDR notation by using the [CIDR to IPv4 Conversion][3] page. You get **31.13.64.0/18**. To prevent outgoing access to [www.facebook.com][4], enter:
```
iptables -A OUTPUT -p tcp -i eth0 -o eth1 -d 31.13.64.0/18 -j DROP
```
### Regulate by time
Scenario: The backlash from the company's employees over denying access to Facebook access causes the CEO to relent a little (that and his administrative assistant's reminding him that she keeps HIS Facebook page up-to-date). The CEO decides to allow access to Facebook.com only at lunchtime (12PM to 1PM). Assuming the default policy is DROP, use iptables' time features to open up access.
```
iptables -A OUTPUT -p tcp -m multiport --dport http,https -i eth0 -o eth1 -m time --timestart 12:00 --timestop 13:00 -d 31.13.64.0/18 -j ACCEPT
```
This command sets the policy to allow ( **-j ACCEPT** ) http and https ( **-m multiport --dport http,https** ) between noon ( **\--timestart 12:00** ) and 1PM ( **\--timestop 13:00** ) to Facebook.com ( **-d 31.13.64.0/18** ).
### Regulate by time—Take 2
Scenario: During planned downtime for system maintenance, you need to deny all TCP and UDP traffic between the hours of 2AM and 3AM so maintenance tasks won't be disrupted by incoming traffic. This will take two iptables rules:
```
iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP
```
With these rules, TCP and UDP traffic ( **-p tcp and -p udp** ) are denied ( **-j DROP** ) between the hours of 2AM ( **\--timestart 02:00** ) and 3AM ( **\--timestop 03:00** ) on input ( **-A INPUT** ).
### Limit connections with iptables
Scenario: Your internet-connected web servers are under attack by bad actors from around the world attempting to DoS (Denial of Service) them. To mitigate these attacks, you restrict the number of connections a single IP address can have to your web server:
```
iptables -A INPUT -p tcp --syn -m multiport --dport http,https -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
```
Let's look at what this rule does. If a host makes more than 20 ( **\--connlimit-above 20** ) new connections ( **-p tcp --syn** ) in a minute to the web servers ( **\--dport http,https** ), reject the new connection ( **-j REJECT** ) and tell the connecting host you are rejecting the connection ( **\--reject-with tcp-reset** ).
### Monitor iptables rules
Scenario: Since iptables operates on a "first match wins" basis as packets traverse the rules in a chain, frequently matched rules should be near the top of the policy and less frequently matched rules should be near the bottom. How do you know which rules are traversed the most or the least so they can be ordered nearer the top or the bottom?
#### Tip #1: See how many times each rule has been hit.
Use this command:
```
iptables -L -v -n --line-numbers
```
The command will list all the rules in the chain ( **-L** ). Since no chain was specified, all the chains will be listed with verbose output ( **-v** ) showing packet and byte counters in numeric format ( **-n** ) with line numbers at the beginning of each rule corresponding to that rule's position in the chain.
Using the packet and bytes counts, you can order the most frequently traversed rules to the top and the least frequently traversed rules towards the bottom.
#### Tip #2: Remove unnecessary rules.
Which rules aren't getting any matches at all? These would be good candidates for removal from the policy. You can find that out with this command:
```
iptables -nvL | grep -v "0     0"
```
Note: that's not a tab between the zeros; there are five spaces between the zeros.
#### Tip #3: Monitor what's going on.
You would like to monitor what's going on with iptables in real time, like with **top**. Use this command to monitor iptables activity dynamically and show only the rules that are actively being traversed:
```
watch --interval=5 'iptables -nvL | grep -v "0     0"'
```
**watch** runs **'iptables -nvL | grep -v "0 0"'** every five seconds and displays the first screen of its output. This allows you to watch the packet and byte counts change over time.
### Report on iptables
Scenario: Your manager thinks this iptables firewall stuff is just great, but a daily activity report would be even better. Sometimes it's more important to write a report than to do the work.
Use the packet filter/firewall/IDS log analyzer [FWLogwatch][6] to create reports based on the iptables firewall logs. FWLogwatch supports many log formats and offers many analysis options. It generates daily and monthly summaries of the log files, allowing the security administrator to free up substantial time, maintain better control over network security, and reduce unnoticed attacks.
Here is sample output from FWLogwatch:
![](https://opensource.com/sites/default/files/uploads/fwlogwatch.png)
### More than just ACCEPT and DROP
We've covered many facets of iptables, all the way from making sure you don't lock yourself out when working with iptables to monitoring iptables to visualizing the activity of an iptables firewall. These will get you started down the path to realizing even more iptables tips and tricks.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/iptables-tips-and-tricks
作者:[Gary Smith][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greptile
[1]: https://en.wikipedia.org/wiki/Netfilter
[2]: https://en.wikipedia.org/wiki/Iptables
[3]: http://www.ipaddressguide.com/cidr
[4]: http://www.facebook.com
[6]: http://fwlogwatch.inside-security.de/

View File

@ -0,0 +1,179 @@
How to Install Pip on Ubuntu
======
**Pip is a command line tool that allows you to install software packages written in Python. Learn how to install Pip on Ubuntu and how to use it for installing Python applications.**
There are numerous ways to [install software on Ubuntu][1]. You can install applications from the software center, from downloaded DEB files, from PPA, from [Snap packages][2], [using Flatpak][3], using [AppImage][4] and even from the good old source code.
There is one more way to install packages in [Ubuntu][5]. It's called Pip and you can use it to install Python-based applications.
### What is Pip
[Pip][6] stands for “Pip Installs Packages”. [Pip][7] is a command line based package management system. It is used to install and manage software written in [Python language][8].
You can use Pip to install packages listed in the Python Package Index ([PyPI][9]).
As a software developer, you can use pip to install various Python module and packages for your own Python projects.
As an end user, you may need pip in order to install some applications that are developed using Python and can be installed easily using pip. One such example is [Stress Terminal][10] application that you can easily install with pip.
Let's see how you can install pip on Ubuntu and other Ubuntu-based distributions.
### How to install Pip on Ubuntu
![Install pip on Ubuntu Linux][11]
Pip is not installed on Ubuntu by default. You'll have to install it. Installing pip on Ubuntu is really easy. I'll show it to you in a moment.
Ubuntu 18.04 has both Python 2 and Python 3 installed by default. And hence, you should install pip for both Python versions.
Pip, by default, refers to Python 2. Pip for Python 3 is invoked as pip3.
Note: I am using Ubuntu 18.04 in this tutorial. But the instructions here should be valid for other versions like Ubuntu 16.04, 18.10 etc. You may also use the same commands on other Linux distributions based on Ubuntu such as Linux Mint, Linux Lite, Xubuntu, Kubuntu etc.
#### Install pip for Python 2
First, make sure that you have Python 2 installed. On Ubuntu, use the command below to verify.
```
python2 --version
```
If there is no error and a valid output that shows the Python version, you have Python 2 installed. So now you can install pip for Python 2 using this command:
```
sudo apt install python-pip
```
It will install pip and a number of other dependencies with it. Once installed, verify that you have pip installed correctly.
```
pip --version
```
It should show you a version number, something like this:
```
pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)
```
This means that you have successfully installed pip on Ubuntu.
#### Install pip for Python 3
You have to make sure that Python 3 is installed on Ubuntu. To check that, use this command:
```
python3 --version
```
If it shows you a number like Python 3.6.6, Python 3 is installed on your Linux system.
Now, you can install pip3 using the command below:
```
sudo apt install python3-pip
```
You should verify that pip3 has been installed correctly using this command:
```
pip3 --version
```
It should show you a number like this:
```
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
```
It means that pip3 is successfully installed on your system.
### How to use Pip command
Now that you have installed pip, let's quickly see some of the basic pip commands. These commands will help you use pip for searching, installing, and removing Python packages.
To search packages from the Python Package Index, you can use the following pip command:
```
pip search <search_string>
```
For example, if you search for 'stress', it will show all the packages that have the string 'stress' in their name or description.
```
pip search stress
stress (1.0.0) - A trivial utility for consuming system resources.
s-tui (0.8.2) - Stress Terminal UI stress test and monitoring tool
stressypy (0.0.12) - A simple program for calling stress and/or stress-ng from python
fuzzing (0.3.2) - Tools for stress testing applications.
stressant (0.4.1) - Simple stress-test tool
stressberry (0.1.7) - Stress tests for the Raspberry Pi
mobbage (0.2) - A HTTP stress test and benchmark tool
stresser (0.2.1) - A large-scale stress testing framework.
cyanide (1.3.0) - Celery stress testing and integration test support.
pysle (1.5.7) - An interface to ISLEX, a pronunciation dictionary with stress markings.
ggf (0.3.2) - global geometric factors and corresponding stresses of the optical stretcher
pathod (0.17) - A pathological HTTP/S daemon for testing and stressing clients.
MatPy (1.0) - A toolbox for intelligent material design, and automatic yield stress determination
netblow (0.1.2) - Vendor agnostic network testing framework to stress network failures
russtress (0.1.3) - Package that helps you to put lexical stress in russian text
switchy (0.1.0a1) - A fast FreeSWITCH control library purpose-built on traffic theory and stress testing.
nx4_selenium_test (0.1) - Provides a Python class and apps which monitor and/or stress-test the NoMachine NX4 web interface
physical_dualism (1.0.0) - Python library that approximates the natural frequency from stress via physical dualism, and vice versa.
fsm_effective_stress (1.0.0) - Python library that uses the rheological-dynamical analogy (RDA) to compute damage and effective buckling stress in prismatic shell structures.
processpathway (0.3.11) - A nifty little toolkit to create stress-free, frustrationless image processing pathways from your webcam for computer vision experiments. Or observing your cat.
```
If you want to install an application using pip, you can use it in the following manner:
```
pip install <package_name>
```
Pip doesn't support tab completion, so the package name should be exact. It will download all the necessary files and install that package.
If you want to remove a Python package installed via pip, you can use the uninstall option in pip.
```
pip uninstall <installed_package_name>
```
You can use pip3 instead of pip in the above commands.
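As a quick end-to-end sketch using the s-tui package mentioned earlier (the --user flag keeps the install inside your home directory instead of system-wide):
```
pip3 install --user s-tui   # install for the current user only
pip3 list | grep s-tui      # confirm the package is installed
pip3 uninstall s-tui        # remove it again (you will be asked to confirm)
```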
I hope this quick tip helped you to install pip on Ubuntu. If you have any questions or suggestions, please let me know in the comment section below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-pip-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: https://itsfoss.com/how-to-add-remove-programs-in-ubuntu/
[2]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[3]: https://itsfoss.com/flatpak-guide/
[4]: https://itsfoss.com/use-appimage-linux/
[5]: https://www.ubuntu.com/
[6]: https://en.wikipedia.org/wiki/Pip_(package_manager)
[7]: https://pypi.org/project/pip/
[8]: https://www.python.org/
[9]: https://pypi.org/
[10]: https://itsfoss.com/stress-terminal-ui/
[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/install-pip-ubuntu.png

View File

@ -0,0 +1,263 @@
Turn your book into a website and an ePub using Pandoc
======
Write once, publish twice using Markdown and Pandoc.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
Pandoc is a command-line tool for converting files from one markup language to another. In my [introduction to Pandoc][1], I explained how to convert text written in Markdown into a website, a slideshow, and a PDF.
In this follow-up article, I'll dive deeper into [Pandoc][2], showing how to produce a website and an ePub book from the same Markdown source file. I'll use my upcoming e-book, [GRASP Principles for the Object-Oriented Mind][3], which I created using this process, as an example.
First I will explain the file structure used for the book, then how to use Pandoc to generate a website and deploy it in GitHub. Finally, I demonstrate how to generate its companion ePub book.
You can find the code in my [Programming Fight Club][4] GitHub repository.
### Setting up the writing structure
I do all of my writing in Markdown syntax. You can also use HTML, but the more HTML you introduce, the higher the risk that problems will arise when Pandoc converts Markdown to an ePub document. My books follow the one-chapter-per-file pattern. Declare chapters using the Markdown heading H1 ( **#** ). You can put more than one chapter in each file, but putting them in separate files makes it easier to find content and do updates later.
The meta-information follows a similar pattern: each output format has its own meta-information file. Meta-information files define information about your documents, such as text to add to your HTML or the license of your ePub. I store all of my Markdown documents in a folder named parts (this is important for the Makefile that generates the website and ePub). As an example, let's take the table of contents, the preface, and the about chapters (divided into the files toc.md, preface.md, and about.md) and, for clarity, we will leave out the remaining chapters.
My about file might begin like:
```
# About this book {-}
## Who should read this book {-}
Before creating a complex software system one needs to create a solid foundation.
General Responsibility Assignment Software Principles (GRASP) are guidelines to assign
responsibilities to software classes in object-oriented programming.
```
Once the chapters are finished, the next step is to add meta-information to setup the format for the website and the ePub.
### Generating the website
#### Create the HTML meta-information file
The meta-information file (web-metadata.yaml) for my website is a simple YAML file that contains information about the author, title, rights, content for the `<head>` tag, and content for the beginning and end of the HTML file.
I recommend (at minimum) including the following fields in the web-metadata.yaml file:
```
---
title: <a href="/grasp-principles/toc/">GRASP principles for the Object-oriented mind</a>
author: Kiko Fernandez-Reyes
rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
header-includes:
- |
  \```{=html}
  <link href="https://fonts.googleapis.com/css?family=Inconsolata" rel="stylesheet">
  <link href="https://fonts.googleapis.com/css?family=Gentium+Basic|Inconsolata" rel="stylesheet">
  \```
include-before:
- |
  \```{=html}
  <p>If you like this book, please consider
      spreading the word or
      <a href="https://www.buymeacoffee.com/programming">
        buying me a coffee
      </a>
  </p>
  \```
include-after:
- |
  ```{=html}
  <div class="footnotes">
    <hr>
    <div class="container">
        <nav class="pagination" role="pagination">
          <ul>
          <p>
          <span class="page-number">Designed with</span> ❤️  <span class="page-number"> from Uppsala, Sweden</span>
           </p>
           <p>
           <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a>
           </p>
           </ul>
        </nav>
    </div>
  </div>
  \```
---
```
Some variables to note:
* The **header-includes** variable contains HTML that will be embedded inside the `<head>` tag.
* The line after calling a variable must be **\- |**. The next line must begin with triple backquotes that are aligned with the **|** or Pandoc will reject it. **{=html}** tells Pandoc that this is raw text and should not be processed as Markdown. (For this to work, you need to check that the **raw_attribute** extension in Pandoc is enabled. To check, type **pandoc --list-extensions | grep raw** and make sure the returned list contains an item named **+raw_html** ; the plus sign indicates it is enabled.)
* The variable **include-before** adds some HTML at the beginning of your website, and I ask readers to consider spreading the word or buying me a coffee.
* The **include-after** variable appends raw HTML at the end of the website and shows my book's license.
These are only some of the fields available; take a look at the template variables in HTML (my article [introduction to Pandoc][1] covered this for LaTeX but the process is the same for HTML) to learn about others.
#### Split the website into chapters
The website can be generated as a whole, resulting in a long page with all the content, or split into chapters, which I think is easier to read. I'll explain how to divide the website into chapters so the reader doesn't get intimidated by a long website.
To make the website easy to deploy on GitHub Pages, we need to create a root folder called docs (which is the root folder that GitHub Pages uses by default to render a website). Then we need to create folders for each chapter under docs, place the HTML chapters in their own folders, and the file content in a file named index.html.
For example, the about.md file is converted to a file named index.html that is placed in a folder named about (about/index.html). This way, when users type **http://<your-website.com>/about/** , the index.html file from the folder about will be displayed in their browser.
The following Makefile does all of this:
```
# Your book files
DEPENDENCIES= toc preface about
# Placement of your HTML files
DOCS=docs
all: web
web: setup $(DEPENDENCIES)
        @cp $(DOCS)/toc/index.html $(DOCS)
# Creation and copy of stylesheet and images into
# the assets folder. This is important to deploy the
# website to Github Pages.
setup:
        @mkdir -p $(DOCS)
        @cp -r assets $(DOCS)
# Creation of folder and index.html file on a
# per-chapter basis
$(DEPENDENCIES):
        @mkdir -p $(DOCS)/$@
        @pandoc -s --toc web-metadata.yaml parts/$@.md \
        -c /assets/pandoc.css -o $(DOCS)/$@/index.html
clean:
        @rm -rf $(DOCS)
.PHONY: all clean web setup
```
The option **-c /assets/pandoc.css** declares which CSS stylesheet to use; it will be fetched from **/assets/pandoc.css**. In other words, inside the `<head>` HTML tag, Pandoc adds the following line:
```
<link rel="stylesheet" href="/assets/pandoc.css">
```
To generate the website, type:
```
make
```
The root folder should now contain the following structure and files:
```
.---parts
|    |--- toc.md
|    |--- preface.md
|    |--- about.md
|
|---docs
    |--- assets/
    |--- index.html
    |--- toc
    |     |--- index.html
    |
    |--- preface
    |     |--- index.html
    |
    |--- about
          |--- index.html
   
```
#### Deploy the website
To deploy the website on GitHub, follow these steps:
1. Create a new repository
2. Push your content to the repository (see the command sketch below)
3. Go to the GitHub Pages section in the repository's Settings and select the option for GitHub to use the content from the Master branch
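A minimal sketch of the first two steps, assuming you have created an empty GitHub repository with the hypothetical name my-book (replace <username> with your account and adjust the file list to your layout):
```
git init
git add parts docs Makefile web-metadata.yaml epub-meta.yaml
git commit -m "Add book sources and generated website"
git remote add origin git@github.com:<username>/my-book.git
git push -u origin master
```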
You can get more details on the [GitHub Pages][5] site.
Check out [my book's website][6], generated using this process, to see the result.
### Generating the ePub book
#### Create the ePub meta-information file
The ePub meta-information file, epub-meta.yaml, is similar to the HTML meta-information file. The main difference is that ePub offers other template variables, such as **publisher** and **cover-image**. Your ePub book's stylesheet will probably differ from your website's; mine uses one named epub.css.
```
---
title: 'GRASP principles for the Object-oriented Mind'
publisher: 'Programming Language Fight Club'
author: Kiko Fernandez-Reyes
rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
cover-image: assets/cover.png
stylesheet: assets/epub.css
...
```
Add the following content to the previous Makefile:
```
epub:
        @pandoc -s --toc epub-meta.yaml \
        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub
```
The command for the ePub target takes all the dependencies from the HTML version (your chapter names), appends the Markdown extension to them, and prepends them with the path to the parts folder so Pandoc knows how to process them. For example, if **$(DEPENDENCIES)** was only **preface about** , then the Makefile would call:
```
@pandoc -s --toc epub-meta.yaml \
parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub
```
Pandoc would take these two chapters, combine them, generate an ePub, and place the book under the assets folder.
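With the new target in place, generating the book is a single command; per the Makefile, the output lands in docs/assets/book.epub:
```
make epub
```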
Here's an [example][7] of an ePub created using this process.
### Summarizing the process
The process to create a website and an ePub from a Markdown file isn't difficult, but there are a lot of details. The following outline may make it easier for you to follow.
* HTML book:
* Write chapters in Markdown
* Add metadata
* Create a Makefile to glue pieces together
* Set up GitHub Pages
* Deploy
* ePub book:
* Reuse chapters from previous work
* Add new metadata file
* Create a Makefile to glue pieces together
* Set up GitHub Pages
* Deploy
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc
作者:[Kiko Fernandez-Reyes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kikofernandez
[1]: https://opensource.com/article/18/9/intro-pandoc
[2]: https://pandoc.org/
[3]: https://www.programmingfightclub.com/
[4]: https://github.com/kikofernandez/programmingfightclub
[5]: https://pages.github.com/
[6]: https://www.programmingfightclub.com/grasp-principles/
[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub

View File

@ -0,0 +1,76 @@
4 open source invoicing tools for small businesses
======
Manage your billing and get paid with easy-to-use, web-based invoicing software.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp)
No matter what your reasons for starting a small business, the key to keeping that business going is getting paid. Getting paid usually means sending a client an invoice.
It's easy enough to whip up an invoice using LibreOffice Writer or LibreOffice Calc, but sometimes you need a bit more. A more professional look. A way of keeping track of your invoices. Reminders about when to follow up on the invoices you've sent.
There's a wide range of commercial and closed-source invoicing tools out there. But the offerings on the open source side of the fence are just as good, and maybe even more flexible, than their closed source counterparts.
Let's take a look at four web-based open source invoicing tools that are great choices for freelancers and small businesses on a tight budget. I reviewed two of them in 2014, in an [earlier version][1] of this article. These four picks are easy to use and you can use them on just about any device.
### Invoice Ninja
I've never been a fan of the term ninja. Despite that, I like [Invoice Ninja][2]. A lot. It melds a simple interface with a set of features that let you create, manage, and send invoices to clients and customers.
You can easily configure multiple clients, track payments and outstanding invoices, generate quotes, and email invoices. What sets Invoice Ninja apart from its competitors is its [integration with][3] over 40 popular online payment gateways, including PayPal, Stripe, WePay, and Apple Pay.
[Download][4] a version that you can install on your own server or get an account with the [hosted version][5] of Invoice Ninja. There's a free version and a paid tier that will set you back US$ 8 a month.
### InvoicePlane
Once upon a time, there was a nifty open source invoicing tool called FusionInvoice. One day, its creators took the latest version of the code proprietary. That didn't end happily, as FusionInvoice's doors were shut for good in 2018. But that wasn't the end of the application. An old version of the code stayed open source and morphed into [InvoicePlane][6], which packs all of FusionInvoice's goodness.
Creating an invoice takes just a couple of clicks. You can make them as minimal or detailed as you need. When you're ready, you can email your invoices or output them as PDFs. You can also create recurring invoices for clients or customers you regularly bill.
InvoicePlane does more than generate and track invoices. You can also create quotes for jobs or goods, track products you sell, view and enter payments, and run reports on your invoices.
[Grab the code][7] and install it on your web server. Or, if you're not quite ready to do that, [take the demo][8] for a spin.
### OpenSourceBilling
Described by its developer as "beautifully simple billing software," [OpenSourceBilling][9] lives up to the description. It has one of the cleanest interfaces I've seen, which makes configuring and using the tool a breeze.
OpenSourceBilling stands out because of its dashboard, which tracks your current and past invoices, as well as any outstanding amounts. Your information is broken up into graphs and tables, which makes it easy to follow.
You do much of the configuration on the invoice itself. You can add items, tax rates, clients, and even payment terms with a click and a few keystrokes. OpenSourceBilling saves that information across all of your invoices, both new and old.
As with some of the other tools we've looked at, OpenSourceBilling has a [demo][10] you can try.
### BambooInvoice
When I was a full-time freelance writer and consultant, I used [BambooInvoice][11] to bill my clients. When its original developer stopped working on the software, I was a bit disappointed. But BambooInvoice is back, and it's as good as ever.
What attracted me to BambooInvoice is its simplicity. It does one thing and does it well. You can create and edit invoices, and BambooInvoice keeps track of them by client and by the invoice numbers you assign to them. It also lets you know which invoices are open or overdue. You can email the invoices from within the application or generate PDFs. You can also run reports to keep tabs on your income.
To [install][12] and use BambooInvoice, you'll need a web server running PHP 5 or newer as well as a MySQL database. Chances are you already have access to one, so you're good to go.
Do you have a favorite open source invoicing tool? Feel free to share it by leaving a comment.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/open-source-invoicing-tools
作者:[Scott Nesbitt][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[1]: https://opensource.com/business/14/9/4-open-source-invoice-tools
[2]: https://www.invoiceninja.org/
[3]: https://www.invoiceninja.com/integrations/
[4]: https://github.com/invoiceninja/invoiceninja
[5]: https://www.invoiceninja.com/invoicing-pricing-plans/
[6]: https://invoiceplane.com/
[7]: https://wiki.invoiceplane.com/en/1.5/getting-started/installation
[8]: https://demo.invoiceplane.com/
[9]: http://www.opensourcebilling.org/
[10]: http://demo.opensourcebilling.org/
[11]: https://www.bambooinvoice.net/
[12]: https://sourceforge.net/projects/bambooinvoice/

View File

@ -0,0 +1,76 @@
How to use the SSH and SFTP protocols on your home network
======
Use the SSH and SFTP protocols to access other devices, efficiently and securely transfer files, and more.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)
Years ago, I decided to set up an extra computer (I always have extra computers) so that I could access it from work to transfer files I might need. To do this, the basic first step is to have your ISP assign a fixed IP address.
The not-so-basic but much more important next step is to set up your accessible system safely. In this particular case, I was planning to access it only from work, so I could restrict access to that IP address. Even so, you want to use all possible security features. What is amazing—and scary—is that as soon as you set this up, people from all over the world will immediately attempt to access your system. You can discover this by checking the logs. I presume there are bots constantly searching for open doors wherever they can find them.
Not long after I set up my computer, I decided my access was more a toy than a need, so I turned it off and gave myself one less thing to worry about. Nonetheless, there is another use for SSH and SFTP inside your home network, and it is more or less already set up for you.
One requirement, of course, is that the other computer in your home must be turned on, although it doesn't matter whether someone is logged on or not. You also need to know its IP address. There are two ways to find this out. One is to get access to the router, which you can do through a browser. Typically, its address is something like **192.168.1.254**. With some searching, it should be easy enough to find out what is currently on and hooked up to the system by eth0 or WiFi. What can be challenging is recognizing the computer you're interested in.
I find it easier to go to the computer in question, bring up a shell, and type:
```
ifconfig
```
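On newer distributions that ship iproute2 instead of the legacy net-tools package, ifconfig may not be installed; the ip command reports the same addresses:
```
ip addr show
```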
Either command spits out a lot of information, but the bit you want is right after `inet` and might look something like **192.168.1.234**. After you find that, go back to the client computer from which you want to access this host, and on the command line, type:
```
ssh gregp@192.168.1.234
```
For this to work, **gregp** must be a valid user on that system. You will then be asked for his password, and if you enter it correctly, you will be connected to that other computer in a shell environment. I confess that I don't use SSH in this way very often. I have used it at times so I can run `dnf` to upgrade some other computer than the one I'm sitting at. Usually, I use SFTP:
```
sftp gregp@192.168.1.234
```
because I have a greater need for an easy method of transferring files from one computer to another. It's certainly more convenient and less time-consuming than using a USB stick or an external drive.
Once you're connected, the two basic commands for SFTP are `get`, to receive files from the host, and `put`, to send files to the host. I usually migrate to the directory on my client where I either want to save files I will get from the host or send files to the host before I connect. When you connect, you will be in the top-level directory—in this example, **home/gregp**. Once connected, you can then use `cd` just as you would on your client, except now you're changing your working directory on the host. You may need to use `ls` to make sure you know where you are.
If you need to change the working directory on your client, use the command `lcd` (as in **local change directory** ). Similarly, use `lls` to show the working directory contents on your client system.
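Putting those commands together, a short session might look like this (the directory and file names are only placeholders):
```
sftp> lcd ~/Downloads       # working directory on the client
sftp> cd Documents/reports  # working directory on the host
sftp> get budget.ods        # copy budget.ods from the host to ~/Downloads
sftp> put notes.txt         # copy ~/Downloads/notes.txt to the host
```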
What if the host doesnt have a directory with the name you would like? Use `mkdir` to make a new directory on it. Or you might copy a whole directory of files to the host with this:
```
put -r ThisDir/
```
which creates the directory and then copies all of its files and subdirectories to the host. These transfers are extremely fast, as fast as your hardware allows, and have none of the bottlenecks you might encounter on the internet. To see a list of commands you can use in an SFTP session, check:
```
man sftp
```
I have also been able to put SFTP to use on a Windows VM on my computer, yet another advantage of setting up a VM rather than a dual-boot system. This lets me move files to or from the Linux part of the system. So far I have only done this using a client in Windows.
You can also use SSH and SFTP to access any devices connected to your router by wire or WiFi. For a while, I used an app called [SSHDroid][1], which runs SSH in a passive mode. In other words, you use your computer to access the Android device that is the host. Recently I found another app, [Admin Hands][2], where the tablet or phone is the client and can be used for either SSH or SFTP operations. This app is great for backing up or sharing photos from your phone.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/ssh-sftp-home-network
作者:[Greg Pittman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greg-p
[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid
[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US

View File

@ -0,0 +1,72 @@
translating---geekpi
Introducing Swift on Fedora
======
![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg)
Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. It aims to be the best language for a variety of programming projects, ranging from systems programming to desktop applications and scaling up to cloud services. Read more about it and how to try it out in Fedora.
### Safe, Fast, Expressive
Like many modern programming languages, Swift was designed to be safer than C-based languages. For example, variables are always initialized before they can be used. Arrays and integers are checked for overflow. Memory is automatically managed.
Swift puts intent right in the syntax. To declare a variable, use the var keyword. To declare a constant, use let.
Swift also guarantees that objects can never be nil; in fact, trying to use an object known to be nil will cause a compile-time error. When using a nil value is appropriate, it supports a mechanism called **optionals**. An optional may contain nil, but is safely unwrapped using the **?** operator.
Some additional features include:
* Closures unified with function pointers
* Tuples and multiple return values
* Generics
* Fast and concise iteration over a range or collection
* Structs that support methods, extensions, and protocols
* Functional programming patterns, e.g., map and filter
* Powerful error handling built-in
* Advanced control flow with do, guard, defer, and repeat keywords
### Try Swift out
Swift is available in Fedora 28 under the package name **swift-lang**. Once installed, run swift and the REPL console starts up.
```
$ swift
Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance.
1> let greeting="Hello world!"
greeting: String = "Hello world!"
2> print(greeting)
Hello world!
3> greeting = "Hello universe!"
error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant
greeting = "Hello universe!"
~~~~~~~~ ^
3>
```
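Beyond the REPL, the toolchain also includes the swiftc compiler (an assumption you can verify on your system with `rpm -ql swift-lang`), so you can build standalone binaries:
```
$ echo 'print("Hello from Fedora!")' > hello.swift
$ swiftc hello.swift -o hello
$ ./hello
Hello from Fedora!
```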
Swift has a growing community, and in particular, a [work group][1] dedicated to making it an efficient and effective server-side programming language. Be sure to visit [its home page][2] for more ways to get involved.
Photo by [Uillian Vargas][3] on [Unsplash][4].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/introducing-swift-fedora/
作者:[Link Dupont][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/linkdupont/
[1]: https://swift.org/server/
[2]: http://swift.org
[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,128 @@
Oomox Customize And Create Your Own GTK2, GTK3 Themes
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-720x340.png)
Theming and Visual customization is one of the main advantages of Linux. Since all the code is open, you can change how your Linux system looks and behaves to a greater degree than you ever could with Windows/Mac OS. GTK theming is perhaps the most popular way in which people customize their Linux desktops. The GTK toolkit is used by a wide variety of desktop environments like Gnome, Cinnamon, Unity, XFCE, and budgie. This means that a single theme made for GTK can be applied to any of these Desktop Environments with little changes.
There are a lot of very high quality popular GTK themes out there, such as **Arc** , **Numix** , and **Adapta**. But if you want to customize these themes and create your own visual design, you can use **Oomox**.
Oomox is a graphical app for customizing and creating your own GTK theme complete with your own color, icon and terminal style. It comes with several presets, which you can apply on a Numix, Arc, or Materia style theme to create your own GTK theme.
### Installing Oomox
On Arch Linux and its variants:
Oomox is available on [**AUR**][1], so you can install it using any AUR helper programs like [**Yay**][2].
```
$ yay -S oomox
```
On Debian/Ubuntu/Linux Mint, download the `oomox.deb` package from [**here**][3] and install it as shown below. As of writing this guide, the latest version was **oomox_1.7.0.5.deb**.
```
$ sudo dpkg -i oomox_1.7.0.5.deb
$ sudo apt install -f
```
On Fedora, Oomox is available in third-party **COPR** repository.
```
$ sudo dnf copr enable tcg/themes
$ sudo dnf install oomox
```
Oomox is also available as a [**Flatpak app**][4]. Make sure you have installed Flatpak as described in [**this guide**][5]. And then, install and run Oomox using the following commands:
```
$ flatpak install flathub com.github.themix_project.Oomox
$ flatpak run com.github.themix_project.Oomox
```
For other Linux distributions, go to the Oomox project page (Link is given at the end of this guide) on Github and compile and install it manually from source.
### Customize And Create Your Own GTK2, GTK3 Themes
**Theme Customization**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-1-1.png)
You can change the colour of practically every UI element, like:
1. Headers
2. Buttons
3. Buttons inside Headers
4. Menus
5. Selected Text
To the left, there are a number of presets, like the Cars theme, modern themes like Materia and Numix, and retro themes. Then, at the top of the main window, there's an option called **Theme Style** that lets you set the overall visual style of the theme. You can choose between Numix, Arc, and Materia.
With certain styles like Numix, you can even change things like the Header Gradient, Outline Width and Panel Opacity. You can also add a Dark Mode for your theme that will be automatically created from the default theme.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-2.png)
**Iconset Customization**
You can customize the iconset that will be used for the theme icons. There are two options: Gnome Colors and Archdroid. You can change the base and stroke colours of the iconset.
**Terminal Customization**
You can also customize the terminal colours. The app has several presets for this, but you can customize the exact colour code for each colour value like red, green, black, and so on. You can also auto-swap the foreground and background colours.
**Spotify Theme**
A unique feature of this app is that you can theme the Spotify app to your liking. You can change the foreground, background, and accent color of the Spotify app to match the overall GTK theme.
Then, just press the **Apply Spotify Theme** button, and you'll get this window:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-3.png)
Just hit apply, and you're done.
**Exporting your Theme**
Once you're done customizing the theme to your liking, you can rename it by clicking the rename button at the top left:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-4.png)
And then, just hit **Export Theme** to export the theme to your system.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-5.png)
You can also just export just the Iconset or the terminal theme.
After this, you can open any Visual Customization app for your Desktop Environment, like Tweaks for Gnome based DEs, or the **XFCE Appearance Settings** , and select your exported GTK and Shell theme.
### Verdict
If you are a Linux theme junkie and you know exactly how each button and each header in your system should look, Oomox is worth a look. For extreme customizers, it lets you change virtually everything about how your system looks. For people who just want to tweak an existing theme a little bit, it has many, many presets so you can get what you want without a lot of effort.
Have you tried it? What are your thoughts on Oomox? Put them in the comments below!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://aur.archlinux.org/packages/oomox/
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[3]: https://github.com/themix-project/oomox/releases
[4]: https://flathub.org/apps/details/com.github.themix_project.Oomox
[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/

View File

@ -0,0 +1,73 @@
Tips for listing files with ls at the Linux command line
======
Learn some of the Linux 'ls' command's most useful variations.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
One of the first commands I learned in Linux was `ls`. Knowing what's in a directory where a file on your system resides is important. Being able to see and modify not just some but all of the files is also important.
My first Linux cheat sheet was the [One Page Linux Manual][1], which was released in 1999 and became my go-to reference. I taped it over my desk and referred to it often as I began to explore Linux. Listing files with `ls -l` is introduced on the first page, at the bottom of the first column.
Later, I would learn other iterations of this most basic command. Through the `ls` command, I began to learn about the complexity of the Linux file permissions and what was mine and what required root or sudo permission to change. I became very comfortable on the command line over time, and while I still use `ls -l` to find files in the directory, I frequently use `ls -al` so I can see hidden files that might need to be changed, like configuration files.
According to an article by Eric Fischer about the `ls` command in the [Linux Documentation Project][2], the command's roots go back to the `listf` command on MIT's Compatible Time Sharing System in 1961. When CTSS was replaced by [Multics][3], the command became `list`, with switches like `list -all`. According to [Wikipedia][4], `ls` appeared in the original version of AT&T Unix. The `ls` command we use today on Linux systems comes from the [GNU Core Utilities][5].
Most of the time, I use only a couple of iterations of the command. Looking inside a directory with `ls` or `ls -al` is how I generally use the command, but there are many other options that you should be familiar with.
`$ ls -l` provides a simple list of the directory:
![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png)
Using the man pages of my Fedora 28 system, I find that there are many other options to `ls`, all of which provide interesting and useful information about the Linux file system. By entering `man ls` at the command prompt, we can begin to explore some of the other options:
![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png)
To sort the directory by file sizes, use `ls -lS`:
![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png)
To list the contents in reverse order, use `ls -lr`:
![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png)
To list contents by columns, use `ls -C` (note the capital C; lowercase `-c` sorts by change time):
![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png)
`ls -al` provides a list of all the files in the same directory:
![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png)
Here are some additional options that I find useful and interesting:
* List only the .txt files in the directory: `ls *.txt`
* List by file size: `ls -s`
* Sort by time and date: `ls -t`
* Sort by extension: `ls -X`
* Sort by file size: `ls -S`
* Long format with file size: `ls -ls`
To generate a directory list in the specified format and send it to a file for later viewing, enter `ls -al > mydirectorylist`. Finally, one of the more exotic commands I found is `ls -R`, which provides a recursive list of all the directories on your computer and their contents.
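One flag the options above omit is `-h`, which prints human-readable sizes and combines cleanly with the other switches:
```
ls -lhS    # long format, largest files first, sizes shown as K/M/G
ls -altr   # all files, long format, oldest first (newest at the bottom)
```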
For a complete list of the all the iterations of the `ls` command, refer to the [GNU Core Utilities][6].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/ls-command
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf
[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html
[3]: https://en.wikipedia.org/wiki/Multics
[4]: https://en.wikipedia.org/wiki/Ls
[5]: http://www.gnu.org/s/coreutils/
[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation

View File

@ -0,0 +1,119 @@
Archiving web sites
======
I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web.
### Converting simple sites
The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. In my previous life as a web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with the "move fast and break things" attitude of web development. Working with the [Drupal][2] content-management system (CMS) was particularly challenging in that regard as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites but also for third-party sites that are outside of your control and you might want to safeguard.
For simple or static sites, the venerable [Wget][3] program works well. The incantation to mirror a full web site, however, is byzantine:
```
$ nice wget --mirror --execute robots=off --no-verbose --convert-links \
--backup-converted --page-requisites --adjust-extension \
--base=./ --directory-prefix=./ --span-hosts \
--domains=www.example.com,example.com http://www.example.com/
```
The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores `robots.txt` rules, as is now [common practice for archivists][4], and hammers the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.
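With Wget, for instance, a more polite variant of the same crawl adds throttling options (the pause and rate values below are only reasonable starting points):
```
$ nice wget --mirror --page-requisites --adjust-extension \
    --convert-links --wait=1 --random-wait --limit-rate=200k \
    http://www.example.com/
```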
In either form, the command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.
That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a `--reject-regex` option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well.
### JavaScript doom
Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using [progressive enhancement][5] to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like [NoScript][6] or [uMatrix][7] will confirm.
Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper ([pamplemousse.ca][8]), I found that WordPress adds query strings (e.g. `?ver=1.12.4`) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right `Content-Type` header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites.
As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach.
### Creating and displaying WARC files
At the [Internet Archive][9], Brewster Kahle and Mike Burner designed the [ARC][10] (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") [specification][11] that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the [International Internet Preservation Consortium][12] (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based [Heritrix crawler][13].
A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the `--warc` parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is [pywb][14], a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on `http://localhost:8080/`:
```
$ pip install pywb
$ wb-manager init example
$ wb-manager add example crawl.warc.gz
$ wayback
```
This tool was, incidentally, built by the folks behind the [Webrecorder][15] service, which can use a web browser to save dynamic page contents.
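To produce the crawl.warc.gz file in the first place, Wget's WARC support can be bolted onto a crawl (a sketch, with example.com as a stand-in; see the caveats about Wget's WARC output below):
```
$ wget --mirror --page-requisites --warc-file=crawl \
    http://www.example.com/
```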
Unfortunately, pywb has trouble loading WARC files generated by Wget because it [followed][16] an [inconsistency in the 1.0 specification][17], which was [fixed in the 1.1 specification][18]. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called [crawl][19]. Here is how it is invoked:
```
$ crawl https://example.com/
```
(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the `-exclude-related` flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the `-c` flag. But, best of all, the resulting WARC files load perfectly in pywb.
### Future work and alternatives
There are plenty more [resources][20] for using WARC files. In particular, there's a Wget drop-in replacement called [Wpull][21] that is specifically designed for archiving web sites. It has experimental support for [PhantomJS][22] and [youtube-dl][23] integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called [ArchiveBot][24], which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at [ArchiveTeam][25] in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, [snscrape][26] will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is [crocoite][27], which uses the Chrome browser in headless mode to archive JavaScript-heavy sites.
This article would also not be complete without a nod to the [HTTrack][28] project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line.
In the same vein, during my research I found a full rewrite of Wget called [Wget2][29] that has support for multi-threaded operation, which might make it faster than its predecessor. It is [missing some features][30] from Wget, however, most notably reject patterns, WARC output, and FTP support but adds RSS, DNS caching, and improved TLS support.
Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in [Wallabag][31], a self-hosted "read it later" service designed as a free-software alternative to [Pocket][32] (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually [unreadable][33] and Wallabag sometimes [fails to parse the article][34]. Instead, other tools like [bookmark-archiver][35] or [reminiscence][36] save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay.
The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously [working on a backup of the Internet Archive itself][37].
--------------------------------------------------------------------------------
via: https://anarc.at/blog/2018-10-04-archiving-web-sites/
作者:[Anarcat][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://anarc.at
[1]: https://anarc.at/blog
[2]: https://drupal.org
[3]: https://www.gnu.org/software/wget/
[4]: https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/
[5]: https://en.wikipedia.org/wiki/Progressive_enhancement
[6]: https://noscript.net/
[7]: https://github.com/gorhill/uMatrix
[8]: https://pamplemousse.ca/
[9]: https://archive.org
[10]: http://www.archive.org/web/researcher/ArcFileFormat.php
[11]: https://iipc.github.io/warc-specifications/
[12]: https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium
[13]: https://github.com/internetarchive/heritrix3/wiki
[14]: https://github.com/webrecorder/pywb
[15]: https://webrecorder.io/
[16]: https://github.com/webrecorder/pywb/issues/294
[17]: https://github.com/iipc/warc-specifications/issues/23
[18]: https://github.com/iipc/warc-specifications/pull/24
[19]: https://git.autistici.org/ale/crawl/
[20]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem
[21]: https://github.com/chfoo/wpull
[22]: http://phantomjs.org/
[23]: http://rg3.github.io/youtube-dl/
[24]: https://www.archiveteam.org/index.php?title=ArchiveBot
[25]: https://archiveteam.org/
[26]: https://github.com/JustAnotherArchivist/snscrape
[27]: https://github.com/PromyLOPh/crocoite
[28]: http://www.httrack.com/
[29]: https://gitlab.com/gnuwget/wget2
[30]: https://gitlab.com/gnuwget/wget2/wikis/home
[31]: https://wallabag.org/
[32]: https://getpocket.com/
[33]: https://github.com/wallabag/wallabag/issues/2825
[34]: https://github.com/wallabag/wallabag/issues/2914
[35]: https://pirate.github.io/bookmark-archiver/
[36]: https://github.com/kanishka-linux/reminiscence
[37]: http://iabak.archiveteam.org

View File

@ -0,0 +1,190 @@
Functional programming in Python: Immutable data structures
======
Immutability can help us better understand our code. Here's how to achieve it without sacrificing performance.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D)
In this two-part series, I will discuss how to import ideas from the functional programming methodology into Python in order to have the best of both worlds.
This first post will explore how immutable data structures can help. The second part will explore higher-level functional programming concepts in Python using the **toolz** library.
Why functional programming? Because mutation is hard to reason about. If you are already convinced that mutation is problematic, great. If you're not convinced, you will be by the end of this post.
Let's begin by considering squares and rectangles. If we think in terms of interfaces, neglecting implementation details, are squares a subtype of rectangles?
The definition of a subtype rests on the [Liskov substitution principle][1]. In order to be a subtype, it must be able to do everything the supertype does.
How would we define an interface for a rectangle?
```
from zope.interface import Interface
class IRectangle(Interface):
    def get_length(self):
        """Squares can do that"""
    def get_width(self):
        """Squares can do that"""
    def set_dimensions(self, length, width):
        """Uh oh"""
```
If this is the definition, then squares cannot be a subtype of rectangles; they cannot respond to a `set_dimensions` method if the length and width are different.
A different approach is to choose to make rectangles immutable.
```
class IRectangle(Interface):
    def get_length(self):
        """Squares can do that"""
    def get_width(self):
        """Squares can do that"""
    def with_dimensions(self, length, width):
        """Returns a new rectangle"""
```
Now, a square can be a rectangle. It can return a new rectangle (which would not usually be a square) when `with_dimensions` is called, but it would not stop being a square.
This might seem like an academic problem—until we consider that squares and rectangles are, in a sense, a container for their sides. Once we understand this example, the more realistic case where this comes into play is with more traditional containers. For example, consider random-access arrays.
We have `ISquare` and `IRectangle`, and `ISquare` is a subtype of `IRectangle`.
We want to put rectangles in a random-access array:
```
class IArrayOfRectangle(Interface):
    def get_element(self, i):
        """Returns Rectangle"""
    def set_element(self, i, rectangle):
        """'rectangle' can be any IRectangle"""
```
We want to put squares in a random-access array too:
```
class IArrayOfSquare(Interface):
    def get_element(self, i):
        """Returns Square"""
    def set_element(self, i, square):
        """'square' can be any ISquare"""
```
Even though `ISquare` is a subtype of `IRectangle`, no array can implement both `IArrayOfSquare` and `IArrayOfRectangle`.
Why not? Assume `bucket` implements both.
```
>>> rectangle = make_rectangle(3, 4)
>>> bucket.set_element(0, rectangle) # This is allowed by IArrayOfRectangle
>>> thing = bucket.get_element(0) # That has to be a square by IArrayOfSquare
>>> assert thing.get_length() == thing.get_width()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError
```
Being unable to implement both means that neither is a subtype of the other, even though `ISquare` is a subtype of `IRectangle`. The problem is the `set_element` method: If we had a read-only array, `IArrayOfSquare` would be a subtype of `IArrayOfRectangle`.
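A sketch of that read-only variant, using a hypothetical interface name in the same `zope.interface` style as above:
```
from zope.interface import Interface

class IReadOnlyArrayOfRectangle(Interface):
    def get_element(self, i):
        """Returns Rectangle"""
```
With `set_element` gone, an array of squares satisfies this interface too, so the subtype relationship between the element types carries over to the arrays.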
Mutability, in both the mutable `IRectangle` interface and the mutable `IArrayOf*` interfaces, has made thinking about types and subtypes much more difficult—and giving up the ability to mutate means that the intuitive relationships we expect to have between the types actually hold.
Mutation can also have non-local effects. This happens when a shared object between two places is mutated by one. The classic example is one thread mutating a shared object with another thread, but even in a single-threaded program, sharing between places that are far apart is easy. Consider that in Python, most objects are reachable from many places: as a module global, or in a stack trace, or as a class attribute.
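Here is a tiny, single-threaded sketch of that kind of action at a distance:
```
shared = {"count": 0}

def increment_elsewhere(d):
    d["count"] += 1      # the mutation happens here...

increment_elsewhere(shared)
print(shared["count"])   # ...but is visible here: prints 1
```
Every holder of a reference to `shared` sees the change, whether or not they expected it.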
If we cannot constrain the sharing, we might think about constraining the mutability.
Here is an immutable rectangle, taking advantage of the [attrs][2] library:
```
import attr

@attr.s(frozen=True)
class Rectangle(object):
    length = attr.ib()
    width = attr.ib()
    @classmethod
    def with_dimensions(cls, length, width):
        return cls(length, width)
```
Here is a square:
```
@attr.s(frozen=True)
class Square(object):
    side = attr.ib()
    @classmethod
    def with_dimensions(cls, length, width):
        return Rectangle(length, width)
```
Using the `frozen` argument, we can easily have `attrs`-created classes be immutable. All the hard work of writing `__setattr__` correctly has been done by others and is completely invisible to us.
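To see the freeze in action with the `Rectangle` class above, an interpreter session would look roughly like this:
```
>>> r = Rectangle(3, 4)
>>> r.length = 5
Traceback (most recent call last):
  ...
attr.exceptions.FrozenInstanceError
```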
It is still easy to modify objects; it's just nigh impossible to mutate them.
```
too_long = Rectangle(100, 4)
reasonable = attr.evolve(too_long, length=10)
```
The [Pyrsistent][3] package allows us to have immutable containers.
```
import pyrsistent

# Vector of integers
a = pyrsistent.v(1, 2, 3)
# Not a vector of integers
b = a.set(1, "hello")
```
While `b` is not a vector of integers, nothing will ever stop `a` from being one.
What if `a` was a million elements long? Is `b` going to copy 999,999 of them? Pyrsistent comes with "big O" performance guarantees: All operations take `O(log n)` time. It also comes with an optional C extension to improve performance beyond the big O.
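For example, here is a quick sketch; `pyrsistent.pvector` builds a persistent vector from any iterable, and `set` shares almost all structure with the original:
```
import pyrsistent

a = pyrsistent.pvector(range(1000000))
b = a.set(0, -1)   # O(log n): no million-element copy
assert a[0] == 0 and b[0] == -1
assert len(a) == len(b) == 1000000
```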
For modifying nested objects, it comes with a concept of "transformers:"
```
blog = pyrsistent.m(
    title="My blog",
    links=pyrsistent.v("github", "twitter"),
    posts=pyrsistent.v(
        pyrsistent.m(title="no updates",
                     content="I'm busy"),
        pyrsistent.m(title="still no updates",
                     content="still busy")))
new_blog = blog.transform(["posts", 1, "content"],
                          "pretty busy")
```
`new_blog` will now be the immutable equivalent of
```
{'links': ['github', 'twitter'],
 'posts': [{'content': "I'm busy",
            'title': 'no updates'},
           {'content': 'pretty busy',
            'title': 'still no updates'}],
 'title': 'My blog'}
```
But `blog` is still the same. This means anyone who had a reference to the old object has not been affected: The transformation had only local effects.
This is useful when sharing is rampant. For example, consider default arguments:
```
from pyrsistent import v

def silly_sum(a, b, extra=v(1, 2)):
    extra = extra.extend([a, b])
    return sum(extra)
```
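Because `extend` returns a new vector instead of mutating the default in place, the default argument survives repeated calls intact, sidestepping Python's classic mutable-default-argument pitfall. A quick check:
```
>>> silly_sum(3, 4)
10
>>> silly_sum(3, 4)  # the default 'extra' is still v(1, 2)
10
```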
In this post, we have learned why immutability can be useful for thinking about our code, and how to achieve it without an extravagant performance price. Next time, we will learn how immutable objects allow us to use powerful programming constructs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures
作者:[Moshe Zadka][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[1]: https://en.wikipedia.org/wiki/Liskov_substitution_principle
[2]: https://www.attrs.org/en/stable/
[3]: https://pyrsistent.readthedocs.io/en/latest/

View File

@ -0,0 +1,181 @@
PyTorch 1.0 Preview Release: Facebook's newest Open Source AI
======
Facebook already uses its own open source AI framework, PyTorch, quite extensively in its artificial intelligence projects. Recently, it has gone a league ahead by releasing a pre-release preview of version 1.0.
For those who are not familiar, [PyTorch][1] is a Python-based library for Scientific Computing.
PyTorch harnesses the [superior computational power of Graphical Processing Units (GPUs)][2] for carrying out complex [Tensor][3] computations and implementing [deep neural networks][4]. So, it is used widely across the world by numerous researchers and developers.
This new ready-to-use [Preview Release][5] was announced at the [PyTorch Developer Conference][6] at [The Midway][7], San Francisco, CA on Tuesday, October 2, 2018.
### Highlights of PyTorch 1.0 Release Candidate
![PyTorhc is Python based open source AI framework from Facebook][8]
Some of the main new features in the release candidate are:
#### 1\. JIT
JIT is a set of compiler tools to bring research closer to production. It includes a Python-based language called Torch Script, as well as ways to make existing code compatible with it.
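Torch Script is exposed through the `torch.jit` module. As a rough sketch (this is a pre-release API, so details may change), compiling a function ahead of time might look like this:
```
import torch

@torch.jit.script
def scaled_relu(x):
    # Compiled into Torch Script, decoupled from the Python interpreter
    return torch.relu(x) * 2.0

print(scaled_relu(torch.randn(3)))
print(scaled_relu.graph)  # inspect the compiled intermediate representation
```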
#### 2\. New torch.distributed library: “C10D”
“C10D” enables asynchronous operation on different backends with performance improvements on slower networks and more.
#### 3\. C++ frontend (experimental)
Though it has been specifically flagged as an unstable API (expected in a pre-release), this is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend. It is meant to enable research into high-performance, low-latency C++ applications installed directly on hardware.
To know more, you can take a look at the complete [update notes][9] on GitHub.
The first stable version of PyTorch 1.0 will be released in the summer.
### Installing PyTorch on Linux
To install PyTorch v1.0rc0, the developers recommend using [conda][10], though there are also other ways to do that, as shown on their [local installation page][11], where everything necessary is documented in detail.
#### Prerequisites
* Linux
* Pip
* Python
* [CUDA][12] (For Nvidia GPU owners)
As we recently showed you [how to install and use Pip][13], let's get to know how we can install PyTorch with it.
Note that PyTorch has GPU and CPU-only variants. You should install the one that suits your hardware.
#### Installing old and stable version of PyTorch
If you want the stable release (version 0.4) for your GPU, use:
```
pip install torch torchvision
```
Use these two commands in succession for a CPU-only stable release:
```
pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl
pip install torchvision
```
#### Installing PyTorch 1.0 Release Candidate
You can install the PyTorch 1.0 RC GPU version with this command:
```
pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html
```
If you do not have a GPU and would prefer a CPU-only version, use:
```
pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
```
#### Verifying your PyTorch installation
Start up the Python console in a terminal with the following simple command:
```
python
```
Now enter the following sample code line by line to verify your installation:
```
from __future__ import print_function
import torch
x = torch.rand(5, 3)
print(x)
```
You should get an output like:
```
tensor([[0.3380, 0.3845, 0.3217],
[0.8337, 0.9050, 0.2650],
[0.2979, 0.7141, 0.9069],
[0.1449, 0.1132, 0.1375],
[0.4675, 0.3947, 0.1426]])
```
To check whether you can use PyTorch's GPU capabilities, use the following sample code:
```
import torch
torch.cuda.is_available()
```
The resulting output should be:
```
True
```
Support for AMD GPUs in PyTorch is still under development, so complete test coverage is not yet provided, as reported [here][14]; that report suggests this [resource][15] in case you have an AMD GPU.
Let's now look into some research projects that use PyTorch extensively:
### Ongoing Research Projects based on PyTorch
* [Detectron][16]: Facebook AI Research's software system to intelligently detect and classify objects. It is based on Caffe2. Earlier this year, Caffe2 and PyTorch [joined forces][17] to create the research- and production-ready PyTorch 1.0 we are talking about.
* [Unsupervised Sentiment Discovery][18]: Such methods are extensively used with social media algorithms.
* [vid2vid][19]: Photorealistic video-to-video translation
* [DeepRecommender][20] (We covered how such systems work on our past [Netflix AI article][21])
Nvidia, the leading GPU manufacturer, covered this in more detail in its own [update][22] on this recent development, where you can also read about ongoing collaborative research endeavours.
### How should we react to such PyTorch capabilities?
To think that Facebook applies such amazingly innovative projects and more in its social media algorithms: should we appreciate all this or be alarmed? This is almost [Skynet][23]! This newly improved, production-ready pre-release of PyTorch will certainly push things further ahead! Feel free to share your thoughts with us in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/pytorch-open-source-ai-framework/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[1]: https://pytorch.org/
[2]: https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units
[3]: https://en.wikipedia.org/wiki/Tensor
[4]: https://www.techopedia.com/definition/32902/deep-neural-network
[5]: https://code.fb.com/ai-research/facebook-accelerates-ai-development-with-new-partners-and-production-capabilities-for-pytorch-1-0
[6]: https://pytorch.fbreg.com/
[7]: https://www.themidwaysf.com/
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/pytorch.jpeg
[9]: https://github.com/pytorch/pytorch/releases/tag/v1.0rc0
[10]: https://conda.io/
[11]: https://pytorch.org/get-started/locally/
[12]: https://www.pugetsystems.com/labs/hpc/How-to-install-CUDA-9-2-on-Ubuntu-18-04-1184/
[13]: https://itsfoss.com/install-pip-ubuntu/
[14]: https://github.com/pytorch/pytorch/issues/10657#issuecomment-415067478
[15]: https://rocm.github.io/install.html#installing-from-amd-rocm-repositories
[16]: https://github.com/facebookresearch/Detectron
[17]: https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html
[18]: https://github.com/NVIDIA/sentiment-discovery
[19]: https://github.com/NVIDIA/vid2vid
[20]: https://github.com/NVIDIA/DeepRecommender/
[21]: https://itsfoss.com/netflix-open-source-ai/
[22]: https://news.developer.nvidia.com/pytorch-1-0-accelerated-on-nvidia-gpus/
[23]: https://en.wikipedia.org/wiki/Skynet_(Terminator)

View File

@ -0,0 +1,133 @@
Dbxfs – Mount Dropbox Folder Locally As Virtual File System In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dbxfs-720x340.png)
A while ago, we summarized all the possible ways to **[mount Google Drive locally][1]** as a virtual file system and access the files stored in it from your Linux operating system. Today, we are going to learn to mount a Dropbox folder in your local file system using the **dbxfs** utility. dbxfs is used to mount your Dropbox folder locally as a virtual filesystem on Unix-like operating systems. While it is easy to [**install the Dropbox client**][2] on Linux, this approach differs slightly from the official method. It is a command-line Dropbox client that requires no disk space for access. The dbxfs application is free, open source, and written for Python 3.5+.
### Installing dbxfs
dbxfs officially supports Linux and macOS. However, it should work on any POSIX system that provides a **FUSE-compatible library** or can mount **SMB** shares. Since it is written for Python 3.5+, it can be installed using the **pip3** package manager. Refer to the following guide if you haven't installed pip yet.
And install the FUSE library as well.
On Debian-based systems, run the following command to install FUSE:
```
$ sudo apt install libfuse2
```
On Fedora:
```
$ sudo dnf install fuse
```
Once you have installed all required dependencies, run the following command to install the dbxfs utility:
```
$ pip3 install dbxfs
```
### Mount Dropbox folder locally
Create a mount point to mount your Dropbox folder in your local file system.
```
$ mkdir ~/mydropbox
```
Then, mount the Dropbox folder locally using the dbxfs utility as shown below:
```
$ dbxfs ~/mydropbox
```
You will be asked to generate an access token:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-1.png)
To generate an access token, just navigate to the URL given in the above output in your web browser and click **Allow** to authenticate Dropbox access. You need to log in to your Dropbox account to complete the authorization process.
A new authorization code will be generated on the next screen. Copy the code, head back to your terminal, and paste it at the dbxfs prompt to finish the process.
You will then be asked whether to save the credentials for future access. Type **Y** to save or **N** to decline. Then you need to enter a passphrase twice for the new access token.
Finally, type **Y** to accept **“/home/username/mydropbox”** as the default mount point. If you want to set a different path, type **N** and enter the location of your choice.
[![Generate access token 2][3]][4]
All done! From now on, you can see your Dropbox folder is locally mounted in your filesystem.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dropbox-in-file-manager.png)
### Change Access Token Storage Path
By default, the dbxfs application will store your Dropbox access token in the system keyring or an encrypted file. However, you might want to store it in a **gpg** encrypted file or something else. If so, get an access token by creating a personal app on the [Dropbox developers app console][5].
![](https://www.ostechnix.com/wp-content/uploads/2018/10/access-token.png)
Once the app is created, click the **Generate** button on the next screen. This access token can be used to access your Dropbox account via the API. Don't share your access token with anyone.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-a-new-app.png)
Once you have created an access token, encrypt it using any encryption tool of your choice, such as [**Cryptomator**][6], [**Cryptkeeper**][7], [**CryptGo**][8], [**Cryptr**][9], [**Tomb**][10], [**Toplip**][11] or [**GnuPG**][12], and store it in your preferred location.
Next, edit the dbxfs configuration file and add the following line to it:
```
"access_token_command": ["gpg", "--decrypt", "/path/to/access/token/file.gpg"]
```
You can find the dbxfs configuration file by running the following command:
```
$ dbxfs --print-default-config-file
```
For more details, refer to the dbxfs help section:
```
$ dbxfs -h
```
As you can see, mounting a Dropbox folder locally in your file system using the dbxfs utility is no big deal. As far as I have tested, dbxfs works fine as expected. Give it a try if you're interested in seeing how it works, and let us know about your experience in the comment section below.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/dbxfs-mount-dropbox-folder-locally-as-virtual-file-system-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/
[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-2.png
[5]: https://dropbox.com/developers/apps
[6]: https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/
[7]: https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/
[8]: https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/
[9]: https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/
[10]: https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/
[11]: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/
[12]: https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/

View File

@ -0,0 +1,107 @@
How to use Kolibri to access educational material offline
======
Kolibri makes digital educational materials available to students without internet access.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_BYU_520x292_FINAL.png?itok=NVY7vR8o)
While the internet has thoroughly transformed the availability of educational content for much of the world, many people still live in places where online access is poor or even nonexistent. [Kolibri][1] is a great solution for these communities. It's an app that creates an offline server to deliver high-quality educational resources to learners. You can set up Kolibri on a wide range of [hardware][2], including low-cost Windows, MacOS, and Linux (including Raspberry Pi) computers. A version for Android tablets is in the works.
Because it's open source, free to use, works without broadband access (after initial setup), and includes a wide range of educational content, it gives students in rural schools, refugee camps, orphanages, informal schools, prisons, and other places without reliable internet service access to many of the same resources used by students all over the world.
In addition to being simple to install, it's easy to customize Kolibri for various educational missions and needs, including literacy building, general reference materials, and life skills training. Kolibri includes content from sources including [OpenStax,][3] [CK-12][4], [Khan Academy][5], and [EngageNY][6]; once these packages are "seeded" by connecting the Kolibri serving device to a robust internet connection, they are immediately available for offline access on client devices through a compatible browser.
### Installation and setup
I installed Kolibri on an Intel i3-based laptop running Fedora 28. I chose the **pip install** method, which is very easy. Here's how to do it.
Open a terminal and enter:
```
$ sudo pip install kolibri
```
Start Kolibri by entering **kolibri start** in the terminal.
Find your Kolibri installation's URL in the terminal.
![](https://opensource.com/sites/default/files/uploads/kolibri_url.png)
Open your browser and point it to that URL, being sure to append port **8080**.
Select the default language—options include English, Spanish, French, Arabic, Portuguese, Hindi, Farsi, Burmese, and Bengali. (I chose English.)
Name your facility, i.e., your classroom, library, or home. (I named mine Test.)
![](https://opensource.com/sites/default/files/uploads/kolibri_name.png)
Tell Kolibri what type of facility you're setting up—self-managed, admin-managed, or informal. (I chose self-managed.)
![](https://opensource.com/sites/default/files/uploads/kolibri_facility-type.png)
Create an admin account.
![](https://opensource.com/sites/default/files/uploads/kolibri_admin.png)
### Add content
You can add Kolibri-curated content channels while you are connected to broadband service. Explore and add content from the menu at the top-left of the browser.
![](https://opensource.com/sites/default/files/uploads/kolibri_menu.png)
Choose Device and Import.
![](https://opensource.com/sites/default/files/uploads/kolibri_import.png)
Selecting English as the default language provides access to 29 content channels including Touchable Earth, Global Digital Library, Khan Academy, OpenStax, CK-12, EngageNY, Blockly games, and more.
Select a channel you're interested in. You have the option to download the entire channel (which might take a long time) or to select the specific content you want to download.
![](https://opensource.com/sites/default/files/uploads/kolibri_select-content.png)
To access your content, return to the top-left menu and select Learn.
![](https://opensource.com/sites/default/files/uploads/kolibri_content.png)
### Add users
User accounts can be set up as learners, coaches, or admins. Users can access the Kolibri server from most web browsers on any Linux, MacOS, Windows, Android, or iOS device on the same network, even if the network isn't connected to the internet. Admins can set up classes on the device, assign coaches and learners to classes, and see every user's interaction and how much time they spend with the content.
If your Kolibri server is set up as self-managed, users can create their own accounts by entering the Kolibri URL in their browser and following the prompts. For information on setting up users on an admin-managed server, check out Kolibri's [documentation][7].
![](https://opensource.com/sites/default/files/uploads/kolibri_user-account.png)
After logging in, the user can access content right away to begin learning.
### Learn more
Kolibri is a very powerful learning resource, especially for people who don't have a robust connection to the internet. Its [documentation][8] is very complete, and a [demo][9] site maintained by the project allows you to try it out.
Kolibri is open source under the [MIT License][10]. The project, which is managed by the nonprofit organization Learning Equality, is looking for developers—if you would like to get involved, be sure to check them out on [GitHub][11]. To learn more, follow Learning Equality and Kolibri on its [blog][12], [Twitter][13], and [Facebook][14] pages.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/getting-started-kolibri
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[1]: https://learningequality.org/kolibri/
[2]: https://drive.google.com/file/d/0B9ZzDms8cSNgVWRKdUlPc2lkTkk/view
[3]: https://openstax.org/
[4]: https://www.ck12.org/
[5]: https://www.khanacademy.org/
[6]: https://www.engageny.org/
[7]: https://kolibri.readthedocs.io/en/latest/manage.html#create-a-new-user-account
[8]: https://learningequality.org/documentation/
[9]: http://kolibridemo.learningequality.org/learn/#/topics
[10]: https://github.com/learningequality/kolibri/blob/develop/LICENSE
[11]: https://github.com/learningequality/
[12]: https://blog.learningequality.org/
[13]: https://twitter.com/LearnEQ/
[14]: https://www.facebook.com/learningequality

View File

@ -0,0 +1,188 @@
Open Source Logging Tools for Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs-main.jpg?itok=voNrSz4H)
If you're a Linux systems administrator, one of the first places you will turn for troubleshooting is the log files. These files hold crucial information that can go a long way toward helping you solve problems affecting your desktops and servers. For many sysadmins (especially those of an old-school sort), nothing beats the command line for checking log files. But for those who'd rather have a more efficient (and possibly modern) approach to troubleshooting, there are plenty of options.
In this article, I'll highlight a few such tools available for the Linux platform. I won't be getting into logging tools that might be specific to a certain service (such as Kubernetes or Apache), and instead will focus on tools that work to mine the depths of all that magical information written into /var/log.
Speaking of which…
### What is /var/log?
If you're new to Linux, you might not know what the /var/log directory contains. However, the name is very telling. Within this directory is housed all of the log files from the system and any major service (such as Apache, MySQL, MariaDB, etc.) installed on the operating system. Open a terminal window and issue the command cd /var/log. Follow that with the command ls and you'll see all of the various systems that have log files you can view (Figure 1).
![/var/log/][2]
Figure 1: Our ls command reveals the logs available in /var/log/.
[Used with permission][3]
Say, for instance, you want to view the syslog log file. Issue the command less syslog and you can scroll through all of the gory details of that particular log. But what if the standard terminal isn't for you? What options do you have? Plenty. Let's take a look at a few such options.
### Logs
If you use the GNOME desktop (or another desktop, as Logs can be installed on more than just GNOME), you have at your fingertips a log viewer that mainly just adds the slightest bit of GUI goodness over the log files to create something as simple as it is effective. Once installed (from the standard repositories), open Logs from the desktop menu, and you'll be treated to an interface (Figure 2) that allows you to select from various types of logs (Important, All, System, Security, and Hardware), select a boot period (from the top center drop-down), and even search through all of the available logs.
![Logs tool][5]
Figure 2: The GNOME Logs tool is one of the easiest GUI log viewers youll find for Linux.
[Used with permission][3]
Logs is a great tool, especially if you're not looking for too many bells and whistles getting in the way of viewing crucial log entries, so you can troubleshoot your systems.
### KSystemLog
KSystemLog is to KDE what Logs is to GNOME, but with a few more features added to the mix. Although both make it incredibly simple to view your system log files, only KSystemLog includes colorized log lines, tabbed viewing, the ability to copy log lines to the desktop clipboard, built-in capability for sending log messages directly to the system, detailed information for each log line, and more. KSystemLog views all the same logs found in GNOME Logs, only with a different layout.
From the main window (Figure 3), you can view any of the different logs (System Log, Authentication Log, X.org Log, or Journald Log), search the logs, filter by Date, Host, Process, or Message, and select log priorities.
![KSystemLog][7]
Figure 3: The KSystemLog main window.
[Used with permission][3]
If you click on the Window menu, you can open a new tab, where you can select a different log/filter combination to view. From that same menu, you can even duplicate the current tab. If you want to manually add a log to a file, do the following:
1. Open KSystemLog.
2. Click File > Add Log Entry.
3. Create your log entry (Figure 4).
4. Click OK
![log entry][9]
Figure 4: Creating a manual log entry with KSystemLog.
[Used with permission][3]
KSystemLog makes viewing logs in KDE an incredibly easy task.
### Logwatch
Logwatch isn't a fancy GUI tool. Instead, logwatch allows you to set up a logging system that will email you important alerts. You can have those alerts emailed via an SMTP server or you can simply view them on the local machine. Logwatch can be found in the standard repositories for almost every distribution, so installation can be done with a single command, like so:
```
sudo apt-get install logwatch
```
Or:
```
sudo dnf install logwatch
```
During the installation, you will be asked to select the delivery method for alerts (Figure 5). If you opt for local mail delivery only, you'll need to install the mailutils app (so you can view mail locally, via the mail command).
![ Logwatch][11]
Figure 5: Configuring Logwatch alert sending method.
[Used with permission][3]
All Logwatch configurations are handled in a single file. To edit that file, issue the command sudo nano /usr/share/logwatch/default.conf/logwatch.conf. You'll want to edit the MailTo = option. If you're viewing this locally, set that to the Linux username you want the logs sent to (such as MailTo = jack). If you are sending these logs to an external email address, you'll also need to change the MailFrom = option to a legitimate email address. From within that same configuration file, you can also set the detail level and the range of logs to send. Save and close that file.
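For local viewing by a user named jack, for instance, the relevant lines might end up looking something like this (the values are illustrative):
```
MailTo = jack
MailFrom = Logwatch
Detail = Med
Range = yesterday
Service = All
```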
Once configured, you can send your first mail with a command like:
```
logwatch --detail Med --mailto ADDRESS --service all --range today
```
Where ADDRESS is either the local user or an email address.
For more information on using Logwatch, issue the command man logwatch. Read through the manual page to see the different options that can be used with the tool.
### Rsyslog
Rsyslog is a convenient way to send remote client logs to a centralized server. Say you have one Linux server you want to use to collect the logs from other Linux servers in your data center. With Rsyslog, this is easily done. Rsyslog has to be installed on all clients and the centralized server (by issuing a command like sudo apt-get install rsyslog). Once installed, create the /etc/rsyslog.d/server.conf file on the centralized server, with the contents:
```
# Provide UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provide TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
# Use custom filenaming scheme
$template FILENAME,"/var/log/remote/%HOSTNAME%.log"
*.* ?FILENAME
$PreserveFQDN on
```
Save and close that file. Now, on every client machine, create the file /etc/rsyslog.d/client.conf with the contents:
```
$PreserveFQDN on
$ActionQueueType LinkedList
$ActionQueueFileName srvrfwd
$ActionResumeRetryCount -1
$ActionQueueSaveOnShutdown on
*.* @@SERVER_IP:514
```
Where SERVER_IP is the IP address of your centralized server. Save and close that file. Restart rsyslog on all machines with the command:
```
sudo systemctl restart rsyslog
```
You can now view the centralized log files with the command (run on the centralized server):
```
tail -f /var/log/remote/*.log
```
The tail command allows you to view those files as they are written to, in real time. You should see log entries appear that include the client hostname (Figure 6).
![Rsyslog][13]
Figure 6: Rsyslog showing entries for a connected client.
[Used with permission][3]
Rsyslog is a great tool for creating a single point of entry for viewing the logs of all of your Linux servers.
### More where that came from
This article only scratched the surface of the logging tools to be found on the Linux platform. And each of the above tools is capable of more than what is outlined here. However, this overview should give you a place to start your long day's journey into the Linux log file.
Learn more about Linux through the free ["Introduction to Linux" ][14]course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/10/open-source-logging-tools-linux
作者:[JACK WALLEN][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[1]: /files/images/logs1jpg
[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_1.jpg?itok=8yO2q1rW (/var/log/)
[3]: /licenses/category/used-permission
[4]: /files/images/logs2jpg
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_2.jpg?itok=kF6V46ZB (Logs tool)
[6]: /files/images/logs3jpg
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_3.jpg?itok=PhrIzI1N (KSystemLog)
[8]: /files/images/logs4jpg
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_4.jpg?itok=OxsGJ-TJ (log entry)
[10]: /files/images/logs5jpg
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_5.jpg?itok=GeAR551e (Logwatch)
[12]: /files/images/logs6jpg
[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_6.jpg?itok=ira8UZOr (Rsyslog)
[14]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,171 @@
Terminalizer – A Tool To Record Your Terminal And Generate Animated Gif Images
======
This is a familiar topic for most of us, so I don't want to go into the background in much detail; we have also written many articles on this topic.
The script command is one of the standard ways to record Linux terminal sessions. Today we are going to discuss a similar tool called Terminalizer.
This tool helps us record a user's terminal activity and pick out other useful information from the output.
### What Is Terminalizer
Terminalizer allows users to record their terminal activity and generate animated gif images from the recordings. It's a highly customizable CLI tool; users can share a recording file through a link to an online player, or generate a web player for it.
**Suggested Read :**
**(#)** [Script A Simple Command To Record Your Terminal Session Activity][1]
**(#)** [Automatically Record/Capture All Users Terminal Sessions Activity In Linux][2]
**(#)** [Teleconsole A Tool To Share Your Terminal Session Instantly To Anyone In Seconds][3]
**(#)** [tmate Instantly Share Your Terminal Session To Anyone In Seconds][4]
**(#)** [Peek Create a Animated GIF Recorder in Linux][5]
**(#)** [Kgif A Simple Shell Script to Create a Gif File from Active Window][6]
**(#)** [Gifine Quickly Create An Animated GIF Video In Ubuntu/Debian][7]
There is no official distribution package for this utility, but we can easily install it by using Node.js.
### How To Install Node.js in Linux
Node.js can be installed in multiple ways. Here, we are going to teach you the standard method.
For Ubuntu/LinuxMint use [APT-GET Command][8] or [APT Command][9] to install Node.js
```
$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
$ sudo apt-get install -y nodejs
```
For Debian use [APT-GET Command][8] or [APT Command][9] to install Node.js
```
# curl -sL https://deb.nodesource.com/setup_8.x | bash -
# apt-get install -y nodejs
```
For **`RHEL/CentOS`**, use [YUM Command][10] to install Node.js.
```
$ sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
$ sudo yum install epel-release
$ sudo yum -y install nodejs
```
For **`Fedora`**, use [DNF Command][11] to install Node.js.
```
$ sudo dnf install nodejs
```
For **`Arch Linux`**, use [Pacman Command][12] to install Node.js.
```
$ sudo pacman -S nodejs npm
```
For **`openSUSE`**, use [Zypper Command][13] to install Node.js.
```
$ sudo zypper in nodejs6
```
### How to Install Terminalizer
Now that you have installed the prerequisite Node.js package, it's time to install Terminalizer on your system. Simply run the npm command below to install Terminalizer.
```
$ sudo npm install -g terminalizer
```
### How to Use Terminalizer
To record your session activity using Terminalizer, just run the following command. Once the recording has started, play around in the terminal, and finally hit `CTRL+D` to exit and save the recording.
```
# terminalizer record 2g-session
defaultConfigPath
The recording session is started
Press CTRL+D to exit and save the recording
```
This will save your recording session as a YAML file; in this case, the file is saved as 2g-session.yml.
![][15]
Just type a few commands to verify the recording, and finally hit `CTRL+D` to exit the current capture. When you hit `CTRL+D` in the terminal, you will get the below output.
```
# logout
Successfully Recorded
The recording data is saved into the file:
/home/daygeek/2g-session.yml
You can edit the file and even change the configurations.
```
![][16]
### How to Play the Recorded File
Use the below command format to play your recorded YAML file. Make sure to use your own recording file name instead of ours.
```
# terminalizer play 2g-session
```
Render a recording file as an animated gif image.
```
# terminalizer render 2g-session
```
`Note:` The two commands below are not implemented yet in the current version and will be available in the next version.
If you would like to share your recording with others, upload the recording file to get a link for an online player, and share that link.
```
terminalizer share 2g-session
```
Generate a web player for a recording file
```
# terminalizer generate 2g-session
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/prakash/
[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/
[2]: https://www.2daygeek.com/automatically-record-all-users-terminal-sessions-activity-linux-script-command/
[3]: https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/
[4]: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/
[5]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/
[6]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/
[7]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/
[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[14]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[15]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-record-2g-session-1.gif
[16]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-play-2g-session.gif

View File

@ -0,0 +1,110 @@
KeeWeb – An Open Source, Cross Platform Password Manager
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/keeweb-720x340.png)
If you've been using the internet for any amount of time, chances are you have a lot of accounts on a lot of websites. All of those accounts must have passwords, and you have to remember all those passwords. Either that, or write them down somewhere. Writing down passwords on paper may not be secure, and remembering them won't be practically possible if you have more than a few passwords. This is why password managers have exploded in popularity in the last few years. A password manager is like a central repository where you store all your passwords for all your accounts, and you lock it with a master password. With this approach, the only thing you need to remember is the master password.
**KeePass** is one such open source password manager. KeePass has an official client, but it's pretty barebones. But there are a lot of other apps, both for your computer and for your phone, that are compatible with the KeePass file format for storing encrypted passwords. One such app is **KeeWeb**.
KeeWeb is an open source, cross platform password manager with features like cloud sync, keyboard shortcuts and plugin support. KeeWeb uses Electron, which means it runs on Windows, Linux, and Mac OS.
### Using KeeWeb Password Manager
When it comes to using KeeWeb, you actually have two options. You can either use the KeeWeb webapp without having to install it on your system and use it on the fly, or simply install the KeeWeb client on your local system.
**Using the KeeWeb webapp**
If you don't want to bother installing a desktop app, you can just go to [**https://app.keeweb.info/**][1] and use it as a password manager.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-webapp.png)
It has all the features of the desktop app. Obviously, this requires you to be online when using the app.
**Installing KeeWeb on your Desktop**
If you like the comfort and offline availability of using a desktop app, you can also install it on your desktop.
If you use Ubuntu/Debian, you can just go to the [**releases page**][2] and download the latest KeeWeb **.deb** file, which you can install via this command:
```
$ sudo dpkg -i KeeWeb-1.6.3.linux.x64.deb
```
If you're on Arch, it is available in the [**AUR**][3], so you can install it using any helper program like [**Yay**][4]:
```
$ yay -S keeweb
```
Once installed, launch it from the menu or application launcher. This is what the default KeeWeb interface looks like:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-desktop-client.png)
### General Layout
KeeWeb basically shows a list of all your passwords, along with all your tags to the left. Clicking on a tag will filter the list to only passwords of that tag. To the right, all the fields for the selected account are shown. You can set username, password, website, or just add a custom note. You can even create your own fields and mark them as secure fields, which is great when storing things like credit card information. You can copy passwords by just clicking on them. KeeWeb also shows the date when an account was created and modified. Deleted passwords are kept in the trash, where they can be restored or permanently deleted.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-general-layout.png)
### KeeWeb Features
**Cloud Sync**
One of the main features of KeeWeb is the support for a wide variety of remote locations and cloud services.
Other than loading local files, you can open files from:
1. WebDAV Servers
2. Google Drive
3. Dropbox
4. OneDrive
This means that if you use multiple computers, you can synchronize the password files between them, so you don't have to worry about not having all the passwords available on all devices.
**Password Generator**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-password-generator.png)
Along with encrypting your passwords, it's also important to create new, strong passwords for every single account. This means that if one of your accounts gets hacked, the attacker won't be able to get into your other accounts using the same password.
To achieve this, KeeWeb has a built-in password generator that lets you generate a custom password of a specific length, including specific types of characters.
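KeeWeb implements this in its own interface, but the underlying idea is simple. Here is a minimal sketch of the same idea (not KeeWeb's actual code) using Python's standard `secrets` module:
```
import secrets
import string

def generate_password(length=16,
                      alphabet=string.ascii_letters + string.digits + "!@#$%"):
    # secrets draws from a cryptographically secure RNG, unlike random
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```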
**Plugins**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-plugins.png)
You can extend KeeWeb functionality with plugins. Some of these plugins are translations for other languages, while others add new functionality, like checking **<https://haveibeenpwned.com>** for exposed passwords.
**Local Backups**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-backup.png)
Regardless of where your password file is stored, you should probably keep local backups of the file on your computer. Luckily, KeeWeb has this feature built-in. You can backup to a specific path, and set it to backup periodically, or just whenever the file is changed.
### Verdict
I have actually been using KeeWeb for several years now. It completely changed the way I store my passwords. The cloud sync is basically the feature that makes it a done deal for me. I don't have to worry about keeping multiple unsynchronized files on multiple devices. If you want a great looking password manager that has cloud sync, KeeWeb is something you should look at.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://app.keeweb.info/
[2]: https://github.com/keeweb/keeweb/releases/latest
[3]: https://aur.archlinux.org/packages/keeweb/
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/

View File

@ -0,0 +1,103 @@
Play Windows games on Fedora with Steam Play and Proton
======
![](https://fedoramagazine.org/wp-content/uploads/2018/09/steam-proton-816x345.jpg)
Some weeks ago, Steam [announced][1] a new addition to Steam Play with Linux support for Windows games using Proton, a fork of WINE. This capability is still in beta, and not all games work. Here are some more details about Steam Play and Proton.
According to the Steam website, there are new features in the beta release:
* Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
* DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact.
* Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
* Improved game controller support. Games automatically recognize all controllers supported by Steam. Expect more out-of-the-box controller compatibility than even the original version of the game.
* Performance for multi-threaded games has been greatly improved compared to vanilla WINE.
### Installation
If you're interested in trying out Steam with Proton, just follow these easy steps. (Note that you can ignore the first steps for enabling the Steam Beta if you have the [latest updated version of Steam installed][2]. In that case you no longer need Steam Beta to use Proton.)
Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton.
![][3]
Now click on the Steam option at the top of the client. This displays a drop-down menu. Then select Settings.
![][4]
Now the Settings window pops up. Select the Account option and, next to Beta participation, click on Change.
![][5]
Now change None to Steam Beta Update.
![][6]
Click on OK and a prompt asks you to restart.
![][7]
Let Steam download the update. This can take a while depending on your internet speed and computer resources.
![][8]
After restarting, go back to the Settings window. This time you'll see a new option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton.
![][9]
The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended.
![][10]
### Installing a Windows game using Steam Play
Now that you have Proton enabled, install a game. Select the title you want and youll find the process is similar to installing a normal game on Steam, as shown in these screenshots.
![][11]
![][12]
![][13]
![][14]
After the game is done downloading and installing, you can play it.
![][15]
![][16]
Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta and Fedora is not responsible for results. If you'd like to read further, the community has created a [Google doc][17] with a list of games that have been tested.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/play-windows-games-steam-play-proton/
作者:[Francisco J. Vergara Torres][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/patxi/
[1]: https://steamcommunity.com/games/221410/announcements/detail/1696055855739350561
[2]: https://fedoramagazine.org/third-party-repositories-fedora/
[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-300x197.png
[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-300x169.png
[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-300x196.png
[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4-300x272.png
[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6-300x237.png
[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7-300x126.png
[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10-300x237.png
[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-300x196.png
[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-300x196.png
[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-300x195.png
[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-300x196.png
[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-300x195.png
[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-300x169.png
[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-300x169.png
[17]: https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/edit#gid=1003113831

View File

@ -0,0 +1,101 @@
Python at the pump: A script for filling your gas tank
======
Here's how I used Python to discover a strategy for cost-effective fill-ups.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB)
I recently began driving a car that had traditionally used premium gas (93 octane). According to the maker, though, it requires only 91 octane. The thing is, in the US, you can buy only 87, 89, or 93 octane. Where I live, gas prices jump 30 cents per gallon from one grade to the next, so premium costs 60 cents more than regular. So why not try to save some money?
It's easy enough to wait until the gas gauge shows that the tank is half full and then fill it with 89 octane, and there you have 91 octane. But it gets tricky to know what to do next—half a tank of 91 octane plus half a tank of 93 ends up being 92, and where do you go from there? You can make continuing calculations, but they get increasingly messy. This is where Python came into the picture.
I wanted to come up with a simple scheme in which I could fill the tank at some level with 93 octane, then at the same or some other level with 89 octane, with the primary goal to never get below 91 octane with the final mixture. What I needed to do was create some recurring calculation that uses the previous octane value for the preceding fill-up. I suppose there would be some polynomial equation that would solve this, but in Python, this sounds like a loop.
```
#!/usr/bin/env python
# octane.py
o = 93.0
newgas = 93.0   # this represents the octane of the last fillup
i = 1
while i < 21:                   # 20 iterations (trips to the pump)
    if newgas == 89.0:          # if the last fillup was with 89 octane
                                # switch to 93
        newgas = 93.0
        o = newgas/2 + o/2      # fill when gauge is 1/2 full
    else:                       # if it wasn't 89 octane, switch to that
        newgas = 89.0
        o = newgas/2 + o/2      # fill when gauge says 1/2 full
    print(str(i) + ': ' + str(o))
    i += 1
```
As you can see, I am initializing the variable o (the current octane mixture in the tank) and the variable newgas (what I last filled the tank with) at the same value of 93. The loop then repeats 20 times, for 20 fill-ups, alternating between 89 octane and 93 octane on every other trip to the station.
```
1: 91.0
2: 92.0
3: 90.5
4: 91.75
5: 90.375
6: 91.6875
7: 90.34375
8: 91.671875
9: 90.3359375
10: 91.66796875
11: 90.333984375
12: 91.6669921875
13: 90.3334960938
14: 91.6667480469
15: 90.3333740234
16: 91.6666870117
17: 90.3333435059
18: 91.6666717529
19: 90.3333358765
20: 91.6666679382
```
This shows that I probably need only 10 or 15 loops to see stabilization. It also shows that, soon enough, I undershoot my 91 octane target. It's also interesting to see this stabilization of the alternating mixture values, and it turns out this happens with any scheme where you choose the same amounts each time. In fact, it is true even if the amount of the fill-up is different for 89 and 93 octane.
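The stabilization values can also be computed directly, without running the loop. If each odd fill-up replaces a fraction a of the tank with 93 octane and each even fill-up replaces a fraction b with 89 octane, the alternating mixture converges to the fixed point of two coupled linear equations. Here is a minimal sketch of that calculation (my own addition; the variable names and the closed-form rearrangement are not part of the original script):
```
#!/usr/bin/env python
# fixedpoint.py -- steady-state octane of an alternating fill-up scheme
a = 0.5   # fraction of the tank replaced with 93 octane on odd fill-ups
b = 0.5   # fraction of the tank replaced with 89 octane on even fill-ups

# Steady state of the alternation:
#   o_odd  = a*93 + (1 - a)*o_even
#   o_even = b*89 + (1 - b)*o_odd
# Substituting the second equation into the first and solving for o_odd:
o_odd = (a*93 + (1 - a)*b*89) / (1 - (1 - a)*(1 - b))
o_even = b*89 + (1 - b)*o_odd

print('after a 93 fill-up: ' + str(o_odd))    # 91.666...
print('after an 89 fill-up: ' + str(o_even))  # 90.333...
```
Plugging in a = 3/4 and b = 5/12 reproduces the 92.512/91.049 pair of the fractional scheme described below, which is a nice sanity check on the loop.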
So at this point, I began playing with fractions, reasoning that I would probably need a bigger 93 octane fill-up than the 89 fill-up. I also didn't want to make frequent trips to the gas station. What I ended up with (which seemed pretty good to me) was to wait until the tank was about 7/12 full and fill it with 89 octane, then wait until it was 1/4 full and fill it with 93 octane.
Here is what the changes in the loop look like:
```
    if newgas == 89.0:          # if the last fillup was with 89 octane
                                # switch to 93
        newgas = 93.0
        o = 3*newgas/4 + o/4    # fill when gauge is 1/4 full
    else:                       # if it wasn't 89 octane, switch to that
        newgas = 89.0
        o = 5*newgas/12 + 7*o/12  # fill when gauge is 7/12 full
```
Here are the numbers, starting with the tenth fill-up:
```
10: 92.5122272978
11: 91.0487992571
12: 92.5121998143
13: 91.048783225
14: 92.5121958062
15: 91.048780887
```
As you can see, this keeps the final octane very slightly above 91 all the time. Of course, my gas gauge isn't marked in twelfths, but 7/12 is slightly less than 5/8, and I can handle that.
An alternative simple solution might have been to run the tank to empty and fill it with 93 octane, then only half-fill it with 89 the next time; perhaps this will be my default plan. Personally, I'm not a fan of running the tank all the way down, since this isn't always convenient. On the other hand, it could easily work on a long trip. And sometimes I buy gas because of a sudden drop in prices. So in the end, this scheme is one of a series of options that I can consider.
The most important thing for Python users: Don't code while driving!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/python-gas-pump
作者:[Greg Pittman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greg-p

View File

@ -0,0 +1,128 @@
Taking notes with Laverna, a web-based information organizer
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/notebook-writing-pen.jpg?itok=uA3dCfu_)
I don't know anyone who doesn't take notes. Most of the people I know use an online note-taking application like Evernote, Simplenote, or Google Keep.
All of those are good tools, but they're proprietary. And you have to wonder about the privacy of your information, especially in light of [Evernote's great privacy flip-flop of 2016][1]. If you want more control over your notes and your data, you need to turn to an open source tool, preferably one that you can host yourself.
And there are a number of good [open source alternatives to Evernote][2]. One of these is Laverna. Lets take a look at it.
### Getting Laverna
You can [host Laverna yourself][3] or use the [web version][4].
Since I have nowhere to host the application, I'll focus here on using the web version of Laverna. Aside from the installation and setting up storage (more on that below), I'm told that the experience with a self-hosted version of Laverna is the same.
### Setting up Laverna
To start using Laverna right away, click the **Start using now** button on the front page of [Laverna.cc][5].
On the welcome screen, click **Next**. You'll be asked to enter an encryption password to secure your notes and get to them when you need to. You'll also be asked to choose a way to synchronize your notes. I'll discuss synchronization in a moment, so just enter a password and click **Next**.
![](https://opensource.com/sites/default/files/uploads/laverna-set-password.png)
When you log in, you'll see a blank canvas:
![](https://opensource.com/sites/default/files/uploads/laverna-main-window.png)
### Storing your notes
Before diving into how to use Laverna, let's walk through how to store your notes.
Out of the box, Laverna stores your notes in your browser's cache. The problem with that is that when you clear the cache, you lose your notes. You can also store your notes using:
* Dropbox, a popular and proprietary web-based file syncing and storing service
* [remoteStorage][6], which offers a way for web applications to store information in the cloud.
Using Dropbox is convenient, but it's proprietary. There are also concerns about [privacy and surveillance][7]. Laverna encrypts your notes before saving them, but not all encryption is foolproof. Even if you don't have anything illegal or sensitive in your notes, they're no one's business but your own.
remoteStorage, on the other hand, is kind of techie to set up. There are a few hosted storage services out there; I use [5apps][8].
To change how Laverna stores your notes, click the hamburger menu in the top-left corner. Click **Settings** and then **Sync**.
![](https://opensource.com/sites/default/files/uploads/laverna-sync.png)
Select the service you want to use, then click **Save**. After that, click the left arrow in the top-left corner. Youll be asked to authorize Laverna with the service you chose.
### Using Laverna
With that out of the way, let's get down to using Laverna. Create a new note by clicking the **New Note** icon, which opens the note editor:
![](https://opensource.com/sites/default/files/uploads/laverna-new-note.png)
Type a title for your note, then start typing the note in the left pane of the editor. The right pane displays a preview of your note:
![](https://opensource.com/sites/default/files/uploads/laverna-writing-note.png)
You can format your notes using Markdown; add formatting using your keyboard or the toolbar at the top of the window.
You can also embed an image or file from your computer into a note, or link to one on the web. When you embed an image, its stored with your note.
When you're done, click **Save**.
### Organizing your notes
Like some other note-taking tools, Laverna lists the last note that you created or edited at the top. If you have a lot of notes, it can take a bit of work to find the one you're looking for.
To better organize your notes, you can group them into notebooks, where you can quickly filter them based on a topic or a grouping.
When you're creating or editing a note, you can select a notebook from the **Select notebook** list in the top-left corner of the window. If you don't have any notebooks, select **Add a new notebook** from the list and type the notebook's name.
You can also make a notebook a child of another notebook. Let's say, for example, you maintain three blogs. You can create a notebook called **Blog Post Notes** and create a child notebook for each blog.
To filter your notes by notebook, click the hamburger menu, followed by the name of a notebook. Only the notes in the notebook you choose will appear in the list.
![](https://opensource.com/sites/default/files/uploads/laverna-notebook.png)
### Using Laverna across devices
I use Laverna on my laptop and on an eight-inch tablet running [LineageOS][9]. Getting the two devices to use the same storage and display the same notes takes a little work.
First, you'll need to export your settings. Log into wherever you're using Laverna and click the hamburger menu. Click **Settings**, then **Import & Export**. Under **Settings**, click **Export settings**. Laverna saves a file named laverna-settings.json to your device.
Copy that file to the other device or devices on which you want to use Laverna. You can do that by emailing it to yourself or by syncing the file across devices using an application like [ownCloud][10] or [Nextcloud][11].
On the other device, click **Import** on the splash screen. Otherwise, click the hamburger menu and then **Settings > Import & Export**. Click **Import settings**. Find the JSON file with your settings, click **Open** and then **Save**.
Laverna will ask you to:
* Log back in using your password.
* Register with the storage service you're using.
Repeat this process for each device that you want to use. It's cumbersome, I know. I've done it. You should need to do it only once per device, though.
### Final thoughts
Once you set up Laverna, it's easy to use, and it has just the right features for what I need to do. I'm hoping that the developers can expand the storage and syncing options to include open source applications like Nextcloud and ownCloud.
While Laverna doesn't have all the bells and whistles of a note-taking application like Evernote, it does a great job of letting you take and organize your notes. The fact that Laverna is open source and supports Markdown are two more great reasons to use it.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/taking-notes-laverna
作者:[Scott Nesbitt][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[1]: https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/
[2]: https://opensource.com/life/16/8/open-source-alternatives-evernote
[3]: https://github.com/Laverna/laverna
[4]: https://laverna.cc/
[5]: http://laverna.cc/
[6]: https://remotestorage.io/
[7]: https://www.zdnet.com/article/dropbox-faces-questions-over-claims-of-improper-data-sharing/
[8]: https://5apps.com/storage/beta
[9]: https://lineageos.org/
[10]: https://owncloud.com/
[11]: https://nextcloud.com/

View File

@ -1,63 +0,0 @@
从过时的 Windows 机器迁移到 Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK-)
每天当我在 ONLYOFFICE 的市场部门工作的时候,我都能看到 Linux 用户在网上讨论我们的办公效率软件。
我们的产品在 Linux 用户中很受欢迎,这使得我对使用 Linux 作为日常工具的体验非常好奇。
我的老旧的 Windows XP 机器在性能上非常差,因此我决定了解 Linux 系统(特别是 Ubuntu )并且决定去尝试使用它。
我的两个同事加入了我的计划。
### 为何选择 Linux
我们必须做出改变,首先,我们的老系统在性能方面不够用:我们经历过频繁的崩溃,每当超过两个应用在运行机器就会负载过度,关闭机器时有一半的几率冻结等等。
这很容易让我们从工作中分心,意味着我们没有我们应有的工作效率了。
升级到 Windows 更新的版本也是一种选择,但这样可能会带来额外的开销,而且我们的软件本身也是要与 Microsoft 的办公软件竞争。
因此我们在这方面也存在意识形态的问题。
其次,就像我之前提过的, ONLYOFFICE 产品在 Linux 社区内非常受欢迎。
通过阅读 Linux 用户在使用我们的软件时的体验,我们也对加入他们很感兴趣。
在我们要求转换到 Linux 系统一周后,我们拿到了崭新的装好了 [Kubuntu][1] 的机器。
我们选择了 16.04 版本,因为这个版本支持 KDE Plasma 5.5 和包括 Dolphin 在内的很多 KDE 应用,同时也包括 LibreOffice 5.1 和 Firefox 45 。
### Linux 让人喜欢的地方
我相信 Linux 最大的优势是它的运行速度,比如,从按下机器的电源按钮到开始工作只需要几秒钟时间。
从一开始,一切看起来都超乎寻常地快:总体的响应速度,图形界面,甚至包括系统更新的速度。
另一个使我惊奇的事情是跟 Windows 相比, Linux 几乎能让你配置任何东西,包括整个桌面的外观。
在设置里面,我发现了如何修改各种栏目、按钮和字体的颜色和形状,也可以重新布置任意桌面组件的位置,组合桌面的小工具(甚至包括漫画和颜色选择器)
我相信我还仅仅只是了解了基本的选项,之后还需要探索这个系统更多著名的定制化选项。
Linux 发行版通常是一个非常安全的环境。
人们很少在 Linux 系统中使用防病毒的软件,因为很少有人会写病毒程序来攻击 Linux 系统。
因此你可以拥有很好的系统速度,并且节省了时间和金钱。
总之, Linux 已经改变了我们的日常生活,用一系列的新选项和功能大大震惊了我们。
仅仅通过短时间的使用,我们已经可以给它总结出以下特性:
* 操作很快很顺畅
* 高度可定制
* 对新手很友好
* 了解基本组件很有挑战性,但回报丰厚
* 安全可靠
* 对所有想改变工作场所的人来说都是一次绝佳的体验
你已经从 Windows 或 MacOS 系统换到 Kubuntu 或其他 Linux 变种了么?
或者你是否正在考虑做出改变?
请分享你想要采用 Linux 系统的原因,连同你对开源的印象一起写在评论中。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/move-to-linux-old-windows
作者:[Michael Korotaev][a]
译者:[bookug](https://github.com/bookug)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/michaelk
[1]:https://kubuntu.org/

View File

@ -0,0 +1,134 @@
# 2018 年最好的 Linux 发行版
![Linux distros 2018](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-distros-2018.jpg?itok=Z8sdx4Zu "Linux distros 2018")
Jack Wallen 分享他挑选的 2018 年最好的 Linux 发行版。
这是新的一年Linux 仍有无限可能。许多 Linux 发行版在 2017 年都带来了重大的改变,我相信在 2018 年,它将在服务器和桌面上带来更加稳定的系统,市场份额也会随之增长。
对于那些期待迁移到开源平台(或是想要换一个发行版)的人来说,在即将到来的一年里,什么是最好的选择?如果你去 [Distrowatch][14] 找一下,你可能会因为众多的发行版而感到头晕,其中一些的排名在上升,而另一些则恰恰相反。
因此,哪个 Linux 发行版将在 2018 年得到偏爱?我有我的看法。事实上,我现在就要和你们分享它。
与我 [去年的清单][15] 类似,我将按照类别来划分,使任务更加轻松。这些类别至少涵盖:系统管理员、轻量级发行版、桌面、物联网和服务器。
根据这些,让我们开始 2018 年最好的 Linux 发行版清单吧。
### 对系统管理员最好的发行版
[Debian][16] 不常出现在“最好的”清单中,但它应该出现。如果了解到 Ubuntu 是基于 Debian 构建的(其实有很多发行版都基于 Debian你就很容易理解为什么这个发行版应该出现在许多“最好”清单中。但为什么它对管理员最好呢我想这是由于两个非常重要的原因
* 容易使用
* 非常稳定
因为 Debian 使用 dpkg 和 apt 包管理器,它的使用环境非常简单。而且因为 Debian 提供了最稳定的 Linux 平台之一,它为许多工作提供了理想的环境:桌面、服务器、测试、开发。虽然 Debian 可能不像去年的获奖者那样预装大量应用程序,但添加完成任务所需的任何必要应用程序都非常容易。而且因为 Debian 可以根据你的选择安装桌面Cinnamon、GNOME、KDE、LXDE、Mate 或 Xfce你一定可以找到满足你需要的桌面。
![debian](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/debian.jpg?itok=XkHHG692 "debian")
图1在 Debian 9.3 上运行的 GNOME 桌面。[使用][1]
同时Debian 在 Distrowatch 上名列第二。下载、安装然后让它为你的工作服务吧。Debian 尽管不那么华丽,但是对于管理员的工作来说十分有用。
### 最轻量级的发行版
轻量级的发行版对一些老旧或性能低下的机器有很好的支持。但这并不意味着这些发行版只为老旧的硬件而生。如果你追求的是运行速度,不妨看看这类发行版在现代机器上能跑得多快。
在 2018 年上榜的最轻量级的发行版是 [Lubuntu][18]。尽管在这个类别里还有很多选择,而且尽管 Lubuntu 的大小与 Puppy Linux 相接近,但得益于它是 Ubuntu 家庭的一员这弥补了它在易用性上的一些不足。但是不要担心Lubuntu 对于硬件的要求并不高:
+ CPU奔腾 4、奔腾 M 或 AMD K8 以上
+ 内存:对于本地应用512MB 就足够了对于网络应用YouTube、Google+、Google Drive、Facebook建议 1GB 以上。
Lubuntu 使用的是 LXDE 桌面,这意味着用户在初次使用这个 Linux 发行版时不会有任何障碍。它预装的精简应用(例如 Abiword、Gnumeric 和 Firefox都非常轻量且对用户友好。
![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/lubuntu_2.jpg?itok=BkTnh7hU "Lubuntu")
图2LXDE桌面。[使用][2]
Lubuntu 能让十年以上的电脑如获新生。
### 最好的桌面发行版
[Elementary OS][19] 连续两年都是我清单中最好的桌面发行版。对于许多人,[Linux Mint][20] 才是桌面发行版的领导者。但是,对我来说,它在易用性和稳定性上很难打败 Elementary OS。例如我曾确信 [Ubuntu][21] 17.10 的发布会让我迁移回 Canonical 的发行版。但在迁移到使用 GNOME 桌面的新版 Ubuntu 后不久,我就发现自己想念 Elementary OS 的外观、可用性和使用感受。在使用 Ubuntu 两周以后,我又换回了 Elementary OS。
![Elementary OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaros.jpg?itok=SRZC2vkg "Elementary OS")
图3Pantheon 桌面是一件像艺术品一样的桌面。[使用][3]
使用 Elementary OS 的感觉非常好。Pantheon 桌面在默认配置和用户友好之间做到了最完美的平衡。每次更新,它都会变得更好。
尽管 Elementary OS 在 Distrowatch 中排名第六,但我预计到 2018 年底,它将至少上升至第三名。Elementary 的开发人员非常关注用户的需求。他们倾听并且改进,它目前的状态已经非常好,而且似乎只会越来越好。如果你需要一个具有出色可靠性和易用性的桌面Elementary OS 就是你要的发行版。
### 能够证明自己的最好的发行版
很长一段时间内,[Gentoo][22] 都稳坐“展现你技能”的发行版的首座。但是,我认为现在 Gentoo 是时候让出“证明自己”的宝座给 [Linux From Scratch][23] 了。你可能认为这不公平,因为 LFS 实际上不是一个发行版,而是一个帮助用户创建自己的 Linux 发行版的项目。但是,有什么能比自己创建一个发行版更能证明你所学的 Linux 知识的呢?在 LFS 项目中,你可以从头开始构建自定义的 Linux 系统。所以,如果你真的有需要证明的东西,请下载 [Linux From Scratch Book][24] 并开始构建吧。
### 对于物联网最好的发行版
[Ubuntu Core][25] 已经是第二年赢得该项冠军了。Ubuntu Core 是 Ubuntu 的一个小型版本,专为嵌入式和物联网设备而构建。使 Ubuntu Core 如此适合物联网的原因在于它将重点放在快照包snap这种通用软件包上快照包可以安装到平台上而不会干扰基本系统。这些快照包包含它们运行所需的所有内容包括依赖项因此不必担心安装会破坏操作系统或任何其他已安装的软件。此外快照包非常容易升级并且运行在隔离的沙箱中这使它们成为物联网的理想解决方案。
Ubuntu Core 内置的另一个安全特性是登录机制。Ubuntu Core 使用 Ubuntu One 的 SSH 密钥,这样,登录系统的唯一方法就是使用上传到 [Ubuntu One 帐户][26] 的 SSH 密钥。这为你的物联网设备提供了更高的安全性。
![ Ubuntu Core](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntucore.jpg?itok=Ydfq8NKH " Ubuntu Core")
图 4Ubuntu Core 的屏幕,提示通过 Ubuntu One 用户启用远程访问。[使用][3]
### 最好的服务器发行版
这个类别让事情变得有些混乱,主要原因在于支持。如果你需要商业支持,乍一看,你最好的选择可能是 [Red Hat Enterprise Linux][27]。红帽年复一年地证明了自己不仅是全球最强大的企业服务器平台之一,而且是最赚钱的开源业务(年收入超过 20 亿美元)。
但是Red Hat 并不是唯一的服务器发行版。实际上Red Hat 甚至并没有主宰企业服务器计算的每个方面。如果你关注亚马逊 Elastic Compute Cloud 上的云统计数据Ubuntu 就打败了红帽企业 Linux。根据 [云市场][28] 的统计EC2 上 RHEL 的部署数量不足 10 万,而 Ubuntu 的部署数量超过 20 万。
最终的结果是Ubuntu 几乎已经成为云计算的领导者。如果你将它与 Ubuntu 易于使用和管理容器结合起来,就会发现 Ubuntu Server 是服务器类别的明显赢家。而且如果你需要商业支持Canonical 将为你提供 [Ubuntu Advantage][29]。
使用 Ubuntu Server 的一个注意事项是它默认只有纯文本界面。如果需要,你可以安装图形界面,但 Ubuntu Server 的命令行使用起来非常简单,而这是每个 Linux 管理员都应该掌握的。
![Ubuntu server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntuserver_1.jpg?itok=qtFSUlee "Ubuntu server")
图 5Ubuntu Server 的登录界面,提示有可用更新。[使用][3]
### 你最好的选择
正如我之前所说,这些选择都非常主观,但如果你正在寻找一个好的起点,那就试试这些发行版。每一个都可以用于非常特定的目的,并且比大多数同类发行版做得更好。虽然你可能不同意我的某些选择,但你可能会同意 Linux 在每个方面都提供了惊人的可能性。另外,请继续关注下周更多“最佳发行版”的评选。
通过 Linux 基金会和 edX 的免费 [“Linux 简介”][13] 课程了解有关 Linux 的更多信息。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/best-linux-distributions-2018
作者:[JACK WALLEN ][a]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/creative-commons-zero
[7]:https://www.linux.com/files/images/debianjpg
[8]:https://www.linux.com/files/images/lubuntujpg-2
[9]:https://www.linux.com/files/images/elementarosjpg
[10]:https://www.linux.com/files/images/ubuntucorejpg
[11]:https://www.linux.com/files/images/ubuntuserverjpg-1
[12]:https://www.linux.com/files/images/linux-distros-2018jpg
[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[14]:https://distrowatch.com/
[15]:https://www.linux.com/news/learn/sysadmin/best-linux-distributions-2017
[16]:https://www.debian.org/
[17]:https://www.parrotsec.org/
[18]:http://lubuntu.me/
[19]:https://elementary.io/
[20]:https://linuxmint.com/
[21]:https://www.ubuntu.com/
[22]:https://www.gentoo.org/
[23]:http://www.linuxfromscratch.org/
[24]:http://www.linuxfromscratch.org/lfs/download.html
[25]:https://www.ubuntu.com/core
[26]:https://login.ubuntu.com/
[27]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[28]:http://thecloudmarket.com/stats#/by_platform_definition
[29]:https://buy.ubuntu.com/?_ga=2.177313893.113132429.1514825043-1939188204.1510782993

View File

@ -1,21 +1,22 @@
The df Command Tutorial With Examples For Beginners
df 命令的新手教程
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/df-command-1-720x340.png)
In this guide, we are going to learn to use **df** command. The df command, stands for **D** isk **F** ree, reports file system disk space usage. It displays the amount of disk space available on the file system in a Linux system. The df command is not to be confused with **du** command. Both serves different purposes. The df command reports **how much disk space we have** (i.e free space) whereas the du command reports **how much disk space is being consumed** by the files and folders. Hope I made myself clear. Let us go ahead and see some practical examples of df command, so you can understand it better.
在本指南中,我们将学习如何使用 **df** 命令。df 命令是 **D**isk **F**ree 的缩写它报告文件系统磁盘空间的使用情况显示一个 Linux 系统中文件系统上可用磁盘空间的数量。不要把 df 命令与 **du** 命令混淆,它们的用途不同df 命令报告**我们拥有多少磁盘空间**(即空闲空间),而 du 命令报告**文件和目录占用了多少磁盘空间**。希望我讲清楚了。下面我们来看一些 df 命令的实际示例,以便于你更好地理解它。
### The df Command Tutorial With Examples
### df 命令使用举例
**1\. View entire file system disk space usage**
**1、查看整个文件系统磁盘空间使用情况**
Run df command without any arguments to display the entire file system disk space.
无需任何参数来运行 df 命令,以显示整个文件系统磁盘空间使用情况。
```
$ df
```
**Sample output:**
**示例输出:**
```
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
@ -32,20 +33,20 @@ tmpfs 807776 28 807748 1% /run/user/1000
![][2]
As you can see, the result is divided into six columns. Let us see what each column means.
正如你所见,输出结果分为六列。我们来看一下每一列的含义。
* **Filesystem** the filesystem on the system.
* **1K-blocks** the size of the filesystem, measured in 1K blocks.
* **Used** the amount of space used in 1K blocks.
* **Available** the amount of available space in 1K blocks.
* **Use%** the percentage that the filesystem is in use.
* **Mounted on** the mount point where the filesystem is mounted.
* **Filesystem** Linux 系统中的文件系统
* **1K-blocks** 文件系统的大小,用 1K 大小的块来表示。
* **Used** 以 1K 大小的块所表示的已使用数量。
* **Available** 以 1K 大小的块所表示的可用空间的数量。
* **Use%** 文件系统中已使用的百分比。
* **Mounted on** 已挂载的文件系统的挂载点。
**2\. Display file system disk usage in human readable format**
**2、以人类友好格式显示文件系统硬盘空间使用情况**
As you may noticed in the above examples, the usage is showed in 1k blocks. If you want to display them in human readable format, use **-h** flag.
在上面的示例中你可能已经注意到了,它使用 1K 大小的块为单位来表示使用情况。如果你想以人类友好的格式来显示它们,可以使用 **-h** 标志。
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
@ -61,11 +62,11 @@ tmpfs 789M 28K 789M 1% /run/user/1000
```
Now look at the **Size** and **Avail** columns, the usage is shown in GB and MB.
现在,在 **Size** 列和 **Avail** 列,使用情况是以 GB 和 MB 为单位来显示的。
**3\. Display disk space usage only in MB**
**3、仅以 MB 为单位显示磁盘空间的使用情况**
To view file system disk space usage only in Megabytes, use **-m** flag.
如果仅以 MB 为单位来显示文件系统磁盘空间使用情况,使用 **-m** 标志。
```
$ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
@ -81,9 +82,9 @@ tmpfs 789 1 789 1% /run/user/1000
```
**4\. List inode information instead of block usage**
**4、列出 inode 信息而不是块的使用情况**
We can list inode information instead of block usage by using **-i** flag as shown below.
如下所示,我们可以通过使用 **-i** 标志来列出 inode 信息而不是块的使用情况。
```
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
@ -99,9 +100,9 @@ tmpfs 1009720 29 1009691 1% /run/user/1000
```
**5\. Display the file system type**
**5、显示文件系统类型**
To display the file system type, use **-T** flag.
使用 **-T** 标志显示文件系统类型。
```
$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
@ -117,11 +118,11 @@ tmpfs tmpfs 807776 28 807748 1% /run/user/1000
```
As you see, there is an extra column (second from left) that shows the file system type.
正如你所见,现在出现了显示文件系统类型的额外的列(从左数的第二列)。
**6\. Display only the specific file system type**
**6、仅显示指定类型的文件系统**
We can limit the listing to a certain file systems. for example **ext4**. To do so, we use **-t** flag.
我们可以限制仅列出某种类型的文件系统,比如 **ext4**。为此,我们使用 **-t** 标志。
```
$ df -t ext4
Filesystem 1K-blocks Used Available Use% Mounted on
@ -130,11 +131,11 @@ Filesystem 1K-blocks Used Available Use% Mounted on
```
See? This command shows only the ext4 file system disk space usage.
看到了吗?这个命令仅显示了 ext4 文件系统的磁盘空间使用情况。
**7\. Exclude specific file system type**
**7、不列出指定类型的文件系统**
Some times, you may want to exclude a specific file system from the result. This can be achieved by using **-x** flag.
有时,我们可能需要从结果中排除某种类型的文件系统。这可以使用 **-x** 标志来实现。
```
$ df -x ext4
Filesystem 1K-blocks Used Available Use% Mounted on
@ -148,11 +149,11 @@ tmpfs 807776 28 807748 1% /run/user/1000
```
The above command will display all file systems usage, except **ext4**.
上面的命令列出了除 **ext4** 类型以外的全部文件系统。
**8\. Display usage for a folder**
**8、显示一个目录的磁盘使用情况**
To display the disk space available and where it is mounted for a folder, for example **/home/sk/** , use this command:
要显示某个目录所在文件系统的可用磁盘空间及其挂载点,例如 **/home/sk/** 目录,可以使用如下的命令:
```
$ df -hT /home/sk/
Filesystem Type Size Used Avail Use% Mounted on
@ -160,19 +161,19 @@ Filesystem Type Size Used Avail Use% Mounted on
```
This command shows the file system type, used and available space in human readable form and where it is mounted. If you dont to display the file system type, just ignore the **-t** flag.
这个命令以人类友好的格式显示了文件系统类型、已用和可用空间,以及它的挂载点。如果你不想显示文件系统类型,忽略 **-t** 标志即可。
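这些标志还可以组合使用。例如,下面这条命令(只是一个示例组合)以人类友好的格式显示文件系统类型,同时排除 tmpfs 和 devtmpfs 类型;**-x** 标志可以重复多次,每次排除一种类型:
```
$ df -hT -x tmpfs -x devtmpfs
```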
For more details, refer the man pages.
更多详细信息,请参阅 man 手册页。
```
$ man df
```
**Recommended read:**
**建议阅读:**
And, thats all for today! I hope this was useful. More good stuffs to come. Stay tuned!
今天就到此为止!希望这篇文章对你有用。还有更多好东西即将呈现,敬请关注!
Cheers!
再见!
@ -181,7 +182,7 @@ Cheers!
via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
@ -190,3 +191,4 @@ via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginne
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/df-command.png

View File

@ -0,0 +1,319 @@
在 Ubuntu 18.04 LTS 无头服务器上安装 Oracle VirtualBox
======
![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png)
本教程将指导你在 Ubuntu 18.04 LTS 无头服务器上,一步一步地安装 **Oracle VirtualBox**。同时,本教程也将介绍如何使用 **phpVirtualBox** 去管理安装在无头服务器上的 **VirtualBox** 实例。**phpVirtualBox** 是 VirtualBox 的一个基于 web 的前端工具。本教程也适用于 Debian 和其它 Ubuntu 衍生版本,如 Linux Mint。现在我们开始。
### 前提条件
在安装 Oracle VirtualBox 之前,我们的 Ubuntu 18.04 LTS 服务器上需要满足如下的前提条件。
首先,逐个运行如下的命令来更新 Ubuntu 服务器。
```
$ sudo apt update
$ sudo apt upgrade
$ sudo apt dist-upgrade
```
接下来,安装如下的必需的包:
```
$ sudo apt install build-essential dkms unzip wget
```
安装完成所有的更新和必需的包之后,重新启动 Ubuntu 服务器。
```
$ sudo reboot
```
### 在 Ubuntu 18.04 LTS 服务器上安装 VirtualBox
添加 Oracle VirtualBox 官方仓库。为此你需要去编辑 **/etc/apt/sources.list** 文件:
```
$ sudo nano /etc/apt/sources.list
```
添加下列的行。
在这里,我将使用 Ubuntu 18.04 LTS因此我添加下列的仓库。
```
deb http://download.virtualbox.org/virtualbox/debian bionic contrib
```
![][2]
用你的 Ubuntu 发行版的代号替换关键字 **bionic**,比如 **xenial、vivid、utopic、trusty、raring、quantal、precise、lucid、jessie、wheezy** 或 **squeeze**。
然后,运行下列的命令去添加 Oracle 公钥:
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
```
对于 VirtualBox 的老版本,添加如下的公钥:
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
```
接下来,使用如下的命令去更新软件源:
```
$ sudo apt update
```
最后,使用如下的命令去安装最新版本的 Oracle VirtualBox
```
$ sudo apt install virtualbox-5.2
```
### 添加用户到 VirtualBox 组
我们需要去创建并添加我们的系统用户到 **vboxusers** 组中。你也可以单独创建用户,然后将它分配到 **vboxusers** 组中,也可以使用已有的用户。我不想去创建新用户,因此,我添加已存在的用户到这个组中。请注意,如果你为 virtualbox 使用一个单独的用户,那么你必须注销当前用户,并使用那个特定的用户去登入,来完成剩余的步骤。
我使用的是我的用户名 **sk**,因此,我运行如下的命令将它添加到 **vboxusers** 组中。
```
$ sudo usermod -aG vboxusers sk
```
现在,运行如下的命令去检查 virtualbox 内核模块是否已加载。
```
$ sudo systemctl status vboxdrv
```
![][3]
正如你在上面的截屏中所看到的vboxdrv 模块已加载,并且是已运行的状态!
对于老的 Ubuntu 版本,运行:
```
$ sudo /etc/init.d/vboxdrv status
```
如果 virtualbox 模块没有启动,运行如下的命令去启动它。
```
$ sudo /etc/init.d/vboxdrv setup
```
很好!我们已经成功安装了 VirtualBox 并启动了 virtualbox 模块。现在,我们继续来安装 Oracle VirtualBox 的扩展包。
### 安装 VirtualBox 扩展包
VirtualBox 扩展包为 VirtualBox 访客系统提供了如下的功能。
* 虚拟的 USB 2.0 (EHCI) 驱动
* VirtualBox 远程桌面协议VRDP支持
* 宿主机网络摄像头直通
* Intel PXE 引导 ROM
* 对 Linux 宿主机上的 PCI 直通提供支持
从[**这里**][4]为 VirtualBox 5.2.x 下载最新版的扩展包。
```
$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```
使用如下的命令去安装扩展包:
```
$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```
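扩展包安装完成后,可以使用下面的命令确认它已经注册成功(输出内容会因版本而异):
```
$ VBoxManage list extpacks
```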
恭喜!我们已经成功地在 Ubuntu 18.04 LTS 服务器上安装了 Oracle VirtualBox 的扩展包。现在已经可以去部署虚拟机了。参考 [**virtualbox 官方指南**][5],在命令行中开始创建和管理虚拟机。
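作为参考,下面是一个从命令行创建并以无头方式启动虚拟机的最小流程草图其中虚拟机名称“testvm”、内存与磁盘大小等参数均为假设的示例值请按需调整
```
$ VBoxManage createvm --name "testvm" --ostype Ubuntu_64 --register
$ VBoxManage modifyvm "testvm" --memory 1024 --nic1 nat
$ VBoxManage createhd --filename ~/testvm.vdi --size 10240
$ VBoxManage storagectl "testvm" --name "SATA" --add sata
$ VBoxManage storageattach "testvm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ~/testvm.vdi
$ VBoxManage startvm "testvm" --type headless
```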
然而,并不是每个人都擅长使用命令行。有些人可能希望在图形界面中去创建和使用虚拟机。不用担心!下面我们为你带来非常好用的 **phpVirtualBox** 工具!
### 关于 phpVirtualBox
**phpVirtualBox** 是一个免费的、基于 web 的 Oracle VirtualBox 前端。它是使用 PHP 开发的。使用 phpVirtualBox我们可以通过 web 浏览器,从网络上的任意一个系统上轻松地创建、删除、管理和运行虚拟机。
### 在 Ubuntu 18.04 LTS 上安装 phpVirtualBox
由于它是基于 web 的工具,我们需要安装 Apache web 服务器、PHP 和一些 php 模块。
为此,运行如下命令:
```
$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml
```
然后,从 [**下载页面**][6] 上下载 phpVirtualBox 5.2.x 版。请注意,由于我们已经安装了 VirtualBox 5.2 版,因此,同样的我们必须去安装 phpVirtualBox 的 5.2 版本。
运行如下的命令去下载它:
```
$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip
```
使用如下命令解压下载的安装包:
```
$ unzip 5.2-0.zip
```
这个命令会把 5.2-0.zip 文件的内容解压到一个名为 “phpvirtualbox-5.2-0” 的文件夹中。现在,把这个文件夹的内容复制或移动到你的 apache web 服务器的根目录中。
```
$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox
```
给 phpvirtualbox 文件夹分配适当的权限。
```
$ sudo chmod 777 /var/www/html/phpvirtualbox/
```
接下来,我们开始配置 phpVirtualBox。
像下面这样复制示例配置文件。
```
$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php
```
编辑 phpVirtualBox 的 **config.php** 文件:
```
$ sudo nano /var/www/html/phpvirtualbox/config.php
```
找到下列行,并且用你的系统用户名和密码去替换它(就是前面的“添加用户到 VirtualBox 组中”节中使用的用户名)。
在我的案例中,我的 Ubuntu 系统用户名是 **sk** ,它的密码是 **ubuntu**
```
var $username = 'sk';
var $password = 'ubuntu';
```
![][7]
保存并关闭这个文件。
接下来,创建一个名为 **/etc/default/virtualbox** 的新文件:
```
$ sudo nano /etc/default/virtualbox
```
添加下列行。用你自己的系统用户替换 sk
```
VBOXWEB_USER=sk
```
最后,重引导你的系统或重启下列服务去完成整个配置工作。
```
$ sudo systemctl restart vboxweb-service
$ sudo systemctl restart vboxdrv
$ sudo systemctl restart apache2
```
### 调整防火墙允许连接 Apache web 服务器
如果你在 Ubuntu 18.04 LTS 上启用了 UFW那么在默认情况下apache web 服务器是不能被任何远程系统访问的。你必须按照下列步骤允许 http 和 https 流量通过 UFW。
首先,我们使用如下的命令来查看在策略中已经安装了哪些应用:
```
$ sudo ufw app list
Available applications:
Apache
Apache Full
Apache Secure
OpenSSH
```
正如你所见Apache 和 OpenSSH 应该已经在 UFW 的策略文件中安装了。
如果你在策略中看到的是 **“Apache Full”**,说明它允许流量到达 **80****443** 端口:
```
$ sudo ufw app info "Apache Full"
Profile: Apache Full
Title: Web Server (HTTP,HTTPS)
Description: Apache v2 is the next generation of the omnipresent Apache web
server.
Ports:
80,443/tcp
```
现在,运行如下的命令去启用这个策略中的 HTTP 和 HTTPS 的入站流量:
```
$ sudo ufw allow in "Apache Full"
Rules updated
Rules updated (v6)
```
如果你只想允许 http80流量而不允许 https可以查看仅包含 80 端口的 “Apache” 策略:
```
$ sudo ufw app info "Apache"
```
### 访问 phpVirtualBox 的 Web 控制台
现在,用任意一台远程系统的 web 浏览器来访问。
在地址栏中,输入:**<http://IP-address-of-virtualbox-headless-server/phpvirtualbox>**。
在我的案例中,我导航到这个链接 **<http://192.168.225.22/phpvirtualbox>**
你将看到如下的屏幕输出。输入 phpVirtualBox 管理员用户凭据。
phpVirtualBox 的默认管理员用户名和密码是 **admin** / **admin**
![][8]
恭喜!你现在已经进入了 phpVirtualBox 管理面板了。
![][9]
现在,你可以从 phpvirtualbox 的管理面板上,开始去创建你的 VM 了。正如我在前面提到的,你可以从同一网络上的任意一台系统上访问 phpVirtualBox 了,而所需要的仅仅是一个 web 浏览器和 phpVirtualBox 的用户名和密码。
如果在你的宿主机系统(不是访客机)的 BIOS 中没有启用虚拟化支持phpVirtualBox 将只允许你去创建 32 位的访客系统。要安装 64 位的访客系统,你必须在你的宿主机的 BIOS 中启用虚拟化支持。在你的宿主机的 BIOS 中你可以找到一些类似于 “virtualization” 或 “hypervisor” 字眼的选项,然后确保它是启用的。
本文到此结束了,希望能帮到你。如果你找到了更有用的指南,共享出来吧。
还有一大波更好玩的东西即将到来,请继续关注!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png
[4]:https://www.virtualbox.org/wiki/Downloads
[5]:http://www.virtualbox.org/manual/ch08.html
[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases
[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png
[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png
[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png

View File

@ -0,0 +1,298 @@
树莓派自建 NAS 云盘之一:搭建网络存储盘
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl)
我将在接下来的三篇文章中讲述如何搭建一个简便、实用的 NAS 云盘系统。我会在这个中心化的存储系统中存储数据,并让它每晚自动备份增量数据。本系列文章将利用 NFS 文件系统把磁盘挂载到同一网络下的不同设备上,并使用 [Nextcloud][1] 来离线或在线访问和分享数据。
本文主要讲述将数据盘挂载到远程设备上的软硬件步骤。本系列的第二篇文章将讨论数据备份策略以及如何添加定时备份任务。最后一篇文章将安装 Nextcloud 软件,用户通过 Nextcloud 提供的 web 界面可以方便地离线或在线访问数据。本系列教程最终搭建的 NAS 云盘支持多用户操作、文件共享等功能,所以你可以通过它方便地分享数据,比如发送一个加密链接与朋友分享你的照片等等。
最终的系统架构如下图所示:
![](https://opensource.com/sites/default/files/uploads/nas_part1.png)
### 硬件
首先需要准备硬件。本文所列方案只是其中一种示例,你也可以按不同的硬件方案进行采购。
最主要的就是 [树莓派 3][2],它带有四核 CPU、1GB 内存,以及相当快的网络接口。数据将存储在两个 USB 磁盘驱动器上(这里使用 1TB 磁盘):其中一个磁盘用于日常数据存储,另一个用于数据备份。请务必使用有源 USB 磁盘驱动器或者带附加电源的 USB 集线器,因为树莓派无法为两个 USB 磁盘驱动器供电。
### 软件
社区中最活跃的操作系统当属 [Raspbian][3],便于定制个性化项目。已经有很多 [操作指南][4] 讲述如何在树莓派中安装 Raspbian 系统,所以这里不再赘述。在撰写本文时,最新的官方支持版本是 [Raspbian Stretch][5],它对我来说很好使用。
到此,我将假设你已经配置好了基本的 Raspbian 系统并且可以通过 `ssh` 访问到你的树莓派。
### 准备 USB 磁盘驱动器
为了更好地读写数据,我建议使用 ext4 文件系统格式化磁盘。首先,你必须先找到连接到树莓派的磁盘。你可以在 `/dev/sd<x>` 找到磁盘设备。使用命令 `fdisk -l`,你可以找到刚刚连接的两块 USB 磁盘驱动器。请注意,下面的步骤将会清除 USB 磁盘驱动器上的所有数据,请做好备份。
```
pi@raspberrypi:~ $ sudo fdisk -l
<...>
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe8900690
Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6aa4f598
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *     2048 1953521663 1953519616 931.5G  83 Linux
```
由于这些设备是连接到树莓派上仅有的两个 1TB 磁盘,所以我们可以很容易地辨别出 `/dev/sda``/dev/sdb` 就是那两个 USB 磁盘驱动器。每个磁盘信息末尾的分区表展示了执行完以下步骤(格式化磁盘并创建分区表)之后它们应有的样子。请为每个 USB 磁盘驱动器执行以下步骤(假设你的磁盘也是 `/dev/sda``/dev/sdb`,第二次操作时只要把命令中的 `sda` 替换为 `sdb` 即可)。
首先,删除磁盘分区表,创建一个新的并且只包含一个分区的新分区表。在 `fdisk` 中,你可以使用交互单字母命令来告诉程序你想要执行的操作。只需要在提示符 `Command(m for help):` 后输入相应的字母即可(可以使用 `m` 命令获得更多详细信息):
```
pi@raspberrypi:~ $ sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): o
Created a new DOS disklabel with disk identifier 0x9c310964.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1953525167, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167):
Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
Command (m for help): p
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9c310964
Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux
Command (m for help): w
The partition table has been altered.
Syncing disks.
```
现在,我们将用 ext4 文件系统格式化新创建的分区 `/dev/sda1`
```
pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
<...>
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
```
重复以上步骤后,让我们根据用途来对它们建立标签:
```
pi@raspberrypi:~ $ sudo e2label /dev/sda1 data
pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup
```
现在让我们挂载这些磁盘并存储一些数据。以我运行该系统一年多的经验来看当树莓派启动时例如在断电后USB 磁盘驱动器并不总是能被自动挂载,因此我建议使用 autofs 在需要的时候自动挂载。
首先,安装 autofs 并创建挂载点:
```
pi@raspberrypi:~ $ sudo apt install autofs
pi@raspberrypi:~ $ sudo mkdir /nas
```
然后在 `/etc/auto.master` 文件中添加下面这行来挂载设备:
```
/nas    /etc/auto.usb
```
如果不存在以下内容,则创建 `/etc/auto.usb`,然后重新启动 autofs 服务:
```
data -fstype=ext4,rw :/dev/disk/by-label/data
backup -fstype=ext4,rw :/dev/disk/by-label/backup
pi@raspberrypi3:~ $ sudo service autofs restart
```
现在你应该可以分别访问 `/nas/data` 以及 `/nas/backup` 磁盘了。显然,到此还不会令人太兴奋,因为你只是擦除了磁盘中的数据。不过,你可以执行以下命令来确认设备是否已经挂载成功:
```
pi@raspberrypi3:~ $ cd /nas/data
pi@raspberrypi3:/nas/data $ cd /nas/backup
pi@raspberrypi3:/nas/backup $ mount
<...>
/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
<...>
/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
```
首先进入对应目录,以确保 autofs 能够挂载设备。autofs 会跟踪文件系统的访问记录,并在需要时挂载相应的设备。然后 `mount` 命令显示这两个 USB 磁盘驱动器已经挂载到我们想要的位置了。
设置 autofs 的过程容易出错,如果第一次尝试失败,请不要沮丧。你可以上网搜索有关教程。
### 挂载网络存储
现在你已经设置好了基本的网络存储,接下来我们希望把它挂载到远程 Linux 机器上。这里使用 NFS 文件系统,首先在树莓派上安装 NFS 服务器:
```
pi@raspberrypi:~ $ sudo apt install nfs-kernel-server
```
然后,需要告诉 NFS 服务器公开 `/nas/data` 目录,这是从树莓派外部可以访问的唯一设备(另一个用于备份)。编辑 `/etc/exports` 添加如下内容以允许所有可以访问 NAS 云盘的设备挂载存储:
```
/nas/data *(rw,sync,no_subtree_check)
```
更多有关限制只允许单个设备挂载的详细信息,请参阅 `man exports`。经过上面的配置,只要能访问 NFS 所需的端口(`111``2049`任何人都可以访问数据。我的路由器防火墙只向外开放了家庭网络的 22 和 443 端口,因此 NFS 端口不会暴露到互联网上,只有家庭网络中的设备才能访问 NFS 服务器。
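每次修改 `/etc/exports` 之后,需要让 NFS 服务器重新读取导出配置(也可以直接重启 nfs-kernel-server 服务):
```
pi@raspberrypi:~ $ sudo exportfs -ra
```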
如果要在 Linux 计算机挂载存储,运行以下命令:
```
you@desktop:~ $ sudo mkdir /nas/data
you@desktop:~ $ sudo mount -t nfs <raspberry-pi-hostname-or-ip>:/nas/data /nas/data
```
同样,我建议使用 autofs 来挂载该网络设备。如果需要其他帮助,请参看 [如何使用 Autofs 来挂载 NFS 共享][6]。
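如果你想在客户端上同样使用 autofs下面是一个最小的配置草图其中映射文件名 `/etc/auto.nfs` 是我假设的示例,配置方式与前文树莓派上的 autofs 配置一致)。先在客户端的 `/etc/auto.master` 中添加:
```
/nas    /etc/auto.nfs
```
然后创建 `/etc/auto.nfs` 并写入下面这行,最后重启 autofs 服务:
```
data -fstype=nfs,rw <raspberry-pi-hostname-or-ip>:/nas/data
```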
现在你可以在远程设备上通过 NFS 系统访问位于你树莓派 NAS 云盘上的数据了。在后面一篇文章中,我将介绍如何使用 `rsync` 自动将数据备份到第二个 USB 磁盘驱动器。你将会学到如何使用 `rsync` 创建增量备份,在进行日常备份的同时还能节省设备空间。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[jrg](https://github.com/jrglinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ntlx
[1]: https://nextcloud.com/
[2]: https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
[3]: https://www.raspbian.org/
[4]: https://www.raspberrypi.org/documentation/installation/installing-images/
[5]: https://www.raspberrypi.org/blog/raspbian-stretch/
[6]: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares

View File

@ -0,0 +1,173 @@
提交你的第一个 Linux 内核补丁时的一个检查列表
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22)
Linux 内核是最大的且变动最快的开源项目之一,它由大约 53,600 个文件和近 2,000 万行代码组成。在全世界范围内,有超过 15,600 位程序员为它贡献代码。Linux 内核项目的维护者使用如下的协作模型。
![](https://opensource.com/sites/default/files/karnik_figure1.png)
本文中,为了帮助你向 Linux 内核提交第一个贡献,我将提供一个必备的快速检查列表,告诉你在提交补丁时应该查看和了解的内容。关于提交第一个补丁的完整流程,请阅读 [KernelNewbies 第一个内核补丁教程][1]。
### 为内核作贡献
#### 第 1 步:准备你的系统
本文开始之前,假设你的系统已经具备了如下的工具:
+ 文本编辑器
+ Email 客户端
+ 版本控制系统git
#### 第 2 步:下载 Linux 内核代码仓库:
```
git clone -b staging-testing git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
```
然后,复制你的当前配置:
```
cp /boot/config-`uname -r`* .config
```
#### 第 3 步:构建/安装你的内核
```
make -jX
sudo make modules_install install
```
#### 第 4 步:创建一个分支并切换到它
```
git checkout -b first-patch
```
#### 第 5 步:更新你的内核并指向最新的代码
```
git fetch origin
git rebase origin/staging-testing
```
#### 第 6 步:在最新的代码基础上产生一个变更
使用 `make` 命令重新编译,确保你的变更没有错误。
#### 第 7 步:提交你的变更并创建一个补丁
```
git add <file>
git commit -s -v
git format-patch -o /tmp/ HEAD^
```
![](https://opensource.com/sites/default/files/karnik_figure2.png)
主题由冒号分隔的文件名开头,接下来用祈使语态描述这个补丁做了什么。空行之后是必须有的 `Signed-off-by` 标记,最后是你的补丁的 `diff` 信息。
下面是另外一个简单补丁的示例:
![](https://opensource.com/sites/default/files/karnik_figure3.png)
接下来,[使用 email 从命令行][2](在本例子中使用的是 Mutt发送这个补丁
```
mutt -H /tmp/0001-<whatever your filename is>
```
使用 [get_maintainer.pl 脚本][11],去了解你的补丁应该发送给哪位维护者的列表。
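例如,可以把上一步 `format-patch` 生成的补丁文件直接交给这个脚本,它会列出应该接收这个补丁的维护者和邮件列表:
```
perl scripts/get_maintainer.pl /tmp/0001-<whatever your filename is>
```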
### 提交你的第一个补丁之前,你应该知道的事情
* [Greg Kroah-Hartman][3] 的 [staging tree][4] 是提交你的 [第一个补丁][1] 的最好的地方,因为他更容易接受新贡献者的补丁。在你熟悉了补丁发送流程以后,你就可以去发送复杂度更高的子系统专用的补丁。
* 你也可以从纠正代码中的编码风格开始。想学习更多关于这方面的内容,请阅读 [Linux 内核编码风格文档][5]。
* [checkpatch.pl][6] 脚本可以检测你的编码风格方面的错误。例如,运行如下的命令:
```
perl scripts/checkpatch.pl -f drivers/staging/android/* | less
```
* 你可以去补全开发者留下的 TODO 注释中未完成的内容:
```
find drivers/staging -name TODO
```
* [Coccinelle][7] 是一个模式匹配的有用工具。
* 阅读 [归档的内核邮件][8]。
* 为找到灵感,你可以去遍历 [linux.git log][9] 查看以前的作者的提交内容。
* 注意:在社区中讨论你的补丁时,不要使用“置顶回复”top-posting下面就是一个这样的例子
**错误的方式:**
Chris,
_Yes, let's schedule the meeting tomorrow, on the second floor._
> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
> Hey John, I had some questions:
> 1\. Do you want to schedule the meeting tomorrow?
> 2\. On which floor in the office?
> 3\. What time is suitable to you?
(注意那最后一个问题,在回复中无意中落下了。)
**正确的方式:**
Chris,
See my answers below...
> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
> Hey John, I had some questions:
> 1\. Do you want to schedule the meeting tomorrow?
_Yes tomorrow is fine._
> 2\. On which floor in the office?
_Let's keep it on the second floor._
> 3\. What time is suitable to you?
_09:00 am would be alright._
(所有问题都得到了回复,并且这种方式还节省了阅读时间。)
* [Eudyptula challenge][10] 是学习内核基础知识的非常好的方式。
想学习更多内容,阅读 [KernelNewbies 第一个内核补丁教程][1]。之后如果你还有任何问题,可以在 [kernelnewbies 邮件列表][12] 或者 [#kernelnewbies IRC channel][13] 中提问。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/first-linux-kernel-patch
作者:[Sayli Karnik][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sayli
[1]:https://kernelnewbies.org/FirstKernelPatch
[2]:https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients
[3]:https://twitter.com/gregkh
[4]:https://www.kernel.org/doc/html/v4.15/process/2.Process.html
[5]:https://www.kernel.org/doc/html/v4.10/process/coding-style.html
[6]:https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl
[7]:http://coccinelle.lip6.fr/
[8]:linux-kernel@vger.kernel.org
[9]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/
[10]:http://eudyptula-challenge.org/
[11]:https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl
[12]:https://kernelnewbies.org/MailingList
[13]:https://kernelnewbies.org/IRC

View File

@ -0,0 +1,108 @@
一个简单,美观和跨平台的播客应用程序
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png)
播客在过去几年中变得非常流行。播客属于所谓的“信息娱乐”,通常轻松休闲,但往往也会为你提供有价值的信息。如果你喜欢某样东西,很可能就存在一个与之相关的播客。Linux 桌面上有很多播客播放器,但是如果你想要一个视觉上美观、动画流畅并且可以在每个平台上运行的应用,能替代 **CPod** 的选择并不多。**CPod**(以前称为 **Cumulonimbus**)是一个开源的、极简的播客应用程序,适用于 Linux、MacOS 和 Windows。
CPod 运行在名为 **Electron** 的框架上,这个工具允许开发人员构建跨平台(例如 Windows、MacOS 和 Linux的桌面图形化应用程序。在本简要指南中我们将讨论如何在 Linux 中安装和使用 CPod 播客应用程序。
### 安装 CPod
转到 CPod 的 [**发布页面**][1],下载并安装对应平台的二进制文件。如果你使用 Ubuntu/Debian你只需从发布页面下载并安装 .deb 文件,如下所示。
```
$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb
$ sudo apt update
$ sudo apt install gdebi
$ sudo gdebi CPod_1.25.7_amd64.deb
```
如果你使用任何其他发行版,你可能需要使用发布页面中的 **AppImage**。
从发布页面下载 AppImage 文件。
打开终端,然后转到存储 AppImage 文件的目录。 更改权限以允许执行:
```
$ chmod +x CPod-1.25.7-x86_64.AppImage
```
执行 AppImage 文件:
```
$ ./CPod-1.25.7-x86_64.AppImage
```
你将看到一个对话框询问是否将应用程序与系统集成。 如果要执行此操作,请单击**是**。
### 特征
**探索标签页**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png)
CPod 使用 Apple iTunes 的数据库查找播客。这很好,因为 iTunes 的播客数据库是最大的。如果某个播客存在,那么它很可能就在 iTunes 上。要查找播客,只需使用探索页面顶部的搜索栏即可。探索页面还展示了一些受欢迎的播客。
**主标签页**
![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png)
主标签页是打开应用程序时默认显示的页面,它按时间顺序列出你已订阅的所有播客的所有剧集。
在主页选项卡中,你可以:
1. 将剧集标记为已读。
2. 下载它们以进行离线播放。
3. 将它们添加到播放队列中。
![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png)
**订阅标签页**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png)
你当然可以订阅你喜欢的播客。你还可以在订阅标签页中执行其他一些操作:
1. 刷新播客的封面图。
2. 将订阅导出到 .OPML 文件,或从 .OPML 文件导入订阅。
**播放器**
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png)
播放器可能是 CPod 最美观的部分。该应用程序会根据播客的横幅更改整体外观。底部有一个声音可视化器。在右侧,你可以查看和搜索此播客的其他剧集。
**缺点/缺失功能**
虽然我喜欢这个应用程序,但 CPod 也确实有一些缺点和缺失的功能:
1. MPRIS 集成很弱:你可以从桌面环境的媒体播放器对话框中播放或暂停播客,但也仅此而已。它不会显示播客的名称,你也无法切换到下一个或上一个剧集。
2. 不支持章节。
3. 没有自动下载功能:你必须手动下载剧集。
4. 使用过程中的 CPU 占用率非常高(即使以 Electron 应用程序的标准来看)。
### 总结
虽然它确实有缺点,但 CPod 显然是最美观的播客播放器应用程序,并且具备了最基本的功能。如果你喜欢视觉上美观的应用程序,并且不需要高级功能,那它就是适合你的完美应用。我自己马上就会用上它。
你喜欢 CPod 吗? 请将你的意见发表在下面的评论中。
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://github.com/z-------------/CPod/releases

View File

@ -0,0 +1,78 @@
Hegemon - 使用 Rust 编写的模块化系统监视程序
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png)
在类 Unix 系统中监视运行中的进程时,最常用的程序是 **top** 以及 top 的增强版 **htop**。我个人最喜欢的是 htop。但是开发人员会不时发布这些程序的替代品其中一个就是 **Hegemon**,它是使用 **Rust** 语言编写的模块化系统监视程序。
关于 Hegemon 的功能,我们可以列出以下这些:
* Hegemon 会监控 CPU、内存和交换空间的使用情况。
* 它会监控系统的温度和风扇速度。
* 更新间隔时间可以调整,默认值为 3 秒。
* 我们可以通过展开数据流来查看更详细的图表和其他信息。
* 附带单元测试。
* 界面干净。
* 免费且开源。
### 安装 Hegemon
确保已安装 **Rust 1.26** 或更高版本。要在 Linux 发行版中安装 Rust请参阅以下指南
[Install Rust Programming Language In Linux][2]
另外,要安装 [libsensors][1] 库。它在大多数 Linux 发行版的默认仓库中都有。例如,你可以使用以下命令将其安装在基于 RPM 的系统(如 Fedora上
```
$ sudo dnf install lm_sensors-devel
```
在像 Ubuntu、Linux Mint 这样的基于 Debian 的系统上,可以使用这个命令安装它:
```
$ sudo apt-get install libsensors4-dev
```
在安装 Rust 和 libsensors 后,使用命令安装 Hegemon
```
$ cargo install hegemon
```
安装 hegemon 后,使用以下命令开始监视 Linux 系统中正在运行的进程:
```
$ hegemon
```
以下是 Arch Linux 桌面的示例输出。
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif)
要退出,请按 **Q**
请注意hegemon 仍处于早期开发阶段,还不能完全取代 **top** 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug请在项目的 GitHub 页面中报告。开发人员计划在即将推出的版本中引入更多功能所以请关注这个项目。
就是这些了。希望这篇文章有用。还有更多的好东西。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://github.com/lm-sensors/lm-sensors
[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/

View File

@ -0,0 +1,315 @@
Linux 系统上 swap 空间的介绍
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh)
如今无论是什么操作系统swap 空间都非常常见。Linux 使用 swap 空间来增加主机可用的虚拟内存,它可以在常规文件系统或逻辑卷上使用一个或多个专用的 swap 分区或 swap 文件。
典型计算机中有两种基本类型的内存。第一种类型,随机存取存储器 (RAM),用于存储计算机使用的数据和程序。只有程序和数据存储在 RAM 中,计算机才能使用它们。随机存储器是易失性存储器;也就是说,如果计算机关闭了,存储在 RAM 中的数据就会丢失。
硬盘是用于长期存储数据和程序的磁性介质。该磁介质可以很好地保存数据即使计算机断电存储在磁盘上的数据也会保留下来。CPU中央处理器不能直接访问硬盘上的程序和数据它们必须首先被复制到 RAM 中RAM 是 CPU 访问代码指令和操作数据的地方。在引导过程中,计算机将特定的操作系统程序(如内核、init 或 systemd以及硬盘上的数据复制到 RAM 中,在 RAM 中,计算机的处理器CPU可以直接访问这些数据。
### Swap 空间
Swap 空间是现代 Linux 系统中的第二种内存类型。Swap 空间的主要功能是当全部的 RAM 被占用并且需要更多内存时,用磁盘空间代替 RAM 内存。
例如,假设你有一台 8GB RAM 的计算机。如果你启动的程序没有占满 RAM一切都好不需要 swap。假设你在处理电子表格当你添加更多的行时电子表格会增长它加上所有正在运行的程序将会占满全部的 RAM。如果这时没有可用的 swap 空间,你将不得不停止处理电子表格,直到关闭一些其他程序来释放一些 RAM。
内核使用一个内存管理程序来检测最近没有被使用的内存块(称为内存页)。内存管理程序将这些相对不常使用的内存页交换到硬盘上专门用于“分页”或交换的特殊分区,从而释放 RAM为电子表格中输入的更多数据腾出空间。那些被换出到硬盘的内存页由内核的内存管理代码跟踪如果需要可以被换回 RAM。
Linux 计算机中的内存总量是 RAM 加上 swap 空间,这个总量被称为虚拟内存。
### Linux swap 分区类型
Linux 提供了两种类型的 swap 空间。默认情况下,大多数 Linux 在安装时都会创建一个 swap 分区,但是也可以使用一个经过特殊配置的文件作为 swap 文件。swap 分区顾名思义就是一个标准的磁盘分区,使用 `mkswap` 命令将其设置为 swap 空间。
如果没有可用磁盘空间来创建新的 swap 分区,或者卷组中没有空间为 swap 空间创建逻辑卷,则可以使用 swap 文件。它只是一个按指定大小创建并预分配空间的常规文件,然后运行 `mkswap` 命令将其配置为 swap 空间。除非绝对必要,否则我不建议使用文件来做 swap 空间。
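如果确实需要使用 swap 文件,下面是一个最小的操作流程草图(其中 2GB 的大小和 `/swapfile` 这个路径都只是示例值):
```
dd if=/dev/zero of=/swapfile bs=1M count=2048   # 创建并预分配一个 2GB 的普通文件
chmod 600 /swapfile                             # 收紧权限swap 文件不应被普通用户读取
mkswap /swapfile                                # 将该文件配置为 swap 空间
swapon /swapfile                                # 启用这个 swap 文件
```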
### 频繁交换
当总的虚拟内存RAM 和 swap 空间)快用满时,可能会发生频繁交换。系统花太多时间在 swap 空间和 RAM 之间做内存页面的换入换出,以至于几乎没有时间用于实际工作。这种情况是显而易见的:系统变得缓慢或完全无响应,硬盘指示灯几乎持续亮起。
使用 `free` 命令来显示 CPU 负载和内存使用情况,你会发现 CPU 负载非常高,可能达到系统中 CPU 内核数量的 30 到 40 倍。另一个迹象是 RAM 和 swap 空间几乎被完全分配。
事实上查看 SAR系统活动报告数据也可以看到这些迹象。我在我的每个系统上都安装了 SAR并将这些数据用于分析。
### swap 空间的正确大小是多少?
许多年前,硬盘上分配给 swap 空间的大小惯例是计算机 RAM 的两倍(当然,那是大多数计算机的 RAM 还以 KB 或 MB 为单位的时候)。因此,如果一台计算机有 64KB 的 RAM就应该分配 128KB 的 swap 分区。该规则考虑到了当时 RAM 非常小的事实,分配超过 2 倍 RAM 的 swap 空间并不能提高性能:如果 swap 空间超过 RAM 的两倍,大多数系统花在频繁交换上的时间会比实际执行有用工作的时间还多。
RAM 现在已经很便宜了,如今大多数计算机的 RAM 都达到了几十亿字节。我的大多数新电脑至少有 8GB 内存一台有32GB 内存,我的主工作站有 64GB 内存。我的旧电脑有4到 8GB 的内存。
当计算机配备了大容量的 RAM 时swap 空间的性能限制系数远低于 2 倍。[Fedora 28 在线安装指南][1] 定义了当前关于 swap 空间分配的思路,我在下面也给出了自己的建议。
下表根据系统中的 RAM 大小以及是否有足够的内存让系统休眠,给出了推荐的 swap 分区大小。建议的 swap 分区大小是在安装过程中自动设置的,但是,为了支持系统休眠,你需要在自定义分区阶段增大 swap 空间。
_表 1: Fedora 28文档中推荐的系统 swap 空间_
| **系统内存大小** | **推荐的 swap 空间** | **开启休眠模式时的建议 swap 大小** |
|--------------------------|-----------------------------|---------------------------------------|
| 小于 2 GB | 2倍 RAM | 3 倍 RAM |
| 2 GB - 8 GB | 等于 RAM 大小 | 2 倍 RAM |
| 8 GB - 64 GB | 0.5 倍 RAM | 1.5 倍 RAM |
| 大于 64 GB | 工作量相关 | 不建议休眠模式 |
在上面列出的每个范围的边界处(例如,系统 RAM 为 2GB、8GB 或 64GB 时),可以根据需要自行斟酌所选的 swap 空间大小以及是否支持休眠。如果你的系统资源允许,增加 swap 空间可能会带来更好的性能。
当然,大多数 Linux 管理员对 swap 空间应该多大有自己的想法。下面的表 2 包含了基于我在多种环境中的个人经验所做的建议。这些建议可能并不适合你,但是和表 1 一样,它们或许能对你有所帮助。
_表 2: 作者推荐的系统 swap 空间_
| RAM 大小 | 推荐 swap 空间 |
|---------------|------------------------|
| ≤ 2GB | 2X RAM |
| 2GB 8GB | = RAM |
| >8GB | 8GB |
这两个表的共同点是:随着 RAM 的增加,超过某一点之后,增加更多的 swap 空间只会导致系统在 swap 空间被用满之前就开始频繁交换。根据以上建议,应该尽可能添加更多的 RAM而不是增加更多的 swap 空间。与其它影响系统性能的因素一样,请采用最适合你环境的建议。根据 Linux 环境中的实际情况进行测试和调整是需要时间和精力的。
### 向非 LVM 磁盘环境添加更多 swap 空间
面对已安装 Linux 的主机和不断变化的 swap 空间需求,有时有必要修改系统定义的 swap 空间的大小。此过程可用于任何需要增加 swap 空间的情况,它假设有足够的可用磁盘空间,并且磁盘被划分为“原始的” EXT4 分区和 swap 分区,而没有使用逻辑卷LVM
基本步骤很简单:
1. 关闭现有的 swap 空间。
2. 创建所需大小的新 swap 分区。
3. 重读分区表。
4. 将分区配置为 swap 空间。
5. 添加新分区到 /etc/fstab。
6. 打开 swap 空间。
应该不需要重新启动机器。
为了安全起见,在关闭 swap 空间前,你至少应该确保没有应用程序在运行,也没有程序在使用 swap 空间。`free` 或 `top` 命令可以告诉你 swap 空间是否在使用中。为了更安全,你可以切换到运行级别 1 或单用户模式。
使用以下命令关闭所有的 swap 空间:
```
swapoff -a
```
现在查看硬盘上的现有分区。
```
fdisk -l
```
这将显示每个驱动器上的分区表。按编号标识当前的 swap 分区。
使用以下命令在交互模式下启动 `fdisk`:
```
fdisk /dev/<device name>
```
例如:
```
fdisk /dev/sda
```
此时,`fdisk` 是交互方式的,只在指定的磁盘驱动器上进行操作。
使用 `fdisk``p` 子命令验证磁盘上是否有足够的可用空间来创建新的 swap 分区。硬盘上的空间以 512 字节的块以及起始和结束柱面编号的形式显示,因此你可能需要做一些计算来确定已分配分区之间以及磁盘末尾的可用空间。
使用 `n` 子命令创建新的 swap 分区。`fdisk` 会询问你起始柱面。默认情况下,它选择编号最低的可用柱面。如果你想改变这一点,请输入起始柱面的编号。
`fdisk` 命令允许你以多种格式输入分区的大小包括最后一个柱面号或以字节、KB 或 MB 表示的大小。例如,键入 4000M这将为新分区提供大约 4GB 的空间,然后按 Enter 键。
使用 `p` 子命令来验证分区是否按照指定的方式创建的。请注意,除非使用结束柱面编号,否则分区可能与你指定的不完全相同。`fdisk` 命令只能在整个柱面上增量的分配磁盘空间,因此你的分区可能比你指定的稍小或稍大。如果分区不是您想要的,你可以删除它并重新创建它。
现在将新分区指定为 swap 分区。子命令 `t` 允许你指定分区的类型。输入 `t`,再指定分区号,当它要求输入十六进制分区类型时,输入 82这是 Linux swap 分区的类型),然后按 Enter 键。
当你对创建的分区感到满意时,使用 `w` 子命令将新的分区表写入磁盘。`fdisk` 程序将退出,并在完成修改后的分区表的编写后返回命令提示符。当`fdisk` 完成写入新分区表时,会收到以下消息:
```
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
```
此时,你可以使用 `partprobe` 命令强制内核重新读取分区表,这样就不需要重新启动机器了。
```
partprobe
```
使用命令 `fdisk -l` 列出分区,新 swap 分区应该在列出的分区中。确保新的分区类型是 “Linux swap”。
修改 /etc/fstab 文件以指向新的 swap 分区。如下所示:
```
LABEL=SWAP-sdaX   swap        swap    defaults        0 0
```
其中 `X` 是分区号。根据新 swap 分区的位置,添加以下内容:
```
/dev/sdaY         swap        swap    defaults        0 0
```
请确保使用正确的分区号。现在,执行准备 swap 分区的关键一步:使用 `mkswap` 命令将分区定义为 swap 分区。
```
mkswap /dev/sdaY
```
最后一步是使用以下命令启用 swap 空间:
```
swapon -a
```
你的新 swap 分区现在与以前的 swap 分区一起上线了。你可以使用 `free``top` 命令来验证这一点。
### 在 LVM 磁盘环境中添加 swap 空间
如果你的磁盘使用了 LVM更改 swap 空间将相当容易。同样,假设当前 swap 卷所在的卷组中有可用空间。默认情况下LVM 环境中的 Fedora Linux 在安装过程中将 swap 分区创建为逻辑卷,你可以非常简单地增加 swap 卷的大小。
以下是在 LVM 环境中增加 swap 空间大小的步骤:
1. 关闭所有 swap 。
2. 增加指定用于 swap 的逻辑卷的大小。
3. 为 swap 空间调整大小的卷配置。
4. 启用 swap。
首先,让我们使用 `lvs` 命令(列出逻辑卷)来验证 swap 是否存在以及 swap 是否是逻辑卷。
```
[root@studentvm1 ~]# lvs
  LV     VG                Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home   fedora_studentvm1 -wi-ao----  2.00g                                                      
  pool00 fedora_studentvm1 twi-aotz--  2.00g               8.17   2.93                            
  root   fedora_studentvm1 Vwi-aotz--  2.00g pool00        8.17                                  
  swap   fedora_studentvm1 -wi-ao----  8.00g                                                      
  tmp    fedora_studentvm1 -wi-ao----  5.00g                                                      
  usr    fedora_studentvm1 -wi-ao---- 15.00g                                                      
  var    fedora_studentvm1 -wi-ao---- 10.00g                                                      
[root@studentvm1 ~]#
```
你可以看到当前的 swap 大小为 8GB。在这种情况下我们希望将 2GB 添加到此 swap 卷中。首先,停止现有的 swap。如果 swap 空间正在被使用,请先终止正在使用它的程序。
```
swapoff -a
```
现在增加逻辑卷的大小。
```
[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap
  Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents).
  Logical volume fedora_studentvm1/swap successfully resized.
[root@studentvm1 ~]#
```
运行 `mkswap` 命令将整个 10GB 分区变成 swap 空间。
```
[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap
mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 10 GiB (10737414144 bytes)
no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a
[root@studentvm1 ~]#
```
重新启用 swap 。
```
[root@studentvm1 ~]# swapon -a
[root@studentvm1 ~]#
```
现在,使用 `lsblk` 命令验证新的 swap 空间是否存在。同样,不需要重新启动机器。
```
[root@studentvm1 ~]# lsblk
NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                    8:0    0   60G  0 disk
|-sda1                                 8:1    0    1G  0 part /boot
`-sda2                                 8:2    0   59G  0 part
  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm  
  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm  
  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm  
  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm  
  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm  
  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm  
  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP]
  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr
  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home
  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var
  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp
sr0                                   11:0    1 1024M  0 rom  
[root@studentvm1 ~]#
```
你也可以使用 `swapon -s` 命令,或 `top`、`free` 等其他几个命令来验证这一点。
```
[root@studentvm1 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        4038808      382404     2754072        4152      902332     3404184
Swap:      10485756           0    10485756
[root@studentvm1 ~]#
```
请注意,不同的命令以不同的形式显示设备文件,或要求输入不同形式的设备文件。在 /dev 目录中访问特定设备有多种方式。在我的文章 [Managing Devices in Linux][2] 中有更多关于 /dev 目录及其内容的说明。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/swap-space-linux-systems
作者:[David Both][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/
[2]: https://opensource.com/article/16/11/managing-devices-linux

View File

@ -0,0 +1,238 @@
如何将 Scikit-learn Python 库用于数据科学项目
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
Scikit-learn Python 库最初于 2007 年发布,通常用于端到端地解决机器学习和数据科学问题。这个多功能的库提供了整洁、一致、高效的 API 和全面的在线文档。
### 什么是Scikit-learn
[Scikit-learn][1] 是一个开源 Python 库,拥有强大的数据分析和数据挖掘工具。它在 BSD 许可证下可用,并构建在以下库之上:
- **NumPy**,一个用于操作多维数组和矩阵的库。 它还具有广泛的数学函数汇集,可用于执行各种计算。
- **SciPy**,一个由各种库组成的生态系统,用于完成技术计算任务。
- **Matplotlib**,一个用于绘制各种图表和图形的库。
Scikit-learn 提供了广泛的内置算法,可以很好地用于数据科学项目。
以下是 Scikit-learn 库的一些主要使用方式。
#### 1. 分类
[分类][2]工具识别与提供的数据相关联的类别。 例如,它们可用于将电子邮件分类为垃圾邮件或非垃圾邮件。
Scikit-learn中的分类算法包括
- 支持向量机SVM
- 最邻近
- 随机森林
#### 2. 回归
回归涉及到创建一个模型去试图理解输入和输出数据之间的关系。 例如,回归工具可用于了解股票价格的行为。
回归算法包括:
- SVM
- 岭回归Ridge regression
- LassoLCTT译者注Lasso 即 least absolute shrinkage and selection operator又译最小绝对值收敛和选择算子、套索算法
#### 3. 聚类
Scikit-learn聚类工具用于自动将具有相同特征的数据分组。 例如,可以根据客户数据的地点对客户数据进行细分。
聚类算法包括:
- K-means
- 谱聚类Spectral clustering
- Mean-shift
#### 4. 降维
降维降低了用于分析的随机变量的数量。 例如,为了提高可视化效率,可能不会考虑外围数据。
降维算法包括:
- 主成分分析Principal component analysisPCA
- 功能选择Feature selection
- 非负矩阵分解Non-negative matrix factorization
#### 5. 模型选择
模型选择算法提供了用于比较,验证和选择要在数据科学项目中使用的最佳参数和模型的工具。
通过参数调整能够增强精度的模型选择模块包括:
- 网格搜索Grid search
- 交叉验证Cross-validation
- 指标Metrics
#### 6. 预处理
Scikit-learn 的预处理工具在数据分析过程中的特征提取和规范化环节非常重要。例如,您可以使用这些工具转换输入数据(如文本),并在分析中应用转换后的特征。
预处理模块包括(列表后附有一个标准化的小示例):
- 预处理
- 特征提取
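下面是一个标准化的示例草稿,用 `StandardScaler` 把鸢尾花数据的每个特征规范化为零均值、单位方差:
```
from sklearn import datasets
from sklearn.preprocessing import StandardScaler

iris = datasets.load_iris()

# 对每一列特征做标准化(减去均值、除以标准差)
X_scaled = StandardScaler().fit_transform(iris.data)
print(X_scaled.mean(axis=0))   # 各列均值约为 0
print(X_scaled.std(axis=0))    # 各列标准差约为 1
```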
### Scikit-learn 库示例
让我们用一个简单的例子来说明如何在数据科学项目中使用 Scikit-learn 库。
我们将使用[鸢尾花数据集][3]该数据集已包含在 Scikit-learn 库中。鸢尾花数据集包含关于三个花卉品种的 150 条数据,这三个品种分别为:
- Setosa标记为 0
- Versicolor标记为 1
- Virginica标记为 2
数据集包括每个品种的以下特征(以厘米为单位):
- 萼片长度
- 萼片宽度
- 花瓣长度
- 花瓣宽度
#### 第 1 步:导入库
由于鸢尾花数据集已包含在 Scikit-learn 数据科学库中,我们可以将其加载到工作区中,如下所示:
```
from sklearn import datasets
iris = datasets.load_iris()
```
这些命令从 **sklearn** 中导入 **datasets** 模块,然后使用 **datasets** 的 **load_iris()** 方法将数据加载到工作区中。
#### 第 2 步:获取数据集特征
**datasets** 模块包含多种方法,可以让您更轻松地熟悉和处理数据。
在 Scikit-learn 中,数据集是指一个类似字典的对象,其中包含有关数据的所有详细信息。数据通过 **.data** 键存储,它是一个由数组组成的列表。
例如,我们可以利用 **iris.data** 输出鸢尾花数据集的数据。
```
print(iris.data)
```
这是输出(结果已被截断):
```
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.  3.6 1.4 0.2]
 [5.4 3.9 1.7 0.4]
 [4.6 3.4 1.4 0.3]
 [5.  3.4 1.5 0.2]
 [4.4 2.9 1.4 0.2]
 [4.9 3.1 1.5 0.1]
 [5.4 3.7 1.5 0.2]
 [4.8 3.4 1.6 0.2]
 [4.8 3.  1.4 0.1]
 [4.3 3.  1.1 0.1]
 [5.8 4.  1.2 0.2]
 [5.7 4.4 1.5 0.4]
 [5.4 3.9 1.3 0.4]
 [5.1 3.5 1.4 0.3]
```
我们还可以使用 **iris.target** 来获取花朵不同标签的信息。
```
print(iris.target)
```
这是输出:
```
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
```
如果我们使用 **iris.target_names**,就会输出数据集中所有标签名称组成的数组。
```
print(iris.target_names)
```
以下是运行 Python 代码后的结果:
```
['setosa' 'versicolor' 'virginica']
```
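顺带一提,数据集对象还带有 **feature_names** 属性,可以和前面列出的四个特征对应起来:
```
print(iris.feature_names)
# ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
```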
#### 第 3 步:可视化数据集
我们可以使用[箱形图][4]对鸢尾花数据集进行可视化。箱形图通过四分位数展示了数据在平面上的分布情况。
以下是如何实现这一目标:
```
import seaborn as sns
box_data = iris.data # 表示数据数组的变量
box_target = iris.target # 表示标签数组的变量
sns.boxplot(data = box_data,width=0.5,fliersize=5)
sns.set(rc={'figure.figsize':(2,15)})
```
让我们看看结果:
![](https://opensource.com/sites/default/files/uploads/scikit_boxplot.png)
在横轴上:
* 0 是萼片长度
* 1 是萼片宽度
* 2 是花瓣长度
* 3 是花瓣宽度
纵轴的尺寸以厘米为单位。
### 总结
以下是这个简单的 Scikit-learn 数据科学教程的完整代码。
```
from sklearn import datasets
iris = datasets.load_iris()
print(iris.data)
print(iris.target)
print(iris.target_names)
import seaborn as sns
box_data = iris.data # 表示数据数组的变量
box_target = iris.target # 表示标签数组的变量
sns.boxplot(data = box_data,width=0.5,fliersize=5)
sns.set(rc={'figure.figsize':(2,15)})
```
Scikit-learn 是一个多功能的 Python 库,可用于高效地完成数据科学项目。
如果您想了解更多信息,请查看 [LiveEdu][5] 上的教程,例如 Andrey Bulezyuk 关于使用 Scikit-learn 库创建[机器学习应用程序][6]的视频。
有什么评论或疑问吗?欢迎在下面分享。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects
作者:[Dr.Michael J.Garbade][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/drmjg
[1]: http://scikit-learn.org/stable/index.html
[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/
[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set
[4]: https://en.wikipedia.org/wiki/Box_plot
[5]: https://www.liveedu.tv/guides/data-science/
[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/

View File

@ -0,0 +1,115 @@
10 个 Linux 中方便的 Bash 别名
======
> 对较长的 Bash 命令使用简短的版本,让工作更有效率。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U)
你有多少次在命令行上输入一个长命令,并希望有一种方法可以保存它以供日后使用?这就是 Bash 别名派上用场的地方。它们允许你将长而神秘的命令压缩为易于记忆和使用的东西。需要一些例子来帮助你入门吗?没问题!
要使用你创建的 Bash 别名,你需要将其添加到 .bash_profile 文件中,该文件位于你的主目录下。请注意,此文件是隐藏文件,只能从命令行访问。编辑此文件的最简单方法是使用 Vi 或 Nano 之类的编辑器。
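还有一点值得补充:修改 .bash_profile 后,别名并不会立即生效,需要重新加载该文件(或重新打开终端):
```
# 重新加载配置,使新添加的别名立即生效
source ~/.bash_profile
```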
### 10 个方便的 Bash 别名
1. 你有多少次需要解压一个 .tar.gz 文件,却记不住所需的确切参数?别名可以帮助你!只需将以下内容添加到 .bash_profile 中,然后使用 **untar FileName** 即可解压任何 .tar.gz 文件(别名中的 `-z` 选项要求归档是 gzip 压缩的)。
```
alias untar='tar -zxvf '
```
2. 想要下载文件,并且在出现问题时可以断点续传吗?
```
alias wget='wget -c '
```
3. 是否需要为新的网络帐户生成随机的 20 个字符的密码?没问题。
```
alias getpass="openssl rand -base64 20"
```
4. 下载了文件,需要测试其校验和?我们也能做到。
```
alias sha='shasum -a 256 '
```
5. 普通的 ping 会一直持续下去,而我们不希望这样。让我们将其限制为 5 次 ping。
```
alias ping='ping -c 5'
```
6. 在任何你想要的文件夹中启动 Web 服务器。
```
alias www='python -m SimpleHTTPServer 8000'
```
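需要注意SimpleHTTPServer 是 Python 2 的模块;如果你的系统默认使用 Python 3可以改用下面这个等价的别名
```
# Python 3 下的等价写法
alias www='python3 -m http.server 8000'
```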
7. 想知道你的网络有多快?只需下载 Speedtest-cli 并使用此别名即可。你可以使用 **speedtest-cli --list** 命令选择离你所在位置更近的服务器。
```
alias speed='speedtest-cli --server 2406 --simple'
```
8. 你有多少次需要知道你的外部 IP 地址,但是不知道如何获取?我也是。
```
alias ipe='curl ipinfo.io/ip'
```
9. 需要知道你的本地 IP 地址?
```
alias ipi='ipconfig getifaddr en0'
```
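要提醒的是,`ipconfig getifaddr en0` 其实是 macOS 上的命令;在 Linux 上,一种常见的替代写法如下(输出的地址数量因网卡配置而异):
```
# Linux 下查看本机 IP 的常见写法
alias ipi='hostname -I'
```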
10. 最后,让我们清空屏幕。
```
alias c='clear'
```
如你所见Bash 别名是一种简化命令行操作的超级简便方法。想了解更多信息?我建议你用 Google 搜索“Bash 别名”,或者到 GitHub 上看看。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/handy-bash-aliases
作者:[Patrick H.Mullins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pmullins