Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-02-12 20:54:26 +08:00
commit 6eff67529d
17 changed files with 1832 additions and 381 deletions

View File

@ -0,0 +1,208 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11881-1.html)
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
How to Go About Linux Boot Time Optimisation
======
![][2]
> Fast booting of an embedded device or piece of telecom equipment is crucial for time-critical applications, and it also plays a very important role in improving the user experience. This article gives some important tips on how to improve the boot-up time of any device.
A fast boot or fast reboot plays a crucial role in various situations. Booting an embedded device quickly is essential to maintaining high availability and better performance of all of its services. Imagine a telecom device running a Linux operating system that does not have fast booting enabled; all the systems, services and users that depend on that particular embedded device could be affected. It is really important that such devices maintain high availability of their services, and fast booting and rebooting play a crucial role here.

A small failure or shutdown of a telecom device, even for a few seconds, can play havoc with countless users working on the internet. Thus, it is really important for a lot of time-critical devices and telecom equipment to incorporate fast booting so that they can get back to work quickly. Let us understand the Linux boot-up procedure from Figure 1.
![Figure 1: Boot-up procedure][3]
### Monitoring tools and the boot-up procedure

A user should take note of a number of factors before making changes to a machine. These include the current booting speed of the machine, and the services, processes or applications that take up resources and increase the boot-up time.

#### Boot chart

To monitor the boot-up speed and the various services that start during boot-up, the user can install the boot chart with the following command:
```
sudo apt-get install pybootchartgui
```
Each time you boot up, the boot chart saves a .png file in the log, which lets you view the file to understand your system's boot-up process and services. To do so, use the following command:
```
cd /var/log/bootchart
```
You may need an application to view the .png file. Feh is an X11 image viewer aimed at console users. Unlike most other image viewers, it does not have a fancy graphical user interface; it simply displays the image. Feh can be used to view the .png file. You can install it with the following command:
```
sudo apt-get install feh
```
You can view the .png file using `feh xxxx.png`.
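For example, to open the most recent chart, something like this works (a sketch that assumes bootchart has already written at least one .png file into `/var/log/bootchart`):

```
# View the newest boot chart with feh
cd /var/log/bootchart
feh "$(ls -t *.png | head -n 1)"
```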
![Figure 2: Boot chart][4]

Figure 2 shows a boot chart .png file being viewed.
#### systemd-analyze
However, the boot chart is not required for versions later than Ubuntu 15.10. To get brief information about the boot-up speed, use the following command:
```
systemd-analyze
```
![Figure 3: Output of systemd-analyze][5]

Figure 3 shows the output of the `systemd-analyze` command.

The `systemd-analyze blame` command prints a list of all running units, ordered by the time they took to initialise. This information is very useful and can be used to optimise the boot-up time. `systemd-analyze blame` doesn't show results for services with `Type=simple`, because systemd considers such services to be started immediately; hence, no measurement of the initialisation delays can be made for them.
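If the full list is too long, you can trim it; for example (a trivial sketch):

```
# Show only the ten units that took the longest to initialise
systemd-analyze blame | head -n 10
```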
![Figure 4: Output of systemd-analyze blame][6]

Figure 4 shows the output of `systemd-analyze blame`.

The following command prints a tree of the time-critical chain of service units:
```
systemd-analyze critical-chain
```
Figure 5 shows the output of the `systemd-analyze critical-chain` command.

![Figure 5: Output of systemd-analyze critical-chain][7]
### Steps to reduce the boot-up time

Listed below are various steps that can be taken to reduce the boot-up time.

#### BUM (Boot-Up Manager)

BUM is a run-level configuration editor that allows the configuration of init services when the system boots up or reboots. It displays a list of every service that can be started at boot. You can toggle individual services on and off. BUM has a very clean graphical user interface and is very easy to use.

In Ubuntu 14.04, BUM can be installed with the following command:
```
sudo apt-get install bum
```
To install it on versions from 15.10 onwards, download the package from http://apt.ubuntu.com/p/bum.

Start with the basic services, and disable the scanner- and printer-related ones. You can also disable Bluetooth and any other unwanted devices and services you are not using. I strongly recommend that you learn the basics of a service before disabling it, as this may affect the machine or the operating system. Figure 6 shows the graphical user interface of BUM.
![Figure 6: BUM][8]
#### Editing the rc file

To edit the rc file, you need to go to the rc directory. This can be done with the following command:
```
cd /etc/init.d
```
Root privileges are needed to access `init.d`, which mainly contains start/stop scripts that are used to control (start, stop, reload, restart) the daemons while the system is running or during boot.

The `rc` file in the `init.d` directory is called a run control script. During boot, `init` executes the `rc` script and plays its role. To improve the boot speed, we can make changes to this `rc` file. Open the `rc` file with any file editor while you are in the `init.d` directory.

For example, by entering `vim rc`, you can change `CONCURRENCY=none` to `CONCURRENCY=shell`. The latter allows certain startup scripts to be executed simultaneously rather than serially.

In the latest versions of the kernel, this value should be changed to `CONCURRENCY=makefile`.
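As an illustration only, the edit could also be made non-interactively. This is a sketch that assumes your `/etc/init.d/rc` still contains a `CONCURRENCY=none` line (which is not the case on newer, systemd-based releases); keep a backup before changing it:

```
# Back up the run-control script, then switch CONCURRENCY from "none" to "shell"
sudo cp /etc/init.d/rc /etc/init.d/rc.bak
sudo sed -i 's/^CONCURRENCY=none/CONCURRENCY=shell/' /etc/init.d/rc

# Confirm the change
grep '^CONCURRENCY=' /etc/init.d/rc
```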
Figures 7 and 8 compare the boot-up times before and after editing the `rc` file. The improvement in the boot-up speed can be noticed: the boot-up time before editing the `rc` file was 50.98 seconds, whereas after the changes it was 23.85 seconds.

However, the method described above does not work on operating systems later than Ubuntu 15.10, since operating systems with the latest kernels use systemd unit files rather than the `init.d` scripts.
![Figure 7: Boot speed before making changes to the rc file][9]

![Figure 8: Boot speed after making changes to the rc file][10]
#### E4rat
E4rat stands for e4 'reduced access time' (ext4 file systems only). It is a project developed by Andreas Rid and Gundolf Kiefer. E4rat is an application that helps to achieve a fast boot with the help of defragmentation. It also accelerates application start-ups. E4rat eliminates both seek times and rotational delays by physically relocating files, thereby achieving a high disk transfer rate.

E4rat is available as a .deb package, which you can download from its official website http://e4rat.sourceforge.net/.

Ubuntu's default ureadahead package conflicts with e4rat, so a few packages have to be removed with the following command:
```
sudo dpkg --purge ureadahead ubuntu-minimal
```
Now install the dependencies of e4rat with the following command:
```
sudo apt-get install libblkid1 e2fslibs
```
Open the downloaded .deb file and install it. The boot data now needs to be gathered properly for e4rat to work.

Follow the steps given below to make e4rat run correctly and to improve the boot-up speed.
* Access the GRUB menu during boot-up. This can be done by holding down the `Shift` key while the system is booting.
* Choose the option (kernel version) that you normally boot with, and press `e`.
* Look for the line that begins with `linux /boot/vmlinuz` and add the following code at the end of that line (press the space bar after the last word): `init=/sbin/e4rat-collect`, or try `quiet splash vt.handsoff=7 init=/sbin/e4rat-collect`.
* Now press `Ctrl+x` to continue booting. This lets e4rat collect data after booting. Work on the machine, opening and closing applications, for the next two minutes.
* Access the log file by going to the e4rat folder with the following command: `cd /var/log/e4rat`.
* If you do not find any log file there, repeat the process above. Once the log file is there, access the GRUB menu again and press `e` on your option.
* Enter `single` at the end of the same line that you edited earlier. This gives you access to the command line. If a different menu appears, choose "Resume normal boot". If, for some reason, you cannot get to the command prompt, press the `Ctrl+Alt+F1` key combination.
* Enter your login details once you see the login prompt.
* Now run the following command: `sudo e4rat-realloc /var/lib/e4rat/startup.log`. This process takes a while, depending on the disk speed of the machine. (The sketch after this list collects the remaining terminal commands.)
* Now restart the machine with the following command: `sudo shutdown -r now`.
* Next, we need to configure GRUB to run e4rat at every boot.
* Open the grub file with any editor, for example `gksu gedit /etc/default/grub`.
* Look for the line that begins with `GRUB_CMDLINE_LINUX_DEFAULT=` and add the following between the quotes, before any other options: `init=/sbin/e4rat-preload 18`.
* It should look like this: `GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload quiet splash"`.
* Save and close the file, and update GRUB with `sudo update-grub`.
* Reboot the system and you will notice a marked change in the boot speed.
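The terminal part of this workflow (everything except the interactive GRUB menu edits) can be scripted roughly as shown below. This is only a sketch: the log path and the exact contents of `GRUB_CMDLINE_LINUX_DEFAULT` on your system are assumptions, so review `/etc/default/grub` yourself and keep a backup before applying it.

```
# 1. Reallocate the files recorded during the collection phase
sudo e4rat-realloc /var/lib/e4rat/startup.log

# 2. Back up the GRUB defaults file, then add the e4rat preloader to the kernel command line
sudo cp /etc/default/grub /etc/default/grub.bak
sudo sed -i 's|^GRUB_CMDLINE_LINUX_DEFAULT="|&init=/sbin/e4rat-preload |' /etc/default/grub

# 3. Regenerate the GRUB configuration and reboot
sudo update-grub
sudo shutdown -r now
```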
Figures 9 and 10 show the difference in boot-up times before and after installing e4rat. The improvement in the boot-up speed can be noticed: the time taken to boot before using e4rat was 22.32 seconds, whereas after using e4rat it was 9.065 seconds.
![Figure 9: Boot speed before using e4rat][11]

![Figure 10: Boot speed after using e4rat][12]
### A few simple tweaks

A good boot-up speed can also be achieved with some small tweaks, two of which are listed below.
#### SSD
Using a solid-state drive rather than a conventional hard disk or other storage device will certainly improve the boot speed. SSDs also help to speed up file transfers and application launches.
#### Disabling the graphical user interface

The graphical user interface, desktop graphics and window animations take up a lot of resources. Disabling the GUI is another good way to achieve a good boot-up speed.
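On a systemd-based distribution, one common way to do this is to boot to the multi-user (text) target instead of the graphical one. This is a sketch; make sure it suits your setup before changing the default target:

```
# Boot to a text console by default
sudo systemctl set-default multi-user.target

# Revert to the graphical desktop later if needed
sudo systemctl set-default graphical.target
```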
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
Author: [B Thangaraju][a]
Topic selection: [lujun9972][b]
Translator: [robsean](https://github.com/robsean)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]: https://opensourceforu.com/author/b-thangaraju/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?&ssl=1 (Screenshot from 2019-10-07 13-16-32)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?ssl=1

View File

@ -1,39 +1,38 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11882-1.html)
[#]: subject: (3 handy command-line internet speed tests)
[#]: via: (https://opensource.com/article/20/1/internet-speed-tests)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
3 handy command-line internet speed tests
======
> Check your internet and LAN speeds with these three open source tools.

![](https://img.linux.net.cn/data/attachment/album/202002/12/115915kk6hkax1vparkuvk.jpg)

Being able to verify the speed of your network connection gives you control over your computer. Three open source tools that let you check your internet and network speeds from the command line are Speedtest, Fast and iPerf.
### Speedtest
[Speedtest][2] is an old favourite. It's implemented in Python, packaged in Apt, and also available to install with `pip`. You can use it as a command-line tool or within a Python script.
Install it with:
```
sudo apt install speedtest-cli
```
or
```
sudo pip3 install speedtest-cli
```
Then run it with the `speedtest` command:
```
$ speedtest
@ -48,28 +47,25 @@ Testing upload speed............................................................
Upload: 10.93 Mbit/s
```
This gives you your download and upload speeds from the internet. It's fast and scriptable, so you can run it regularly and save the output to a file or a database to keep a record of your network speed over time.
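A minimal sketch of such periodic logging might look like this (the log file path is arbitrary, and it assumes the `--simple` flag of `speedtest-cli`; schedule the line with cron to run it regularly):

```
# Append a timestamped one-line result to a log file
echo "$(date -Is) $(speedtest --simple | tr '\n' ' ')" >> ~/speedtest.log
```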
### Fast
[Fast][3] is a service provided by Netflix. Its web interface is at [Fast.com][4], and it has a command-line tool that can be installed via `npm`:
```
npm install --global fast-cli
```
The website and the command-line utility both provide the same basic interface: it's a speed test that is as simple as possible:
```
$ fast
     82 Mbps ↓
```
The command returns your download speed. To get your upload speed as well, use the `-u` flag:
```
$ fast -u
@ -79,10 +75,10 @@ $ fast -u
### iPerf
[iPerf][5] is a great way to test your LAN speed (rather than your internet speed, as the previous two tools do). Debian, Raspbian and Ubuntu users can install it with apt:
```
sudo apt install iperf
```
It's also available for Mac and Windows.
@ -91,34 +87,31 @@ $ fast -u
Get the IP address of the server machine:
```
ip addr show | grep inet.*brd
```
Your local IP address (assuming an IPv4 local network) starts with either `192.168` or `10`. Take note of the IP address so that you can use it on the other machine (the one designated as the client).

Start `iperf` on the server:
```
iperf -s
```
This waits for incoming connections from a client. Designate another machine as the client and run this command on it, replacing the IP in the example with that of the server machine:
```
iperf -c 192.168.1.2
```
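For a longer or heavier test, classic iperf client flags can help; this is a sketch, and flag support can vary between iperf versions:

```
# Run for 30 seconds, report every 5 seconds, using 4 parallel streams
iperf -c 192.168.1.2 -t 30 -i 5 -P 4
```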
![iPerf][6]
It takes just a few seconds to complete the test, and it then returns the transfer size and the calculated bandwidth. I ran some tests from my PC and my laptop, using my home server as the server. I recently had Cat6 Ethernet cabling installed around my house, so my wired connections get up to 1Gbps, but my WiFi connections are much slower.
![iPerf][7]
You may notice where it recorded 16Gbps. That was me testing against the server itself, so it was really just testing how fast it could write to its own disk. The server has a hard disk drive that only manages 16Gbps, but my desktop PC gets 46Gbps and my (newer) laptop gets over 60Gbps, as they both have solid-state drives.
![iPerf][8]
@ -128,9 +121,7 @@ $ fast -u
What other tools do you use to measure the speed of your home network? Share them in the comments.
This article was originally published on Ben Nuttall's [Tooling blog][9] and is reused here with permission.
--------------------------------------------------------------------------------
@ -139,7 +130,7 @@ via: https://opensource.com/article/20/1/internet-speed-tests
Author: [Ben Nuttall][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11884-1.html)
[#]: subject: (Best Open Source eCommerce Platforms to Build Online Shopping Websites)
[#]: via: (https://itsfoss.com/open-source-ecommerce/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
@ -16,7 +16,7 @@
These eCommerce solutions are designed for building online shopping sites, so they all include essential basic features such as inventory management, product listings, a shopping cart, checkout, wish lists and payments.

But please note that this article is not an in-depth review. So I suggest that you try out several of these products to learn more about them and compare them.
### The best open source eCommerce solutions
@ -24,63 +24,63 @@ _但请注意这篇文章并不会进行深入介绍。因此我建议最
There are plenty of open source eCommerce solutions. We have left out the ones that are no longer maintained, so that the site you build is not affected by a lack of timely maintenance.

Also, the list below is in no particular order.

#### 1. nopCommerce
![][3]
nopCommerce is a free and open source eCommerce solution based on [ASP.NET Core][4]. If you are looking for a PHP-based solution, you can skip to the next section.

nopCommerce's admin panel is clean and easy to use. If you have also used OpenCart, it may feel somewhat familiar (and I am not complaining). Out of the box, it already comes with many of the essential features, along with a responsive design for mobile users.
You can get compatible themes and extensions from its [official store][5], and you can also opt for paid support.

To get started, you can download the source code package from nopCommerce's [official website][6], then customise and deploy it yourself, or download the complete package to quickly install it on a web server. See nopCommerce's [GitHub page][7] or official website for details.
- [nopCommerce][8]
#### 2. OpenCart
![][9]
OpenCart is a very popular PHP-based eCommerce solution. Personally, I used it once for a project and the experience was quite good, if not the best.

You may find that it is not maintained very actively, but plenty of developers still work with OpenCart. You can get many supported extensions and add their functionality to OpenCart.

OpenCart may not be the "modern" eCommerce solution for everyone, but if all you need is a PHP-based open source solution, OpenCart is worth a try. It should be available on most web hosting platforms that offer one-click app installation. To learn more, check OpenCart's official website or its [GitHub page][10].
- [OpenCart][11]
#### 3. PrestaShop
![][12]
PrestaShop is another open source eCommerce solution you can try.

PrestaShop is actively maintained, and its official store also offers additional themes and extensions. Unlike OpenCart, you may not find PrestaShop as a one-click install on hosting platforms. But don't worry: deploying it after downloading it from the official website is not complicated either. You can also refer to PrestaShop's [installation guide][15] if you need help.

PrestaShop is feature-rich and easy to use. I find that a lot of other users rely on it too, so you might as well give it a try.

You can also find out more on PrestaShop's [GitHub page][16].
- [PrestaShop][17]
#### 4. WooCommerce
![][18]
If you want to build an eCommerce site with [WordPress][19], consider WooCommerce.

Technically, with this approach you set up a WordPress site and then add WooCommerce as a plugin or extension to provide the features an eCommerce site needs. Many web developers already know how to use WordPress, so the learning curve for WooCommerce is not steep.

WordPress, one of the best open source site builders out there, is not a high bar for most people. It is easy to use and stable, and it supports a huge number of extensions and plugins.

WooCommerce's flexibility is another highlight: its online store offers plenty of designs and extensions to choose from. You can also check its [GitHub page][20] for more about it.
- [WooCommerce][21]
#### 5. Zen Cart
![][22]
@ -90,9 +90,9 @@ WooCommerce 的灵活性也是一大亮点,在它的线上商店提供了许
You can also find the Zen Cart project on [SourceForge][23].
- [Zen Cart][24]
#### 6. Magento
![Image Credits: Magestore][25]
@ -104,9 +104,9 @@ Magento 完全是作为电商应用程序而生的,因此你会发现它的很
For more information, check Magento's [GitHub page][27].
- [Magento][28]
#### 7. Drupal
![Drupal][29]
@ -116,11 +116,25 @@ Drupal 是一个适用于创建电商站点的开源 CMS 解决方案。
Similar to WordPress, deploying Drupal on a server is not complicated. Give it a try and see how it works for you. You can view the project and download the latest version from its [download page][30].
- [Drupal][31]
#### 8. Odoo eCommerce
![Odoo Ecommerce Platform][32]
In case you did not know, Odoo offers a suite of open source business apps. They also offer [open source accounting software][33] and CRM solutions, which we will cover in separate lists.

For the eCommerce portal, you can use its online drag-and-drop builder to customise the site to your needs, and you can also promote the site. In addition to the simple theme installation and customisation options, you can use HTML/CSS to manually customise the look to some extent.

You can also check out its [GitHub][34] page to learn more about it.
- [Odoo eCommerce][35]
### Conclusion

I am sure there are plenty more open source eCommerce platforms out there, but I have yet to come across anything better than the ones listed above.

If you have another one worth mentioning, let me know in the comments. Also, feel free to share your experience and thoughts about open source eCommerce solutions in the comments.
--------------------------------------------------------------------------------
@ -129,7 +143,7 @@ via: https://itsfoss.com/open-source-ecommerce/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [HankChow](https://github.com/HankChow)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
@ -166,3 +180,7 @@ via: https://itsfoss.com/open-source-ecommerce/
[29]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/drupal.png?ssl=1
[30]: https://www.drupal.org/project/drupal
[31]: https://www.drupal.org/industries/ecommerce
[32]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/odoo-ecommerce-platform.jpg?w=800&ssl=1
[33]: https://itsfoss.com/open-source-accounting-software/
[34]: https://github.com/odoo/odoo
[35]: https://www.odoo.com/page/open-source-ecommerce

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (OpenShot Video Editor Gets a Major Update With Version 2.5 Release)
[#]: via: (https://itsfoss.com/openshot-2-5-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
OpenShot Video Editor Gets a Major Update With Version 2.5 Release
======
[OpenShot][1] is one of the [best open-source video editors][2] out there. With all the features that it offers, it was already a good video editor for Linux.

Now, with a major update (**v2.5.0**), OpenShot has added a lot of new improvements and features. And, trust me, it's not just a routine release; it is a huge release packed with features that you have probably wanted for a very long time.
In this article, I will briefly mention the key changes involved in the latest release.
![][3]
### OpenShot 2.5.0 Key Features
Here are some of the major new features and improvements in OpenShot 2.5:
#### Hardware Acceleration Support
The hardware acceleration support is still an experimental addition; however, it is a useful feature to have.
Instead of relying on your CPU to do all the hard work, you can utilize your GPU to encode/decode video data when working with MP4/H.264 video files.
This will affect (or improve) the performance of OpenShot in a meaningful way.
#### Support Importing/Exporting Files From Final Cut Pro & Premiere
![][4]
[Final Cut Pro][5] and [Adobe Premiere][6] are two popular video editors for professional content creators. OpenShot 2.5 now allows you to work on projects created on these platforms. It can import (or export) files from Final Cut Pro & Premiere in EDL & XML formats.
#### Thumbnail Generation Improved
This isn't a big feature but a necessary improvement for most video editors. You don't want broken images in the thumbnails (in your timeline/library). So, with this update, OpenShot now generates thumbnails using a local HTTP server, can check multiple folder locations, and regenerates missing ones.
#### Blender 2.8+ Support
The new OpenShot release also supports the latest [Blender][7] (.blend) format so it should come in handy if youre using Blender as well.
#### Easily Recover Previous Saves & Improved Auto-backup
![][8]
It was always a horror to lose your timeline work because you accidentally deleted it and the auto-save then overwrote your saved project.

Now, the auto-backup feature has been improved, with the added ability to easily recover the previously saved version of your project.

Even though you can now recover your previous saves, only a limited number of saved versions are kept, so you still have to remain careful.
#### Other Improvements
In addition to all the key highlights mentioned above, you will also notice a performance improvement when using the keyframe system.
Several other issues, like SVG compatibility, exporting & modifying keyframe data, and a resizable preview window, have been fixed in this major update. For privacy-conscious users, OpenShot no longer sends usage data unless you opt in to share it with them.
For more information, you can take a look at [OpenShots official blog post][9] to get the release notes.
### Installing OpenShot 2.5 on Linux
You can simply download the .AppImage file from its [official download page][10] to [install the latest OpenShot version][11]. If you're new to AppImage, you should also check out [how to use AppImage][12] on Linux to easily launch OpenShot.
[Download Latest OpenShot Release][10]
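If you go the AppImage route, the usual pattern is to make the file executable and run it. This is a sketch; the exact file name depends on the release you download:

```
# Make the downloaded AppImage executable and launch it (file name is illustrative)
chmod +x OpenShot-v2.5.0-x86_64.AppImage
./OpenShot-v2.5.0-x86_64.AppImage
```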
Some distributions like Arch Linux may also provide the latest OpenShot release with regular system updates.
#### PPA available for Ubuntu-based distributions
On Ubuntu-based distributions, if you don't want to use the AppImage, you can [use the official PPA][13] from OpenShot:
```
sudo add-apt-repository ppa:openshot.developers/ppa
sudo apt update
sudo apt install openshot-qt
```
You may want to know how to remove a PPA if you want to uninstall it later.
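If you later decide to uninstall it, the standard commands look roughly like this (a sketch; it assumes you installed OpenShot from the PPA above):

```
# Remove the OpenShot PPA and the package installed from it
sudo add-apt-repository --remove ppa:openshot.developers/ppa
sudo apt remove openshot-qt
```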
**Wrapping Up**
With all the latest changes/improvements considered, do you see [OpenShot][11] as your primary [video editor on Linux][14]? If not, what more do you expect to see in OpenShot? Feel free to share your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/openshot-2-5-release/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.openshot.org/
[2]: https://itsfoss.com/open-source-video-editors/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-2-5-0.png?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-xml-edl.png?ssl=1
[5]: https://www.apple.com/in/final-cut-pro/
[6]: https://www.adobe.com/in/products/premiere.html
[7]: https://www.blender.org/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-recovery.jpg?ssl=1
[9]: https://www.openshot.org/blog/2020/02/08/openshot-250-released-video-editing-hardware-acceleration/
[10]: https://www.openshot.org/download/
[11]: https://itsfoss.com/openshot-video-editor-release/
[12]: https://itsfoss.com/use-appimage-linux/
[13]: https://itsfoss.com/ppa-guide/
[14]: https://itsfoss.com/best-video-editing-software-linux/

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Future smart walls key to IoT)
[#]: via: (https://www.networkworld.com/article/3519440/future-smart-walls-key-to-iot.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Future smart walls key to IoT
======
MIT researchers are developing a wallpaper-like material that's made up of simple RF switch elements and can be applied to building surfaces. Using beamforming, the antenna array could potentially improve wireless signal strength nearly tenfold.
Jason Dorfman, MIT CSAIL
IoT equipment designers shooting for efficiency should explore the potential for using buildings as antennas, researchers say.
Environmental surfaces such as walls can be used to intercept and beam signals, which can increase reliability and data throughput for devices, according to MIT's Computer Science and Artificial Intelligence Laboratory ([CSAIL][1]).
Researchers at CSAIL have been working on a smart-surface repeating antenna array called RFocus. The antennas, which could be applied in sheets like wallpaper, are designed to be incorporated into office spaces and factories. Radios that broadcast signals could then become smaller and less power intensive.
“Tests showed that RFocus could improve the average signal strength by a factor of almost 10,” CSAIL's Adam Conner-Simons [writes in MIT News][3]. “The platform is also very cost-effective, with each antenna costing only a few cents.”
The prototype system CSAIL developed uses more than 3,000 antennas embedded into sheets, which are then hung on walls. In future applications, the antennas could adhere directly to the wall or be integrated during building construction.
“People have had things completely backwards this whole time,” the article claims. “Rather than focusing on the transmitters and receivers, what if we could amplify the signal by adding antennas to an external surface in the environment itself?”
RFocus relies on [beamforming][4]; multiple antennas broadcast the same signal at slightly different times, and as a result, some of the signals cancel each other and some strengthen each other. When properly executed, beamforming can focus a stronger signal in a particular direction.
"The surface does not emit any power of its own," the developers explain in their paper ([PDF][6]). The antennas, or RF switch elements, as the group describes them, either let a signal pass through or reflect it through software. Signal measurements allow the apparatus to define exactly what gets through and how its directed.
Importantly, the RFocus surface functions with no additional power requirements. The “RFocus surface can be manufactured as an inexpensive thin wallpaper requiring no wiring,” the group says.
### Antenna design
Antenna engineering is turning into a vital part of IoT development. It's one of the principal reasons data throughput and reliability keeps improving in wireless networks.
Arrays in which multiple active panel components make up the antenna, rather than the single passive wire used in traditional radios, are one example of advances in antenna engineering.
[Spray-on antennas][7] (unrelated to the CSAIL work) are another in-the-works technology I've written about. In that case, flexible substrates create the antenna, which is applied in a manner similar to spray paint. Another future direction could be anti-laser antennas: [reversing a laser][8], where the laser becomes an absorber of light rather than the sender of it, could allow all data-carrying energy to be absorbed, making it the perfect light-based antenna.
Development of 6G wireless, which is projected to supersede 5G sometime around 2030, includes efforts to figure out how to directly [couple antennas to fiber][9]—the radio ends up being part of the cable, in other words.
"We cant get faster internet speeds without more efficient ways of delivering wireless signals," CSAILs Conner-Simons says.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519440/future-smart-walls-key-to-iot.html
Author: [Patrick Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.csail.mit.edu/
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: http://news.mit.edu/2020/smart-surface-smart-devices-mit-csail-0203
[4]: https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html
[5]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[6]: https://drive.google.com/file/d/1TLfH-r2w1zlGBbeeM6us2sg0yq6Lm2wF/view
[7]: https://www.networkworld.com/article/3309449/spray-on-antennas-will-revolutionize-the-internet-of-things.html
[8]: https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html
[9]: https://www.networkworld.com/article/3438337/how-6g-will-work-terahertz-to-fiber-conversion.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Who should lead the push for IoT security?)
[#]: via: (https://www.networkworld.com/article/3526490/who-should-lead-the-push-for-iot-security.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Who should lead the push for IoT security?
======
Industry groups and governmental agencies have been taking a stab at rules to improve the security of the internet of things, but so far there's nothing comprehensive.
Thinkstock
The ease with which internet of things devices can be compromised, coupled with the potentially extreme consequences of breaches, has prompted action from legislatures and regulators, but which group is best placed to decide?

Both the makers of [IoT][1] devices and governments are aware of the security issues, but so far they haven't come up with standardized ways to address them.
“The challenge of this market is that it's moving so fast that no regulation is going to be able to keep pace with the devices that are being connected,” said Forrester vice president and research director Merritt Maxim. “Regulations that are definitive are easy to enforce and helpful, but they'll quickly become outdated.”
The latest such effort by a governmental body is a proposed regulation in the U.K. that would impose three major mandates on IoT device manufacturers that would address key security concerns:
* device passwords would have to be unique, and resetting them to factory defaults would be prohibited
* device makers would have to offer a public point of contact for the disclosure of vulnerabilities
* device makers would have to “explicitly state the minimum length of time for which the device will receive security updates”
This proposal is patterned after a California law that took effect last month. Both sets of rules would likely have a global impact on the manufacture of IoT devices, even though they're being imposed in limited jurisdictions. That's because it's expensive for device makers to create separate versions of their products.

IoT-specific regulations aren't the only ones that can have an impact on the marketplace. Depending on the type of information a given device handles, it could be subject to the growing list of data-privacy laws being implemented around the world, most notably Europe's General Data Protection Regulation, as well as industry-specific regulations in the U.S. and elsewhere.
The U.S. Food and Drug Administration, noted Maxim, has been particularly active in trying to address device-security flaws. For example, last year it issued [security warnings][3] about 11 vulnerabilities that could compromise medical IoT devices that had been discovered by IoT security vendor [Armis][4]. In other cases it issued fines against healthcare providers.
But there's a broader issue with devising definitive regulations for IoT devices in general, as opposed to prescriptive ones that simply urge manufacturers to adopt best practices, he said.

Particular companies might have integrated security frameworks covering their vertically integrated products (such as an [industrial IoT][6] company providing security across factory floor sensors), but that kind of security is incomplete in the multi-vendor world of IoT.

Perhaps the closest thing to a general IoT-security standard is currently being worked on by Underwriters Laboratories (UL), the security-testing non-profit best known for its century-old certification program for electrical equipment. UL's [IoT Security Rating Program][7] offers a five-tier system for ranking the security of connected devices: bronze, silver, gold, platinum and diamond.
Bronze certification means that the device has addressed the most glaring security flaws, similar to those outlined in the recent U.K. and California legislations. [The higher ratings][8] include capabilities like ongoing security maintenance, improved access control and known threat testing.
While government regulation and voluntary industry improvements can help keep future IoT systems safe, neither addresses two key issues in the IoT security puzzle: the millions of insecure devices that have already been deployed, and user apathy about making their systems as safe as possible, according to Maxim.

“Requiring non-default passwords is good, but that doesn't stop users from setting insecure passwords,” he warned. “The challenge is, do customers care? Are they willing to pay extra for products with that certification?”
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3526490/who-should-lead-the-push-for-iot-security.html
Author: [Jon Gold][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.fda.gov/medical-devices/safety-communications/urgent11-cybersecurity-vulnerabilities-widely-used-third-party-software-component-may-introduce
[4]: https://www.armis.com/
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://www.networkworld.com/article/3243928/what-is-the-industrial-internet-of-things-essentials-of-iiot.html
[7]: https://ims.ul.com/iot-security-rating-levels
[8]: https://www.cnx-software.com/2019/12/30/ul-iot-security-rating-system-ranks-iot-devices-security-from-bronze-to-diamond/
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why innovation can't happen without standardization)
[#]: via: (https://opensource.com/open-organization/20/2/standardization-versus-innovation)
[#]: author: (Len Dimaggio https://opensource.com/users/ldimaggi)
Why innovation can't happen without standardization
======
Balancing standardization and innovation is critical during times of
organizational change. And it's an ongoing issue in open organizations,
where change is constant.
![and old computer and a new computer, representing migration to new software or hardware][1]
Any organization facing the prospect of change will confront an underlying tension between competing needs for standardization and innovation. Achieving the correct balance between these needs can be essential to an organization's success.
Experiencing too much of either can lead to morale and productivity problems. Over-stressing standardization, for example, can have a stifling effect on the team's ability to innovate to solve new problems. Unfettered innovation, on the other hand, can lead to time lost due to duplicated or misdirected efforts.
Finding and maintaining the correct balance between standardization and innovation is critical during times of organizational change. In this article, I'll outline various considerations your organization might make when attempting to strike this critical balance.
### The need for standardization
When North American beavers hear running water, they instinctively start building a dam. When some people see a problem, they look to build or buy a new product or tool to solve that problem. Technological advances make modeling business process solutions or setting up production or customer-facing systems much easier than in the past. The ease with which organizational actors can introduce new systems can occasionally, however, lead to problems. Duplicate, conflicting, or incompatible systems—or systems that, while useful, do not address a team's highest priorities—can find their way into organizations, complicating processes.
This is where standardization can help. By agreeing on and implementing a common set of tools and processes, teams become more efficient, as they reduce the need for new development methods, customized training, and maintenance.
Standardization has several benefits:
* **Reliability, predictability, and safety.** Think about the electricity in your own home and the history of electrical systems. In the early days of electrification, companies competed to establish individual standards for basic elements like plug configurations and safety requirements like insulation. Thanks to standardization, when you buy a light bulb today you can be sure that it will fit and not start a fire.
* **Lower costs and more dependable, repeatable processes.** Standardization frees people in organizations to focus more attention on other things—products, for instance—and not on the need to coordinate the use of potentially conflicting new tools and processes. And it can make people's skills more portable (or, in budgeting terms, more "fungible") across projects, since all projects share a common set of standards. In addition to helping project teams be more flexible, this portability of skills makes it easier for people to adopt new assignments.
* **Consistent measurements.** Creating a set of consistent metrics people can use to assess product quality across multiple products or multiple releases of individual products is possible through standardization. Without it, applying this kind of consistent measurement to product quality and maintaining any kind of history of tracking such quality can be difficult. Standardization effectively provides the organization a common language for measuring quality.
A danger of standardization arises when it becomes an all-consuming end in itself. A constant push to standardize can inadvertently stifle creativity and innovation. If taken too far, policies that overemphasize standardization appear to discourage support for people's need to find new solutions to new problems. Taken to an extreme, this can lead to a suffocating organizational atmosphere in which people are reluctant to propose new solutions in the interest of maintaining standardization or conformity. In an open organization especially focused on generating new value and solutions, an attempt to impose standardization can have a negative impact on team morale.
Viewing new challenges through the lens of former solutions is natural. Likewise, it's common (and in fact generally practical) to apply legacy tools and processes to solving new problems.
But in open organizations, change is constant. We must always adapt to it.
### The need for innovation
Digital technology changes at a rapid rate, and that rate of change is always increasing. New opportunities result in new problems that require new solutions. Any organization must be able to adapt, and its people must have the freedom to innovate. This is even more important in an open organization and with open source software, as many of the factors (e.g., restrictive licenses) that blocked innovation in the past no longer apply.
When considering the prospect of innovation in your organization, keep in mind the following:
* **Standardization doesn't have to be the end of innovation.** Even tools and processes that are firmly established and in use by an organization were once very new and untried, and they only came about through processes of organizational innovation.
* **Progress through innovation also involves failure.** It's very often the case that some innovations fail, but when they fail, they point the way forward to solutions. This progress therefore requires that an organization protect the freedom to fail. (In competitive sports, athletes and teams seldom learn lessons from easy victories; they learn lessons about how to win, including how to innovate to win, from failures and defeats.)
Freedom to innovate, however, cannot be freedom to do whatever the heck we feel like doing. The challenge for any organization is to be able to encourage and inspire innovation, but at the same time to keep innovation efforts focused towards meeting your organization's goals and to address the problems that you're trying to solve.
In closed organizations, leaders may be inclined to impose rigid, top-down limits on innovation. A better approach is to instead provide a direction or path forward in terms of goals and deliverables, and then enable people to find their own ways along that path. That forward path is usually not a straight line; [innovation is almost never a linear process][2]. Like a sailboat making progress into the wind, it's sometimes [necessary to "tack" or go sideways][3] in order to make forward progress.
### Blending standardization with focused innovation
Are we doomed to always think of standardization as the broccoli we _must_ eat, while innovation is the ice cream we _want_ to eat?
It doesn't have to be this way.
Perceptions play a role in the conflict between standardization and innovation. People who only want to focus on standardization must remember that even the tools and processes that they want to promote as "the standard" were once new and represented change. Likewise, people who only want to focus on innovation have to remember that in order for a tool or process to provide value to an organization, it has to be stable enough for that organization to use it over time.
An important element of any successful organization, especially an open organization where everyone is free to express their views, is empathy for other people's views. A little empathy is necessary for understanding both perceptions of impending change.
I've always thought about standardization and innovation as being two halves of one solution. A good analogy is that of a college course catalog. In many colleges, all incoming first-year students, regardless of their major, will take a core set of classes. These core classes can cover a wide range of subjects and provide each student with an educational foundation. Every student receives a standard grounding in these disciplines regardless of their major course of study. Beyond the standardized core curriculum, each student is then free to take specialized courses depending upon his or her major degree requirements and selected electives, as they work to innovate in their respective fields.
Similarly, standardization provides a foundation on which innovation can build. Think of standardization as a core set of tools and practices you might apply to _all_ products. Innovation can take the form of tools and practices that go _above and beyond_ this standard. This enables every team to extend the core set of standardized tools and processes to meet the individual needs of their own specific projects. Standardization does not mean that all forward-looking actions stop. Over time, what was an innovation can become a standard, and thereby make room for the next innovation (and the next).
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/2/standardization-versus-innovation
Author: [Len Dimaggio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]: https://opensource.com/users/ldimaggi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
[2]: https://opensource.com/open-organization/19/6/innovation-delusion
[3]: https://opensource.com/open-organization/18/5/navigating-disruption-1

View File

@ -1,82 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NVIDIAs Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux)
[#]: via: (https://itsfoss.com/geforce-now-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
NVIDIA's Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux
======
NVIDIA's [GeForce NOW][1] cloud gaming service is promising for gamers who may not have the hardware but want to experience the latest and greatest games with the best possible experience using GeForce NOW (stream the game online and play it on any device you want).

The service was limited to a few users (in the form of a waitlist). However, they recently announced that [GeForce NOW is open to all][2]. But it really isn't.

Interestingly, it's **not available in all regions** across the globe. And, worse, **GeForce NOW does not support Linux**.
![][3]
### GeForce NOW is Not Open For All
The whole point of making a subscription-based cloud service to play games is to eliminate platform dependence.

Just like you would normally visit a website using a web browser, you should be able to stream a game on every platform. That's the concept, right?
![][4]
Well, that's definitely not rocket science, yet NVIDIA still missed out on supporting Linux (and iOS)?
### Is it because no one uses Linux?
I would strongly disagree with this, even if it is the reason some vendors choose not to support Linux. If that were the case, I wouldn't be writing for It's FOSS while using Linux as my primary desktop OS.

Not just that: why do you think a Twitter user would mention the lack of Linux support if it wasn't a thing?
![][5]
Yes, maybe the user base isn't large enough, but considering that this is a cloud-based service, it doesn't make sense **not to support Linux**.

Technically, if no one gamed on Linux, **Valve** wouldn't have taken notice of Linux as a platform and improved [Steam Play][6] to help more users play Windows-only games on Linux.

I don't want to claim anything that's not true, but the desktop Linux scene is evolving faster than ever for gaming (even if the stats are low when compared to Windows and Mac).
### Cloud gaming isnt supposed to work like this
![][7]
As I mentioned above, it isn't tough to find Linux gamers using Steam Play. It's just that you'll find the overall "market share" of gamers on Linux to be less than on its counterparts.

Even though that's a fact, cloud gaming isn't supposed to depend on a specific platform. And, considering that GeForce NOW is essentially a browser-based streaming service to play games, it shouldn't be tough for a big shot like NVIDIA to support Linux.

Come on, team green: _you want us to believe that supporting Linux is technically tough_? Or do you just want to say that _it's not worth supporting the Linux platform_?
**Wrapping Up**
No matter how excited I was for the GeForce NOW service to launch, it was very disappointing to see that it does not support Linux at all.

If cloud gaming services like GeForce NOW start supporting Linux in the near future, **you probably won't need a reason to use Windows** (*coughs*).
What do you think about it? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/geforce-now-linux/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.nvidia.com/en-us/geforce-now/
[2]: https://blogs.nvidia.com/blog/2020/02/04/geforce-now-pc-gaming/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now-linux.jpg?ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now.png?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/geforce-now-twitter-1.jpg?ssl=1
[6]: https://itsfoss.com/steam-play/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/ge-force-now.jpg?ssl=1

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (chai-yuan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,210 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automate your live demos with this shell script)
[#]: via: (https://opensource.com/article/20/2/live-demo-script)
[#]: author: (Lisa Seelye https://opensource.com/users/lisa)
Automate your live demos with this shell script
======
Try this script the next time you give a presentation to prevent making
typos in front of a live audience.
![Person using a laptop][1]
I gave a talk about [multi-architecture container images][2] at [LISA19][3] in October that included a lengthy live demo. Rather than writing out 30+ commands and risking typos, I decided to automate the demo with a shell script.
The script mimics what appears as input/output and runs the real commands in the background, pausing at various points so I can narrate what is going on. I'm very pleased with how the script turned out and the effect on stage. The script and supporting materials for my presentation are available on [GitHub][4] under an Apache 2.0 license.
### The script
```
#!/bin/bash
set -e
IMG=thedoh/lisa19
REGISTRY=docker.io
VERSION=19.10.1
# Plan B with GCR:
#IMG=dulcet-iterator-213018
#REGISTRY=us.gcr.io
#VERSION=19.10.1
pause() {
  local step="${1}"
  ps1
  echo -n "# Next step: ${step}"
  read
}
ps1() {
  echo -ne "\033[01;32m${USER}@$(hostname -s) \033[01;34m$(basename $(pwd)) \$ \033[00m"
}
echocmd() {
  echo "$(ps1)$@"
}
docmd() {
  echocmd $@
  $@
}
step0() {
  local registry="${1}" img="${2}" version="${3}"
  # Mindful of tokens in ~/.docker/config.json
  docmd grep experimental ~/.docker/config.json
 
  docmd cd ~/go/src/github.com/lisa/lisa19-containers
 
  pause "This is what we'll be building"
  docmd export REGISTRY=${registry}
  docmd export IMG=${img}
  docmd export VERSION=${version}
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean
}
step1() {
  local registry="${1}" img="${2}" version="${3}"
 
  docmd docker build --no-cache --platform=linux/amd64 --build-arg=GOARCH=amd64 -t ${REGISTRY}/${IMG}:amd64-${VERSION} .
  pause "ARM64 image next"
  docmd docker build --no-cache --platform=linux/arm64 --build-arg=GOARCH=arm64 -t ${REGISTRY}/${IMG}:arm64-${VERSION} .
}
step2() {
  local registry="${1}" img="${2}" version="${3}" origpwd=$(pwd) savedir=$(mktemp -d) jsontemp=$(mktemp -t XXXXX)
  chmod 700 $jsontemp $savedir
  # Set our way back home and get ready to fix our arm64 image to amd64.
  echocmd 'origpwd=$(pwd)'
  echocmd 'savedir=$(mktemp -d)'
  echocmd "mkdir -p \$savedir/change"
  mkdir -p $savedir/change &>/dev/null
  echocmd "docker save ${REGISTRY}/${IMG}:arm64-${VERSION} 2>/dev/null 1> \$savedir/image.tar"
  docker save ${REGISTRY}/${IMG}:arm64-${VERSION} 2>/dev/null 1> $savedir/image.tar
  pause "untar the image to access its metadata"
 
  echocmd "cd \$savedir/change"
  cd $savedir/change
  echocmd tar xf \$savedir/image.tar
  tar xf $savedir/image.tar
  docmd ls -l
 
  pause "find the JSON config file"
  echocmd 'jsonfile=$(jq -r ".[0].Config" manifest.json)'
  jsonfile=$(jq -r ".[0].Config" manifest.json)
 
  pause "notice the original metadata says amd64"
  echocmd jq '{architecture: .architecture, ID: .config.Image}' \$jsonfile
  jq '{architecture: .architecture, ID: .config.Image}' $jsonfile
 
  pause "Change from amd64 to arm64 using a temp file"
  echocmd "jq '.architecture = \"arm64\"' \$jsonfile &gt; \$jsontemp"
  jq '.architecture = "arm64"' $jsonfile &gt; $jsontemp
  echocmd /bin/mv -f -- \$jsontemp \$jsonfile
  /bin/mv -f -- $jsontemp $jsonfile
  pause "Check to make sure the config JSON file says arm64 now"
  echocmd jq '{architecture: .architecture, ID: .config.Image}' \$jsonfile
  jq '{architecture: .architecture, ID: .config.Image}' $jsonfile
 
  pause "delete the image with the incorrect metadata"
  docmd docker rmi ${REGISTRY}/${IMG}:arm64-${VERSION}
 
  pause "Re-compress the ARM64 image and load it back into Docker, then clean up the temp space"
  echocmd 'tar cf - * | docker load'
  tar cf - * | docker load
  docmd cd $origpwd
  echocmd "/bin/rm -rf -- \$savedir"
  /bin/rm -rf -- $savedir &>/dev/null
}
step3() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker push ${registry}/${img}:amd64-${version}
  pause "push ARM64 image to ${registry}"
  docmd docker push ${registry}/${img}:arm64-${version}
}
step4() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker manifest create ${registry}/${img}:${version} ${registry}/${img}:arm64-${version} ${registry}/${img}:amd64-${version}
 
  pause "add a reference to the amd64 image to the manifest list"
  docmd docker manifest annotate ${registry}/${img}:${version} ${registry}/${img}:amd64-${version} --os linux --arch amd64
  pause "now add arm64"
  docmd docker manifest annotate ${registry}/${img}:${version} ${registry}/${img}:arm64-${version} --os linux --arch arm64
}
step5() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker manifest push ${registry}/${img}:${version}
}
step6() {
  local registry="${1}" img="${2}" version="${3}"
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean
 
  pause "ask docker.io if ${img}:${version} has a linux/amd64 manifest, and run it"
  docmd docker pull --platform linux/amd64 ${registry}/${img}:${version}
  docmd docker run --rm -i ${registry}/${img}:${version}
 
  pause "clean slate again"
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean
 
  pause "now repeat for linux/arm64 and see what it gives us"
  docmd docker pull --platform linux/arm64 ${registry}/${img}:${version}
  set +e
  docmd docker run --rm -i ${registry}/${img}:${version}
  set -e
  if [[ $(uname -s) == "Darwin" ]]; then
    pause "note about Docker on Mac and binfmt_misc: binfmt_misc lets a mac run arm64 binaries in the Docker VM"
  fi
}
pause "initial setup"
step0 ${REGISTRY} ${IMG} ${VERSION}
pause "1 build constituent images"
step1 ${REGISTRY} ${IMG} ${VERSION}
pause "2 fix ARM64 metadata"
step2 ${REGISTRY} ${IMG} ${VERSION}
pause "3 push constituent images up to docker.io"
step3 ${REGISTRY} ${IMG} ${VERSION}
pause "4 build the manifest list for the image"
step4 ${REGISTRY} ${IMG} ${VERSION}
pause "5 Push the manifest list to docker.io"
step5 ${REGISTRY} ${IMG} ${VERSION}
pause "6 clean slate, and validate the list-based image"
step6 ${REGISTRY} ${IMG} ${VERSION}
docmd echo 'Manual steps all done!'
make REGISTRY=${REGISTRY} IMG=${IMG} VERSION=${VERSION} clean &>/dev/null
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/live-demo-script
Author: [Lisa Seelye][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]: https://opensource.com/users/lisa
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://www.usenix.org/conference/lisa19/presentation/seelye
[3]: https://www.usenix.org/conference/lisa19
[4]: https://github.com/lisa/lisa19-containers

View File

@ -0,0 +1,211 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Basic kubectl and Helm commands for beginners)
[#]: via: (https://opensource.com/article/20/2/kubectl-helm-commands)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
Basic kubectl and Helm commands for beginners
======
Take a trip to the grocery store to shop for the commands you'll need to
get started with these Kubernetes tools.
![A person working.][1]
Recently, my husband was telling me about an upcoming job interview where he would have to run through some basic commands on a computer. He was anxious about the interview, but the best way for him to learn and remember things has always been to equate the thing he doesn't know to something very familiar to him. Because our conversation happened right after I was roaming the grocery store trying to decide what to cook that evening, it inspired me to write about kubectl and Helm commands by equating them to an ordinary trip to the grocer.
[Helm][2] is a tool to manage applications within Kubernetes. You can easily deploy charts with your application information, allowing them to be up and preconfigured in minutes within your Kubernetes environment. When you're learning something new, it's always helpful to look at chart examples to see how they are used, so if you have time, take a look at these stable [charts][3].
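For instance, deploying a chart can be as simple as the following (a sketch that assumes Helm 3 with the old `stable` repository configured; the release name and namespace are just examples):

```
# Add the (now archived) stable repo and install the Jenkins chart into its own namespace
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install jenkins stable/jenkins --namespace jenkins --create-namespace
```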
[Kubectl][4] is a command line that interfaces with Kubernetes environments, allowing you to configure and manage your cluster. It does require some configuration to work within environments, so take a look through the [documentation][5] to see what you need to do.
I'll use namespaces in the examples, which you can learn about in my article [_Kubernetes namespaces for beginners_][6].
Now that we have that settled, let's start shopping for basic kubectl and Helm commands!
### Helm list
What is the first thing you do before you go to the store? Well, if you're organized, you make a **list**. Likewise, this is the first basic Helm command I will explain.
In a Helm-deployed application, **list** provides details about an application's current release. In this example, I have one deployed application—the Jenkins CI/CD application. Running the basic **list** command always brings up the default namespace. Since I don't have anything deployed in the default namespace, nothing shows up:
```
$helm list
NAME    NAMESPACE    REVISION    UPDATED    STATUS    CHART    APP VERSION
```
However, if I run the command with an extra flag, my application and information appear:
```
$helm list --all-namespaces
NAME     NAMESPACE  REVISION  UPDATED                   STATUS      CHART           APP  VERSION
jenkins  jenkins        1         2020-01-18 16:18:07 EST   deployed    jenkins-1.9.4   lts
```
Finally, I can direct the **list** command to check only the namespace I want information from:
```
$helm list --namespace jenkins
NAME     NAMESPACE  REVISION  UPDATED                   STATUS    CHART          APP VERSION
jenkins    jenkins      1              2020-01-18 16:18:07 EST  deployed  jenkins-1.9.4  lts    
```
Now that I have a list and know what is on it, I can go and get my items with **get** commands! I'll start with the Kubernetes cluster; what can I get from it?
### Kubectl get
The **kubectl get** command gives information about many things in Kubernetes, including pods, nodes, and namespaces. Again, without a namespace flag, you'll always land in the default. First, I'll get the namespaces in the cluster to see what's running:
```
$kubectl get namespaces
NAME             STATUS   AGE
default          Active   53m
jenkins          Active   44m
kube-node-lease  Active   53m
kube-public      Active   53m
kube-system      Active   53m
```
Now that I have the namespaces running in my environment, I'll get the nodes and see how many are running:
```
$kubectl get nodes
NAME       STATUS   ROLES       AGE   VERSION
minikube   Ready    master  55m   v1.16.2
```
I have one node up and running, mainly because my Minikube is running on one small server. To get the pods running on my one node:
```
$kubectl get pods
No resources found in default namespace.
```
Oops, it's empty. I'll get what's in my Jenkins namespace with:
```
$kubectl get pods --namespace jenkins
NAME                      READY  STATUS   RESTARTS  AGE
jenkins-7fc688c874-mh7gv  1/1    Running  0         40m
```
Good news! There's one pod, it hasn't restarted, and it has been running for 40 minutes. Well, since I know the pod is up, I want to see what I can get from Helm.
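Before moving on to Helm, it's often worth a quick glance at the pod's logs to confirm the application inside is actually healthy. A minimal check, reusing the pod name from the output above, might look like this:

```
# Show the last 20 log lines from the Jenkins pod
$kubectl logs jenkins-7fc688c874-mh7gv --namespace jenkins --tail=20
```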
### Helm get
**Helm get** is a little more complicated because this **get** command requires more than an application name, and you can request multiple things from applications. I'll begin by getting the values used to make the application, and then I'll show a snip of the **get all** action, which provides all the data related to the application.
```
$helm get values jenkins -n jenkins
USER-SUPPLIED VALUES:
null
```
Since I did a very minimal stable-only install, the configuration didn't change. If I run the **all** command, I get everything out of the chart:
```
$helm get all jenkins -n jenkins
```
![output from helm get all command][7]
This produces a ton of data, so I always recommend keeping a copy of a Helm chart so you can look over the templates in the chart. I also create my own values to see what I have in place.
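For example, one way to keep that local copy around is to dump what Helm already knows about the release into files. This is a sketch assuming Helm 3 and the same jenkins release and namespace used above; the chart name in the last command is an assumption:

```
# Save the user-supplied values for later reference
$helm get values jenkins -n jenkins -o yaml > jenkins-values.yaml

# Save the fully rendered manifests that were applied to the cluster
$helm get manifest jenkins -n jenkins > jenkins-manifest.yaml

# Pull a local, unpacked copy of the chart itself
$helm pull stable/jenkins --untar
```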
Now that I have all my goodies in my shopping cart, I'll check the labels that **describe** what's in them. These examples pertain only to kubectl, and they describe what I've deployed through Helm.
### Kubectl describe
As I did with the **get** command, which can describe just about anything in Kubernetes, I'll limit my examples to namespaces, pods, and nodes. Since I know I'm working with one of each, this will be easy.
```
$kubectl describe ns jenkins
Name:           jenkins
Labels:         <none>
Annotations:  <none>
Status:         Active
No resource quota.
No resource limits.
```
I can see my namespace's name and that it is active and has no resource quota or limits.
The **describe pods** command produces a large amount of information, so I'll provide a small snip of the output. If you run the command without the pod name, it will return information for all of the pods in the namespace, which can be overwhelming. So, be sure you always include the pod name with this command. For example:
```
$kubectl describe pods jenkins-7fc688c874-mh7gv --namespace jenkins
```
![output of kubectl-describe-pods][8]
This provides (among many other things) the status of the container, how the container is managed, the label, and the image used in the pod. The data not in this abbreviated output includes resource requests and limits along with any conditions, init containers, and storage volume information applied in a Helm values file. This data is useful if your application is crashing due to inadequate resources, a configured init container that runs a prescript for configuration, or generated hidden passwords that shouldn't be in a plain text YAML file.
Finally, I'll use **describe node**, which (of course) describes the node. Since this example has just one, named Minikube, that is what I'll use; if you have multiple nodes in your environment, you must include the node name of interest.
As with pods, the node command produces an abundance of data, so I'll include just a snip of the output.
```
$kubectl describe node minikube
```
![output of kubectl describe node][9]
Note that **describe node** is one of the more important basic commands. As this image shows, the command returns statistics that indicate when the node is running out of resources, and this data is excellent for alerting you when you need to scale up (if you do not have autoscaling in your environment). Other things not in this snippet of output include the percentages of requests made for all resources and limits, as well as the age and allocation of resources (e.g., for my application).
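If the metrics-server add-on is enabled in your cluster, you can also watch that resource pressure directly rather than reading it out of **describe node**. A quick sketch (assuming metrics-server is installed, which is not the case on a default Minikube setup):

```
# Current CPU and memory usage per node (requires metrics-server)
$kubectl top nodes

# Usage per pod in the jenkins namespace
$kubectl top pods --namespace jenkins
```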
### Checking out
With these commands, I've finished my shopping and gotten everything I was looking for. Hopefully, these basic commands can help you, too, in your day-to-day with Kubernetes.
I urge you to work with the command line often and learn the shorthand flags available in the Help sections, which you can access by running these commands:
```
$helm --help
```
and
```
$kubectl -h
```
### Peanut butter and jelly
Some things just go together like peanut butter and jelly. Helm and kubectl are a little like that.
I often use these tools in my environment. Because they have many similarities in a ton of places, after using one, I usually need to follow up with the other. For example, I can do a Helm deployment and watch it fail using kubectl. Try them together, and see what they can do for you.
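A rough sketch of that combined workflow, assuming Helm 3 syntax, the stable Jenkins chart, and an existing jenkins namespace:

```
# Deploy the Jenkins chart as a release named jenkins
$helm install jenkins stable/jenkins --namespace jenkins

# Watch the pods come up (or fail); Ctrl+C stops the watch
$kubectl get pods --namespace jenkins -w
```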
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/kubectl-helm-commands
作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
[2]: https://helm.sh/
[3]: https://github.com/helm/charts/tree/master/stable
[4]: https://kubernetes.io/docs/reference/kubectl/kubectl/
[5]: https://kubernetes.io/docs/reference/kubectl/overview/
[6]: https://opensource.com/article/19/12/kubernetes-namespaces
[7]: https://opensource.com/sites/default/files/uploads/helm-get-all.png (output from helm get all command)
[8]: https://opensource.com/sites/default/files/uploads/kubectl-describe-pods.png (output of kubectl-describe-pods)
[9]: https://opensource.com/sites/default/files/uploads/kubectl-describe-node.png (output of kubectl describe node)


@ -0,0 +1,179 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Navigating man pages in Linux)
[#]: via: (https://www.networkworld.com/article/3519853/navigating-man-pages-in-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Navigating man pages in Linux
======
The man pages on a Linux system can do more than provide information on particular commands. They can help discover commands you didn't realize were available.
[Hello I'm Nik][1] [(CC0)][2]
Man pages provide essential information on Linux commands and many users refer to them often, but there's a lot more to the man pages than many of us realize.
You can always type a command like “man who” and get a nice description of how the man command works, but exploring commands that you might not know could be even more illuminating. For example, you can use the man command to help identify commands to handle some unusually challenging task or to show options that can help you use a command you already know in new and better ways.
Let's navigate through some options and see where we end up.
[MORE ON NETWORK WORLD: Linux: Best desktop distros for newbies][3]
### Using man to identify commands
The man command can help you find commands by topic. If you're looking for a command to count the lines in a file, for example, you can provide a keyword. In the example below, we've put the keyword in quotes and added blanks so that we don't get commands that deal with “accounts” or “accounting” along with those that do some counting for us.
```
$ man -k ' count '
anvil (8postfix) - Postfix session count and request rate control
cksum (1) - checksum and count the bytes in a file
sum (1) - checksum and count the blocks in a file
timer_getoverrun (2) - get overrun count for a POSIX per-process timer
```
To show commands that relate to new user accounts, we might try a command like this:
```
$ man -k "new user"
newusers (8) - update and create new users in batch
useradd (8) - create a new user or update default new user information
zshroadmap (1) - informal introduction to the zsh manual The Zsh Manual, …
```
Just to be clear, the third item in the list above makes a reference to “new users” liking the material and is not a command for setting up, removing or configuring user accounts. The man command is simply matching words in the command description, acting very much like the apropos command. Notice the numbers in parentheses after each command listed above. These relate to the man page sections that contain the commands.
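As an aside, since man -k is doing the same kind of description matching as apropos, the two commands below should return essentially the same results:

```
$ man -k "new user"
$ apropos "new user"
```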
### Identifying the manual sections
The man command sections divide the commands into categories. To list these categories, type “man man” and look for descriptions like those below. You very likely won't have Section 9 commands on your system.
```
1 Executable programs or shell commands
2 System calls (functions provided by the kernel)
3 Library calls (functions within program libraries)
4 Special files (usually found in /dev)
5 File formats and conventions eg /etc/passwd
6 Games
7 Miscellaneous (including macro packages and conventions), e.g.
man(7), groff(7)
8 System administration commands (usually only for root)
9 Kernel routines [Non standard]
```
Man pages cover more than what we typically think of as “commands”. As you can see from the above descriptions, they cover system calls, library calls, special files and more.
The listing below shows where man pages are actually stored on Linux systems. The dates on these directories will vary because, with updates, some of these sections will get new content while others will not.
```
$ ls -ld /usr/share/man/man?
drwxr-xr-x 2 root root 98304 Feb 5 16:27 /usr/share/man/man1
drwxr-xr-x 2 root root 65536 Oct 23 17:39 /usr/share/man/man2
drwxr-xr-x 2 root root 270336 Nov 15 06:28 /usr/share/man/man3
drwxr-xr-x 2 root root 4096 Feb 4 10:16 /usr/share/man/man4
drwxr-xr-x 2 root root 28672 Feb 5 16:25 /usr/share/man/man5
drwxr-xr-x 2 root root 4096 Oct 23 17:40 /usr/share/man/man6
drwxr-xr-x 2 root root 20480 Feb 5 16:25 /usr/share/man/man7
drwxr-xr-x 2 root root 57344 Feb 5 16:25 /usr/share/man/man8
```
Note that the man page files are generally **gzipped** to save space. The man command unzips them as needed whenever you view a page.
```
$ ls -l /usr/share/man/man1 | head -10
total 12632
lrwxrwxrwx 1 root root 9 Sep 5 06:38 [.1.gz -> test.1.gz
-rw-r--r-- 1 root root 563 Nov 7 05:07 2to3-2.7.1.gz
-rw-r--r-- 1 root root 592 Apr 23 2016 411toppm.1.gz
-rw-r--r-- 1 root root 2866 Aug 14 10:36 a2query.1.gz
-rw-r--r-- 1 root root 2361 Sep 9 15:13 aa-enabled.1.gz
-rw-r--r-- 1 root root 2675 Sep 9 15:13 aa-exec.1.gz
-rw-r--r-- 1 root root 1142 Apr 3 2018 aaflip.1.gz
-rw-r--r-- 1 root root 3847 Aug 14 10:36 ab.1.gz
-rw-r--r-- 1 root root 2378 Aug 23 2018 ac.1.gz
```
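If you want to peek inside one of those compressed files directly, man can render a local file and zcat can show you the raw troff source. The path below is just an example taken from the listing above; substitute any page you like:

```
$ man -l /usr/share/man/man1/ac.1.gz
$ zcat /usr/share/man/man1/ac.1.gz | head -20
```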
### Listing man pages by section
Even just looking at the first 10 man pages in Section 1 (as shown above), you are likely to see some commands that are new to you maybe **a2query** or **aaflip** (shown above).
An even better strategy for exploring commands is to list commands by section without looking at the files themselves but, instead, using a man command that shows you the commands and provides a brief description of each.
In the command below, the **-s 1** instructs man to display information on commands in section 1. The **-k .** makes the command work for all commands rather than specifying a particular keyword; without this, the man command would come back and ask “What manual page do you want?” So, use a keyword to select a group of related commands or a dot to show all commands in a section.
```
$ man -s 1 -k .
2to3-2.7 (1) - Python2 to Python3 converter
411toppm (1) - convert Sony Mavica .411 image to ppm
as (1) - the portable GNU assembler.
baobab (1) - A graphical tool to analyze disk usage
busybox (1) - The Swiss Army Knife of Embedded Linux
cmatrix (1) - simulates the display from "The Matrix"
expect_dislocate (1) - disconnect and reconnect processes
red (1) - line-oriented text editor
enchant (1) - a spellchecker
```
### How many man pages are there?
If you're curious about how many man pages there are in each section, you can count them by section with a command like this:
```
$ for num in {1..8}
> do
> man -s $num -k . | wc -l
> done
2382
493
2935
53
441
11
245
919
```
The exact number may vary, but most Linux systems will have a similar number of commands. If we use a command that adds these numbers together, we can see that the system that this command is running on has nearly 7,500 man pages. That's a lot of commands, system calls, etc.
```
$ for num in {1..8}
> do
> num=`man -s $num -k . | wc -l`
> tot=`expr $num + $tot`
> echo $tot
> done
2382
2875
5810
5863
6304
6315
6560
7479 <=== total
```
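If you would rather run that as a small script than retype it interactively, a minimal version might look like the one below. It initializes the total explicitly and uses shell arithmetic instead of expr, so treat it as a sketch rather than a transcript:

```
#!/bin/bash
# Count the man pages in sections 1 through 8 and print a grand total
tot=0
for num in {1..8}
do
    count=$(man -s "$num" -k . | wc -l)
    tot=$((tot + count))
    echo "section $num: $count pages"
done
echo "total: $tot man pages"
```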
There's a lot you can learn by reading man pages, but exploring them in other ways can help you become aware of commands you may not have known were available on your system.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519853/navigating-man-pages-in-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/YiRQIglwYig
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world


@ -0,0 +1,328 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using external libraries in Java)
[#]: via: (https://opensource.com/article/20/2/external-libraries-java)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
Using external libraries in Java
======
External libraries fill gaps in the Java core libraries.
![books in a library, stacks][1]
Java comes with a core set of libraries, including those that define commonly used data types and related behavior, like **String** or **Date**; utilities to interact with the host operating system, such as **System** or **File**; and useful subsystems to manage security, deal with network communications, and create or parse XML. Given the richness of this core set of libraries, it's often easy to find the necessary bits and pieces to reduce the amount of code a programmer must write to solve a problem.
Even so, there are a lot of interesting Java libraries created by people who find gaps in the core libraries. For example, [Apache Commons][2] "is an Apache project focused on all aspects of reusable Java components" and provides a collection of some 43 open source libraries (as of this writing) covering a range of capabilities either outside the Java core (such as [geometry][3] or [statistics][4]) or that enhance or replace capabilities in the Java core (such as [math][5] or [numbers][6]).
Another common type of Java library is an interface to a system component—for example, to a database system. This article looks at using such an interface to connect to a [PostgreSQL][7] database and get some interesting information. But first, I'll review the important bits and pieces of a library.
### What is a library?
A library, of course, must contain some useful code. But to be useful, that code needs to be organized in such a way that the Java programmer can access the components to solve the problem at hand.
I'll boldly claim that the most important part of a library is its application programming interface (API) documentation. This kind of documentation is familiar to many and is most often produced by [Javadoc][8], which reads structured comments in the code and produces HTML output that displays the API's packages in the panel in the top-left corner of the page; its classes in the bottom-left corner; and the detailed documentation at the library, package, or class level (depending on what is selected in the main panel) on the right. For example, the [top level of API documentation for Apache Commons Math][9] looks like:
![API documentation for Apache Commons Math][10]
Clicking on a package in the main panel shows the Java classes and interfaces defined in that package. For example, **[org.apache.commons.math4.analysis.solvers][11]** shows classes like **BisectionSolver** for finding zeros of univariate real functions using the bisection algorithm. And clicking on the [BisectionSolver][12] link lists all the methods of the class **BisectionSolver**.
This type of documentation is useful as reference information; it's not intended as a tutorial for learning how to use the library. For example, if you know what a univariate real function is and look at the package **org.apache.commons.math4.analysis.function**, you can imagine using that package to compose a function definition and then using the **org.apache.commons.math4.analysis.solvers** package to look for zeros of the just-created function. But really, you probably need more learning-oriented documentation to bridge to the reference documentation. Maybe even an example!
This documentation structure also helps clarify the meaning of _package_—a collection of related Java class and interface definitions—and shows what packages are bundled in a particular library.
The code for such a library is most commonly found in a [**.jar** file][13], which is basically a .zip file created by the Java **jar** command that contains some other useful information. **.jar** files are typically created as the endpoint of a build process that compiles all the **.java** files in the various packages defined.
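As a rough illustration of that packaging step (the directory layout and package name here are made up for the example), compiling the sources and bundling the resulting class files into a **.jar** can be as simple as:

```
# Compile the sources into a separate class directory, then package them
$ javac -d classes src/org/example/mylib/*.java
$ jar cf mylib.jar -C classes .
```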
There are two main steps to accessing the functionality provided by an external library:
1. Make sure the library is available to the Java compilation step—[**javac**][14]—and the execution step—**java**—via the classpath (either the **-cp** argument on the command line or the **CLASSPATH** environment variable).
2. Use the appropriate **import** statements to access the package and class in the program source code.
The rest is just like coding with Java core classes, such as **String**—write the code using the class and interface definitions provided by the library. Easy, eh? Well, maybe not quite that easy; first, you need to understand the intended use pattern for the library components, and then you can write code.
### An example: Connect to a PostgreSQL database
The typical use pattern for accessing data in a database system is:
1. Gain access to the code specific to the database software being used.
2. Connect to the database server.
3. Build a query string.
4. Execute the query string.
5. Do something with the results returned.
6. Disconnect from the database server.
The programmer-facing part of all of this is provided by a database-independent interface package, **[java.sql][15]**, which defines the core client-side Java Database Connectivity (JDBC) API. The **java.sql** package is part of the core Java libraries, so there is no need to supply a **.jar** file to the compile step. However, each database provider creates its own implementation of the **java.sql** interfaces—for example, the **Connection** interface—and those implementations must be provided on the run step.
Let's see how this works, using PostgreSQL.
#### Gain access to the database-specific code
The following code uses the [Java class loader][16] (the **Class.forName()** call) to bring the PostgreSQL driver code into the executing virtual machine:
```
import java.sql.*;

public class Test1 {
    public static void main(String args[]) {
        // Load the driver (jar file must be on class path) [1]
        try {
            Class.forName("org.postgresql.Driver");
            System.out.println("driver loaded");
        } catch (Exception e1) {
            System.err.println("couldn't find driver");
            System.err.println(e1);
            System.exit(1);
        }
        // If we get here all is OK
        System.out.println("done.");
    }
}
```
Because the class loader can fail, and therefore can throw an exception when failing, surround the call to **Class.forName()** in a try-catch block.
If you compile the above code with **javac** and run it with Java:
```
me@mymachine:~/Test$ javac Test1.java
me@mymachine:~/Test$ java Test1
couldn't find driver
java.lang.ClassNotFoundException: org.postgresql.Driver
me@mymachine:~/Test$
```
The class loader needs the **.jar** file containing the PostgreSQL JDBC driver implementation to be on the classpath:
```
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test1
driver loaded
done.
me@mymachine:~/Test$
```
#### Connect to the database server
The following code loads the JDBC driver and creates a connection to the PostgreSQL database:
```
import java.sql.*;

public class Test2 {
    public static void main(String args[]) {
        // Load the driver (jar file must be on class path) [1]
        try {
            Class.forName("org.postgresql.Driver");
            System.out.println("driver loaded");
        } catch (Exception e1) {
            System.err.println("couldn't find driver");
            System.err.println(e1);
            System.exit(1);
        }
        // Set up connection properties [2]
        java.util.Properties props = new java.util.Properties();
        props.setProperty("user", "me");
        props.setProperty("password", "mypassword");
        String database = "jdbc:postgresql://myhost.org:5432/test";
        // Open the connection to the database [3]
        try (Connection conn = DriverManager.getConnection(database, props)) {
            System.out.println("connection created");
        } catch (Exception e2) {
            System.err.println("sql operations failed");
            System.err.println(e2);
            System.exit(2);
        }
        System.out.println("connection closed");
        // If we get here all is OK
        System.out.println("done.");
    }
}
```
Compile and run it:
```
me@mymachine:~/Test$ javac Test2.java
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test2
driver loaded
connection created
connection closed
done.
me@mymachine:~/Test$
```
Some notes on the above:
* The code following comment [2] uses system properties to set up connection parameters—in this case, the PostgreSQL username and password. This allows for grabbing those parameters from the Java command line and passing all the parameters in as an argument bundle. There are other **DriverManager.getConnection()** options for passing in the parameters individually.
* JDBC requires a URL for defining the database, which is declared above as **String database** and passed into the **DriverManager.getConnection()** method along with the connection parameters.
* The code uses try-with-resources, which auto-closes the connection upon completion of the code in the try-catch block. There is a lengthy discussion of this approach on [Stack Overflow][23].
* The try-with-resources provides access to the **Connection** instance and can execute SQL statements there; any errors will be caught by the same **catch** statement.
#### Do something fun with the database connection
In my day job, I often need to know what users have been defined for a given database server instance, and I use this [handy piece of SQL][24] for grabbing a list of all users:
```
import java.sql.*;

public class Test3 {
    public static void main(String args[]) {
        // Load the driver (jar file must be on class path) [1]
        try {
            Class.forName("org.postgresql.Driver");
            System.out.println("driver loaded");
        } catch (Exception e1) {
            System.err.println("couldn't find driver");
            System.err.println(e1);
            System.exit(1);
        }
        // Set up connection properties [2]
        java.util.Properties props = new java.util.Properties();
        props.setProperty("user", "me");
        props.setProperty("password", "mypassword");
        String database = "jdbc:postgresql://myhost.org:5432/test";
        // Open the connection to the database [3]
        try (Connection conn = DriverManager.getConnection(database, props)) {
            System.out.println("connection created");
            // Create the SQL command string [4]
            String qs = "SELECT " +
                    "       u.usename AS \"User name\", " +
                    "       u.usesysid AS \"User ID\", " +
                    "       CASE " +
                    "       WHEN u.usesuper AND u.usecreatedb THEN " +
                    "               CAST('superuser, create database' AS pg_catalog.text) " +
                    "       WHEN u.usesuper THEN " +
                    "               CAST('superuser' AS pg_catalog.text) " +
                    "       WHEN u.usecreatedb THEN " +
                    "               CAST('create database' AS pg_catalog.text) " +
                    "       ELSE " +
                    "               CAST('' AS pg_catalog.text) " +
                    "       END AS \"Attributes\" " +
                    "FROM pg_catalog.pg_user u " +
                    "ORDER BY 1";
            // Use the connection to create a statement, execute it,
            // analyze the results and close the result set [5]
            Statement stat = conn.createStatement();
            ResultSet rs = stat.executeQuery(qs);
            System.out.println("User name;User ID;Attributes");
            while (rs.next()) {
                System.out.println(rs.getString("User name") + ";" +
                        rs.getLong("User ID") + ";" +
                        rs.getString("Attributes"));
            }
            rs.close();
            stat.close();
        } catch (Exception e2) {
            System.err.println("connecting failed");
            System.err.println(e2);
            System.exit(1);
        }
        System.out.println("connection closed");
        // If we get here all is OK
        System.out.println("done.");
    }
}
```
In the above, once it has the **Connection** instance, it defines a query string (comment [4] above), creates a **Statement** instance and uses it to execute the query string, then puts its results in a **ResultSet** instance, which it can iterate through to analyze the results returned, and ends by closing both the **ResultSet** and **Statement** instances (comment [5] above).
Compiling and executing the program produces the following output:
```
me@mymachine:~/Test$ javac Test3.java
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test3
driver loaded
connection created
User name;User ID;Attributes
fwa;16395;superuser
vax;197772;
mbe;290995;
aca;169248;
connection closed
done.
me@mymachine:~/Test$
```
This is a (very simple) example of using the PostgreSQL JDBC library in a simple Java application. It's worth emphasizing that it didn't need to use a Java import statement like **import org.postgresql.jdbc.*;** in the code because of the way the **java.sql** library is designed. Because of that, there's no need to specify the classpath at compile time. Instead, it uses the Java class loader to bring in the PostgreSQL code at run time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/external-libraries-java
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_library_reading_list.jpg?itok=O3GvU1gH (books in a library, stacks)
[2]: https://commons.apache.org/
[3]: https://commons.apache.org/proper/commons-geometry/
[4]: https://commons.apache.org/proper/commons-statistics/
[5]: https://commons.apache.org/proper/commons-math/
[6]: https://commons.apache.org/proper/commons-numbers/
[7]: https://opensource.com/article/19/11/getting-started-postgresql
[8]: https://en.wikipedia.org/wiki/Javadoc
[9]: https://commons.apache.org/proper/commons-math/apidocs/index.html
[10]: https://opensource.com/sites/default/files/uploads/api-documentation_apachecommonsmath.png (API documentation for Apache Commons Math)
[11]: https://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math4/analysis/solvers/package-summary.html
[12]: https://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math4/analysis/solvers/BisectionSolver.html
[13]: https://en.wikipedia.org/wiki/JAR_(file_format)
[14]: https://en.wikipedia.org/wiki/Javac
[15]: https://docs.oracle.com/javase/8/docs/api/java/sql/package-summary.html
[16]: https://en.wikipedia.org/wiki/Java_Classloader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+properties
[21]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+connection
[22]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+drivermanager
[23]: https://stackoverflow.com/questions/8066501/how-should-i-use-try-with-resources-with-jdbc
[24]: https://www.postgresql.org/message-id/1121195544.8208.242.camel@state.g2switchworks.com
[25]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+statement
[26]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+resultset
[27]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+attributes


@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Change the Default Terminal in Ubuntu)
[#]: via: (https://itsfoss.com/change-default-terminal-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Change the Default Terminal in Ubuntu
======
Terminal is a crucial part of any Linux system. It allows you to access your Linux systems through a shell. There are several terminal applications (technically called terminal emulators) on Linux.
Most of the [desktop environments][1] have their own implementation of the terminal. It may look different and may have different keyboard shortcuts.
For example, [Guake Terminal][2] is extremely useful for power users and provides several features you might not get in your distributions terminal by default.
You can install other terminals on your system and use it as default that opens up with the usual [keyboard shortcut of Ctrl+Alt+T][3].
Now the question comes: how do you change the default terminal in Ubuntu? It doesn't follow the standard way of [changing default applications in Ubuntu][4], so how do you do it?
### Change the default terminal in Ubuntu
![][5]
On Debian-based distributions, there is a handy command line utility called [update-alternatives][6] that allows you to handle the default applications.
You can use it to change the default command line text editor, terminal and more. To do that, run the following command:
```
sudo update-alternatives --config x-terminal-emulator
```
It will show all the terminal emulators present on your system that can be used as default. The current default terminal is marked with the asterisk.
```
[email protected]:~$ sudo update-alternatives --config x-terminal-emulator
There are 2 choices for the alternative x-terminal-emulator (providing /usr/bin/x-terminal-emulator).
Selection Path Priority Status
------------------------------------------------------------
0 /usr/bin/gnome-terminal.wrapper 40 auto mode
1 /usr/bin/gnome-terminal.wrapper 40 manual mode
* 2 /usr/bin/st 15 manual mode
Press <enter> to keep the current choice[*], or type selection number:
```
All you have to do is to enter the selection number. In my case, I want to use the GNOME terminal instead of the one from [Regolith desktop][7].
```
Press <enter> to keep the current choice[*], or type selection number: 1
update-alternatives: using /usr/bin/gnome-terminal.wrapper to provide /usr/bin/x-terminal-emulator (x-terminal-emulator) in manual mode
```
##### Auto mode vs manual mode
You might have noticed the auto mode and manual mode in the output of update-alternatives command.
If you choose auto mode, your system may automatically decide on the default application as the packages are installed or removed. The decision is influenced by the priority number (as seen in the output of the command in the previous section).
Suppose you have 5 terminal emulators installed on your system and you delete the default one. Now, your system will check which of the emulators are in auto mode. If there are more than one, it will choose the one with the highest priority as the default emulator.
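If a terminal emulator you installed doesn't appear in the list at all, it probably hasn't been registered as an alternative yet. Registering one yourself looks roughly like this; the emulator path and the priority value here are only examples:

```
sudo update-alternatives --install /usr/bin/x-terminal-emulator x-terminal-emulator /usr/bin/alacritty 50
```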
I hope you find this quick little tip useful. Your questions and suggestions are always welcome.
--------------------------------------------------------------------------------
via: https://itsfoss.com/change-default-terminal-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-desktop-environments/
[2]: http://guake-project.org/
[3]: https://itsfoss.com/ubuntu-shortcuts/
[4]: https://itsfoss.com/change-default-applications-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/switch_default_terminal_ubuntu.png?ssl=1
[6]: https://manpages.ubuntu.com/manpages/trusty/man8/update-alternatives.8.html
[7]: https://itsfoss.com/regolith-linux-desktop/


@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (elementary OS is Building an App Center Where You Can Buy Open Source Apps for Your Linux Distribution)
[#]: via: (https://itsfoss.com/appcenter-for-everyone/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
elementary OS is Building an App Center Where You Can Buy Open Source Apps for Your Linux Distribution
======
_**Brief: elementary OS is building an app center ecosystem where you can buy open source applications for your Linux distribution.**_
### Crowdfunding to build an open source AppCenter for everyone
![][1]
[elementary OS][2] recently announced that it is [crowdfunding a campaign to build an app center][3] from where you can buy open source applications. The applications in the app center will be in Flatpak format.
Though it's an initiative taken by elementary OS, this new app center will be available for other distributions as well.
The campaign aims to fund a week of in-person development sprint in Denver, Colorado (USA) featuring developers from elementary OS, [Endless][4], [Flathub][5] and [GNOME][6].
The crowdfunding campaign has already crossed its goal of raising $10,000. You can still fund it as additional funds will be used for the development of elementary OS.
[Crowdfunding Campaign][3]
### What features this AppCenter brings
The focus is on providing secure applications and hence [Flatpak][7] apps are used to provide confined applications. In this format, apps will be restricted from accessing system or personal files and will be isolated from other apps on a technical level by default.
Apps will have access to operating system and personal files only if you explicitly provide your consent for it.
Apart from security, [Flatpak][8] also bundles all the dependencies. This way, app developers can utilize the cutting edge technologies even if it is not available on the current Linux distribution.
AppCenter will also have the wallet feature to save your card details. This enables you to quickly pay for apps without entering the card details each time.
![][9]
This new open source app center will be available for other Linux distributions as well.
### Inspired by the success of elementary OS's own Pay What You Want app center model
A couple of years ago, elementary OS launched its own app center. The pay what you want approach for the app center was quite a hit. The developers can put a minimum amount for their open source apps and the users can choose to pay an amount equal to or more than the minimum amount.
![][10]
This helped several indie developers get paid for their open source applications. The app store now has around 160 native applications and elementary OS says that thousands of dollars have been paid to the developers through the app center.
Inspired by the success of this app center experiment in elementary OS, they now want to bring this app center approach to other distributions as well.
### If the applications are open source, how can you charge money for it?
Some people still get confused with the idea of FOSS (free and open source). Here, the **source** code of the software is **open** and anyone is **free** to modify it and redistribute it.
It doesnt mean that open source software has to be free of cost. Some developers rely on donations while some charge a fee for support.
Getting paid for the open source apps may encourage developers to create [applications for Linux][11].
### Lets see if it could work
![][12]
Personally, I am not a huge fan of the Flatpak or Snap packaging formats. They do have their benefits, but they take relatively longer to start and they are huge in size. If you install several such Snaps or Flatpaks, your disk may start running out of free space.
There is also a need to be vigilant about fake and scam developers in this new app ecosystem. Imagine if scammers start creating Flatpak packages of obscure open source applications and put them on the app center. I hope the developers put some sort of mechanism in place to weed out such apps.
I do hope that this new AppCenter replicates the success it has seen in elementary OS. We definitely need a better ecosystem for open source apps for desktop Linux.
What are your views on it? Is it the right approach? What suggestions do you have for the improvement of the AppCenter?
--------------------------------------------------------------------------------
via: https://itsfoss.com/appcenter-for-everyone/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter.png?ssl=1
[2]: https://elementary.io/
[3]: https://www.indiegogo.com/projects/appcenter-for-everyone/
[4]: https://itsfoss.com/endless-linux-computers/
[5]: https://flathub.org/
[6]: https://www.gnome.org/
[7]: https://flatpak.org/
[8]: https://itsfoss.com/flatpak-guide/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter-wallet.png?ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter-payment.png?ssl=1
[11]: https://itsfoss.com/essential-linux-applications/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/open_source_app_center.png?ssl=1

View File

@ -1,232 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
如何进行 Linux 启动时间优化
======
[![][1]][2]
_快速启动一台嵌入式设备或一台电信设备对于时间要求严格的应用程序是至关重要的并且在改善用户体验方面也起着非常重要的作用。这个文件给予一些关于如何增强任意设备的启动时间的重要技巧。_
快速启动或快速重启在各种情况下起着至关重要的作用。对于一套嵌入式设备来说,开始启动是为了保持所有服务的高可用性和更好的性能。设想一台电信设备运行一套没有启用快速启动的 Linux 操作系统。依赖于这个特殊嵌入式设备的所有的系统,服务和用户可能会受到影响。这些设备在其服务中维持高可用性是非常重要的,为此,快速启动和重启起着至关重要的作用。
一台电信设备的一次小故障或关机,甚至几秒钟,都可能会对无数在因特网上工作的用户造成破坏。因此,对于很多对时间要求严格的设备和电信设备来说,在它们的服务中包含快速启动以帮助它们快速重新开始工作是非常重要的。让我们从图表 1 中理解 Linux 启动过程。
![Figure 1: Boot-up procedure][3]
![Figure 2: Boot chart][4]
**监视工具和启动过程**
在对机器做出更改之前,用户应注意许多因素,这包括机器的当前启动速度,以及占用资源、增加启动时间的服务、进程或应用程序。
**Boot chart:** 为监视启动速度和在启动期间启动的各种服务,用户可以使用下面的命令来安装 boot chart
```
sudo apt-get install pybootchartgui.
```
你每次启动时boot chart 在日志中保存一个 _.png_ (便携式网络图片)文件,使用户能够查看 _png_ 文件来理解系统的启动过程和服务。为此,使用下面的命令:
```
cd /var/log/bootchart
```
用户可能需要一个应用程序来查看 _.png_ 文件。Feh 是一个面向控制台用户的 X11 图像查看器。不像大多数其它的图像查看器它没有一个精致的图形用户界面但是它仅仅显示图片。Feh 可以用于查看 _.png_ 文件。你可以使用下面的命令来安装它:
```
sudo apt-get install feh
```
你可以使用 _feh xxxx.png_ 来查看 _png_ 文件。
图表 2 显示查看一个 boot chart 的 _png_ 文件时的启动图表。
但是,对于 Ubuntu 15.10 以后的版本不再需要 boot chart 。 为获取关于启动速度的简短信息,使用下面的命令:
```
systemd-analyze
```
![Figure 3: Output of systemd-analyze][5]
图表 3 显示命令 _systemd-analyze_ 的输出。
命令 _systemd-analyze_ blame 用于打印所有正在运行的基于初始化所用的时间的单元。这个信息是非常有用的并且可用于优化启动时间。systemd-analyze blame 不会显示服务于使用 _Type=simple_ 的结果,因为 systemd 认为这些服务是立即启动的;因此,不能完成测量初始化的延迟。
![Figure 4: Output of systemd-analyze blame][6]
图表 4 显示 _systemd-analyze_ blame 的输出.
下面的命令打印一个单元的时间关键的链的树:
```
command systemd-analyze critical-chain
```
图表 5 显示命令_systemd-analyze critical-chain_ 的输出。
![Figure 5: Output of systemd-analyze critical-chain][7]
**减少启动时间的步骤**
下面显示的是一些可采取的用于减少启动时间的步骤。
**BUM (启动管理器):** BUM 是一个运行级配置编辑器,当系统启动或重启时,允许 _init_ 服务的配置。它显示在启动时可以启动的每个服务的一个列表。用户可以打开和关闭之间切换个别的服务。 BUM 有一个非常干净的图形用户界面,并且非常容易使用。
在 Ubuntu 14.04 中, BUM 可以使用下面的命令安装:
```
sudo apt-get install bum
```
为在 15.10 以后的版本中安装它,从链接 _<http://apt.ubuntu.com/p/bum> 13_ 下载软件包。
以基础的事开始,禁用扫描仪和打印机相关的服务。如果你没有使用蓝牙和其它不想要的设备和服务,你也可以禁用它们中一些。我强烈建议你在禁用相关的服务前学习它们的基础知识,因为它可能会影响机器或操作系统。图表 6 显示 BUM 的图形用户界面。
![Figure 6: BUM][8]
**编辑 rc 文件:** 为编辑 rc 文件,你需要转到 rc 目录。这可以使用下面的命令来做到:
```
cd /etc/init.d.
```
然而,访问 _init.d_ 需要 root 用户权限,它基本上包含了开始/停止脚本,当系统在运行时或在启动期间,控制(开始,停止,重新加载,启动启动)守护进程。
_rc_ 文件在 _init.d_ 中被称为一个运行控制脚本。在启动期间init 执行 _rc_ 脚本并发挥它的作用。为改善启动速度,我们更改 _rc_ 文件。使用任意的文件编辑器打开 _rc_ 文件(当你在 _init.d_ 目录中时)。
例如,通过输入 _vim rc_ ,你可以更改 _CONCURRENCY=none_ 的值为 _CONCURRENCY=shell_ 。后者允许同时执行某些起始阶段的脚本,而不是连续地间断地交替执行。
在最新版本的内核中,该值应该被更改为 _CONCURRENCY=makefile_
图表 7 和 8 显示编辑 rc 文件前后的启动时间的比较。启动速度的改善可以被注意到。在编辑 rc 文件前的启动时间是 50.98 秒,然而在对 rc 文件进行更改后的启动时间是 23.85 秒。
但是,上面提及的更改方法在 Ubuntu 15.10 以后的操作系统上不工作,因为使用最新内核的操作系统使用 systemd 文件,而不再是 _init.d_ 文件。
![Figure 7: Boot speed before making changes to the rc file][9]
![Figure 8: Boot speed after making changes to the rc file][10]
**E4rat:** E4rat 代表 e4 ‘减少访问时间’ (仅在 ext4 文件系统的情况下). 它是由 Andreas Rid 和 Gundolf Kiefer 开发的一个项目. E4rat 是一个在碎片整理的帮助下来达到一次快速启动的应用程序。它也加速应用程序的启动。E4rat 排除使用物理文件重新分配的寻道时间和旋转延迟。这导致一个高速的磁盘传输速度。
E4rat 作为一个可以获得的 .deb 软件包,你可以从它的官方网站 _<http://e4rat.sourceforge.net/>_ 下载它.
Ubuntu 默认的 ureadahead 软件包与 e4rat 冲突。因此不得不使用下面的命令安装几个软件包:
```
sudo dpkg purge ureadahead ubuntu-minimal
```
现在使用下面的命令来安装 e4rat 的依赖关系:
```
sudo apt-get install libblkid1 e2fslibs
```
打开下载的 _.deb_ 文件,并安装它。现在需要恰当地收集启动数据来使 e4rat 工作。
遵循下面所给的步骤来使 e4rat 正确地运行,并提高启动速度。
* 在启动期间访问 Grub 菜单。这可以在系统启动时通过按住 shift 按键来完成。
* 选择通常用于启动的选项(内核版本),并按 e
* 查找以 _linux /boot/vmlinuz_ 开头的行,并在该行的末尾添加下面的代码(在句子的最后一个字母后按空格键)
```
- init=/sbin/e4rat-collect or try - quiet splash vt.handsoff =7 init=/sbin/e4rat-collect
```
* 现在,按 _Ctrl+x_ 来继续启动。这让 e4rat 在启动后收集数据。在机器上工作,打开应用程序,并在接下来的两分钟时间内关闭应用程序。
* 通过转到 e4rat 文件夹,并使用下面的命令来访问日志文件:
```
cd /var/log/e4rat
```
* 如果你没有找到任何日志文件,重复上面的过程。一旦日志文件在这里,再次访问 Grub 菜单,并按 e 作为你的选项。
* 在你之前已经编辑过的同一行的末尾输入 single 。这将帮助你访问命令行。如果出现一个要求任何东西的不同菜单选择恢复正常启动Resume normal boot。如果你不知为何不能进入命令提示符按 Ctrl+Alt+F1 组合键。
* 在你看到登录提示后,输入你的详细信息。
* 现在输入下面的命令:
```
sudo e4rat-realloc /var/lib/e4rat/startup.log
```
这个进程需要一段时间,依赖于机器的磁盘速度。
* 现在使用下面的命令来重启你的机器:
```
sudo shutdown -r now
```
* 现在,我们需要配置 Grub 来在每次启动时运行 e4rat 。
* 使用任意的编辑器访问 grub 文件。例如, _gksu gedit /etc/default/grub 。_
* 查找以 _GRUB CMDLINE LINUX DEFAULT=_ 开头的一行,并在引号之间和任何选项之前添加下面的行:
```
init=/sbin/e4rat-preload 18
```
* 它应该看起来像这样:
```
GRUB CMDLINE LINUX DEFAULT = init=/sbin/e4rat- preload quiet splash
```
* 保存并关闭 Grub 菜单,并使用 _sudo update-grub_ 更新 Grub 。
* 重启系统,你将在启动速度方面发现显著的变化。
图表 9 和 10 显示在安装 e4rat 前后的启动时间的不同。启动速度的改善可以被注意到。在使用 e4rat 前启动所用时间是 22.32 秒,然而在使用 e4rat 后启动所用时间是 9.065 秒。
![Figure 9: Boot speed before using e4rat][11]
![Figure 10: Boot speed after using e4rat][12]
**一些易做的调整**
一个极好的启动速度也可以使用非常小的调整来实现,其中两个在下面列出。
**SSD:** 使用固态设备而不是普通的硬盘或者其它的存储设备将肯定会改善启动速度。SSD 也帮助获得在传输文件和运行应用程序方面的极好速度。
**禁用图形用户界面:** 图形用户界面,桌面图形和窗口动画占用大量的资源。禁用图形用户界面是另一个实现极好的启动速度的好方法。
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
作者:[B Thangaraju][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/b-thangaraju/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?resize=696%2C496&ssl=1 (Screenshot from 2019-10-07 13-16-32)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?resize=350%2C302&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?resize=350%2C412&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?resize=350%2C69&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?resize=350%2C535&ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?resize=350%2C206&ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?resize=350%2C449&ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?resize=350%2C85&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?resize=350%2C72&ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?resize=350%2C61&ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?resize=350%2C61&ssl=1


@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NVIDIAs Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux)
[#]: via: (https://itsfoss.com/geforce-now-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
NVIDIA 的云游戏服务 GeForce NOW 无耻地忽略了Linux
======
NVIDIA 的 [GeForce NOW][1] 云游戏服务(在线串流游戏,可在任何设备上游玩)对于那些可能没有高端硬件、但想在最新最好的游戏上获得尽可能好的游戏体验的玩家来说,是充满前景的。
该服务仅限于一些用户(以等待列表的形式)使用。然而,他们最近宣布 [GeForce NOW 面向所有人开放][2]。但实际上并不是。
有趣的是,它**并不是面向全球所有区域**。而且,更糟的是 **GeForce NOW 不支持 Linux**
![][3]
### GeForce NOW 并不是向“所有人开放”
制作一个基于订阅的云服务来玩游戏的目的是消除平台依赖性。
就像你通常使用浏览器访问网站一样,你应该能够在每个平台上玩游戏。是这个概念吧?
![][4]
好吧,这绝对不是火箭科学,但是 NVIDIA 仍然不支持 Linux和iOS
### 是因为没有人使用 Linux 吗?
我非常不同意这一点,即使这是某些不支持 Linux 的原因。如果真是这样,我不会在使用 Linux 作为主要桌面操作系统时为 “Its FOSS” 写文章。
不仅如此,如果 Linux 不值一提,你认为为何一个 Twitter 用户会提到缺少 Linux 支持?
![][5]
是的,也许用户群不够大,但是在考虑将其作为基于云的服务时,**不支持 Linux** 显得没有意义。
从技术上讲,如果 Linux 上没有游戏,那么 **Valve** 就不会在 Linux 上改进 [Steam Play][6] 来帮助更多用户在 Linux 上玩纯 Windows 的游戏。
我不想说不正确的说法,但台式机 Linux 游戏的发展比以往任何时候都要快(即使统计上要比 Mac 和 Windows 要低)。
### 云游戏不应该像这样
![][7]
如上所述,找到使用 Steam Play 的 Linux 玩家不难。只是你会发现 Linux 上游戏玩家的整体“市场份额”低于其他平台。
即使这是事实,云游戏也不应该依赖于特定平台。而且,考虑到 GeForce NOW 本质上是一种基于浏览器的可以玩游戏的流媒体服务,所以对于像 NVIDIA 这样的大公司来说,支持 Linux 并不困难
来吧Nvidia_你想要我们相信在技术上支持 Linux 有困难吗?_或者,_你只是想说不值得支持 Linux 平台?_
**总结**
不管我为 GeForce NOW 服务发布而感到多么兴奋,当看到它根本不支持 Linux我感到非常失望。
如果像 GeForce NOW 这样的云游戏服务在不久的将来开始支持 Linux**你可能没有理由使用 Windows 了***咳嗽*)。
你怎么看待这件事?在下面的评论中让我知道你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/geforce-now-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.nvidia.com/en-us/geforce-now/
[2]: https://blogs.nvidia.com/blog/2020/02/04/geforce-now-pc-gaming/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now-linux.jpg?ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now.png?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/geforce-now-twitter-1.jpg?ssl=1
[6]: https://itsfoss.com/steam-play/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/ge-force-now.jpg?ssl=1