mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-03-27 02:30:10 +08:00
merge upstream master
This commit is contained in: commit 5dcee8fe66
@@ -0,0 +1,208 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11881-1.html)
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)

如何进行 Linux 启动时间优化
======

![][2]
> 快速启动嵌入式设备或电信设备,对于时间要求紧迫的应用程序是至关重要的,并且在改善用户体验方面也起着非常重要的作用。这篇文章给出一些关于如何缩短任意设备启动时间的重要技巧。

快速启动或快速重启在各种情况下起着至关重要的作用。为了保持所有服务的高可用性和更好的性能,嵌入式设备的快速启动至关重要。设想有一台运行着没有启用快速启动的 Linux 操作系统的电信设备,所有依赖于这个特定嵌入式设备的系统、服务和用户都可能会受到影响。这些设备维持其服务的高可用性是非常重要的,为此,快速启动和重启起着至关重要的作用。

一台电信设备的一次小故障或关机,即使只是几秒钟,都可能会对无数互联网用户造成破坏。因此,对于很多对时间要求严格的设备和电信设备来说,在设备中加入快速启动的功能以帮助它们快速恢复工作是非常重要的。让我们从图 1 中理解 Linux 的启动过程。

![图 1:启动过程][3]

### 监视工具和启动过程

在对机器做出更改之前,用户应注意许多因素,其中包括计算机的当前启动速度,以及占用资源并增加启动时间的服务、进程或应用程序。

#### 启动图

要监视启动速度和在启动期间启动的各种服务,用户可以使用下面的命令安装启动图工具:

```
sudo apt-get install pybootchartgui
```

每次启动时,启动图会在日志中保存一个 png 文件,用户可以通过查看该 png 文件来理解系统的启动过程和各个服务。为此,使用下面的命令:

```
cd /var/log/bootchart
```

用户可能需要一个应用程序来查看 png 文件。Feh 是一个面向控制台用户的 X11 图像查看器。不像大多数其它的图像查看器,它没有精致的图形用户界面,只用来显示图片。Feh 可以用于查看 png 文件。你可以使用下面的命令来安装它:

```
sudo apt-get install feh
```

你可以使用 `feh xxxx.png` 来查看 png 文件。
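
下面是一个小示例(假设启动图已经生成过日志,并且 feh 已安装),用一条管道挑出最近一次启动生成的 png 并打开它:

```shell
# 进入启动图的日志目录(bootchart 的默认输出位置)
cd /var/log/bootchart
# 按修改时间排序,取最新的一个 png 文件
latest="$(ls -t ./*.png | head -n 1)"
echo "最近一次启动的启动图:$latest"
# 用 feh 查看(需要图形环境)
feh "$latest"
```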

![图 2:启动图][4]

图 2 显示了一个正在查看的启动图 png 文件。

#### systemd-analyze

但是,对于 Ubuntu 15.10 以后的版本,不再需要启动图。要获取关于启动速度的简短信息,使用下面的命令:

```
systemd-analyze
```

![图 3:systemd-analyze 的输出][5]

图 3 显示命令 `systemd-analyze` 的输出。

命令 `systemd-analyze blame` 用于根据初始化所用的时间打印所有正在运行的单元的列表。这个信息是非常有用的,可用于优化启动时间。`systemd-analyze blame` 不会显示服务类型为简单(`Type=simple`)的服务,因为 systemd 认为这些服务应是立即启动的,因此无法测量其初始化的延迟。

![图 4:systemd-analyze blame 的输出][6]

图 4 显示 `systemd-analyze blame` 的输出。

下面的命令打印时间关键的服务单元的树形链条:

```
systemd-analyze critical-chain
```

图 5 显示命令 `systemd-analyze critical-chain` 的输出。

![图 5:systemd-analyze critical-chain 的输出][7]
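
找出耗时最长的单元之后,常见的后续做法是用 `systemctl` 禁用确实不需要的服务。下面是一个假设性示例(`NetworkManager-wait-online.service` 只是常见的耗时服务之一,是否禁用取决于你的环境):

```shell
# 列出初始化耗时最长的前 5 个单元
systemd-analyze blame | head -n 5
# 示例:禁用等待网络就绪的服务(请先确认你的系统不依赖它)
sudo systemctl disable NetworkManager-wait-online.service
# 重启后再次测量,对比总启动时间
systemd-analyze
```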

### 减少启动时间的步骤

下面是一些可以减少启动时间的步骤。

#### BUM(启动管理器)

BUM 是一个运行级配置编辑器,允许在系统启动或重启时配置初始化服务。它显示了可以在启动时启动的每个服务的列表,用户可以打开和关闭各个服务。BUM 有一个非常清晰的图形用户界面,并且非常容易使用。

在 Ubuntu 14.04 中,BUM 可以使用下面的命令安装:

```
sudo apt-get install bum
```

要在 15.10 以后的版本中安装它,请从链接 http://apt.ubuntu.com/p/bum 下载软件包。

从基本的服务开始,禁用扫描仪和打印机相关的服务。如果你没有使用蓝牙和其它不需要的设备和服务,也可以禁用它们中的一些。我强烈建议你在禁用相关服务前先学习这些服务的基础知识,因为这可能会影响计算机或操作系统。图 6 显示 BUM 的图形用户界面。

![图 6:BUM][8]

#### 编辑 rc 文件

要编辑 rc 文件,你需要转到 rc 目录。这可以使用下面的命令做到:

```
cd /etc/init.d
```

然而,访问 `init.d` 需要 root 用户权限。该目录主要包含开始/停止脚本,这些脚本用于在系统运行期间或启动期间控制(开始、停止、重新加载、重启)守护进程。

在 `init.d` 目录中的 `rc` 文件被称为<ruby>运行控制<rt>run control</rt></ruby>脚本。在启动期间,`init` 执行 `rc` 脚本并发挥它的作用。为改善启动速度,我们可以更改 `rc` 文件。使用任意的文件编辑器打开 `rc` 文件(当你在 `init.d` 目录中时)。

例如,通过输入 `vim rc`,你可以把 `CONCURRENCY=none` 更改为 `CONCURRENCY=shell`。后者允许某些启动脚本同时执行,而不是依序执行。

在最新版本的内核中,该值应该被更改为 `CONCURRENCY=makefile`。
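
这个改动也可以不进入编辑器,用一条 `sed` 命令完成(示例假设该行的原值为 `CONCURRENCY=none`):

```shell
# 把 /etc/init.d/rc 中的 CONCURRENCY 值改为 makefile(修改系统文件需要 root 权限)
sudo sed -i 's/^CONCURRENCY=none$/CONCURRENCY=makefile/' /etc/init.d/rc
# 确认修改结果
grep '^CONCURRENCY=' /etc/init.d/rc
```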

图 7 和图 8 显示了编辑 `rc` 文件前后的启动时间比较,可以注意到启动速度有所提高。编辑 `rc` 文件前的启动时间是 50.98 秒,而更改后的启动时间是 23.85 秒。

但是,上面提及的更改方法在 Ubuntu 15.10 以后的操作系统上不起作用,因为使用最新内核的操作系统使用的是 systemd 文件,而不再是 `init.d` 文件。

![图 7:对 rc 文件进行更改之前的启动速度][9]

![图 8:对 rc 文件进行更改之后的启动速度][10]

#### E4rat

E4rat 代表 e4 <ruby>减少访问时间<rt>reduced access time</rt></ruby>(仅适用于 ext4 文件系统)。它是由 Andreas Rid 和 Gundolf Kiefer 开发的一个项目。E4rat 是一个通过碎片整理来帮助快速启动的应用程序,它还会加速应用程序的启动。E4rat 使用物理文件的重新分配来消除寻道时间和旋转延迟,因而达到较高的磁盘传输速度。

E4rat 以 .deb 软件包形式提供,你可以从它的官方网站 http://e4rat.sourceforge.net/ 下载。

Ubuntu 默认安装的 ureadahead 软件包与 e4rat 冲突,因此必须使用下面的命令清除这几个软件包:

```
sudo dpkg --purge ureadahead ubuntu-minimal
```

现在使用下面的命令来安装 e4rat 的依赖:

```
sudo apt-get install libblkid1 e2fslibs
```

打开下载的 .deb 文件并安装它。现在需要恰当地收集启动数据才能让 e4rat 工作。

遵循下面给出的步骤来让 e4rat 正确运行并提高启动速度:

* 在启动期间访问 Grub 菜单。这可以在系统启动时按住 `Shift` 键来完成。
* 选择通常用于启动的选项(内核版本),并按 `e`。
* 查找以 `linux /boot/vmlinuz` 开头的行,并在该行末尾(最后一个词之后加一个空格)添加:`init=/sbin/e4rat-collect`(或者尝试 `quiet splash vt.handoff=7 init=/sbin/e4rat-collect`)。
* 现在,按 `Ctrl+x` 继续启动。这可以让 e4rat 在启动后收集数据。在这台机器上工作,并在接下来的两分钟内打开并关闭一些应用程序。
* 通过转到 e4rat 文件夹,使用下面的命令访问日志文件:`cd /var/log/e4rat`。
* 如果你没有找到任何日志文件,重复上面的过程。一旦日志文件就绪,再次访问 Grub 菜单,并对你的选项按 `e`。
* 在你之前编辑过的同一行末尾输入 `single`。这可以让你进入命令行。如果出现其它菜单,选择恢复正常启动(Resume normal boot)。如果你因故不能进入命令提示符,按 `Ctrl+Alt+F1` 组合键。
* 在看到登录提示后,输入你的登录信息。
* 现在输入下面的命令:`sudo e4rat-realloc /var/lib/e4rat/startup.log`。此过程需要一段时间,具体取决于机器的磁盘速度。
* 现在使用下面的命令重启你的机器:`sudo shutdown -r now`。
* 现在,我们需要配置 Grub 以在每次启动时运行 e4rat。
* 使用任意编辑器打开 grub 文件。例如:`gksu gedit /etc/default/grub`。
* 查找以 `GRUB_CMDLINE_LINUX_DEFAULT=` 开头的一行,并在引号之间、任何选项之前添加:`init=/sbin/e4rat-preload`。
* 它应该看起来像这样:`GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload quiet splash"`。
* 保存并关闭 grub 文件,并使用 `sudo update-grub` 更新 Grub。
* 重启系统,你将发现启动速度有明显变化。
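
上面对 `/etc/default/grub` 的修改也可以用一条 `sed` 命令完成(示例假设该行形如 `GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"`,修改前请自行备份该文件):

```shell
# 在引号内最前面插入 e4rat 预加载参数
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&init=\/sbin\/e4rat-preload /' /etc/default/grub
# 确认修改结果
grep '^GRUB_CMDLINE_LINUX_DEFAULT' /etc/default/grub
# 使修改生效
sudo update-grub
```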

图 9 和图 10 显示了安装 e4rat 前后启动时间的差异,可以注意到启动速度的提高。使用 e4rat 前启动所用时间是 22.32 秒,而使用 e4rat 后是 9.065 秒。

![图 9:使用 e4rat 之前的启动速度][11]

![图 10:使用 e4rat 之后的启动速度][12]

### 一些易做的调整

使用很小的调整也可以获得良好的启动速度,下面列出其中两个。

#### SSD

使用固态硬盘而不是普通硬盘或其它存储设备肯定会改善启动速度。SSD 也有助于加快文件传输和应用程序运行的速度。

#### 禁用图形用户界面

图形用户界面、桌面图形和窗口动画会占用大量资源。禁用图形用户界面是获得良好启动速度的另一个好方法。

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/

作者:[B Thangaraju][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/b-thangaraju/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?&ssl=1 (Screenshot from 2019-10-07 13-16-32)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?ssl=1
65
published/20200109 What-s HTTPS for secure computing.md
Normal file
@@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11877-1.html)
[#]: subject: (What's HTTPS for secure computing?)
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)

用于安全计算的 HTTPS 是什么?
======

> 在默认的情况下,网站的安全性还不足够。



在过去的几年里,寻找一个只以 “http://...” 开头的网站变得越来越难,这是因为业界终于意识到,网络安全“是件事”,同时也是因为客户端和服务端之间建立和使用 HTTPS 连接变得更加容易了。类似的转变正以不同的方式发生在云计算、边缘计算、物联网、区块链、人工智能、机器学习等领域。长久以来,我们都知道应该对存储的静态数据和在网络中传输的数据进行加密,但是在使用和处理数据的时候对它进行加密是困难且昂贵的。可信计算(使用例如<ruby>受信任的执行环境<rt>Trusted Execution Environments</rt></ruby>(TEE)这样的硬件功能来为数据和算法提供这种类型的保护)可以保护主机系统中的或者易受攻击的环境中的数据。

关于 [TEE][2],当然,还有我和 Nathaniel McCallum 共同创立的 [Enarx 项目][3],我已经写过几次文章(参见《[给每个人的 Enarx(一个任务)][4]》和《[Enarx 迈向多平台][5]》)。Enarx 使用 TEE 来提供独立于平台和语言的部署平台,以此来让你能够安全地将敏感应用或者敏感组件(例如微服务)部署在你不信任的主机上。当然,Enarx 是完全开源的(顺便提一下,我们使用的是 Apache 2.0 许可证)。能够在你不信任的主机上运行工作负载,这是可信计算的承诺,它扩展了对静态敏感数据和传输中数据加密的常规做法:

* **存储**:你要加密你的静态数据,因为你不完全信任你的基础存储架构。
* **网络**:你要加密你正在传输中的数据,因为你不完全信任你的基础网络架构。
* **计算**:你要加密你正在使用中的数据,因为你不完全信任你的基础计算架构。

关于信任,我有非常多的话想说,而且,上述说法里的“**完全**”一词是很重要的(在重读这篇文章时,我特意加上了这个词)。不论哪种情况,你都必须在一定程度上信任你的基础设施,无论是传递你的数据包还是存储你的数据块。例如,对于计算基础架构,你必须要信任 CPU 和与之关联的固件,这是因为如果你不信任它们,你就无法真正地进行计算(现在有一些诸如<ruby>同态加密<rt>homomorphic encryption</rt></ruby>一类的技术正开始提供一些可能性,但是它们依然有限,还不够成熟)。

考虑到已发现的一些 CPU 安全性问题,人们自然会产生疑问:是否应该完全信任 CPU,以及它们在面对针对其所在主机的物理攻击时是否完全安全。

这两个问题的回答都是“不”,但是考虑到大规模可用性和普遍推广的成本,这已经是我们当前拥有的最好的技术了。为了解决第二个问题,没有人假装这项技术(或者任何其他技术)是完全安全的:我们需要做的是思考我们的[威胁模型][6],并确定在这种情况下 TEE 是否为我们的特殊需求提供了足够的安全防护。关于第一个问题,Enarx 采用的模型是在部署时就对你是否信任一个特定的 CPU 组做出决定。举个例子,如果供应商 Q 的 R 代芯片被发现有漏洞,可以很简单地说“我拒绝将我的工作负载部署到 Q 的 R 代芯片上去,但是仍然可以部署到 Q 的 S 型号、T 型号和 U 型号的芯片,以及 P、M 和 N 供应商的任何芯片上去。”

我认为这里发生了三处改变,这些改变引起了人们现在对<ruby>机密计算<rt>confidential computing</rt></ruby>的兴趣和采用。

1. **硬件可用**:只是在过去的 6 到 12 个月里,支持 TEE 的硬件才开始变得广泛可用,目前市场上的主要例子是 Intel 的 SGX 和 AMD 的 SEV。我们期望在未来可以看到更多支持 TEE 的硬件的例子。
2. **行业就绪**:就像上云越来越多地被接受作为应用程序部署的模型一样,监管机构和立法机构也在提高对各类组织保护其管理的数据的要求。组织开始呼吁一种在不受信任的主机(更确切地说,是在无法完全信任且带有敏感数据的主机)上运行敏感程序或处理敏感数据的应用程序的方法。这不足为奇:如果芯片制造商看不到这项技术的市场,他们就不会投太多的钱在这项技术上。Linux 基金会的[机密计算联盟(CCC)][7]的成立,就是业界对“如何寻找使用机密计算的通用模型,并鼓励开源项目使用这些技术”感兴趣的例证。(红帽发起的 Enarx 是一个 CCC 项目。)
3. **开放源码**:就像区块链一样,机密计算是使用开源绝对明智的技术之一。如果你要运行敏感程序,你需要信任正在为你运行程序的东西。不仅仅是 CPU 和固件,同样还有在 TEE 内执行你的工作负载的框架。可以很好地说,“我不信任主机机器和它上面的软件栈,所以我打算使用 TEE”,但是如果你不够了解 TEE 软件环境,那你就是将一种软件不透明换成了另外一种。TEE 的开源支持将允许你或者社区(实际上是你与社区)以一种专有软件不可能实现的方式来检查和审计你所运行的程序。这就是为什么 CCC 位于 Linux 基金会旗下(这个基金会致力于开放式开发模型),并鼓励 TEE 相关的软件项目加入且成为开源项目(如果它们还没有开源的话)。

我认为,在过去的 15 到 20 年里,硬件可用、行业就绪和开放源码已成为推动技术改变的驱动力。区块链、人工智能、云计算、<ruby>大规模计算<rt>webscale computing</rt></ruby>、大数据和互联网商务都是这三个因素同时发挥作用的例子,并且在业界带来了巨大的改变。

在一般情况下,安全是我们这数十年来听到的一种承诺,并且其仍未被完全实现。老实说,我不确定它未来会不会实现。但是随着新技术的到来,特定用例的安全变得越来越实用和无处不在,并且在业内受到越来越多的期待。这样看起来,机密计算似乎已准备好成为下一个重大变化 —— 而你,我亲爱的读者,可以一起来加入这场革命(毕竟它是开源的)。

这篇文章最初发布在 Alice, Eve, and Bob 上,这是得到了作者许可的重发。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/confidential-computing

作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
[3]: https://enarx.io/
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
[7]: https://confidentialcomputing.io/
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/
@@ -0,0 +1,172 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11886-1.html)
[#]: subject: (Use this Python script to find bugs in your Overcloud)
[#]: via: (https://opensource.com/article/20/1/logtool-root-cause-identification)
[#]: author: (Arkady Shtempler https://opensource.com/users/ashtempl)

用 Python 脚本发现 OpenStack Overcloud 中的问题
======

> LogTool 是一组 Python 脚本,可帮助你找出 Overcloud 节点中问题的根本原因。



OpenStack 在其 Overcloud 节点和 Undercloud 主机上存储和管理了大量日志文件。因此,使用 OSP 日志文件来排查遇到的问题并不是一件容易的事,尤其是当你甚至都不知道是什么原因导致问题时。

如果你正处于这种情况,那么 [LogTool][2] 可以使你的生活变得更加轻松!它会为你节省本需要人工排查问题所需的时间和精力。LogTool 基于模糊字符串匹配算法,可提供过去发生的所有唯一错误和警告信息。你可以根据日志中的时间戳导出特定时间段(例如 10 分钟前、一个小时前、一天前等)的这些信息。

LogTool 是一组 Python 脚本,其主要模块 `PyTool.py` 在 Undercloud 主机上执行。某些操作模式使用直接在 Overcloud 节点上执行的其他脚本,例如从 Overcloud 日志中导出错误和警告信息。

LogTool 同时支持 Python 2 和 Python 3,你可以根据需要选择工作目录:[LogTool_Python2][3] 或 [LogTool_Python3][4]。

### 操作方式

#### 1、从 Overcloud 日志中导出错误和警告信息

此模式用于从 Overcloud 节点中提取过去发生的 **错误** 和 **警告** 信息。系统将提示你提供“开始时间”和“调试级别”,以用于提取错误或警告消息。例如,如果在过去 10 分钟内出了问题,你可以只提取该时间段内的错误和警告消息。

此操作模式将为每个 Overcloud 节点生成一个包含结果文件的目录。结果文件是经过压缩的简单文本文件(`*.gz`),以减少从 Overcloud 节点下载所需的时间。要将压缩文件转换为常规文本文件,可以使用 `zcat` 或类似工具。此外,Vi 的某些版本和 Emacs 的任何最新版本均支持读取压缩数据。结果文件分为几部分,并在底部包含目录。

LogTool 可以即时检测两种日志文件:标准和非标准。在标准文件中,每条日志行都有一个已知的、已定义的结构:时间戳、调试级别、信息等等。在非标准文件中,日志的结构未知,例如,它可能是第三方的日志。在目录中,你可以找到每个部分的“名称 --> 行号”,例如:

* **原始数据 - 从标准 OSP 日志中提取的错误/警告消息:** 这部分包含所有提取的错误/警告消息,没有任何修改或更改。这些消息是 LogTool 用于模糊匹配分析的原始数据。
* **统计信息 - 每个标准 OSP 日志的错误/警告信息数量:** 在此部分,你将找到每个标准日志文件的错误和警告数量。这些信息可以帮助你了解用于排查问题根本原因的潜在组件。
* **统计信息 - 每个标准 OSP 日志文件的唯一消息:** 这部分提供指定时间戳内的唯一的错误和警告消息。有关每个唯一错误或警告的更多详细信息,请在“原始数据”部分中查找相同的消息。
* **统计信息 - 每个非标准日志文件在任意时间的唯一消息:** 此部分包含非标准日志文件中的唯一消息。遗憾的是,LogTool 无法像处理标准日志文件那样处理这些日志文件。因此,在你提取“特定时间”的日志信息时,这些文件会被忽略,你会看到过去创建的所有唯一的错误/警告消息。所以,请先向下滚动到结果文件底部查看目录,并使用目录中的行号索引跳到相关部分;其中第 3、4 和 5 部分的信息最重要。

#### 2、从 Overcloud 节点下载所有日志

所有 Overcloud 节点的日志将被压缩并下载到 Undercloud 主机上的本地目录。

#### 3、在所有 Overcloud 日志中搜索字符串

该模式会在所有 Overcloud 日志中“grep”(搜索)用户提供的字符串。例如,你可能希望查看某个特定请求的所有日志消息,例如某个失败的“创建虚拟机”请求的 ID。

#### 4、检查 Overcloud 上当前的 CPU、RAM 和磁盘使用情况

该模式显示每个 Overcloud 节点上当前的 CPU、RAM 和磁盘信息。

#### 5、执行用户脚本

该模式使用户可以在 Overcloud 节点上运行自己的脚本。例如,假设 Overcloud 部署失败,你就需要在每个控制器节点上执行相同的过程来修复该问题。你可以实现“替代方法”脚本,并使用此模式在控制器上运行它。

#### 6、仅按给定的时间戳下载相关日志

此模式仅下载 Overcloud 上“上次修改时间”晚于给定时间戳的日志。例如,如果 10 分钟前出现错误,则旧日志文件与此无关,无需下载。此外,你不能(或不应)在某些错误报告工具中附加大文件,因此此模式可能有助于编写错误报告。

#### 7、从 Undercloud 日志中导出错误和警告信息

这与上面的模式 1 相同。

#### 8、在 Overcloud 上检查不正常的 docker

此模式用于在节点上搜索不正常的 Docker 容器。

#### 9、下载 OSP 日志并在本地运行 LogTool

此模式允许你从 Jenkins 或 Log Storage(例如 `cougar11.scl.lab.tlv.redhat.com`)下载 OSP 日志,并在本地分析。

#### 10、在 Undercloud 上分析部署日志

此模式可以帮助你了解 Overcloud 或 Undercloud 部署过程中出了什么问题。例如,在 `overcloud_deploy.sh` 脚本中使用 `--log` 选项时会生成部署日志;此类日志的问题是“不友好”,你很难理解是什么出了问题,尤其是当详细程度设置为 `vv` 或更高时,日志中的数据会变得难以阅读。此模式会提供有关所有失败任务的详细信息。

#### 11、分析 Gerrit(Zuul)失败的日志

此模式用于分析 Gerrit(Zuul)日志文件。它会自动从远程 Gerrit 门禁(通过 HTTP 下载)下载所有文件并在本地进行分析。

### 安装

LogTool 托管在 GitHub 上,使用以下命令将其克隆到你的 Undercloud 主机:

```
git clone https://github.com/zahlabut/LogTool.git
```

该工具还使用了一些外部 Python 模块:

#### Paramiko

默认情况下,这个 SSH 模块通常已安装在 Undercloud 上。使用以下命令来验证是否已安装:

```
ls -a /usr/lib/python2.7/site-packages | grep paramiko
```
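
上面的 `ls | grep` 方式依赖于固定的 site-packages 路径。一个不依赖路径的检查方法(示例,假设使用系统默认的 `python` 解释器)是直接尝试导入模块:

```shell
# 尝试导入 paramiko 并打印版本;失败则提示未安装
python -c 'import paramiko; print(paramiko.__version__)' 2>/dev/null \
  || echo "paramiko 未安装"
```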

如果需要安装该模块,请在 Undercloud 上执行以下命令:

```
sudo easy_install pip
sudo pip install paramiko==2.1.1
```

#### BeautifulSoup

此 HTML 解析器模块仅在通过 HTTP 下载日志文件的模式中使用,用于解析 Artifacts HTML 页面以获取其中的所有链接。要安装 BeautifulSoup,请输入以下命令:

```
pip install beautifulsoup4
```

你还可以使用 [requirements.txt][6] 文件,通过执行以下命令安装所有必需的模块:

```
pip install -r requirements.txt
```
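
如果不想把依赖装进系统 Python,也可以先建一个虚拟环境再安装(示例,假设系统提供 `python3 -m venv`):

```shell
# 创建并激活虚拟环境,然后安装 LogTool 的依赖
python3 -m venv logtool-env
. logtool-env/bin/activate
pip install -r requirements.txt
```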

### 配置

所有必需的参数都直接在 `PyTool.py` 脚本中设置。默认值为:

```
overcloud_logs_dir = '/var/log/containers'
overcloud_ssh_user = 'heat-admin'
overcloud_ssh_key = '/home/stack/.ssh/id_rsa'
undercloud_logs_dir = '/var/log/containers'
source_rc_file_path = '/home/stack/'
```

### 用法

此工具是交互式的,因此要启动它,只需输入:

```
cd LogTool
python PyTool.py
```

### 排除 LogTool 故障

运行时会创建两个日志文件:`Error.log` 和 `Runtime.log`。请在你要提交的问题描述中附上两者的内容。

### 局限性

LogTool 硬编码为最大处理 500 MB 的文件。

### LogTool_Python3 脚本

可在 [github.com/zahlabut/LogTool][2] 获取。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/logtool-root-cause-identification

作者:[Arkady Shtempler][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ashtempl
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://github.com/zahlabut/LogTool
[3]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python2
[4]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python3
[5]: https://opensource.com/article/19/2/getting-started-cat-command
[6]: https://github.com/zahlabut/LogTool/blob/master/LogTool_Python3/requirements.txt
@ -1,29 +1,30 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11879-1.html)
|
||||
[#]: subject: (Use this open source tool to get your local weather forecast)
|
||||
[#]: via: (https://opensource.com/article/20/1/open-source-weather-forecast)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
|
||||
|
||||
使用这个开源工具获取本地天气预报
|
||||
======
|
||||
在我们的 20 个使用开源提升生产力的系列的第十三篇文章中使用 wego 来了解出门前你是否要需要外套、雨伞或者防晒霜。
|
||||
![Sky with clouds and grass][1]
|
||||
|
||||
> 在我们的 20 个使用开源提升生产力的系列的第十三篇文章中使用 wego 来了解出门前你是否要需要外套、雨伞或者防晒霜。
|
||||
|
||||

|
||||
|
||||
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
|
||||
|
||||
### 使用 wego 了解天气
|
||||
|
||||
过去十年我对我的职业最满意的地方之一是大多数时候是远程工作。尽管现实情况是我很多时候是在家里办公,但我可以在世界上任何地方工作。缺点是,离家时我会根据天气做出一些决定。在我居住的地方,”晴朗“可以表示从”酷热“、”低于零度“到”一小时内会小雨“。能够了解实际情况和快速预测非常有用。
|
||||
过去十年我对我的职业最满意的地方之一是大多数时候是远程工作。尽管现实情况是我很多时候是在家里办公,但我可以在世界上任何地方工作。缺点是,离家时我会根据天气做出一些决定。在我居住的地方,“晴朗”可以表示从“酷热”、“低于零度”到“一小时内会小雨”。能够了解实际情况和快速预测非常有用。
|
||||
|
||||
![Wego][2]
|
||||
|
||||
[Wego][3] 是用 Go 编写的程序,可以获取并显示你的当地天气。如果你愿意,它甚至可以用闪亮的 ASCII 艺术效果进行渲染。
|
||||
|
||||
要安装 wego,你需要确保在系统上安装了[Go][4]。之后,你可以使用 **go get** 命令获取最新版本。你可能还想将 **~/go/bin** 目录添加到路径中:
|
||||
|
||||
要安装 `wego`,你需要确保在系统上安装了[Go][4]。之后,你可以使用 `go get` 命令获取最新版本。你可能还想将 `~/go/bin` 目录添加到路径中:
|
||||
|
||||
```
|
||||
go get -u github.com/schachmat/wego
|
||||
@ -31,11 +32,9 @@ export PATH=~/go/bin:$PATH
|
||||
wego
|
||||
```
|
||||
|
||||
首次运行时,wego 会报告缺失 API 密钥。现在你需要决定一个后端。默认后端是 [Forecast.io][5],它是 [Dark Sky][6]的一部分。Wego还支持 [OpenWeatherMap][7] 和 [WorldWeatherOnline][8]。我更喜欢 OpenWeatherMap,因此我将在此向你展示如何设置。
|
||||
|
||||
你需要在 OpenWeatherMap 中[注册 API 密钥][9]。注册是免费的,尽管免费的 API 密钥限制了一天可以查询的数量,但这对于普通用户来说应该没问题。得到 API 密钥后,将它放到 **~/.wegorc** 文件中。现在可以填写你的位置、语言以及使用公制、英制(英国/美国)还是国际单位制(SI)。OpenWeatherMap 可通过名称、邮政编码、坐标和 ID 确定位置,这是我喜欢它的原因之一。
|
||||
|
||||
首次运行时,`wego` 会报告缺失 API 密钥。现在你需要决定一个后端。默认后端是 [Forecast.io][5],它是 [Dark Sky][6]的一部分。`wego` 还支持 [OpenWeatherMap][7] 和 [WorldWeatherOnline][8]。我更喜欢 OpenWeatherMap,因此我将在此向你展示如何设置。
|
||||
|
||||
你需要在 OpenWeatherMap 中[注册 API 密钥][9]。注册是免费的,尽管免费的 API 密钥限制了一天可以查询的数量,但这对于普通用户来说应该没问题。得到 API 密钥后,将它放到 `~/.wegorc` 文件中。现在可以填写你的位置、语言以及使用公制、英制(英国/美国)还是国际单位制(SI)。OpenWeatherMap 可通过名称、邮政编码、坐标和 ID 确定位置,这是我喜欢它的原因之一。
|
||||
|
||||
```
|
||||
# wego configuration for OEM
|
||||
@ -53,16 +52,15 @@ owm-lang=en
|
||||
units=imperial
|
||||
```
|
||||
|
||||
现在,在命令行运行 **wego** 将显示接下来三天的当地天气。
|
||||
现在,在命令行运行 `wego` 将显示接下来三天的当地天气。
|
||||
|
||||
Wego 还可以输出 JSON 以便程序使用,还可显示 emoji。你可以使用 **-f** 参数或在 **.wegorc** 文件中指定前端。
|
||||
`wego` 还可以输出 JSON 以便程序使用,还可显示 emoji。你可以使用 `-f` 参数或在 `.wegorc` 文件中指定前端。
|
||||
|
||||
![Wego at login][10]
|
||||
|
||||
如果你想在每次打开 shell 或登录主机时查看天气,只需将 wego 添加到 **~/.bashrc**(我这里是 **~/.zshrc**)即可。
|
||||
|
||||
[wttr.in][11] 项目是 wego 上的基于 Web 的封装。它提供了一些其他显示选项,并且可以在同名网站上看到。关于 wttr.in 的一件很酷的事情是,你可以使用 **curl** 获取一行天气信息。我有一个名为 **get_wttr** 的 shell 函数,用于获取当前简化的预报信息。
|
||||
如果你想在每次打开 shell 或登录主机时查看天气,只需将 wego 添加到 `~/.bashrc`(我这里是 `~/.zshrc`)即可。
|
||||
|
||||
[wttr.in][11] 项目是 wego 上的基于 Web 的封装。它提供了一些其他显示选项,并且可以在同名网站上看到。关于 wttr.in 的一件很酷的事情是,你可以使用 `curl` 获取一行天气信息。我有一个名为 `get_wttr` 的 shell 函数,用于获取当前简化的预报信息。
|
||||
|
||||
```
|
||||
get_wttr() {
|
||||
@ -81,7 +79,7 @@ via: https://opensource.com/article/20/1/open-source-weather-forecast
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,39 +1,38 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11882-1.html)
|
||||
[#]: subject: (3 handy command-line internet speed tests)
|
||||
[#]: via: (https://opensource.com/article/20/1/internet-speed-tests)
|
||||
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
|
||||
|
||||
3 个方便的命令行网速度测试工具
|
||||
3 个方便的命令行网速测试工具
|
||||
======
|
||||
用这三个开源工具检查你的互联网和局域网速度。
|
||||
![Old train][1]
|
||||
|
||||
可以验证网络连接速度让你可以控制计算机。三个可以检查互联网和局域网的开源命令行工具是 Speedtest、Fast 和 iPerf。
|
||||
> 用这三个开源工具检查你的互联网和局域网速度。
|
||||
|
||||

|
||||
|
||||
能够验证网络连接速度使您可以控制计算机。 使您可以在命令行中检查互联网和网络速度的三个开源工具是 Speedtest、Fast 和 iPerf。
|
||||
|
||||
### Speedtest
|
||||
|
||||
[Speedtest][2] 是以前的最爱。它用 Python 实现,并打包在 Apt 中,也可用 pip 安装。你可以将它作为命令行工具或 Python 脚本使用。
|
||||
[Speedtest][2] 是一个旧宠。它用 Python 实现,并打包在 Apt 中,也可用 `pip` 安装。你可以将它作为命令行工具或在 Python 脚本中使用。
|
||||
|
||||
使用以下命令安装:
|
||||
|
||||
|
||||
```
|
||||
`sudo apt install speedtest-cli`
|
||||
sudo apt install speedtest-cli
|
||||
```
|
||||
|
||||
或者
|
||||
|
||||
|
||||
```
|
||||
`sudo pip3 install speedtest-cli`
|
||||
sudo pip3 install speedtest-cli
|
||||
```
|
||||
|
||||
然后使用命令 **speedtest** 运行它:
|
||||
|
||||
然后使用命令 `speedtest` 运行它:
|
||||
|
||||
```
|
||||
$ speedtest
|
||||
@ -48,28 +47,25 @@ Testing upload speed............................................................
|
||||
Upload: 10.93 Mbit/s
|
||||
```
|
||||
|
||||
它给你提供了上传和下载的网速。它快速而且脚本化,因此你可以定期运行它,并将输出保存到文件或数据库中,以记录一段时间内的网络速度。
|
||||
它给你提供了互联网上传和下载的网速。它快速而且可脚本调用,因此你可以定期运行它,并将输出保存到文件或数据库中,以记录一段时间内的网络速度。
|
||||
|
||||
### Fast
|
||||
|
||||
[Fast][3] 是 Netflix 提供的服务。它的网址是 [Fast.com][4],同时它有一个可通过 npm 安装的命令行工具:
|
||||
|
||||
[Fast][3] 是 Netflix 提供的服务。它的网址是 [Fast.com][4],同时它有一个可通过 `npm` 安装的命令行工具:
|
||||
|
||||
```
|
||||
`npm install --global fast-cli`
|
||||
npm install --global fast-cli
|
||||
```
|
||||
|
||||
网站和命令行程序都提供了相同的基本界面:它是一个尽可能简单的速度测试:
|
||||
|
||||
|
||||
```
|
||||
$ fast
|
||||
|
||||
82 Mbps ↓
|
||||
```
|
||||
|
||||
该命令返回你的网络下载速度。要获取上传速度,请使用 **-u** 标志:
|
||||
|
||||
该命令返回你的网络下载速度。要获取上传速度,请使用 `-u` 标志:
|
||||
|
||||
```
|
||||
$ fast -u
|
||||
@ -79,10 +75,10 @@ $ fast -u
|
||||
|
||||
### iPerf
|
||||
|
||||
[iPerf][5] 测试的是局域网速度(而不是像前两个工具一样测试互联网速度)的好方法。 Debian、Raspbian 和 Ubuntu 用户可以使用 apt 安装它:
|
||||
[iPerf][5] 测试的是局域网速度(而不是像前两个工具一样测试互联网速度)的好方法。Debian、Raspbian 和 Ubuntu 用户可以使用 apt 安装它:
|
||||
|
||||
```
|
||||
`sudo apt install iperf`
|
||||
sudo apt install iperf
|
||||
```
|
||||
|
||||
它还可用于 Mac 和 Windows。
|
||||
@ -91,34 +87,31 @@ $ fast -u
|
||||
|
||||
获取服务端计算机的 IP 地址:
|
||||
|
||||
|
||||
```
|
||||
`ip addr show | grep inet.*brd`
|
||||
ip addr show | grep inet.*brd
|
||||
```
|
||||
|
||||
你的本地 IP 地址(假设为 IPv4 本地网络)以 **192.168** 或 **10** 开头。记下 IP 地址,以便可以在另一台计算机(指定为客户端的计算机)上使用它。
|
||||
|
||||
在服务端启动 **iperf**:
|
||||
你的本地 IP 地址(假设为 IPv4 本地网络)以 `192.168` 或 `10` 开头。记下 IP 地址,以便可以在另一台计算机(指定为客户端的计算机)上使用它。
|
||||
|
||||
在服务端启动 `iperf`:
|
||||
|
||||
```
|
||||
`iperf -s`
|
||||
iperf -s
|
||||
```
|
||||
|
||||
这会等待来自客户端的传入连接。将另一台计算机作为为客户端并运行此命令,将示例中的 IP 替换为服务端计算机的 IP:
|
||||
|
||||
它会等待来自客户端的传入连接。将另一台计算机作为为客户端并运行此命令,将示例中的 IP 替换为服务端计算机的 IP:
|
||||
|
||||
```
|
||||
`iperf -c 192.168.1.2`
|
||||
iperf -c 192.168.1.2
|
||||
```
|
||||
|
||||
![iPerf][6]
|
||||
|
||||
只需几秒钟即可完成测试,然后返回传输大小和计算出的带宽。我使用家用服务器作为服务端,在 PC 和笔记本电脑上进行了一些测试。我最近在房屋周围安装了 Cat6 以太网,因此我的有线连接速度达到 1Gbps,但 WiFi 连接速度却低得多。
|
||||
只需几秒钟即可完成测试,然后返回传输大小和计算出的带宽。我使用家用服务器作为服务端,在 PC 和笔记本电脑上进行了一些测试。我最近在房屋周围安装了六类线以太网,因此我的有线连接速度达到 1Gbps,但 WiFi 连接速度却低得多。
|
||||
|
||||
![iPerf][7]
|
||||
|
||||
你可能注意到它记录到 16Gbps。那是我使用服务器进行自我测试,因此它只是在测试写入磁盘的速度。服务器是普通硬盘,它只有 16Gbps,但是我的台式机有 46Gbps,另外我的(新)笔记本超过了 60Gbps,因为它们都有固态硬盘。
|
||||
你可能注意到它记录到 16Gbps。那是我使用服务器进行自我测试,因此它只是在测试写入磁盘的速度。该服务器具有仅 16 Gbps 的硬盘驱动器,但是我的台式机有 46Gbps,另外我的(较新的)笔记本超过了 60Gbps,因为它们都有固态硬盘。
|
||||
|
||||
![iPerf][8]
|
||||
|
||||
@ -128,9 +121,7 @@ $ fast -u
|
||||
|
||||
你还使用其他哪些工具来衡量家庭网络?在评论中分享你的评论。
|
||||
|
||||
* * *
|
||||
|
||||
_本文最初发表在 Ben Nuttall 的 [Tooling blog][9] 上,并获准在此使用。_
|
||||
本文最初发表在 Ben Nuttall 的 [Tooling blog][9] 上,并获准在此使用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -139,7 +130,7 @@ via: https://opensource.com/article/20/1/internet-speed-tests
|
||||
作者:[Ben Nuttall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11884-1.html)
|
||||
[#]: subject: (Best Open Source eCommerce Platforms to Build Online Shopping Websites)
|
||||
[#]: via: (https://itsfoss.com/open-source-ecommerce/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
@@ -16,7 +16,7 @@

这些电商解决方案是专为搭建线上购物站点设计的,因此都集成了库存管理、商品列表、购物车、下单、愿望清单以及支付这些必需的基础功能。

-_但请注意,这篇文章并不会进行深入介绍。因此,我建议最好广泛试用其中的多个产品,以便进一步的了解和比较。_
+但请注意,这篇文章并不会进行深入介绍。因此,我建议最好广泛试用其中的多个产品,以便进一步的了解和比较。

### 优秀的开源电商解决方案

@@ -24,63 +24,63 @@ _但请注意,这篇文章并不会进行深入介绍。因此,我建议最

开源电商解决方案种类繁多,一些缺乏维护的都会被我们忽略掉,以免搭建出来的站点因维护不及时而受到影响。

-_**另外,以下的列表排名不分先后。**_
+另外,以下的列表排名不分先后。

-#### 1\. nopCommerce
+#### 1、nopCommerce

![][3]

-nopCommerce 是基于 [ASP.NET Core][4] 的自由开源电商解决方案。如果你要找的是基于 PHP 的解决方案,可以跳过这一节了。
+nopCommerce 是基于 [ASP.NET Core][4] 的自由开源的电商解决方案。如果你要找的是基于 PHP 的解决方案,可以跳过这一节了。

-nopCommerce 的管理面板界面具有简洁易用的特点,如果你还使用过 OpenCart,就可能会感到似曾相识。在默认情况下,它就已经自带了很多基本的功能,同时还为移动端用户提供了响应式的设计。
+nopCommerce 的管理面板界面具有简洁易用的特点,如果你还使用过 OpenCart,就可能会感到似曾相识(我不是在抱怨)。在默认情况下,它就已经自带了很多基本的功能,同时还为移动端用户提供了响应式的设计。

你可以在其[官方商店][5]中获取到一些兼容的界面主题和应用扩展,还可以选择付费的支持服务。

在开始使用前,你可以从 nopCommerce 的[官方网站][6]下载源代码包,然后进行自定义配置和部署;也可以直接下载完整的软件包快速安装到 web 服务器上。详细信息可以查阅 nopCommerce 的 [GitHub 页面][7]或官方网站。

-[nopCommerce][8]
+- [nopCommerce][8]

-#### 2\. OpenCart
+#### 2、OpenCart

![][9]

-OpenCart 是一个基于 PHP 的非常流行的电商解决方案, 我个人感觉它还是相当不错的。
+OpenCart 是一个基于 PHP 的非常流行的电商解决方案,就我个人而言,我曾为一个项目用过它,并且体验非常好,如果不是最好的话。

或许你会觉得它维护得不是很频繁,但实际上使用 OpenCart 的开发者并不在少数。你可以获得许多受支持的扩展并将它们的功能加入到 OpenCart 中。

-OpenCart 不一定是适合所有人的电商解决方案,但如果你需要的只是一个基于 PHP 的开源解决方案,OpenCart 是个值得一试的选择,毕竟它可以方便地一键完成安装。想要了解更多,可以查阅 OpenCart 的官方网站或 [GitHub 页面][10]。
+OpenCart 不一定是适合所有人的“现代”电商解决方案,但如果你需要的只是一个基于 PHP 的开源解决方案,OpenCart 是个值得一试的选择。在大多数具有一键式应用程序安装支持的网络托管平台中,应该可以安装 OpenCart。想要了解更多,可以查阅 OpenCart 的官方网站或 [GitHub 页面][10]。

-[OpenCart][11]
+- [OpenCart][11]

-#### 3\. PrestaShop
+#### 3、PrestaShop

![][12]

PrestaShop 也是一个可以尝试的开源电商解决方案。

-PrestaShop 则是一个积极维护下的开源解决方案,它的官方商店中也有额外提供主题和扩展。与 OpenCart 不同,PrestaShop 不是一个能够一键安装的应用。但不需要担心,从官方网站下载下来之后,它的部署过程也并不复杂。如果你需要帮助,也可以参考 PrestaShop 的[安装指南][15]。
+PrestaShop 是一个积极维护下的开源解决方案,它的官方商店中也有额外提供主题和扩展。与 OpenCart 不同,在托管服务平台上,你可能找不到一键安装的 PrestaShop。但不需要担心,从官方网站下载下来之后,它的部署过程也并不复杂。如果你需要帮助,也可以参考 PrestaShop 的[安装指南][15]。

PrestaShop 的特点就是配置丰富和易于使用,我发现很多其它用户也在用它,你也不妨试用一下。

你也可以在 PrestaShop 的 [GitHub 页面][16]查阅到更多相关内容。

-[PrestaShop][17]
+- [PrestaShop][17]

-#### 4\. WooCommerce
+#### 4、WooCommerce

![][18]

如果你想用 [WordPress][19] 来搭建电商站点,不妨使用 WooCommerce。

-严格来说,这种方式其实是搭建一个 WordPress 应用,然后把 WooCommerce 作为一个插件或扩展以实现电商站点所需要的功能。很多 web 开发者都知道如何使用 WordPress,因此 WooCommerce 的学习成本不会很高。
+从技术上来说,这种方式其实是搭建一个 WordPress 应用,然后把 WooCommerce 作为一个插件或扩展以实现电商站点所需要的功能。很多 web 开发者都知道如何使用 WordPress,因此 WooCommerce 的学习成本不会很高。

WordPress 作为目前最好的开源站点项目之一,对大部分人来说都不会有太高的门槛。它具有易用、稳定的特点,同时还支持大量的扩展插件。

WooCommerce 的灵活性也是一大亮点,在它的线上商店提供了许多设计和扩展可供选择。你也可以到它的 [GitHub 页面][20]查看相关介绍。

-[WooCommerce][21]
+- [WooCommerce][21]

-#### 5\. Zen Cart
+#### 5、Zen Cart

![][22]

@@ -90,9 +90,9 @@ WooCommerce 的灵活性也是一大亮点,在它的线上商店提供了许

你也可以在 [SourceForge][23] 找到 Zen Cart 这个项目。

-[Zen Cart][24]
+- [Zen Cart][24]

-#### 6\. Magento
+#### 6、Magento

![Image Credits: Magestore][25]

@@ -104,9 +104,9 @@ Magento 完全是作为电商应用程序而生的,因此你会发现它的很

想要了解更多,可以查看 Magento 的 [GitHub 页面][27]。

-[Magento][28]
+- [Magento][28]

-#### 7\. Drupal
+#### 7、Drupal

![Drupal][29]

@@ -116,11 +116,25 @@ Drupal 是一个适用于创建电商站点的开源 CMS 解决方案。

跟 WordPress 类似,Drupal 在服务器上的部署并不复杂,不妨看看它的使用效果。在它的[下载页面][30]可以查看这个项目以及下载最新的版本。

-[Drupal][31]
+- [Drupal][31]

-总结
+#### 8、Odoo eCommerce

+![Odoo Ecommerce Platform][32]

-以上是我接触过最好的几个开源电商解决方案了,当然或许还会有很多更好的同类产品。如果你还有其它值得一提的产品,可以在评论区发表。也欢迎在评论区分享你对开源电商解决方案的经验和想法。
+如果你还不知道,Odoo 提供了一套开源商务应用程序。他们还提供了[开源会计软件][33]和 CRM 解决方案,我们将会在单独的列表中进行介绍。

+对于电子商务门户,你可以根据需要使用其在线拖放生成器自定义网站。你也可以推广该网站。除了简单的主题安装和自定义选项之外,你还可以利用 HTML/CSS 在一定程度上手动自定义外观。

+你也可以查看其 [GitHub][34] 页面以进一步了解它。

+- [Odoo eCommerce][35]

+### 总结

+我敢肯定还有更多的开源电子商务平台,但是,我现在还没有遇到比我上面列出的更好的东西。

+如果你还有其它值得一提的产品,可以在评论区发表。也欢迎在评论区分享你对开源电商解决方案的经验和想法。

--------------------------------------------------------------------------------

@@ -129,7 +143,7 @@ via: https://itsfoss.com/open-source-ecommerce/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@@ -166,3 +180,7 @@ via: https://itsfoss.com/open-source-ecommerce/

[29]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/drupal.png?ssl=1
[30]: https://www.drupal.org/project/drupal
[31]: https://www.drupal.org/industries/ecommerce
+[32]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/odoo-ecommerce-platform.jpg?w=800&ssl=1
+[33]: https://itsfoss.com/open-source-accounting-software/
+[34]: https://github.com/odoo/odoo
+[35]: https://www.odoo.com/page/open-source-ecommerce
@@ -0,0 +1,115 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (OpenShot Video Editor Gets a Major Update With Version 2.5 Release)
[#]: via: (https://itsfoss.com/openshot-2-5-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

OpenShot Video Editor Gets a Major Update With Version 2.5 Release
======

[OpenShot][1] is one of the [best open-source video editors][2] out there. With all the features it offers, it was already a good video editor on Linux.

Now, with a major update (**v2.5.0**), OpenShot has added a lot of new improvements and features. And, trust me, it’s not just any regular release – it is a huge release packed with features you have probably wanted for a very long time.

In this article, I will briefly cover the key changes in the latest release.

![][3]

### OpenShot 2.5.0 Key Features

Here are some of the major new features and improvements in OpenShot 2.5:

#### Hardware Acceleration Support

Hardware acceleration support is still an experimental addition – however, it is a useful feature to have.

Instead of relying on your CPU to do all the hard work, you can utilize your GPU to encode/decode video data when working with MP4/H.264 video files.

This will improve the performance of OpenShot in a meaningful way.

#### Support for Importing/Exporting Files From Final Cut Pro & Premiere

![][4]

[Final Cut Pro][5] and [Adobe Premiere][6] are two popular video editors among professional content creators. OpenShot 2.5 now allows you to work on projects created on these platforms: it can import and export Final Cut Pro & Premiere files in the EDL & XML formats.

#### Improved Thumbnail Generation

This isn’t a big feature – but a necessary improvement for most video editors. You don’t want broken images in the thumbnails (in your timeline/library). With this update, OpenShot generates thumbnails using a local HTTP server, can check multiple folder locations, and can regenerate missing ones.

#### Blender 2.8+ Support

The new OpenShot release also supports the latest [Blender][7] (.blend) format – so it should come in handy if you’re using Blender as well.

#### Easily Recover Previous Saves & Improved Auto-backup

![][8]

It was always a horror to accidentally delete your timeline work, only to have auto-save overwrite your saved project with the mistake.

Now, the auto-backup feature has been improved with the ability to easily recover a previously saved version of the project.

Even though you can recover previous saves now, only a limited number of saved versions are kept – so you still have to be careful.

#### Other Improvements

In addition to all the key highlights mentioned above, you will also notice a performance improvement when using the keyframe system.

Several other issues – such as SVG compatibility, exporting & modifying keyframe data, and the resizable preview window – have been fixed in this major update. For privacy-conscious users, OpenShot no longer sends usage data unless you opt in to share it.

For more information, you can take a look at [OpenShot’s official blog post][9] for the release notes.

### Installing OpenShot 2.5 on Linux

You can simply download the .AppImage file from its [official download page][10] to [install the latest OpenShot version][11]. If you’re new to AppImage, you should also check out [how to use AppImage][12] on Linux to easily launch OpenShot.

[Download Latest OpenShot Release][10]

Some distributions, like Arch Linux, may also provide the latest OpenShot release through regular system updates.

#### PPA available for Ubuntu-based distributions

On Ubuntu-based distributions, if you don’t want to use AppImage, you can [use the official PPA][13] from OpenShot:

```
sudo add-apt-repository ppa:openshot.developers/ppa
sudo apt update
sudo apt install openshot-qt
```

You may want to know how to remove a PPA if you decide to uninstall it later.

**Wrapping Up**

With all the latest changes and improvements considered, do you see [OpenShot][11] as your primary [video editor on Linux][14]? If not, what more do you expect from OpenShot? Feel free to share your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/openshot-2-5-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.openshot.org/
[2]: https://itsfoss.com/open-source-video-editors/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-2-5-0.png?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-xml-edl.png?ssl=1
[5]: https://www.apple.com/in/final-cut-pro/
[6]: https://www.adobe.com/in/products/premiere.html
[7]: https://www.blender.org/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-recovery.jpg?ssl=1
[9]: https://www.openshot.org/blog/2020/02/08/openshot-250-released-video-editing-hardware-acceleration/
[10]: https://www.openshot.org/download/
[11]: https://itsfoss.com/openshot-video-editor-release/
[12]: https://itsfoss.com/use-appimage-linux/
[13]: https://itsfoss.com/ppa-guide/
[14]: https://itsfoss.com/best-video-editing-software-linux/
@@ -0,0 +1,106 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 firewall features IT pros should know about but probably don’t)
[#]: via: (https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

5 firewall features IT pros should know about but probably don’t
======

As a foundational network defense, firewalls continue to be enhanced with new features, so many that some important ones get overlooked.

Natalya Burova / Getty Images

[Firewalls][1] continuously evolve to remain a staple of network security by incorporating the functionality of standalone devices, embracing network-architecture changes, and integrating outside data sources to add intelligence to the decisions they make – a daunting wealth of possibilities that is difficult to keep track of.

Because of this richness of features, next-generation firewalls are difficult to master fully, and important capabilities sometimes can be, and in practice are, overlooked.

Here is a short list of new features IT pros should be aware of.

**[ Also see [What to consider when deploying a next generation firewall][2]. | Get regularly scheduled insights by [signing up for Network World newsletters][3]. ]**

### Also in this series:

  * [Cybersecurity in 2020: From secure code to defense in depth][4] (CSO)
  * [More targeted, sophisticated and costly: Why ransomware might be your biggest threat][5] (CSO)
  * [How to bring security into agile development and CI/CD][6] (Infoworld)
  * [UEM to marry security – finally – after long courtship][7] (Computerworld)
  * [Security vs. innovation: IT's trickiest balancing act][8] (CIO)

## Network segmentation

Dividing a single physical network into multiple logical networks is known as [network segmentation][9], in which each segment behaves as if it runs on its own physical network. Traffic from one segment can’t be seen by or passed to another segment.

This significantly reduces the attack surface in the event of a breach. For example, a hospital could put all its medical devices into one segment and its patient records into another. Then, if hackers breach a heart pump that was not secured properly, the compromise would not enable them to access private patient information.

It’s important to note that many of the connected things that make up the [internet of things][10] have older operating systems, are inherently insecure, and can act as a point of entry for attackers, so the growth of IoT and its distributed nature drives up the need for network segmentation.
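A toy model makes the segmentation policy above concrete. This is an illustrative sketch only: the device names and segment labels are invented, and a real firewall enforces this at the network layer (VLANs, inter-segment rules) rather than in application code.

```python
# Each device belongs to exactly one segment; the "firewall" forwards
# traffic only when source and destination share a segment.
# All names here are hypothetical.

SEGMENT = {
    "heart-pump": "medical-devices",
    "mri-scanner": "medical-devices",
    "records-db": "patient-records",
}

def forward_allowed(src: str, dst: str) -> bool:
    """Permit traffic only within a single segment."""
    return SEGMENT[src] == SEGMENT[dst]

# A breached heart pump can still reach other medical devices,
# but it cannot reach the patient-records segment.
```

The hospital example from the text falls out directly: `forward_allowed("heart-pump", "records-db")` is denied.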
## Policy optimization

Firewall policies and rules are the engine that makes firewalls go. Most security professionals are terrified of removing older policies because they don’t know when they were put in place or why. As a result, rules keep getting added with no thought of reducing the overall number; some enterprises say they have millions of firewall rules in place. The fact is, too many rules add complexity, can conflict with each other, and are time-consuming to manage and troubleshoot.

Policy optimization migrates legacy security-policy rules to application-based rules that permit or deny traffic based on what application is being used. This improves overall security by reducing the attack surface and also provides the visibility needed to safely enable application access. Policy optimization identifies port-based rules so they can be converted to application-based whitelist rules, or adds applications from a port-based rule to an existing application-based rule without compromising application availability. It also identifies over-provisioned application-based rules. Policy optimization helps prioritize which port-based rules to migrate first, identify application-based rules that allow applications that aren’t being used, and analyze rule-usage characteristics such as hit count, which compares how often a particular rule is applied vs. how often all the rules are applied.

Converting port-based rules to application-based rules improves security posture because the organization can select the applications it wants to whitelist and deny all other applications. That way unwanted and potentially malicious traffic is eliminated from the network.
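The difference between the two rule styles can be sketched in a few lines. This is purely illustrative — the rule formats and application names are assumptions, not any vendor's actual policy syntax:

```python
# Legacy port-based rule: anything on ports 80/443 gets through,
# including malicious traffic tunneled over HTTPS.
PORT_RULE = {"allow_ports": {80, 443}}

# Application-based whitelist: only identified, sanctioned apps pass.
APP_RULE = {"allow_apps": {"web-browsing", "office365"}}

def port_based_allows(port: int) -> bool:
    return port in PORT_RULE["allow_ports"]

def app_based_allows(app: str) -> bool:
    return app in APP_RULE["allow_apps"]
```

A hypothetical "crypto-miner" app running over port 443 passes the port rule but is denied by the application whitelist, which is exactly the posture improvement described above.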
## Credential-theft prevention

Historically, workers accessed corporate applications from company offices. Today they access legacy apps, SaaS apps and other cloud services from the office, home, airport and anywhere else they may be. This makes it much easier for threat actors to steal credentials. The Verizon [Data Breach Investigations Report][12] found that 81% of hacking-related breaches leveraged stolen and/or weak passwords.

Credential-theft prevention blocks employees from using corporate credentials on sites such as Facebook and Twitter. Even though they may be sanctioned applications, using corporate credentials to access them puts the business at risk.

Credential-theft prevention works by scanning username and password submissions to websites and comparing those submissions to lists of official corporate credentials. Businesses can choose which websites to allow corporate-credential submissions to, or block them based on the URL category of the website.

When the firewall detects a user attempting to submit credentials to a site in a restricted category, it can display a block-response page that prevents the user from submitting credentials. Alternatively, it can present a continue page that warns users against submitting credentials to sites classified in certain URL categories but still allows them to proceed with the submission. Security professionals can customize these block pages to educate users against reusing corporate credentials, even on legitimate, non-phishing sites.
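The check described above can be sketched as follows. This is a hedged toy model: real products compare hashed credentials against a directory and classify URLs with vendor databases, and every name and category here is invented for illustration.

```python
# Hypothetical data: a set of known corporate usernames, a tiny URL
# category database, and the categories restricted by policy.
CORPORATE_USERS = {"alice@corp.example", "bob@corp.example"}
URL_CATEGORY = {
    "facebook.com": "social-networking",
    "intranet.corp.example": "internal",
}
RESTRICTED_CATEGORIES = {"social-networking"}

def verdict(site: str, username: str) -> str:
    """Return 'block' when a corporate credential goes to a restricted site."""
    is_corporate = username in CORPORATE_USERS
    is_restricted = URL_CATEGORY.get(site, "unknown") in RESTRICTED_CATEGORIES
    return "block" if is_corporate and is_restricted else "allow"
```

Personal credentials on the same site, or corporate credentials on an internal site, pass through untouched; only the combination the policy targets is blocked.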
## DNS security

A combination of machine learning, analytics and automation can block attacks that leverage the [Domain Name System (DNS)][13]. In many enterprises, DNS servers are unsecured and completely open to attacks that redirect users to bad sites where they are phished and their data is stolen. Threat actors have a high degree of success with DNS-based attacks because security teams have very little visibility into how attackers use the service to maintain control of infected devices. There are some standalone DNS security services that are moderately effective but lack the volume of data needed to recognize all attacks.

When DNS security is integrated into firewalls, machine learning can analyze the massive amount of network data, making standalone analysis tools unnecessary. DNS security integrated into a firewall can predict and block malicious domains through automation and the real-time analysis that finds them. As the number of bad domains grows, machine learning can find them quickly and ensure they don’t become problems.

Integrated DNS security can also use machine-learning analytics to neutralize DNS tunneling, which smuggles data through firewalls by hiding it within DNS requests. DNS security can also find malware command-and-control servers. It builds on top of signature-based systems to identify advanced tunneling methods and automates the shutdown of DNS-tunneling attacks.
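To see why tunneling is detectable at all, consider that the smuggled payload usually ends up encoded into long, random-looking DNS labels. The sketch below is a simple heuristic of my own for illustration — not any vendor's detector, and the thresholds are invented assumptions:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

def looks_like_tunnel(qname: str, max_label_len: int = 30,
                      max_entropy: float = 4.0) -> bool:
    """Flag a query whose leading label is unusually long or random-looking."""
    label = qname.split(".")[0]
    return len(label) > max_label_len or shannon_entropy(label) > max_entropy
```

An ordinary lookup like `www.example.com` passes, while a query carrying a base64-encoded chunk of data in its first label is flagged. Production systems replace these fixed thresholds with models trained on large volumes of traffic, which is the data advantage the article attributes to firewall-integrated DNS security.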
## Dynamic user groups

It’s possible to create policies that automate the remediation of anomalous worker activity. The basic premise is that users’ roles within a group mean their network behaviors should be similar to each other. For example, if a worker is phished and strange apps get installed, this would stand out and could indicate a breach.

Historically, quarantining a group of users was highly time-consuming because each member of the group had to be addressed and policies enforced individually. With dynamic user groups, when the firewall sees an anomaly it creates policies that counter the anomaly and pushes them out to the user group. The entire group is automatically updated without having to manually create and commit policies. So, for example, all the people in accounting would receive the same policy update automatically, at once, instead of manually, one at a time. Integration with the firewall enables it to distribute the user group’s policies to all the other infrastructure that requires them, including other firewalls, log collectors and applications.
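The workflow above reduces to a simple idea: one detection event fans out to every member of the group. A minimal sketch, with invented names (real firewalls push structured policy objects to other devices, not entries in a dictionary):

```python
# Hypothetical group membership and a per-user policy table.
GROUPS = {"accounting": {"alice", "bob", "carol"}}
user_policy: dict[str, str] = {}

def quarantine_group(group: str) -> None:
    """One anomaly response updates every member of the group at once."""
    for user in GROUPS[group]:
        user_policy[user] = "quarantine"

# A single call replaces the old per-user, one-at-a-time process.
quarantine_group("accounting")
```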
Firewalls have been and will continue to be the anchor of cyber security. They are the first line of defense and can thwart many attacks before they penetrate the enterprise network. Maximizing the value of firewalls means turning on many of the advanced features, some of which have been in firewalls for years but not turned on for a variety of reasons.

Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3519854/4-firewall-features-it-pros-should-know-about-but-probably-dont.html

作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.csoonline.com/article/3519913/cybersecurity-in-2020-from-secure-code-to-defense-in-depth.html
[5]: https://www.csoonline.com/article/3518864/more-targeted-sophisticated-and-costly-why-ransomware-might-be-your-biggest-threat.html
[6]: https://www.infoworld.com/article/3520969/how-to-bring-security-into-agile-development-and-cicd.html
[7]: https://www.computerworld.com/article/3516136/uem-to-marry-security-finally-after-long-courtship.html
[8]: https://www.cio.com/article/3521009/security-vs-innovation-its-trickiest-balancing-act.html
[9]: https://www.networkworld.com/article/3016565/how-network-segmentation-provides-a-path-to-iot-security.html
[10]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[11]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[12]: https://enterprise.verizon.com/resources/reports/dbir/
[13]: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world

sources/talk/20200211 Future ‘smart walls- key to IoT.md (new file, 74 lines)
@@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Future ‘smart walls’ key to IoT)
[#]: via: (https://www.networkworld.com/article/3519440/future-smart-walls-key-to-iot.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Future ‘smart walls’ key to IoT
======

MIT researchers are developing a wallpaper-like material that’s made up of simple RF switch elements and can be applied to building surfaces. Using beamforming, the antenna array could potentially improve wireless signal strength nearly tenfold.

Jason Dorfman, MIT CSAIL

IoT equipment designers shooting for efficiency should explore the potential of using buildings as antennas, researchers say.

Environmental surfaces such as walls can be used to intercept and beam signals, which can increase reliability and data throughput for devices, according to MIT's Computer Science and Artificial Intelligence Laboratory ([CSAIL][1]).

Researchers at CSAIL have been working on a smart-surface repeating antenna array called RFocus. The antennas, which could be applied in sheets like wallpaper, are designed to be incorporated into office spaces and factories. Radios that broadcast signals could then become smaller and less power-intensive.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]

“Tests showed that RFocus could improve the average signal strength by a factor of almost 10,” CSAIL's Adam Conner-Simons [writes in MIT News][3]. “The platform is also very cost-effective, with each antenna costing only a few cents.”

The prototype system CSAIL developed uses more than 3,000 antennas embedded into sheets, which are then hung on walls. In future applications, the antennas could adhere directly to the wall or be integrated during building construction.

“People have had things completely backwards this whole time,” the article claims. “Rather than focusing on the transmitters and receivers, what if we could amplify the signal by adding antennas to an external surface in the environment itself?”

RFocus relies on [beamforming][4]: multiple antennas broadcast the same signal at slightly different times, and as a result some of the signals cancel each other out while others strengthen each other. When properly executed, beamforming can focus a stronger signal in a particular direction.
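The cancel-or-strengthen behavior above is just superposition of phase-shifted waves, which a few lines of arithmetic can demonstrate. This is a toy numerical model of the general principle, entirely unrelated to the actual RFocus implementation:

```python
import math

def peak_amplitude(phases, n_samples=10_000):
    """Peak of the superposition of unit sinusoids with the given phase offsets."""
    best = 0.0
    for i in range(n_samples):
        t = 2.0 * math.pi * i / n_samples
        combined = sum(math.sin(t + p) for p in phases)
        best = max(best, combined)
    return best

# Two emitters in phase reinforce each other (peak approaches 2);
# half a cycle apart, they cancel almost exactly (peak approaches 0).
in_phase = peak_amplitude([0.0, 0.0])
out_of_phase = peak_amplitude([0.0, math.pi])
```

With N in-phase emitters the peak scales toward N, which is why an array of thousands of cheap elements can steer a substantially stronger signal toward a receiver.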
"The surface does not emit any power of its own," the developers explain in their paper ([PDF][6]). The antennas – or RF switch elements, as the group describes them – either let a signal pass through or reflect it, under software control. Signal measurements allow the apparatus to determine exactly what gets through and how it is directed.

Importantly, the RFocus surface functions with no additional power requirements. The “RFocus surface can be manufactured as an inexpensive thin ‘wallpaper’ requiring no wiring,” the group says.

### Antenna design

Antenna engineering is turning into a vital part of IoT development. It's one of the principal reasons data throughput and reliability keep improving in wireless networks.

Arrays in which multiple active panel components make up the antenna, rather than the simple passive wire of traditional radio, are one example of these advances.

[Spray-on antennas][7] (unrelated to the CSAIL work) are another in-the-works technology I've written about. In that case, flexible substrates create the antenna, which is applied in a manner similar to spray paint. Another future direction could be anti-laser antennas: [reversing a laser][8], so that it absorbs light rather than emitting it, could allow all data-carrying energy to be absorbed, making it the perfect light-based antenna.

Development of 6G wireless, which is projected to supersede 5G sometime around 2030, includes efforts to figure out how to directly [couple antennas to fiber][9] – the radio ends up being part of the cable, in other words.

"We can’t get faster internet speeds without more efficient ways of delivering wireless signals," CSAIL’s Conner-Simons says.

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3519440/future-smart-walls-key-to-iot.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.csail.mit.edu/
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: http://news.mit.edu/2020/smart-surface-smart-devices-mit-csail-0203
[4]: https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html
[5]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[6]: https://drive.google.com/file/d/1TLfH-r2w1zlGBbeeM6us2sg0yq6Lm2wF/view
[7]: https://www.networkworld.com/article/3309449/spray-on-antennas-will-revolutionize-the-internet-of-things.html
[8]: https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html
[9]: https://www.networkworld.com/article/3438337/how-6g-will-work-terahertz-to-fiber-conversion.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
@ -0,0 +1,75 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Who should lead the push for IoT security?)
|
||||
[#]: via: (https://www.networkworld.com/article/3526490/who-should-lead-the-push-for-iot-security.html)
|
||||
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
|
||||
|
||||
Who should lead the push for IoT security?
|
||||
======
|
||||
Industry groups and governmental agencies have been taking a stab at rules to improve the security of the internet of things, but so far there’s nothing comprehensive.
|
||||
Thinkstock
|
||||
|
||||
The ease with which internet of things devices can be compromised, coupled with the potentially extreme consequences of breaches, have prompted action from legislatures and regulators, but what group is best to decide?
|
||||
|
||||
Both the makers of [IoT][1] devices and governments are aware of the security issues, but so far they haven’t come up with standardized ways to address them.
|
||||
|
||||
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
|
||||
|
||||
“The challenge of this market is that it’s moving so fast that no regulation is going to be able to keep pace with the devices that are being connected,” said Forrester vice president and research director Merritt Maxim. “Regulations that are definitive are easy to enforce and helpful, but they’ll quickly become outdated.”
|
||||
|
||||
The latest such effort by a governmental body is a proposed regulation in the U.K. that would impose three major mandates on IoT device manufacturers that would address key security concerns:
|
||||
|
||||
* device passwords would have to be unique, and resetting them to factory defaults would be prohibited
|
||||
* device makers would have to offer a public point of contact for the disclosure of vulnerabilities
|
||||
* device makers would have to “explicitly state the minimum length of time for which the device will receive security updates”
|
||||
|
||||
|
||||
|
||||
This proposal is patterned after a California law that took effect last month. Both sets of rules would likely have a global impact on the manufacture of IoT devices, even though they’re being imposed on limited jurisdictions. That’s because it’s expensive for device makers to create separate versions of their products.
|
||||
|
||||
IoT-specific regulations aren’t the only ones that can have an impact on the marketplace. Depending on the type of information a given device handles, it could be subject to the growing list of data-privacy laws being implemented around the world, most notably Europe’s General Data Protection Regulation, as well as industry-specific regulations in the U.S. and elsewhere.
|
||||
|
||||
The U.S. Food and Drug Administration, noted Maxim, has been particularly active in trying to address device-security flaws. For example, last year it issued [security warnings][3] about 11 vulnerabilities that could compromise medical IoT devices that had been discovered by IoT security vendor [Armis][4]. In other cases it issued fines against healthcare providers.
|
||||
|
||||
**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**
|
||||
|
||||
But there’s a broader issue with devising definitive regulation for IoT devices in general, as opposed to guidance that simply urges manufacturers to adopt best practices, he said.

Particular companies might have integrated security frameworks covering their vertically integrated products – such as an [industrial IoT][6] company providing security across factory floor sensors – but that kind of security is incomplete in the multi-vendor world of IoT.

Perhaps the closest thing to a general IoT-security standard is currently being worked on by Underwriters Laboratories (UL), the security-testing non-profit best known for its century-old certification program for electrical equipment. UL’s [IoT Security Rating Program][7] offers a five-tier system for ranking the security of connected devices – bronze, silver, gold, platinum and diamond.

Bronze certification means that the device has addressed the most glaring security flaws, similar to those outlined in the recent U.K. and California legislation. [The higher ratings][8] include capabilities like ongoing security maintenance, improved access control and known threat testing.

While government regulation and voluntary industry improvements can help keep future IoT systems safe, neither addresses two key issues in the IoT security puzzle – the millions of insecure devices that have already been deployed, and user apathy around making their systems as safe as possible, according to Maxim.

“Requiring non-default passwords is good, but that doesn’t stop users from setting insecure passwords,” he warned. “The challenge is, do customers care? Are they willing to pay extra for products with that certification?”

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3526490/who-should-lead-the-push-for-iot-security.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.fda.gov/medical-devices/safety-communications/urgent11-cybersecurity-vulnerabilities-widely-used-third-party-software-component-may-introduce
[4]: https://www.armis.com/
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://www.networkworld.com/article/3243928/what-is-the-industrial-internet-of-things-essentials-of-iiot.html
[7]: https://ims.ul.com/iot-security-rating-levels
[8]: https://www.cnx-software.com/2019/12/30/ul-iot-security-rating-system-ranks-iot-devices-security-from-bronze-to-diamond/
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

@ -0,0 +1,91 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why innovation can't happen without standardization)
[#]: via: (https://opensource.com/open-organization/20/2/standardization-versus-innovation)
[#]: author: (Len Dimaggio https://opensource.com/users/ldimaggi)

Why innovation can't happen without standardization
======

Balancing standardization and innovation is critical during times of organizational change. And it's an ongoing issue in open organizations, where change is constant.

![an old computer and a new computer, representing migration to new software or hardware][1]

Any organization facing the prospect of change will confront an underlying tension between competing needs for standardization and innovation. Achieving the correct balance between these needs can be essential to an organization's success.

Experiencing too much of either can lead to morale and productivity problems. Over-stressing standardization, for example, can have a stifling effect on the team's ability to innovate to solve new problems. Unfettered innovation, on the other hand, can lead to time lost due to duplicated or misdirected efforts.

Finding and maintaining the correct balance between standardization and innovation is critical during times of organizational change. In this article, I'll outline various considerations your organization might make when attempting to strike this critical balance.

### The need for standardization

When North American beavers hear running water, they instinctively start building a dam. When some people see a problem, they look to build or buy a new product or tool to solve that problem. Technological advances make modeling business process solutions or setting up production or customer-facing systems much easier than in the past. The ease with which organizational actors can introduce new systems can occasionally, however, lead to problems. Duplicate, conflicting, or incompatible systems—or systems that, while useful, do not address a team's highest priorities—can find their way into organizations, complicating processes.

This is where standardization can help. By agreeing on and implementing a common set of tools and processes, teams become more efficient, as they reduce the need for new development methods, customized training, and maintenance.

Standardization has several benefits:

* **Reliability, predictability, and safety.** Think about the electricity in your own home and the history of electrical systems. In the early days of electrification, companies competed to establish individual standards for basic elements like plug configurations and safety requirements like insulation. Thanks to standardization, when you buy a light bulb today you can be sure that it will fit and not start a fire.
* **Lower costs and more dependable, repeatable processes.** Standardization frees people in organizations to focus more attention on other things—products, for instance—and not on the need to coordinate the use of potentially conflicting new tools and processes. And it can make people's skills more portable (or, in budgeting terms, more "fungible") across projects, since all projects share a common set of standards. In addition to helping project teams be more flexible, this portability of skills makes it easier for people to adopt new assignments.
* **Consistent measurements.** Creating a set of consistent metrics people can use to assess product quality across multiple products or multiple releases of individual products is possible through standardization. Without it, applying this kind of consistent measurement to product quality and maintaining any kind of history of tracking such quality can be difficult. Standardization effectively provides the organization a common language for measuring quality.

A danger of standardization arises when it becomes an all-consuming end in itself. A constant push to standardize can inadvertently stifle creativity and innovation. Taken too far, policies that overemphasize standardization discourage people from finding new solutions to new problems. At an extreme, this can lead to a suffocating organizational atmosphere in which people are reluctant to propose new solutions in the interest of maintaining standardization or conformity. In an open organization especially focused on generating new value and solutions, an attempt to impose standardization can have a negative impact on team morale.

Viewing new challenges through the lens of former solutions is natural. Likewise, it's common (and in fact generally practical) to apply legacy tools and processes to solving new problems.

But in open organizations, change is constant. We must always adapt to it.

Finding and maintaining the correct balance between standardization and innovation is critical during times of organizational change.

### The need for innovation

Digital technology changes at a rapid rate, and that rate of change is always increasing. New opportunities result in new problems that require new solutions. Any organization must be able to adapt, and its people must have the freedom to innovate. This is even more important in an open organization and with open source software, as many of the factors (e.g., restrictive licenses) that blocked innovation in the past no longer apply.

When considering the prospect of innovation in your organization, keep in mind the following:

* **Standardization doesn't have to be the end of innovation.** Even tools and processes that are firmly established and in use by an organization were once very new and untried, and they only came about through processes of organizational innovation.
* **Progress through innovation also involves failure.** It's very often the case that some innovations fail, but when they fail, they point the way forward to solutions. This progress therefore requires that an organization protect the freedom to fail. (In competitive sports, athletes and teams seldom learn lessons from easy victories; they learn lessons about how to win, including how to innovate to win, from failures and defeats.)

Freedom to innovate, however, cannot be freedom to do whatever the heck we feel like doing. The challenge for any organization is to encourage and inspire innovation while keeping innovation efforts focused on meeting the organization's goals and on the problems it's trying to solve.

In closed organizations, leaders may be inclined to impose rigid, top-down limits on innovation. A better approach is to instead provide a direction or path forward in terms of goals and deliverables, and then enable people to find their own ways along that path. That forward path is usually not a straight line; [innovation is almost never a linear process][2]. Like a sailboat making progress into the wind, it's sometimes [necessary to "tack" or go sideways][3] in order to make forward progress.

### Blending standardization with focused innovation

Are we doomed to always think of standardization as the broccoli we _must_ eat, while innovation is the ice cream we _want_ to eat?

It doesn't have to be this way.

Perceptions play a role in the conflict between standardization and innovation. People who only want to focus on standardization must remember that even the tools and processes they want to promote as "the standard" were once new and represented change. Likewise, people who only want to focus on innovation have to remember that for a tool or process to provide value to an organization, it has to be stable enough for that organization to use it over time.

An important element of any successful organization, especially an open organization where everyone is free to express their views, is empathy for other people's views. A little empathy is necessary for understanding both perspectives on impending change.

I've always thought about standardization and innovation as two halves of one solution. A good analogy is that of a college course catalog. In many colleges, all incoming first-year students, regardless of their major, take a core set of classes. These core classes can cover a wide range of subjects and provide each student with an educational foundation. Every student receives a standard grounding in these disciplines regardless of their major course of study. Beyond the standardized core curriculum, each student is free to take specialized courses depending on his or her major degree requirements and selected electives, as they work to innovate in their respective fields.

Similarly, standardization provides a foundation on which innovation can build. Think of standardization as a core set of tools and practices you might apply to _all_ products. Innovation can take the form of tools and practices that go _above and beyond_ this standard. This enables every team to extend the core set of standardized tools and processes to meet the individual needs of their own specific projects. Standardization does not mean that all forward-looking actions stop. Over time, what was an innovation can become a standard, and thereby make room for the next innovation (and the next).

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/20/2/standardization-versus-innovation

作者:[Len Dimaggio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ldimaggi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (an old computer and a new computer, representing migration to new software or hardware)
[2]: https://opensource.com/open-organization/19/6/innovation-delusion
[3]: https://opensource.com/open-organization/18/5/navigating-disruption-1

@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (chenmu-kk )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,113 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 open source tools for natural language processing)
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)

12 open source tools for natural language processing
======

Take a look at a dozen options for your next NLP application.

![Chat bubbles][1]

Natural language processing (NLP), the technology that powers all the chatbots, voice assistants, predictive text, and other speech/text applications that permeate our lives, has evolved significantly in the last few years. There are a wide variety of open source NLP tools out there, so I decided to survey the landscape to help you plan your next voice- or text-based application.

For this review, I focused on tools that use languages I'm familiar with, even though I'm not familiar with all the tools. (I didn't find a great selection of tools in the languages I'm not familiar with anyway.) That said, I excluded tools in three languages I am familiar with, for various reasons.

The most obvious language I didn't include might be R, but most of the libraries I found hadn't been updated in over a year. That doesn't always mean they aren't being maintained well, but I think they should be getting updates more often to compete with other tools in the same space. I also chose languages and tools that are most likely to be used in production scenarios (rather than academia and research), and I have mostly used R as a research and discovery tool.

I was also surprised to see that the Scala libraries are fairly stagnant. It has been a couple of years since I last used Scala, when it was pretty popular. Most of the libraries haven't been updated since that time—or they've only had a few updates.

Finally, I excluded C++. This is mostly because it's been many years since I last wrote in C++, and the organizations I've worked in have not used C++ for NLP or any data science work.

### Python tools

#### Natural Language Toolkit (NLTK)

It would be easy to argue that [Natural Language Toolkit (NLTK)][2] is the most full-featured tool of the ones I surveyed. It implements pretty much any component of NLP you would need, like classification, tokenization, stemming, tagging, parsing, and semantic reasoning. And there's often more than one implementation for each, so you can choose the exact algorithm or methodology you'd like to use. It also supports many languages. However, it represents all data in the form of strings, which is fine for simple constructs but makes it hard to use some advanced functionality. The documentation is also quite dense, but there is a lot of it, as well as [a great book][3]. The library is also a bit slow compared to other tools. Overall, this is a great toolkit for experimentation, exploration, and applications that need a particular combination of algorithms.

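To make two of the components mentioned above concrete, here is a dependency-free toy sketch of tokenization and stemming in plain Python. It is deliberately crude; NLTK's real stemmers, such as its Porter stemmer, apply ordered, conditional rewrite rules rather than the blind suffix stripping shown here.

```python
import re

def tokenize(text):
    # Lowercase and split into word-like chunks; a crude stand-in
    # for a real tokenizer, which also handles punctuation, clitics, etc.
    return re.findall(r"[a-z']+", text.lower())

# Suffixes to strip, longest first so "es"/"ing" win over bare "s".
SUFFIXES = ("ing", "ed", "es", "s")

def stem(token):
    # Naive suffix stripping: drop a known suffix if enough stem remains.
    for suffix in SUFFIXES:
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

tokens = tokenize("The cats were chasing mice in the garden.")
print(tokens)
print([stem(t) for t in tokens])
```

Note how the toy stemmer happily turns "chasing" into the non-word "chas" and leaves irregular forms like "mice" untouched; handling exactly these cases well is what a library like NLTK buys you.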
#### SpaCy

[SpaCy][4] is probably the main competitor to NLTK. It is faster in most cases, but it only has a single implementation for each NLP component. Also, it represents everything as an object rather than a string, which simplifies the interface for building applications. This also helps it integrate with many other frameworks and data science tools, so you can do more once you have a better understanding of your text data. However, SpaCy doesn't support as many languages as NLTK. It does have a simple interface with a simplified set of choices and great documentation, as well as multiple neural models for various components of language processing and analysis. Overall, this is a great tool for new applications that need to be performant in production and don't require a specific algorithm.

#### TextBlob

[TextBlob][5] is kind of an extension of NLTK. You can access many of NLTK's functions in a simplified manner through TextBlob, and TextBlob also includes functionality from the Pattern library. If you're just starting out, this might be a good tool to use while learning, and it can be used in production for applications that don't need to be overly performant. Overall, TextBlob is used all over the place and is great for smaller projects.

#### Textacy

This tool may have the best name of any library I've ever used. Say "[Textacy][6]" a few times while emphasizing the "ex" and drawing out the "cy." Not only is it great to say, but it's also a great tool. It uses SpaCy for its core NLP functionality, but it handles a lot of the work before and after the processing. If you were planning to use SpaCy, you might as well use Textacy so you can easily bring in many types of data without having to write extra helper code.

#### PyTorch-NLP

[PyTorch-NLP][7] has been out for just a little over a year, but it has already gained a tremendous community. It is a great tool for rapid prototyping. It's also updated often with the latest research, and top companies and researchers have released many other tools to do all sorts of amazing processing, like image transformations. Overall, PyTorch is targeted at researchers, but it can also be used for prototypes and initial production workloads with the most advanced algorithms available. The libraries being created on top of it might also be worth looking into.

### Node tools

#### Retext

[Retext][8] is part of the [unified collective][9]. Unified is an interface that allows multiple tools and plugins to integrate and work together effectively. Retext is one of three syntaxes used by the unified tool; the others are Remark for markdown and Rehype for HTML. This is a very interesting idea, and I'm excited to see this community grow. Retext doesn't expose a lot of its underlying techniques, but instead uses plugins to achieve the results you might be aiming for with NLP. It's easy to do things like checking spelling, fixing typography, detecting sentiment, or making sure text is readable with simple plugins. Overall, this is an excellent tool and community if you just need to get something done without having to understand everything in the underlying process.

#### Compromise

[Compromise][10] certainly isn't the most sophisticated tool. If you're looking for the most advanced algorithms or the most complete system, this probably isn't the right tool for you. However, if you want a performant tool that has a wide breadth of features and can function on the client side, you should take a look at Compromise. Overall, its name is accurate in that the creators compromised on functionality and accuracy by focusing on a small package with much more specific functionality that benefits from the user understanding more of the context surrounding the usage.

#### Natural

[Natural][11] includes most functions you might expect in a general NLP library. It is mostly focused on English, but some other languages have been contributed, and the community is open to additional contributions. It supports tokenizing, stemming, classification, phonetics, term frequency–inverse document frequency, WordNet, string similarity, and some inflections. It might be most comparable to NLTK, in that it tries to include everything in one package, but it is easier to use and isn't necessarily focused around research. Overall, this is a pretty full library, but it is still in active development and may require additional knowledge of underlying implementations to be fully effective.

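One feature in that list, term frequency–inverse document frequency (tf-idf), is easy to sketch from first principles. The following is a conceptual illustration in plain Python, not Natural's API: a term scores highly when it is frequent within one document but rare across the collection, so ubiquitous words like "the" score zero.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each term in each document as term frequency times
    inverse document frequency (log of total docs over docs containing it)."""
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        counts = Counter(doc)
        scores.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in counts.items()
        })
    return scores

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]
scores = tf_idf(docs)
# "the" appears in every document, so its idf is log(2/2) = 0.
print(scores[0]["the"])  # 0.0
print(scores[0]["cat"] > 0)  # True
```

Real libraries differ in details (smoothing, normalization), but the shape of the computation is the same.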
#### Nlp.js

[Nlp.js][12] is built on top of several other NLP libraries, including Franc and Brain.js. It provides a nice interface into many components of NLP, like classification, sentiment analysis, stemming, named entity recognition, and natural language generation. It also supports quite a few languages, which is helpful if you plan to work in something other than English. Overall, this is a great general tool with a simplified interface into several other great tools. This will likely take you a long way in your applications before you need something more powerful or more flexible.

### Java tools

#### OpenNLP

[OpenNLP][13] is hosted by the Apache Foundation, so it's easy to integrate it into other Apache projects, like Apache Flink, Apache NiFi, and Apache Spark. It is a general NLP tool that covers all the common processing components of NLP, and it can be used from the command line or within an application as a library. It also has wide support for multiple languages. Overall, OpenNLP is a powerful tool with a lot of features and ready for production workloads if you're using Java.

#### StanfordNLP

[Stanford CoreNLP][14] is a set of tools that provides statistical NLP, deep learning NLP, and rule-based NLP functionality. Many other programming language bindings have been created so this tool can be used outside of Java. It is a very powerful tool created by an elite research institution, but it may not be the best thing for production workloads. This tool is dual-licensed, with a special license for commercial purposes. Overall, this is a great tool for research and experimentation, but it may incur additional costs in a production system. The Python implementation might also interest many readers more than the Java version. Also, one of the best machine learning courses is taught by a Stanford professor on Coursera. [Check it out][15] along with other great resources.

[CogCompNLP][16], developed by the University of Illinois, also has a Python library with similar functionality. It can be used to process text, either locally or on remote systems, which can remove a tremendous burden from your local device. It provides processing functions such as tokenization, part-of-speech tagging, chunking, named-entity tagging, lemmatization, dependency and constituency parsing, and semantic role labeling. Overall, this is a great tool for research, and it has a lot of components that you can explore. I'm not sure it's great for production workloads, but it's worth trying if you plan to use Java.

* * *

What are your favorite open source tools and libraries for NLP? Please share in the comments—especially if there's one I didn't include.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/natural-language-processing-tools

作者:[Dan Barker (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: http://www.nltk.org/
[3]: http://www.nltk.org/book_1ed/
[4]: https://spacy.io/
[5]: https://textblob.readthedocs.io/en/dev/
[6]: https://readthedocs.org/projects/textacy/
[7]: https://pytorchnlp.readthedocs.io/en/latest/
[8]: https://www.npmjs.com/package/retext
[9]: https://unified.js.org/
[10]: https://www.npmjs.com/package/compromise
[11]: https://www.npmjs.com/package/natural
[12]: https://www.npmjs.com/package/node-nlp
[13]: https://opennlp.apache.org/
[14]: https://stanfordnlp.github.io/CoreNLP/
[15]: https://opensource.com/article/19/2/learn-data-science-ai
[16]: https://github.com/CogComp/cogcomp-nlp

@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (zhangxiangping)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -264,7 +264,7 @@ via: https://opensource.com/article/19/7/python-google-natural-language-api

作者:[JR Oakes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[zhangxiangping](https://github.com/zhangxiangping)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,96 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with GnuCash)
[#]: via: (https://opensource.com/article/20/2/gnucash)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

Getting started with GnuCash
======

Manage your personal or small business accounting with GnuCash.

![A dollar sign in a network][1]

For the past four years, I've been managing my personal finances with [GnuCash][2], and I'm quite satisfied with it. The open source (GPL v3) project has been growing and improving since its initial release in 1998, and the latest version, 3.8, released in December 2019, adds many improvements and bug fixes.

GnuCash is available for Windows, MacOS, and Linux. The application implements a double-entry bookkeeping system and can import a variety of popular open and proprietary file formats, including QIF, QFX, OFX, CSV, and more. This makes it easy to convert from other personal finance applications, including Quicken, which it was created to replicate.

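To make the double-entry idea concrete, here is a minimal sketch in Python. It illustrates the bookkeeping principle, not GnuCash's actual data model: every transaction moves value from one account to another, so the ledger as a whole always nets to zero. The colon-separated account names mimic GnuCash's account hierarchy.

```python
from collections import defaultdict

class Ledger:
    """Toy double-entry ledger. Every transaction is recorded twice,
    once as a debit and once as a credit, so the books always balance."""

    def __init__(self):
        self.balances = defaultdict(float)

    def transfer(self, amount, from_account, to_account):
        # One entry out, one matching entry in.
        self.balances[from_account] -= amount
        self.balances[to_account] += amount

    def total(self):
        # The double-entry invariant: all entries sum to zero.
        return sum(self.balances.values())

ledger = Ledger()
ledger.transfer(2500.00, "Income:Salary", "Assets:Checking")  # paycheck
ledger.transfer(800.00, "Assets:Checking", "Expenses:Rent")   # rent

print(ledger.balances["Assets:Checking"])  # 1700.0
print(ledger.total())                      # 0.0
```

Because every amount is entered twice, a nonzero `total()` immediately reveals a recording error; that self-checking property is the main appeal of double-entry systems.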
With GnuCash, you can track personal finances as well as small business accounting and invoicing. It doesn't have an integrated payroll system; according to the documentation, you can track payroll expenses in GnuCash, but you have to calculate taxes and deductions outside the software.

### Installation

To install GnuCash on Linux:

* On Red Hat, CentOS, or Fedora: **$ sudo dnf install gnucash**
* On Debian, Ubuntu, or Pop!_OS: **$ sudo apt install gnucash**

You can also install it from [Flathub][3], which is what I used on my laptop running Elementary OS. (All the screenshots in this article are from that installation.)

### Setup

After you install and launch the program, you will see a welcome screen that gives you the option to create a new set of accounts, import QIF files, or open a new user tutorial.

![GnuCash Welcome screen][4]

#### Personal accounts

If you choose the first option (as I did), GnuCash opens a screen to help you get up and running. It collects initial data and sets up your account preferences, such as your account types and names, business data (e.g., tax ID number), and preferred currency.

![GnuCash new account setup][5]

GnuCash supports personal bank accounts, business accounts, car loans, CD and money market accounts, childcare accounts, and more.

As an example, start by creating a simple checkbook. You can either enter your account's beginning balance or import existing account data in multiple formats.

![GnuCash import data][6]

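To give a feel for what such an import involves, QIF (one of the formats mentioned above) is a simple line-oriented text format: each line starts with a one-letter code such as `D` (date), `T` (amount), or `P` (payee), and a `^` line ends a record. Here's a hypothetical minimal reader in Python; real QIF files carry more codes (categories, splits, account headers) than this sketch handles.

```python
def parse_qif(text):
    """Minimal QIF reader covering only the D (date), T (amount),
    and P (payee) codes; '^' terminates each transaction record."""
    codes = {"D": "date", "T": "amount", "P": "payee"}
    transactions, current = [], {}
    for line in text.strip().splitlines():
        if line.startswith("^"):          # end of one transaction
            transactions.append(current)
            current = {}
        elif line[:1] in codes:           # known field code
            current[codes[line[0]]] = line[1:].strip()
        # Other lines (e.g., the "!Type:" header) are ignored here.
    return transactions

sample = """!Type:Bank
D2020/02/01
T-42.50
PGrocery store
^
D2020/02/03
T1500.00
PEmployer
^
"""

for tx in parse_qif(sample):
    print(tx["date"], tx["amount"], tx["payee"])
```

GnuCash's importer does far more (date-format detection, duplicate matching, account mapping), but the underlying record structure is this simple.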
#### Invoicing

GnuCash also supports small business functions, including customers, vendors, and invoicing. To create an invoice, enter the data in the **Business -> Invoice** section.

![GnuCash create invoice][7]

Then you can either print the invoice on paper or export it to a PDF and email it to your customer.

![GnuCash invoice][8]

### Get help

If you have questions, there's an excellent Help guide accessible from the far-right side of the menu bar.

![GnuCash help][9]

The project's website includes links to lots of helpful information, such as a great overview of GnuCash [features][10]. GnuCash also has [detailed documentation][11] available to download and read offline and a [wiki][12] with helpful information for users and developers.

You can find other files and documentation in the project's [GitHub][13] repository. The GnuCash project is volunteer-driven. If you would like to contribute, please check out [Getting involved][14] on the project's wiki.

--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/gnucash
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0 (A dollar sign in a network)
|
||||
[2]: https://www.gnucash.org/
|
||||
[3]: https://flathub.org/apps/details/org.gnucash.GnuCash
|
||||
[4]: https://opensource.com/sites/default/files/images/gnucash_welcome.png (GnuCash Welcome screen)
|
||||
[5]: https://opensource.com/sites/default/files/uploads/gnucash_newaccountsetup.png (GnuCash new account setup)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/gnucash_importdata.png (GnuCash import data)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/gnucash_enter-invoice.png (GnuCash create invoice)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/gnucash_invoice.png (GnuCash invoice)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/gnucash_help.png (GnuCash help)
|
||||
[10]: https://www.gnucash.org/features.phtml
|
||||
[11]: https://www.gnucash.org/docs/v3/C/gnucash-help.pdf
|
||||
[12]: https://wiki.gnucash.org/wiki/GnuCash
|
||||
[13]: https://github.com/Gnucash
|
||||
[14]: https://wiki.gnucash.org/wiki/GnuCash#Getting_involved_in_the_GnuCash_project

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NVIDIA’s Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux)
[#]: via: (https://itsfoss.com/geforce-now-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

NVIDIA’s Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux
======

NVIDIA’s [GeForce NOW][1] cloud gaming service is promising for gamers who may not have the hardware but want to experience the latest and greatest games with the best possible experience: GeForce NOW streams the game online, and you play it on any device you want.

The service used to be limited to a small set of users (via a waitlist). However, NVIDIA recently announced that [GeForce NOW is open to all][2]. But it really isn’t.

Interestingly, it’s **not available in all regions** across the globe. And, worse, **GeForce NOW does not support Linux**.

![][3]

### GeForce NOW is Not ‘Open For All’

The whole point of a subscription-based cloud gaming service is to eliminate platform dependence.

Just like you would normally visit a website using a web browser, you should be able to stream a game on every platform. That’s the concept, right?

![][4]

Well, that’s definitely not rocket science – yet NVIDIA still skipped supporting Linux (and iOS).

### Is it because no one uses Linux?

I strongly disagree – even if that is the reason some companies give for not supporting Linux. If no one used Linux, I wouldn’t be writing for It’s FOSS with Linux as my primary desktop OS.

Not just that – why do you think a Twitter user mentioned the lack of Linux support if it weren’t a real concern?

![][5]

Yes, maybe the userbase isn’t large enough, but for a cloud-based service, it makes no sense to **not support Linux**.

Technically, if no one gamed on Linux, **Valve** wouldn’t have bothered to improve [Steam Play][6] to help more users play Windows-only games on Linux.

I don’t want to claim anything that isn’t true – but the desktop Linux gaming scene is evolving faster than ever (even if the numbers are low compared to Windows and Mac).

### Cloud gaming isn’t supposed to work like this

![][7]

As I mentioned above, it isn’t hard to find Linux gamers using Steam Play. It’s just that the overall “market share” of gamers on Linux is smaller than that of its counterparts.

Even so, cloud gaming isn’t supposed to depend on a specific platform. And, considering that GeForce NOW is essentially a browser-based streaming service, it shouldn’t be hard for a big shot like NVIDIA to support Linux.

Come on, team green – _do you want us to believe that supporting Linux is technically hard_? Or do you just want to say that _it’s not worth supporting the Linux platform_?

**Wrapping Up**

No matter how excited I was for the GeForce NOW service to launch, it was very disappointing to see that it does not support Linux at all.

If cloud gaming services like GeForce NOW start supporting Linux in the near future, **you probably won’t need a reason to use Windows** (*coughs*).

What do you think? Let me know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/geforce-now-linux/

Author: [Ankush Das][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.nvidia.com/en-us/geforce-now/
[2]: https://blogs.nvidia.com/blog/2020/02/04/geforce-now-pc-gaming/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now-linux.jpg?ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now.png?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/geforce-now-twitter-1.jpg?ssl=1
[6]: https://itsfoss.com/steam-play/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/ge-force-now.jpg?ssl=1

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Install All Essential Media Codecs in Ubuntu With This Single Command [Beginner’s Tip])
[#]: via: (https://itsfoss.com/install-media-codecs-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Install All Essential Media Codecs in Ubuntu With This Single Command [Beginner’s Tip]
======

If you have just installed Ubuntu or one of the other [Ubuntu flavors][1] like Kubuntu, Lubuntu, etc., you’ll notice that your system doesn’t play some audio or video files.

For video files, you can [install VLC on Ubuntu][2]. [VLC][3] is one of the [best video players for Linux][4] and can play almost any video file format. But you’ll still have trouble with some audio files and Flash content.

The good thing is that [Ubuntu][5] provides a single package to install all the essential media codecs: ubuntu-restricted-extras.

![][6]

### What is Ubuntu Restricted Extras?

The ubuntu-restricted-extras package consists of various essential software such as the Flash plugin, [unrar][7], [gstreamer][8], MP4 codecs, codecs for the [Chromium browser in Ubuntu][9], etc.

Since this software is not open source, and some of it is encumbered by software patents, Ubuntu doesn’t install it by default. You’ll have to use the multiverse repository, the software repository Ubuntu created specifically to provide non-open-source software to its users.

Please read this article to [learn more about various Ubuntu repositories][10].

### How to install Ubuntu Restricted Extras?

I find it surprising that the software center doesn’t list Ubuntu Restricted Extras. In any case, you can install the package from the command line, and it’s very simple.

Open a terminal by searching for it in the menu or using the [terminal keyboard shortcut Ctrl+Alt+T][11].

Since the ubuntu-restricted-extras package is available in the multiverse repository, you should verify that the multiverse repository is enabled on your system:

```
sudo add-apt-repository multiverse
```
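
If you prefer to check first whether multiverse is already enabled, you can inspect the components listed in your apt sources. This is a sketch that assumes the classic one-line sources format (not the newer deb822 `ubuntu.sources` layout):

```shell
# List enabled one-line apt source entries and keep only those
# that include the multiverse component.
grep -rh '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null | grep multiverse
```

If the second grep prints nothing, multiverse is not enabled yet and the add-apt-repository command above is needed.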

And then you can install it in the default Ubuntu edition using this command:

```
sudo apt install ubuntu-restricted-extras
```

When you enter the command, you’ll be asked for your password. As _**you type the password, nothing is displayed on the screen**_. That’s normal. Type your password and press Enter.

It will show a huge list of packages to be installed. Press Enter to confirm the selection when asked.

You’ll also encounter an [EULA][12] (End User License Agreement) screen like this:

![Press Tab key to select OK and press Enter key][13]

This screen can be overwhelming to navigate, but don’t worry. Just press Tab and it will highlight the options. When the correct option is highlighted, press Enter to confirm your selection.

![Press Tab key to highlight Yes and press Enter key][14]

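If you are installing from a script (for example, when provisioning several machines), the EULA prompt can be answered ahead of time through debconf. This is a sketch under the assumption that the prompt comes from the ttf-mscorefonts-installer package, as it does on recent Ubuntu releases:

```shell
# Pre-accept the Microsoft fonts EULA (assumption: the dialog belongs to
# the ttf-mscorefonts-installer package pulled in by the metapackage).
echo "ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true" | sudo debconf-set-selections

# With the answer preseeded, the install can run without stopping.
sudo DEBIAN_FRONTEND=noninteractive apt install -y ubuntu-restricted-extras
```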
Once the process finishes, you should be able to play MP3 and other media formats thanks to newly installed media codecs.

##### Installing the restricted extras package on Kubuntu, Lubuntu, Xubuntu

Do keep in mind that Kubuntu, Lubuntu, and Xubuntu have this package available under their own respective names. They could have just used the same name, but unfortunately they don’t.

On Kubuntu, use this command:

```
sudo apt install kubuntu-restricted-extras
```

On Lubuntu, use:

```
sudo apt install lubuntu-restricted-extras
```

On Xubuntu, you should use:

```
sudo apt install xubuntu-restricted-extras
```

I always recommend getting ubuntu-restricted-extras as one of the [essential things to do after installing Ubuntu][15]. It’s good to have a single command to install multiple codecs in Ubuntu.

I hope you like this quick tip in the Ubuntu beginner series. I’ll share more such tips in the future.

--------------------------------------------------------------------------------

via: https://itsfoss.com/install-media-codecs-ubuntu/

Author: [Abhishek Prakash][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/which-ubuntu-install/
[2]: https://itsfoss.com/install-latest-vlc/
[3]: https://www.videolan.org/index.html
[4]: https://itsfoss.com/video-players-linux/
[5]: https://ubuntu.com/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/Media_Codecs_in_Ubuntu.png?ssl=1
[7]: https://itsfoss.com/use-rar-ubuntu-linux/
[8]: https://gstreamer.freedesktop.org/
[9]: https://itsfoss.com/install-chromium-ubuntu/
[10]: https://itsfoss.com/ubuntu-repositories/
[11]: https://itsfoss.com/ubuntu-shortcuts/
[12]: https://en.wikipedia.org/wiki/End-user_license_agreement
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras.jpg?ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras_1.jpg?ssl=1
[15]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Music composition with Python and Linux)
[#]: via: (https://opensource.com/article/20/2/linux-open-source-music)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)

Music composition with Python and Linux
======
A chat with Mr. MAGFest—Brendan Becker.
![Wires plugged into a network switch][1]

I met Brendan Becker while working in a computer store in 1999. We both enjoyed building custom computers and installing Linux on them. Brendan was always involved in several technology projects at once, ranging from game coding to music composition. Fast-forward a few years from the computer store days: he went on to write [pyDance][2], an open source implementation of multiple dancing games, and then became the CEO of the music and gaming event [MAGFest][3]. Sometimes referred to as "Mr. MAGFest" because he was at the helm of the event, Brendan now uses the music pseudonym "[Inverse Phase][4]" as a composer of chiptunes—music predominantly made on 8-bit computers and game consoles.

I thought it would be interesting to interview him and ask some specifics about how he has benefited from Linux and open source software throughout his career.

![Inverse Phase performance photo][5]

Copyright Nickeledge, CC BY-SA 2.0.

### Alan Formy-Duval: How did you get started in computers and software?
Brendan Becker: There's been a computer in my household almost as far back as I can remember. My dad has fervently followed technology; he brought home a Compaq Portable when they first hit the market, and when he wasn't doing work on it, I would have access to it. Since I began reading at age two, using a computer became second nature to me—just read what it said on the disk, follow the instructions, and I could play games! Some of the time I would be playing with learning and education software, and we had a few disks full of games that I could play other times. I remember a single disk with a handful of free clones of popular titles. Eventually, my dad showed me that we could call other computers (BBS'ing at age 5!), and I saw where some of the games came from. One of the games I liked to play was written in BASIC, and all bets were off when I realized that I could simply modify the game by just reading a few things and re-typing them to make my game easier.
### Formy-Duval: This was the 1980s?
Becker: The Compaq Portable dropped in 1983 to give you a frame of reference. My dad had one of the first of that model.
### Formy-Duval: How did you get into Linux and open source software?
Becker: I was heavy into MODs and demoscene stuff in the early 90s, and I noticed that Walnut Creek ([cdrom.com][6]; now defunct) ran shop on FreeBSD. I was super curious about Unix and other operating systems in general, but didn't have much firsthand exposure, and thought it might be time to finally try something. DOOM had just released, and someone told me I might even be able to get it to run. Between that and being able to run cool internet servers, I started going down the rabbit hole. Someone saw me reading about FreeBSD and suggested I check out Linux, this new OS that was written from the ground up for x86, unlike BSD, which (they said) had some issues with compatibility. So, I joined #linuxhelp on undernet IRC and asked how to get started with Linux, pointing out that I had done a modicum of research (asking "what's the difference between Red Hat and Slackware?") and probing mainly about what would be easiest to use. The only person talking in the channel said that he was 13 years old and he could figure out Slackware, so I should not have an issue. A math teacher in my school gave me a hard disk, I downloaded the "A" disk sets and a boot disk, wrote it out, installed it, and didn't spend much time looking back.
### Formy-Duval: How'd you become known as Mr. MAGFest?
Becker: Well, this one is pretty easy. I became the acting head of MAGFest almost immediately after the first event. The former chairpeople were all going their separate ways, and I demanded of the guy in charge that the event not be canceled. The solution was to run it myself, and the nickname became mine as I slowly molded the event into my own.
### Formy-Duval: I remember attending in those early days. How large did MAGFest eventually become?
Becker: The first MAGFest was 265 people. It is now a scary huge 20,000+ unique attendees.
### Formy-Duval: That is huge! Can you briefly describe the MAGFest convention?
Becker: One of my buddies, Hex, described it really well. He said, "It's like this video-game themed birthday party with all of your friends, but there happen to be a few thousand people there, and all of them can be your friends if you want, and then there are rock concerts." This was quickly adopted and shortened to "It's a four-day video game party with multiple video game rock concerts." Often the phrase "music and gaming festival" is all people need to get the idea.
### Formy-Duval: How did you make use of open source software in running MAGFest?
Becker: At the time I became the head of MAGFest, I had already written a game in Python, so I felt most comfortable also writing our registration system in Python. It was a pretty easy decision since there were no costs involved, and I already had the experience. Later on, our online registration system and rideshare interfaces were written in PHP/MySQL, and we used Kboard for our forums. Eventually, this evolved to us rolling our own registration system from scratch in Python, which we also use at the event, and running Drupal on the main website. At one point, I also wrote a system to manage the video room and challenge stations in Python. Oh, and we had a few game music listening stations that you could flip through tracks and liner notes of iconic game OSTs (Original Sound Tracks) and bands who played MAGFest.
### Formy-Duval: I understand that a few years ago you reduced your responsibilities at MAGFest to pursue new projects. What was your next endeavor?
Becker: I was always rather heavily into the game music scene and tried to bring as much of it to MAGFest as possible. As I became a part of those communities more and more, I wanted to participate. I wrote some medleys, covers, and arrangements of video game tunes using free, open source versions of the DOS and Windows demoscene tools that I had used before, which were also free but not necessarily open source. I released a few tracks in the first few years of running MAGFest, and then after some tough love and counseling from Jake Kaufman (also known as virt; Shovel Knight and Shantae are on his resume, among others), I switched gears to something I was much better at—chiptunes. Even though I had written PC speaker beeps and boops as a kid with my Compaq Portable and MOD files in the demoscene throughout the 90s, I released the first NES-spec track that I was truly proud to call my own in 2006. Several pop tributes and albums followed.
In 2010, I was approached by multiple individuals for game soundtrack work. Even though the soundtrack work didn't affect it much, I began to scale back some of my responsibilities with MAGFest more seriously, and in 2011, I decided to take a much larger step into the background. I would stay on the board as a consultant and help people learn what they needed to run their departments, but I was no longer at the helm. At the same time, my part-time job, which was paying the bills, laid off all of their workers, and I suddenly found myself with a lot of spare time. I began writing Pretty Eight Machine, a Nine Inch Nails tribute, which I spent over a year on, and between that and the game soundtrack work, I proved to myself that I could put food on the table (if only barely) with music, and this is what I wanted to do next.
![Inverse Phase CTM Tracker][7]
Copyright Inverse Phase, Used with permission.
### Formy-Duval: What is your workspace like in terms of hardware and software?
Becker: In my DOS/Windows days, I used mostly FastTracker 2. In Linux, I replaced that with SoundTracker (not Karsten Obarski's original, but a GTK rewrite; see [soundtracker.org][8]). These days, SoundTracker is in a state of flux—although I still need to try the new GTK3 version—but [MilkyTracker][9] is a good replacement when I can't use SoundTracker. Good old FastTracker 2 also runs in DOSBox, if I really need the original. This was when I started using Linux, however, so this is stuff I figured out 20-25 years ago.
Within the last ten years, I've gravitated away from sample-based music and towards chiptunes—music synthesized by old sound chips from 8- and 16-bit game systems and computers. There is a very good cross-platform tool called [Deflemask][10] to write music for a lot of these systems. A few of the systems I want to write music for aren't supported, though, and Deflemask is closed source, so I've begun building my own music composition environment from scratch with Python and [Pygame][11]. I maintain my code tree using Git and will control hardware synthesizer boards using open source [KiCad][12].
### Formy-Duval: What projects are you currently focused on?
Becker: I work on game soundtracks and music commissions off and on. While that's going on, I've also been working on starting an electronic entertainment museum called [Bloop][13]. We're doing a lot of cool things with archival and inventory, but perhaps the most exciting thing is that we've been building exhibits with Raspberry Pis. They're so versatile, and it's weird to think that, if I had tried to do this even ten years ago, I wouldn't have had small single-board computers to drive my exhibits; I probably would have bolted a laptop to the back of a flat-panel!
### Formy-Duval: There are a lot more game platforms coming to Linux now, such as Steam, Lutris, and Play-on-Linux. Do you think this trend will continue, and these are here to stay?
Becker: As someone who's been gaming on Linux for 25 years—in fact, I was brought to Linux _because_ of games—I think I find this question harder than most people would. I've been running Linux-native games for decades, and I've even had to eat my "either a Linux solution exists or can be written" words back in the day, but eventually, I did, and wrote a Linux game.
Real talk: Android's been out since 2008. If you've played a game on Android, you've played a game on Linux. Steam's been available for Linux for eight years. The Steambox/SteamOS was announced only a year after Steam. I don't hear a whole lot about Lutris or Play-on-Linux, but I know they exist and hope they succeed. I do see a pretty big following for GOG, and I think that's pretty neat. I see a lot of quality game ports coming out of people like Ryan Gordon (icculus) and Ethan Lee (flibitijibibo), and some companies even port in-house. Game engines like Unity and Unreal already support Linux. Valve has incorporated Proton into the Linux version of Steam for something like two years now, so now Linux users don't even have to search for Linux-native versions of their games.
I can say that I think most gamers expect and will continue to expect the level of support they're already receiving from the retail game market. Personally, I hope that level goes up and not down!
_Learn more about Brendan's work as [Inverse Phase][14]._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/linux-open-source-music
Author: [Alan Formy-Duval][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other21x_cc.png?itok=JJJ5z6aB (Wires plugged into a network switch)
[2]: http://icculus.org/pyddr/
[3]: http://magfest.org/
[4]: http://www.inversephase.com/
[5]: https://opensource.com/sites/default/files/uploads/inverse_phase_performance_bw.png (Inverse Phase performance photo)
[6]: https://en.wikipedia.org/wiki/Walnut_Creek_CDROM
[7]: https://opensource.com/sites/default/files/uploads/inversephase_ctm_tracker_screenshot.png (Inverse Phase CTM Tracker)
[8]: http://soundtracker.org
[9]: http://www.milkytracker.org
[10]: http://www.deflemask.com
[11]: http://www.pygame.org
[12]: http://www.kicad-pcb.org
[13]: http://bloopmuseum.com
[14]: https://www.inversephase.com

[#]: collector: (lujun9972)
[#]: translator: (chai-yuan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Playing Music on your Fedora Terminal with MPD and ncmpcpp)
[#]: via: (https://fedoramagazine.org/playing-music-on-your-fedora-terminal-with-mpd-and-ncmpcpp/)
[#]: author: (Carmine Zaccagnino https://fedoramagazine.org/author/carzacc/)

Playing Music on your Fedora Terminal with MPD and ncmpcpp
======

![][1]

MPD, as the name implies, is a Music Playing Daemon. It can play music but, being a daemon, any piece of software can interface with it and play sounds, including some CLI clients.

One of them is called _ncmpcpp_, which is an improvement over the pre-existing _ncmpc_ tool. The name change doesn’t have much to do with the language they’re written in: they’re both C++, but _ncmpcpp_ is called that because it’s the _NCurses Music Playing Client Plus Plus_.

### Installing MPD and ncmpcpp

The _ncmpcpp_ client can be installed from the official Fedora repositories directly with DNF:

```
$ sudo dnf install ncmpcpp
```

On the other hand, MPD has to be installed from the RPM Fusion _free_ repository, which you can enable, [as per the official installation instructions][2], by running

```
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```

and then you can install MPD by running

```
$ sudo dnf install mpd
```

### Configuring and Starting MPD

The most painless way to set up MPD is to run it as a regular user. The default is to run it as the dedicated _mpd_ user, but that causes all sorts of issues with permissions.

Before we can run it, we need to create a local config file that will allow it to run as a regular user.

To do that, create a subdirectory called _mpd_ in _~/.config_:

```
$ mkdir ~/.config/mpd
```

copy the default config file into this directory:

```
$ cp /etc/mpd.conf ~/.config/mpd
```

and then edit it with a text editor like _vim_, _nano_ or _gedit_:

```
$ nano ~/.config/mpd/mpd.conf
```

I recommend you read through all of it to check if there’s anything you need to do, but for most setups you can delete everything and just leave the following:

```
db_file "~/.config/mpd/mpd.db"
log_file "syslog"
```

At this point you should be able to just run

```
$ mpd
```

with no errors, which will start the MPD daemon in the background.

### Using ncmpcpp

Simply run

```
$ ncmpcpp
```

and you’ll see an ncurses-powered graphical user interface in your terminal.

Press _4_ and you should see your local music library, be able to change the selection using the arrow keys, and press _Enter_ to play a song.

Doing this multiple times will create a _playlist_, which allows you to move to the next track using the _>_ button (not the right arrow; the _>_ closing angle bracket character) and go back to the previous track with _<_. The _+_ and _–_ buttons increase and decrease the volume. The _Q_ button quits ncmpcpp, but it doesn’t stop the music. You can play and pause with _P_.

You can see the current playlist by pressing the _1_ button (this is the default view). From this view, you can press _i_ to look at the information (tags) about the current song. You can change the tags of the currently playing (or paused) song by pressing _6_.

Pressing the \ button will add (or remove) an informative panel at the top of the view. In the top left, you should see something that looks like this:

```
[------]
```

Pressing the _r_, _z_, _y_, _R_, and _x_ buttons will respectively toggle the _repeat_, _random_, _single_, _consume_, and _crossfade_ playback modes and will replace one of the _–_ characters in that little indicator with the initial of the selected mode.

Pressing the _F1_ button will display some help text, which contains a list of keybindings, so there’s no need to write a complete list here. So now go on, be geeky, and play all your music from your terminal!

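ncmpcpp also reads an optional configuration file. If your MPD instance ever listens on a non-default host or port, a minimal sketch of _~/.ncmpcpp/config_ (newer versions also look in _~/.config/ncmpcpp/config_) could be:

```
mpd_host = localhost
mpd_port = 6600
```

These match MPD's defaults, so for the setup described above the file can be omitted entirely.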
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/playing-music-on-your-fedora-terminal-with-mpd-and-ncmpcpp/

Author: [Carmine Zaccagnino][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://fedoramagazine.org/author/carzacc/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/02/play_music_mpd-816x346.png
[2]: https://rpmfusion.org/Configuration
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Scan Kubernetes for errors with KRAWL)
[#]: via: (https://opensource.com/article/20/2/kubernetes-scanner)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)

Scan Kubernetes for errors with KRAWL
======
The KRAWL script identifies errors in Kubernetes pods and containers.
![Ship captain sailing the Kubernetes seas][1]

When you're running containers with Kubernetes, you often find that they pile up. This is by design. It's one of the advantages of containers: they're cheap to start whenever a new one is needed. You can use a front-end like OpenShift or OKD to manage pods and containers. Those make it easy to visualize what you have set up and offer a rich set of commands for quick interactions.

If a platform for managing containers doesn't fit your requirements, though, you can get the same information using only the Kubernetes toolchain, but you need a lot of commands for a full overview of a complex environment. For that reason, I wrote [KRAWL][2], a simple script that scans the pods and containers under each namespace on a Kubernetes cluster and displays the output of events, if any are found. It can also be used as a Kubernetes plugin for the same purpose. It's a quick and easy way to get a lot of useful information.

### Prerequisites

  * You must have kubectl installed.
  * Your cluster's kubeconfig must be either in its default location (`$HOME/.kube/config`) or exported (`KUBECONFIG=/path/to/kubeconfig`).

### Usage

```
$ ./krawl
```

![KRAWL script][3]

### The script
|
||||
|
||||
|
||||

```
#!/bin/bash
# AUTHOR: Abhishek Tamrakar
# EMAIL: [abhishek.tamrakar08@gmail.com][4]
# LICENSE: Copyright (C) 2018 Abhishek Tamrakar
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# <http://www.apache.org/licenses/LICENSE-2.0>
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
#define the variables
KUBE_LOC=~/.kube/config
KUBECTL=$(which kubectl)
GET=$(which egrep)
AWK=$(which awk)
red=$(tput setaf 1)
normal=$(tput sgr0)

# define functions

# wrapper for printing info messages
info()
{
  printf '\n\e[34m%s\e[m: %s\n' "INFO" "$@"
}

# cleanup when all done
cleanup()
{
  rm -f results.csv
}

# just check if the command we are about to call is available
checkcmd()
{
  #check if command exists
  local cmd=$1
  if [ -z "${!cmd}" ]
  then
    printf '\n\e[31m%s\e[m: %s\n' "ERROR" "check if $1 is installed !!!"
    exit 1
  fi
}

get_namespaces()
{
  #get namespaces
  namespaces=( \
    $($KUBECTL get namespaces --ignore-not-found=true | \
    $AWK '/Active/ {print $1}' \
    ORS=" ") \
  )
  #exit if namespaces are not found
  if [ ${#namespaces[@]} -eq 0 ]
  then
    printf '\n\e[31m%s\e[m: %s\n' "ERROR" "No namespaces found!!"
    exit 1
  fi
}

#get events for pods in errored state
get_pod_events()
{
  printf '\n'
  if [ ${#ERRORED[@]} -ne 0 ]
  then
    info "${#ERRORED[@]} errored pods found."
    for CULPRIT in ${ERRORED[@]}
    do
      info "POD: $CULPRIT"
      info
      $KUBECTL get events \
        --field-selector=involvedObject.name=$CULPRIT \
        -ocustom-columns=LASTSEEN:.lastTimestamp,REASON:.reason,MESSAGE:.message \
        --all-namespaces \
        --ignore-not-found=true
    done
  else
    info "0 pods with errored events found."
  fi
}

#define the logic
get_pod_errors()
{
  printf "%s\n" "NAMESPACE,POD_NAME,CONTAINER_NAME,ERRORS" > results.csv
  printf "%s\n" "---------,--------,--------------,------" >> results.csv
  for NAMESPACE in ${namespaces[@]}
  do
    while IFS=' ' read -r POD CONTAINERS
    do
      for CONTAINER in ${CONTAINERS//,/ }
      do
        COUNT=$($KUBECTL logs --since=1h --tail=20 $POD -c $CONTAINER -n $NAMESPACE 2>/dev/null | \
          $GET -c '^error|Error|ERROR|Warn|WARN')
        if [ $COUNT -gt 0 ]
        then
          STATE=("${STATE[@]}" "$NAMESPACE,$POD,$CONTAINER,$COUNT")
        else
          #catch pods in errored state
          ERRORED=($($KUBECTL get pods -n $NAMESPACE --no-headers=true | \
            awk '!/Running/ {print $1}' ORS=" ") \
          )
        fi
      done
    done< <($KUBECTL get pods -n $NAMESPACE --ignore-not-found=true -o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name --no-headers=true)
  done
  printf "%s\n" ${STATE[@]:-None} >> results.csv
  STATE=()
}

#define usage for separate run
usage()
{
  cat << EOF

USAGE: "${0##*/} </path/to/kube-config>(optional)"

This program is free software under the terms of the Apache 2.0 License.
COPYRIGHT (C) 2018 Abhishek Tamrakar

EOF
  exit 0
}

#check if basic commands are found
trap cleanup EXIT
checkcmd KUBECTL
#
#set the ground
if [ $# -lt 1 ]; then
  # -s is true only for a file that exists and is non-empty
  if [ ! -s "${KUBE_LOC}" ]
  then
    info "A readable kube config location is required!!"
    usage
  fi
elif [ $# -eq 1 ]
then
  export KUBECONFIG=$1
elif [ $# -gt 1 ]
then
  usage
fi
#play
get_namespaces
get_pod_errors

printf '\n%40s\n' 'KRAWL'
printf '%s\n' '---------------------------------------------------------------------------------'
printf '%s\n' ' Krawl is a command line utility to scan pods and print names of errored pods '
printf '%s\n\n' ' and containers within. To use it as a kubernetes plugin, please check the page '
printf '%s\n' '================================================================================='

sed 's/,/,|/g' results.csv | column -s ',' -t
get_pod_events
```

* * *

_This was originally published as the README in [KRAWL's GitHub repository][2] and is reused with permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/kubernetes-scanner

作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://github.com/abhiTamrakar/kube-plugins/tree/master/krawl
[3]: https://opensource.com/sites/default/files/uploads/krawl_0.png (KRAWL script)
[4]: mailto:abhishek.tamrakar08@gmail.com
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top hacks for the YaCy open source search engine)
[#]: via: (https://opensource.com/article/20/2/yacy-search-engine-hacks)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Top hacks for the YaCy open source search engine
======
Rather than adapting to someone else's vision, customize your search engine for the internet you want with YaCy.
![Browser of things][1]

In my article about [getting started with YaCy][2], I explained how to install and start using the [YaCy][3] peer-to-peer search engine. One of the most exciting things about YaCy, however, is the fact that it's a local client. Each user owns and operates a node in a globally distributed search engine infrastructure, which means each user is in full control of how they navigate and experience the World Wide Web.

For instance, Google used to provide the URL google.com/linux as a shortcut to filter searches for Linux-related topics. It was a small feature that many people found useful, but [topical shortcuts were dropped][4] in 2011.

YaCy makes it possible to customize your search experience.

### Customize YaCy

Once you've installed YaCy, navigate to your search page at **localhost:8090**. To customize your search engine, click the **Administration** button in the top-right corner (it may be concealed in a menu icon on small screens).

The admin panel allows you to configure how YaCy uses your system resources and how it interacts with other YaCy clients.

![YaCy profile selector][5]

For instance, to configure an alternative port and set RAM and disk usage, use the **First steps** menu in the sidebar. To monitor YaCy activity, use the **Monitoring** panel. Most features are discoverable by clicking through the panels, but here are some of my favorites.

### Search appliance

Several companies have offered [intranet search appliances][6], but with YaCy, you can implement one for free. Whether you want to search through your own data or to implement a search system for local file shares at your business, you can run YaCy as an internal indexer for files accessible over HTTP, FTP, and SMB (Samba). People on your local network can use your personalized instance of YaCy to find shared files, and none of the data is shared with users outside your network.
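If you'd rather query your appliance from scripts than from the web page, YaCy's search results are also reachable over plain HTTP. The host, port, and `yacysearch.json` endpoint below are assumptions based on a default install (check your instance's API pages); the small helper just assembles the query URL:

```shell
# Build a query URL for a local YaCy instance's JSON search API.
# Host, port, and the yacysearch.json endpoint are assumptions for a
# default install; adjust them to match your setup.
yacy_search_url() {
  host="${YACY_HOST:-localhost:8090}"
  # spaces become '+' in the query string
  query=$(printf '%s' "$1" | tr ' ' '+')
  printf 'http://%s/yacysearch.json?query=%s\n' "$host" "$query"
}

yacy_search_url "open source search"
# fetch it with, e.g.: curl -s "$(yacy_search_url 'open source search')"
```

Set `YACY_HOST` if your instance runs on a non-default port or another machine.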

### Network configuration

YaCy favors isolation and privacy by default. You can adjust how you connect to the peer-to-peer network in the **Network Configuration** panel, which is revealed by clicking the link located at the top of the **Use Case & Account** configuration screen.

![YaCy network configuration][7]

### Crawl a site

Peer-to-peer indexing is user-driven. There's no mega-corporation initiating searches on every accessible page on the internet, so a site isn't indexed until someone deliberately crawls it with YaCy.

The YaCy client provides two options to help you crawl the web: you can perform a manual crawl, and you can make YaCy available for suggested crawls.

![YaCy advanced crawler][8]

#### Start a manual crawling job

A manual crawl is when you enter the URL of a site you want to index and start a YaCy crawl job. To do this, click the **Advanced Crawler** link in the **Production** sidebar. Enter one or more URLs, then scroll to the bottom of the page and enable the **Do remote indexing** option. This enables your client to broadcast the URLs it is indexing, so clients that have opted to accept requests can help you perform the crawl.

To start the crawl, click the **Start New Crawl Job** button at the bottom of the page. I use this method to index sites I use frequently or find useful.

Once the crawl job starts, YaCy indexes the URLs you enter and stores the index on your local machine. As long as you are running in senior mode (meaning your firewall permits incoming and outgoing traffic on port 8090), your index is available to YaCy users all over the globe.

#### Join in on a crawl

While some very dedicated YaCy senior users may crawl the internet compulsively, there are a _lot_ of sites out there in the world. It might seem impossible to match the resources of popular spiders and bots, but because YaCy has so many users, they can band together as a community to index more of the internet than any one user could do alone. If you activate YaCy to broadcast requests for site crawls, participating clients can work together to crawl sites you might not otherwise think to crawl manually.

To configure your client to accept jobs from others, click the **Advanced Crawler** link in the left sidebar menu. In the **Advanced Crawler** panel, click the **Remote Crawling** link under the **Network Harvesting** heading at the top of the page. Enable remote crawls by placing a tick in the checkbox next to the **Load** setting.

![YaCy remote crawling][9]

### YaCy monitoring and more

YaCy is a surprisingly robust search engine, providing you with the opportunity to theme and refine your experience in nearly any way you could want. You can monitor the activity of your YaCy client in the **Monitoring** panel, so you can get an idea of how many people are benefiting from the work of the YaCy community and also see what kind of activity it's generating for your computer and network.

![YaCy monitoring screen][10]

### Search engines make a difference

The more time you spend with the Administration screen, the more fun it becomes to ponder how the search engine you use can change your perspective. Your experience of the internet is shaped by the results you get back for even the simplest of queries. You might notice, in fact, how different one person's "internet" is from another person's when you talk to computer users from a different industry. For some people, the web is littered with ads and promoted searches and suffers from the tunnel vision of learned responses to queries. For instance, if someone consistently searches for answers about X, most commercial search engines will give weight to query responses that concern X. That's a useful feature on the one hand, but it occludes answers that require Y, even though that might be the better solution for a specific task.

As in real life, stepping outside a manufactured view of the world can be healthy and enlightening. Try YaCy, and see what you discover.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/yacy-search-engine-hacks

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things)
[2]: https://opensource.com/article/20/2/open-source-search-engine
[3]: https://yacy.net/
[4]: https://www.linuxquestions.org/questions/linux-news-59/is-there-no-more-linux-google-884306/
[5]: https://opensource.com/sites/default/files/uploads/yacy-profiles.jpg (YaCy profile selector)
[6]: https://en.wikipedia.org/wiki/Vivisimo
[7]: https://opensource.com/sites/default/files/uploads/yacy-network-config.jpg (YaCy network configuration)
[8]: https://opensource.com/sites/default/files/uploads/yacy-advanced-crawler.jpg (YaCy advanced crawler)
[9]: https://opensource.com/sites/default/files/uploads/yacy-remote-crawl-accept.jpg (YaCy remote crawling)
[10]: https://opensource.com/sites/default/files/uploads/yacy-monitor.jpg (YaCy monitoring screen)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automate your live demos with this shell script)
[#]: via: (https://opensource.com/article/20/2/live-demo-script)
[#]: author: (Lisa Seelye https://opensource.com/users/lisa)

Automate your live demos with this shell script
======
Try this script the next time you give a presentation to prevent making typos in front of a live audience.
![Person using a laptop][1]

I gave a talk about [multi-architecture container images][2] at [LISA19][3] in October that included a lengthy live demo. Rather than writing out 30+ commands and risking typos, I decided to automate the demo with a shell script.

The script mimics what appears as input/output and runs the real commands in the background, pausing at various points so I can narrate what is going on. I'm very pleased with how the script turned out and with its effect on stage. The script and supporting materials for my presentation are available on [GitHub][4] under an Apache 2.0 license.
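The core of the technique is small: a fake prompt, a function that prints a command as if it were typed, and a wrapper that prints and then runs it. This stripped-down sketch of the pattern (the function names match the full script below, but the color codes are omitted) runs anywhere:

```shell
# Minimal sketch of the demo-automation pattern: print a fake prompt,
# echo a command as if it were typed, then actually run it.
ps1() {
  # fake shell prompt; the full script below adds color codes
  printf '%s@%s $ ' "${USER:-demo}" "$(hostname -s 2>/dev/null || echo demo-host)"
}

echocmd() {
  # show the command without running it
  echo "$(ps1)$*"
}

docmd() {
  # show the command, then run it for real
  echocmd "$@"
  "$@"
}

docmd echo "hello, audience"
```

The audience sees what looks like a typed command followed by its output, while the speaker only presses Enter.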

### The script

```
#!/bin/bash

set -e

IMG=thedoh/lisa19
REGISTRY=docker.io
VERSION=19.10.1

# Plan B with GCR:
#IMG=dulcet-iterator-213018
#REGISTRY=us.gcr.io
#VERSION=19.10.1

pause() {
  local step="${1}"
  ps1
  echo -n "# Next step: ${step}"
  read
}

ps1() {
  echo -ne "\033[01;32m${USER}@$(hostname -s) \033[01;34m$(basename $(pwd)) \$ \033[00m"
}

echocmd() {
  echo "$(ps1)$@"
}

docmd() {
  echocmd $@
  $@
}

step0() {
  local registry="${1}" img="${2}" version="${3}"
  # Mindful of tokens in ~/.docker/config.json
  docmd grep experimental ~/.docker/config.json

  docmd cd ~/go/src/github.com/lisa/lisa19-containers

  pause "This is what we'll be building"
  docmd export REGISTRY=${registry}
  docmd export IMG=${img}
  docmd export VERSION=${version}
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean
}

step1() {
  local registry="${1}" img="${2}" version="${3}"

  docmd docker build --no-cache --platform=linux/amd64 --build-arg=GOARCH=amd64 -t ${REGISTRY}/${IMG}:amd64-${VERSION} .
  pause "ARM64 image next"
  docmd docker build --no-cache --platform=linux/arm64 --build-arg=GOARCH=arm64 -t ${REGISTRY}/${IMG}:arm64-${VERSION} .
}

step2() {
  local registry="${1}" img="${2}" version="${3}" origpwd=$(pwd) savedir=$(mktemp -d) jsontemp=$(mktemp -t XXXXX)
  chmod 700 $jsontemp $savedir
  # Set our way back home and get ready to fix our arm64 image to amd64.
  echocmd 'origpwd=$(pwd)'
  echocmd 'savedir=$(mktemp -d)'
  echocmd "mkdir -p \$savedir/change"
  mkdir -p $savedir/change &>/dev/null
  echocmd "docker save ${REGISTRY}/${IMG}:arm64-${VERSION} 2>/dev/null 1> \$savedir/image.tar"
  docker save ${REGISTRY}/${IMG}:arm64-${VERSION} 2>/dev/null 1> $savedir/image.tar
  pause "untar the image to access its metadata"

  echocmd "cd \$savedir/change"
  cd $savedir/change
  echocmd tar xf \$savedir/image.tar
  tar xf $savedir/image.tar
  docmd ls -l

  pause "find the JSON config file"
  echocmd 'jsonfile=$(jq -r ".[0].Config" manifest.json)'
  jsonfile=$(jq -r ".[0].Config" manifest.json)

  pause "notice the original metadata says amd64"
  echocmd jq '{architecture: .architecture, ID: .config.Image}' \$jsonfile
  jq '{architecture: .architecture, ID: .config.Image}' $jsonfile

  pause "Change from amd64 to arm64 using a temp file"
  echocmd "jq '.architecture = \"arm64\"' \$jsonfile > \$jsontemp"
  jq '.architecture = "arm64"' $jsonfile > $jsontemp
  echocmd /bin/mv -f -- \$jsontemp \$jsonfile
  /bin/mv -f -- $jsontemp $jsonfile

  pause "Check to make sure the config JSON file says arm64 now"
  echocmd jq '{architecture: .architecture, ID: .config.Image}' \$jsonfile
  jq '{architecture: .architecture, ID: .config.Image}' $jsonfile

  pause "delete the image with the incorrect metadata"
  docmd docker rmi ${REGISTRY}/${IMG}:arm64-${VERSION}

  pause "Re-compress the ARM64 image and load it back into Docker, then clean up the temp space"
  echocmd 'tar cf - * | docker load'
  tar cf - * | docker load

  docmd cd $origpwd
  echocmd "/bin/rm -rf -- \$savedir"
  /bin/rm -rf -- $savedir &>/dev/null
}

step3() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker push ${registry}/${img}:amd64-${version}
  pause "push ARM64 image to ${registry}"
  docmd docker push ${registry}/${img}:arm64-${version}
}

step4() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker manifest create ${registry}/${img}:${version} ${registry}/${img}:arm64-${version} ${registry}/${img}:amd64-${version}

  pause "add a reference to the amd64 image to the manifest list"
  docmd docker manifest annotate ${registry}/${img}:${version} ${registry}/${img}:amd64-${version} --os linux --arch amd64
  pause "now add arm64"
  docmd docker manifest annotate ${registry}/${img}:${version} ${registry}/${img}:arm64-${version} --os linux --arch arm64
}

step5() {
  local registry="${1}" img="${2}" version="${3}"
  docmd docker manifest push ${registry}/${img}:${version}
}

step6() {
  local registry="${1}" img="${2}" version="${3}"
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean

  pause "ask docker.io if ${img}:${version} has a linux/amd64 manifest, and run it"
  docmd docker pull --platform linux/amd64 ${registry}/${img}:${version}
  docmd docker run --rm -i ${registry}/${img}:${version}

  pause "clean slate again"
  docmd make REGISTRY=${registry} IMG=${img} VERSION=${version} clean

  pause "now repeat for linux/arm64 and see what it gives us"
  docmd docker pull --platform linux/arm64 ${registry}/${img}:${version}
  set +e
  docmd docker run --rm -i ${registry}/${img}:${version}
  set -e
  if [[ $(uname -s) == "Darwin" ]]; then
    pause "note about Docker on Mac and binfmt_misc: binfmt_misc lets a mac run arm64 binaries in the Docker VM"
  fi
}

pause "initial setup"
step0 ${REGISTRY} ${IMG} ${VERSION}
pause "1 build constituent images"
step1 ${REGISTRY} ${IMG} ${VERSION}

pause "2 fix ARM64 metadata"
step2 ${REGISTRY} ${IMG} ${VERSION}

pause "3 push constituent images up to docker.io"
step3 ${REGISTRY} ${IMG} ${VERSION}

pause "4 build the manifest list for the image"
step4 ${REGISTRY} ${IMG} ${VERSION}

pause "5 Push the manifest list to docker.io"
step5 ${REGISTRY} ${IMG} ${VERSION}

pause "6 clean slate, and validate the list-based image"
step6 ${REGISTRY} ${IMG} ${VERSION}

docmd echo 'Manual steps all done!'
make REGISTRY=${REGISTRY} IMG=${IMG} VERSION=${VERSION} clean &>/dev/null
```

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/live-demo-script

作者:[Lisa Seelye][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/lisa
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://www.usenix.org/conference/lisa19/presentation/seelye
[3]: https://www.usenix.org/conference/lisa19
[4]: https://github.com/lisa/lisa19-containers
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Basic kubectl and Helm commands for beginners)
[#]: via: (https://opensource.com/article/20/2/kubectl-helm-commands)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)

Basic kubectl and Helm commands for beginners
======
Take a trip to the grocery store to shop for the commands you'll need to get started with these Kubernetes tools.
![A person working.][1]

Recently, my husband was telling me about an upcoming job interview where he would have to run through some basic commands on a computer. He was anxious about the interview, but the best way for him to learn and remember things has always been to equate the thing he doesn't know to something very familiar to him. Because our conversation happened right after I was roaming the grocery store trying to decide what to cook that evening, it inspired me to write about kubectl and Helm commands by equating them to an ordinary trip to the grocer.

[Helm][2] is a tool to manage applications within Kubernetes. You can easily deploy charts with your application information, allowing them to be up and preconfigured in minutes within your Kubernetes environment. When you're learning something new, it's always helpful to look at chart examples to see how they are used, so if you have time, take a look at these stable [charts][3].

[Kubectl][4] is a command-line tool that interfaces with Kubernetes environments, allowing you to configure and manage your cluster. It does require some configuration to work within environments, so take a look through the [documentation][5] to see what you need to do.

I'll use namespaces in the examples, which you can learn about in my article [_Kubernetes namespaces for beginners_][6].

Now that we have that settled, let's start shopping for basic kubectl and Helm commands!
### Helm list

What is the first thing you do before you go to the store? Well, if you're organized, you make a **list**. Likewise, this is the first basic Helm command I will explain.

In a Helm-deployed application, **list** provides details about an application's current release. In this example, I have one deployed application—the Jenkins CI/CD application. Running the basic **list** command always brings up the default namespace. Since I don't have anything deployed in the default namespace, nothing shows up:

```
$ helm list
NAME     NAMESPACE  REVISION  UPDATED  STATUS  CHART  APP VERSION
```

However, if I run the command with an extra flag, my application and information appear:

```
$ helm list --all-namespaces
NAME     NAMESPACE  REVISION  UPDATED                  STATUS    CHART          APP VERSION
jenkins  jenkins    1         2020-01-18 16:18:07 EST  deployed  jenkins-1.9.4  lts
```

Finally, I can direct the **list** command to check only the namespace I want information from:

```
$ helm list --namespace jenkins
NAME     NAMESPACE  REVISION  UPDATED                  STATUS    CHART          APP VERSION
jenkins  jenkins    1         2020-01-18 16:18:07 EST  deployed  jenkins-1.9.4  lts
```

Now that I have a list and know what is on it, I can go and get my items with **get** commands! I'll start with the Kubernetes cluster; what can I get from it?
### Kubectl get

The **kubectl get** command gives information about many things in Kubernetes, including pods, nodes, and namespaces. Again, without a namespace flag, you'll always land in the default. First, I'll get the namespaces in the cluster to see what's running:

```
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   53m
jenkins           Active   44m
kube-node-lease   Active   53m
kube-public       Active   53m
kube-system       Active   53m
```

Now that I have the namespaces running in my environment, I'll get the nodes and see how many are running:

```
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   55m   v1.16.2
```

I have one node up and running, mainly because my Minikube is running on one small server. To get the pods running on my one node:

```
$ kubectl get pods
No resources found in default namespace.
```

Oops, it's empty. I'll get what's in my Jenkins namespace with:

```
$ kubectl get pods --namespace jenkins
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fc688c874-mh7gv   1/1     Running   0          40m
```

Good news! There's one pod, it hasn't restarted, and it has been running for 40 minutes. Well, since I know the pod is up, I want to see what I can get from Helm.
### Helm get

**Helm get** is a little more complicated because this **get** command requires more than an application name, and you can request multiple things from applications. I'll begin by getting the values used to make the application, and then I'll show a snip of the **get all** action, which provides all the data related to the application.

```
$ helm get values jenkins -n jenkins
USER-SUPPLIED VALUES:
null
```

Since I did a very minimal stable-only install, the configuration didn't change. If I run the **all** command, I get everything out of the chart:

```
$ helm get all jenkins -n jenkins
```

![output from helm get all command][7]

This produces a ton of data, so I always recommend keeping a copy of a Helm chart so you can look over the templates in the chart. I also create my own values to see what I have in place.

Now that I have all my goodies in my shopping cart, I'll check the labels that **describe** what's in them. These examples pertain only to kubectl, and they describe what I've deployed through Helm.
### Kubectl describe

As I did with the **get** command, which can describe just about anything in Kubernetes, I'll limit my examples to namespaces, pods, and nodes. Since I know I'm working with one of each, this will be easy.

```
$ kubectl describe ns jenkins
Name:         jenkins
Labels:       <none>
Annotations:  <none>
Status:       Active
No resource quota.
No resource limits.
```

I can see my namespace's name and that it is active and has no resource quotas or limits.

The **describe pods** command produces a large amount of information, so I'll provide a small snip of the output. If you run the command without the pod name, it will return information for all of the pods in the namespace, which can be overwhelming. So, be sure you always include the pod name with this command. For example:

```
$ kubectl describe pods jenkins-7fc688c874-mh7gv --namespace jenkins
```

![output of kubectl-describe-pods][8]

This provides (among many other things) the status of the container, how the container is managed, the label, and the image used in the pod. The data not in this abbreviated output includes resource requests and limits along with any conditions, init containers, and storage volume information applied in a Helm values file. This data is useful if your application is crashing due to inadequate resources, a configured init container that runs a prescript for configuration, or generated hidden passwords that shouldn't be in a plain text YAML file.

Finally, I'll use **describe node**, which (of course) describes the node. Since this example has just one, named Minikube, that is what I'll use; if you have multiple nodes in your environment, you must include the node name of interest.

As with pods, the node command produces an abundance of data, so I'll include just a snip of the output.

```
$ kubectl describe node minikube
```

![output of kubectl describe node][9]

Note that **describe node** is one of the more important basic commands. As this image shows, the command returns statistics that indicate when the node is running out of resources, and this data is excellent for alerting you when you need to scale up (if you do not have autoscaling in your environment). Other things not in this snippet of output include the percentages of requests made for all resources and limits, as well as the age and allocation of resources (e.g., for my application).
### Checking out
|
||||
|
||||
With these commands, I've finished my shopping and gotten everything I was looking for. Hopefully, these basic commands can help you, too, in your day-to-day with Kubernetes.
|
||||
|
||||
I urge you to work with the command line often and learn the shorthand flags available in the Help sections, which you can access by running these commands:
|
||||
|
||||
|
||||
```
|
||||
`$helm --help`
|
||||
```

and

```
$ kubectl -h
```

### Peanut butter and jelly

Some things just go together like peanut butter and jelly. Helm and kubectl are a little like that.

I use these tools in my environment every day. Because they overlap in so many places, after using one, I usually need to follow up with the other. For example, I can do a Helm deployment and then watch it fail using kubectl. Try them together, and see what they can do for you.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/kubectl-helm-commands

作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
[2]: https://helm.sh/
[3]: https://github.com/helm/charts/tree/master/stable
[4]: https://kubernetes.io/docs/reference/kubectl/kubectl/
[5]: https://kubernetes.io/docs/reference/kubectl/overview/
[6]: https://opensource.com/article/19/12/kubernetes-namespaces
[7]: https://opensource.com/sites/default/files/uploads/helm-get-all.png (output from helm get all command)
[8]: https://opensource.com/sites/default/files/uploads/kubectl-describe-pods.png (output of kubectl-describe-pods)
[9]: https://opensource.com/sites/default/files/uploads/kubectl-describe-node.png (output of kubectl describe node)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dino is a Modern Looking Open Source XMPP Client)
[#]: via: (https://itsfoss.com/dino-xmpp-client/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Dino is a Modern Looking Open Source XMPP Client
======

_**Brief: Dino is a relatively new open-source XMPP client that tries to offer a good user experience while encouraging privacy-focused users to utilize XMPP for messaging.**_

### Dino: An Open Source XMPP Client

![][1]

[XMPP][2] (Extensible Messaging and Presence Protocol) is a decentralized networking model for instant messaging and collaboration. Decentralized means there is no central server that has access to your data; communication happens directly between the endpoints.

Some of us might call it "old school" tech, probably because XMPP clients usually have a poor user experience, or simply because it takes time to get used to (or to set up).

That's where [Dino][3] comes to the rescue: a modern XMPP client that provides a clean and snappy user experience without compromising your privacy.

### The User Experience

![][4]

Dino does try to improve the user experience of an XMPP client, but it is worth noting that its look and feel will depend on your Linux distribution to some extent. Your icon theme or GNOME theme might make it look better or worse.

Technically, the user interface is quite simple and easy to use. So, I suggest you take a look at some of the [best icon themes][5] and [GNOME themes][6] for Ubuntu to tweak the look of Dino.

### Features of Dino

![Dino Screenshot][7]

You can expect to use Dino as an alternative to Slack, [Signal][8], or [Wire][9] for business or personal use.

It offers all of the essential features you would need in a messaging application. Here's a list of things you can expect from it:

* Decentralized communication
* Support for public XMPP servers if you cannot set up your own
* A UI similar to other popular messengers – so it's easy to use
* Image and file sharing
* Multiple account support
* Advanced message search
* [OpenPGP][10] and [OMEMO][11] encryption support
* Lightweight native desktop application

### Installing Dino on Linux

You may or may not find it listed in your software center. Dino does provide ready-to-use binaries for Debian (.deb) and Fedora (.rpm) based distributions.

**For Ubuntu:**

Dino is available in the universe repository on Ubuntu, and you can install it using this command:

```
sudo apt install dino-im
```

Similarly, you can find packages for other Linux distributions on their [GitHub distribution packages page][12].

If you want the latest and greatest, you can also find both **.deb** and **.rpm** files (nightly builds) for Dino on [openSUSE's software webpage][13].

In either case, head to their [GitHub page][14] or click on the link below to visit the official site.

[Download Dino][3]

**Wrapping Up**

It works quite well without any issues (at the time of writing, based on my quick testing). I'll try exploring more of it and will hopefully cover more XMPP-centric articles to encourage users to use XMPP clients and servers for communication.

What do you think about Dino? Would you recommend another open-source XMPP client that's potentially better than Dino? Let me know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/dino-xmpp-client/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-main.png?ssl=1
[2]: https://xmpp.org/about/
[3]: https://dino.im/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-xmpp-client.jpg?ssl=1
[5]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[6]: https://itsfoss.com/best-gtk-themes/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/dino-screenshot.png?ssl=1
[8]: https://itsfoss.com/signal-messaging-app/
[9]: https://itsfoss.com/wire-messaging-linux/
[10]: https://www.openpgp.org/
[11]: https://en.wikipedia.org/wiki/OMEMO
[12]: https://github.com/dino/dino/wiki/Distribution-Packages
[13]: https://software.opensuse.org/download.html?project=network:messaging:xmpp:dino&package=dino
[14]: https://github.com/dino/dino

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Navigating man pages in Linux)
[#]: via: (https://www.networkworld.com/article/3519853/navigating-man-pages-in-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Navigating man pages in Linux
======
The man pages on a Linux system can do more than provide information on particular commands. They can also help you discover commands you didn't realize were available.
[Hello I'm Nik][1] [(CC0)][2]

Man pages provide essential information on Linux commands, and many users refer to them often, but there's a lot more to man pages than many of us realize.

You can always type a command like "man who" and get a nice description of how the who command works, but exploring commands that you might not know could be even more illuminating. For example, you can use the man command to identify commands for some unusually challenging task or to discover options that help you use a command you already know in new and better ways.

Let's navigate through some options and see where we end up.

[MORE ON NETWORK WORLD: Linux: Best desktop distros for newbies][3]

### Using man to identify commands

The man command can help you find commands by topic. If you're looking for a command to count the lines in a file, for example, you can provide a keyword. In the example below, we've put the keyword in quotes and added blanks around it so that we don't get commands that deal with "accounts" or "accounting" along with those that do some counting for us.

```
$ man -k ' count '
anvil (8postfix)     - Postfix session count and request rate control
cksum (1)            - checksum and count the bytes in a file
sum (1)              - checksum and count the blocks in a file
timer_getoverrun (2) - get overrun count for a POSIX per-process timer
```
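Those padding spaces do real work: the keyword is matched against each description line, so ' count ' only hits the standalone word, while a bare 'count' would also match "accounting". The same idea, demonstrated with grep on two sample description lines (made up here to mimic man -k output):

```
# ' count ' (with surrounding spaces) matches only the standalone word;
# 'count' alone would also match the "accounting" line.
printf '%s\n' \
  'cksum (1) - checksum and count the bytes in a file' \
  'ac (1) - print statistics about users connect time (accounting)' |
  grep ' count '
```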

To show commands that relate to new user accounts, we might try a command like this:

```
$ man -k "new user"
newusers (8)   - update and create new users in batch
useradd (8)    - create a new user or update default new user information
zshroadmap (1) - informal introduction to the zsh manual The Zsh Manual, …
```

Just to be clear, the third item in the list above matched because its description mentions "new users" liking the material; it is not a command for setting up, removing, or configuring user accounts. The man command is simply matching words in the command descriptions, acting very much like the apropos command. Notice the numbers in parentheses after each command listed above. These relate to the man page sections that contain the commands.

### Identifying the manual sections

The man page sections divide the commands into categories. To list these categories, type "man man" and look for descriptions like those below. You very likely won't have Section 9 commands on your system.

```
1   Executable programs or shell commands
2   System calls (functions provided by the kernel)
3   Library calls (functions within program libraries)
4   Special files (usually found in /dev)
5   File formats and conventions eg /etc/passwd
6   Games
7   Miscellaneous (including macro packages and conventions), e.g.
    man(7), groff(7)
8   System administration commands (usually only for root)
9   Kernel routines [Non standard]
```

Man pages cover more than what we typically think of as "commands". As you can see from the above descriptions, they cover system calls, library calls, special files, and more.

The listing below shows where man pages are actually stored on Linux systems. The dates on these directories will vary because, with updates, some of these sections will get new content while others will not.

```
$ ls -ld /usr/share/man/man?
drwxr-xr-x 2 root root  98304 Feb  5 16:27 /usr/share/man/man1
drwxr-xr-x 2 root root  65536 Oct 23 17:39 /usr/share/man/man2
drwxr-xr-x 2 root root 270336 Nov 15 06:28 /usr/share/man/man3
drwxr-xr-x 2 root root   4096 Feb  4 10:16 /usr/share/man/man4
drwxr-xr-x 2 root root  28672 Feb  5 16:25 /usr/share/man/man5
drwxr-xr-x 2 root root   4096 Oct 23 17:40 /usr/share/man/man6
drwxr-xr-x 2 root root  20480 Feb  5 16:25 /usr/share/man/man7
drwxr-xr-x 2 root root  57344 Feb  5 16:25 /usr/share/man/man8
```

Note that the man page files are generally **gzipped** to save space. The man command unzips them on the fly whenever you view a page.

```
$ ls -l /usr/share/man/man1 | head -10
total 12632
lrwxrwxrwx 1 root root    9 Sep  5 06:38 [.1.gz -> test.1.gz
-rw-r--r-- 1 root root  563 Nov  7 05:07 2to3-2.7.1.gz
-rw-r--r-- 1 root root  592 Apr 23  2016 411toppm.1.gz
-rw-r--r-- 1 root root 2866 Aug 14 10:36 a2query.1.gz
-rw-r--r-- 1 root root 2361 Sep  9 15:13 aa-enabled.1.gz
-rw-r--r-- 1 root root 2675 Sep  9 15:13 aa-exec.1.gz
-rw-r--r-- 1 root root 1142 Apr  3  2018 aaflip.1.gz
-rw-r--r-- 1 root root 3847 Aug 14 10:36 ab.1.gz
-rw-r--r-- 1 root root 2378 Aug 23  2018 ac.1.gz
```

### Listing man pages by section

Even just looking at the first 10 man pages in Section 1 (as shown above), you are likely to see some commands that are new to you – maybe **a2query** or **aaflip**.

An even better strategy for exploring commands is to list the commands in a section without looking at the files themselves, using a man command that lists each command along with a brief description.

In the command below, the **-s 1** instructs man to display information on commands in section 1. The **-k .** makes the command work for all commands rather than a particular keyword; without it, the man command would come back and ask "What manual page do you want?" So, use a keyword to select a group of related commands or a dot to show all commands in a section.

```
$ man -s 1 -k .
2to3-2.7 (1)         - Python2 to Python3 converter
411toppm (1)         - convert Sony Mavica .411 image to ppm
as (1)               - the portable GNU assembler.
baobab (1)           - A graphical tool to analyze disk usage
busybox (1)          - The Swiss Army Knife of Embedded Linux
cmatrix (1)          - simulates the display from "The Matrix"
expect_dislocate (1) - disconnect and reconnect processes
red (1)              - line-oriented text editor
enchant (1)          - a spellchecker
…
```
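The dot works because the keyword is treated as a regular expression, and a lone `.` matches any character, so every entry qualifies. The same idea can be seen with grep on any input:

```
# A lone dot as a regex matches any character, so every non-empty line counts.
printf 'alpha\nbeta\ngamma\n' | grep -c .
```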

### How many man pages are there?

If you're curious about how many man pages there are in each section, you can count them by section with a command like this:

```
$ for num in {1..8}
> do
>   man -s $num -k . | wc -l
> done
2382
493
2935
53
441
11
245
919
```

The exact number may vary, but most Linux systems will have a similar number of man pages. If we use a command that adds these numbers together, we can see that the system this command is running on has nearly 7,500 man pages. That's a lot of commands, system calls, etc. (Note the **tot=0** initialization; without it, the first **expr** call fails because **$tot** is empty.)

```
$ tot=0
$ for num in {1..8}
> do
>   num=`man -s $num -k . | wc -l`
>   tot=`expr $num + $tot`
>   echo $tot
> done
2382
2875
5810
5863
6304
6315
6560
7479 <=== total
```
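If you'd rather not keep a running total with expr, a single awk pass can do the addition. A sketch, feeding in the per-section counts from the sample above; on a real system you would pipe the loop's `wc -l` output in instead:

```
# Sum per-section counts in one pass. The numbers fed in here are the sample
# counts shown above, so the result matches the 7479 total.
printf '%s\n' 2382 493 2935 53 441 11 245 919 |
  awk '{ total += $1 } END { print total }'
```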

There's a lot you can learn by reading man pages, but exploring them in other ways can help you become aware of commands you may not have known were available on your system.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3519853/navigating-man-pages-in-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/YiRQIglwYig
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using external libraries in Java)
[#]: via: (https://opensource.com/article/20/2/external-libraries-java)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)

Using external libraries in Java
======
External libraries fill gaps in the Java core libraries.
![books in a library, stacks][1]

Java comes with a core set of libraries, including those that define commonly used data types and related behavior, like **String** or **Date**; utilities to interact with the host operating system, such as **System** or **File**; and useful subsystems to manage security, deal with network communications, and create or parse XML. Given the richness of this core set of libraries, it's often easy to find the necessary bits and pieces to reduce the amount of code a programmer must write to solve a problem.

Even so, there are a lot of interesting Java libraries created by people who find gaps in the core libraries. For example, [Apache Commons][2] "is an Apache project focused on all aspects of reusable Java components" and provides a collection of some 43 open source libraries (as of this writing) covering a range of capabilities either outside the Java core (such as [geometry][3] or [statistics][4]) or that enhance or replace capabilities in the Java core (such as [math][5] or [numbers][6]).

Another common type of Java library is an interface to a system component—for example, to a database system. This article looks at using such an interface to connect to a [PostgreSQL][7] database and get some interesting information. But first, I'll review the important bits and pieces of a library.

### What is a library?

A library, of course, must contain some useful code. But to be useful, that code needs to be organized in such a way that the Java programmer can access the components to solve the problem at hand.

I'll boldly claim that the most important part of a library is its application programming interface (API) documentation. This kind of documentation is familiar to many and is most often produced by [Javadoc][8], which reads structured comments in the code and produces HTML output that displays the API's packages in the panel in the top-left corner of the page; its classes in the bottom-left corner; and the detailed documentation at the library, package, or class level (depending on what is selected in the main panel) on the right. For example, the [top level of API documentation for Apache Commons Math][9] looks like:

![API documentation for Apache Commons Math][10]

Clicking on a package in the main panel shows the Java classes and interfaces defined in that package. For example, **[org.apache.commons.math4.analysis.solvers][11]** shows classes like **BisectionSolver** for finding zeros of univariate real functions using the bisection algorithm. And clicking on the [BisectionSolver][12] link lists all the methods of the class **BisectionSolver**.

This type of documentation is useful as reference information; it's not intended as a tutorial for learning how to use the library. For example, if you know what a univariate real function is and look at the package **org.apache.commons.math4.analysis.function**, you can imagine using that package to compose a function definition and then using the **org.apache.commons.math4.analysis.solvers** package to look for zeros of the just-created function. But really, you probably need more learning-oriented documentation to bridge to the reference documentation. Maybe even an example!

This documentation structure also helps clarify the meaning of _package_—a collection of related Java class and interface definitions—and shows what packages are bundled in a particular library.

The code for such a library is most commonly found in a [**.jar** file][13], which is basically a .zip file created by the Java **jar** command that contains some other useful information. **.jar** files are typically created as the endpoint of a build process that compiles all the **.java** files in the various packages defined.
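You can verify the ".jar is a .zip" claim directly. A sketch, with Python's stdlib zip tool standing in for the **jar** command and a made-up file name for illustration; any real zip archive starts with the "PK" magic bytes:

```
# Build a tiny zip archive (standing in for `jar cf`) and confirm it starts
# with the "PK" signature that identifies the zip format.
mkdir -p /tmp/jardemo
echo 'stub class bytes' > /tmp/jardemo/Hello.class
python3 -m zipfile -c /tmp/jardemo/demo.jar /tmp/jardemo/Hello.class
head -c 2 /tmp/jardemo/demo.jar
```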

There are two main steps to accessing the functionality provided by an external library:

  1. Make sure the library is available to the Java compilation step—[**javac**][14]—and the execution step—**java**—via the classpath (either the **-cp** argument on the command line or the **CLASSPATH** environment variable).
  2. Use the appropriate **import** statements to access the package and class in the program source code.

The rest is just like coding with Java core classes, such as **String**—write the code using the class and interface definitions provided by the library. Easy, eh? Well, maybe not quite that easy; first, you need to understand the intended use pattern for the library components, and then you can write code.

### An example: Connect to a PostgreSQL database

The typical use pattern for accessing data in a database system is:

  1. Gain access to the code specific to the database software being used.
  2. Connect to the database server.
  3. Build a query string.
  4. Execute the query string.
  5. Do something with the results returned.
  6. Disconnect from the database server.

The programmer-facing part of all of this is provided by a database-independent interface package, **[java.sql][15]**, which defines the core client-side Java Database Connectivity (JDBC) API. The **java.sql** package is part of the core Java libraries, so there is no need to supply a **.jar** file at the compile step. However, each database provider creates its own implementation of the **java.sql** interfaces—for example, the **Connection** interface—and those implementations must be provided at the run step.

Let's see how this works, using PostgreSQL.

#### Gain access to the database-specific code

The following code uses the [Java class loader][16] (the **Class.forName()** call) to bring the PostgreSQL driver code into the executing virtual machine:

```
import java.sql.*;

public class Test1 {

    public static void main(String args[]) {

        // Load the driver (jar file must be on class path) [1]

        try {
            Class.forName("org.postgresql.Driver");
            System.out.println("driver loaded");
        } catch (Exception e1) {
            System.err.println("couldn't find driver");
            System.err.println(e1);
            System.exit(1);
        }

        // If we get here all is OK

        System.out.println("done.");
    }
}
```

Because the class loader can fail, and therefore can throw an exception when failing, surround the call to **Class.forName()** with a try-catch block.

If you compile the above code with **javac** and run it with **java**:

```
me@mymachine:~/Test$ javac Test1.java
me@mymachine:~/Test$ java Test1
couldn't find driver
java.lang.ClassNotFoundException: org.postgresql.Driver
me@mymachine:~/Test$
```

The class loader needs the **.jar** file containing the PostgreSQL JDBC driver implementation to be on the classpath:

```
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test1
driver loaded
done.
me@mymachine:~/Test$
```

#### Connect to the database server

The following code loads the JDBC driver and creates a connection to the PostgreSQL database:

```
import java.sql.*;

public class Test2 {

    public static void main(String args[]) {

        // Load the driver (jar file must be on class path) [1]

        try {
            Class.forName("org.postgresql.Driver");
            System.out.println("driver loaded");
        } catch (Exception e1) {
            System.err.println("couldn't find driver");
            System.err.println(e1);
            System.exit(1);
        }

        // Set up connection properties [2]

        java.util.Properties props = new java.util.Properties();
        props.setProperty("user", "me");
        props.setProperty("password", "mypassword");
        String database = "jdbc:postgresql://myhost.org:5432/test";

        // Open the connection to the database [3]

        try (Connection conn = DriverManager.getConnection(database, props)) {
            System.out.println("connection created");
        } catch (Exception e2) {
            System.err.println("sql operations failed");
            System.err.println(e2);
            System.exit(2);
        }
        System.out.println("connection closed");

        // If we get here all is OK

        System.out.println("done.");
    }
}
```

Compile and run it:

```
me@mymachine:~/Test$ javac Test2.java
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test2
driver loaded
connection created
connection closed
done.
me@mymachine:~/Test$
```

Some notes on the above:

  * The code following comment [2] uses system properties to set up connection parameters—in this case, the PostgreSQL username and password. This allows for grabbing those parameters from the Java command line and passing all the parameters in as an argument bundle. There are other **DriverManager.getConnection()** options for passing in the parameters individually.
  * JDBC requires a URL for defining the database, which is declared above as **String database** and passed into the **DriverManager.getConnection()** method along with the connection parameters.
  * The code uses try-with-resources, which auto-closes the connection upon completion of the code in the try block. There is a lengthy discussion of this approach on [Stack Overflow][23].
  * The try-with-resources provides access to the **Connection** instance, so the code can execute SQL statements there; any errors will be caught by the same **catch** statement.

#### Do something fun with the database connection

In my day job, I often need to know what users have been defined for a given database server instance, and I use this [handy piece of SQL][24] for grabbing a list of all users:

```
import java.sql.*;

public class Test3 {

    public static void main(String args[]) {

        // Load the driver (jar file must be on class path) [1]

        try {
            Class.forName("org.postgresql.Driver");
            System.out.println("driver loaded");
        } catch (Exception e1) {
            System.err.println("couldn't find driver");
            System.err.println(e1);
            System.exit(1);
        }

        // Set up connection properties [2]

        java.util.Properties props = new java.util.Properties();
        props.setProperty("user", "me");
        props.setProperty("password", "mypassword");
        String database = "jdbc:postgresql://myhost.org:5432/test";

        // Open the connection to the database [3]

        try (Connection conn = DriverManager.getConnection(database, props)) {
            System.out.println("connection created");

            // Create the SQL command string [4]

            String qs = "SELECT " +
                " u.usename AS \"User name\", " +
                " u.usesysid AS \"User ID\", " +
                " CASE " +
                " WHEN u.usesuper AND u.usecreatedb THEN " +
                " CAST('superuser, create database' AS pg_catalog.text) " +
                " WHEN u.usesuper THEN " +
                " CAST('superuser' AS pg_catalog.text) " +
                " WHEN u.usecreatedb THEN " +
                " CAST('create database' AS pg_catalog.text) " +
                " ELSE " +
                " CAST('' AS pg_catalog.text) " +
                " END AS \"Attributes\" " +
                "FROM pg_catalog.pg_user u " +
                "ORDER BY 1";

            // Use the connection to create a statement, execute it,
            // analyze the results and close the result set [5]

            Statement stat = conn.createStatement();
            ResultSet rs = stat.executeQuery(qs);
            System.out.println("User name;User ID;Attributes");
            while (rs.next()) {
                System.out.println(rs.getString("User name") + ";" +
                    rs.getLong("User ID") + ";" +
                    rs.getString("Attributes"));
            }
            rs.close();
            stat.close();

        } catch (Exception e2) {
            System.err.println("connecting failed");
            System.err.println(e2);
            System.exit(1);
        }
        System.out.println("connection closed");

        // If we get here all is OK

        System.out.println("done.");
    }
}
```

In the above, once it has the **Connection** instance, the code defines a query string (comment [4] above), creates a **Statement** instance and uses it to execute the query string, puts the results in a **ResultSet** instance, which it iterates through to analyze the results returned, and ends by closing both the **ResultSet** and **Statement** instances (comment [5] above).

Compiling and executing the program produces the following output:

```
me@mymachine:~/Test$ javac Test3.java
me@mymachine:~/Test$ java -cp ~/src/postgresql-42.2.5.jar:. Test3
driver loaded
connection created
User name;User ID;Attributes
fwa;16395;superuser
vax;197772;
mbe;290995;
aca;169248;
connection closed
done.
me@mymachine:~/Test$
```
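Because the rows come out semicolon-separated, they pipe cleanly into other command-line tools. For example, a small awk pass over the sample rows above can pull out just the superusers:

```
# Filter the semicolon-separated rows for accounts whose attributes mention
# "superuser"; the sample rows are the ones from the output above.
printf '%s\n' 'fwa;16395;superuser' 'vax;197772;' 'mbe;290995;' |
  awk -F';' '$3 ~ /superuser/ { print $1 }'
```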

This is a (very simple) example of using the PostgreSQL JDBC library in a Java application. It's worth emphasizing that, because of the way the **java.sql** library is designed, the code didn't need a Java import statement like **import org.postgresql.jdbc.*;**, and therefore there was no need to specify the driver's classpath at compile time. Instead, the Java class loader brings in the PostgreSQL code at run time.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/external-libraries-java

作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_library_reading_list.jpg?itok=O3GvU1gH (books in a library, stacks)
[2]: https://commons.apache.org/
[3]: https://commons.apache.org/proper/commons-geometry/
[4]: https://commons.apache.org/proper/commons-statistics/
[5]: https://commons.apache.org/proper/commons-math/
[6]: https://commons.apache.org/proper/commons-numbers/
[7]: https://opensource.com/article/19/11/getting-started-postgresql
[8]: https://en.wikipedia.org/wiki/Javadoc
[9]: https://commons.apache.org/proper/commons-math/apidocs/index.html
[10]: https://opensource.com/sites/default/files/uploads/api-documentation_apachecommonsmath.png (API documentation for Apache Commons Math)
[11]: https://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math4/analysis/solvers/package-summary.html
[12]: https://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math4/analysis/solvers/BisectionSolver.html
[13]: https://en.wikipedia.org/wiki/JAR_(file_format)
[14]: https://en.wikipedia.org/wiki/Javac
[15]: https://docs.oracle.com/javase/8/docs/api/java/sql/package-summary.html
[16]: https://en.wikipedia.org/wiki/Java_Classloader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+properties
[21]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+connection
[22]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+drivermanager
[23]: https://stackoverflow.com/questions/8066501/how-should-i-use-try-with-resources-with-jdbc
[24]: https://www.postgresql.org/message-id/1121195544.8208.242.camel@state.g2switchworks.com
[25]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+statement
[26]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+resultset
[27]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+attributes
@@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Change the Default Terminal in Ubuntu)
[#]: via: (https://itsfoss.com/change-default-terminal-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

How to Change the Default Terminal in Ubuntu
======

The terminal is a crucial part of any Linux system; it gives you access to the system through a shell. There are several terminal applications (technically called terminal emulators) for Linux.

Most [desktop environments][1] have their own implementation of the terminal, which may look different and use different keyboard shortcuts.

For example, [Guake Terminal][2] is extremely useful for power users and provides several features you might not get in your distribution's terminal by default.

You can install other terminals on your system and use one of them as the default that opens with the usual [keyboard shortcut of Ctrl+Alt+T][3].

That raises the question: how do you change the default terminal in Ubuntu? It cannot be changed through the standard way of [changing default applications in Ubuntu][4], so how do you do it?

### Change the default terminal in Ubuntu

![][5]

On Debian-based distributions, there is a handy command-line utility called [update-alternatives][6] that lets you manage the default applications.

You can use it to change the default command-line text editor, terminal and more. To do that, run the following command:

```
sudo update-alternatives --config x-terminal-emulator
```
It will show all the terminal emulators present on your system that can be used as the default. The current default terminal is marked with an asterisk.

```
[email protected]:~$ sudo update-alternatives --config x-terminal-emulator
There are 2 choices for the alternative x-terminal-emulator (providing /usr/bin/x-terminal-emulator).

  Selection    Path                             Priority   Status
------------------------------------------------------------
  0            /usr/bin/gnome-terminal.wrapper   40        auto mode
  1            /usr/bin/gnome-terminal.wrapper   40        manual mode
* 2            /usr/bin/st                       15        manual mode

Press <enter> to keep the current choice[*], or type selection number:
```
All you have to do is enter the selection number. In my case, I want to use the GNOME terminal instead of the one from the [Regolith desktop][7].

```
Press <enter> to keep the current choice[*], or type selection number: 1
update-alternatives: using /usr/bin/gnome-terminal.wrapper to provide /usr/bin/x-terminal-emulator (x-terminal-emulator) in manual mode
```

##### Auto mode vs manual mode

You might have noticed the auto mode and manual mode in the output of the update-alternatives command.

If you choose auto mode, your system may automatically change the default application as packages are installed or removed. The decision is driven by the priority number (seen in the output of the command in the previous section).

Suppose you have five terminal emulators installed on your system and you delete the default one. Your system will then check which of the remaining emulators are in auto mode; if there is more than one, it will choose the one with the highest priority as the default emulator.
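
The priority-based selection of auto mode can be sketched with plain shell tools. This is only an illustration, not what update-alternatives actually runs; the paths and priority numbers below are made up:

```shell
# Hypothetical "<path> <priority>" candidate list, mimicking the
# registered alternatives for x-terminal-emulator.
cat > /tmp/alternatives.txt <<'EOF'
/usr/bin/gnome-terminal.wrapper 40
/usr/bin/st 15
/usr/bin/xterm 20
EOF
# Auto mode behaviour: pick the candidate with the highest priority.
sort -k2 -rn /tmp/alternatives.txt | head -n1 | awk '{print $1}'
```

Here the highest priority (40) wins, so `/usr/bin/gnome-terminal.wrapper` would be chosen as the default.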

I hope you find this quick little tip useful. Your questions and suggestions are always welcome.

--------------------------------------------------------------------------------

via: https://itsfoss.com/change-default-terminal-ubuntu/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-desktop-environments/
[2]: http://guake-project.org/
[3]: https://itsfoss.com/ubuntu-shortcuts/
[4]: https://itsfoss.com/change-default-applications-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/switch_default_terminal_ubuntu.png?ssl=1
[6]: https://manpages.ubuntu.com/manpages/trusty/man8/update-alternatives.8.html
[7]: https://itsfoss.com/regolith-linux-desktop/
@@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (elementary OS is Building an App Center Where You Can Buy Open Source Apps for Your Linux Distribution)
[#]: via: (https://itsfoss.com/appcenter-for-everyone/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
elementary OS is Building an App Center Where You Can Buy Open Source Apps for Your Linux Distribution
======

_**Brief: elementary OS is building an app center ecosystem where you can buy open source applications for your Linux distribution.**_

### Crowdfunding to build an open source AppCenter for everyone

![][1]

[elementary OS][2] recently announced that it is [crowdfunding a campaign to build an app center][3] from which you can buy open source applications. The applications in the app center will be in the Flatpak format.

Though the initiative is led by elementary OS, the new app center will be available for other distributions as well.

The campaign aims to fund a week-long, in-person development sprint in Denver, Colorado (USA) featuring developers from elementary OS, [Endless][4], [Flathub][5] and [GNOME][6].

The crowdfunding campaign has already crossed its goal of raising $10,000. You can still fund it, as additional funds will go toward the development of elementary OS.

[Crowdfunding Campaign][3]
### What features this AppCenter brings

The focus is on providing 'secure' applications, which is why apps are delivered as [Flatpak][7] packages and run confined. In this format, apps are restricted from accessing system or personal files and are isolated from other apps at a technical level by default.

Apps get access to operating system and personal files only if you explicitly give your consent.

Apart from security, [Flatpak][8] also bundles all the dependencies. This way, app developers can use cutting-edge technologies even when they are not available in the current Linux distribution.

The AppCenter will also have a wallet feature to save your card details. This enables you to quickly pay for apps without entering the card details each time.

![][9]

This new open source 'app center' will be available for other Linux distributions as well.
### Inspired by the success of elementary OS's own 'Pay What You Want' app center model

A couple of years ago, elementary OS launched its own app center. The 'pay what you want' approach was quite a hit: developers can set a minimum price for their open source apps, and users can choose to pay that amount or more.

![][10]

This has helped several indie developers get paid for their open source applications. The app store now has around 160 native applications, and elementary OS says thousands of dollars have been paid to developers through the app center.

Inspired by the success of this app center experiment in elementary OS, they now want to bring the same approach to other distributions as well.
### If the applications are open source, how can you charge money for them?

Some people are still confused by the idea of FOSS (free and open source software). Here, the **source** code of the software is **open**, and anyone is **free** to modify and redistribute it.

That doesn't mean open source software has to be free of cost. Some developers rely on donations, while some charge a fee for support.

Getting paid for open source apps may encourage developers to create more [applications for Linux][11].
### Let's see if it could work

![][12]

Personally, I am not a huge fan of the Flatpak or Snap packaging formats. They have their benefits, but they take relatively longer to start and they are huge in size. If you install several such Snaps or Flatpaks, you may start running out of disk space.

There is also a need to be vigilant about fake and scam developers in this new app ecosystem. Imagine if scammers started creating Flatpak packages of obscure open source applications and putting them on the app center. I hope the developers put some sort of mechanism in place to weed out such apps.

I do hope that this new AppCenter replicates the success it has seen in elementary OS. We definitely need a better ecosystem of open source apps for desktop Linux.

What are your views on this? Is it the right approach? What suggestions do you have for the improvement of the AppCenter?
--------------------------------------------------------------------------------

via: https://itsfoss.com/appcenter-for-everyone/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter.png?ssl=1
[2]: https://elementary.io/
[3]: https://www.indiegogo.com/projects/appcenter-for-everyone/
[4]: https://itsfoss.com/endless-linux-computers/
[5]: https://flathub.org/
[6]: https://www.gnome.org/
[7]: https://flatpak.org/
[8]: https://itsfoss.com/flatpak-guide/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter-wallet.png?ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/appcenter-payment.png?ssl=1
[11]: https://itsfoss.com/essential-linux-applications/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/open_source_app_center.png?ssl=1
@@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (zhangxiangping)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 open source tools for natural language processing)
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
|
||||
12种自然语言处理的开源工具
|
||||
======
|
||||
|
||||
看看可以用在你自己NLP应用中的十几个工具吧。
|
||||
|
||||
![Chat bubbles][1]
|
||||
|
||||
在过去的几年里,自然语言处理(NLP)推动了聊天机器人、语音助手、文本预测等我们日常生活中常用的语音或文本应用技术的发展。目前市面上有各种各样的开源 NLP 工具,所以我决定调查一下当前的开源 NLP 工具,以帮助你规划下一个基于语音或文本的应用程序的开发。
|
||||
|
||||
我将从我熟悉的编程语言出发来介绍这些工具,尽管我对其中一些工具并不算精通(我没有在自己不熟悉的语言里寻找工具)。也就是说,出于各种原因,我还是排除了三种我熟悉的语言中的工具。
|
||||
|
||||
R语言是没有被包含在内的,因为我发现的大多数库都有一年多没有更新了。这并不总是意味着他们没有得到很好的维护,但我认为他们应该得到更多的更新,以便和同一领域的其他工具竞争。我还选择了最有可能在生产场景中使用的语言和工具(而不是在学术界和研究中使用),虽然我主要是使用R作为研究和发现工具。
|
||||
|
||||
我发现Scala的很多库都没有更新了。我上次使用Scala已经有好几年了,当时它非常流行。但是大多数库从那个时候就再没有更新过,或者只有少数一些有更新。
|
||||
|
||||
最后,我排除了C++。这主要是因为我在的公司很久没有使用C++来进行NLP或者任何数据科学的工作。
|
||||
|
||||
### Python工具
|
||||
#### Natural Language Toolkit (NLTK)
|
||||
|
||||
[Natural Language Toolkit (NLTK)][2]是我调研的所有工具中功能最完善的一个。它完整地实现了自然语言处理中的多数功能组件,比如分类、分词(tokenization)、词干化、标注、句法分析和语义推理。每一种方法都有多种不同的实现方式,所以你可以选择具体的算法和方式去使用它。同时,它也支持不同的语言。然而,它将所有的数据都表示为字符串的形式,这对于一些简单的数据结构来说可能很方便,但是要使用一些高级功能就可能有点困难。它的使用文档有点复杂,但也有很多其他人编写的学习资料,比如[这本很棒的书][3]。和其他工具比起来,这个工具库的运行速度有点慢。但总的来说,这个工具包非常不错,可以用于需要具体算法组合的实验、探索和实际应用当中。
|
||||
|
||||
#### SpaCy
|
||||
|
||||
[SpaCy][4]是NLTK的主要竞争者。在大多数情况下都比NLTK的速度更快,但是SpaCy对自然语言处理的功能组件只有单一实现。SpaCy把所有的东西都表示为一个对象而不是字符串,这样就能够为构建应用简化接口。这也方便它能够集成多种框架和数据科学的工具,使得你更容易理解你的文本数据。然而,SpaCy不像NLTK那样支持多种语言。它对每个接口都有一些简单的选项和文档,包括用于语言处理和分析各种组件的多种神经网络模型。总的来说,如果创造一个新的应用的生产过程中不需要使用特定的算法的话,这是一个很不错的工具。
|
||||
|
||||
#### TextBlob
|
||||
|
||||
[TextBlob][5]是NLTK的一个扩展库。你可以通过TextBlob用一种更简单的方式来使用NLTK的功能,TextBlob也包括了Pattern库中的功能。如果你刚刚开始学习,这将会是一个不错的工具可以用于生产对性能要求不太高的应用。TextBlob适用于任何场景,但是对小型项目会更加合适。
|
||||
|
||||
#### Textacy
|
||||
|
||||
这个工具是我用过的名字最好听的。读"[Textacy][6]" 时先发出"ex"再发出"cy"。它不仅仅是名字好,同时它本身也是一个很不错的工具。它使用SpaCy作为它自然语言处理核心功能,但它在处理过程的前后做了很多工作。如果你想要使用SpaCy,你可以先使用Textacy,从而不用去多写额外的附加代码你就可以处理不同种类的数据。
|
||||
|
||||
#### PyTorch-NLP
|
||||
|
||||
[PyTorch-NLP][7]才出现短短的一年,但它已经有一个庞大的社区了。它适用于快速原型开发。当公司或者研究人员推出很多其他工具去完成新奇的处理任务,比如图像转换,它就会被更新。PyTorch的目标用户是研究人员,但它也能用于原型开发,或在最开始的生产任务中使用最好的算法。基于此基础上的创建的库也是值得研究的。
|
||||
|
||||
### Node.js 工具
|
||||
|
||||
#### Retext
|
||||
|
||||
[Retext][8]是[unified collective][9]的一部分。Unified是一个接口,能够集成不同的工具和插件以便他们能够高效的工作。Retext是unified工具集三个中的一个,另外的两个分别是用于markdown编辑的Remark和用于HTML处理的Rehype。这是一个非常有趣的想法,我很高兴看到这个社区的发展。Retext没有暴露过多的底层技术,更多的是使用插件去完成你在NLP任务中想要做的事情。拼写检查,固定排版,情绪检测和可读性分析都可以用简单的插件来完成。如果你不想了解底层处理技术又想完成你的任务的话,这个工具和社区是一个不错的选择。
|
||||
|
||||
#### Compromise
|
||||
|
||||
如果你在找拥有最高级的功能和最复杂的系统的工具的话,[Compromise][10]不是你的选择。 然而,如果你想要一个性能好,应用广泛,还能在客户端运行的工具的话,Compromise值得一试。实际上,它的名字是准确的,因为作者更关注更具体功能的小软件包,而在功能性和准确性上做出了牺牲,这些功能得益于用户对使用环境的理解。
|
||||
|
||||
#### Natural
|
||||
|
||||
[Natural][11]包含了一般自然语言处理库所具有的大多数功能。它主要是处理英文文本,但也包括一些其它语言,它的社区也在支持更多的语言。它能够进行分词、词干化、分类、语音处理、词频-逆文档频率计算(TF-IDF)、WordNet、字符相似度计算和一些变换。它和 NLTK 有得一比,因为它想要把所有东西都包含在一个包里,使用方便,但是可能不太适合专注的研究。总的来说,这是一个不错的功能齐全的库,目前仍在开发中,但可能需要对底层实现有更多的了解,才能用得更有效。
|
||||
|
||||
#### Nlp.js
|
||||
|
||||
[Nlp.js][12]是在其他几个NLP库上开发的,包括Franc和Brain.js。它提供了一个能很好支持NLP组件的接口,比如分类,情感分析,词干化,命名实体识别和自然语言生成。它也支持一些其他语言,在你处理除了英语之外的语言时也能提供一些帮助。总之,它是一个不错的通用工具,能够提供简单的接口去调用其他工具。在你需要更强大或更灵活的工具之前,这个工具可能会在你的应用程序中用上很长一段时间。
|
||||
|
||||
### Java工具
|
||||
#### OpenNLP
|
||||
|
||||
[OpenNLP][13]是由Apache基金会维护的,所以它可以很方便地集成到其他Apache项目中,比如Apache Flink,Apache NiFi和Apache Spark。这是一个通用的NLP工具,包含了所有NLP组件中的通用功能,可以通过命令行或者以包的形式导入到应用中来使用它。它也支持很多种语言。OpenNLP是一个很高效的工具,包含了很多特性,如果你用Java开发生产的话,它是个很好的选择。
|
||||
|
||||
#### StanfordNLP
|
||||
|
||||
[Stanford CoreNLP][14]是一个工具集,提供了基于统计的、基于深度学习和基于规则的 NLP 功能。这个工具也有许多其它编程语言的版本,所以可以脱离 Java 来使用。它是由高水平的研究机构创建的一个高效的工具,但在生产环境中可能不是最好的选择。此工具采用双重许可,其中有一个可以用于商业目的的特殊许可。总之,在研究和实验中它是一个很棒的工具,但在生产系统中可能会带来一些额外的开销。比起 Java 版本来说,读者可能对它的 Python 版本更感兴趣。Coursera 上最好的机器学习课程之一就是由斯坦福教授讲授的,[点此][15]还可以找到其它不错的资源。
|
||||
|
||||
#### CogCompNLP
|
||||
|
||||
[CogCompNLP][16]是由伊利诺伊大学开发的一个工具,它也有一个功能相似的 Python 版本。它可以用于处理文本,支持本地处理和远程处理,能够极大地缓解你本地设备的压力。它提供了很多处理函数,比如分词、词性标注、断句、命名实体标注、词形还原、依存分析和语义角色标注。它是一个很好的研究工具,你可以自己探索它的不同功能。我不确定它是否适合生产环境,但如果你使用 Java 的话,它值得一试。
|
||||
|
||||
* * *
|
||||
|
||||
你最喜欢的开源的NLP工具和库是什么?请在评论区分享文中没有提到的工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/natural-language-processing-tools
|
||||
|
||||
作者:[Dan Barker (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[zxp](https://github.com/zhangxiangping)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/barkerd427
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
|
||||
[2]: http://www.nltk.org/
|
||||
[3]: http://www.nltk.org/book_1ed/
|
||||
[4]: https://spacy.io/
|
||||
[5]: https://textblob.readthedocs.io/en/dev/
|
||||
[6]: https://readthedocs.org/projects/textacy/
|
||||
[7]: https://pytorchnlp.readthedocs.io/en/latest/
|
||||
[8]: https://www.npmjs.com/package/retext
|
||||
[9]: https://unified.js.org/
|
||||
[10]: https://www.npmjs.com/package/compromise
|
||||
[11]: https://www.npmjs.com/package/natural
|
||||
[12]: https://www.npmjs.com/package/node-nlp
|
||||
[13]: https://opennlp.apache.org/
|
||||
[14]: https://stanfordnlp.github.io/CoreNLP/
|
||||
[15]: https://opensource.com/article/19/2/learn-data-science-ai
|
||||
[16]: https://github.com/CogComp/cogcomp-nlp
|
246
translated/tech/20190407 Manage multimedia files with Git.md
Normal file
@@ -0,0 +1,246 @@
[#]: collector: (lujun9972)
[#]: translator: (svtter)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
通过 Git 来管理多媒体文件
|
||||
======
|
||||
|
||||
学习如何使用 Git 来追踪项目中的大型多媒体文件。
|
||||
这是本系列介绍 Git 各种用法的文章中的最后一篇。
|
||||
|
||||
![video editing dashboard][1]
|
||||
|
||||
Git 是一个专用于源代码版本控制的工具,因此它很少被用于非纯文本的项目和行业。然而,异步工作流的优点是十分诱人的,尤其是在一些日益增长的行业中,这类行业将重要的计算与重要的艺术创作结合在一起,包括网页设计、视觉效果、视频游戏、出版、货币设计(是的,这是一个真实的行业)、教育等等,还有许多行业属于这个类型。
|
||||
|
||||
在 Git 迎来 14 周年纪念日之际,这个系列分享了六种鲜为人知的使用 Git 的方式。在本文的末尾,我们将会介绍那些利用 Git 的优点来管理多媒体文件的软件。
|
||||
|
||||
### Git 管理多媒体文件的问题
|
||||
|
||||
众所周知,Git 用于处理非文本文件不是很好,但是这并不妨碍我们进行尝试。下面是一个使用 Git 来复制照片文件的例子:
|
||||
|
||||
```
|
||||
$ du -hs
|
||||
108K .
|
||||
$ cp ~/photos/dandelion.tif .
|
||||
$ git add dandelion.tif
|
||||
$ git commit -m 'added a photo'
|
||||
[master (root-commit) fa6caa7] two photos
|
||||
1 file changed, 0 insertions(+), 0 deletions(-)
|
||||
create mode 100644 dandelion.tif
|
||||
$ du -hs
|
||||
1.8M .
|
||||
```
|
||||
|
||||
目前为止没有什么异常。增加一个 1.8MB 的照片到一个目录下,使得目录变成了 1.8 MB 的大小。所以下一步,我们尝试删除文件。
|
||||
|
||||
```
|
||||
$ git rm dandelion.tif
|
||||
$ git commit -m 'deleted a photo'
|
||||
$ du -hs
|
||||
828K .
|
||||
```
|
||||
|
||||
在这里我们可以看到一些问题:删除一个已经提交的文件,仍然会使仓库的大小扩大到原来的 8 倍(从 108K 到 828K)。我们可以多次测试来得到一个更好的平均值,但是这个简单的演示与我的假设一致。提交非文本文件在一开始花费的空间比较少,但是一个项目活跃的时间越长,人们对静态内容的修改就越多,更多的零碎数据就会累积起来。当一个 Git 仓库变得越来越大时,主要的成本往往是速度:拉取和推送的时间会从原本只够抿一口咖啡的工夫,拖长到让你怀疑自己是不是掉线了。
|
||||
|
||||
导致 Git 中静态内容的体积不断扩大的原因是什么呢?那些由文本构成的文件,允许 Git 只拉取被修改的部分。而光栅图像和音乐文件对 Git 而言与文本不同,你可以查看一下 .png 和 .wav 文件中的二进制数据。所以,Git 只能获取全部的数据并创建一个新的副本,哪怕一张图片仅仅修改了一个像素。
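
上面的现象可以用几条命令快速复现(这只是一个演示草图,假设系统中装有 git,文件名和大小都是随意取的):

```shell
# 演示:二进制文件被删除后,其完整数据仍留在 .git 的历史对象中
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
head -c 100000 /dev/urandom > photo.bin        # 冒充一张照片的二进制文件
git add photo.bin
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add photo'
git rm -q photo.bin
git -c user.name=demo -c user.email=demo@example.com commit -qm 'remove photo'
# 工作区里文件已经没了,但历史中的对象还在:
git cat-file -e 'HEAD~1:photo.bin' && echo 'blob still stored'
```

运行后会输出 `blob still stored`,说明删除提交并不会让仓库的历史变小。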
|
||||
|
||||
### Git-portal
|
||||
|
||||
在实践中,许多多媒体项目不需要或者不想追踪媒体的历史记录。相对于项目的文本或者代码部分,媒体部分一般有着不同的生命周期。媒体资源一般沿着一个方向产生:一张图片从铅笔草稿开始,以数字绘画的形式抵达它的目的地。然后,尽管文本能够回滚到早期的版本,艺术创作却只会一直向前。项目中的媒体很少被绑定到某个特定的版本。例外情况通常是反映数据集的图形,比如通常可以用基于文本的格式(如 SVG)完成的表、图形或图表。
|
||||
|
||||
所以,在许多同时包含文本(无论是叙事散文还是代码)和媒体的工程中,Git 是一个用于文件管理的,可接受的解决方案,只要有一个在版本控制循环之外的游乐场来给艺术家游玩。
|
||||
|
||||
![Graphic showing relationship between art assets and Git][2]
|
||||
|
||||
实现这一点的一个简单方法是 [Git-portal][3],它是一个借助 Git 钩子(hooks)的 Bash 脚本,能将静态文件移出 Git 的管理范围,并用符号链接取而代之。Git 提交的是链接文件(有时称作快捷方式),这种文件很小,所以提交的都是文本文件和那些指向媒体文件的链接。因为替身文件是符号链接,项目仍会像预期那样工作,本地机器会自动把它们解析为“真实”文件。Git-portal 在用链接替换文件时维护着项目的原有结构,因此,如果你认为 Git-portal 不适合你的项目,或者需要构建一个不含链接的项目版本(例如用于分发),逆转这个过程也很简单。
|
||||
|
||||
Git-portal 也允许通过 rsync 来远程同步静态资源,所以用户可以设置一个远程存储位置,来做为一个中心的授权源。
|
||||
|
||||
Git-portal 对于多媒体的工程是一个理想的解决方案。类似的多媒体工程包括视频游戏,桌面游戏,需要进行大型3D模型渲染和纹理的虚拟现实工程,[带图的书籍][4]以及 .odt 输出,协作型的[博客站点][5],音乐项目,等等。艺术家在应用程序中以图层(在图形世界中)和曲目(在音乐世界中)的形式执行版本控制并不少见——因此,Git 不会向多媒体项目文件本身添加任何内容。Git 的功能可用于艺术项目的其他部分(例如散文和叙述、项目管理、字幕文件、信贷、营销副本、文档等),而结构化远程备份的功能则由艺术家使用。
|
||||
|
||||
#### 安装 Git-portal
|
||||
|
||||
Git-portal 的RPM 安装包位于 <https://klaatu.fedorapeople.org/git-portal>,可用于下载和安装。
|
||||
|
||||
此外,用户可以从 Git-portal 的 Gitlab 主页手动安装。这仅仅是一个 Bash 脚本以及一些 Git hooks(也是 Bash 脚本),但是需要一个快速的构建过程来让它知道安装的位置。
|
||||
|
||||
|
||||
```
|
||||
$ git clone <https://gitlab.com/slackermedia/git-portal.git> git-portal.clone
|
||||
$ cd git-portal.clone
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
#### 使用 Git-portal
|
||||
|
||||
Git-portal 被设计为与 Git 配合使用。就像所有的 Git 大文件扩展方案一样,这意味着你需要记住一些额外的步骤。不过,你只需要在处理媒体资源的时候使用 Git-portal,所以很容易记住,除非你把大文件都当做文本文件来处理(这对于 Git 用户来说很少见)。使用 Git-portal 时必须做的一个设置步骤是:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir bigproject.git
|
||||
$ cd !$
|
||||
$ git init
|
||||
$ git-portal init
|
||||
```
|
||||
|
||||
Git-portal 的 **init** 函数在 Git 仓库中创建了一个 **_portal** 文件夹并且添加到 .gitignore 文件中。
|
||||
|
||||
在平日里使用 Git-portal 和 Git 协同十分平滑。一个较好的例子是基于 MIDI 的音乐项目:音乐工作站产生的项目文件是基于文本的,但是 MIDI 文件是二进制数据:
|
||||
|
||||
|
||||
```
|
||||
$ ls -1
|
||||
_portal
|
||||
song.1.qtr
|
||||
song.qtr
|
||||
song-Track_1-1.mid
|
||||
song-Track_1-3.mid
|
||||
song-Track_2-1.mid
|
||||
$ git add song*qtr
|
||||
$ git-portal song-Track*mid
|
||||
$ git add song-Track*mid
|
||||
```
|
||||
|
||||
如果你查看一下 **_portal** 文件夹,你会发现那里有原始的MIDI文件。这些文件在原本的位置被替换成了指向 **_portal** 的链接文件,使得音乐工作站像预期一样运行。
|
||||
|
||||
|
||||
```
|
||||
$ ls -lG
|
||||
[...] _portal/
|
||||
[...] song.1.qtr
|
||||
[...] song.qtr
|
||||
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
|
||||
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
|
||||
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
|
||||
```
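
Git-portal 的链接替换行为可以用几条命令来示意(这只是一个假设性的最小草图,并非 Git-portal 的真实实现):

```shell
# 示意:把“二进制”文件移入 _portal/ 并留下符号链接
set -e
demo=$(mktemp -d)
cd "$demo"
mkdir _portal
echo "MIDI-DATA" > song.mid            # 用一行文本冒充二进制文件,仅作演示
mv song.mid _portal/song.mid
ln -s _portal/song.mid song.mid        # Git 之后提交的只是这个小小的链接
cat song.mid                           # 链接对应用程序是透明的
```

最后的 `cat` 会输出 `MIDI-DATA`,说明应用程序通过链接仍能读到真实文件。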
|
||||
|
||||
与 Git 相同,你也可以一次添加整个目录下的文件。
|
||||
|
||||
|
||||
```
|
||||
$ cp -r ~/synth-presets/yoshimi .
|
||||
$ git-portal add yoshimi
|
||||
Directories cannot go through the portal. Sending files instead.
|
||||
$ ls -lG _portal/yoshimi
|
||||
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
|
||||
```
|
||||
|
||||
删除功能也如预期一样工作,但删除时还要把文件从 **_portal** 中移除。因此,你应该使用 **git-portal rm** 而不是 **git rm**,Git-portal 可以确保文件同时从 **_portal** 中删除:
|
||||
|
||||
|
||||
```
|
||||
$ ls
|
||||
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
|
||||
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
|
||||
$ git-portal rm song-Track_1-3.mid
|
||||
rm 'song-Track_1-3.mid'
|
||||
$ ls _portal/
|
||||
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
|
||||
```
|
||||
|
||||
如果你忘记使用 Git-portal,那么你需要手动删除 portal 文件:
|
||||
|
||||
|
||||
```
|
||||
$ git-portal rm song-Track_1-1.mid
|
||||
rm 'song-Track_1-1.mid'
|
||||
$ ls _portal/
|
||||
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
|
||||
$ trash _portal/song-Track_1-1.mid
|
||||
```
|
||||
|
||||
Git-portal 的唯一其他功能,是列出当前所有的符号链接并找出其中已经损坏的。有时这种情况会因为项目文件夹中的文件被移动而发生:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir foo
|
||||
$ mv yoshimi foo
|
||||
$ git-portal status
|
||||
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
|
||||
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
|
||||
```
|
||||
|
||||
如果你使用 Git-portal 用于私人项目并且维护自己的备份,以上就是技术方面所有你需要知道关于 Git-portal 的事情了。如果你想要添加一个协作者或者你希望 Git-portal 来像 Git 的方式来管理备份,你可以创建一个远程。
|
||||
|
||||
#### 增加 Git-portal remotes
|
||||
|
||||
为 Git-portal 增加一个远程位置是通过 Git 已经存在的功能来实现的。Git-portal 实现了 Git hooks,隐藏在仓库 .git 文件夹中的脚本,来寻找你的远程主机上是否存在以 **_portal** 开头的文件夹。如果它找到一个,它会尝试使用 **rsync** 来与远程位置同步文件。Git-portal 在用户进行 Git push 以及 Git 合并的时候(或者在进行 git pull的时候,实际上是进行一次 fetch 和自动合并)会处理这项任务。
|
||||
|
||||
如果你只是克隆了 Git 仓库,那么你可能永远不需要自己添加一个远程仓库。这是一个标准的 Git 过程:
|
||||
|
||||
```
|
||||
$ git remote add origin [git@gitdawg.com][6]:seth/bigproject.git
|
||||
$ git remote -v
|
||||
origin [git@gitdawg.com][6]:seth/bigproject.git (fetch)
|
||||
origin [git@gitdawg.com][6]:seth/bigproject.git (push)
|
||||
```
|
||||
|
||||
**origin** 这个名字对你的主要 Git 仓库是一个流行的惯例,为 Git 数据使用它是有意义的。然而,你的 Git-portal 数据是分开存储的,所以你必须创建第二个远程机器来让 Git-portal 了解向哪里 push 和从哪里 pull。取决于你的 Git 主机。你可能需要一个分离的服务器,因为媒体资源可能有GB的大小,使得一个 Git 主机由于空间限制无法承担。或者,可能你的服务器仅允许你访问你的 Git 仓库而不允许一个额外的存储文件夹:
|
||||
|
||||
```
|
||||
$ git remote add _portal [seth@example.com][7]:/home/seth/git/bigproject_portal
|
||||
$ git remote -v
|
||||
origin [git@gitdawg.com][6]:seth/bigproject.git (fetch)
|
||||
origin [git@gitdawg.com][6]:seth/bigproject.git (push)
|
||||
_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (fetch)
|
||||
_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (push)
|
||||
```
|
||||
|
||||
你可能不想把你的所有私人账户放在你的服务器上,而且你不需要这样做。为了提供服务器上仓库的大文件资源权限,你可以运行一个 Git 前端,比如 **[Gitolite][8]** 或者你可以使用 **rrsync** (restricted rsync)。
|
||||
|
||||
现在你可以推送你的 Git 数据到你的远程 Git 仓库和你的 Git-portal 数据到你的远程 portal:
|
||||
|
||||
|
||||
```
|
||||
$ git push origin HEAD
|
||||
master destination detected
|
||||
Syncing _portal content...
|
||||
sending incremental file list
|
||||
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
|
||||
total size is 60,358,015 speedup is 6,474.10
|
||||
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
|
||||
```
|
||||
|
||||
如果你已经安装了 Git-portal,并且配置了名为 **_portal** 的远程仓库,那么你的 **_portal** 文件夹会在每次 push 时发送新内容,并在与服务器同步时获取新内容。虽然你并不需要通过 Git 的 commit 或 push 来和服务器同步(用户也可以直接使用 rsync),但我发现对艺术性内容的改变进行提交是有用的。这将艺术家及其数字资源集成到工作流的其余部分中,并提供关于项目进度和速度的有用元数据。
|
||||
|
||||
### 其他选项
|
||||
|
||||
如果 Git-portal 对你而言太过简单,还有一些其他的选择用于 Git 管理大型文件。[Git Large File Storage][9] (LFS) 是一个失效项目的分支,称作 git-media。这个分支由 Github 维护和支持。它需要特殊的命令(例如 **git lfs track** 来保护大型文件不被 Git 追踪)并且需要用户维护一个 .gitattributes 文件来更新哪些仓库中的文件被 LFS 追踪。对于大文件而言,它 _仅_ 支持 HTTP 和 HTTPS 主机。所以你的 LFS 服务器必须进行配置,才能使得用户可以通过 HTTP 而不是 SSH 或 rsync 来进行鉴权。
|
||||
|
||||
另一个相对 LFS 更灵活的选项是 [git-annex][10]。你可以在我的文章 [managing binary blobs in Git][11] (忽略 git-media 这个已经废弃的项目,它的继任者 Git LFS 没有将它延续下来)中了解更多。Git-annex 是一个灵活且优雅的解决方案。它拥有一个细腻的系统来用于添加,删除,移动仓库中的大型文件。因为它灵活且强大,有很多新的命令和规则需要进行学习,所以建议看一下它的 [文档][12]。
|
||||
|
||||
然而,如果你的需求很简单,你可能更加喜欢整合已有技术来进行简单且明显任务的解决方案,Git-portal 可能是对于工作而言比较合适的工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/manage-multimedia-files-git
|
||||
|
||||
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[svtter](https://github.com/svtter)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
|
||||
[3]: http://gitlab.com/slackermedia/git-portal.git
|
||||
[4]: https://www.apress.com/gp/book/9781484241691
|
||||
[5]: http://mixedsignals.ml
|
||||
[6]: mailto:git@gitdawg.com
|
||||
[7]: mailto:seth@example.com
|
||||
[8]: https://opensource.com/article/19/4/file-sharing-git
|
||||
[9]: https://git-lfs.github.com/
|
||||
[10]: https://git-annex.branchable.com/
|
||||
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
|
||||
[12]: https://git-annex.branchable.com/walkthrough/
|
@@ -1,232 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
如何进行 Linux 启动时间优化
|
||||
======
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
_快速启动一台嵌入式设备或一台电信设备,对于时间要求严格的应用程序是至关重要的,并且在改善用户体验方面也起着非常重要的作用。本文给出了一些关于如何缩短任意设备启动时间的重要技巧。_
|
||||
|
||||
快速启动或快速重启在各种情况下都起着至关重要的作用。对于嵌入式设备来说,快速启动有助于保持所有服务的高可用性和更好的性能。设想一台电信设备,运行着一套没有启用快速启动的 Linux 操作系统,依赖于这台特定嵌入式设备的所有系统、服务和用户都可能会受到影响。这些设备在其服务中保持高可用性是非常重要的,为此,快速启动和重启起着至关重要的作用。
|
||||
|
||||
一台电信设备的一次小故障或关机,甚至几秒钟,都可能会对无数在因特网上工作的用户造成破坏。因此,对于很多对时间要求严格的设备和电信设备来说,在它们的服务中包含快速启动以帮助它们快速重新开始工作是非常重要的。让我们从图表 1 中理解 Linux 启动过程。
|
||||
|
||||
![Figure 1: Boot-up procedure][3]
|
||||
|
||||
![Figure 2: Boot chart][4]
|
||||
|
||||
**监视工具和启动过程**
|
||||
在对机器做出更改之前,用户应该注意几个因素,包括机器当前的启动速度,以及那些占用资源、增加启动时间的服务、进程或应用程序。
|
||||
|
||||
**Boot chart:** 为监视启动速度和在启动期间启动的各种服务,用户可以使用下面的命令来安装 boot chart:
|
||||
|
||||
```
sudo apt-get install pybootchartgui
```
|
||||
|
||||
你每次启动时,boot chart 在日志中保存一个 _.png_ (便携式网络图片)文件,使用户能够查看 _png_ 文件来理解系统的启动过程和服务。为此,使用下面的命令:
|
||||
|
||||
```
|
||||
cd /var/log/bootchart
|
||||
```
|
||||
|
||||
用户可能需要一个应用程序来查看 _.png_ 文件。Feh 是一个面向控制台用户的 X11 图像查看器。不像大多数其它的图像查看器,它没有一个精致的图形用户界面,但是它仅仅显示图片。Feh 可以用于查看 _.png_ 文件。你可以使用下面的命令来安装它:
|
||||
|
||||
```
|
||||
sudo apt-get install feh
|
||||
```
|
||||
|
||||
你可以使用 _feh xxxx.png_ 来查看 _png_ 文件。
|
||||
|
||||
图表 2 显示查看一个 boot chart 的 _png_ 文件时的启动图表。
|
||||
|
||||
但是,Ubuntu 15.10 以后的版本不再需要 boot chart。要获取关于启动速度的简要信息,使用下面的命令:
|
||||
|
||||
```
|
||||
systemd-analyze
|
||||
```
|
||||
|
||||
![Figure 3: Output of systemd-analyze][5]
|
||||
|
||||
图表 3 显示命令 _systemd-analyze_ 的输出。
|
||||
|
||||
命令 _systemd-analyze blame_ 根据初始化所花费的时间打印所有正在运行的单元。这些信息非常有用,可用于优化启动时间。_systemd-analyze blame_ 不会显示 _Type=simple_ 类型的服务的结果,因为 systemd 认为这类服务是立即启动完成的,因此无法测量其初始化延迟。
|
||||
|
||||
![Figure 4: Output of systemd-analyze blame][6]
|
||||
|
||||
图表 4 显示 _systemd-analyze blame_ 的输出。
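要进一步处理 _systemd-analyze blame_ 的输出,可以用 awk 把耗时统一换算成毫秒。下面是一个示意脚本,其中的输出片段是假设的示例数据,实际使用时可把它换成真实命令的输出:

```shell
# 假设的 systemd-analyze blame 输出片段,用于演示解析逻辑
blame_output='5.230s NetworkManager-wait-online.service
2.105s dev-sda1.device
890ms snapd.service'

# 将第一列耗时统一换算为毫秒并打印;真实使用时可改为:
#   systemd-analyze blame | parse_blame
parse_blame() {
    awk '{
        t = $1
        if (t ~ /ms$/)     { sub(/ms$/, "", t); ms = t + 0 }
        else if (t ~ /s$/) { sub(/s$/, "", t);  ms = t * 1000 }
        printf "%s %.0f\n", $2, ms
    }'
}

printf '%s\n' "$blame_output" | parse_blame
```

把单位统一之后,再配合 `sort -k2 -n -r` 就能方便地找出最拖慢启动的服务。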
|
||||
|
||||
下面的命令打印单元的时间关键链的树状图:
|
||||
|
||||
```
|
||||
systemd-analyze critical-chain
|
||||
```
|
||||
|
||||
图表 5 显示命令 _systemd-analyze critical-chain_ 的输出。
|
||||
|
||||
![Figure 5: Output of systemd-analyze critical-chain][7]
|
||||
|
||||
**减少启动时间的步骤**
|
||||
下面显示的是一些可采取的用于减少启动时间的步骤。
|
||||
|
||||
**BUM(启动管理器):** BUM 是一个运行级配置编辑器,可以在系统启动或重启时配置 _init_ 服务。它会显示一个可在启动时启动的所有服务的列表,用户可以逐一切换打开或关闭个别服务。BUM 有一个非常干净的图形用户界面,并且非常容易使用。
|
||||
|
||||
在 Ubuntu 14.04 中, BUM 可以使用下面的命令安装:
|
||||
|
||||
```
|
||||
sudo apt-get install bum
|
||||
```
|
||||
|
||||
要在 15.10 以后的版本中安装它,请从 _<http://apt.ubuntu.com/p/bum>_ 下载软件包。
|
||||
|
||||
从基础的东西开始,先禁用扫描仪和打印机相关的服务。如果你没有使用蓝牙和其它不需要的设备和服务,也可以禁用它们中的一些。我强烈建议你在禁用某个服务前先了解它的基本作用,因为这可能会影响机器或操作系统。图表 6 显示 BUM 的图形用户界面。
|
||||
|
||||
![Figure 6: BUM][8]
|
||||
|
||||
**编辑 rc 文件:** 为编辑 rc 文件,你需要转到 rc 目录。这可以使用下面的命令来做到:
|
||||
|
||||
```
|
||||
cd /etc/init.d
|
||||
```
|
||||
|
||||
然而,访问 _init.d_ 需要 root 用户权限。该目录主要包含启动/停止脚本,用于在系统运行期间或启动期间控制(启动、停止、重新加载、重启)守护进程。
|
||||
|
||||
在 _init.d_ 中,_rc_ 文件被称为运行控制脚本。在启动期间,init 会执行 _rc_ 脚本并发挥其作用。为了改善启动速度,我们可以更改 _rc_ 文件。当你位于 _init.d_ 目录中时,使用任意文件编辑器打开 _rc_ 文件。
|
||||
|
||||
例如,通过输入 _vim rc_,你可以将 _CONCURRENCY=none_ 的值更改为 _CONCURRENCY=shell_。后者允许某些启动脚本同时执行,而不是依次串行执行。
|
||||
|
||||
在最新版本的内核中,该值应该被更改为 _CONCURRENCY=makefile_ 。
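这一替换也可以用 sed 完成。下面在一个临时样例文件上演示(为避免破坏真实的 rc 文件,这里不直接操作 _init.d_;实际修改前请先备份):

```shell
# 构造一个只含 CONCURRENCY 设置的样例文件来演示替换
printf 'CONCURRENCY=none\n' > rc.sample

# 将 none 改为 makefile(较新内核的推荐值;旧系统可改为 shell)
sed -i 's/^CONCURRENCY=none/CONCURRENCY=makefile/' rc.sample
cat rc.sample
```

确认替换结果无误后,再以同样的 sed 表达式修改真实的 rc 文件。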
|
||||
图表 7 和 8 显示了编辑 rc 文件前后启动时间的比较,可以注意到启动速度的改善:编辑 rc 文件前的启动时间是 50.98 秒,而更改后的启动时间是 23.85 秒。
|
||||
但是,上述更改方法在 Ubuntu 15.10 以后的操作系统上不起作用,因为使用最新内核的操作系统使用的是 systemd 文件,而不再是 _init.d_ 文件。
|
||||
|
||||
![Figure 7: Boot speed before making changes to the rc file][9]
|
||||
|
||||
![Figure 8: Boot speed after making changes to the rc file][10]
|
||||
|
||||
**E4rat:** E4rat 代表 e4 “减少访问时间”(仅适用于 ext4 文件系统),是由 Andreas Rid 和 Gundolf Kiefer 开发的一个项目。E4rat 是一个借助碎片整理来实现快速启动的应用程序,它还能加速应用程序的启动。E4rat 通过在物理上重新分配文件来消除寻道时间和旋转延迟,从而获得较高的磁盘传输速率。
|
||||
E4rat 以 .deb 软件包的形式提供,你可以从它的官方网站 _<http://e4rat.sourceforge.net/>_ 下载。
|
||||
|
||||
Ubuntu 默认安装的 ureadahead 软件包与 e4rat 冲突,因此必须先使用下面的命令清除这几个软件包:
|
||||
|
||||
```
|
||||
sudo dpkg --purge ureadahead ubuntu-minimal
|
||||
```
|
||||
|
||||
现在使用下面的命令来安装 e4rat 的依赖关系:
|
||||
|
||||
```
|
||||
sudo apt-get install libblkid1 e2fslibs
|
||||
```
|
||||
|
||||
打开下载的 _.deb_ 文件,并安装它。现在需要恰当地收集启动数据来使 e4rat 工作。
|
||||
|
||||
遵循下面所给的步骤来使 e4rat 正确地运行,并提高启动速度。
|
||||
|
||||
* 在启动期间访问 Grub 菜单。这可以在系统启动时通过按住 shift 按键来完成。
|
||||
* 选择通常用于启动的选项(内核版本),并按 ‘e’ 。
|
||||
  * 查找以 _linux /boot/vmlinuz_ 开头的行,并在该行的末尾添加下面的内容(在最后一个字符后输入一个空格):
|
||||
|
||||
|
||||
|
||||
```
|
||||
init=/sbin/e4rat-collect
# 或者尝试:
quiet splash vt.handsoff=7 init=/sbin/e4rat-collect
|
||||
```
|
||||
|
||||
  * 现在,按 _Ctrl+x_ 继续启动。这会让 e4rat 在启动后收集数据。在机器上正常操作,在接下来的两分钟内打开并关闭一些应用程序。
|
||||
* 通过转到 e4rat 文件夹,并使用下面的命令来访问日志文件:
|
||||
|
||||
|
||||
|
||||
```
|
||||
cd /var/log/e4rat
|
||||
```
|
||||
|
||||
  * 如果你没有找到任何日志文件,重复上面的过程。一旦日志文件存在,再次进入 Grub 菜单,并对你的选项按 ‘e’ 键。
|
||||
  * 在你之前编辑过的同一行的末尾输入 ‘single’。这将帮助你进入命令行。如果出现其它询问的菜单,选择恢复正常启动(Resume normal boot)。如果你因某种原因无法进入命令提示符,按 Ctrl+Alt+F1 组合键。
|
||||
* 在你看到登录提示后,输入你的详细信息。
|
||||
* 现在输入下面的命令:
|
||||
|
||||
|
||||
|
||||
```
|
||||
sudo e4rat-realloc /var/lib/e4rat/startup.log
|
||||
```
|
||||
|
||||
这个进程需要一段时间,依赖于机器的磁盘速度。
|
||||
|
||||
* 现在使用下面的命令来重启你的机器:
|
||||
|
||||
|
||||
|
||||
```
|
||||
sudo shutdown -r now
|
||||
```
|
||||
|
||||
* 现在,我们需要配置 Grub 来在每次启动时运行 e4rat 。
|
||||
  * 使用任意编辑器打开 grub 文件,例如 _gksu gedit /etc/default/grub_。
|
||||
  * 查找以 _GRUB_CMDLINE_LINUX_DEFAULT=_ 开头的行,并在引号之间、所有选项之前添加下面的内容:
|
||||
|
||||
|
||||
|
||||
```
|
||||
init=/sbin/e4rat-preload 18
|
||||
```
|
||||
|
||||
* 它应该看起来像这样:
|
||||
|
||||
|
||||
|
||||
```
|
||||
GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload quiet splash"
|
||||
```
|
||||
|
||||
  * 保存并关闭 grub 文件,并使用 _sudo update-grub_ 更新 Grub。
|
||||
* 重启系统,你将在启动速度方面发现显著的变化。
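上面对 _/etc/default/grub_ 的手工编辑也可以用一条 sed 命令完成。下面在一个样例文件上演示该替换(实际操作前请先备份 _/etc/default/grub_,改完后仍需运行 _sudo update-grub_):

```shell
# 用样例文件演示把 e4rat-preload 插入 GRUB_CMDLINE_LINUX_DEFAULT 引号内的最前面
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > grub.sample

# & 表示匹配到的原文本,因此新参数被追加在开引号之后、原选项之前
sed -i 's|^GRUB_CMDLINE_LINUX_DEFAULT="|&init=/sbin/e4rat-preload |' grub.sample
cat grub.sample
```

在样例上确认输出符合预期后,再把同一表达式用于真实的 grub 文件。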
|
||||
|
||||
|
||||
|
||||
图表 9 和 10 显示了安装 e4rat 前后的启动时间差异,可以注意到启动速度的改善:使用 e4rat 前启动所用时间是 22.32 秒,而使用 e4rat 后启动所用时间是 9.065 秒。
|
||||
|
||||
![Figure 9: Boot speed before using e4rat][11]
|
||||
|
||||
![Figure 10: Boot speed after using e4rat][12]
|
||||
|
||||
**一些易做的调整**
|
||||
|
||||
极好的启动速度也可以通过一些非常小的调整来实现,下面列出其中两个。
|
||||
**SSD:** 使用固态设备而不是普通的硬盘或者其它的存储设备将肯定会改善启动速度。SSD 也帮助获得在传输文件和运行应用程序方面的极好速度。
|
||||
|
||||
**禁用图形用户界面:** 图形用户界面,桌面图形和窗口动画占用大量的资源。禁用图形用户界面是另一个实现极好的启动速度的好方法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
|
||||
|
||||
作者:[B Thangaraju][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/b-thangaraju/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?resize=696%2C496&ssl=1 (Screenshot from 2019-10-07 13-16-32)
|
||||
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
|
||||
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?resize=350%2C302&ssl=1
|
||||
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?resize=350%2C412&ssl=1
|
||||
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?resize=350%2C69&ssl=1
|
||||
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?resize=350%2C535&ssl=1
|
||||
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?resize=350%2C206&ssl=1
|
||||
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?resize=350%2C449&ssl=1
|
||||
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?resize=350%2C85&ssl=1
|
||||
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?resize=350%2C72&ssl=1
|
||||
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?resize=350%2C61&ssl=1
|
||||
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?resize=350%2C61&ssl=1
|
@ -1,75 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hopefully2333)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What's HTTPS for secure computing?)
|
||||
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
|
||||
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
|
||||
|
||||
用于安全计算的 HTTPS 是什么?
|
||||
======
|
||||
在默认的情况下,网站的安全性还不足够。
|
||||
![Secure https browser][1]
|
||||
|
||||
在过去的几年里,想找到一个只以 “http://…” 开头的网站变得越来越难,这是因为行业终于意识到网络安全“确实是一回事”,同时也是因为在客户端和服务端之间建立和使用 https 连接变得更加容易了。类似的转变正以不同的方式发生在云计算、边缘计算、物联网、区块链、人工智能、机器学习等领域。长久以来,我们都知道应该对存储的静态数据和在网络中传输的数据进行加密,但是在使用和处理数据时对其加密是困难且昂贵的。机密计算,即使用诸如受信任执行环境(TEE)这样的硬件功能来为使用中的数据和算法提供这类保护,可以保护主机系统或易受攻击环境中的数据。
|
||||
|
||||
我已经写过几次关于 TEE 的文章了,当然还有我和 Nathaniel McCallum 共同创立的 Enarx 项目(可参见《Enarx for everyone (a quest)》和《Enarx goes multi-platform》这两篇文章)。Enarx 使用 TEE 来提供独立于平台和语言的部署平台,以此让你能够安全地将敏感应用或敏感组件(例如微服务)部署在你不信任的主机上。Enarx 当然是完全开源的(或许会有人感兴趣,我们使用的是 Apache 2.0 许可证)。能够在你不信任的主机上运行工作负载,这正是机密计算的承诺,它将保护静态敏感数据和传输中数据的常规做法扩展到了使用中的数据:
|
||||
|
||||
* **存储:** 你要加密你的静态数据,因为你不完全信任你的基础存储架构。
|
||||
* **网络:** 你要加密你正在传输中的数据,因为你不完全信任你的基础网络架构。
|
||||
* **计算:** 你要加密你正在使用中的数据,因为你不完全信任你的基础计算架构。
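这三类保护中,“静态数据加密”在命令行上最容易演示。下面用 openssl 给出一个最小示例(密码和文件名均为演示用途,与文中的项目无关):

```shell
# 静态数据加密的最小示例:加密一个文件,再解密验证
printf 'sensitive data\n' > secret.txt

# 使用 AES-256-CBC 和 PBKDF2 密钥派生进行加密(密码仅作演示,需 OpenSSL 1.1.1+)
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in secret.txt -out secret.enc

# 解密并确认内容一致
decrypted=$(openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo -in secret.enc)
echo "$decrypted"
```

而“计算”这一环之所以困难,正是因为数据在 CPU 中处理时必须以明文形式存在,这也是 TEE 想要弥补的缺口。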
|
||||
|
||||
|
||||
|
||||
关于信任,我有非常多的话想说,而且上述说法里的“完全”一词很重要(重读这篇文章时,我特意加上了这个词)。无论是传递你的数据包还是存储你的数据块,在所有的情况下,你都必须在一定程度上信任你的基础设施。例如,对于计算基础架构,你必须要信任 CPU 和与之关联的固件,因为如果你不信任它们,你就无法真正地进行计算(现在有一些诸如同态加密一类的技术正开始提供一些可能,但是它们依然有限,还不够成熟)。
|
||||
|
||||
在考虑是否应该完全信任 CPU 时,时常会有问题随之而来:CPU 中已经发现了一些安全问题,那么无论主机在哪里,CPU 是否都能完全抵御物理攻击?
|
||||
|
||||
这两个问题的回答都是“不”,但是考虑到大规模可用性和推广成本,这已经是我们当前拥有的最好的技术了。为了解决第二个问题,没有人假装这项技术(或者任何其他技术)是完全安全的:我们需要做的是思考我们的威胁模型,并确定在这种情况下 TEE 是否为我们的特殊需求提供了足够的安全防护。关于第一个问题,Enarx 采用的模型是在部署时就决定你是否信任一个特定的 CPU 组。举个例子,如果供应商 Q 的 R 代芯片被发现有漏洞,可以很简单地说“我拒绝将我的工作负载部署到 Q 的 R 代芯片上,但是仍然可以部署到 Q 的 S、T 和 U 型号的芯片,以及供应商 P、M 和 N 的任何芯片上。”
|
||||
|
||||
我认为这里发生了三处改变,这些改变引起了人们现在对机密计算的兴趣和采用。
|
||||
|
||||
1. **硬件可用性:** 仅仅在过去的 6 到 12 个月里,支持 TEE 的硬件才开始变得广泛可用,当前市场上的主要例子是 Intel 的 SGX 和 AMD 的 SEV。我们期望在未来看到更多支持 TEE 的硬件。
|
||||
2. **行业准备:** 就像上云越来越多地被接受作为应用程序部署的模型一样,监管机构和立法机构也在提高对各类组织保护其所管理数据的要求。组织开始强烈呼吁能在不受信任的主机上,或者更确切地说,在他们不能完全信任且带有敏感数据的主机上,运行敏感程序(或处理敏感数据的应用程序)的方法。这不足为奇:如果芯片制造商看不到这项技术的市场,他们就不会投太多的钱在这项技术上。Linux 基金会的机密计算联盟(CCC)的成立,就是一个行业有志于发现使用机密计算的公共模型、并鼓励开源项目使用这些技术的例子。
|
||||
3. **开源:** 就像区块链一样,机密计算是开源显得理所当然的技术之一。如果你要运行敏感程序,你需要信任为你运行它的环境。不仅仅是 CPU 和固件,还包括在 TEE 内支撑你的工作负载执行的框架。可以大方地说,“我不信任主机及其上面的软件栈,所以我打算使用 TEE”,但是如果你不了解 TEE 的软件环境,那你只是把一种不透明的软件换成了另外一种。对 TEE 的开源支持将允许你或者社区,实际上是你和社区,以一种专有软件不可能实现的方式来检查和审计你所运行的程序。这就是为什么 CCC 设在致力于开放式开发模型的 Linux 基金会之下,并鼓励 TEE 相关的软件项目加入并成为开源项目(如果它们还不是)。
|
||||
|
||||
|
||||
|
||||
我认为,在过去的 15 到 20 年里,硬件可用性,行业准备和开源已成为推动技术改变的驱动力。区块链,人工智能,云计算,互联网计算,大数据和互联网商务都是这三个点同时发挥作用的例子,并且在行业内带来了巨大的改变。
|
||||
|
||||
在一般情况下,安全是我们这数十年来不断听到的一种承诺,并且至今仍未完全实现。老实说,我不确定它未来会不会实现。但是随着新技术的到来,特定用例的安全变得越来越实际、越来越普遍,并且在业内受到越来越多的期待。这样看来,机密计算似乎已准备好成为下一个重大变化,而你,我亲爱的读者,可以一起加入到这场革命中来(毕竟它是开源的)。
|
||||
|
||||
* * *
|
||||
|
||||
1. Enarx 是由红帽发起的,是一个 CCC 项目。
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
这篇文章最初发布在 Alice, Eve, and Bob 上,经作者许可重新发布。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/confidential-computing
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mikecamel
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
|
||||
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
|
||||
[3]: https://enarx.io/
|
||||
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
|
||||
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
|
||||
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
|
||||
[7]: https://confidentialcomputing.io/
|
||||
[8]: tmp.VEZpFGxsLv#1
|
||||
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/
|
@ -1,183 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Morisun029)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Use this Python script to find bugs in your Overcloud)
|
||||
[#]: via: (https://opensource.com/article/20/1/logtool-root-cause-identification)
|
||||
[#]: author: (Arkady Shtempler https://opensource.com/users/ashtempl)
|
||||
|
||||
用 Python 脚本发现 Overcloud 中的问题
|
||||
======
|
||||
LogTool 是一组 Python 脚本,可帮助你找出 Overcloud 节点中问题的根本原因。
|
||||
![Searching for code][1]
|
||||
|
||||
OpenStack 在其 Overcloud 节点和 Undercloud 主机上存储和管理了一堆日志文件。因此,使用 OSP 日志文件来排查遇到的问题并不是一件容易的事,尤其在你甚至都不知道是什么原因导致问题时。
|
||||
|
||||
如果你正处于这种情况,那么 [LogTool][2] 可以让你的生活变得更加轻松!它会为你节省人工排查问题所需的时间和精力。LogTool 基于模糊字符串匹配算法,可提供过去发生的所有唯一错误和警告信息。你可以根据日志中的时间戳导出特定时间段(例如 10 分钟前、一个小时前、一天前等)内的这些信息。
|
||||
|
||||
LogTool 是一组 Python 脚本,其主要模块 PyTool.py 在 Undercloud 主机上执行。某些操作模式会使用直接在 Overcloud 节点上执行的其它脚本,例如从 Overcloud 日志中导出错误和警告信息。
|
||||
|
||||
LogTool 同时支持 Python 2 和 Python 3,你可以根据需要选择工作目录:[LogTool_Python2][3] 或 [LogTool_Python3][4]。
|
||||
|
||||
### 操作方式
|
||||
|
||||
#### 1\. 从 Overcloud 日志中导出错误和警告信息
|
||||
|
||||
此模式用于从 Overcloud 节点中提取过去发生的**错误**和**警告**信息。系统将提示你输入“开始时间”和“调试级别”,用于提取错误或警告消息。例如,如果在过去 10 分钟内出了问题,你可以只提取该时间段内的错误和警告消息。
|
||||
|
||||
|
||||
此操作模式将为每个 Overcloud 节点生成一个包含结果文件的目录。结果文件是经过压缩(**.gz**)的简单文本文件,以减少从 Overcloud 节点下载所需的时间。要将压缩文件转换为常规文本文件,可以使用 zcat 或类似工具;此外,某些版本的 Vi 和任何最新版本的 Emacs 均支持读取压缩数据。结果文件分为几个部分,并在底部包含一个目录。
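zcat 的用法可以用一个自带示例文件的小片段演示(文件名是假设的,LogTool 的实际结果文件名可能不同):

```shell
# 构造一个示例压缩结果文件,演示不解压直接读取
printf 'ERROR: sample message\n' > result.txt
gzip -f result.txt                 # 生成 result.txt.gz,并删除原文件

content=$(zcat result.txt.gz)      # zcat 直接输出压缩文件的内容
echo "$content"
```

对较大的结果文件,可以把 `zcat` 的输出通过管道接给 `less` 或 `grep` 使用。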
|
||||
|
||||
|
||||
LogTool 可以即时检测两种日志文件:标准和非标准。在标准文件中,每条日志行都有已知的、确定的结构:时间戳、调试级别、信息等等。在非标准文件中,日志的结构是未知的,例如,它可能是第三方的日志。在目录中,你可以找到每个部分的“名称 --> 行号”,例如:
|
||||
|
||||
|
||||
* **原始数据 - 从标准 OSP 日志中提取的错误/警告消息:** 这部分包含所有提取的错误/警告消息,没有任何修改或更改。这些消息是 LogTool 用于模糊匹配分析的原始数据。
|
||||
|
||||
* **统计信息 - 每个标准 OSP 日志的错误/警告信息数量:** 在此部分,你将找到每个标准日志文件的错误和警告数量。这些信息可以帮助你了解用于排查问题根本原因的潜在组件。
|
||||
|
||||
* **统计信息 - 每个标准 OSP 日志文件的唯一消息:** 这部分提供指定时间戳内的唯一的错误和警告消息。有关每个唯一错误或警告的更多详细信息,请在“原始数据”部分中查找相同的消息。
|
||||
|
||||
  * **统计信息 - 每个非标准日志文件在任意时间的唯一消息:** 此部分包含非标准日志文件中的唯一消息。遗憾的是,LogTool 无法像处理标准日志文件那样处理这些文件。因此,在你提取“特定时间”的日志信息时,时间范围会被忽略,你会看到过去创建的所有唯一错误/警告消息。所以,请先向下滚动到结果文件底部的目录,使用目录中的行索引跳到相关部分,其中第 3、4、5 部分的信息最重要。
|
||||
|
||||
#### 2\. 从 Overcloud 节点下载所有日志
|
||||
|
||||
所有 Overcloud 节点的日志将被压缩并下载到 Undercloud 主机上的本地目录。
|
||||
|
||||
#### 3\. 所有 Overcloud 日志中使用 Grep 搜索字符串
|
||||
|
||||
|
||||
该模式使用 grep 在所有 Overcloud 日志中搜索用户提供的字符串。例如,你可能希望查看某个特定请求的所有日志消息,比如某个“创建虚拟机”失败请求的请求 ID。
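这种“grep 模式”在单个节点上的等价操作大致如下(日志内容和请求 ID 均为虚构示例):

```shell
# 构造一个示例日志目录,演示按请求 ID 过滤日志行
mkdir -p demo_logs
printf '2020-02-11 INFO req-111 start\n2020-02-11 ERROR req-3f9a create VM failed\n' > demo_logs/nova.log

# 真实场景中可把 demo_logs 换成 /var/log/containers 等日志目录
matches=$(grep -rn "req-3f9a" demo_logs)
echo "$matches"
```

`-r` 递归搜索目录,`-n` 附带行号,便于直接定位到出错的日志行。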
|
||||
|
||||
#### 4\. 检查 Overcloud 上当前的 CPU,RAM 和磁盘使用情况
|
||||
|
||||
该模式显示每个 Overcloud 节点上的当前 CPU,RAM 和磁盘信息。
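在单个节点上手动查看类似信息的常用命令如下(这只是常规系统命令的示意,并非 LogTool 的内部实现):

```shell
# 磁盘使用情况
df -h /
# CPU 负载(1、5、15 分钟平均值)
cat /proc/loadavg
# 可用内存
grep MemAvailable /proc/meminfo
```

LogTool 的价值在于一次性汇总所有节点的这类信息,而不必逐台登录执行。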
|
||||
|
||||
#### 5\. 执行用户脚本
|
||||
|
||||
该模式使用户可以在 Overcloud 节点上运行自己的脚本。例如,假设 Overcloud 部署失败,你需要在每个 Controller 节点上执行相同的过程来修复问题,你就可以实现一个“变通方案”脚本,并使用此模式在所有 Controller 节点上运行它。
|
||||
|
||||
#### 6\. 仅按给定的时间戳下载相关日志
|
||||
|
||||
此模式仅下载 Overcloud 上“上次修改时间”晚于给定时间戳的日志。例如,如果错误出现在 10 分钟前,更早的日志文件就与问题无关,因此无需下载。此外,你不能(或不应)在某些错误报告工具中附加大文件,因此此模式可能有助于编写错误报告。
|
||||
|
||||
#### 7\. 从 Undercloud 日志中导出错误和警告信息
|
||||
|
||||
这与上面的模式#1相同。
|
||||
|
||||
#### 8\. 在 Overcloud 上检查不健康的 docker
|
||||
|
||||
此模式用于在节点上搜索不健康的 Docker 容器。
|
||||
|
||||
#### 9\. 下载 OSP 日志并在本地运行 LogTool
|
||||
|
||||
此模式允许你从 Jenkins 或 Log Storage 下载 OSP 日志 (例如, **cougar11.scl.lab.tlv.redhat.com**),并在本地分析。
|
||||
|
||||
#### 10\. 在 Undercloud 上分析部署日志
|
||||
|
||||
|
||||
此模式可以帮助你了解 Overcloud 或 Undercloud 部署过程中出了什么问题。例如,在运行 **overcloud_deploy.sh** 脚本时使用 **\--log** 选项会生成部署日志;此类日志的问题是“不友好”,你很难理解是什么出了问题,尤其是当详细程度设置为 **vv** 或更高时,日志中的数据更是难以读取。此模式提供有关所有失败任务的详细信息。
|
||||
|
||||
#### 11\. 分析 Gerrit(Zuul)失败的日志
|
||||
|
||||
此模式用于分析 Gerrit(Zuul)日志文件。它会自动从远程 Gerrit 门户下载所有文件(通过 HTTP)并在本地进行分析。
|
||||
|
||||
### 安装
|
||||
|
||||
GitHub 上有 LogTool,使用以下命令将其克隆到你的 Undercloud 主机:
|
||||
|
||||
|
||||
```
|
||||
git clone https://github.com/zahlabut/LogTool.git
|
||||
```
|
||||
|
||||
该工具还使用了一些外部 Python 模块:
|
||||
|
||||
#### Paramiko
|
||||
|
||||
默认情况下,SSH 模块通常会安装在 Undercloud 上。使用以下命令来验证是否已安装:
|
||||
|
||||
|
||||
```
|
||||
ls -a /usr/lib/python2.7/site-packages | grep paramiko
|
||||
```
|
||||
|
||||
如果需要安装模块,请在 Undercloud 上执行以下命令:
|
||||
|
||||
|
||||
```
|
||||
sudo easy_install pip
|
||||
sudo pip install paramiko==2.1.1
|
||||
```
|
||||
|
||||
#### BeautifulSoup
|
||||
|
||||
此 HTML 解析器模块仅在通过 HTTP 下载日志文件的模式下使用,用于解析 Artifacts 的 HTML 页面以获取其中的所有链接。要安装 BeautifulSoup,请输入以下命令:
|
||||
|
||||
```
|
||||
pip install beautifulsoup4
|
||||
```
|
||||
|
||||
你还可以通过执行以下命令,使用 [requirements.txt][6] 文件一次性安装所有必需的模块:
|
||||
|
||||
|
||||
```
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
### 配置
|
||||
|
||||
所有必需的参数都直接在**PyTool.py**脚本中设置。默认值为:
|
||||
|
||||
```
|
||||
overcloud_logs_dir = '/var/log/containers'
|
||||
overcloud_ssh_user = 'heat-admin'
|
||||
overcloud_ssh_key = '/home/stack/.ssh/id_rsa'
|
||||
undercloud_logs_dir ='/var/log/containers'
|
||||
source_rc_file_path='/home/stack/'
|
||||
```
|
||||
|
||||
### 用法
|
||||
|
||||
此工具是交互式的,因此要启动它,只需输入:
|
||||
|
||||
|
||||
```
|
||||
cd LogTool
|
||||
python PyTool.py
|
||||
```
|
||||
|
||||
### 排除 LogTool 故障
|
||||
|
||||
|
||||
运行时会创建两个日志文件:Error.log 和 Runtime.log。请在你要提交的问题描述中附上这两个文件的内容。
|
||||
|
||||
### 局限性
|
||||
|
||||
LogTool 硬编码为最大处理 500MB 的文件。
|
||||
|
||||
### LogTool_Python3 脚本
|
||||
|
||||
在 [github.com/zahlabut/LogTool][2] 获取。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/logtool-root-cause-identification
|
||||
|
||||
作者:[Arkady Shtempler][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Morisun029](https://github.com/Morisun029)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ashtempl
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
|
||||
[2]: https://github.com/zahlabut/LogTool
|
||||
[3]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python2
|
||||
[4]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python3
|
||||
[5]: https://opensource.com/article/19/2/getting-started-cat-command
|
||||
[6]: https://github.com/zahlabut/LogTool/blob/master/LogTool_Python3/requirements.txt
|
96
translated/tech/20200205 Getting started with GnuCash.md
Normal file
@ -0,0 +1,96 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with GnuCash)
|
||||
[#]: via: (https://opensource.com/article/20/2/gnucash)
|
||||
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
|
||||
|
||||
开始使用 GnuCash
|
||||
======
|
||||
使用 GnuCash 管理你的个人或小型企业会计。
|
||||
![A dollar sign in a network][1]
|
||||
|
||||
在过去的四年里,我一直在用 [GnuCash][2] 来管理我的个人财务,我对此非常满意。这个开源 (GPL v3) 项目自 1998 年首次发布以来一直成长和改进,2019 年 12 月发布的最新版本 3.8 增加了许多改进和 bug 修复。
|
||||
|
||||
GnuCash 可在 Windows、MacOS 和 Linux 中使用。它实现了复式记账系统,并可以导入各种流行的开放和专有文件格式,包括 QIF、QFX、OFX、CSV 等。这使得从其它财务应用(包括 Quicken)迁移过来很容易,它就是为取代这些软件而出现的。
|
||||
|
||||
借助 GnuCash,你可以跟踪个人财务状况以及小型企业会计和开票。它没有一个集成的工资系统。根据文档,你可以在 GnuCash 中跟踪工资支出,但你必须在软件外部计算税金和扣减。
|
||||
|
||||
### 安装
|
||||
|
||||
要在 Linux 上安装 GnuCash:
|
||||
|
||||
* 在 Red Hat、CentOS 或 Fedora 中: **$ sudo dnf install gnucash**
|
||||
* 在 Debian、Ubuntu 或 Pop_OS 中: **$ sudo apt install gnucash**
|
||||
|
||||
|
||||
|
||||
你也可以从 [Flathub][3] 安装它,我在运行 Elementary OS 的笔记本上使用它。(本文中的所有截图都来自此次安装)。
|
||||
|
||||
### 设置
|
||||
|
||||
安装并启动程序后,你将看到一个欢迎屏幕,该页面提供了创建新账户集、导入 QIF 文件或打开新用户教程的选项。
|
||||
|
||||
![GnuCash Welcome screen][4]
|
||||
|
||||
#### 个人账户
|
||||
|
||||
如果你选择第一个选项(正如我所做的那样),GnuCash 会打开一个向导页面,收集初始数据并设置账户首选项,例如账户类型和名称、商业数据(例如税号)和首选货币。
|
||||
|
||||
![GnuCash new account setup][5]
|
||||
|
||||
GnuCash 支持个人银行账户、商业账户、汽车贷款、定期存单(CD)和货币市场账户、儿童保育账户等。
|
||||
|
||||
例如,首先创建一个简单的支票簿。你可以输入账户的初始余额或以多种格式导入现有账户数据。
|
||||
|
||||
![GnuCash import data][6]
|
||||
|
||||
#### 开票
|
||||
|
||||
GnuCash 还支持小型企业功能,包括客户、供应商和开票。要创建发票,请在 **Business -> Invoice** 中输入数据。
|
||||
|
||||
![GnuCash create invoice][7]
|
||||
|
||||
然后,你可以将发票打印在纸上,也可以将其导出到 PDF 并通过电子邮件发送给你的客户。
|
||||
|
||||
![GnuCash invoice][8]
|
||||
|
||||
### 获取帮助
|
||||
|
||||
如果你有任何疑问,可以通过菜单栏右侧优秀的帮助功能获取指导。
|
||||
|
||||
![GnuCash help][9]
|
||||
|
||||
项目的网站包含许多有用的信息的链接,例如 GnuCash [功能][10]的概述。GnuCash 还提供了[详细的文档][11],可供下载和离线阅读,它还有一个 [wiki][12],为用户和开发人员提供了有用的信息。
|
||||
|
||||
你可以在项目的 [GitHub][13] 仓库中找到其他文件和文档。GnuCash 项目由志愿者驱动。如果你想参与,请查看项目的 wiki 上的 [Getting involved][14] 部分。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/gnucash
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0 (A dollar sign in a network)
|
||||
[2]: https://www.gnucash.org/
|
||||
[3]: https://flathub.org/apps/details/org.gnucash.GnuCash
|
||||
[4]: https://opensource.com/sites/default/files/images/gnucash_welcome.png (GnuCash Welcome screen)
|
||||
[5]: https://opensource.com/sites/default/files/uploads/gnucash_newaccountsetup.png (GnuCash new account setup)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/gnucash_importdata.png (GnuCash import data)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/gnucash_enter-invoice.png (GnuCash create invoice)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/gnucash_invoice.png (GnuCash invoice)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/gnucash_help.png (GnuCash help)
|
||||
[10]: https://www.gnucash.org/features.phtml
|
||||
[11]: https://www.gnucash.org/docs/v3/C/gnucash-help.pdf
|
||||
[12]: https://wiki.gnucash.org/wiki/GnuCash
|
||||
[13]: https://github.com/Gnucash
|
||||
[14]: https://wiki.gnucash.org/wiki/GnuCash#Getting_involved_in_the_GnuCash_project
|
@ -0,0 +1,82 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (NVIDIA’s Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux)
|
||||
[#]: via: (https://itsfoss.com/geforce-now-linux/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
NVIDIA 的云游戏服务 GeForce NOW 无耻地忽略了 Linux
|
||||
======
|
||||
|
||||
NVIDIA 的 [GeForce NOW][1] 云游戏服务(在线串流游戏,可在任何设备上玩)充满前景,它面向那些可能没有好的硬件、但想在最新最好的游戏上获得尽可能好的游戏体验的玩家。
|
||||
|
||||
该服务以前仅限部分用户(以等待列表的形式)使用。然而,他们最近宣布 [GeForce NOW 面向所有人开放][2],但实际上并非如此。
|
||||
|
||||
有趣的是,它**并不面向全球所有地区**。而且,更糟的是,**GeForce NOW 不支持 Linux**。
|
||||
|
||||
![][3]
|
||||
|
||||
### GeForce NOW 并不是向“所有人开放”
|
||||
|
||||
制作基于订阅的云游戏服务,目的就是消除平台依赖。
|
||||
|
||||
就像你通常可以用任何浏览器访问网站一样,你应该能够在任何平台上玩游戏。这个概念说得通吧?
|
||||
|
||||
![][4]
|
||||
|
||||
好吧,这绝对不是什么难事,但是 NVIDIA 仍然不支持 Linux(和 iOS)?
|
||||
|
||||
### 是因为没有人使用 Linux 吗?
|
||||
|
||||
我非常不同意这一点,即便这真的是某些服务不支持 Linux 的原因。如果真是这样,我就不会在把 Linux 作为主要桌面操作系统的同时为 “It's FOSS” 写文章了。
|
||||
|
||||
不仅如此,如果 Linux 不值一提,你认为为何一个 Twitter 用户会提到缺少 Linux 支持?
|
||||
|
||||
![][5]
|
||||
|
||||
是的,也许用户群不够大,但是对一个基于云的服务来说,**不支持 Linux** 显得没有道理。
|
||||
|
||||
从技术上讲,如果 Linux 上没有游戏可玩,那么 **Valve** 就不会在 Linux 上改进 [Steam Play][6] 来帮助更多用户在 Linux 上玩原本仅支持 Windows 的游戏了。
|
||||
|
||||
我不想说得不准确,但台式机 Linux 游戏的发展比以往任何时候都要快(即使统计上仍比 Mac 和 Windows 低)。
|
||||
|
||||
### 云游戏不应该像这样
|
||||
|
||||
![][7]
|
||||
|
||||
如上所述,找到使用 Steam Play 的 Linux 玩家不难。只是你会发现 Linux 上游戏玩家的整体“市场份额”低于其他平台。
|
||||
|
||||
即使这是事实,云游戏也不应该依赖特定平台。而且,考虑到 GeForce NOW 本质上是一种基于浏览器的游戏流媒体服务,对于像 NVIDIA 这样的大公司来说,支持 Linux 并不困难。
|
||||
|
||||
来吧,NVIDIA,_你想让我们相信在技术上支持 Linux 有困难?_还是说,_你觉得不值得支持 Linux 平台?_
|
||||
|
||||
**总结**
|
||||
|
||||
不管我为 GeForce NOW 服务发布而感到多么兴奋,当看到它根本不支持 Linux,我感到非常失望。
|
||||
|
||||
如果像 GeForce NOW 这样的云游戏服务在不久的将来开始支持 Linux,**你可能没有理由使用 Windows 了**(*咳嗽*)。
|
||||
|
||||
你怎么看待这件事?在下面的评论中让我知道你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/geforce-now-linux/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.nvidia.com/en-us/geforce-now/
|
||||
[2]: https://blogs.nvidia.com/blog/2020/02/04/geforce-now-pc-gaming/
|
||||
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now-linux.jpg?ssl=1
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now.png?ssl=1
|
||||
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/geforce-now-twitter-1.jpg?ssl=1
|
||||
[6]: https://itsfoss.com/steam-play/
|
||||
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/ge-force-now.jpg?ssl=1
|