Merge pull request #95 from LCTT/master

update
This commit is contained in:
MjSeven 2018-11-03 13:55:24 +08:00 committed by GitHub
commit e96e20dcbe
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
29 changed files with 2271 additions and 1162 deletions

View File

@ -1,44 +1,43 @@
2017 年 Linux 上最好的 9 个免费视频编辑软件
Linux 上最好的 9 个免费视频编辑软件2018
======
**概要:这里介绍 Linux 上几个最好的视频编辑器,介绍他们的特性、利与弊,以及如何在你的 Linux 发行版上安装它们。**
![Linux 上最好的视频编辑器][1]
> 概要:这里介绍 Linux 上几个最好的视频编辑器,介绍它们的特性、利与弊,以及如何在你的 Linux 发行版上安装它们。
![Linux 上最好的视频编辑器][2]
我们曾经在一篇短文中讨论过[ Linux 上最好的照片管理应用][3][Linux 上最好的代码编辑器][4]。今天我们将讨论 **Linux 上最好的视频编辑软件**
我们曾经在一篇短文中讨论过 [Linux 上最好的照片管理应用][3][Linux 上最好的代码编辑器][4]。今天我们将讨论 **Linux 上最好的视频编辑软件**
当谈到免费视频编辑软件Windows Movie Maker 和 iMovie 是大部分人经常推荐的。
很不幸,上述两者在 GNU/Linux 上都不可用。但是不必担心,我们为你汇集了一个**最好的视频编辑器**清单。
## Linux 上最好的视频编辑器
### Linux 上最好的视频编辑器
接下来让我们一起看看这些最好的视频编辑软件。如果你觉得文章读起来太长,这里有一个快速摘要。你可以点击链接跳转到文章的相关章节:
接下来让我们一起看看这些最好的视频编辑软件。如果你觉得文章读起来太长,这里有一个快速摘要。
视频编辑器 主要用途 类型
Kdenlive 通用视频编辑 免费开源
OpenShot 通用视频编辑 免费开源
Shotcut 通用视频编辑 免费开源
Flowblade 通用视频编辑 免费开源
Lightworks 专业级视频编辑 免费增值
Blender 专业级三维编辑 免费开源
Cinelerra 通用视频编辑 免费开源
DaVinci 专业级视频处理编辑 免费增值
VidCutter 简单视频拆分合并 免费开源
| 视频编辑器 | 主要用途 | 类型 |
|----------|---------|-----|
| Kdenlive | 通用视频编辑 | 自由开源 |
| OpenShot | 通用视频编辑 | 自由开源 |
| Shotcut | 通用视频编辑 | 自由开源 |
| Flowblade | 通用视频编辑 | 自由开源 |
| Lightworks | 专业级视频编辑 | 免费增值 |
| Blender | 专业级三维编辑 | 自由开源 |
| Cinelerra | 通用视频编辑 | 自由开源 |
| DaVinci | 专业级视频处理编辑 | 免费增值 |
| VidCutter | 简单视频拆分合并 | 自由开源 |
### 1\. Kdenlive
![Kdenlive - Ubuntu 上的免费视频编辑器][1]
### 1、 Kdenlive
![Kdenlive - Ubuntu 上的免费视频编辑器][5]
[Kdenlive][6] 是 [KDE][8] 上的一个免费且[开源][7]的视频编辑软件,支持双视频监控,多轨时间线,剪辑列表,支持自定义布局,基本效果,以及基本过渡。
[Kdenlive][6] 是 [KDE][8] 上的一个自由且[开源][7]的视频编辑软件,支持双视频监控、多轨时间线、剪辑列表、自定义布局、基本效果,以及基本过渡效果。
它支持多种文件格式和多种摄像机、相机包括低分辨率摄像机Raw 和 AVI DV 编辑)Mpeg2mpeg4 和 h264 AVCHD小型相机和便携式摄像机高分辨率摄像机文件包括 HDV 和 AVCHD 摄像机,专业摄像机,包括 XDCAM-HD™ 流, IMX™ (D10) 流DVCAM (D10)DVCAMDVCPRO™DVCPRO50™ 流以及 DNxHD™ 流
它支持多种文件格式和多种摄像机、相机包括低分辨率摄像机Raw 和 AVI DV 编辑)、mpeg2、mpeg4 和 h264 AVCHD小型相机和便携式摄像机、高分辨率摄像机文件包括 HDV 和 AVCHD 摄像机)、专业摄像机(包括 XDCAM-HD™ 流、IMX™ (D10) 流、DVCAM (D10)、DVCAM、DVCPRO™、DVCPRO50™ 流以及 DNxHD™ 流)
如果你正寻找 Linux 上 iMovie 的替代品Kdenlive 会是你最好的选择。
#### Kdenlive 特性
Kdenlive 特性
* 多轨视频编辑
* 多种音视频格式支持
@ -51,50 +50,44 @@ VidCutter 简单视频拆分合并 免费开源
* 广泛的硬件支持
* 关键帧效果
#### 优点
优点:
* 通用视频编辑器
* 对于那些熟悉视频编辑的人来说并不太复杂
缺点:
#### 缺点
* 如果你想找的是极致简单的编辑软件,它可能还是令你有些困惑
* KDE 应用程序以臃肿而臭名昭著
#### 安装 Kdenlive
Kdenlive 适用于所有主要的 Linux 发行版。你只需在软件中心搜索即可。[Kdenlive 网站的下载部分][9]提供了各种软件包。
命令行爱好者可以通过在 Debian 和基于 Ubuntu 的 Linux 发行版中运行以下命令从终端安装它:
命令行爱好者可以通过在 Debian 和基于 Ubuntu 的 Linux 发行版中运行以下命令从终端安装它:
```
sudo apt install kdenlive
```
### 2\. OpenShot
![Openshot - ubuntu 上的免费视频编辑器][1]
### 2、 OpenShot
![Openshot - ubuntu 上的免费视频编辑器][10]
[OpenShot][11] 是 Linux 上的另一个多用途视频编辑器。OpenShot 可以帮助你创建具有过渡和效果的视频。你还可以调整声音大小。当然,它支持大多数格式和编解码器。
你还可以将视频导出至 DVD上传至 YouTubeVimeoXbox 360 以及许多常见的视频格式。OpenShot 比 Kdenlive 要简单一些。因此如果你需要界面简单的视频编辑器OpenShot 是一个不错的选择。
你还可以将视频导出至 DVD上传至 YouTube、Vimeo、Xbox 360 以及许多常见的视频格式。OpenShot 比 Kdenlive 要简单一些。因此如果你需要界面简单的视频编辑器OpenShot 是一个不错的选择。
它还有个简洁的[开始使用 Openshot][12] 文档。
#### OpenShot 特性
OpenShot 特性
* 跨平台,可在LinuxmacOS 和 Windows 上使用
* 跨平台,可在 Linux、macOS 和 Windows 上使用
* 支持多种视频,音频和图像格式
* 强大的基于曲线的关键帧动画
* 桌面集成与拖放支持
* 不受限制的音视频轨道或图层
* 可剪辑调整大小,缩放,修剪,捕捉,旋转和剪切
* 可剪辑调整大小、缩放、修剪、捕捉、旋转和剪切
* 视频转换可实时预览
* 合成,图像层叠和水印
* 标题模板,标题创建,子标题
@ -107,96 +100,81 @@ sudo apt install kdenlive
* 音频混合和编辑
* 数字视频效果,包括亮度,伽玛,色调,灰度,色度键等
#### 优点
优点:
* 用于一般视频编辑需求的通用视频编辑器
* 可在 Windows 和 macOS 以及 Linux 上使用
#### 缺点
缺点:
  * 软件用起来可能很简单,但如果你对视频编辑非常陌生,那么肯定要经历一段曲折的学习过程
* 你可能仍然没有达到专业级电影制作编辑软件的水准
#### 安装 OpenShot
OpenShot 也可以在所有主流 Linux 发行版的软件仓库中使用。你只需在软件中心搜索即可。你也可以从[官方页面][13]中获取它。
在 Debian 和基于 Ubuntu 的 Linux 发行版中,我最喜欢运行以下命令来安装它:
在 Debian 和基于 Ubuntu 的 Linux 发行版中,我最喜欢运行以下命令来安装它:
```
sudo apt install openshot
```
### 3\. Shotcut
![Shotcut Linux 视频编辑器][1]
### 3、 Shotcut
![Shotcut Linux 视频编辑器][14]
[Shotcut][15] 是 Linux 上的另一个编辑器,可以和 Kdenlive 与 OpenShot 归为同一联盟。虽然它确实与上面讨论的其他两个软件有类似的功能,但 Shotcut 更先进的地方是支持4K视频。
[Shotcut][15] 是 Linux 上的另一个编辑器,可以和 Kdenlive 与 OpenShot 归为同一联盟。虽然它确实与上面讨论的其他两个软件有类似的功能,但 Shotcut 更先进的地方是支持 4K 视频。
支持许多音频视频格式,过渡和效果是 Shotcut 的众多功能中的一部分。它也支持外部监视器。
支持许多音频、视频格式,以及过渡和效果,这些只是 Shotcut 众多功能中的一部分。它也支持外部监视器。
这里有一系列视频教程让你[轻松上手 Shotcut][16]。它也可在 Windows 和 macOS 上使用,因此你也可以在其他操作系统上学习。
#### Shotcut 特性
Shotcut 特性
* 跨平台,可在 LinuxmacOS 和 Windows 上使用
  * 跨平台,可在 Linux、macOS 和 Windows 上使用
* 支持各种视频,音频和图像格式
* 原生时间线编辑
* 混合并匹配项目中的分辨率和帧速率
* 音频滤波混音和效果
  * 音频滤波、混音和效果
* 视频转换和过滤
* 具有缩略图和波形的多轨时间轴
* 无限制撤消和重做播放列表编辑,包括历史记录视图
* 剪辑调整大小,缩放,修剪,捕捉,旋转和剪切
* 剪辑调整大小、缩放、修剪、捕捉、旋转和剪切
* 使用纹波选项修剪源剪辑播放器或时间轴
* 在额外系统显示/监视器上的外部监察
* 硬件支持
你可以在[这里][17]阅读它的更多特性。
#### 优点
优点:
* 用于常见视频编辑需求的通用视频编辑器
* 支持 4K 视频
  * 可在 Windows、macOS 以及 Linux 上使用
#### 缺点
缺点:
* 功能太多降低了软件的易用性
#### 安装 Shotcut
Shotcut 以 [Snap][18] 格式提供。你可以在 Ubuntu 软件中心找到它。对于其他发行版,你可以从此[下载页面][19]获取可执行文件来安装。
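对于喜欢命令行的用户,一个可能的安装方式大致如下(假设 Snap 商店中的包名就是 `shotcut` 且采用 classic 限制模式,请以商店实际信息为准):

```
sudo snap install shotcut --classic
```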
### 4\. Flowblade
![Flowblade ubuntu 上的视频编辑器][1]
### 4、 Flowblade
![Flowblade ubuntu 上的视频编辑器][20]
[Flowblade][21] 是 Linux 上的一个多轨非线性视频编辑器。与上面讨论的一样,这也是一个免费开源的软件。它具有时尚和现代化的用户界面。
[Flowblade][21] 是 Linux 上的一个多轨非线性视频编辑器。与上面讨论的一样,这也是一个自由开源的软件。它具有时尚和现代化的用户界面。
用 Python 编写它的设计初衷是快速且准确。Flowblade 专注于在 Linux 和其他免费平台上提供最佳体验。所以它没有在 Windows 和 OS X 上运行的版本。Linux 用户专享其实感觉也不错的。
用 Python 编写,它的设计初衷是快速且准确。Flowblade 专注于在 Linux 和其他自由平台上提供最佳体验。所以它没有在 Windows 和 OS X 上运行的版本。Linux 用户专享,其实感觉也不错。
你也可以查看这个不错的[文档][22]来帮助你使用它的所有功能。
#### Flowblade 特性
Flowblade 特性
* 轻量级应用
* 为简单的任务提供简单的界面,如拆分,合并,覆盖等
* 为简单的任务提供简单的界面,如拆分、合并、覆盖等
* 大量的音视频效果和过滤器
* 支持[代理编辑][23]
* 支持拖拽
@ -206,39 +184,32 @@ Shotcut 以 [Snap][18] 格式提供。你可以在 Ubuntu 软件中心找到它
* 视频转换和过滤器
* 具有缩略图和波形的多轨时间轴
你可以在 [Flowblade 特性][24]里阅读关于它的更多信息。
#### 优点
优点:
* 轻量
* 适用于通用视频编辑
#### 缺点
缺点:
* 不支持其他平台
#### 安装 Flowblade
Flowblade 应当在所有主流 Linux 发行版的软件仓库中都可以找到。你可以从软件中心安装它。也可以在[下载页面][25]查看更多信息。
另外,你可以在 Ubuntu 和基于 Ubuntu 的系统中使用以下命令安装 Flowblade
```
sudo apt install flowblade
```
### 5\. Lightworks
![Lightworks 运行在 ubuntu 16.04][1]
### 5、 Lightworks
![Lightworks 运行在 ubuntu 16.04][26]
如果你在寻找一个具有更多特性的视频编辑器,这就是你想要的。[Lightworks][27] 是一个跨平台的专业视频编辑器,可以在 LinuxMac OS X 以及 Windows上使用。
如果你在寻找一个具有更多特性的视频编辑器,这就是你想要的。[Lightworks][27] 是一个跨平台的专业视频编辑器,可以在 Linux、Mac OS X 以及 Windows 上使用。
它是一款屡获殊荣的专业[非线性编辑][28]NLE软件支持高达 4K 的分辨率以及 SD 和 HD 格式的视频。
@ -249,13 +220,11 @@ Lightworks 有两个版本:
* Lightworks 免费版
* Lightworks 专业版
专业版有更多功能,比如支持更高的分辨率,支持 4K 和 蓝光视频等。
专业版有更多功能,比如支持更高的分辨率,支持 4K 和蓝光视频等。
这个[页面][29]有广泛的可用文档。你也可以参考 [Lightworks 视频向导页][30]的视频。
#### Lightworks 特性
Lightworks 特性
* 跨平台
* 简单直观的用户界面
@ -267,27 +236,19 @@ Lightworks 有两个版本:
* 支持拖拽
* 各种音频和视频效果和滤镜
#### 优点
优点:
* 专业,功能丰富的视频编辑器
#### 缺点
缺点:
* 免费版有使用限制
#### 安装 Lightworks
Lightworks 为 Debian 和基于 Ubuntu 的 Linux 提供了 DEB 安装包,为基于 Fedora 的 Linux 发行版提供了 RPM 安装包。你可以在[下载页面][31]找到安装包。
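下载后,大致可以用各发行版自带的包管理工具安装(下面的安装包文件名仅为假设的示例,请以实际下载到的文件名为准):

```
# Debian/Ubuntu 系(文件名仅为示例)
sudo dpkg -i lwks-14.5.0-amd64.deb
# Fedora 系(文件名仅为示例)
sudo rpm -ivh lwks-14.5.0.x86_64.rpm
```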
### 6\. Blender
![Blender 运行在 Ubuntu 16.04][1]
### 6、 Blender
![Blender 运行在 Ubuntu 16.04][32]
@ -295,48 +256,40 @@ Lightworks 为 Debian 和基于 Ubuntu 的 Linux 提供了 DEB 安装包,为
虽然最初设计用于制作 3D 模型,但它也具有多种格式视频的编辑功能。
#### Blender 特性
* 实时预览,亮度波形,色度矢量显示和直方图显示
* 音频混合,同步,擦洗和波形可视化
* 最多32个轨道用于添加视频图像音频场景面具和效果
* 速度控制,调整图层,过渡,关键帧,过滤器等
Blender 特性
* 实时预览、亮度波形、色度矢量显示和直方图显示
* 音频混合、同步、擦洗和波形可视化
  * 最多 32 个轨道,用于添加视频、图像、音频、场景、遮罩和效果
* 速度控制、调整图层、过渡、关键帧、过滤器等
你可以在[这里][34]阅读更多相关特性。
#### 优点
优点:
* 跨平台
* 专业级视频编辑
#### 缺点
缺点:
* 复杂
* 主要用于制作 3D 动画,不专门用于常规视频编辑
#### 安装 Blender
Blender 的最新版本可以从[下载页面][35]下载。
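另外,假如你的发行版支持 snap通常也可以用类似下面的命令安装包名与限制模式以 Snap 商店实际信息为准):

```
sudo snap install blender --classic
```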
### 7\. Cinelerra
![Cinelerra Linux 上的视频编辑器][1]
### 7、 Cinelerra
![Cinelerra Linux 上的视频编辑器][36]
[Cinelerra][37] 从 1998 年发布以来已被下载超过500万次。它是 2003 年第一个在 64 位系统上提供非线性编辑的视频编辑器。当时它是Linux用户的首选视频编辑器但随后一些开发人员丢弃了此项目它也随之失去了光彩。
[Cinelerra][37] 从 1998 年发布以来,已被下载超过 500 万次。它是 2003 年第一个在 64 位系统上提供非线性编辑的视频编辑器。当时它是 Linux 用户的首选视频编辑器,但随后一些开发人员丢弃了此项目,它也随之失去了光彩。
好消息是,它正在回到正轨,并再次良好地发展。
如果你想了解关于 Cinelerra 项目是如何开始的,这里有些[有趣的背景故事][38]。
#### Cinelerra 特性
Cinelerra 特性
* 非线性编辑
* 支持 HD 视频
@ -345,27 +298,20 @@ Blender 的最新版本可以从[下载页面][35]下载。
* 不受限制的图层数量
* 拆分窗格编辑
#### 优点
优点:
* 通用视频编辑器
#### 缺点
缺点:
* 不适用于新手
* 没有可用的安装包
#### 安装 Cinelerra
你可以从 [SourceForge][39] 下载源码。更多相关信息请看[下载页面][40]。
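从源码构建时,大致流程类似下面这样(纯属示意,压缩包文件名为假设,具体依赖与步骤请以源码自带的构建说明为准):

```
tar xf cinelerra-*.tar.xz   # 文件名仅为示例
cd cinelerra-*
./configure
make
sudo make install
```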
### 8\. DaVinci Resolve
![DaVinci Resolve 视频编辑器][1]
### 8、 DaVinci Resolve
![DaVinci Resolve 视频编辑器][41]
@ -375,10 +321,10 @@ DaVinci Resolve 不是常规的视频编辑器。它是一个成熟的编辑工
DaVinci Resolve 不开源。类似于 LightWorks它也为 Linux 提供一个免费版本。专业版售价是 $300。
#### DaVinci Resolve 特性
DaVinci Resolve 特性
* 高性能播放引擎
* 支持所有类型的编辑类型,如覆盖,插入,波纹覆盖,替换,适合填充,末尾追加
* 支持所有类型的编辑类型,如覆盖、插入、波纹覆盖、替换、适合填充、末尾追加
* 高级修剪
* 音频叠加
* Multicam Editing 可实现实时编辑来自多个摄像机的镜头
@ -387,60 +333,48 @@ DaVinci Resolve 不开源。类似于 LightWorks它也为 Linux 提供一个
* 时间轴曲线编辑器
* VFX 的非线性编辑
优点:
#### 优点
* 跨平台
* 专业级视频编辑器
#### 缺点
缺点:
* 不适用于通用视频编辑
* 不开源
* 免费版本中有些功能无法使用
#### 安装 DaVinci Resolve
你可以从[这个页面][42]下载 DaVinci Resolve。哪怕仅仅是下载免费版你也需要注册。
### 9\. VidCutter
![VidCutter Linux 上的视频编辑器][1]
### 9、 VidCutter
![VidCutter Linux 上的视频编辑器][43]
不像这篇文章讨论的其他视频编辑器,[VidCutter][44] 非常简单。除了分割和合并视频之外,它没有其他太多功能。但有时你正好需要 VidCutter 提供的这些功能。
#### VidCutter 特性
VidCutter 特性
* 适用于LinuxWindows 和 MacOS 的跨平台应用程序
* 支持绝大多数常见视频格式例如AVIMP4MPEG 1/2WMVMP3MOV3GPFLV 等等
  * 适用于 Linux、Windows 和 MacOS 的跨平台应用程序
* 支持绝大多数常见视频格式例如AVI、MP4、MPEG 1/2、WMV、MP3、MOV、3GP、FLV 等等
* 界面简单
* 修剪和合并视频,仅此而已
#### 优点
优点:
* 跨平台
* 很适合做简单的视频分割和合并
#### 缺点
缺点:
* 不适合用于通用视频编辑
* 经常崩溃
#### 安装 VidCutter
如果你使用的是基于 Ubuntu 的 Linux 发行版,你可以使用这个官方 PPA译者注PPA个人软件包档案PersonalPackageArchives
如果你使用的是基于 Ubuntu 的 Linux 发行版,你可以使用这个官方 PPALCTT 译注PPA个人软件包档案Personal Package Archives
```
sudo add-apt-repository ppa:ozmartian/apps
sudo apt-get update
@ -457,17 +391,17 @@ Arch Linux 用户可以轻松的使用 AUR 安装它。对于其他 Linux 发行
如果你需要的不止这些,**OpenShot** 或者 **Kdenlive** 是不错的选择。它们适合标准配置的系统,也适用于初学者。
如果你拥有一台高端计算机并且需要高级功能,可以使用 **Lightworks** 或者 **DaVinci Resolve**。如果你在寻找更高级的工具用于制作 3D 作品,If you are looking for more advanced features for 3D works,**Blender** 就得到了你的支持
如果你拥有一台高端计算机并且需要高级功能,可以使用 **Lightworks** 或者 **DaVinci Resolve**。如果你在寻找更高级的工具用于制作 3D 作品,**Blender** 肯定能帮上你的忙。
这就是关于 **Linux 上最好的视频编辑软件**我所能表达的全部内容像UbuntuLinux MintElementary以及其他 Linux 发行版。向我们分享你最喜欢的视频编辑器。
以上就是关于 Ubuntu、Linux Mint、Elementary 以及其它 Linux 发行版上**最好的视频编辑软件**的全部内容。向我们分享你最喜欢的视频编辑器吧。
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[It'S Foss Team][a]
作者:[itsfoss][a]
译者:[fuowang](https://github.com/fuowang)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,19 +1,20 @@
在 Linux 上使用 systemd 设置定时器
======
> 学习使用 systemd 创建启动你的游戏服务器的定时器。
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/clock-650753_1920.jpg?itok=RiRyCbAP)
之前,我们看到了如何[手动的][1]、[在开机与关机时][2]、[在启用某个设备时][3]、[在文件系统发生改变时][4]启用与禁用 systemd 服务。
之前,我们看到了如何[手动的][1]、[在开机与关机时][2]、[在启用某个设备时][3]、[在文件系统发生改变时][4] 启用与禁用 systemd 服务。
定时器增加了另一种启动服务的方式,基于...时间。尽管与定时任务很相似,但 systemd 定时器稍微地灵活一些。让我们看看它是怎么工作的。
定时器增加了另一种启动服务的方式,基于……时间。尽管与定时任务很相似,但 systemd 定时器稍微地灵活一些。让我们看看它是怎么工作的。
### “定时运行”
让我们展开[本系列前两篇文章][2]中[你所设置的 ][1] [Minetest][5] 服务器作为如何使用定时器单元的第一个例子。如果你还没有读过那几篇文章,可以现在去看看。
让我们展开[本系列前两篇文章][2]中[你所设置的][1] [Minetest][5] 服务器作为如何使用定时器单元的第一个例子。如果你还没有读过那几篇文章,可以现在去看看。
你将通过创建一个定时器来改进 Minetest 服务器,使得在定时器启动 1 分钟后运行游戏服务器而不是立即运行。这样做的原因可能是,在启动之前可能会用到其他的服务,例如发邮件给其他玩家告诉他们游戏已经准备就绪,你要确保其他的服务(例如网络)在开始前完全启动并运行。
你将通过创建一个定时器来“改进” Minetest 服务器,使得在服务器启动 1 分钟后运行游戏服务器而不是立即运行。这样做的原因可能是,在启动之前可能会用到其他的服务,例如发邮件给其他玩家告诉他们游戏已经准备就绪,你要确保其他的服务(例如网络)在开始前完全启动并运行。
跳到最底下,你的 `_minetest.timer_` 单元看起来就像这样:
最终,你的 `minetest.timer` 单元看起来就像这样:
```
# minetest.timer
@ -26,23 +27,22 @@ Unit=minetest.service
[Install]
WantedBy=basic.target
```
一点也不难吧。
通常,开头是 `[Unit]` 和一段描述单元作用的信息,这儿没什么新东西。`[Timer]` 这一节是新出现的,但它的作用不言自明:它包含了何时启动服务,启动哪个服务的信息。在这个例子当中,`OnBootSec` 是告诉 systemd 在系统启动后运行服务的指令。
如以往一般,开头是 `[Unit]` 和一段描述单元作用的信息,这儿没什么新东西。`[Timer]` 这一节是新出现的,但它的作用不言自明:它包含了何时启动服务,启动哪个服务的信息。在这个例子当中,`OnBootSec` 是告诉 systemd 在系统启动后运行服务的指令。
其他的指令有:
* `OnActiveSec=`,告诉 systemd 在定时器启动后多长时间运行服务。
* `OnStartupSec=`,同样的,它告诉 systemd 在 systemd 进程启动后多长时间运行服务。
* `OnUnitActiveSec=`,告诉 systemd 在上次由定时器激活的服务启动后多长时间运行服务。
* `OnUnitInactiveSec=`,告诉 systemd 在上次由定时器激活的服务停用后多长时间运行服务。
* `OnActiveSec=`,告诉 systemd 在定时器启动后多长时间运行服务。
* `OnStartupSec=`,同样的,它告诉 systemd 在 systemd 进程启动后多长时间运行服务。
* `OnUnitActiveSec=`,告诉 systemd 在上次由定时器激活的服务启动后多长时间运行服务。
* `OnUnitInactiveSec=`,告诉 systemd 在上次由定时器激活的服务停用后多长时间运行服务。
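作为示意,把上面这些指令用在一个最小的定时器单元里大致是这样(单元名 `example` 与各时间取值均为假设的示例):

```
# example.timer仅为示意
[Unit]
Description=Run example.service after boot, then regularly

[Timer]
OnBootSec=5min
# 上次由定时器激活的服务启动 15 分钟后再次运行
OnUnitActiveSec=15min
Unit=example.service

[Install]
WantedBy=basic.target
```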
继续 `_minetest.timer_` 单元,`basic.target` 通常用作<ruby>后期引导服务<rt>late boot services</rt></ruby><ruby>同步点<rt>synchronization point</rt></ruby>。这就意味着它可以让 `_minetest.timer_` 单元运行在安装完<ruby>本地挂载点<rt>local mount points</rt></ruby>或交换设备,套接字、定时器、路径单元和其他基本的初始化进程之后。就像在[第二篇文章中 systemd 单元][2]里解释的那样,`_targets_` 就像<ruby>旧的运行等级<rt>old run levels</rt></ruby>,可以将你的计算机置于某个状态,或像这样告诉你的服务在达到某个状态后开始运行。
继续 `minetest.timer` 单元,`basic.target` 通常用作<ruby>后期引导服务<rt>late boot services</rt></ruby><ruby>同步点<rt>synchronization point</rt></ruby>。这就意味着它可以让 `minetest.timer` 单元运行在安装完<ruby>本地挂载点<rt>local mount points</rt></ruby>或交换设备,套接字、定时器、路径单元和其他基本的初始化进程之后。就像在[第二篇文章中 systemd 单元][2]里解释的那样,`targets` 就像<ruby>旧的运行等级<rt>old run levels</rt></ruby>一样,可以将你的计算机置于某个状态,或像这样告诉你的服务在达到某个状态后开始运行。
在前两篇文章中你配置的`_minetest.service_`文件[最终][2]看起来就像这样:
在前两篇文章中你配置的 `minetest.service` 文件[最终][2]看起来就像这样:
```
# minetest.service
@ -64,10 +64,9 @@ ExecStop= /bin/kill -2 $MAINPID
[Install]
WantedBy= multi-user.target
```
这儿没什么需要修改的。但是你需要将 `_mtsendmail.sh_`(发送你的 email 的脚本)从:
这儿没什么需要修改的。但是你需要将 `mtsendmail.sh`(发送你的 email 的脚本)从:
```
#!/bin/bash
@ -75,7 +74,6 @@ WantedBy= multi-user.target
sleep 20
echo $1 | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
sleep 10
```
改成:
@ -84,40 +82,37 @@ sleep 10
#!/bin/bash
# mtsendmail.sh
echo $1 | mutt -F /home/paul/.muttrc -s "$2" pbrown@mykolab.com
```
你做的事是去除掉 Bash 脚本中那些蹩脚的停顿。Systemd 现在正在等待。
你做的事是去除掉 Bash 脚本中那些蹩脚的停顿。现在改由 systemd 来等待。
### 让它运行起来
确保一切运作正常,禁用 `_minetest.service_`
确保一切运作正常,禁用 `minetest.service`
```
sudo systemctl disable minetest
```
这使得系统启动时它不会一同启动;然后,相反地,启用 `_minetest.timer_`
这使得系统启动时它不会一同启动;然后,相反地,启用 `minetest.timer`
```
sudo systemctl enable minetest.timer
```
现在你就可以重启服务器了,当运行`sudo journalctl -u minetest.*`后,你就会看到 `_minetest.timer_` 单元执行后大约一分钟,`_minetest.service_` 单元开始运行。
现在你就可以重启服务器了,当运行 `sudo journalctl -u minetest.*` 后,你就会看到 `minetest.timer` 单元执行后大约一分钟,`minetest.service` 单元开始运行。
![minetest timer][7]
图 1minetest.timer 运行大约 1 分钟后 minetest.service 开始运行
[经许可使用][8]
*图 1minetest.timer 运行大约 1 分钟后 minetest.service 开始运行*
### 时间的问题
`_minetest.timer_` 在 systemd 的日志里显示的启动时间为 09:08:33 而 `_minetest.service` 启动时间是 09:09:18它们之间少于 1 分钟,关于这件事有几点需要说明一下:首先,请记住我们说过 `OnBootSec=` 指令是从引导完成后开始计算服务启动的时间。当 `_minetest.timer_` 的时间到来时,引导已经在几秒之前完成了。
`minetest.timer` 在 systemd 的日志里显示的启动时间为 09:08:33而 `minetest.service` 的启动时间是 09:09:18两者相差不到 1 分钟,关于这件事有几点需要说明一下:首先,请记住我们说过 `OnBootSec=` 指令是从引导完成后开始计算服务启动的时间。当 `minetest.timer` 的时间到来时,引导已经在几秒之前完成了。
另一件事情是 systemd 给自己设置了一个<ruby>误差幅度<rt>margin of error</rt></ruby>(默认是 1 分钟)来运行东西。这有助于在多个<ruby>资源密集型进程<rt>resource-intensive processes</rt></ruby>同时运行时分配负载:通过分配 1 分钟的时间systemd 可以等待某些进程关闭。这也意味着 `_minetest.service_`会在引导完成后的 1~2 分钟之间启动。但精确的时间谁也不知道。
另一件事情是 systemd 给自己设置了一个<ruby>误差幅度<rt>margin of error</rt></ruby>(默认是 1 分钟)来运行东西。这有助于在多个<ruby>资源密集型进程<rt>resource-intensive processes</rt></ruby>同时运行时分配负载:通过分配 1 分钟的时间systemd 可以等待某些进程关闭。这也意味着 `minetest.service` 会在引导完成后的 1~2 分钟之间启动。但精确的时间谁也不知道。
作为记录,你可以用 `AccuracySec=` 指令[修改误差幅度][9]。
顺便一提,你可以用 `AccuracySec=` 指令[修改误差幅度][9]。
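例如,在定时器单元的 `[Timer]` 一节中加上一行即可(下面的取值仅为示例):

```
[Timer]
OnBootSec=1min
# 把默认 1 分钟的误差幅度收紧到 1 秒
AccuracySec=1s
```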
你也可以检查系统上所有的定时器何时运行或是上次运行的时间:
@ -127,9 +122,7 @@ systemctl list-timers --all
![check timer][11]
图 2检查定时器何时运行或上次运行的时间
[经许可使用][8]
*图 2检查定时器何时运行或上次运行的时间*
最后一件值得思考的事就是你应该用怎样的格式去表示一段时间。Systemd 在这方面非常灵活:`2 h``2 hours` 或 `2hr` 都可以用来表示 2 个小时。对于“秒”,你可以用 `seconds``second``sec` 和 `s`。“分”也是同样的方式:`minutes``minute``min` 和 `m`。你可以检查 `man systemd.time` 来查看 systemd 能够理解的所有时间单元。
@ -148,13 +141,13 @@ via: https://www.linux.com/blog/learn/intro-to-linux/2018/7/setting-timer-system
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[LuuMing](https://github.com/LuuMing)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
[2]:https://www.linux.com/blog/learn/2018/5/systemd-services-beyond-starting-and-stopping
[1]:https://linux.cn/article-9700-1.html
[2]:https://linux.cn/article-9703-1.html
[3]:https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change
[4]:https://www.linux.com/blog/learn/intro-to-linux/2018/6/systemd-services-monitoring-files-and-directories
[5]:https://www.minetest.net/

View File

@ -0,0 +1,103 @@
10 个最值得关注的树莓派博客
======
> 如果你正在计划你的下一个树莓派项目,那么这些博客或许有帮助。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA)
网上有很多很棒的树莓派爱好者网站、教程、代码仓库、YouTube 频道和其他资源。以下是我最喜欢的十大树莓派博客,排名不分先后。
### 1、Raspberry Pi Spy
树莓派粉丝 Matt Hawkins 从很早开始就在他的网站 [Raspberry Pi Spy][4] 上撰写了大量全面且信息丰富的教程。我从这个网站上直接学到了很多东西,而且在众多主题上Matt 似乎总是第一个撰写相关内容的人。在我学习使用树莓派的前三年里,我多次在这个网站得到帮助。
值得庆幸的是,这个不断采用新技术的网站仍然很强大。我希望看到它继续存在下去,让新社区成员在需要时得到帮助。
### 2、Adafruit
[Adafruit][1] 是硬件黑客中知名品牌之一。该公司制作和销售漂亮的硬件,并提供由员工、社区成员,甚至 Lady Ada 女士自己编写的优秀教程。
除了网上商店Adafruit 还经营一个博客,这个博客充满了来自世界各地的精彩内容。在博客上可以查看树莓派的类别,特别是在工作日的最后一天,会在 Adafruit Towers 举办名为 [Friday is Pi Day][1] 的活动。
### 3、Recantha 的 Raspberry Pi Pod
Mike HorneRecantha是英国一位重要的树莓派社区成员负责 [CamJam 和 Potton Pi & Pint][2](剑桥的两个树莓派社团)以及 [Pi Wars][3] 一年一度的树莓派机器人竞赛。他为其他人建立树莓派社团提供建议并且总是有时间帮助初学者。Horne 和他的共同组织者 Tim Richardson 一起开发了 CamJam Edu Kit (一系列小巧且价格合理的套件,适合初学者使用 Python 学习物理计算)。
除此之外,他还运营着 [Pi Pod][18],这是一个包含了世界各地树莓派相关内容的博客。它可能是这个列表中更新最频繁的树莓派博客,所以这是一个把握树莓派社区动向的极好方式。
### 4、Raspberry Pi 官方博客
必须提一下[树莓派基金会][19]的官方博客,这个博客涵盖了基金会的硬件、软件、教育、社区、慈善和青年编码俱乐部的一系列内容。博客上的大型主题是家庭数字化、教育授权,以及硬件版本和软件更新的官方新闻。
该博客自 [2011 年][5] 运行至今,并提供了自那时以来所有 1800 多个帖子的[存档][6]。你也可以在 Twitter 上关注 [@raspberrypi_otd][7],这是我用 [Python][8] 创建的机器人(教程在这里:[Opensource.com][9])。这个 Twitter 机器人会推送博客存档中过去几年同一天发布的树莓派帖子。
### 5、RasPi.tv
另一位开创性的树莓派社区成员是 Alex Eames通过他的博客和 YouTube 频道 [RasPi.tv][20],他很早就加入了树莓派社区。他的网站为很多创客项目提供高质量、精心制作的视频教程和书面指南。
Alex 的网站 [RasP.iO][10] 制作了一系列树莓派附加板和配件,包括方便的 GPIO 端口引脚、电路板测量尺等等。他的博客也拓展到了 [Arduino][11]、[WEMO][12] 以及其他小网站。
### 6、pyimagesearch
虽然不是严格的树莓派博客名称中的“py”是“Python”而不是“树莓派”但该网站的 [树莓派栏目][13] 有着大量内容。 Adrian Rosebrock 获得了计算机视觉和机器学习领域的博士学位,他的博客旨在分享他在学习和制作自己的计算机视觉项目时所学到的机器学习技巧。
如果你想使用树莓派的相机模块学习面部或物体识别来这个网站就对了。Adrian 在图像识别领域的深度学习和人工智能方面的知识与实际应用首屈一指,而且他编写了自己的项目,这样任何人都可以进行尝试。
### 7、Raspberry Pi Roundup
这个[博客][21]由英国官方树莓派经销商之一 The Pi Hut 进行维护,会有每周的树莓派新闻。这是另一个很好的资源,可以紧跟树莓派社区的最新资讯,而且之前的文章也值得回顾。
### 8、Dave Akerman
[Dave Akerman][22] 是研究高空热气球的一流专家,他分享使用树莓派以最低的成本进行热气球发射方面的知识和经验。他会在一张由热气球拍摄的平流层照片下面对本次发射进行评论,也会对个人发射树莓派热气球给出自己的建议。
查看 Dave 的[博客][22],了解精彩的临近空间摄影作品。
### 9、Pimoroni
[Pimoroni][23] 是一家世界知名的树莓派经销商,其总部位于英国谢菲尔德。这家经销商制作了著名的 [树莓派彩虹保护壳][14],并推出了许多极好的定制附加板和配件。
Pimoroni 的博客布局与其硬件设计和品牌推广一样精美,博文内容非常适合创客和业余爱好者在家进行创作,并且可以在有趣的 YouTube 频道 [Bilge Tank][15] 上找到。
### 10、Stuff About Code
Martin O'Hanlon 以树莓派社区成员的身份转为了树莓派基金会的员工,他起初出于乐趣在树莓派上开发“我的世界”作弊器,最近作为内容编辑加入了树莓派基金会。幸运的是Martin 的新工作并没有阻止他更新[博客][24]并与世界分享有益的趣闻。
除了“我的世界”的很多内容,你还可以在 Python 库 [Blue Dot][16] 和 [guizero][17] 中找到 Martin 的贡献,以及一些总结性的树莓派技巧。
------
via: https://opensource.com/article/18/8/top-10-raspberry-pi-blogs-follow
作者:[Ben Nuttall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[jlztan](https://github.com/jlztan)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[1]: https://blog.adafruit.com/category/raspberry-pi/
[2]: https://camjam.me/?page_id=753
[3]: https://piwars.org/
[4]: https://www.raspberrypi-spy.co.uk/
[5]: https://www.raspberrypi.org/blog/first-post/
[6]: https://www.raspberrypi.org/blog/archive/
[7]: https://twitter.com/raspberrypi_otd
[8]: https://github.com/bennuttall/rpi-otd-bot/blob/master/src/bot.py
[9]: https://opensource.com/article/17/8/raspberry-pi-twitter-bot
[10]: https://rasp.io/
[11]: https://www.arduino.cc/
[12]: http://community.wemo.com/
[13]: https://www.pyimagesearch.com/category/raspberry-pi/
[14]: https://shop.pimoroni.com/products/pibow-for-raspberry-pi-3-b-plus
[15]: https://www.youtube.com/channel/UCuiDNTaTdPTGZZzHm0iriGQ
[16]: https://bluedot.readthedocs.io/en/latest/#
[17]: https://lawsie.github.io/guizero/
[18]: https://www.recantha.co.uk/blog/
[19]: https://www.raspberrypi.org/blog/
[20]: https://rasp.tv/
[21]: https://thepihut.com/blogs/raspberry-pi-roundup
[22]: http://www.daveakerman.com/
[23]: https://blog.pimoroni.com/
[24]: https://www.stuffaboutcode.com/

View File

@ -1,13 +1,13 @@
Gifski 一个跨平台的高质量 GIF 编码器
Gifski一个跨平台的高质量 GIF 编码器
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/gifski-720x340.png)
作为一名文字工作者,我需要在我的文章中添加图片。有时为了更容易讲清楚某个概念,我还会添加视频或者 gif 动图,相比于文字,通过视频或者 gif 格式的输出,读者可以更容易地理解我的指导。前些天,我已经写了篇文章来介绍针对 Linux 的功能丰富的强大截屏工具 [**Flameshot**][1]。今天,我将向你展示如何从一段视频或者一些图片来制作高质量的 gif 动图。这个工具就是 **Gifski**,一个跨平台、开源、基于 **Pngquant** 的高质量命令行 GIF 编码器。
作为一名文字工作者,我需要在我的文章中添加图片。有时为了更容易讲清楚某个概念,我还会添加视频或者 gif 动图,相比于文字,通过视频或者 gif 格式的输出,读者可以更容易地理解我的指导。前些天,我已经写了篇文章来介绍针对 Linux 的功能丰富的强大截屏工具 [Flameshot][1]。今天,我将向你展示如何从一段视频或者一些图片来制作高质量的 gif 动图。这个工具就是 **Gifski**,一个跨平台、开源、基于 **Pngquant** 的高质量命令行 GIF 编码器。
对于那些好奇 pngquant 是什么的读者,简单来说 pngquant 是一个针对 PNG 图片的有损压缩命令行工具。相信我pngquant 是我使用过的最好的 PNG 压缩工具。它可以将 PNG 图片最高压缩 **70%**,在观感上几乎不损失图片的原有质量,并保留了所有的阿尔法透明度。经过压缩的图片可以在所有的网络浏览器和系统中使用。而 Gifski 是基于 pngquant 的,它使用 pngquant 的功能来创建高质量的 GIF 动图。Gifski 能够创建每帧包含上千种颜色的 GIF 动图。Gifski 也需要 **ffmpeg** 来将视频转换为 PNG 图片。
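举例来说,压缩单张 PNG 大致像下面这样(`--quality` 的取值范围只是示例;默认情况下 pngquant 会输出带 `-fs8.png` 后缀的新文件,而不覆盖原图):

```
$ pngquant --quality 60-80 image.png
```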
### **安装 Gifski**
### 安装 Gifski
首先需要确保你安装了 FFMpeg 和 Pngquant。
@ -15,31 +15,34 @@ FFmpeg 在大多数的 Linux 发行版的默认软件仓库中都可以获取到
- [在 Linux 中如何安装 FFmpeg](https://www.ostechnix.com/install-ffmpeg-linux/)
Pngquant 可以从 [**AUR**][2] 中获取到。要在基于 Arch 的系统安装它,使用任意一个 AUR 帮助程序即可,例如下面示例中的 [**Yay**][3]
Pngquant 可以从 [AUR][2] 中获取到。要在基于 Arch 的系统安装它,使用任意一个 AUR 帮助程序即可,例如下面示例中的 [Yay][3]
```
$ yay -S pngquant
```
在基于 Debian 的系统中,运行:
```
$ sudo apt install pngquant
```
假如在你使用的发行版中没有 pngquant你可以从源码编译并安装它。为此你还需要安装 **`libpng-dev`** 包。
假如在你使用的发行版中没有 pngquant你可以从源码编译并安装它。为此你还需要安装 `libpng-dev` 包。
```
$ git clone --recursive https://github.com/kornelski/pngquant.git
$ make
$ sudo make install
```
安装完上述依赖后,再安装 Gifski。假如你已经安装了 [**Rust**][4] 编程语言,你可以使用 **cargo** 来安装它:
安装完上述依赖后,再安装 Gifski。假如你已经安装了 [Rust][4] 编程语言,你可以使用 **cargo** 来安装它:
```
$ cargo install gifski
```
另外,你还可以使用 [**Linuxbrew**][5] 包管理器来安装它:
另外,你还可以使用 [Linuxbrew][5] 包管理器来安装它:
```
$ brew install gifski
```
@ -49,6 +52,7 @@ $ brew install gifski
### 使用 Gifski 来创建高质量的 GIF 动图
进入你保存 PNG 图片的目录,然后运行下面的命令来从这些图片创建 GIF 动图:
```
$ gifski -o file.gif *.png
```
@ -64,12 +68,13 @@ Gifski 还有其他的特性,例如:
* 以给定顺序来编码图片,而不是以排序的结果来编码
为了创建特定大小的 GIF 动图,例如宽为 800高为 400可以使用下面的命令
```
$ gifski -o file.gif -W 800 -H 400 *.png
```
你可以设定 GIF 动图在每秒钟展示多少帧,默认值是 **20**。为此,可以运行下面的命令:
```
$ gifski -o file.gif --fps 1 *.png
```
@ -77,42 +82,49 @@ $ gifski -o file.gif --fps 1 *.png
在上面的例子中,我指定每秒钟展示 1 帧。
我们还能够以特定质量1-100 范围内)来编码。显然,更低的质量将生成更小的文件,更高的质量将生成更大的 GIF 动图文件。
```
$ gifski -o file.gif --quality 50 *.png
```
当需要编码大量图片时Gifski 将会花费更多时间。如果想要编码过程加快到通常速度的 3 倍左右,可以运行:
```
$ gifski -o file.gif --fast *.png
```
请注意上面的命令产生的 GIF 动图文件将减少 10% 的质量并且文件大小也会更大。
请注意上面的命令产生的 GIF 动图文件将减少 10% 的质量,并且文件大小也会更大。
如果想让图片以某个给定的顺序(而不是通过排序)精确地被编码,可以使用 `--nosort` 选项。
如果想让图片以某个给定的顺序(而不是通过排序)精确地被编码,可以使用 **`--nosort`** 选项。
```
$ gifski -o file.gif --nosort *.png
```
假如你不想让 GIF 循环播放,只需要使用 **`--once`** 选项即可:
假如你不想让 GIF 循环播放,只需要使用 `--once` 选项即可:
```
$ gifski -o file.gif --once *.png
```
**从视频创建 GIF 动图**
### 从视频创建 GIF 动图
有时或许你想从一个视频创建 GIF 动图。这也是可以做到的,这时候 FFmpeg 便能提供帮助。首先像下面这样,将视频转换成一系列的 PNG 图片:
```
$ ffmpeg -i video.mp4 frame%04d.png
```
上面的命令将会从 `video.mp4` 这个视频文件创建名为“frame0001.png”、“frame0002.png”、“frame0003.png”等等形式的图片其中的 `%04d` 代表帧数),然后将这些图片保存在当前的工作目录。
上面的命令将会从 `video.mp4` 这个视频文件创建名为 “frame0001.png”、“frame0002.png”、“frame0003.png” 等等形式的图片(其中的 `%04d` 代表帧数),然后将这些图片保存在当前的工作目录。
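顺带一提,假如视频较长、不想生成太多 PNG可以让 ffmpeg 以较低的帧率抽帧(下面的每秒 10 帧只是示例取值):

```
$ ffmpeg -i video.mp4 -r 10 frame%04d.png
```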
转换好图片后,只需要运行下面的命令便可以制作 GIF 动图了:
```
$ gifski -o file.gif *.png
```
想知晓更多的细节,请参考它的帮助部分:
```
$ gifski -h
```
@ -129,17 +141,17 @@ $ gifski -h
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/gifski-a-cross-platform-high-quality-gif-encoder/
via: https://www.ostechnix.com/gifski-a-cross-platform-high-quality-gif-encoder/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-screenshot-tool/
[1]: https://linux.cn/article-10180-1.html
[2]: https://aur.archlinux.org/packages/pngquant/
[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[4]: https://www.ostechnix.com/install-rust-programming-language-in-linux/

View File

@ -0,0 +1,68 @@
6 个用于写书的开源工具
======
> 这些多能、免费的工具可以满足你撰写、编辑和生成你自己的书籍的全部需求。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4)
我在 1993 年首次使用并贡献了免费和开源软件,从那时起我一直是一名开源软件的开发人员和布道者。尽管我被记住的一个项目是 [FreeDOS 项目][1],这是一个 DOS 操作系统的开源实现,但我已经编写或者贡献了数十个开源软件项目。
我最近写了一本关于 FreeDOS 的书。《[使用 FreeDOS][2]》是我为庆祝 FreeDOS 出现 24 周年而撰写的。它是关于安装和使用 FreeDOS、关于我最喜欢的 DOS 程序,以及 DOS 命令行和 DOS 批处理编程的快速参考指南的集合。在一位出色的专业编辑的帮助下,我在过去的几个月里一直在编写这本书。
《使用 FreeDOS》 可在知识共享署名cc-by国际公共许可证下获得。你可以从 [FreeDOS 电子书][2]网站免费下载 EPUB 和 PDF 版本。(我也计划为那些喜欢纸质的人提供印刷版本。)
这本书几乎完全是用开源软件制作的。我想分享一下对用来创建、编辑和生成《使用 FreeDOS》的工具的看法。
### Google 文档
[Google 文档][3]是我使用的唯一不是开源软件的工具。我将我的第一份草稿上传到 Google 文档,这样我就能与编辑器进行协作。我确信有开源协作工具,但 Google 文档能够让两个人同时编辑同一个文档、发表评论、编辑建议和更改跟踪 —— 更不用说它使用段落样式和能够下载完成的文档 —— 这使其成为编辑过程中有价值的一部分。
### LibreOffice
我开始使用的是 [LibreOffice][4] 6.0,但我最终使用 LibreOffice 6.1 完成了这本书。我喜欢 LibreOffice 对样式的丰富支持。段落样式可以轻松地为标题、页眉、正文、示例代码和其他文本应用样式。字符样式允许我修改段落中文本的外观,例如内联示例代码或用不同的样式代表文件名。图形样式让我可以将某些样式应用于截图和其他图像。页面样式允许我轻松修改页面的布局和外观。
### GIMP
我的书包括很多 DOS 程序截图、网站截图和 FreeDOS 的 logo。我用 [GIMP][5] 修改这本书的图像。通常,只是裁剪或调整图像大小,但在我准备本书的印刷版时,我使用 GIMP 创建了一些更适于打印布局的图像。
### Inkscape
大多数 FreeDOS 的 logo 和小鱼吉祥物都是 SVG 格式,我使用 [Inkscape][6] 来调整它们。在准备电子书的 PDF 版本时,我想在页面顶部放置一个简单的蓝色横幅,角落里有 FreeDOS 的 logo。实验后我发现在 Inkscape 中创建一个我想要的横幅 SVG 图案更容易,然后我将其粘贴到页眉中。
### ImageMagick
虽然使用 GIMP 来完成这项工作也很好,但有时在一组图像上运行 [ImageMagick][7] 命令会更快,例如转换为 PNG 格式或调整图像大小。
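举个例子,用 ImageMagick 自带的 `mogrify` 就能把一批截图批量转换格式并统一缩放(文件扩展名和宽度取值仅为示例):

```
$ mogrify -format png -resize 800 *.tiff
```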
### Sigil
LibreOffice 可以直接导出到 EPUB 格式,但它不是个好的转换器。我没有尝试使用 LibreOffice 6.1 创建 EPUB但在 LibreOffice 6.0 中没有包含我的图像。它还以奇怪的方式添加了样式。我使用 [Sigil][8] 来调整 EPUB 并使一切看起来正常。Sigil 甚至还有预览功能,因此你可以看到 EPUB 的样子。
### QEMU
因为本书是关于安装和运行 FreeDOS 的,所以我需要实际运行 FreeDOS。你可以在任何 PC 模拟器中启动 FreeDOS包括 VirtualBox、QEMU、GNOME Boxes、PCem 和 Bochs。但我喜欢 [QEMU][9] 的简单性。QEMU 控制台允许你以 PPM 格式转储屏幕,这非常适合抓取截图来包含在书中。
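作为示意,启动一个 FreeDOS 镜像并抓屏大致是这样(镜像文件名和内存大小为假设的示例;`screendump` 需要在 QEMU 监视器控制台中执行,图形窗口下一般可用 Ctrl-Alt-2 切换过去):

```
$ qemu-system-i386 -m 16 -hda freedos.img
(qemu) screendump screen01.ppm
```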
当然,我不得不提到在 [Linux][11] 上运行 [GNOME][10]。我使用 Linux 的 [Fedora][12] 发行版。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/writing-book-open-source-tools
作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[1]: http://www.freedos.org/
[2]: http://www.freedos.org/ebook/
[3]: https://www.google.com/docs/about/
[4]: https://www.libreoffice.org/
[5]: https://www.gimp.org/
[6]: https://inkscape.org/
[7]: https://www.imagemagick.org/
[8]: https://sigil-ebook.com/
[9]: https://www.qemu.org/
[10]: https://www.gnome.org/
[11]: https://www.kernel.org/
[12]: https://getfedora.org/

View File

@ -1,10 +1,11 @@
正确选择开源数据库的 5 个技巧
======
> 对关键应用的选择不容许丝毫错误。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
你或许会遇到需要选择合的开源数据库的情况。但这无论对于开源方面的老手或是新手,都是一项艰巨的任务。
你或许会遇到需要选择合适的开源数据库的情况。但这无论对于开源方面的老手或是新手,都是一项艰巨的任务。
在过去的几年中,采用开源技术的企业越来越多。面对这样的趋势,众多开源应用公司都纷纷承诺自己提供的解决方案能够解决各种问题、适应各种负载。但这些承诺不能轻信,在开源应用上的选择是重要而艰难的,尤其是数据库这种关键的应用。
@ -20,7 +21,7 @@
### 了解你的工作负载
尽管开源数据库技术的功能越来越丰富,但这些新加入的功能都不太具有普适性。譬如 MongoDB 新增了事务的支持、MySQL 新增了 JSON 存储的功能等等。目前开源数据库的普遍趋势是不断加入新的功能,但很多人的误区却在于没有选择最适合的工具来完成自己的工作——这样的人或许是一个自大的开发者,又或许是一个视野狭窄的主管——最终导致公司业务上的损失。最致命的是,在业务初期,使用了不适合的工具往往也可以顺利地完成任务,但随着业务的增长,很快就会到达瓶颈,尽管这个时候还可以替换更合适的工具,但成本就比较高了。
尽管开源数据库技术的功能越来越丰富,但这些新加入的功能都不太具有普适性。譬如 MongoDB 新增了事务的支持、MySQL 新增了 JSON 存储的功能等等。目前开源数据库的普遍趋势是不断加入新的功能,但很多人的误区却在于没有选择最适合的工具来完成自己的工作 —— 这样的人或许是一个自大的开发者,又或许是一个视野狭窄的主管 —— 最终导致公司业务上的损失。最致命的是,在业务初期,使用了不适合的工具往往也可以顺利地完成任务,但随着业务的增长,很快就会到达瓶颈,尽管这个时候还可以替换更合适的工具,但成本就比较高了。
例如,如果你需要的是数据分析仓库,关系数据库可能不是一个适合的选择;如果你处理事务的应用要求严格的数据完整性和一致性,就不要考虑 NoSQL 了。
@ -30,7 +31,7 @@
Battery Ventures 是一家专注于技术的投资公司,最近推出了一个用于跟踪最受欢迎开源项目的 [BOSS 指数][2] 。它提供了对一些被广泛采用的开源项目和活跃的开源项目的详细情况。其中,数据库技术毫无悬念地占据了榜单的主导地位,在前十位之中占了一半。这个 BOSS 指数对于刚接触开源数据库领域的人来说,这是一个很好的切入点。当然,开源技术的提供者也会针对很多常见的典型问题给出对应的解决方案。
我认为,你想要做的事情很可能已经有人解决过了。即使这些先行者的解决方案不一定完全契合你的需求,但也可以从他们成功或失败案例中根据你自己的需求修改得出合适的解决方案。
我认为,你想要做的事情很可能已经有人解决过了。即使这些先行者的解决方案不一定完全契合你的需求,但也可以从他们成功或失败的案例中,根据你自己的需求修改得出合适的解决方案。
如果你采用了一个最前沿的技术,这就是你探索的好机会了。如果你的工作负载刚好适合新的开源数据库技术,放胆去尝试吧。第一个吃螃蟹的人总是会得到意外的挑战和收获。
@ -46,7 +47,7 @@ Battery Ventures 是一家专注于技术的投资公司,最近推出了一个
### 有疑问,找专家
如果你仍然不确定数据库选择是否合适,可以在论坛、网站或者与软件的提供者处商讨。研究各种开源数据库是否满足自己的需求是一件很有意义的事,因为总会发现你从不知道的技术。而开源社区就是分享这些信息的地方。
如果你仍然不确定数据库选择是否合适,可以在论坛、网站上或者与软件提供者商讨。研究各种开源数据库是否满足自己的需求是一件很有意义的事,因为总会发现你从不知道的技术。而开源社区就是分享这些信息的地方。
当你接触到开源软件和软件提供者时,有一件重要的事情需要注意。很多公司都有开放的核心业务模式,鼓励采用他们的数据库软件。你可以只接受他们的部分建议和指导,然后用你自己的能力去研究和探索替代方案。
@ -62,7 +63,7 @@ via: https://opensource.com/article/18/10/tips-choosing-right-open-source-databa
作者:[Barrett Chambers][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,37 +1,39 @@
四个开源的Android邮件客户端
四个开源的 Android 邮件客户端
======
Email 现在还没有绝迹,而且现在大部分邮件都来自于移动设备。
> Email 现在还没有绝迹,而且现在大部分邮件都来自于移动设备。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6)
现在一些年轻人正将邮件称之为“老年人的交流方式”,然而事实却是邮件绝对还没有消亡。虽然[协作工具][1]社交媒体,和短信很常用,但是它们还没做好取代邮件这种必要的商业(和社交)通信工具。
现在一些年轻人正将邮件称之为“老年人的交流方式”,然而事实却是邮件绝对还没有消亡。虽然[协作工具][1]、社交媒体和短信很常用,但是它们还没做好取代邮件这种必要的商业(和社交)通信工具的准备。
考虑到邮件还没有消失,并且(很多研究表明)人们都是在移动设备上阅读邮件,拥有一个好的移动邮件客户端就变得很关键。如果你是一个想使用开源的邮件客户端的 Android 用户,事情就变得有点棘手了。
我们提供了四个开源的 Android 邮件客户端供选择。其中两个可以通过 Android 官方应用商店 [Google Play][2] 下载。你也可以在 [Fossdroid][3] 或者 [F-Droid][4] 这些开源 Android 应用库中找到它们。(下方有每个应用的具体下载方式。)
### K-9 Mail
[K-9 Mail][5] 拥有几乎和 Android 一样长的历史——它起源于 Android 1.0 邮件客户端的一个补丁。它支持 IMAP 和 WebDAV、多用户、附件、emojis 和其他经典的邮件客户端功能。它的[用户文档][6]提供了关于安装、启动、安全、阅读和发送邮件等等的帮助。
[K-9 Mail][5] 拥有几乎和 Android 一样长的历史——它起源于 Android 1.0 邮件客户端的一个补丁。它支持 IMAP 和 WebDAV、多用户、附件、emoji 和其它经典的邮件客户端功能。它的[用户文档][6]提供了关于安装、启动、安全、阅读和发送邮件等等的帮助。
K-9 基于 [Apache 2.0][7] 协议开源,[源码][8]可以从 GitHub 上获得。应用可以从 [Google Play][9]、[Amazon][10] 和 [F-Droid][11] 上下载。
### p≡p
正如它的全称”Pretty Easy Privacy”说的那样[p≡p][12] 主要关注于隐私和安全通信。它提供自动的、端到端的邮件和附件加密但要求你的收件人也要能够加密邮件——否则p≡p会警告你的邮件将不加密发出
正如它的全称 “Pretty Easy Privacy” 说的那样,[p≡p][12] 主要关注于隐私和安全通信。它提供自动的、端到端的邮件和附件加密(但要求你的收件人也要能够加密邮件——否则p≡p 会警告你邮件将不加密发出)。
你可以从 GitLab 获得[源码][13](基于 [GPLv3][14] 协议),并且可以从应用的官网上找到相应的[文档][15]。应用可以在 [Fossdroid][16] 上免费下载或者在 [Google Play][17] 上支付一点儿象征性的费用下载。
### InboxPager
[InboxPager][18] 允许你通过 SSL/TLS 协议收发邮件信息,这也表明如果你的邮件提供商(比如 Gmail )没有默认开启这个功能的话,你可能要做一些设置。(幸运的是, InboxPager 提供了 Gmail的[设置教程][19]。)它同时也支持通过 OpenKeychain 应用进行 OpenPGP 密。
[InboxPager][18] 允许你通过 SSL/TLS 协议收发邮件信息,这也表明如果你的邮件提供商(比如 Gmail没有默认开启这个功能的话你可能要做一些设置。幸运的是InboxPager 提供了 Gmail 的[设置教程][19]。)它同时也支持通过 OpenKeychain 应用进行 OpenPGP 加密。
InboxPager 基于 [GPLv3][20] 协议,其源码可从 GitHub 获得,并且应用可以从 [F-Droid][21] 下载。
### FairEmail
[FairEmail][22] 是一个极简的邮件客户端,它的功能集中于读写信息,没有任何多余的可能拖慢客户端的功能。它支持多个帐号和用户,消息线程,加密等等。
[FairEmail][22] 是一个极简的邮件客户端,它的功能集中于读写信息,没有任何多余的可能拖慢客户端的功能。它支持多个帐号和用户、消息线索、加密等等。
它基于 [GPLv3][23] 协议开源,[源码][24]可以从GitHub上获得。你可以在 [Fossdroid][25] 上下载 FairEamil 对 Google Play 版本感兴趣的人可以从 [testing the software][26] 获得应用。
它基于 [GPLv3][23] 协议开源,[源码][24]可以从 GitHub 上获得。你可以在 [Fossdroid][25] 上下载 FairEmail对 Google Play 版本感兴趣的人可以通过[软件测试][26]渠道获得应用。
肯定还有更多的开源 Android 客户端(或者上述软件的加强版本)——活跃的开发者们可以关注一下。如果你知道还有哪些优秀的应用,可以在评论里和我们分享。
@ -42,7 +44,7 @@ via: https://opensource.com/article/18/10/open-source-android-email-clients
作者:[Opensource.com][a]
选题:[lujun9972][b]
译者:[zianglei][c]
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,3 +1,5 @@
thecyanbird translating
The Rise and Rise of JSON
======
JSON has taken over the world. Today, when any two applications communicate with each other across the internet, odds are they do so using JSON. It has been adopted by all the big players: Of the ten most popular web APIs, a list consisting mostly of APIs offered by major companies like Google, Facebook, and Twitter, only one API exposes data in XML rather than JSON. Twitter, to take an illustrative example from that list, supported XML until 2013, when it released a new version of its API that dropped XML in favor of using JSON exclusively. JSON has also been widely adopted by the programming rank and file: According to Stack Overflow, a question and answer site for programmers, more questions are now asked about JSON than about any other data interchange format.

View File

@ -1,118 +0,0 @@
thecyanbird translating
Where Vim Came From
======
I recently stumbled across a file format known as Intel HEX. As far as I can gather, Intel HEX files (which use the `.hex` extension) are meant to make binary images less opaque by encoding them as lines of hexadecimal digits. Apparently they are used by people who program microcontrollers or need to burn data into ROM. In any case, when I opened up a HEX file in Vim for the first time, I discovered something shocking. Here was this file format that, at least to me, was deeply esoteric, but Vim already knew all about it. Each line of a HEX file is a record divided into different fields—Vim had gone ahead and colored each of the fields a different color. `set ft?` I asked, in awe. `filetype=hex`, Vim answered, triumphant.
Vim is everywhere. It is used by so many people that something like HEX file support shouldn't be a surprise. Vim comes pre-installed on Mac OS and has a large constituency in the Linux world. It is familiar even to people that hate it, because enough popular command line tools will throw users into Vim by default that the uninitiated getting trapped in Vim has become [a meme][1]. There are major websites, including Facebook, that will scroll down when you press the `j` key and up when you press the `k` key—the unlikely high-water mark of Vim's spread through digital culture.
And yet Vim is also a mystery. Unlike React, for example, which everyone knows is developed and maintained by Facebook, Vim has no obvious sponsor. Despite its ubiquity and importance, there doesn't seem to be any kind of committee or organization that makes decisions about Vim. You could spend several minutes poking around the [Vim website][2] without getting a better idea of who created Vim or why. If you launch Vim without giving it a file argument, then you will see Vim's startup message, which says that Vim is developed by “Bram Moolenaar et al.” But that doesn't tell you much. Who is Bram Moolenaar and who are his shadowy confederates?
Perhaps more importantly, while we're asking questions, why does exiting Vim involve typing `:wq`? Sure, it's a “write” operation followed by a “quit” operation, but that is not a particularly intuitive convention. Who decided that copying text should instead be called “yanking”? Why is `:%s/foo/bar/gc` short for “find and replace”? Vim's idiosyncrasies seem too arbitrary to have been made up, but then where did they come from?
The answer, as is so often the case, begins with that ancient crucible of computing, Bell Labs. In some sense, Vim is only the latest iteration of a piece of software—call it the “wq text editor”—that has been continuously developed and improved since the dawn of the Unix epoch.
### Ken Thompson Writes a Line Editor
In 1966, Bell Labs hired Ken Thompson. Thompson had just completed a Master's degree in Electrical Engineering and Computer Science at the University of California, Berkeley. While there, he had used a text editor called QED, written for the Berkeley Timesharing System between 1965 and 1966. One of the first things Thompson did after arriving at Bell Labs was rewrite QED for the MIT Compatible Time-Sharing System. He would later write another version of QED for the Multics project. Along the way, he expanded the program so that users could search for lines in a file and make substitutions using regular expressions.
The Multics project, which like the Berkeley Timesharing System sought to create a commercially viable time-sharing operating system, was a partnership between MIT, General Electric, and Bell Labs. AT&T eventually decided the project was going nowhere and pulled out. Thompson and fellow Bell Labs researcher Dennis Ritchie, now without access to a time-sharing system and missing the “feel of interactive computing” that such systems offered, set about creating their own version, which would eventually be known as Unix. In August 1969, while his wife and young son were away on vacation in California, Thompson put together the basic components of the new system, allocating “a week each to the operating system, the shell, the editor, and the assembler.”
The editor would be called `ed`. It was based on QED but was not an exact re-implementation. Thompson decided to ditch certain QED features. Regular expression support was pared back so that only relatively simple regular expressions would be understood. QED allowed users to edit several files at once by opening multiple buffers, but `ed` would only work with one buffer at a time. And whereas QED could execute a buffer containing commands, `ed` would do no such thing. These simplifications may have been called for. Dennis Ritchie has said that going without QED's advanced regular expressions was “not much of a loss.”
`ed` is now a part of the POSIX specification, so if you have a POSIX-compliant system, you will have it installed on your computer. It's worth playing around with, because many of the `ed` commands are today a part of Vim. In order to write a buffer to disk, for example, you have to use the `w` command. In order to quit the editor, you have to use the `q` command. These two commands can be specified on the same line at once—hence, `wq`. Like Vim, `ed` is a modal editor; to enter input mode from command mode you would use the insert command (`i`), the append command (`a`), or the change command (`c`), depending on how you are trying to transform your text. `ed` also introduced the `s/foo/bar/g` syntax for finding and replacing, or “substituting,” text.
Given all these similarities, you might expect the average Vim user to have no trouble using `ed`. But `ed` is not at all like Vim in another important respect. `ed` is a true line editor. It was written and widely used in the days of the teletype printer. When Ken Thompson and Dennis Ritchie were hacking away at Unix, they looked like this:
![Ken Thompson interacting with a PDP-11 via teletype.][3]
`ed` doesn't allow you to edit lines in place among the other lines of the open buffer, or move a cursor around, because `ed` would have to reprint the entire file every time you made a change to it. There was no mechanism in 1969 for `ed` to “clear” the contents of the screen, because the screen was just a sheet of paper and everything that had already been output had been output in ink. When necessary, you can ask `ed` to print out a range of lines for you using the list command (`l`), but most of the time you are operating on text that you can't see. Using `ed` is thus a little like trying to find your way around a dark house with an underpowered flashlight. You can only see so much at once, so you have to try your best to remember where everything is.
Here's an example of an `ed` session. I've added comments (after the `#` character) explaining the purpose of each line, though if these were actually entered `ed` wouldn't recognize them as comments and would complain:
```
[sinclairtarget 09:49 ~]$ ed
i # Enter input mode
Hello world!
Isn't it a nice day?
. # Finish input
1,2l # List lines 1 to 2
Hello world!$
$
2d # Delete line 2
,l # List entire buffer
Hello world!$
Isn't it a nice day?$
s/nice/terrible/g # Substitute globally
,l
Hello world!$
Isn't it a terrible day?$
w foo.txt # Write to foo.txt
38 # (bytes written)
q # Quit
[sinclairtarget 10:50 ~]$ cat foo.txt
Hello world!
Isn't it a terrible day?
```
As you can see, `ed` is not an especially talkative program.
### Bill Joy Writes a Text Editor
`ed` worked well enough for Thompson and Ritchie. Others found it difficult to use and it acquired a reputation for being a particularly egregious example of Unix's hostility toward the novice. In 1975, a man named George Coulouris developed an improved version of `ed` on the Unix system installed at Queen Mary's College, London. Coulouris wrote his editor to take advantage of the video displays that he had available at Queen Mary's. Unlike `ed`, Coulouris' program allowed users to edit a single line in place on screen, navigating through the line keystroke by keystroke (imagine using Vim on one line at a time). Coulouris called his program `em`, or “editor for mortals,” which he had supposedly been inspired to do after Thompson paid a visit to Queen Mary's, saw the program Coulouris had built, and dismissed it, saying that he had no need to see the state of a file while editing it.
In 1976, Coulouris brought `em` with him to UC Berkeley, where he spent the summer as a visitor to the CS department. This was exactly ten years after Ken Thompson had left Berkeley to work at Bell Labs. At Berkeley, Coulouris met Bill Joy, a graduate student working on the Berkeley Software Distribution (BSD). Coulouris showed `em` to Joy, who, starting with Coulouris' source code, built out an improved version of `ed` called `ex`, for “extended `ed`.” Version 1.1 of `ex` was bundled with the first release of BSD Unix in 1978. `ex` was largely compatible with `ed`, but it added two more modes: an “open” mode, which enabled single-line editing like had been possible with `em`, and a “visual” mode, which took over the whole screen and enabled live editing of an entire file like we are used to today.
For the second release of BSD in 1979, an executable named `vi` was introduced that did little more than open `ex` in visual mode.
`ex`/`vi` (henceforth `vi`) established most of the conventions we now associate with Vim that weren't already a part of `ed`. The video terminal that Joy was using was a Lear Siegler ADM-3A, which had a keyboard with no cursor keys. Instead, arrows were painted on the `h`, `j`, `k`, and `l` keys, which is why Joy used those keys for cursor movement in `vi`. The escape key on the ADM-3A keyboard was also where today we would find the tab key, which explains how such a hard-to-reach key was ever assigned an operation as common as exiting a mode. The `:` character that prefixes commands also comes from `vi`, which in regular mode (i.e. the mode entered by running `ex`) used `:` as a prompt. This addressed a long-standing complaint about `ed`, which, once launched, greets users with utter silence. In visual mode, saving and quitting now involved typing the classic `:wq`. “Yanking” and “putting,” marks, and the `set` command for setting options were all part of the original `vi`. The features we use in the course of basic text editing today in Vim are largely `vi` features.
![A Lear Siegler ADM-3A keyboard.][4]
`vi` was the only text editor bundled with BSD Unix other than `ed`. At the time, Emacs could cost hundreds of dollars (this was before GNU Emacs), so `vi` became enormously popular. But `vi` was a direct descendant of `ed`, which meant that the source code could not be modified without an AT&T source license. This motivated several people to create open-source versions of `vi`. STEVIE (ST Editor for VI Enthusiasts) appeared in 1987, Elvis appeared in 1990, and `nvi` appeared in 1994. Some of these clones added extra features like syntax highlighting and split windows. Elvis in particular saw many of its features incorporated into Vim, since so many Elvis users pushed for their inclusion.
### Bram Moolenaar Writes Vim
“Vim”, which now abbreviates “Vi Improved”, originally stood for “Vi Imitation.” Like many of the other `vi` clones, Vim began as an attempt to replicate `vi` on a platform where it was not available. Bram Moolenaar, a Dutch software engineer working for a photocopier company in Venlo, the Netherlands, wanted something like `vi` for his brand-new Amiga 2000. Moolenaar had grown used to using `vi` on the Unix systems at his university and it was now “in his fingers.” So in 1988, using the existing STEVIE `vi` clone as a starting point, Moolenaar began work on Vim.
Moolenaar had access to STEVIE because STEVIE had previously appeared on something called a Fred Fish disk. Fred Fish was an American programmer that mailed out a floppy disk every month with a curated selection of the best open-source software available for the Amiga platform. Anyone could request a disk for nothing more than the price of postage. Several versions of STEVIE were released on Fred Fish disks. The version that Moolenaar used had been released on Fred Fish disk 256. (Disappointingly, Fred Fish disks seem to have nothing to do with [Freddi Fish][5].)
Moolenaar liked STEVIE but quickly noticed that there were many `vi` commands missing. So, for the first release of Vim, Moolenaar made `vi` compatibility his priority. Someone else had written a series of `vi` macros that, when run through a properly `vi`-compatible editor, could solve a [randomly generated maze][6]. Moolenaar was able to get these macros working in Vim. In 1991, Vim was released for the first time on Fred Fish disk 591 as “Vi Imitation.” Moolenaar had added some features (including multi-level undo and a “quickfix” mode for compiler errors) that meant that Vim had surpassed `vi`. But Vim would remain “Vi Imitation” until Vim 2.0, released in 1993 via FTP.
Moolenaar, with the occasional help of various internet collaborators, added features to Vim at a steady clip. Vim 2.0 introduced support for the `wrap` option and for horizontal scrolling through long lines of text. Vim 3.0 added support for split windows and buffers, a feature inspired by the `vi` clone `nvi`. Vim also now saved each buffer to a swap file, so that edited text could survive a crash. Vimscript made its first appearance in Vim 5.0, along with support for syntax highlighting. All the while, Vim's popularity was growing. It was ported to MS-DOS, to Windows, to Mac, and even to Unix, where it competed with the original `vi`.
In 2006, Vim was voted the most popular editor among Linux Journal readers. Today, according to Stack Overflow's 2018 Developer Survey, Vim is the most popular text-mode (i.e. terminal emulator) editor, used by 25.8% of all software developers (and 40% of Sysadmin/DevOps people). For a while, during the late 1980s and throughout the 1990s, programmers waged the “Editor Wars,” which pitted Emacs users against `vi` (and eventually Vim) users. While Emacs certainly still has a following, some people think that the Editor Wars are over and that Vim won. The 2018 Stack Overflow Developer Survey suggests that this is true; only 4.1% of respondents used Emacs.
How did Vim become so successful? Obviously people like the features that Vim has to offer. But I would argue that the long history behind Vim illustrates that it had more advantages than just its feature set. Vim's codebase dates back only to 1988, when Moolenaar began working on it. The “wq text editor,” on the other hand—the broader vision of how a Unix-y text editor should work—goes back a half-century. The “wq text editor” had a few different concrete expressions, but thanks in part to the unusual attention paid to backward compatibility by both Bill Joy and Bram Moolenaar, good ideas accumulated gradually over time. The “wq text editor,” in that sense, is one of the longest-running and most successful open-source projects, having enjoyed contributions from some of the greatest minds in the computing world. I don't think the “startup-company-throws-away all-precedents-and-creates-disruptive-new-software” approach to development is necessarily bad, but Vim is a reminder that the collaborative and incremental approach can also yield wonders.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][7] on Twitter or subscribe to the [RSS feed][8] to make sure you know when a new post is out.
Previously on TwoBitHistory…
> New post! This time we're taking a look at the Altair 8800, the very first home computer, and how to simulate it on your modern PC.<https://t.co/s2sB5njrkd>
>
> — TwoBitHistory (@TwoBitHistory) [July 22, 2018][9]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2018/08/05/where-vim-came-from.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://stackoverflow.blog/wp-content/uploads/2017/05/meme.jpeg
[2]: https://www.vim.org/
[3]: https://upload.wikimedia.org/wikipedia/commons/8/8f/Ken_Thompson_%28sitting%29_and_Dennis_Ritchie_at_PDP-11_%282876612463%29.jpg
[4]: https://vintagecomputer.ca/wp-content/uploads/2015/01/LSI-ADM3A-full-keyboard.jpg
[5]: https://en.wikipedia.org/wiki/Freddi_Fish
[6]: https://github.com/isaacs/.vim/tree/master/macros/maze
[7]: https://twitter.com/TwoBitHistory
[8]: https://twobithistory.org/feed.xml
[9]: https://twitter.com/TwoBitHistory/status/1021058552352387074?ref_src=twsrc%5Etfw

View File

@ -0,0 +1,275 @@
Systemd Services: Reacting to Change
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/webcam.png?itok=zzYUs5VK)
[I have one of these Compute Sticks][1] (Figure 1) and use it as an all-purpose server. It is inconspicuous and silent and, as it is built around an x86 architecture, I don't have problems getting it to work with drivers for my printer, and that's what it does most days: it interfaces with the shared printer and scanner in my living room.
![ComputeStick][3]
An Intel ComputeStick. Euro coin for size.
[Used with permission][4]
Most of the time it is idle, especially when we are out, so I thought it would be good idea to use it as a surveillance system. The device doesn't come with its own camera, and it wouldn't need to be spying all the time. I also didn't want to have to start the image capturing by hand because this would mean having to log into the Stick using SSH and fire up the process by writing commands in the shell before rushing out the door.
So I thought that the thing to do would be to grab a USB webcam and have the surveillance system fire up automatically just by plugging it in. Bonus points if the surveillance system fired up also after the Stick rebooted, and it found that the camera was connected.
In prior installments, we saw that [systemd services can be started or stopped by hand][5] or [when certain conditions are met][6]. Those conditions are not limited to when the OS reaches a certain state in the boot up or powerdown sequence but can also be when you plug in new hardware or when things change in the filesystem. You do that by combining a Udev rule with a systemd service.
### Hotplugging with Udev
Udev rules live in the _/etc/udev/rules.d_ directory and are usually a single line containing _conditions_ and _assignments_ that lead to an _action_.
That was a bit cryptic. Let's try again:
Typically, in a Udev rule, you tell systemd what to look for when a device is connected. For example, you may want to check if the make and model of a device you just plugged in correspond to the make and model of the device you are telling Udev to wait for. Those are the _conditions_ mentioned earlier.
Then you may want to change some stuff so you can use the device easily later. An example of that would be to change the read and write permissions to a device: if you plug in a USB printer, you're going to want users to be able to read information from the printer (the user's printing app would want to know the model, make, and whether it is ready to receive print jobs or not) and write to it, that is, send stuff to print. Changing the read and write permissions for a device is done using one of the _assignments_ you read about earlier.
Finally, you will probably want the system to do something when the conditions mentioned above are met, like start a backup application to copy important files when a certain external hard disk drive is plugged in. That is an example of an _action_ mentioned above.
With that in mind, ponder this:
```
ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207",
SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service"
```
The first part of the rule,
```
ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0",
ATTRS{idProduct}=="e207" [etc... ]
```
shows the conditions that the device has to meet before doing any of the other stuff you want the system to do. The device has to be added (`ACTION=="add"`) to the machine, it has to be integrated into the `video4linux` subsystem. To make sure the rule is applied only when the correct device is plugged in, you have to make sure Udev correctly identifies the manufacturer (`ATTRS{idVendor}=="03f0"`) and a model (`ATTRS{idProduct}=="e207"`) of the device.
In this case, we're talking about this device (Figure 2):
![webcam][8]
The HP webcam used in this experiment.
[Used with permission][4]
Notice how you use `==` to indicate that these are a logical operation. You would read the above snippet of the rule like this:
```
if the device is added and the device controlled by the video4linux subsystem
and the manufacturer of the device is 03f0 and the model is e207, then...
```
But where do you get all this information? Where do you find the action that triggers the event, the manufacturer, model, and so on? You will probably have to use several sources. The `IdVendor` and `idProduct` you can get by plugging the webcam into your machine and running `lsusb`:
```
lsusb
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 003: ID 03f0:e207 Hewlett-Packard
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 04f2:b1bb Chicony Electronics Co., Ltd
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```
The webcam I'm using is made by HP, and you can only see one HP device in the list above. The `ID` gives you the manufacturer and the model numbers separated by a colon (`:`). If you have more than one device by the same manufacturer and are not sure which is which, unplug the webcam, run `lsusb` again and check what's missing.
OR...
Unplug the webcam, wait a few seconds, run the command `udevadm monitor --environment` and then plug the webcam back in again. When you do that with the HP webcam, you get:
```
udevadm monitor --environment
UDEV [35776.495221] add /devices/pci0000:00/0000:00:1c.3/0000:04:00.0
  /usb3/3-1/3-1:1.0/input/input21/event11 (input)
.MM_USBIFNUM=00
ACTION=add
BACKSPACE=guess
DEVLINKS=/dev/input/by-path/pci-0000:04:00.0-usb-0:1:1.0-event
  /dev/input/by-id/usb-Hewlett_Packard_HP_Webcam_HD_2300-event-if00
DEVNAME=/dev/input/event11
DEVPATH=/devices/pci0000:00/0000:00:1c.3/0000:04:00.0/
  usb3/3-1/3-1:1.0/input/input21/event11
ID_BUS=usb
ID_INPUT=1
ID_INPUT_KEY=1
ID_MODEL=HP_Webcam_HD_2300
ID_MODEL_ENC=HP\x20Webcam\x20HD\x202300
ID_MODEL_ID=e207
ID_PATH=pci-0000:04:00.0-usb-0:1:1.0
ID_PATH_TAG=pci-0000_04_00_0-usb-0_1_1_0
ID_REVISION=1020
ID_SERIAL=Hewlett_Packard_HP_Webcam_HD_2300
ID_TYPE=video
ID_USB_DRIVER=uvcvideo
ID_USB_INTERFACES=:0e0100:0e0200:010100:010200:030000:
ID_USB_INTERFACE_NUM=00
ID_VENDOR=Hewlett_Packard
ID_VENDOR_ENC=Hewlett\x20Packard
ID_VENDOR_ID=03f0
LIBINPUT_DEVICE_GROUP=3/3f0/e207:usb-0000:04:00.0-1/button
MAJOR=13
MINOR=75
SEQNUM=3162
SUBSYSTEM=input
USEC_INITIALIZED=35776495065
XKBLAYOUT=es
XKBMODEL=pc105
XKBOPTIONS=
XKBVARIANT=
```
That may look like a lot to process, but, check this out: the `ACTION` field early in the list tells you what event just happened, i.e., that a device got added to the system. You can also see the name of the device spelled out on several of the lines, so you can be pretty sure that it is the device you are looking for. The output also shows the manufacturer's ID number (`ID_VENDOR_ID=03f0`) and the model number (`ID_MODEL_ID=e207`).
This gives you three of the four values the condition part of the rule needs. You may be tempted to think that it gives you the fourth, too, because there is also a line that says:
```
SUBSYSTEM=input
```
Be careful! Although it is true that a USB webcam is a device that provides input (as does a keyboard or a mouse), it also belongs to the _usb_ subsystem, and several others. This means that your webcam gets added to several subsystems and looks like several devices. If you pick the wrong subsystem, your rule may not work as you want it to, or, indeed, at all.
So, the third thing you have to check is all the subsystems the webcam has been added to, and pick the correct one. To do that, unplug your webcam again and run:
```
ls /dev/video*
```
This will show you all the video devices connected to the machine. If you are using a laptop, it most likely comes with a built-in webcam, which will probably show up as `/dev/video0`. Plug your webcam back in and run `ls /dev/video*` again.
Now you should see one more video device (probably `/dev/video1`).
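For instance, on a laptop with a built-in webcam, the before-and-after might look like this (illustrative output; your device names may differ):
```
ls /dev/video*
/dev/video0
# plug in the external webcam, then run it again:
ls /dev/video*
/dev/video0  /dev/video1
```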
Now you can find out all the subsystems it belongs to by running `udevadm info -a /dev/video1`:
```
udevadm info -a /dev/video1
Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.
looking at device '/devices/pci0000:00/0000:00:1c.3/0000:04:00.0
  /usb3/3-1/3-1:1.0/video4linux/video1':
KERNEL=="video1"
SUBSYSTEM=="video4linux"
DRIVER==""
ATTR{dev_debug}=="0"
ATTR{index}=="0"
ATTR{name}=="HP Webcam HD 2300: HP Webcam HD"
[etc...]
```
The output goes on for quite a while, but what you're interested in is right at the beginning: `SUBSYSTEM=="video4linux"`. This is a line you can literally copy and paste right into your rule. The rest of the output (not shown for brevity) gives you a couple more nuggets, like the manufacturer and model IDs, again in a format you can copy and paste into your rule.
Now that you have a way of unambiguously identifying the device and the event that should trigger the action, it is time to tinker with the device.
The next section of the rule, `SYMLINK+="mywebcam", TAG+="systemd", MODE="0666"`, tells Udev to do three things: First, you want to create a symbolic link from the device (e.g. _/dev/video1_) to _/dev/mywebcam_. This is because you cannot predict what the system is going to call the device by default. When you have an in-built webcam and you hotplug a new one, the in-built webcam will usually be _/dev/video0_ while the external one will become _/dev/video1_. However, if you boot your computer with the external USB webcam plugged in, that could be reversed: the internal webcam can become _/dev/video1_ and the external one _/dev/video0_. What this is telling you is that, although your image-capturing script (which you will see later on) always needs to point to the external webcam device, you can't rely on it being _/dev/video0_ or _/dev/video1_. To solve this problem, you tell Udev to create a symbolic link the moment the device is added to the _video4linux_ subsystem. This link will never change, and you will make your script point to it.
The second thing you do is add `"systemd"` to the list of Udev tags associated with this rule. This tells Udev that the action that the rule will trigger will be managed by systemd, that is, it will be some sort of systemd service.
Notice how in both cases you use the `+=` operator. This adds the value to a list, which means you can add more than one value to `SYMLINK` and `TAG`.
The `MODE` value, on the other hand, can only contain one value (hence you use the simple `=` assignment operator). What `MODE` does is tell Udev who can read from or write to the device. If you are familiar with `chmod` (and, if you are reading this, you should be), you will also be familiar with [how you can express permissions using numbers][9]. That is what this is: `0666` means "_give read and write privileges to the device to everybody_".
Finally, `ENV{SYSTEMD_WANTS}="webcam.service"` tells Udev which systemd service to run.
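Putting all the pieces together, the complete rule looks like this (remember that a Udev rule is written on one single line):
```
ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service"
```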
Save this rule into a file called _90-webcam.rules_ (or something like that) in _/etc/udev/rules.d_ and you can load it either by rebooting your machine, or by running:
```
sudo udevadm control --reload-rules && udevadm trigger
```
## Service at Last
The service the Udev rule triggers is ridiculously simple:
```
# webcam.service
[Service]
Type=simple
ExecStart=/home/[user name]/bin/checkimage.sh
```
Basically, it just runs the _checkimage.sh_ script stored in your personal _bin/_ directory and pushes it to the background. [This is something you saw how to do in prior installments][5]. It may seem like a small thing but, just because it is called by a Udev rule, you have just created a special kind of systemd unit called a _device_ unit. Congratulations.
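(In case you are wondering where to put it: a system-level unit like this would typically be saved as _/etc/systemd/system/webcam.service_, although any of systemd's standard unit paths will do.)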
As for the _checkimage.sh_ script _webcam.service_ calls, there are several ways of grabbing an image from a webcam and comparing it to a prior one to check for changes (which is what _checkimage.sh_ does), but this is how I did it:
```
#!/bin/bash
# This is the checkimage.sh script
# Grab an initial reference frame from the webcam, pointing at the
# /dev/mywebcam symlink created by the Udev rule
mplayer -vo png -frames 1 tv:// \
  -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
mv 00000001.png /home/[user name]/monitor/monitor.png

while true
do
    # Grab a new frame...
    mplayer -vo png -frames 1 tv:// \
      -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
    mv 00000001.png /home/[user name]/monitor/temp.png

    # ...and measure how much it differs from the reference frame
    imagediff=`compare -metric mae /home/[user name]/monitor/monitor.png \
      /home/[user name]/monitor/temp.png /home/[user name]/monitor/diff.png \
      2>&1 > /dev/null | cut -f 1 -d " "`

    # If the difference is big enough, something moved: keep the new frame
    if [ `echo "$imagediff > 700.0" | bc` -eq 1 ]
    then
        mv /home/[user name]/monitor/temp.png /home/[user name]/monitor/monitor.png
    fi
    sleep 0.5
done
```
Start by using [MPlayer][10] to grab a frame ( _00000001.png_ ) from the webcam. Notice how we point `mplayer` to the `mywebcam` symbolic link we created in our Udev rule, instead of to `video0` or `video1`. Then you transfer the image to the _monitor/_ directory in your home directory. Finally, you run an infinite loop that does the same thing again and again, but also uses [ImageMagick's _compare_ tool][11] to see if there are any differences between the last image captured and the one that is already in the _monitor/_ directory.
If the images are different, it means something has moved within the webcam's frame. The script overwrites the original image with the new image and continues comparing, waiting for more movement.
### Plugged
With all the bits and pieces in place, when you plug your webcam in, your Udev rule will be triggered and will start _webcam.service_. The _webcam.service_ unit will execute _checkimage.sh_ in the background, and _checkimage.sh_ will start taking pictures every half a second. You will know it is working because your webcam's LED will flash each time it takes a snapshot.
As always, if something goes wrong, run
```
systemctl status webcam.service
```
to check what your service and script are up to.
### Coming up
You may be wondering: Why overwrite the original image? Surely you would want to see what's going on if the system detects any movement, right? You would be right, but as you will see in the next installment, leaving things as they are and processing the images using yet another type of systemd unit makes things nice, clean and easy.
Just wait and see.
Learn more about Linux through the free ["Introduction to Linux"][12] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.intel.com/content/www/us/en/products/boards-kits/compute-stick/stk1a32sc.html
[2]: https://www.linux.com/files/images/fig01png
[3]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig01.png?itok=cfEHN5f1 (ComputeStick)
[4]: https://www.linux.com/licenses/category/used-permission
[5]: https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
[6]: https://www.linux.com/blog/learn/2018/5/systemd-services-beyond-starting-and-stopping
[7]: https://www.linux.com/files/images/fig02png
[8]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig02.png?itok=esFv4BdM (webcam)
[9]: https://chmod-calculator.com/
[10]: https://mplayerhq.hu/design7/news.html
[11]: https://www.imagemagick.org/script/compare.php
[12]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,152 @@
Systemd Services: Monitoring Files and Directories
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/systemd-filesystem.png?itok=iGjxwoJR)
So far in this systemd multi-part tutorial, we've covered [how to start and stop a service by hand][1], [how to start a service when booting your OS and have it stop on power down][2], and [how to boot a service when a certain device is detected][3]. This installment does something different yet again and covers how to create a unit that starts a service when something changes in the filesystem. For the practical example, you'll see how you can use one of these units to extend the [surveillance system we talked about last time][4].
### Where we left off
[Last time we saw how the surveillance system took pictures, but it did nothing with them][3]. In fact, it even overwrote the last picture it took when it detected movement so as not to fill the storage of the device.
Does that mean the system is useless? Not by a long shot. Because, you see, systemd offers yet another type of unit, the _path_ unit, that can help you out. _Path_ units allow you to trigger a service when an event happens in the filesystem, say, when a file gets deleted or a directory accessed. And overwriting an image is exactly the kind of event we are talking about here.
### Anatomy of a Path Unit
A systemd path unit takes the extension _.path_ , and it monitors a file or directory. A _.path_ unit calls another unit (usually a _.service_ unit with the same name) when something happens to the monitored file or directory. For example, if you have a _picchanged.path_ unit to monitor the snapshot from your webcam, you will also have a _picchanged.service_ that will execute a script when the snapshot is overwritten.
Path units contain a new section, `[Path]`, with a few more directives. First, you have the what-to-watch-for directives:
* **`PathExists=`** monitors whether the file or directory exists. If it does, the associated unit gets triggered. `PathExistsGlob=` works in a similar fashion, but lets you use globbing, like when you use `ls *.jpg` to search for all the JPEG images in a directory. This lets you check, for example, whether a file with a certain extension exists.
* **`PathChanged=`** watches a file or directory and activates the configured unit whenever it changes. It is not activated on every write to the watched file, but only when a file that is open for writing is changed and then closed; the associated unit is executed when the file is closed.
* **`PathModified=`** , on the other hand, does activate the unit when anything is changed in the file you are monitoring, even before you close the file.
* **`DirectoryNotEmpty=`** does what its name suggests, that is, it activates the associated unit if the monitored directory contains files or subdirectories.
Then, we have `Unit=`, which tells the _.path_ unit which _.service_ unit to activate, in case you want to give it a different name from that of your _.path_ unit; `MakeDirectory=` can be `true` or `false` (or `0` or `1`, or `yes` or `no`) and creates the directory you want to monitor before monitoring starts. Obviously, using `MakeDirectory=` in combination with `PathExists=` does not make sense. However, `MakeDirectory=` can be used in combination with `DirectoryMode=`, which you use to set the mode (permissions) of the new directory. If you don't use `DirectoryMode=`, the default permissions for the new directory are `0755`.
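As a quick illustration, here is a hypothetical unit (not part of the surveillance project) that combines several of these directives: it creates a watched directory if needed and fires a matching _incoming.service_ whenever anything lands in it:
```
# incoming.path -- hypothetical example
[Path]
# Activate incoming.service whenever the directory contains anything
DirectoryNotEmpty=/home/[user name]/incoming
# Create the directory before monitoring starts, with mode 0700
MakeDirectory=true
DirectoryMode=0700

[Install]
WantedBy=paths.target
```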
### Building _picchanged.path_
All these directives are very useful, but you will just be looking for changes made to one single file, so your _.path_ unit is very simple:
```
#picchanged.path
[Unit]
Wants=webcam.service
[Path]
PathChanged=/home/[user name]/monitor/monitor.jpg
```
The `[Unit]` section contains the line:
```
Wants=webcam.service
```
The `Wants=` directive is the preferred way of starting up another unit that the current unit needs in order to work properly. [`webcam.service` is the name you gave the surveillance service that you saw in the previous article][3] and is the service that actually controls the webcam and makes it take a snap every half second. This means it's _picchanged.path_ that is going to start up _webcam.service_ now, and not the [Udev rule you saw in the prior article][3]. You will use the Udev rule to start _picchanged.path_ instead.
To summarize: the Udev rule pulls in your new _picchanged.path_ unit, which, in turn, pulls in _webcam.service_ as a requirement for everything to work perfectly.
The "thing" that _picchanged.path_ monitors is the _monitor.jpg_ file in the _monitor/_ directory in your home directory. As you saw last time, _webcam.service_ called a script, _checkimage.sh_ , which took a picture at the beginning of its execution and stored it as _monitor/monitor.jpg_. _checkimage.sh_ then took another picture, _temp.jpg_ , and compared it with _monitor.jpg_. If it found significant differences (like when somebody walks into frame), the script overwrote _monitor.jpg_ with _temp.jpg_. That is when _picchanged.path_ fires.
As you haven't included a `Unit=` directive in your _.path_ unit, systemd expects a matching _picchanged.service_ unit, which it will trigger when _/home/[user name]/monitor/monitor.jpg_ gets modified:
```
#picchanged.service
[Service]
Type=simple
ExecStart=/home/[user name]/bin/picmonitor.sh
```
For the time being, let's make _picmonitor.sh_ save a time-stamped copy of _monitor.jpg_ every time a change gets detected:
```
#!/bin/bash
# This is the picmonitor.sh script
cp /home/[user name]/monitor/monitor.jpg /home/[user name]/monitor/"`date`.jpg"
```
### Udev Changes
You have to change the custom Udev rule you wrote in [the previous installment][3] so everything works. Edit _/etc/udev/rules.d/01-webcam.rules_ so that, instead of looking like this:
```
ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service"
```
it looks like this:
```
ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="picchanged.path"
```
The new rule, instead of calling _webcam.service_ , now calls _picchanged.path_ when your webcam gets detected. (Note that you will have to change the `idVendor` and `idProduct` values to those of your own webcam; you saw how to find these out previously.)
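As with the original rule, reload the Udev rules (or reboot) for the change to take effect:
```
sudo udevadm control --reload-rules && udevadm trigger
```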
For the record, I also changed _checkimage.sh_ from using PNG to JPEG images. I did this because I found some dependency problems with PNG images when working with _mplayer_ on some versions of Debian. _checkimage.sh_ now looks like this:
```
#!/bin/bash
# Grab an initial reference frame and store it as monitor.jpg
mplayer -vo jpeg -frames 1 tv:// \
  -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
mv 00000001.jpg /home/paul/monitor/monitor.jpg

while true
do
    # Grab a new frame...
    mplayer -vo jpeg -frames 1 tv:// \
      -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
    mv 00000001.jpg /home/paul/monitor/temp.jpg

    # ...and measure how much it differs from the reference frame
    imagediff=`compare -metric mae /home/paul/monitor/monitor.jpg \
      /home/paul/monitor/temp.jpg /home/paul/monitor/diff.png \
      2>&1 > /dev/null | cut -f 1 -d " "`

    # If the difference is big enough, overwrite the reference frame;
    # this is the write that makes picchanged.path fire
    if [ `echo "$imagediff > 700.0" | bc` -eq 1 ]
    then
        mv /home/paul/monitor/temp.jpg /home/paul/monitor/monitor.jpg
    fi
    sleep 0.5
done
```
### Firing up
Once all its bits and pieces are in place, this is a multi-unit service you don't have to worry much about: you plug in the designated webcam (or boot the machine with the webcam already connected), _picchanged.path_ gets started thanks to the Udev rule and takes over, bringing up _webcam.service_ and starting to check on the snaps. There is nothing else you need to do.
### Conclusion
Having the process split in two not only helps explain how path units work, but it's also very useful for debugging. One service does not "touch" the other in any way, which means that you could, for example, improve the "motion detection" part, and it would be very easy to roll back if things didn't work as expected.
Admittedly, the example is a bit goofy, as there are definitely [better ways of monitoring movement using a webcam][5]. But remember: the main aim of these articles is to help you learn how systemd units work within a context.
Next time, we'll finish up with systemd units by looking at some of the other types of units available and show how to improve your home-monitoring system further by setting up a service that sends images to another machine.
Learn more about Linux through the free ["Introduction to Linux"][6] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/6/systemd-services-monitoring-files-and-directories
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
[2]: https://www.linux.com/blog/learn/2018/5/systemd-services-beyond-starting-and-stopping
[3]: https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change
[4]: https://www.linux.com/blog/learn/intro-to-linux/2018/6/systemd-services-monitoring-files-and-directories
[5]: https://www.linux.com/learn/how-operate-linux-spycams-motion
[6]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,3 +1,7 @@
**translating by [erlinux](https://github.com/erlinux)**
**PROJECT MANAGEMENT TOOL called [gn2.sh](https://github.com/lctt/lctt-cli)**
How to analyze your system with perf and Python
======

View File

@ -1,170 +0,0 @@
translating---geekpi
How To Quickly Serve Files And Folders Over HTTP In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/http-720x340.png)
Today, I came across a whole bunch of methods to share a single file or an entire directory with other systems on your local area network via a web browser. I tested all of them on my Ubuntu test machine, and everything worked just fine as described below. If you ever wondered how to easily and quickly serve files and folders over HTTP on Unix-like operating systems, one of the following methods will definitely help.
### Serve Files And Folders Over HTTP In Linux
**Disclaimer:** All the methods given here are meant to be used within a secure local area network. Since these methods don't have any security mechanisms, it is **not recommended to use them in production**. You have been warned!
#### Method 1  Using SimpleHTTPServer (Python)
We have already written a brief guide to set up a simple HTTP server to share files and directories instantly; see the following link. If you have a system with Python installed, this method is quite handy.
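As a quick reminder (a minimal sketch using Python's built-in module), serving the current directory on port 8000 looks like this:
```
$ cd ostechnix
$ python3 -m http.server 8000
# or, on systems that only have Python 2:
$ python -m SimpleHTTPServer 8000
```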
#### Method 2  Using Quickserve (Python)
This method is specifically for Arch Linux and its variants. Check the following link for more details.
#### Method 3  Using Ruby
In this method, we use Ruby to serve files and folders over HTTP in Unix-like systems. Install Ruby and Rails as described in the following link.
Once Ruby is installed, go to the directory that you want to share over the network, for example ostechnix:
```
$ cd ostechnix
```
And, run the following command:
```
$ ruby -run -ehttpd . -p8000
[2018-08-10 16:02:55] INFO WEBrick 1.4.2
[2018-08-10 16:02:55] INFO ruby 2.5.1 (2018-03-29) [x86_64-linux]
[2018-08-10 16:02:55] INFO WEBrick::HTTPServer#start: pid=5859 port=8000
```
Make sure port 8000 is open in your router or firewall. If the port is already in use by some other service, use a different port.
You can now access the contents of this folder from any remote system using the URL **http://<IP-address>:8000/**.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/ruby-http-server.png)
To stop sharing, press **CTRL+C**.
#### Method 4  Using Http-server (NodeJS)
[**Http-server**][1] is a simple, production-ready command-line HTTP server written in NodeJS. It requires zero configuration and can be used to instantly share files and directories via a web browser.
Install NodeJS as described below.
Once NodeJS is installed, run the following command to install http-server:
```
$ npm install -g http-server
```
Now, go to any directory and share its contents over HTTP as shown below.
```
$ cd ostechnix
$ http-server -p 8000
Starting up http-server, serving ./
Available on:
http://127.0.0.1:8000
http://192.168.225.24:8000
http://192.168.225.20:8000
Hit CTRL-C to stop the server
```
Now, you can access the contents of this directory from local or remote systems on the network using the URL **http://<ip-address>:8000**.
![](http://www.ostechnix.com/wp-content/uploads/2018/08/nodejs-http-server.png)
To stop sharing, press **CTRL+C**.
#### Method 5  Using Miniserve (Rust)
[**Miniserve**][2] is yet another command-line utility that allows you to quickly serve files over HTTP. It is a very fast, easy-to-use, cross-platform utility written in the **Rust** programming language. Unlike the above utilities/methods, it provides authentication support, so you can set a username and password for the shares.
Install Rust in your Linux system as described in the following link.
After installing Rust, run the following command to install miniserve:
```
$ cargo install miniserve
```
Alternatively, you can download the binary from [**the releases page**][3] and make it executable:
```
$ chmod +x miniserve-linux
```
And then you can run it using this command (assuming the miniserve binary was downloaded to the current working directory):
```
$ ./miniserve-linux <path-to-share>
```
**Usage**
To serve a directory:
```
$ miniserve <path-to-directory>
```
**Example:**
```
$ miniserve /home/sk/ostechnix/
miniserve v0.2.0
Serving path /home/sk/ostechnix at http://[::]:8080, http://localhost:8080
Quit by pressing CTRL-C
```
Now, you can access the share from the local system itself using the URL **<http://localhost:8080>** and/or from a remote system with the URL **http://<ip-address>:8080**.
To serve a single file:
```
$ miniserve <path-to-file>
```
**Example:**
```
$ miniserve ostechnix/file.txt
```
To serve a file or folder with a username and password:
```
$ miniserve --auth joe:123 <path-to-share>
```
To bind to multiple interfaces:
```
$ miniserve -i 192.168.225.1 -i 10.10.0.1 -i ::1 -- <path-to-share>
```
As you can see, I have given only 5 methods. But there are a few more methods given in the link attached at the end of this guide. Go and test them as well. Also, bookmark and revisit it from time to time to check whether there are any new additions to the list in the future.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-quickly-serve-files-and-folders-over-http-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.npmjs.com/package/http-server
[2]:https://github.com/svenstaro/miniserve
[3]:https://github.com/svenstaro/miniserve/releases

View File

@ -1,255 +0,0 @@
Translating by way-ww
How To Run MS-DOS Games And Programs In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/dosbox-720x340.png)
Have you ever wanted to try some good old MS-DOS games and defunct C++ compilers like Turbo C++ on Linux? Good! This tutorial will teach you how to run MS-DOS games and programs in a Linux environment using **DOSBox**. It is an x86 PC DOS emulator that can be used to run classic DOS games and programs. DOSBox emulates an Intel x86 PC with sound, graphics, mouse, joystick, modem, etc., which allows you to run many old MS-DOS games and programs that simply cannot be run on modern PCs and operating systems, such as Microsoft Windows XP and later, Linux and FreeBSD. It is free, written in C++ and distributed under the GPL.
### Install DOSBox In Linux
DOSBox is available in the default repositories of most Linux distributions.
On Arch Linux and its variants like Antergos, Manjaro Linux:
```
$ sudo pacman -S dosbox
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install dosbox
```
On Fedora:
```
$ sudo dnf install dosbox
```
### Configure DOSBox
There is no initial configuration required to use DOSBox; it just works out of the box. The default configuration file, named `dosbox-x.xx.conf`, lives in your **`~/.dosbox`** folder. In this configuration file, you can edit/modify various settings, such as starting DOSBox in fullscreen mode, using double buffering in fullscreen, the preferred resolution to use for fullscreen, mouse sensitivity, enabling or disabling sound, the speaker, the joystick, and a lot more. As I mentioned earlier, the default settings will work just fine. You need not make any changes.
### Run MS-DOS Games And Programs In Linux
To launch DOSBox, run the following command from the Terminal:
```
$ dosbox
```
This is what the DOSBox interface looks like.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt.png)
As you can see, DOSBox comes with its own DOS-like command prompt with a virtual `Z:\` drive, so if you're familiar with MS-DOS, you won't have any difficulty working in the DOSBox environment.
Here is the output of the `dir` command (the equivalent of the `ls` command in Linux):
![](http://www.ostechnix.com/wp-content/uploads/2018/09/dir-command-output.png)
If you're a new user and this is your first time using DOSBox, you can view a short introduction by entering the following command at the DOSBox prompt:
```
intro
```
Press ENTER to go through the next page of the introduction section.
To view the list of the most often used commands in DOS, use this command:
```
help
```
To view the list of all supported commands in DOSBox, type:
```
help /all
```
Remember, these commands should be used in the DOSBox prompt, not in your Linux Terminal.
DOSBox also supports a good set of keyboard bindings. Here are the default keyboard shortcuts to use DOSBox effectively.
![](http://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-keyboard-shortcuts.png)
To exit DOSBox, simply type the following and hit ENTER:
```
exit
```
By default, DOSBox starts in a normal-sized window, as shown above.
To start DOSBox directly in fullscreen, edit your `dosbox-x.xx.conf` file and set the value of the **fullscreen** variable to **true**. DOSBox will then start in fullscreen mode. To go back to a normal window, press **ALT+ENTER**.
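For reference, and assuming the stock layout of the dosbox-0.74.conf file, the relevant line lives in the **[sdl]** section:
```
[sdl]
# Start DOSBox in fullscreen mode (press ALT+ENTER to toggle back)
fullscreen=true
```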
Hopefully that covers the basic usage of DOSBox.
Let us go ahead and install some DOS programs and games.
First, we need to create directories to save the programs and games on our Linux system. I am going to create two directories named **`~/dosprograms`** and **`~/dosgames`** , the former for storing programs and the latter for storing games.
```
$ mkdir ~/dosprograms ~/dosgames
```
For the purpose of this guide, I will show you how to install the **Turbo C++** program and the Mario game. First, we will see how to install Turbo C++.
Download the latest Turbo C++ compiler, extract it, and save the contents in the **`~/dosprograms`** directory. I have saved the contents of Turbo C++ in my **~/dosprograms/tc/** directory.
```
$ ls dosprograms/tc/
BGI BIN CLASSLIB DOC EXAMPLES FILELIST.DOC INCLUDE LIB README README.COM
```
Start Dosbox:
```
$ dosbox
```
And mount the **`~/dosprograms`** directory as virtual drive **C:\** in DOSBox.
```
Z:\>mount c ~/dosprograms
```
You will see output something like this:
```
Drive C is mounted as local directory /home/sk/dosprograms.
```
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-1.png)
Now, change to the C drive using the command:
```
Z:\>c:
```
And then switch to the **tc/bin** directory:
```
C:\>cd tc/bin
```
Finally, run the Turbo C++ executable file:
```
C:\TC\BIN>tc.exe
```
**Note:** Just type the first few letters and hit TAB to autocomplete the file name.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-4.png)
You will now be in the Turbo C++ console.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-5.png)
Create a new file (ALT+F) and start coding:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-6.png)
Similarly, you can install and run other classic DOS programs.
**Troubleshooting:**
You might encounter the following error while running Turbo C++ or any other DOS program:
```
DOSBox switched to max cycles, because of the setting: cycles=auto. If the game runs too fast try a fixed cycles amount in DOSBox's options. Exit to error: DRC64:Unhandled memory reference
```
To fix this, edit your **~/.dosbox/dosbox-x.xx.conf** file:
```
$ nano ~/.dosbox/dosbox-0.74.conf
```
Find the following variable and change its value from:
```
core=auto
```
to
```
core=normal
```
Save and close the file. Now you should be able to run DOS programs without any problems.
Now, let us see how to run a DOS-based game, for example **Mario Bros VGA**.
Download the Mario game from [**here**][1] and extract the contents into the **~/dosgames** directory on your Linux machine.
Start DOSBox:
```
$ dosbox
```
We used virtual drive **c:** for DOS programs. For games, let us use **d:** as the virtual drive.
At the DOSBox prompt, run the following command to mount the **~/dosgames** directory as virtual drive **d:**.
```
Z:\>mount d ~/dosgames
```
Switch to the D: drive:
```
Z:\>d:
```
And then go to the mario game directory and run the **mario.exe** file to launch the game:
```
D:\>cd mario
D:\MARIO>mario.exe
```
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-7.png)
Start playing the game:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Mario-game-in-dosbox.png)
Similarly, you can run any DOS-based game as described above. You can view the complete list of supported games that can be run using DOSBox [**here**][2].
### Conclusion
Even though DOSBox is not a complete replacement for MS-DOS and lacks many of the features found in MS-DOS, it is good enough to install and run most DOS games and programs.
For more details, refer to the official [**DOSBox manual**][3].
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.dosgames.com/game/mario-bros-vga
[2]: https://www.dosbox.com/comp_list.php
[3]: https://www.dosbox.com/DOSBoxManual.html

View File

@ -1,3 +1,5 @@
Translating by jlztan
Convert files at the command line with Pandoc
======

View File

@ -1,3 +1,5 @@
### translating by way-ww
4 Must-Have Tools for Monitoring Linux
======

View File

@ -1,112 +0,0 @@
Translating by jlztan
KeeWeb  An Open Source, Cross Platform Password Manager
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/keeweb-720x340.png)
If you've been using the internet for any amount of time, chances are you have a lot of accounts on a lot of websites. All of those accounts must have passwords, and you have to remember all those passwords. Either that, or write them down somewhere. Writing down passwords on paper may not be secure, and remembering them won't be practically possible if you have more than a few. This is why password managers have exploded in popularity in the last few years. A password manager is like a central repository where you store all your passwords for all your accounts, and you lock it with a master password. With this approach, the only thing you need to remember is the master password.
**KeePass** is one such open source password manager. KeePass has an official client, but it's pretty barebones. However, there are a lot of other apps, both for your computer and for your phone, that are compatible with the KeePass file format for storing encrypted passwords. One such app is **KeeWeb**.
KeeWeb is an open source, cross-platform password manager with features like cloud sync, keyboard shortcuts, and plugin support. KeeWeb uses Electron, which means it runs on Windows, Linux, and macOS.
### Using KeeWeb Password Manager
When it comes to using KeeWeb, you actually have two options. You can either use the KeeWeb webapp without having to install it on your system, or simply install the KeeWeb client on your local system.
**Using the KeeWeb webapp**
If you don't want to bother installing a desktop app, you can just go to [**https://app.keeweb.info/**][1] and use it as a password manager.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-webapp.png)
It has all the features of the desktop app. Obviously, this requires you to be online when using the app.
**Installing KeeWeb on your Desktop**
If you like the comfort and offline availability of using a desktop app, you can also install it on your desktop.
If you use Ubuntu/Debian, you can just go to the [**releases page**][2], download the latest KeeWeb **.deb** file, and install it with this command:
```
$ sudo dpkg -i KeeWeb-1.6.3.linux.x64.deb
```
If you're on Arch, it is available in the [**AUR**][3], so you can install it using any AUR helper, such as [**Yay**][4]:
```
$ yay -S keeweb
```
Once installed, launch it from the menu or application launcher. This is what the default KeeWeb interface looks like:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-desktop-client.png)
### General Layout
KeeWeb basically shows a list of all your passwords, along with all your tags, to the left. Clicking on a tag filters the list to only the passwords with that tag. To the right, all the fields for the selected account are shown. You can set a username, password, website, or just add a custom note. You can even create your own fields and mark them as secure fields, which is great when storing things like credit card information. You can copy passwords by just clicking on them. KeeWeb also shows the date when an account was created and modified. Deleted passwords are kept in the trash, where they can be restored or permanently deleted.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-general-layout.png)
### KeeWeb Features
**Cloud Sync**
One of the main features of KeeWeb is the support for a wide variety of remote locations and cloud services.
Other than loading local files, you can open files from:
1. WebDAV Servers
2. Google Drive
3. Dropbox
4. OneDrive
This means that if you use multiple computers, you can synchronize the password files between them, so you don't have to worry about not having all the passwords available on all devices.
**Password Generator**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-password-generator.png)
Along with encrypting your passwords, it's also important to create new, strong passwords for every single account. This means that if one of your accounts gets hacked, the attacker won't be able to get into your other accounts using the same password.
To achieve this, KeeWeb has a built-in password generator that lets you generate a custom password of a specific length, including specific types of characters.
**Plugins**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-plugins.png)
You can extend KeeWeb functionality with plugins. Some of these plugins are translations for other languages, while others add new functionality, like checking **<https://haveibeenpwned.com>** for exposed passwords.
**Local Backups**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-backup.png)
Regardless of where your password file is stored, you should probably keep local backups of the file on your computer. Luckily, KeeWeb has this feature built in. You can back up to a specific path, and set it to back up periodically, or just whenever the file is changed.
### Verdict
I have actually been using KeeWeb for several years now. It completely changed the way I store my passwords. The cloud sync is basically the feature that sealed the deal for me. I don't have to worry about keeping multiple unsynchronized files on multiple devices. If you want a great-looking password manager that has cloud sync, KeeWeb is something you should look at.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://app.keeweb.info/
[2]: https://github.com/keeweb/keeweb/releases/latest
[3]: https://aur.archlinux.org/packages/keeweb/
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/

View File

@ -1,87 +0,0 @@
translating---geekpi
4 cool new projects to try in COPR for October 2018
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR is a [collection][1] of personal repositories for software that isn't carried in the standard Fedora repositories. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the standard set of Fedora packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
Here's a set of new and interesting projects in COPR.
### GitKraken
[GitKraken][2] is a useful git client for people who prefer a graphical interface over the command line, providing all the features you expect. Additionally, GitKraken can create repositories and files, and has a built-in editor. A useful feature of GitKraken is the ability to stage lines or hunks of files, and to switch between branches quickly. However, in some cases, you may experience performance issues with larger projects.
![][3]
#### Installation instructions
The repo currently provides GitKraken for Fedora 27, 28, 29 and Rawhide, and for OpenSUSE Tumbleweed. To install GitKraken, use these commands:
```
sudo dnf copr enable elken/gitkraken
sudo dnf install gitkraken
```
### Music On Console
[Music On Console][4] player, or mocp, is a simple console audio player. It has an interface similar to Midnight Commander and is easy to use. You simply navigate to a directory with music files and select a file or directory to play. In addition, mocp provides a set of commands, allowing it to be controlled directly from the command line.
![][5]
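For example (assuming mocp's standard command-line options), the player can be driven straight from the shell like this:
```
$ mocp            # launch the interface in the current directory
$ mocp --play     # start playing the current playlist
$ mocp --next     # jump to the next track
$ mocp --exit     # shut down the server
```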
#### Installation instructions
The repo currently provides Music On Console player for Fedora 28 and 29. To install mocp, use these commands:
```
sudo dnf copr enable Krzystof/Moc
sudo dnf install moc
```
### cnping
[Cnping][6] is a small graphical ping tool for IPv4, useful for visualizing changes in round-trip time. It offers an option to control the time period between each packet as well as the size of data sent. In addition to the graph shown, cnping provides basic statistics on round-trip times and packet loss.

![][7]
#### Installation instructions
The repo currently provides cnping for Fedora 27, 28, 29 and Rawhide. To install cnping, use these commands:
```
sudo dnf copr enable dreua/cnping
sudo dnf install cnping
```
### Pdfsandwich
[Pdfsandwich][8] is a tool for adding text to PDF files which contain text in an image form — such as scanned books. It uses optical character recognition (OCR) to create an additional layer with the recognized text behind the original page. This can be useful for copying and working with the text.
#### Installation instructions
The repo currently provides pdfsandwich for Fedora 27, 28, 29 and Rawhide, and for EPEL 7. To install pdfsandwich, use these commands:
```
sudo dnf copr enable merlinm/pdfsandwich
sudo dnf install pdfsandwich
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-try-copr-october-2018/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org
[b]: https://github.com/lujun9972
[1]: https://copr.fedorainfracloud.org/
[2]: https://www.gitkraken.com/git-client
[3]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-gitkraken.png
[4]: http://moc.daper.net/
[5]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-mocp.png
[6]: https://github.com/cnlohr/cnping
[7]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-cnping.png
[8]: http://www.tobias-elze.de/pdfsandwich/

View File

@ -0,0 +1,368 @@
How Do We Find Out Which Repository Installed Packages Came From?
======
Sometimes you might want to know which repository an installed package came from. This helps you troubleshoot when you are facing a package conflict issue.
[Third-party vendor repositories][1] hold the latest versions of packages, and this sometimes causes incompatibility issues when you try to install a package.
Everything is possible in Linux: you can install a package on your system even when that package is not available in your distribution.
You can also install the latest version of a package when your distribution doesn't have it. How?
That's where third-party repositories come into the picture. They allow users to install all the packages available in those repositories.
Almost all distributions allow third-party repositories. Some distributions officially suggest a few third-party repositories that do not badly replace base packages; for example, CentOS officially suggests installing the [EPEL repository][2].
A [list of major repositories][1] and their details is below.
* **`CentOS:`** [EPEL][2], [ELRepo][3], etc. are [CentOS Community Approved Repositories][4].
* **`Fedora:`** The [RPMfusion repo][5] is commonly used by most [Fedora][6] users.
* **`ArchLinux:`** The ArchLinux community repository contains packages that have been adopted by Trusted Users from the Arch User Repository.
* **`openSUSE:`** The [Packman repo][7] offers various additional packages for openSUSE, especially but not limited to multimedia-related applications and libraries that are on the openSUSE Build Service application blacklist. It's the largest external repository of openSUSE packages.
* **`Ubuntu:`** Personal Package Archives (PPAs) are a kind of repository. Developers create them in order to distribute their software. You can find this information on a PPA's Launchpad page. Also, you can enable Canonical partner repositories.
### What Is a Repository?
A software repository is a central place which stores software packages.
All Linux distributions maintain their own repositories, and they allow users to retrieve and install packages on their machines.
Each vendor offers a unique package management tool to manage its repositories, with operations such as search, install, update, upgrade and remove.
Most Linux distributions come free of charge, except RHEL and SUSE; to access their repositories you need to buy a subscription.
### Why do we need to enable third-party repositories?
In Linux, installing a package from source is not advisable, as this might cause many issues when upgrading the package or the system. That's why we are advised to install packages from a repo instead of from source.
### How Do We Find Out Which Repository Installed Packages Came From on RHEL/CentOS Systems?
This can be done in multiple ways. Here we will give you all the possible options, and you can choose the one that works best for you.
### Method-1: Using Yum Command
RHEL and CentOS systems use RPM packages, hence we can use the [Yum Package Manager][8] to get this information.
YUM stands for Yellowdog Updater, Modified; it is an open-source command-line front-end package-management utility for RPM-based systems such as Red Hat Enterprise Linux (RHEL) and CentOS.
Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories.
```
# yum info apachetop
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* epel: epel.mirror.constant.com
Installed Packages
Name : apachetop
Arch : x86_64
Version : 0.15.6
Release : 1.el7
Size : 65 k
Repo : installed
From repo : epel
Summary : A top-like display of Apache logs
URL : https://github.com/tessus/apachetop
License : BSD
Description : ApacheTop watches a logfile generated by Apache (in standard common or
: combined logformat, although it doesn't (yet) make use of any of the extra
: fields in combined) and generates human-parsable output in realtime.
```
The **`apachetop`** package comes from the **`epel`** repo.
### Method-2: Using Yumdb Command
yumdb info provides information similar to yum info, but additionally it provides the package checksum data, type and user info (who installed the package). Since version 3.2.26, yum has started storing additional information outside of the rpm database (where `user` indicates the package was installed by the user, and `dep` means it was brought in as a dependency).
```
# yumdb info lighttpd
Loaded plugins: fastestmirror
lighttpd-1.4.50-1.el7.x86_64
checksum_data = a24d18102ed40148cfcc965310a516050ed437d728eeeefb23709486783a4d37
checksum_type = sha256
command_line = --enablerepo=epel install lighttpd apachetop aria2 atop axel
from_repo = epel
from_repo_revision = 1540756729
from_repo_timestamp = 1540757483
installed_by = 0
origin_url = https://epel.mirror.constant.com/7/x86_64/Packages/l/lighttpd-1.4.50-1.el7.x86_64.rpm
reason = user
releasever = 7
var_contentdir = centos
var_infra = stock
var_uuid = ce328b07-9c0a-4765-b2ad-59d96a257dc8
```
The **`lighttpd`** package comes from the **`epel`** repo.
### Method-3: Using RPM Command
The [RPM command][9] stands for Red Hat Package Manager; it is a powerful command-line package-management utility for Red Hat-based systems such as the RHEL, CentOS, Fedora, openSUSE and Mageia distributions.
The utility allows you to install, upgrade, remove, query and verify the software on your Linux system/server. RPM files come with the .rpm extension. An RPM package is built with the required libraries and dependencies, and does not conflict with other packages installed on your system.
```
# rpm -qi apachetop
Name : apachetop
Version : 0.15.6
Release : 1.el7
Architecture: x86_64
Install Date: Mon 29 Oct 2018 06:47:49 AM EDT
Group : Applications/Internet
Size : 67020
License : BSD
Signature : RSA/SHA256, Mon 22 Jun 2015 09:30:26 AM EDT, Key ID 6a2faea2352c64e5
Source RPM : apachetop-0.15.6-1.el7.src.rpm
Build Date : Sat 20 Jun 2015 09:02:37 PM EDT
Build Host : buildvm-22.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager : Fedora Project
Vendor : Fedora Project
URL : https://github.com/tessus/apachetop
Summary : A top-like display of Apache logs
Description :
ApacheTop watches a logfile generated by Apache (in standard common or
combined logformat, although it doesn't (yet) make use of any of the extra
fields in combined) and generates human-parsable output in realtime.
```
The **`apachetop`** package comes from the **`epel`** repo.
### Method-4: Using Repoquery Command
repoquery is a program for querying information from YUM repositories, similar to rpm queries.
```
# repoquery -i httpd
Name : httpd
Version : 2.4.6
Release : 80.el7.centos.1
Architecture: x86_64
Size : 9817285
Packager : CentOS BuildSystem
Group : System Environment/Daemons
URL : http://httpd.apache.org/
Repository : updates
Summary : Apache HTTP Server
Source : httpd-2.4.6-80.el7.centos.1.src.rpm
Description :
The Apache HTTP Server is a powerful, efficient, and extensible
web server.
```
The **`httpd`** package comes from the **`CentOS updates`** repo.
### How Do We Find Out Which Repository Installed Packages Came From on a Fedora System?
DNF stands for Dandified YUM. DNF is the next generation of the yum package manager (a fork of yum), using the hawkey/libsolv library as its back end. Aleš Kozumplík started working on DNF in Fedora 18, and it was finally implemented and launched in Fedora 22.
The [dnf command][10] is used to install, update, search and remove packages on Fedora 22 and later systems. It automatically resolves dependencies and makes for smooth package installation without any trouble.
```
$ dnf info tilix
Last metadata expiration check: 27 days, 10:00:23 ago on Wed 04 Oct 2017 06:43:27 AM IST.
Installed Packages
Name : tilix
Version : 1.6.4
Release : 1.fc26
Arch : x86_64
Size : 3.6 M
Source : tilix-1.6.4-1.fc26.src.rpm
Repo : @System
From repo : updates
Summary : Tiling terminal emulator
URL : https://github.com/gnunn1/tilix
License : MPLv2.0 and GPLv3+ and CC-BY-SA
Description : Tilix is a tiling terminal emulator with the following features:
:
: - Layout terminals in any fashion by splitting them horizontally or vertically
: - Terminals can be re-arranged using drag and drop both within and between
: windows
: - Terminals can be detached into a new window via drag and drop
: - Input can be synchronized between terminals so commands typed in one
: terminal are replicated to the others
: - The grouping of terminals can be saved and loaded from disk
: - Terminals support custom titles
: - Color schemes are stored in files and custom color schemes can be created by
: simply creating a new file
: - Transparent background
: - Supports notifications when processes are completed out of view
:
: The application was written using GTK 3 and an effort was made to conform to
: GNOME Human Interface Guidelines (HIG).
```
The **`tilix`** package comes from the **`Fedora updates`** repo.
### How Do We Find Out Which Repository Installed Packages Came From on an openSUSE System?
Zypper is a command-line package manager which makes use of libzypp. The [zypper command][11] provides functions like repository access, dependency solving, package installation, etc.
```
$ zypper info nano
Loading repository data...
Reading installed packages...
Information for package nano:
-----------------------------
Repository : Main Repository (OSS)
Name : nano
Version : 2.4.2-5.3
Arch : x86_64
Vendor : openSUSE
Installed Size : 1017.8 KiB
Installed : No
Status : not installed
Source package : nano-2.4.2-5.3.src
Summary : Pico editor clone with enhancements
Description :
GNU nano is a small and friendly text editor. It aims to emulate
the Pico text editor while also offering a few enhancements.
```
The **`nano`** package comes from the **`openSUSE Main repo (OSS)`**.
### How Do We Find Out Which Repository Installed Packages Came From on an ArchLinux System?
The [pacman command][12] stands for package manager utility. pacman is a simple command-line utility to install, build, remove and manage Arch Linux packages. Pacman uses libalpm (the Arch Linux Package Management (ALPM) library) as a back end to perform all these actions.
```
# pacman -Ss chromium
extra/chromium 48.0.2564.116-1
The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser
extra/qt5-webengine 5.5.1-9 (qt qt5)
Provides support for web applications using the Chromium browser project
community/chromium-bsu 0.9.15.1-2
A fast paced top scrolling shooter
community/chromium-chromevox latest-1
Causes the Chromium web browser to automatically install and update the ChromeVox screen reader extention. Note: This
package does not contain the extension code.
community/fcitx-mozc 2.17.2313.102-1
Fcitx Module of A Japanese Input Method for Chromium OS, Windows, Mac and Linux (the Open Source Edition of Google Japanese
Input)
```
The **`chromium`** package comes from the **`ArchLinux extra`** repo.
Alternatively, we can use the following option to get detailed information about the package:
```
# pacman -Si chromium
Repository : extra
Name : chromium
Version : 48.0.2564.116-1
Description : The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser
Architecture : x86_64
URL : http://www.chromium.org/
Licenses : BSD
Groups : None
Provides : None
Depends On : gtk2 nss alsa-lib xdg-utils bzip2 libevent libxss icu libexif libgcrypt ttf-font systemd dbus
flac snappy speech-dispatcher pciutils libpulse harfbuzz libsecret libvpx perl perl-file-basedir
desktop-file-utils hicolor-icon-theme
Optional Deps : kdebase-kdialog: needed for file dialogs in KDE
gnome-keyring: for storing passwords in GNOME keyring
kwallet: for storing passwords in KWallet
Conflicts With : None
Replaces : None
Download Size : 44.42 MiB
Installed Size : 172.44 MiB
Packager : Evangelos Foutras
Build Date : Fri 19 Feb 2016 04:17:12 AM IST
Validated By : MD5 Sum SHA-256 Sum Signature
```
The **`chromium`** package comes from the **`ArchLinux extra`** repo.
### How Do We Find Out Which Repository Installed Packages Came From on Debian-Based Systems?
It can be done in two ways on Debian-based systems such as Ubuntu, Linux Mint, etc.
### Method-1: Using apt-cache Command
The [apt-cache command][13] can display much of the information stored in APT's internal database. This information is a sort of cache, since it is gathered from the different sources listed in the sources.list file. This happens during the apt update operation.
```
$ apt-cache policy python3
python3:
Installed: 3.6.3-0ubuntu2
Candidate: 3.6.3-0ubuntu3
Version table:
3.6.3-0ubuntu3 500
500 http://in.archive.ubuntu.com/ubuntu artful-updates/main amd64 Packages
* 3.6.3-0ubuntu2 500
500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages
100 /var/lib/dpkg/status
```
The **`python3`** package comes from the **`Ubuntu updates`** repo.
### Method-2: Using apt Command
The [APT command][14] stands for Advanced Packaging Tool; it is the replacement for apt-get, much like DNF came into the picture to replace YUM. It is a feature-rich command-line tool that includes all the functionality of apt-cache, apt-search, dpkg, apt-cdrom, apt-config, apt-key, etc. in one command (APT), along with several other unique features. For example, you can install a local .deb package file directly with APT, which was not possible with the classic apt-get. APT has largely replaced apt-get, addressing shortcomings of apt-get that were never solved.
```
$ apt -a show notepadqq
Package: notepadqq
Version: 1.3.2-1~artful1
Priority: optional
Section: editors
Maintainer: Daniele Di Sarli
Installed-Size: 1,352 kB
Depends: notepadqq-common (= 1.3.2-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2)
Download-Size: 356 kB
APT-Sources: http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages
Description: Notepad++-like editor for Linux
Text editor with support for multiple programming
languages, multiple encodings and plugin support.
Package: notepadqq
Version: 1.2.0-1~artful1
Status: install ok installed
Priority: optional
Section: editors
Maintainer: Daniele Di Sarli
Installed-Size: 1,352 kB
Depends: notepadqq-common (= 1.2.0-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2)
Homepage: http://notepadqq.altervista.org
Download-Size: unknown
APT-Manual-Installed: yes
APT-Sources: /var/lib/dpkg/status
Description: Notepad++-like editor for Linux
Text editor with support for multiple programming
languages, multiple encodings and plugin support.
```
The **`notepadqq`** package comes from a **Launchpad PPA**.
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-do-we-find-out-the-installed-packages-came-from-which-repository/
作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/repository/
[2]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
[3]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/
[4]: https://www.2daygeek.com/additional-yum-repositories-for-centos-rhel-fedora-systems/
[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/
[6]: https://fedoraproject.org/wiki/Third_party_repositories
[7]: https://www.2daygeek.com/install-enable-packman-repository-on-opensuse-leap/
[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[9]: https://www.2daygeek.com/rpm-command-examples/
[10]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[13]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[14]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/

View File

@ -1,3 +1,5 @@
translating---geekpi
8 creepy commands that haunt the terminal | Opensource.com
======

View File

@ -0,0 +1,407 @@
Getting started with a local OKD cluster on Linux
======
Try out OKD, the community edition of the OpenShift container platform, with this tutorial.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguin.png?itok=8sDDLbcR)
OKD is the open source upstream community edition of Red Hat's OpenShift container platform. OKD is a container management and orchestration platform based on [Docker][1] and [Kubernetes][2].
OKD is a complete solution to manage, deploy, and operate containerized applications that (in addition to the features provided by Kubernetes) includes an easy-to-use web interface, automated build tools, routing capabilities, and monitoring and logging aggregation features.
OKD provides several deployment options aimed at different requirements with single or multiple master nodes, high-availability capabilities, logging, monitoring, and more. You can create OKD clusters as small or as large as you need.
In addition to these deployment options, OKD provides a way to create a local, all-in-one cluster on your own machine using the oc command-line tool. This is a great option if you want to try OKD locally without committing the resources to create a larger multi-node cluster, or if you want to have a local cluster on your machine as part of your workflow or development process. In this case, you can create and deploy the applications locally using the same APIs and interfaces required to deploy the application on a larger scale. This process ensures a seamless integration that prevents issues with applications that work in the developer's environment but not in production.
This tutorial will show you how to create an OKD cluster using **oc cluster up** on a Linux box.
### 1\. Install Docker
The **oc cluster up** command creates a local OKD cluster on your machine using Docker containers. In order to use this command, you need Docker installed on your machine. For OKD version 3.9 and later, Docker 1.13 is the minimum recommended version. If Docker is not installed on your system, install it by using your distribution package manager. For example, on CentOS or RHEL, install Docker with this command:
```
$ sudo yum install -y docker
```
On Fedora, use dnf:
```
$ sudo dnf install -y docker
```
This installs Docker and all required dependencies.
### 2\. Configure Docker insecure registry
Once you have Docker installed, you need to configure it to allow communication with an insecure registry on the 172.30.0.0/16 address range. This insecure registry will be deployed with your local OKD cluster later.
On CentOS or RHEL, edit the file **/etc/docker/daemon.json** by adding these lines:
```
{
        "insecure-registries": ["172.30.0.0/16"]
}
```
On Fedora, edit the file **/etc/containers/registries.conf** by adding these lines:
```
[registries.insecure]
registries = ['172.30.0.0/16']
```
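If the Docker daemon happened to be running already when you edited these files, restart it so the change takes effect (just a precaution; step 3 below starts the daemon anyway):
```
$ sudo systemctl restart docker
```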
### 3\. Start Docker
Before starting Docker, create a system group named **docker** and assign this group to your user so you can run Docker commands with your own user, without requiring root or sudo access. This allows you to create your OKD cluster using your own user.
For example, these are the commands to create the group and assign it to my local user, **ricardo** :
```
$ sudo groupadd docker
$ sudo usermod -a -G docker ricardo
```
You need to log out and log back in to see the new group association. After logging back in, run the **id** command and ensure you're a member of the **docker** group:
```
$ id
uid=1000(ricardo) gid=1000(ricardo) groups=1000(ricardo),10(wheel),1001(docker)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
```
Now, start and enable the Docker daemon like this:
```
$ sudo systemctl start docker
$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
```
Verify that Docker is running:
```
$ docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-75.git8633870.el7.centos.x86_64
 Go version:      go1.9.4
 Git commit:      8633870/1.13.1
 Built:           Fri Sep 28 19:45:08 2018
 OS/Arch:         linux/amd64
Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-75.git8633870.el7.centos.x86_64
 Go version:      go1.9.4
 Git commit:      8633870/1.13.1
 Built:           Fri Sep 28 19:45:08 2018
 OS/Arch:         linux/amd64
 Experimental:    false
```
Ensure that the insecure registry option has been enabled by running **docker info** and looking for these lines:
```
$ docker info
... Skipping long output ...
Insecure Registries:
 172.30.0.0/16
 127.0.0.0/8
```
### 4\. Open firewall ports
Next, open firewall ports to ensure your OKD containers can communicate with the master API. By default, some distributions have the firewall enabled, which blocks required connectivity from the OKD containers to the master API. If your system has the firewall enabled, you need to add rules to allow communication on ports **8443/tcp** for the master API and **53/udp** for DNS resolution on the Docker bridge subnet.
For CentOS, RHEL, and Fedora, you can use the **firewall-cmd** command-line tool to add the rules. For other distributions, you can use the provided firewall manager, such as [UFW][3] or [iptables][4].
Before adding the firewall rules, obtain the Docker bridge network subnet's address, like this:
```
$ docker network inspect bridge | grep Subnet
                    "Subnet": "172.17.0.0/16",
```
Enable the firewall rules using this subnet. For CentOS, RHEL, and Fedora, use **firewall-cmd** to add a new zone:
```
$ sudo firewall-cmd --permanent --new-zone okdlocal
success
```
Include the subnet address you obtained before as a source to the new zone:
```
$ sudo firewall-cmd --permanent --zone okdlocal --add-source 172.17.0.0/16
success
```
Next, add the required rules to the **okdlocal** zone:
```
$ sudo firewall-cmd --permanent --zone okdlocal --add-port 8443/tcp
success
$ sudo firewall-cmd --permanent --zone okdlocal --add-port 53/udp
success
$ sudo firewall-cmd --permanent --zone okdlocal --add-port 8053/udp
success
```
Finally, reload the firewall to enable the new rules:
```
$ sudo firewall-cmd --reload
success
```
Ensure that the new zone and rules are in place:
```
$ sudo firewall-cmd --zone okdlocal --list-sources
172.17.0.0/16
$ sudo firewall-cmd --zone okdlocal --list-ports
8443/tcp 53/udp 8053/udp
```
Your system is ready to start the cluster. It's time to download the OKD client tools.
### 5\. Download the OKD client tools
To deploy a local OKD cluster using **oc**, you need to download the OKD client tools package. For some distributions, like CentOS and Fedora, this package can be downloaded as an RPM from the official repositories. Please note that these packages may follow the distribution update cycle and usually are not the most recent version available.
For this tutorial, download the OKD client package directly from the official GitHub repository so you can get the most recent version available. At the time of writing, this was OKD v3.11.
Go to the [OKD downloads page][5] to get the link to the OKD tools for Linux, then download it with **wget** :
```
$ cd ~/Downloads/
$ wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
```
Uncompress the downloaded package:
```
$ tar -xzvf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
```
Finally, to make it easier to use the **oc** command systemwide, copy it to a directory included in your **$PATH** variable. A good location is **/usr/local/bin**:
```
$ sudo cp openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit/oc /usr/local/bin/
```
One of the nicest features of the **oc** command is that it's a static single binary. You don't need to install it to use it.
Check that the **oc** command is working:
```
$ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
```
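Optionally, you can enable shell tab completion for **oc**, which the binary can generate itself (a sketch assuming the Bash shell; add the line to your ~/.bashrc to make it permanent):
```
$ source <(oc completion bash)
```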
### 6\. Start your OKD cluster
Once you have all the prerequisites in place, start your local OKD cluster by running this command:
```
$ oc cluster up
```
This command connects to your local Docker daemon, downloads all required images from Docker Hub, and starts the containers. The first time you run it, it takes a few minutes to complete. When it's finished, you will see this message:
```
... Skipping long output ...
OpenShift server started.
The server is accessible via web console at:
    https://127.0.0.1:8443
You are logged in as:
    User:     developer
    Password: <any value>
To login as administrator:
    oc login -u system:admin
```
Access the OKD web console by using the browser and navigating to <https://127.0.0.1:8443>:
![](https://opensource.com/sites/default/files/uploads/okd-login.png)
From the command line, you can check if the cluster is running by entering this command:
```
$ oc cluster status
Web console URL: https://127.0.0.1:8443/console/
Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /home/ricardo/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
```
You can also verify your cluster is working by logging in as the **system:admin** user and checking available nodes using the **oc** command-line tool:
```
$ oc login -u system:admin
Logged into "https://127.0.0.1:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
    default
    kube-dns
    kube-proxy
    kube-public
    kube-system
  * myproject
    openshift
    openshift-apiserver
    openshift-controller-manager
    openshift-core-operators
    openshift-infra
    openshift-node
    openshift-service-cert-signer
    openshift-web-console
Using project "myproject".
$ oc get nodes
NAME        STATUS    ROLES     AGE       VERSION
localhost   Ready     <none>    52m       v1.11.0+d4cacc0
```
Since this is a local, all-in-one cluster, you see only **localhost** in the nodes list.
### 7\. Smoke-test your cluster
Now that your local OKD cluster is running, create a test app to smoke-test it. Use OKD to build and start the sample application so you can ensure the different components are working.
Start by logging in as the **developer** user:
```
$ oc login -u developer
Logged into "https://127.0.0.1:8443" as "developer" using existing credentials.
You have one project on this server: "myproject"
Using project "myproject".
```
You're automatically assigned to a new, empty project named **myproject**. Create a sample PHP application based on an existing GitHub repository, like this:
```
$ oc new-app php:5.6~https://github.com/rgerardi/ocp-smoke-test.git
--> Found image 92ed8b3 (5 months old) in image stream "openshift/php" under tag "5.6" for "php:5.6"
    Apache 2.4 with PHP 5.6
    -----------------------
    PHP 5.6 available as container is a base platform for building and running various PHP 5.6 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts.
    Tags: builder, php, php56, rh-php56
    * A source build using source code from https://github.com/rgerardi/ocp-smoke-test.git will be created
      * The resulting image will be pushed to image stream tag "ocp-smoke-test:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "ocp-smoke-test"
    * Ports 8080/tcp, 8443/tcp will be load balanced by service "ocp-smoke-test"
      * Other containers can access this service through the hostname "ocp-smoke-test"
--> Creating resources ...
    imagestream.image.openshift.io "ocp-smoke-test" created
    buildconfig.build.openshift.io "ocp-smoke-test" created
    deploymentconfig.apps.openshift.io "ocp-smoke-test" created
    service "ocp-smoke-test" created
--> Success
    Build scheduled, use 'oc logs -f bc/ocp-smoke-test' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/ocp-smoke-test'
    Run 'oc status' to view your app.
```
OKD starts the build process, which clones the provided GitHub repository, compiles the application (if required), and creates the necessary images. You can follow the build process by tailing its log with this command:
```
$ oc logs -f bc/ocp-smoke-test
Cloning "https://github.com/rgerardi/ocp-smoke-test.git" ...
        Commit: 391a475713d01ab0afab700bab8a3d7549c5cc27 (Create index.php)
        Author: Ricardo Gerardi <ricardo.gerardi@gmail.com>
        Date:   Tue Oct 2 13:47:25 2018 -0400
Using 172.30.1.1:5000/openshift/php@sha256:f3c95020fa870fcefa7d1440d07a2b947834b87bdaf000588e84ef4a599c7546 as the s2i builder image
---> Installing application source...
=> sourcing 20-copy-config.sh ...
---> 04:53:28     Processing additional arbitrary httpd configuration provided by s2i ...
=> sourcing 00-documentroot.conf ...
=> sourcing 50-mpm-tuning.conf ...
=> sourcing 40-ssl-certs.sh ...
Pushing image 172.30.1.1:5000/myproject/ocp-smoke-test:latest ...
Pushed 1/10 layers, 10% complete
Push successful
```
After the build process completes, OKD starts the application automatically by running a new pod based on the created image. You can see this new pod with this command:
```
$ oc get pods
NAME                     READY     STATUS      RESTARTS   AGE
ocp-smoke-test-1-build   0/1       Completed   0          1m
ocp-smoke-test-1-d8h76   1/1       Running     0          7s
```
You can see that two pods were created: the first one (with the status Completed) is the pod that was used to build the application; the second one (with the status Running) is the application itself.
In addition, OKD creates a service for this application. Verify it by using this command:
```
$ oc get service
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
ocp-smoke-test   ClusterIP   172.30.232.241   <none>        8080/TCP,8443/TCP   1m
```
Finally, expose this service externally using OKD routes so you can access the application from a local browser:
```
$ oc expose svc ocp-smoke-test
route.route.openshift.io/ocp-smoke-test exposed
$ oc get route
NAME             HOST/PORT                                   PATH      SERVICES         PORT       TERMINATION   WILDCARD
ocp-smoke-test   ocp-smoke-test-myproject.127.0.0.1.nip.io             ocp-smoke-test   8080-tcp                 None
```
Verify that your new application is running by navigating to <http://ocp-smoke-test-myproject.127.0.0.1.nip.io> in a web browser:
![](https://opensource.com/sites/default/files/uploads/okd-smoke-test-app.png)
You can also see the status of your application by logging into the OKD web console:
![](https://opensource.com/sites/default/files/uploads/okd-smoke-test.png)
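When you finish experimenting, you can stop the local cluster with the companion command to **oc cluster up**, which stops the OKD containers on your local Docker daemon:
```
$ oc cluster down
```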
### Learn more
You can find more information about OKD on the [official site][6], which includes a link to the OKD [documentation][7].
If this is your first time working with OKD/OpenShift, you can learn the basics of the platform, including how to build and deploy containerized applications, through the [Interactive Learning Portal][8]. Another good resource is the official [OpenShift YouTube channel][9].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/11/local-okd-cluster-linux
作者:[Ricardo Gerardi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rgerardi
[b]: https://github.com/lujun9972
[1]: https://www.docker.com/
[2]: https://kubernetes.io/
[3]: https://en.wikipedia.org/wiki/Uncomplicated_Firewall
[4]: https://en.wikipedia.org/wiki/Iptables
[5]: https://www.okd.io/download.html#oc-platforms
[6]: https://www.okd.io/
[7]: https://docs.okd.io/
[8]: https://learn.openshift.com/
[9]: https://www.youtube.com/openshift

View File

@ -0,0 +1,77 @@
translating---geekpi
KRS: A new tool for gathering Kubernetes resource statistics
======
Zero-configuration tool simplifies gathering information, such as how many pods are running in a certain namespace.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl)
Recently I was in New York giving a talk at O'Reilly Velocity on the topic of [troubleshooting Kubernetes apps][1] and, motivated by the positive feedback and great discussions on the topic, I decided to revisit tooling in the space. It turns out that, besides [kubernetes-incubator/spartakus][2] and [kubernetes/kube-state-metrics][3], we don't really have much lightweight tooling available to collect resource stats (such as the number of pods or services in a namespace). So, I sat down on my way home and started coding on a little tool—creatively named **krs** , which is short for Kubernetes Resource Stats—that allows you to gather these stats.
You can use [mhausenblas/krs][5] in two ways:
* directly from the command line (binaries for Linux, Windows, and MacOS are available); and
* in cluster, as a deployment, using the [launch.sh][4] script, which creates the appropriate role-based access control (RBAC) permissions on the fly.
Mind you, it's very early days, and this is very much a work in progress. However, the 0.1 release of **krs** offers the following features:
* On a per-namespace basis, it periodically gathers resource stats (supporting pods, deployments, and services).
* It exposes these stats as metrics in the [OpenMetrics format][6].
* It can be used directly via binaries or in a containerized setup with all dependencies included.
In its current form, you need to have **kubectl** installed and configured for **krs** to work, because **krs** relies on executing a **kubectl get all** command to gather the stats. (On the other hand, who's using Kubernetes and doesn't have **kubectl** installed?)
Using **krs** is simple; [download][7] the binary for your platform and execute it like this:
```
$ krs thenamespacetowatch
# HELP pods Number of pods in any state, for example running
# TYPE pods gauge
pods{namespace="thenamespacetowatch"} 13
# HELP deployments Number of deployments
# TYPE deployments gauge
deployments{namespace="thenamespacetowatch"} 6
# HELP services Number of services
# TYPE services gauge
services{namespace="thenamespacetowatch"} 4
```
This will launch **krs** in the foreground, gathering resource stats from the namespace **thenamespacetowatch** and writing them in the OpenMetrics format to **stdout** for you to process further.
![krs screenshot][9]
Screenshot of krs in action.
But Michael, you may ask, why isn't it doing something useful (such as storing 'em in S3) with the metrics? Because [Unix philosophy][10].
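In that spirit, here's a minimal sketch of composing **krs** with standard tools (the file name is just an illustration):
```
$ krs thenamespacetowatch | tee -a metrics.txt
```
The metrics accumulate in a file for later processing while still being printed to the terminal.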
For those wondering if they can directly use Prometheus or [kubernetes/kube-state-metrics][3] for this task: Well, sure you can, why not? The emphasis of **krs** is on being a lightweight and easy-to-use alternative to already available tooling—and maybe even being slightly complementary in certain aspects.
This was originally published on [Medium's ITNext][11] and is reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/11/kubernetes-resource-statistics
作者:[Michael Hausenblas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mhausenblas
[b]: https://github.com/lujun9972
[1]: http://troubleshooting.kubernetes.sh/
[2]: https://github.com/kubernetes-incubator/spartakus
[3]: https://github.com/kubernetes/kube-state-metrics
[4]: https://github.com/mhausenblas/krs/blob/master/launch.sh
[5]: https://github.com/mhausenblas/krs
[6]: https://openmetrics.io/
[7]: https://github.com/mhausenblas/krs/releases
[8]: /file/412706
[9]: https://opensource.com/sites/default/files/uploads/krs_screenshot.png (krs screenshot)
[10]: http://harmful.cat-v.org/cat-v/
[11]: https://itnext.io/kubernetes-resource-statistics-e8247f92b45c

View File

@ -0,0 +1,168 @@
如何在 Linux 中快速地通过 HTTP 访问文件和文件夹
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/http-720x340.png)
今天,我们来了解几种通过网络浏览器为局域网中的其他系统提供单个文件或整个目录访问的方法。我在 Ubuntu 测试机上测试了这些方法,它们都能像下面描述的那样正常运行。如果你想知道如何在类 Unix 操作系统中通过 HTTP 轻松快速地访问文件和文件夹,以下方法之一肯定会有所帮助。
### 在 Linux 中通过 HTTP 访问文件和文件夹
**免责声明:**此处给出的所有方法仅适用于安全的局域网。由于这些方法没有任何安全机制,因此**不建议在生产环境中使用它们**。请务必注意!
#### 方法 1 - 使用 simpleHTTPserverPython
我们之前写过一篇简要的指南,介绍如何设置一个简单的 http 服务器来即时共享文件和目录,请查看下面的链接。如果你的系统上安装了 Python这个方法非常方便。
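作为参考,下面是一个最小的使用示例(假设系统中安装的是 Python 3进入要共享的目录然后运行
```
$ python3 -m http.server 8000
```
之后,即可在浏览器中通过 8000 端口访问该目录的内容。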
#### 方法 2 - 使用 QuickservePython
此方法针对 Arch Linux 及其衍生版。有关详细信息,请查看下面的链接。
#### 方法 3 - 使用 Ruby
在此方法中,我们使用 Ruby 在类 Unix 系统中通过 HTTP 提供文件和文件夹访问。按照以下链接中的说明安装 Ruby 和 Rails。
安装 Ruby 后,进入要通过网络共享的目录,例如 ostechnix
```
$ cd ostechnix
```
并运行以下命令:
```
$ ruby -run -ehttpd . -p8000
[2018-08-10 16:02:55] INFO WEBrick 1.4.2
[2018-08-10 16:02:55] INFO ruby 2.5.1 (2018-03-29) [x86_64-linux]
[2018-08-10 16:02:55] INFO WEBrick::HTTPServer#start: pid=5859 port=8000
```
确保在路由器或防火墙中打开端口 8000。如果该端口已被其他一些服务使用那么请使用不同的端口。
现在,你可以使用 URL **http://<ip-address>:8000** 从任何远程系统访问此文件夹的内容。
![](https://www.ostechnix.com/wp-content/uploads/2018/08/ruby-http-server.png)
要停止共享,请按 **CTRL+C**
#### 方法 4 - 使用 Http-serverNodeJS
[**Http-server**][1] 是一个用 NodeJS 编写的简单的、可用于生产环境的命令行 http 服务器。它不需要任何配置,可用于通过 Web 浏览器即时共享文件和目录。
按如下所述安装 NodeJS。
安装 NodeJS 后,运行以下命令安装 http-server。
```
$ npm install -g http-server
```
现在进入任何目录并通过 HTTP 共享其内容,如下所示。
```
$ cd ostechnix
$ http-server -p 8000
Starting up http-server, serving ./
Available on:
http://127.0.0.1:8000
http://192.168.225.24:8000
http://192.168.225.20:8000
Hit CTRL-C to stop the server
```
现在,你可以使用 URL **http://<ip-address>:8000** 从任何远程系统访问此文件夹的内容。
![](http://www.ostechnix.com/wp-content/uploads/2018/08/nodejs-http-server.png)
要停止共享,请按 **CTRL+C**
#### 方法 5 - 使用 MiniserveRust
[**Miniserve**][2] 是另一个命令行程序,它允许你通过 HTTP 快速访问文件。它是一个非常快速、易于使用的跨平台程序,使用 **Rust** 编程语言编写。与上面的程序/方法不同,它提供身份验证支持,因此你可以为共享设置用户名和密码。
按下面的链接在 Linux 系统中安装 Rust。
安装 Rust 后,运行以下命令安装 miniserve
```
$ cargo install miniserve
```
或者,你可以在[**发布页**][3]下载二进制文件并使其可执行。
```
$ chmod +x miniserve-linux
```
然后,你可以使用命令运行它(假设 miniserve 二进制文件下载到当前的工作目录中):
```
$ ./miniserve-linux <path-to-share>
```
**用法**
要提供目录访问:
```
$ miniserve <path-to-directory>
```
**示例:**
```
$ miniserve /home/sk/ostechnix/
miniserve v0.2.0
Serving path /home/sk/ostechnix at http://[::]:8080, http://localhost:8080
Quit by pressing CTRL-C
```
现在,你可以在本地系统使用 URL **<http://localhost:8080>** 访问共享,或者在远程系统使用 URL **http://<ip-address>:8080** 访问。
要提供单个文件访问:
```
$ miniserve <path-to-file>
```
**示例:**
```
$ miniserve ostechnix/file.txt
```
带用户名和密码提供文件/文件夹访问:
```
$ miniserve --auth joe:123 <path-to-share>
```
绑定到多个接口:
```
$ miniserve -i 192.168.225.1 -i 10.10.0.1 -i ::1 -- <path-to-share>
```
如你所见,我只介绍了 5 种方法。不过,本指南末尾附带的链接中还提供了另外几种方法,也不妨测试一下。此外,可以收藏本文并时常回访,看看将来是否添加了新的方法。
今天就是这些。希望这篇文章有用。还有更多的好东西。敬请期待!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-quickly-serve-files-and-folders-over-http-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.npmjs.com/package/http-server
[2]:https://github.com/svenstaro/miniserve
[3]:https://github.com/svenstaro/miniserve/releases

View File

@ -1,94 +0,0 @@
# 10 个最值得关注的树莓派博客
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA)
网上有很多很棒的树莓派爱好者网站、教程、代码仓库、YouTube 频道和其他资源。以下是我最喜欢的十大树莓派博客,排名不分先后。
### 1. Raspberry Pi Spy
树莓派粉丝 Matt Hawkins 从很早开始就在他的网站 Raspberry Pi Spy 上撰写了大量全面且信息丰富的教程。我从这个网站上直接学到了很多东西,而且 Matt 似乎也总是第一个涵盖很多主题的人。在我学习使用树莓派的前三年里,多次在这个网站得到帮助。
让每个人感到幸运的是,这个不断采用新技术的网站仍然很强大。我希望看到它继续存在下去,让新社区成员在需要时得到帮助。
### 2. Adafruit
Adafruit 是硬件黑客中最知名的品牌之一。该公司制作和销售漂亮的硬件,并提供由员工、社区成员,甚至 Lady Ada 女士自己编写的优秀教程。
除了网上商店Adafruit 还经营一个博客,这个博客充满了来自世界各地的精彩内容。在博客上可以查看树莓派的类别,特别是在工作日的最后一天,会在 Adafruit Towers 举办名为 [Friday is Pi Day][1] 的活动。
### 3. Recantha's Raspberry Pi Pod
Mike HorneRecantha是英国一位重要的树莓派社区成员负责 [CamJam 和 Potton Pi&Pint][2](剑桥的两个树莓派社团)以及 [Pi Wars][3]一年一度的树莓派机器人竞赛。他为其他想建立树莓派社团的人提供建议并且总是有时间帮助初学者。Horne 和他的共同组织者 Tim Richardson 一起开发了 CamJam Edu Kit一系列小巧且价格合理的套件适合初学者使用 Python 学习物理计算)。
除此之外,他还运营着 Pi Pod这是一个包含了世界各地树莓派相关内容的博客。它可能是这个列表中更新最频繁的树莓派博客所以这是一个把握树莓派社区动向的极好方式。
### 4. Raspberry Pi blog
必须提一下树莓派的官方博客:[Raspberry Pi Foundation][4]。这个博客涵盖了基金会的硬件、软件、教育、社区、慈善和青少年编程俱乐部等一系列内容。博客上的重要主题是家庭数字创作、教育赋能,以及硬件版本和软件更新的官方新闻。
该博客自 [2011 年][5] 运行至今,并提供了自那时以来所有 1800 多篇文章的[存档][6]。你也可以在 Twitter 上关注 [@raspberrypi_otd][7],这是我用 [Python][8] 创建的机器人(教程见 [Opensource.com][9])。这个 Twitter 机器人会推送博客存档中历年同一天发布的树莓派文章。
### 5. RasPi.tv
另一位开创性的树莓派社区成员是 Alex Eames通过他的博客和 YouTube 频道 RasPi.tv他很早就加入了树莓派社区。他的网站为很多创客项目提供高质量、精心制作的视频教程和书面指南。
Alex 的网站 [RasP.iO][10] 制作了一系列树莓派附加板和配件,包括方便的 GPIO 端口引脚,电路板测量尺等等。他的博客也拓展到了 [Arduino][11][WEMO][12] 以及其他小网站。
### 6. pyimagesearch
虽然不是严格的树莓派博客名称中的“py”是“Python”而不是“树莓派”但该网站有着大量的 [树莓派种类][13]。 Adrian Rosebrock 获得了计算机视觉和机器学习领域的博士学位,他的博客旨在分享他在学习和制作自己的计算机视觉项目时所学到的机器学习技巧。
如果你想使用树莓派的相机模块学习面部或物体识别来这个网站就对了。Adrian 在图像识别领域的深度学习和人工智能知识和实际应用是首屈一指的,而且他编写了自己的项目,这样任何人都可以进行尝试。
### 7. Raspberry Pi Roundup
这个博客由英国官方树莓派经销商之一 The Pi Hut 进行维护,会有每周的树莓派新闻。这是另一个很好的资源,可以紧跟树莓派社区的最新资讯,而且之前的文章也值得回顾。
### 8. Dave Akerman
Dave Akerman 是研究高空热气球的一流专家,他分享使用树莓派以最低的成本进行热气球发射方面的知识和经验。他会在一张由热气球拍摄的平流层照片下面对本次发射进行评论,也会对个人发射树莓派热气球给出自己的建议。
查看 Dave 的博客,了解精彩的临近空间摄影作品。
### 9. Pimoroni
Pimoroni 是一家世界知名的树莓派经销商,其总部位于英国谢菲尔德。这家经销商制作了著名的 [树莓派彩虹保护壳][14],并推出了许多极好的定制附加板和配件。
Pimoroni 的博客布局与其硬件设计和品牌推广一样精美,博文内容非常适合创客和业余爱好者在家进行创作,并且可以在有趣的 YouTube 频道 [Bilge Tank][15] 上找到。
### 10. Stuff About Code
Martin O'Hanlon 起初出于乐趣在树莓派上开发“我的世界”作弊器,后来从社区成员转为基金会员工,最近担任内容编辑。幸运的是Martin 的新工作并没有阻止他更新博客并与世界分享有益的趣闻。
除了“我的世界”的很多内容,你还可以找到他参与开发的 Python 库 [Blue Dot][16] 和 [guizero][17],以及一些通用的树莓派技巧。
------
via: https://opensource.com/article/18/8/top-10-raspberry-pi-blogs-follow
作者:[Ben Nuttall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[jlztan](https://github.com/jlztan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[1]: https://blog.adafruit.com/category/raspberry-pi/
[2]: https://camjam.me/?page_id=753
[3]: https://piwars.org/
[4]: https://www.raspberrypi-spy.co.uk/
[5]: https://www.raspberrypi.org/blog/first-post/
[6]: https://www.raspberrypi.org/blog/archive/
[7]: https://twitter.com/raspberrypi_otd
[8]: https://github.com/bennuttall/rpi-otd-bot/blob/master/src/bot.py
[9]: https://opensource.com/article/17/8/raspberry-pi-twitter-bot
[10]: https://rasp.io/
[11]: https://www.arduino.cc/
[12]: http://community.wemo.com/
[13]: https://www.pyimagesearch.com/category/raspberry-pi/
[14]: https://shop.pimoroni.com/products/pibow-for-raspberry-pi-3-b-plus
[15]: https://www.youtube.com/channel/UCuiDNTaTdPTGZZzHm0iriGQ
[16]: https://bluedot.readthedocs.io/en/latest/#
[17]: https://lawsie.github.io/guizero/

View File

@ -0,0 +1,250 @@
如何在 Linux 中运行 MS-DOS 游戏和程序
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/dosbox-720x340.png)
你是否想过尝试一些经典的 MS-DOS 游戏和像 Turbo C++ 这样的经典 C++ 编译器?这篇教程将会介绍如何使用 **DOSBox** 在 Linux 环境下运行 MS-DOS 的游戏和程序。**DOSBox** 是一个 x86 平台的 DOS 模拟器可以用来运行经典的 DOS 游戏和程序。DOSBox 模拟带有声音、图形、鼠标、操纵杆和调制解调器等的 Intel x86 电脑,它允许你运行许多旧的 MS-DOS 游戏和程序,这些游戏和程序根本无法在任何现代 PC 和操作系统(例如 Microsoft Windows XP 及更高版本、Linux 和 FreeBSD上运行。DOSBox 是自由软件,使用 C++ 编程语言编写,并在 GPL 下分发。
### 在 Linux 上安装 DOSBox
DOSBox 在大多数 Linux 发行版的默认仓库中都能找得到。
在 Arch Linux 及其衍生版(如 Antergos、Manjaro Linux上
```
$ sudo pacman -S dosbox
```
在 Debian、Ubuntu、Linux Mint 上:
```
$ sudo apt-get install dosbox
```
在 Fedora 上:
```
$ sudo dnf install dosbox
```
### 配置 DOSBox
DOSBox 是一个开箱即用的软件,不需要进行初始配置。它的配置文件位于 **`~/.dosbox`** 文件夹中,名为 `dosbox-x.xx.conf`。在此配置文件中,你可以编辑/修改各种设置,例如以全屏模式启动 DOSBox、全屏时使用双缓冲、设置首选分辨率、鼠标灵敏度、启用或禁用声音、扬声器、操纵杆等等。如前所述,默认设置即可正常工作,你可以不做任何更改。
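作为参考,下面是配置文件中 `[sdl]` 一节的一个最小示例(基于 DOSBox 0.74 默认配置文件的键名,具体请以你本机的配置文件为准):
```
[sdl]
# 启动时直接进入全屏(运行中可按 ALT+ENTER 切换回窗口)
fullscreen=true
# 全屏时使用双缓冲,减少画面撕裂
fulldouble=true
```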
### 在 Linux 中运行 MS-DOS 游戏和程序
在终端运行以下命令启动 DOSBox
```
$ dosbox
```
下图就是 DOSBox 的界面:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt.png)
正如你所看到的DOSBox 带有自己的类似 DOS 的命令提示符和一个虚拟的 `Z:\` 盘。如果你熟悉 MS-DOS你会发现在 DOSBox 环境下工作不会有任何问题。
这是 `dir` 命令(相当于 Linux 中的 `ls` 命令)的输出:
![](http://www.ostechnix.com/wp-content/uploads/2018/09/dir-command-output.png)
如果你是第一次使用 DOSBox可以在 DOSBox 提示符下输入以下命令来查看 DOSBox 的简介:
```
intro
```
在简介页面按回车进入下一页。
要查看 DOS 中最常用命令的列表,请使用此命令:
```
help
```
要查看 DOSBox 支持的所有命令的列表,请键入:
```
help /all
```
记好了,这些命令应该在 DOSBox 提示符下使用,而不是在 Linux 终端中使用。
DOSBox 还支持一些实用的组合键。下图是能帮助你高效使用 DOSBox 的默认键盘快捷键。
![](http://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-keyboard-shortcuts.png)
要退出 DOSBox只需键入以下命令并按回车
```
exit
```
默认情况下DOSBox 启动时以正常大小的窗口运行,如上所示。
要直接以全屏启动 DOSBox请编辑 `dosbox-x.xx.conf` 文件,并将 **fullscreen** 变量的值设置为 **true**。之后DOSBox 将以全屏模式启动。如果要返回窗口模式,请按 **ALT+ENTER**。
希望你已经掌握了 DOSBox 的这些基本用法。
接下来,让我们安装一些 DOS 程序和游戏。
首先,我们需要在 Linux 系统中创建目录来保存程序和游戏。我将创建两个名为 **`~/dosprograms`** 和 **`~/dosgames`** 的目录,前者用于存储程序,后者用于存储游戏。
```
$ mkdir ~/dosprograms ~/dosgames
```
出于本指南的目的,我将向你展示如何安装 **Turbo C++** 程序和 Mario 游戏。我们首先来看如何安装 Turbo C++。
下载最新的 Turbo C++ 编译器,并将其解压到 **`~/dosprograms`** 目录中。我已经将 Turbo C++ 保存在我的 **~/dosprograms/tc/** 目录中了。
```
$ ls dosprograms/tc/
BGI BIN CLASSLIB DOC EXAMPLES FILELIST.DOC INCLUDE LIB README README.COM
```
运行 Dosbox:
```
$ dosbox
```
将 **`~/dosprograms`** 目录挂载为 DOSBox 中的虚拟驱动器 **C:\**
```
Z:\>mount c ~/dosprograms
```
你会看到类似下面的输出
```
Drive C is mounted as local directory /home/sk/dosprograms.
```
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-1.png)
现在,使用以下命令切换到 C 盘:
```
Z:\>c:
```
然后切换到 **tc/bin** 目录:
```
Z:\>cd tc/bin
```
最后,运行 Turbo C++ 可执行文件:
```
Z:\>tc.exe
```
**备注:**只需输入前几个字母,然后按回车键自动补全文件名。
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-4.png)
现在你将进入 Turbo C++ 控制台。
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-5.png)
创建新文件ALT+F并开始编程
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-6.png)
你同样可以安装和运行其他经典 DOS 程序。
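顺带一提,如果不想每次启动 DOSBox 后都手动执行挂载命令,可以利用配置文件末尾的 `[autoexec]` 一节,其中的命令会在 DOSBox 启动时自动执行。下面是一个最小示例(目录名沿用本文的约定):
```
[autoexec]
# 启动时自动挂载 ~/dosprograms 并切换到 C 盘
mount c ~/dosprograms
c:
```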
**故障排除:**
运行 Turbo C++ 或其他任何 DOS 程序时,你可能会遇到以下错误:
```
DOSBox switched to max cycles, because of the setting: cycles=auto. If the game runs too fast try a fixed cycles amount in DOSBox's options. Exit to error: DRC64:Unhandled memory reference
```
要解决此问题,请编辑 **~/.dosbox/dosbox-x.xx.conf** 文件:
```
$ nano ~/.dosbox/dosbox-0.74.conf
```
找到以下变量:
```
core=auto
```
并更改其值为:
```
core=normal
```
现在,让我们看看如何运行基于 DOS 的游戏,例如 **Mario Bros VGA**。
从[**这里**][1]下载 Mario 游戏,并将其解压到 Linux 中的 **~/dosgames** 目录。
运行 DOSBox:
```
$ dosbox
```
我们刚才使用了虚拟驱动器 **c:** 来运行 DOS 程序。现在,让我们使用 **d:** 作为虚拟驱动器来运行游戏。
在 DOSBox 提示符下,运行以下命令将 **~/dosgames** 目录挂载为虚拟驱动器 **d**
```
Z:\>mount d ~/dosgames
```
进入 D 盘:
```
Z:\>d:
```
然后进入 Mario 游戏目录,并运行 **mario.exe** 文件来启动游戏:
```
Z:\>cd mario
Z:\>mario.exe
```
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-7.png)
开始玩游戏:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Mario-game-in-dosbox.png)
你同样可以像上面所说的那样运行任何基于 DOS 的游戏。[**点击这里**][2]查看可以使用 DOSBox 运行的游戏的完整列表。
### 总结
尽管 DOSBox 并不能完全替代 MS-DOS并且还缺少 MS-DOS 中的许多功能,但它足以安装和运行大多数 DOS 游戏和程序。
有关更多详细信息,请参阅官方 [**DOSBox 手册**][3]。
这就是全部内容。希望这对你有用。更多优秀指南即将到来,敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[way-ww](https://github.com/way-ww)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.dosgames.com/game/mario-bros-vga
[2]: https://www.dosbox.com/comp_list.php
[3]: https://www.dosbox.com/DOSBoxManual.html

View File

@ -1,67 +0,0 @@
6 个用于写书的开源工具
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4)
我在 1993 年首次使用并贡献了自由和开源软件,从那时起我一直是一名开源软件开发人员和传播者。尽管我最为人所知的项目是 [FreeDOS 项目][1]DOS 操作系统的一个开源实现),但我编写或贡献过数十个开源软件项目。
我最近写了一本关于 FreeDOS 的书。[_使用 FreeDOS_][2] 是我为庆祝 FreeDOS 诞生 24 周年而写的。它是一部文集,涵盖 FreeDOS 的安装和使用、我最喜欢的 DOS 程序,以及 DOS 命令行和 DOS 批处理编程的快速参考指南。在一位出色的专业编辑的帮助下,我在过去的几个月里一直在编写这本书。
_使用 FreeDOS_ 可在知识共享署名cc-by国际公共许可证下获得。你可以从 [FreeDOS 电子书][2]网站免费下载 EPUB 和 PDF 版本。(我也计划为那些喜欢纸质书的人提供印刷版本。)
这本书几乎完全是用开源软件制作的。我想分享一下我对用来创建、编辑和生成 _使用 FreeDOS_ 的工具的看法。
### Google 文档
[Google 文档][3] 是我使用的工具中唯一不开源的。我将我的第一份草稿上传到 Google 文档,这样我就能与我的编辑协作。我确信有开源的协作工具,但 Google 文档能够让两个人同时编辑同一个文档、发表评论、提出编辑建议和跟踪更改,此外它还支持段落样式并能下载完成的文档,这使其成为编辑过程中很有价值的一部分。
### LibreOffice
我开始使用 [LibreOffice][4] 6.0,但我最终使用 LibreOffice 6.1 完成了这本书。我喜欢 LibreOffice 对样式的丰富支持。段落样式可以轻松地为标题、页眉、正文、示例代码和其他文本应用样式。字符样式允许我修改段落中文本的外观,例如内联示例代码或用不同的样式代表文件名。图形样式让我可以将某些样式应用于截图和其他图像。页面样式允许我轻松修改页面的布局和外观。
### GIMP
我的书包括很多 DOS 程序截图,网站截图和 FreeDOS logo。我用 [GIMP][5] 修改了这本书的图像。通常,只是裁剪或调整图像大小,但在我准备本书的印刷版时,我使用 GIMP 创建了一些更易于打印布局的图像。
### Inkscape
大多数 FreeDOS logo 和小鱼吉祥物都是 SVG 格式,我使用 [Inkscape][6]来调整它们。在准备电子书的 PDF 版本时,我想在页面顶部放置一个简单的蓝色横幅,角落里有 FreeDOS logo。实验后我发现在 Inkscape 中创建一个我想要的横幅 SVG 图案更容易,然后我将其粘贴到页眉中。
### ImageMagick
虽然使用 GIMP 来完成这项工作也很好,但有时在一组图像上运行 [ImageMagick][7] 命令会更快,例如转换为 PNG 格式或调整图像大小。
### Sigil
LibreOffice 可以直接导出到 EPUB 格式,但它不是个好的转换器。我没有尝试使用 LibreOffice 6.1 创建 EPUB但 LibreOffice 6.0 没有包含我的图像。它还以奇怪的方式添加了样式。我使用 [Sigil][8] 来调整 EPUB 并使一切看起来正常。Sigil 甚至还有预览功能,因此你可以看到 EPUB 的样子。
### QEMU
因为本书是关于安装和运行 FreeDOS 的,所以我需要实际运行 FreeDOS。你可以在任何 PC 模拟器中启动 FreeDOS包括 VirtualBox、QEMU、GNOME Boxes、PCem 和 Bochs但我喜欢 [QEMU][9] 的简单性。QEMU 控制台允许你以 PPM 格式转储屏幕,这非常适合抓取截图放进书里。
当然,我不得不提到在 [Linux][11] 上运行 [GNOME][10]。我使用 Linux 的 [Fedora][12] 发行版。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/writing-book-open-source-tools
作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[1]: http://www.freedos.org/
[2]: http://www.freedos.org/ebook/
[3]: https://www.google.com/docs/about/
[4]: https://www.libreoffice.org/
[5]: https://www.gimp.org/
[6]: https://inkscape.org/
[7]: https://www.imagemagick.org/
[8]: https://sigil-ebook.com/
[9]: https://www.qemu.org/
[10]: https://www.gnome.org/
[11]: https://www.kernel.org/
[12]: https://getfedora.org/

View File

@ -0,0 +1,105 @@
# KeeWeb - 一个开源且跨平台的密码管理工具
![](https://www.ostechnix.com/wp-content/uploads/2018/10/keeweb-720x340.png)
如果你长时间使用互联网,那很可能在很多网站上都有很多帐户。所有这些帐户都必须有密码,而且必须记住所有的密码,或者把它们写下来。在纸上写下密码可能不安全,如果有多个密码,记住它们实际上是不可能的。这就是密码管理工具在过去几年中大受欢迎的原因。密码管理工具就像一个中央存储库,你可以在其中存储所有帐户的所有密码,并为它设置一个主密码。使用这种方法,你唯一需要记住的只有主密码。
**KeePass** 就是一个这样的开源密码管理工具,它有一个官方客户端,但功能非常简单。也有许多 PC 端和手机端的其他密码管理工具,并且与 KeePass 存储加密密码的文件格式兼容。其中一个就是 **KeeWeb**
KeeWeb 是一个开源、跨平台的密码管理工具具有云同步、键盘快捷键和插件等功能。KeeWeb 使用 Electron 框架,这意味着它可以在 Windows、Linux 和 Mac OS 上运行。
### KeeWeb 的使用
有两种方式可以使用 KeeWeb第一种是无需安装、直接在网页上使用第二种是在本地系统中安装 KeeWeb 客户端。
**在网页上使用 KeeWeb**
如果不想在系统中安装应用,可以去 [**https://app.keeweb.info/**][1] 使用 KeeWeb。
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-webapp.png)
网页端具有桌面客户端的所有功能,当然也需要联网才能进行使用。
**在计算机中安装 KeeWeb**
如果喜欢客户端的舒适性和离线可用性,也可以将其安装在系统中。
如果使用 Ubuntu/Debian你可以去[**发布页面**][2]下载 KeeWeb 最新的 **.deb** 文件,然后通过下面的命令进行安装:
```
$ sudo dpkg -i KeeWeb-1.6.3.linux.x64.deb
```
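如果 dpkg 因缺少依赖而报错,可以让 APT 补齐依赖后完成安装(一个常见的补救方法,假设使用 Ubuntu/Debian
```
$ sudo apt-get install -f
```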
如果用的是 Arch[**AUR**][3] 上也有 KeeWeb可以使用任何 AUR 助手进行安装,例如 [**Yay**][4]
```
$ yay -S keeweb
```
安装后,从菜单或应用程序启动器中启动 KeeWeb。默认界面如下
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-desktop-client.png)
### 总体布局
KeeWeb 界面主要显示所有密码的列表,左侧展示所有标签。单击某个标签将对密码进行过滤,只显示带有该标签的密码。右侧显示所选帐户的所有字段。你可以设置用户名、密码、网址,或者添加自定义的备注。你甚至可以创建自己的字段并将其标记为安全字段这在存储信用卡信息等内容时非常有用。你只需单击即可复制密码。KeeWeb 还显示账户的创建和修改日期。已删除的密码会保留在回收站中,可以在其中还原或永久删除。
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-general-layout.png)
### KeeWeb 功能
**云同步**
KeeWeb 的主要功能之一是支持各种远程位置和云服务。除了加载本地文件,你可以从以下位置打开文件:
1. WebDAV 服务器
2. Google Drive
3. Dropbox
4. OneDrive
这意味着如果你使用多台计算机,就可以在它们之间同步密码文件,因此不必担心某台设备无法访问所有密码。
**密码生成器**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-password-generator.png)
除了对密码进行加密之外,为每个帐户创建新的强密码也很重要。这意味着,如果你的某个帐户遭到入侵,攻击者将无法使用相同的密码进入其他帐户。
为此KeeWeb 有一个内置密码生成器,可以生成特定长度、包含指定字符的自定义密码。
**插件**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-plugins.png)
你可以使用插件扩展 KeeWeb 的功能。其中一些插件用于更改界面语言,而其他插件则添加新功能,例如访问 **<https://haveibeenpwned.com>** 以查看密码是否泄露。
**本地备份**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-backup.png)
无论密码文件存储在何处你都应该在计算机上保留一份本地备份。幸运的是KeeWeb 内置了这个功能。你可以备份到特定路径,并将其设置为定期备份,或者只在文件更改时进行备份。
### 结论
我实际使用 KeeWeb 已经好几年了,它完全改变了我存储密码的方式。云同步是我长期使用 KeeWeb 的主要功能这样我不必担心在多个设备上保存多个不同步的文件。如果你想要一个具有云同步功能的密码管理工具KeeWeb 就是你应该关注的东西。
------
via: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[jlztan](https://github.com/jlztan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://app.keeweb.info/
[2]: https://github.com/keeweb/keeweb/releases/latest
[3]: https://aur.archlinux.org/packages/keeweb/
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/

View File

@ -1,49 +1,49 @@
Chrony An Alternative NTP Client And Server For Unix-like Systems
Chrony 一个类 Unix 系统可选的 NTP 客户端和服务器
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/chrony-1-720x340.jpeg)
In this tutorial, we will be discussing how to install and configure **Chrony** , an alternative NTP client and server for Unix-like systems. Chrony can synchronise the system clock faster with better time accuracy and it can be particularly useful for the systems which are not online all the time. Chrony is free, open source and supports GNU/Linux and BSD variants such as FreeBSD, NetBSD, macOS, and Solaris.
在这个教程中,我们会讨论如何安装和配置 **Chrony**,一个类 Unix 系统上可选的 NTP 客户端和服务器。Chrony 可以更快地同步系统时钟具有更好的时钟准确度并且对那些不是一直在线的系统特别有用。Chrony 是自由开源的,支持 GNU/Linux 和 BSD 衍生版,比如 FreeBSD、NetBSD、macOS 和 Solaris 等。
### Installing Chrony
### 安装 Chrony
Chrony is available in the default repositories of most Linux distributions. If youre on Arch Linux, run the following command to install it:
Chrony 可以从大多数 Linux 发行版的默认软件库中获得。如果你使用的是 Arch Linux运行下面的命令来安装它
```
$ sudo pacman -S chrony
```
On Debian, Ubuntu, Linux Mint:
在 Debian、Ubuntu、Linux Mint 上:
```
$ sudo apt-get install chrony
```
On Fedora:
在 Fedora 上:
```
$ sudo dnf install chrony
```
Once installed, start **chronyd.service** daemon if it is not started already:
安装完成后,如果 **chronyd.service** 守护进程尚未启动,请启动它:
```
$ sudo systemctl start chronyd.service
```
Make it to start automatically on every reboot using command:
使用下面的命令让它每次重启系统后自动运行:
```
$ sudo systemctl enable chronyd.service
```
To verify if the Chronyd.service has been started, run:
为了确认 Chronyd.service 已经启动,运行:
```
$ sudo systemctl status chronyd.service
```
If everything is OK, you will see an output something like below.
如果一切正常,你将看到类似下面的输出:
```
● chrony.service - chrony, an NTP client/server
@ -67,13 +67,13 @@ Oct 17 10:35:03 ubuntuserver chronyd[2482]: Selected source 91.189.89.199
Oct 17 10:35:06 ubuntuserver chronyd[2482]: Selected source 106.10.186.200
```
As you can see, Chrony service is started and working!
可以看到Chrony 服务已经启动并且正在工作!
### Configure Chrony
### 配置 Chrony
The NTP clients needs to know which NTP servers it should contact to get the current time. We can specify the NTP servers in the **server** or **pool** directive in the NTP configuration file. Usually, the default configuration file is **/etc/chrony/chrony.conf** or **/etc/chrony.conf** depending upon the Linux distribution version. For better reliability, it is recommended to specify at least three servers.
NTP 客户端需要知道它要连接到哪些 NTP 服务器来获取当前时间。我们可以在 NTP 配置文件中的 **server** 或者 **pool** 指令中指定 NTP 服务器。通常,默认的配置文件是 **/etc/chrony/chrony.conf** 或者 **/etc/chrony.conf**,具体取决于 Linux 发行版。为了获得更可靠的时间同步,建议指定至少三个服务器。
The following lines are just an example taken from my Ubuntu 18.04 LTS server.
下面几行是我的 Ubuntu 18.04 LTS 服务器上的一个示例。
```
[...]
@ -87,22 +87,19 @@ pool 2.ubuntu.pool.ntp.org iburst maxsources 2
[...]
```
As you see in the above output, [**NTP Pool Project**][1] has been set as the default time server. For those wondering, NTP pool project is the cluster of time servers that provides NTP service for tens of millions clients across the world. It is the default time server for Ubuntu and most of the other major Linux distributions.
从上面的输出中你可以看到,[**NTP Pool Project**][1] 已经被设置为默认的时间服务器。对于那些好奇的人NTP Pool Project 是一个时间服务器集群,为全世界数千万客户端提供 NTP 服务。它是 Ubuntu 以及其他主流 Linux 发行版的默认时间服务器。
Here,
在这里,
* **iburst** 选项用来加速初始的同步过程
* **maxsources** 代表 NTP 源的最大数量
* the **iburst** option is used to speed up the initial synchronisation.
* the **maxsources** refers the maximum number of NTP sources.
请确保你选择的 NTP 服务器同步良好、稳定且离你的位置较近,以便借助这些 NTP 源提升时间准确度。
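下面是一个最小的配置片段示例(假设直接使用 NTP Pool 的公共服务器,服务器名仅作示意,可按需替换为离你更近的源):
```
# /etc/chrony/chrony.conf 片段:指定三个 NTP 服务器
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
```
修改配置后,重启 chronyd 服务使其生效。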
### 在命令行中管理 Chronyd
Chrony 有一个命令行工具叫做 **chronyc** 用来控制和监控 **chrony** 守护进程chronyd
Please make sure that the NTP servers you have chosen are well synchronised, stable and close to your location to improve the accuracy of the time with NTP sources.
### Manage Chronyd from command line
Chrony has a command line utility named **chronyc** to control and monitor the **chrony** daemon (chronyd).
To check if **chrony** is synchronized, we can use the **tracking** command as shown below.
为了检查 **chrony** 是否已经同步,我们可以使用下面展示的 **tracking** 命令。
```
$ chronyc tracking
@ -121,7 +118,7 @@ Update interval : 515.1 seconds
Leap status : Normal
```
We can verify the current time sources that chrony uses with command:
我们可以使用以下命令确认 chrony 当前使用的时间源:
```
$ chronyc sources
@ -138,7 +135,7 @@ MS Name/IP address Stratum Poll Reach LastRx Last sample
^- ns2.pulsation.fr 2 10 377 311 -75ms[ -73ms] +/- 250ms
```
Chronyc utility can find the statistics of each sources, such as drift rate and offset estimation process, using **sourcestats** command.
chronyc 工具可以使用 **sourcestats** 命令查看每个时间源的统计信息,比如漂移速率和偏移估计。
```
$ chronyc sourcestats
@ -155,7 +152,7 @@ sin1.m-d.net 29 13 83m +0.049 6.060 -8466us 9940us
ns2.pulsation.fr 32 17 88m +0.784 9.834 -62ms 22ms
```
If your system is not connected to Internet, you need to notify Chrony that the system is not connected to the Internet. To do so, run:
如果你的系统没有连接到 Internet你需要告知 Chrony 这一点。为此,运行:
```
$ sudo chronyc offline
@ -163,7 +160,7 @@ $ sudo chronyc offline
200 OK
```
To verify the status of your NTP sources, simply run:
为了确认你的 NTP 源的状态,只需要运行:
```
$ chronyc activity
@ -175,16 +172,16 @@ $ chronyc activity
0 sources with unknown address
```
As you see, all my NTP sources are down at the moment.
可以看到,我的所有源此时都是离线状态。
Once youre connected to the Internet, just notify Chrony that your system is back online using command:
一旦重新连接到 Internet只需用以下命令告知 Chrony 你的系统已经恢复在线:
```
$ sudo chronyc online
200 OK
```
To view the status of NTP source(s), run:
为了查看 NTP 源的状态,运行:
```
$ chronyc activity
@ -196,7 +193,7 @@ $ chronyc activity
0 sources with unknown address
```
For more detailed explanation of all options and parameters, refer the man pages.
关于所有选项和参数的详细解释,请参考手册页:
```
$ man chronyc
@ -204,9 +201,9 @@ $ man chronyc
$ man chronyd
```
And, thats all for now. Hope this was useful. In the subsequent tutorials, we will see how to setup a local NTP server using Chrony and configure the clients to use it to synchronise time.
这就是文章的所有内容。希望对你有所帮助。在随后的教程中,我们会看到如何使用 Chrony 启动一个本地的 NTP 服务器并且配置客户端来使用这个服务器同步时间。
Stay tuned!
保持关注!
@ -216,7 +213,7 @@ via: https://www.ostechnix.com/chrony-an-alternative-ntp-client-and-server-for-u
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[zianglei](https://github.com/zianglei)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,88 @@
2018 年 10 月在 COPR 中值得尝试的 4 个很酷的新项目
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR 是软件的个人存储库的[集合][1],其中的软件不在标准的 Fedora 仓库中提供。某些软件不符合轻松打包的标准或者它可能不符合其他的 Fedora 标准尽管它是自由且开源的。COPR 可以在标准 Fedora 软件包之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也没有由项目自身签名。不过,它是尝试新软件或实验性软件的一种很好的方式。
这是 COPR 中一组新的有趣项目。
### GitKraken
[GitKraken][2] 是一个有用的 git 客户端,适合喜欢图形界面而非命令行的用户,它提供了你所期望的所有功能。此外GitKraken 还可以创建仓库和文件并带有内置编辑器。GitKraken 的一个有用功能是按行或按文件暂存改动,以及快速切换分支。但是,它在较大的项目上可能会出现性能问题。
![][3]
#### 安装说明
该仓库目前为 Fedora 27、28、29、Rawhide 以及 OpenSUSE Tumbleweed 提供 GitKraken。要安装 GitKraken请使用以下命令
```
sudo dnf copr enable elken/gitkraken
sudo dnf install gitkraken
```
### Music On Console
[Music On Console][4] 播放器(即 mocp是一个简单的控制台音频播放器。它有一个类似于 Midnight Commander 的界面并且很容易使用。你只需进入包含音乐的目录然后选择要播放的文件或目录。此外mocp 还提供了一组命令,允许直接从命令行进行控制。
![][5]
#### 安装说明
该仓库目前为 Fedora 28 和 29 提供 Music On Console 播放器。要安装 mocp请使用以下命令
```
sudo dnf copr enable Krzystof/Moc
sudo dnf install moc
```
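安装后的最简用法示意(假设音乐存放在 `~/Music` 目录中,目录名仅作示例):
```
$ cd ~/Music
$ mocp
```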
### cnping
[Cnping][6] 是一个小型的图形化 IPv4 ping 工具,可用于可视化地显示 RTT 的变化。它提供了选项来控制每个数据包之间的间隔以及发送的数据大小。除了显示的图表外cnping 还提供 RTT 和丢包的基本统计数据。
![][7]
#### 安装说明
该仓库目前为 Fedora 27、28、29 和 Rawhide 提供 cnping。要安装 cnping请使用以下命令
```
sudo dnf copr enable dreua/cnping
sudo dnf install cnping
```
### Pdfsandwich
[Pdfsandwich][8] 是一个为图像形式的文本 PDF 文件(如扫描的书籍)添加文本层的工具。它使用光学字符识别OCR创建一个额外的图层包含原始页面上已识别的文本。这对于复制和处理文本很有用。
#### 安装说明
该仓库目前为 Fedora 27、28、29、Rawhide 以及 EPEL 7 提供 pdfsandwich。要安装 pdfsandwich请使用以下命令
```
sudo dnf copr enable merlinm/pdfsandwich
sudo dnf install pdfsandwich
```
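安装后的基本用法大致如下(示意性示例,文件名为占位符;`-lang` 用于指定 OCR 识别语言,具体选项请以手册页为准):
```
$ pdfsandwich -lang eng scanned_book.pdf
```
处理完成后会生成一个带有可选中文本层的新 PDF 文件。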
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-try-copr-october-2018/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org
[b]: https://github.com/lujun9972
[1]: https://copr.fedorainfracloud.org/
[2]: https://www.gitkraken.com/git-client
[3]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-gitkraken.png
[4]: http://moc.daper.net/
[5]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-mocp.png
[6]: https://github.com/cnlohr/cnping
[7]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-cnping.png
[8]: http://www.tobias-elze.de/pdfsandwich/