mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-01-13 22:30:37 +08:00, commit f09c50c3fa

树莓派自建 NAS 云盘之——云盘构建
======

> 用自行托管的树莓派 NAS 云盘来保护数据的安全!

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_tree_clouds.png?itok=b_ftihhP)

在前面两篇文章中,我们讨论了用树莓派搭建一个 NAS 云盘所需要的一些 [软硬件环境及其操作步骤][1]。我们还制定了适当的 [备份策略][2] 来保护 NAS 上的数据。本文中,我们将介绍如何利用 [Nextcloud][3] 方便快捷地存储、获取以及分享你的数据。

![](https://opensource.com/sites/default/files/uploads/nas_part3.png)

### 必要的准备工作

### 安装 Nextcloud

为了在树莓派(参考 [第一篇][1] 中的步骤设置)中运行 Nextcloud,首先用 `apt` 命令安装以下依赖软件包。

```
sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl
```

其次,下载 Nextcloud。在树莓派中利用 `wget` 下载其 [最新的版本][5]。在 [第一篇][1] 文章中,我们将两个磁盘驱动器连接到树莓派,一个用于存储当前数据,另一个用于备份。这里在数据存储盘上安装 Nextcloud,以确保每晚自动备份数据。

```
sudo mkdir -p /nas/data/nextcloud
sudo chown -R www-data:www-data /nas/data/nextcloud
```

如上所述,Nextcloud 安装完毕。之前安装依赖软件包时就已经安装了 MySQL 数据库,用来存储 Nextcloud 的一些重要数据(例如,那些你创建的可以访问 Nextcloud 的用户的信息)。如果你更愿意使用 PostgreSQL 数据库,则上面的依赖软件包需要做一些调整。

以 root 权限启动 MySQL:

```
sudo mysql
```

这将会打开 SQL 提示符界面,在那里可以输入如下指令——使用数据库连接密码替换其中的占位符——为 Nextcloud 创建一个数据库。

```
CREATE USER nextcloud IDENTIFIED BY '<这里插入密码>';
CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO nextcloud;
```

按 `Ctrl+D` 或输入 `quit` 退出 SQL 提示符界面。

### Web 服务器配置

Nextcloud 可以配置运行于 Nginx 或者其他 Web 服务器环境中。但本文中,我决定在我的树莓派 NAS 中运行 Apache 服务器(如果你有其他效果更好的服务器方案,不妨也跟我分享一下)。

首先为你的 Nextcloud 域名创建一个虚拟主机,创建配置文件 `/etc/apache2/sites-available/001-nextcloud.conf`,在其中输入下面的参数内容。修改其中 `ServerName` 为你的域名。

```
<VirtualHost *:80>
```

```
a2ensite 001-nextcloud
sudo systemctl reload apache2
```

现在,你应该可以通过浏览器中输入域名访问到 web 服务器了。这里我推荐使用 HTTPS 协议而不是 HTTP 协议来访问 Nextcloud。一个简单而且免费的方法就是利用 [Certbot][7] 获取 [Let's Encrypt][6] 证书,然后设置定时任务自动续期。这样就避免了自签名证书等的麻烦。参考 [如何在树莓派中安装][8] Certbot。在配置 Certbot 的时候,你甚至可以配置将 HTTP 自动跳转到 HTTPS,例如访问 `http://nextcloud.pi-nas.com` 自动跳转到 `https://nextcloud.pi-nas.com`。注意,如果你的树莓派 NAS 运行在家庭路由器后面,别忘了设置路由器的 443 端口和 80 端口转发。

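如果不想依赖 Certbot 自动生成跳转规则,也可以手动添加一个只负责跳转的虚拟主机。下面是一个最小化的示意配置(其中的域名沿用上文的 nextcloud.pi-nas.com 示例,请换成你自己的域名,实际配置以 Certbot 生成的为准):

```
<VirtualHost *:80>
    ServerName nextcloud.pi-nas.com
    # 将所有 HTTP 请求 301 跳转到 HTTPS
    Redirect permanent / https://nextcloud.pi-nas.com/
</VirtualHost>
```

`Redirect` 指令由 Apache 自带的 mod_alias 模块提供,无需额外安装模块。
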
### 配置 Nextcloud

最后一步,通过浏览器访问 Nextcloud 来配置它。在浏览器中输入域名地址,填入上文中的数据库设置信息。这里,你可以创建 Nextcloud 管理员用户。默认情况下,数据保存目录在 Nextcloud 目录下,所以你也无需修改我们在 [第二篇][2] 一文中设置的备份策略。

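顺带一提,如果第二篇中设置的备份任务还没有交给 cron 自动执行,可以参考下面这条示意的 crontab 条目(其中 `/nas/backup.sh` 是假设的脚本路径,请换成你实际的备份脚本):

```
# 每天凌晨 2 点执行一次备份脚本(路径为假设值)
0 2 * * * /nas/backup.sh
```
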
然后,页面会跳转到 Nextcloud 登录界面,用刚才创建的管理员用户登录。在设置页面中会有基础操作教程和安全安装教程(这里是访问 `https://nextcloud.pi-nas.com/settings/admin`)。

恭喜你,到此为止,你已经成功在树莓派中安装了你自己的 Nextcloud 云盘。去 Nextcloud 主页 [下载 Nextcloud 客户端][9],客户端可以同步数据并且离线访问服务器。移动端甚至可以上传图片等资源,然后在电脑桌面端访问它们。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi

作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[jrg](https://github.com/jrglinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ntlx
[1]: https://linux.cn/article-10104-1.html?utm_source=index&utm_medium=more
[2]: https://linux.cn/article-10112-1.html
[3]: https://nextcloud.com/
[4]: https://sourceforge.net/p/ddclient/wiki/Home/
[5]: https://nextcloud.com/install/#instructions-server

三个开源的分布式追踪工具
======

> 这几个工具对复杂软件系统中的实时事件做了可视化,能帮助你快速发现性能问题。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)

分布式追踪系统能够从头到尾地追踪跨越了多个应用、服务、数据库以及像代理这样的中间件的分布式软件的请求。它能帮助你更深入地理解系统中到底发生了什么。追踪系统以图形化的方式,展示出每个已知步骤以及某个请求在每个步骤上的耗时。

用户可以通过这些展示来判断系统的哪个环节有延迟或阻塞,当请求失败时,运维和开发人员可以看到准确的问题源头,而不需要去测试整个系统,比如用二叉查找树的方法去定位问题。在开发迭代的过程中,追踪系统还能够展示出可能引起性能变化的环节。通过异常行为的警告自动地感知到性能的退化,总是比客户告诉你要好。

这种追踪是怎么工作的呢?给每个请求分配一个特殊 ID,这个 ID 通常会插入到请求头部中。它唯一标识了对应的事务。一般把事务叫做<ruby>踪迹<rt>trace</rt></ruby>,“踪迹”是整个事务的抽象概念。每一个“踪迹”由<ruby>单元<rt>span</rt></ruby>组成,“单元”代表着一次请求中真正执行的操作,比如一次服务调用,一次数据库请求等。每一个“单元”也有自己唯一的 ID。“单元”之下也可以创建子“单元”,子“单元”可以有多个父“单元”。

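作为示意,可以用 shell 直接构造这样的 ID。下面的例子按照 Zipkin 的 B3 传播头格式(`X-B3-TraceId` 为 16 字节、`X-B3-SpanId` 为 8 字节的十六进制串)生成踪迹 ID 和单元 ID,仅用于演示概念;实际生产中这些 ID 由追踪客户端库自动生成和传播:

```shell
# 生成 B3 格式的踪迹 ID(16 字节)和单元 ID(8 字节)
TRACE_ID=$(openssl rand -hex 16)
SPAN_ID=$(openssl rand -hex 8)
printf 'X-B3-TraceId: %s\n' "$TRACE_ID"
printf 'X-B3-SpanId: %s\n' "$SPAN_ID"
# 调用下游服务时带上这两个头,下游产生的“单元”就会挂到同一条“踪迹”上,例如:
# curl -H "X-B3-TraceId: $TRACE_ID" -H "X-B3-SpanId: $SPAN_ID" http://localhost:8080/api

```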
当一次事务(或者说踪迹)运行过之后,就可以在追踪系统的表示层上搜索了。有几个工具可以用作表示层,我们下文会讨论,不过,我们先看下面的图,它是我在 [Istio walkthrough][2] 视频教程中提到的 [Jaeger][1] 界面,展示了单个踪迹中的多个单元。很明显,这个图能让你一目了然地对事务有更深的了解。

![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png)

这个演示使用了 Istio 内置的 OpenTracing 实现,所以我甚至不需要修改自己的应用代码就可以获得追踪数据。我也用到了 Jaeger,它是兼容 OpenTracing 的。

那么 OpenTracing 到底是什么呢?我们来看看。

### OpenTracing API

[OpenTracing][3] 是源自 [Zipkin][4] 的规范,以提供跨平台兼容性。它提供了对厂商中立的 API,用来向应用程序添加追踪功能并将追踪数据发送到分布式的追踪系统。按照 OpenTracing 规范编写的库,可以被任何兼容 OpenTracing 的系统使用。采用这个开放标准的开源工具有 Zipkin、Jaeger 和 Appdash 等。甚至像 [Datadog][5] 和 [Instana][6] 这种付费工具也在采用。因为现在 OpenTracing 已经无处不在,这样的趋势有望继续发展下去。

### OpenCensus

OpenTracing 已经说过了,可 [OpenCensus][7] 又是什么呢?它在搜索结果中老是出现。它是一个和 OpenTracing 完全不同或者互补的竞争标准吗?

这个问题的答案取决于你的提问对象。我先尽我所能地解释一下它们的不同(按照我的理解):OpenCensus 更加全面或者说它包罗万象。OpenTracing 专注于建立开放的 API 和规范,而不是为每一种开发语言和追踪系统都提供开放的实现。OpenCensus 不仅提供规范,还提供开发语言的实现和连接协议,而且它不仅只做追踪,还引入了额外的度量指标,这些一般不在分布式追踪系统的职责范围。

使用 OpenCensus,我们能够在运行着应用程序的主机上查看追踪数据,但它也有个可插拔的导出器系统,用于导出数据到中心聚合器。目前 OpenCensus 团队提供的导出器包括 Zipkin、Prometheus、Jaeger、Stackdriver、Datadog 和 SignalFx,不过任何人都可以创建一个导出器。

依我看这两者有很多重叠的部分,没有哪个一定比另外一个好,但是重要的是,要知道它们做什么事情和不做什么事情。OpenTracing 主要是一个规范,具体的实现和独断的设计由其他人来做。OpenCensus 更加独断地为本地组件提供了全面的解决方案,但是仍然需要其他系统做远程的聚合。

#### Zipkin

Zipkin 是最早出现的这类工具之一。谷歌在 2010 年发表了介绍其内部追踪系统 Dapper 的[论文][8],Twitter 以此为基础开发了 Zipkin。Zipkin 的开发语言是 Java,用 Cassandra 或 ElasticSearch 作为可扩展的存储后端,这些选择能满足大部分公司的需求。Zipkin 支持的最低 Java 版本是 Java 6,它也使用了 [Thrift][9] 的二进制通信协议,Thrift 在 Twitter 的系统中很流行,现在作为 Apache 项目在托管。

这个系统包括上报器(客户端)、数据收集器、查询服务和一个 web 界面。Zipkin 只传输一个带事务上下文的踪迹 ID 来告知接收者追踪的进行,所以说在生产环境中是安全的。每一个客户端收集到的数据,会异步地传输到数据收集器。收集器把这些单元的数据存到数据库,web 界面负责用可消费的格式展示这些数据给用户。客户端传输数据到收集器有三种方式:HTTP、Kafka 和 Scribe。

[Zipkin 社区][10] 还提供了 [Brave][11],一个跟 Zipkin 兼容的 Java 客户端的实现。由于 Brave 没有任何依赖,所以它不会拖累你的项目,也不会使用跟你们公司标准不兼容的库来搞乱你的项目。除 Brave 之外,还有很多其他的 Zipkin 客户端实现,因为 Zipkin 和 OpenTracing 标准是兼容的,所以这些实现也能用到其他的分布式追踪系统中。流行的 Spring 框架中有一个叫 [Spring Cloud Sleuth][12] 的分布式追踪组件,它和 Zipkin 是兼容的。

#### Jaeger

[Jaeger][1] 来自 Uber,是一个比较新的项目,[CNCF][13](云原生计算基金会)已经把 Jaeger 托管为孵化项目。Jaeger 使用 Golang 开发,因此你不用担心在服务器上安装依赖的问题,也不用担心开发语言的解释器或虚拟机的开销。和 Zipkin 类似,Jaeger 也支持用 Cassandra 和 ElasticSearch 做可扩展的存储后端。Jaeger 也完全兼容 OpenTracing 标准。

Jaeger 的架构跟 Zipkin 很像,有客户端(上报器)、数据收集器、查询服务和一个 web 界面,不过它还有一个在各个服务器上运行着的代理,负责在服务器本地做数据聚合。代理通过一个 UDP 连接接收数据,然后分批处理,发送到数据收集器。收集器接收到的数据是 [Thrift][14] 协议的格式,它把数据存到 Cassandra 或者 ElasticSearch 中。查询服务能直接访问数据库,并给 web 界面提供所需的信息。

默认情况下,Jaeger 客户端不会采集所有的追踪数据,只抽样 0.1%(1000 个采 1 个)的追踪数据。对大多数系统来说,保留所有的追踪数据并传输的话就太多了。不过,通过配置代理可以调整这个值,客户端会从代理获取自己的配置。这个抽样并不是完全随机的,并且正在变得越来越好。Jaeger 使用概率抽样,试图对是否应该对新踪迹进行抽样进行有根据的猜测。[自适应采样已经在路线图当中][15],它将通过添加额外的、能够帮助做决策的上下文来改进采样算法。

#### Appdash

[Appdash][16] 也是一个用 Golang 写的分布式追踪系统,和 Jaeger 一样。Appdash 是 [Sourcegraph][17] 公司基于谷歌的 Dapper 和 Twitter 的 Zipkin 开发的。同样的,它也支持 OpenTracing 标准,不过这是后来添加的功能,依赖了一个与默认组件不同的组件,因此增加了风险和复杂度。

从高层次来看,Appdash 的架构主要有三个部分:客户端、本地收集器和远程收集器。因为没有很多文档,所以这个架构描述是基于对系统的测试以及查看源码。写代码时需要把 Appdash 的客户端添加进来。Appdash 提供了 Python、Golang 和 Ruby 的实现,不过 OpenTracing 库可以与 Appdash 的 OpenTracing 实现一起使用。客户端收集单元数据,并将它们发送到本地收集器。然后,本地收集器将数据发送到中心的 Appdash 服务器,这个服务器上运行着自己的本地收集器,它的本地收集器是其他所有节点的远程收集器。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/distributed-tracing-tools

作者:[Dan Barker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[belitex](https://github.com/belitex)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

如何在家中使用 SSH 和 SFTP 协议
======

> 通过 SSH 和 SFTP 协议,我们能够访问其他设备,有效而且安全地传输文件等等。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)

几年前,我决定配置另外一台电脑,以便我能在工作时访问它来传输我所需要的文件。要做到这一点,最基本的一步是要求你的网络提供商(ISP)提供一个固定的 IP 地址。

有一个不必要但很重要的步骤,就是保证你的这个可以访问的系统是安全的。在我的这种情况下,我计划只在工作场所访问它,所以我能够限定访问的 IP 地址。即使如此,你依然要尽可能多地采用安全措施。一旦你建立起来这个系统,全世界的人们马上就能尝试访问你的系统。这是非常令人惊讶及恐慌的,你能通过日志文件来发现这一点。我推测有探测机器人在尽其所能地搜索那些没有安全措施的系统。

在我设置好系统不久后,我觉得这种访问没什么大用,为此,我将它关闭了以便不再为它操心。尽管如此,只要架设了它,在家庭网络中使用 SSH 和 SFTP 还是有点用的。

当然,有一个必备条件,这个另外的电脑必须已经开机了,至于电脑是否登录与否无所谓的。你也需要知道其 IP 地址。有两个方法能够知道它:一个是通过浏览器访问你的路由器,一般情况下路由器的地址类似于 192.168.1.254。通过一些搜索,很容易找出当前开机并且接在 eth0 或者 wifi 上的系统。不过,如何识别出你要找的那台电脑可能有些难度。

更容易找到这个电脑的方式是,打开 shell,输入:

```
ifconfig
```

命令会输出一些信息,你所需要的信息在 `inet` 后面,看起来类似 192.168.1.234。当你找到这个地址后,回到你要访问这台主机的客户端电脑,在命令行中输入:

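如果不想用肉眼找,也可以让 `awk` 帮你把 `inet` 后面的地址挑出来。下面的例子为了便于独立演示,把一段 ifconfig 风格的示例输出存进了变量;实际使用时把 `echo` 那部分换成 `ifconfig <接口名>` 即可:

```shell
# 一段 ifconfig 风格的示例输出(演示用;实际场景中请直接使用 ifconfig 的输出)
sample='inet 192.168.1.234  netmask 255.255.255.0  broadcast 192.168.1.255'

# 打印 "inet" 后面紧跟的字段,即 IP 地址
addr=$(echo "$sample" | awk '{ for (i = 1; i < NF; i++) if ($i == "inet") print $(i+1) }')
echo "$addr"
# → 192.168.1.234

```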
```
ssh gregp@192.168.1.234
```

如果要让上面的命令能够正常执行,`gregp` 必须是该主机系统中正确的用户名。你会被询问其密码。如果你键入的密码和用户名都是正确的,你将通过 shell 环境连接上这台电脑。我坦诚,对于 SSH 我并不是经常使用的。我偶尔使用它,这样我能够运行 `dnf` 来更新我常用电脑之外的其它电脑。通常,我用 SFTP:

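另外,如果经常连接同一台主机,可以把主机信息写进客户端的 `~/.ssh/config`,之后直接输入 `ssh homebox` 就能连上(其中 `homebox` 这个别名、用户名和 IP 都只是示例,请换成你自己的):

```
Host homebox
    HostName 192.168.1.234
    User gregp
```
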
```
sftp gregp@192.168.1.234
```

我更需要用简单的方法来把一个文件传输到另一个电脑。相对于闪存盘和额外的设备,它更加方便,耗时更少。

一旦连接建立成功,SFTP 有两个基本的命令:`get`,从主机接收文件;`put`,向主机发送文件。在连接之前,我经常在客户端移动到我想接收或者传输文件的文件夹下。在连接之后,你将处于一个顶层目录里,比如 `home/gregp`。连接成功后,你可以像在客户端一样使用 `cd`,改变你在主机上的工作路径。你也许需要用 `ls` 来确认你的位置。

如果你想改变你的客户端的工作目录,用 `lcd` 命令(即 local change directory 的意思)。同样的,用 `lls` 来显示客户端工作目录的内容。

如果主机上没有你想要的目录名,你该怎么办?用 `mkdir` 在主机上创建一个新的目录。或者你可以将整个目录的文件全拷贝到主机:

```
put -r thisDir/
```

这将在主机上创建该目录并复制它的全部文件和子目录到主机上。这种传输是非常快速的,能达到硬件的上限,不会像在互联网上传输时那样遇到网络瓶颈。要查看你能在 SFTP 会话中使用的命令列表:

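如果每次都要重复同样的几步操作,还可以把命令写进一个批处理文件,用 `sftp -b` 一次执行完。下面是一个示意(最后的 `sftp` 调用被注释掉了,因为它需要一台真实可连的主机;用户名和 IP 请换成你自己的):

```shell
# 把要执行的 SFTP 命令写进批处理文件
cat > batch.txt <<'EOF'
cd uploads
put -r thisDir/
bye
EOF

# 连接主机并依次执行文件中的命令(需要可连通的主机,这里仅示意)
# sftp -b batch.txt gregp@192.168.1.234

```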
```
man sftp
```

我也能够在我的电脑上的 Windows 虚拟机内使用 SFTP,这是配置虚拟机而不是双系统的另外一个优势。这让我能够在系统的 Linux 部分移入或者移出文件,而我只需要在 Windows 中使用一个客户端就行。

你能够使用 SSH 或 SFTP 访问通过网线或者 WIFI 连接到你路由器的任何设备。这里,我使用了一个叫做 [SSHDroid][1] 的应用,能够在被动模式下运行 SSH。换句话来说,你能够用你的电脑访问作为主机的 Android 设备。近来我还发现了另外一个应用,[Admin Hands][2],不管你的客户端是平板还是手机,都能使用 SSH 或者 SFTP 操作。这个应用对于备份和分享手机照片是极好的。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/ssh-sftp-home-network

作者:[Greg Pittman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[singledo](https://github.com/singledo)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/greg-p
[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid
[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US

使用 Python 为你的油箱加油
======

> 我来介绍一下我是如何使用 Python 来节省成本的。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB)

```
while i < 21: # 20 次迭代(加油次数)
```

如你所见,这个调整会令混合汽油号数始终略高于 91。当然,我的油量表并没有 1/12 的刻度,但是 7/12 略小于 5/8,我可以近似地计算。

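这个“略高于 91”可以用一个简单的加权平均来验证。下面用 `awk` 算一下假设油箱里是 7/12 的 93 号加 5/12 的 89 号时的混合辛烷值(这个比例只是示例,与文中脚本的具体迭代无关):

```shell
# 按体积加权平均估算混合辛烷值:7/12 的 93 号 + 5/12 的 89 号
awk 'BEGIN { printf "%.2f\n", (7/12) * 93 + (5/12) * 89 }'
# → 91.33
```

结果确实略高于 91,与上文的结论一致。
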
一个更简单的方案是每次都首先加满 93 号汽油,然后在油箱半满时加入 89 号汽油直到耗尽,这可能会是我的常规方案。就我个人而言,这种方法并不太好,有时甚至会产生一些麻烦。但对于长途旅行来说,这种方案会相对简便一些。有时我也会因为油价突然下跌而购买一些汽油,所以,这个方案是我可以考虑的一系列选项之一。

当然最重要的是:开车不写码,写码不开车!

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/python-gas-pump

作者:[Greg Pittman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[HankChow](https://github.com/HankChow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

如何创建和维护你自己的 man 手册
======

![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png)

我们已经讨论了一些 [man 手册的替代方案][1]。这些替代方案主要用于学习简洁的 Linux 命令示例,而无需通过全面而过于详细的手册页。如果你正在寻找一种快速而简单的方法来轻松快速地学习 Linux 命令,那么这些替代方案值得尝试。现在,你可能正在考虑:如何为 Linux 命令创建自己的 man 式的帮助页面?这时 “Um” 就派上用场了。Um 是一个命令行实用程序,可以用于轻松创建和维护包含你到目前为止所了解的所有命令的 man 页面。

通过创建自己的手册页,你可以在手册页中避免大量不必要的细节,并且只包含你需要记住的内容。如果你想创建自己的一套 man 式的页面,“Um” 也能为你提供帮助。在这个简短的教程中,我们将学习如何安装 “Um” 命令以及如何创建自己的 man 手册页。

### 安装 Um

Um 适用于 Linux 和 Mac OS。目前,它只能在 Linux 系统中使用 Linuxbrew 软件包管理器来进行安装。如果你尚未安装 Linuxbrew,请参考以下链接:

- [Linuxbrew:一个用于 Linux 和 MacOS 的通用包管理器][3]

安装 Linuxbrew 后,运行以下命令安装 Um 实用程序。

```
$ brew install sinclairtarget/wst/um
```

如果你看到类似下面的输出,恭喜你!Um 已经安装好并且可以使用了。

```
[...]
Emacs Lisp files have been installed to:
==> um
Bash completion has been installed to:
  /home/linuxbrew/.linuxbrew/etc/bash_completion.d
```

在制作你的 man 手册页之前,你需要为 Um 启用 bash 补全。

要开启 bash 补全,首先你需要打开 `~/.bash_profile` 文件:

```
$ nano ~/.bash_profile
```

并在其中添加以下内容:

```
if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
  . $(brew --prefix)/etc/bash_completion.d/um-completion.sh
fi
```

保存并关闭文件。运行以下命令以使更改生效。

```
$ source ~/.bash_profile
```

准备工作全部完成。让我们继续创建我们的第一个 man 手册页。

### 创建并维护自己的 man 手册

如果你想为 `dpkg` 命令创建自己的 man 手册,请运行:

```
$ um edit dpkg
```

上面的命令将在默认编辑器中打开 markdown 模板:

![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png)

我的默认编辑器是 Vi,因此上面的命令会在 Vi 编辑器中打开它。现在,开始在此模板中添加有关 `dpkg` 命令的所有内容。

下面是一个示例:

![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png)

正如你在上图的输出中看到的,我为 `dpkg` 命令添加了概要、描述和两个参数选项。你可以在 man 手册中添加你所需要的所有部分。不过你也要确保为每个部分提供了适当且易于理解的标题。完成后,保存并退出文件(如果使用 Vi 编辑器,请按 `ESC` 键并键入 `:wq`)。

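如果你暂时没有装好 Um,也可以参考下面这个假设性的页面内容,感受一下写出来大概是什么样子(具体的段落结构以 Um 实际生成的模板为准):

```
# dpkg

Debian 的软件包管理器。

## SYNOPSIS

dpkg [options] action

## OPTIONS

`-i <package-file>`
  安装指定的软件包文件。

`-r <package>`
  移除已安装的软件包(保留配置文件)。
```
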
最后,使用以下命令查看新创建的 man 手册页:

```
$ um dpkg
```

![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png)

如你所见,`dpkg` 的 man 手册页看起来与官方手册页完全相同。如果要在手册页中编辑和/或添加更多详细信息,请再次运行相同的命令并添加更多详细信息。

```
$ um edit dpkg
```

要使用 Um 查看新创建的 man 手册页列表,请运行:

```
$ um list
```

所有手册页将保存在主目录中名为 `.um` 的目录下。

以防万一,如果你不想要某个特定页面,只需删除它,如下所示。

```
$ um rm dpkg
```

要查看帮助部分和所有可用的常规选项,请运行:

```
Subcommands:
  um topics                  List all topics.
  um (c)onfig [config key]   Display configuration environment.
  um (h)elp [sub-command]    Display this help message, or the help message for a sub-command.
```

### 配置 Um

```
pager = less
pages_directory = /home/sk/.um/pages
default_topic = shell
pages_ext = .md
```

在此文件中,你可以根据需要编辑和更改 `pager`、`editor`、`default_topic`、`pages_directory` 和 `pages_ext` 选项的值。比如说,如果你想在 [Dropbox][2] 文件夹中保存新创建的 Um 页面,只需更改 `~/.um/umconfig` 文件中 `pages_directory` 的值,将其改为 Dropbox 文件夹即可。

```
pages_directory = /Users/myusername/Dropbox/um
```

这就是全部内容,希望这些能对你有用,更多好的内容敬请关注!

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/

作者:[SK][a]
选题:[lujun9972][b]
译者:[way-ww](https://github.com/way-ww)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
[3]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/

Minikube 入门:笔记本上的 Kubernetes
======

> 运行 Minikube 的分步指南。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ)

在 [Hello Minikube][1] 教程页面上,Minikube 被宣传为基于 Docker 运行 Kubernetes 的一种简单方法。虽然该文档非常有用,但它主要是为 MacOS 编写的。你可以深入挖掘在 Windows 或某个 Linux 发行版上的使用说明,但它们不是很清楚。许多文档都是针对 Debian/Ubuntu 用户的,比如[安装 Minikube 的驱动程序][2]。

这篇指南旨在让 Minikube 在基于 RHEL/Fedora/CentOS 的操作系统上更容易安装。

### 先决条件

1. 你已经[安装了 Docker][3]。
2. 你的计算机是一个基于 RHEL/CentOS/Fedora 的工作站。
3. 你已经[安装了正常运行的 KVM2 虚拟机管理程序][4]。
4. 你有一个可以工作的 docker-machine-driver-kvm2。以下命令将安装该驱动程序:

```
curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
&& chmod +x docker-machine-driver-kvm2 \
&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
&& rm docker-machine-driver-kvm2
```

### 下载、安装和启动 Minikube

1、为你即将下载的两个文件创建一个目录,两个文件分别是:[minikube][5] 和 [kubectl][6]。

2、打开终端窗口并运行以下命令来安装 minikube。

```
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
```

请注意,minikube 的二进制文件名(例如 minikube-linux-amd64)可能因计算机的架构而有所不同。

3、用 `chmod` 加执行权限。

```
chmod +x minikube
```

4、将文件移动到 `/usr/local/bin` 路径下,以便你能将其作为命令运行。

```
mv minikube /usr/local/bin
```

5、使用以下命令安装 kubectl(类似于 minikube 的安装过程)。

```
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
```

这里内嵌的 `curl` 命令用于确定 Kubernetes 的最新稳定版本号。

6、用 `chmod` 给 `kubectl` 加执行权限。

```
chmod +x kubectl
```

7、将 `kubectl` 移动到 `/usr/local/bin` 路径下作为命令运行。

```
mv kubectl /usr/local/bin
```

8、运行 `minikube start` 命令。为此,你需要有虚拟机管理程序。我使用过 KVM2,你也可以使用 VirtualBox。确保是以普通用户而不是 root 身份运行以下命令,以便配置信息存储在普通用户名下而不是 root 名下。

```
minikube start --vm-driver=kvm2
```

这可能需要一段时间,请耐心等待。

9、Minikube 应该会完成下载并启动。使用以下命令确认是否成功。

```
cat ~/.kube/config
```

10、执行以下命令以使用 Minikube 作为当前上下文环境。上下文环境决定了 `kubectl` 与哪个集群交互。你可以在 `~/.kube/config` 文件中查看所有可用的上下文环境。

```
kubectl config use-context minikube
```

11、再次查看 `config` 文件以检查 Minikube 的上下文环境是否存在。

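不想在整个文件里翻找的话,也可以直接过滤出 `current-context` 那一行。下面为了可独立演示,先写入了一个最小化的示例配置;实际使用时把文件名换成 `~/.kube/config` 即可:

```shell
# 一个最小化的 kubeconfig 示例(演示用;实际请直接读取 ~/.kube/config)
cat > sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
current-context: minikube
EOF

# 过滤出当前上下文环境
grep '^current-context:' sample-kubeconfig
# → current-context: minikube

```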
```
cat ~/.kube/config
```

12、最后,运行以下命令打开浏览器查看 Kubernetes 仪表板。

```
minikube dashboard
```

现在 Minikube 已启动并运行,请阅读[通过 Minikube 在本地运行 Kubernetes][7] 这篇官网教程开始使用它。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/getting-started-minikube

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://kubernetes.io/docs/tutorials/hello-minikube
[2]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md
[3]: https://docs.docker.com/install
[4]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver
[5]: https://github.com/kubernetes/minikube/releases
[6]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl
[7]: https://kubernetes.io/docs/setup/minikube

translating by ypingcn

Creator of the World Wide Web is Creating a New Decentralized Web
======

**Creator of the world wide web, Tim Berners-Lee has unveiled his plans to create a new decentralized web where the data will be controlled by the users.**

We already have nice things, and other reasons not to write in-house ops tools
======

Let's look at the pitfalls of writing in-house ops tools, the circumstances that justify it, and how to do it better.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tool-hammer-nail-build-broken.png?itok=91xn-5wI)

When I was an ops consultant, I had the "great fortune" of seeing the dark underbelly of many companies in a relatively short period of time. Such fortune was exceptionally pronounced on one client engagement where I became the maintainer of an in-house deployment tool that had bloated to touch nearly every piece of infrastructure—despite lacking documentation and testing. Dismayed at the impossible task of maintaining this beast while tackling the real work of improving the product, I began reviewing my old client projects and probing my ops community for their strategies. What I found was an epidemic of "[not invented here][1]" (NIH) syndrome and a lack of collaboration with the broader community.

### The problem with NIH

One of the biggest problems of NIH is the time suck for engineers. Instead of working on functionality that adds value to the business, they're adding features to tools that solve standard problems such as deployment, continuous integration (CI), and configuration management.

This is a serious issue at small or midsized startups, where new hires need to hit the ground running. If they have to learn a completely new toolset, rather than drawing from their experience with industry-standard tools, the time it takes them to become useful increases dramatically. While the new hires are learning the in-house tools, the company remains reliant on the handful of people who wrote the tools to document, train, and troubleshoot them. Heaven forbid one of those engineers succumbs to [the bus factor][2], because the possibility of getting outside help if they forgot to document something is zero.

### Do you need to roll it yourself?

Before writing your own ops tool, ask yourself the following questions:

  * Have we polled the greater ops community for solutions?
  * Have we compared the costs of proprietary tools to the estimated engineering time needed to maintain an in-house solution?
  * Have we identified open source solutions, even those that lack desired features, and attempted to contribute to them?
  * Can we fork any open source tools that are well-written but unmaintained?

If you still can't find a tool that meets your needs, you'll have to roll your own.

### Tips for rolling your own

Here's a checklist for rolling your own solutions:

  1. In-house tooling should not be exempt from the high standards you apply to the rest of your code. Write it like you're going to open source it.
  2. Make sure you allow time in your sprints to work on feature requests, and don't allow features to be rushed in before proper testing and documentation.
  3. Keep it small. It's going to be much harder to exact any kind of exit strategy if your tool is a monstrosity that touches everything.
  4. Track your tool's usage and prune features that aren't actively utilized.

### Have an exit strategy

Open sourcing your in-house tool is not an exit strategy per se, but it may help you get outside contributors to free up your engineers' time. This is the more difficult strategy and will take some extra care and planning. Read "[Starting an Open Source Project][3]" and "[So You've Decided To Open-Source A Project At Work. What Now?][4]" before committing to this path. If you're interested in a cleaner exit, set aside time each quarter to research and test new open source replacements.

Regardless of which path you choose, explicitly stating that an in-house solution is not the preferred state—early in its development—should clear up any confusion and prevent the issue of changing directions from becoming political.

Sabice Arkenvirr will present [We Already Have Nice Things, Use Them!][5] at [LISA18][6], October 29-31 in Nashville, Tennessee, USA.

|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/nice-things
|
||||
|
||||
作者:[Sabice Arkenvirr][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/vishuzdelishuz
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Not_invented_here
|
||||
[2]: https://en.wikipedia.org/wiki/Bus_factor
|
||||
[3]: https://opensource.guide/starting-a-project/
|
||||
[4]: https://www.smashingmagazine.com/2013/12/open-sourcing-projects-guide-getting-started/
|
||||
[5]: https://www.usenix.org/conference/lisa18/presentation/arkenvirr
|
||||
[6]: https://www.usenix.org/conference/lisa18
|
@ -0,0 +1,93 @@
|
||||
Think global: How to overcome cultural communication challenges
|
||||
======
|
||||
Use these tips to ensure that every member of your global development team feels involved and understood.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel)
|
||||
|
||||
A few weeks ago, I witnessed an interesting interaction between two work colleagues—Jason, who is from the United States; and Raj, who was visiting from India.
|
||||
|
||||
Raj typically calls into a daily standup meeting at 9:00am US Central Time from India, but since he was in the US, he and his teammates headed toward the scrum area for the meeting. Jason stopped Raj and said, “Raj, where are you going? Don’t you always call into the stand-up? It would feel strange if you don’t call in.” Raj responded, “Oh, is that so? No worries,” and headed back to his desk to call into the meeting.
|
||||
|
||||
I went to Raj’s desk. “Hey, Raj, why aren’t you going to the daily standup?” Raj replied, “Jason asked me to call in.” Meanwhile, Jason was waiting for Raj to come to the standup.
|
||||
|
||||
What happened here? Jason was obviously joking when he made the remark about Raj calling into the meeting. But how did Raj miss this?
|
||||
|
||||
Jason’s statement was meant as a joke, but Raj took it literally. This was a clear example of a misunderstanding that occurred due to unfamiliarity with each other’s cultural context.
|
||||
|
||||
I often encounter emails that end with “Please revert back to me.” At first, this phrase left me puzzled. I thought, "What changes do they want me to revert?" Finally, I figured out that “please revert” means “Please reply.”
|
||||
|
||||
In his TED talk, “[Managing Cross Cultural Remote Teams,][1]” Ricardo Fernandez describes an interaction with a South African colleague who ended an IM conversation with “I’ll call you just now.” Ricardo went back to his office and waited for the call. After fifteen minutes, he called his colleague: “Weren’t you going to call me just now?” The colleague responded, “Yes, I was going to call you just now.” That's when Ricardo realized that to his South African colleague, the phrase “just now” meant “sometime in the future.”
|
||||
|
||||
In today's workplace, our colleagues may not be located in the same office, city, or even country. A growing number of tech companies have a global workforce composed of employees with varied experiences and perspectives. This diversity allows companies to compete in the rapidly evolving technological environment.
|
||||
|
||||
But geographically dispersed teams can face challenges. Managing and maintaining high-performing development teams is difficult even when the members are co-located; when team members come from different backgrounds and locations, that makes it even harder. Communication can deteriorate, misunderstandings can happen, and teams may stop trusting each other—all of which can affect the success of the company.
|
||||
|
||||
What factors can cause confusion in global communication? In her book, “[The Culture Map][2],” Erin Meyer presents eight scales into which all global cultures fit. We can use these scales to improve our relationships with international colleagues. She identifies the United States as a very low-context culture in the communication scale. In contrast, Japan is identified as a high-context culture.
|
||||
|
||||
What does it mean to be a high- or low-context culture? In the United States, children learn to communicate explicitly: “Say what you mean; mean what you say” is a common principle of communication. On the other hand, Japanese children learn to communicate effectively by mastering the ability to “read the air.” That means they are able to read between the lines and pick up on social cues when communicating.
|
||||
|
||||
Most Asian cultures follow the high-context style of communication. Not surprisingly, the United States, a young country composed of immigrants, follows a low-context culture: Since the people who immigrated to the United States came from different cultural backgrounds, they had no choice but to communicate explicitly and directly.
|
||||
|
||||
### The three R’s
|
||||
|
||||
How can we overcome challenges in cross-cultural communication? Americans communicating with Japanese colleagues, for example, should pay attention to the non-verbal cues, while Japanese communicating with Americans should prepare for more direct language. If you are facing a similar challenge, follow these three steps to communicate more effectively and improve relationships with your international colleagues.
|
||||
|
||||
#### Recognize the differences in cultural context
|
||||
|
||||
The first step toward effective cross-cultural communication is to recognize that there are differences. Start by increasing your awareness of other cultures.
|
||||
|
||||
#### Respect the differences in cultural context
|
||||
|
||||
Once you become aware that differences in cultural context can affect cross-cultural communication, the next step is to respect these differences. When you notice a different style of communication, learn to embrace the difference and actively listen to the other person’s point of view.
|
||||
|
||||
#### Reconcile the differences in cultural context
|
||||
|
||||
Merely recognizing and respecting cultural differences is not enough; you must also learn how to reconcile the cultural differences. Understanding and being empathetic towards the other culture will help you reconcile the differences and learn how to use them to better advance productivity.
|
||||
|
||||
### 5 ways to improve communications for cultural context
|
||||
|
||||
Over the years, I have incorporated various approaches, tips, and tricks to strengthen relationships among team members across the globe. These approaches have helped me overcome communication challenges with global colleagues. Here are a few examples:
|
||||
|
||||
#### Always use video conferencing when communicating with global teammates
|
||||
|
||||
Studies show that about 55% of communication is non-verbal. Body language offers many subtle cues that can help you decipher messages, and video conferencing enables geographically dispersed team members to see each other. Videoconferencing is my default choice when conducting remote meetings.
|
||||
|
||||
#### Ensure that every team member gets an opportunity to share their thoughts and ideas
|
||||
|
||||
Although I prefer to conduct meetings using video conferencing, this is not always possible. If video conferencing is not a common practice at your workplace, it might take some effort to get everyone comfortable with the concept. Start by encouraging everyone to participate in audio meetings.
|
||||
|
||||
One of our remote team members, who frequently met with us in audio conferences, mentioned that she often wanted to share ideas and contribute to the meeting but since we couldn’t see her and she couldn’t see us, she had no idea when to start speaking. If you are using audio conferencing, one way to mitigate this is to ensure that every team member gets an opportunity to share their ideas.
|
||||
|
||||
#### Learn from one another
|
||||
|
||||
Leverage your international friends to learn about their cultural context. This will help you interact more effectively with colleagues from these countries. I have friends from South Asia and South America who have helped me better understand their cultures, and this knowledge has helped me professionally.
|
||||
|
||||
For programmers, I recommend conducting code reviews with your global peers. This will help you understand how those from different cultures give and receive feedback, persuade others, and make technical decisions.
|
||||
|
||||
#### Be empathetic
|
||||
|
||||
Empathy is the key to strong relationships. The more you are able to put yourself in someone else's shoes, the better able you will be to gain trust and build long-lasting connections. Encourage “water-cooler” conversations among your global colleagues by allocating the first few minutes of each meeting for small talk. This offers the additional benefit of putting everyone in a more relaxed mindset. If you manage a global team, make sure every member feels included in the discussion.
|
||||
|
||||
#### Meet your global colleagues in person
|
||||
|
||||
The best way to build long-lasting relationships is to meet your team members in person. If your company can afford it, arrange for this to happen. Meeting colleagues with whom you have been working will likely strengthen your relationship with them. The companies I have worked for have a strong record of periodically sending US team members to other countries and global colleagues to the US office.
|
||||
|
||||
Another way to bring teams together is to attend conferences. This not only creates educational and training opportunities, but you can also carve out some in-person team time.
|
||||
|
||||
In today's increasingly global economy, it is becoming more important for companies to maintain a geographically diverse workforce to remain competitive. Although global teams can face communication challenges, it is possible to maintain a high-performing development team despite geographical and cultural differences. Share some of the techniques you use in the comments.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/think-global-communication-challenges
|
||||
|
||||
作者:[Avindra Fernando][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/avindrafernando
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.youtube.com/watch?v=QIoAkFpN8wQ
|
||||
[2]: https://www.amazon.com/The-Culture-Map-Invisible-Boundaries/dp/1610392507
|
@ -1,3 +1,5 @@
|
||||
fuowang 翻译中
|
||||
|
||||
9 Best Free Video Editing Software for Linux In 2017
|
||||
======
|
||||
**Brief: Here are best video editors for Linux, their feature, pros and cons and how to install them on your Linux distributions.**
|
||||
|
File diff suppressed because it is too large
Load Diff
@ -1,3 +1,4 @@
|
||||
LuuMing translating
|
||||
Setting Up a Timer with systemd in Linux
|
||||
======
|
||||
|
||||
|
@ -1,299 +0,0 @@
|
||||
Translating by DavidChenLiang
|
||||
|
||||
CLI: improved
|
||||
======
|
||||
I'm not sure many web developers can get away without visiting the command line. As for me, I've been using the command line since 1997, first at university, when I felt like both a super-cool l33t hacker and, at the same time, utterly out of my depth.
|
||||
|
||||
Over the years my command line habits have improved and I often search for smarter tools for the jobs I commonly do. With that said, here's my current list of improved CLI tools.
|
||||
|
||||
|
||||
### Ignoring my improvements
|
||||
|
||||
In a number of cases I've aliased the new and improved command line tool over the original (as with `cat` and `ping`).
|
||||
|
||||
If I want to run the original command, which is sometimes I do need to do, then there's two ways I can do this (I'm on a Mac so your mileage may vary):
|
||||
```
|
||||
$ \cat # ignore aliases named "cat" - explanation: https://stackoverflow.com/a/16506263/22617
|
||||
$ command cat # ignore functions and aliases
|
||||
|
||||
```
|
||||
|
||||
### bat > cat
|
||||
|
||||
`cat` is used to print the contents of a file, but given more time spent in the command line, features like syntax highlighting come in very handy. I found [ccat][3] which offers highlighting then I found [bat][4] which has highlighting, paging, line numbers and git integration.
|
||||
|
||||
The `bat` command also allows me to search during output (only if the output is longer than the screen height) using the `/` key binding (similar to searching in `less`).
|
||||
|
||||
![Simple bat output][5]
|
||||
|
||||
I've also aliased `bat` to the `cat` command:
|
||||
```
|
||||
alias cat='bat'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][4]
|
||||
|
||||
### prettyping > ping
|
||||
|
||||
`ping` is incredibly useful, and probably my go-to tool for those "oh crap is X down/does my internet work!!!" moments. But `prettyping` ("pretty ping", not "pre typing"!) gives ping a really nice output and just makes me feel like the command line is a bit more welcoming.
|
||||
|
||||
![/images/cli-improved/ping.gif][6]
|
||||
|
||||
I've also aliased `ping` to the `prettyping` command:
|
||||
```
|
||||
alias ping='prettyping --nolegend'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][7]
|
||||
|
||||
### fzf > ctrl+r
|
||||
|
||||
In the terminal, using `ctrl+r` will allow you to [search backwards][8] through your history. It's a nice trick, albeit a bit fiddly.
|
||||
|
||||
The `fzf` tool is a **huge** enhancement on `ctrl+r`. It's a fuzzy search against the terminal history, with a fully interactive preview of the possible matches.
|
||||
|
||||
In addition to searching through the history, `fzf` can also preview and open files, which is what I've done in the video below:
|
||||
|
||||
For this preview effect, I created an alias called `preview` which combines `fzf` with `bat` for the preview and a custom key binding to open VS Code:
|
||||
```
|
||||
alias preview="fzf --preview 'bat --color \"always\" {}'"
|
||||
# add support for ctrl+o to open selected file in VS Code
|
||||
export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'"
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][9]
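If you install `fzf` from its repo, the install script offers to wire the `ctrl+r` binding into your shell config; the typical fragment it appends (the path is the installer's default, so treat it as an assumption) looks like:

```
# Load fzf key bindings and fuzzy completion if the install script put them here
[ -f ~/.fzf.bash ] && source ~/.fzf.bash
```

With that in place, `ctrl+r` drops you into the fuzzy history search instead of the stock incremental search.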
|
||||
|
||||
### htop > top
|
||||
|
||||
`top` is my go-to tool for quickly diagnosing why the CPU on the machine is running hard or my fan is whirring. I also use these tools in production. Annoyingly (to me!) `top` on the Mac is vastly different from (and, IMHO, inferior to) `top` on Linux.
|
||||
|
||||
However, `htop` is an improvement on both regular `top` and crappy-mac `top`. Lots of colour coding, keyboard bindings and different views which have helped me in the past to understand which processes belong to which.
|
||||
|
||||
Handy key bindings include:
|
||||
|
||||
* P - sort by CPU
|
||||
* M - sort by memory usage
|
||||
* F4 - filter processes by string (to narrow to just "node" for instance)
|
||||
* space - mark a single process so I can watch if the process is spiking
|
||||
|
||||
|
||||
|
||||
![htop output][10]
|
||||
|
||||
There is a weird bug in Mac Sierra that can be overcome by running `htop` as root (I can't remember exactly what the bug is, but this alias fixes it - though annoying that I have to enter my password every now and again):
|
||||
```
|
||||
alias top="sudo htop" # alias top and fix high sierra bug
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][11]
|
||||
|
||||
### diff-so-fancy > diff
|
||||
|
||||
I'm pretty sure I picked this one up from Paul Irish some years ago. Although I rarely fire up `diff` manually, my git commands use diff all the time. `diff-so-fancy` gives me not only colour coding but also character-level highlighting of changes.
|
||||
|
||||
![diff so fancy][12]
|
||||
|
||||
Then in my `~/.gitconfig` I have included the following entry to enable `diff-so-fancy` on `git diff` and `git show`:
|
||||
```
|
||||
[pager]
|
||||
diff = diff-so-fancy | less --tabs=1,5 -RFX
|
||||
show = diff-so-fancy | less --tabs=1,5 -RFX
|
||||
|
||||
```
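The same pager settings can also be applied from the command line rather than editing the file by hand; a sketch, assuming `diff-so-fancy` is on your PATH:

```shell
# Equivalent to the ~/.gitconfig [pager] entries above, set via git config
git config --global pager.diff 'diff-so-fancy | less --tabs=1,5 -RFX'
git config --global pager.show 'diff-so-fancy | less --tabs=1,5 -RFX'
```

Either route writes the same keys to your global git configuration.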
|
||||
|
||||
💾 [Installation directions][13]
|
||||
|
||||
### fd > find
|
||||
|
||||
Although I use a Mac, I've never been a fan of Spotlight (I found it sluggish, hard to remember the keywords, the database update would hammer my CPU and generally useless!). I use [Alfred][14] a lot, but even the finder feature doesn't serve me well.
|
||||
|
||||
I tend to turn to the command line to find files, but with `find` it is always a bit of a pain to remember the right expression for what I want (and indeed the Mac flavour is slightly different from non-Mac `find`, which adds to the frustration).
|
||||
|
||||
`fd` is a great replacement (by the same individual who wrote `bat`). It is very fast and the common use cases I need to search with are simple to remember.
|
||||
|
||||
A few handy commands:
|
||||
```
|
||||
$ fd cli # all filenames containing "cli"
|
||||
$ fd -e md # all with .md extension
|
||||
$ fd cli -x wc -w # find "cli" and run `wc -w` on each file
|
||||
|
||||
```
|
||||
|
||||
![fd output][15]
|
||||
|
||||
💾 [Installation directions][16]
|
||||
|
||||
### ncdu > du
|
||||
|
||||
Knowing where disk space is being taken up is a fairly important task for me. I've used the Mac app [DaisyDisk][17], but I find that it can be a little slow to actually yield results.
|
||||
|
||||
The `du -sh` command is what I'll use in the terminal (`-sh` means summary and human readable), but often I'll want to dig into the directories taking up the space.
|
||||
|
||||
`ncdu` is a nice alternative. It offers an interactive interface and allows for quickly scanning which folders or files are responsible for taking up space, and it's very quick to navigate. (Though any time I want to scan my entire home directory, it's going to take a long time, regardless of the tool; my directory is about 550GB.)
|
||||
|
||||
Once I've found a directory I want to manage (to delete, move or compress files), I'll use cmd+click on the pathname at the top of the screen in [iTerm2][18] to launch Finder at that directory.
|
||||
|
||||
![ncdu output][19]
|
||||
|
||||
There's another [alternative called nnn][20] which offers a slightly nicer interface and although it does file sizes and usage by default, it's actually a fully fledged file manager.
|
||||
|
||||
My `ncdu` is aliased to the following:
|
||||
```
|
||||
alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules"
|
||||
|
||||
```
|
||||
|
||||
The options are:
|
||||
|
||||
* `--color dark` \- use a colour scheme
|
||||
* `-rr` \- read-only mode (prevents delete and spawn shell)
|
||||
* `--exclude` ignore directories I won't do anything about
|
||||
|
||||
|
||||
|
||||
💾 [Installation directions][21]
|
||||
|
||||
### tldr > man
|
||||
|
||||
It's amazing that nearly every single command line tool comes with a manual via `man <command>`, but navigating the `man` output can sometimes be a little confusing, plus it can be daunting given all the technical information that's included in the manual output.
|
||||
|
||||
This is where the TL;DR project comes in. It's a community-driven documentation system that's available from the command line. So far in my own usage, I've not come across a command that's not been documented, but you can [also contribute][22].
|
||||
|
||||
![TLDR output for 'fd'][23]
|
||||
|
||||
As a nicety, I've also aliased `tldr` to `help` (since it's quicker to type!):
|
||||
```
|
||||
alias help='tldr'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][24]
|
||||
|
||||
### ack || ag > grep
|
||||
|
||||
`grep` is no doubt a powerful tool on the command line, but over the years it's been superseded by a number of tools. Two of which are `ack` and `ag`.
|
||||
|
||||
I personally flit between `ack` and `ag` without really remembering which I prefer (that's to say they're both very good and very similar!). I tend to default to `ack` only because it rolls off my fingers a little easier. Plus, `ack` comes with the mega `ack --bar` argument (I'll let you experiment)!
|
||||
|
||||
Both `ack` and `ag` will (by default) use a regular expression to search, and extremely pertinent to my work, I can specify the file types to search within using flags like `--js` or `--html` (though here `ag` includes more files in the js filter than `ack`).
|
||||
|
||||
Both tools also support the usual `grep` options, like `-B` and `-A` for before and after context in the grep.
|
||||
|
||||
![ack in action][25]
|
||||
|
||||
Since `ack` doesn't come with markdown support (and I write a lot in markdown), I've got this customisation in my `~/.ackrc` file:
|
||||
```
|
||||
--type-set=md=.md,.mkd,.markdown
|
||||
--pager=less -FRX
|
||||
|
||||
```
|
||||
|
||||
💾 Installation directions: [ack][26], [ag][27]
|
||||
|
||||
[Further reading on ack & ag][28]
|
||||
|
||||
### jq > grep et al
|
||||
|
||||
I'm a massive fanboy of [jq][29]. At first I struggled with the syntax, but I've since come around to the query language and use `jq` on a near daily basis (whereas before I'd either drop into node, use grep or use a tool called [json][30] which is very basic in comparison).
|
||||
|
||||
I've even started the process of writing a jq tutorial series (2,500 words and counting) and have published a [web tool][31] and a native mac app (yet to be released).
|
||||
|
||||
`jq` allows me to pass in JSON and transform the source very easily so that the JSON result fits my requirements. One such example allows me to update all my node dependencies in one command (broken into multiple lines for readability):
|
||||
```
|
||||
$ npm i $(echo $(\
|
||||
npm outdated --json | \
|
||||
jq -r 'to_entries | .[] | "\(.key)@\(.value.latest)"' \
|
||||
))
|
||||
|
||||
```
|
||||
|
||||
The above command will list all the node dependencies that are out of date, and use npm's JSON output format, then transform the source JSON from this:
|
||||
```
|
||||
{
|
||||
"node-jq": {
|
||||
"current": "0.7.0",
|
||||
"wanted": "0.7.0",
|
||||
"latest": "1.2.0",
|
||||
"location": "node_modules/node-jq"
|
||||
},
|
||||
"uuid": {
|
||||
"current": "3.1.0",
|
||||
"wanted": "3.2.1",
|
||||
"latest": "3.2.1",
|
||||
"location": "node_modules/uuid"
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
…to this:
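Given the filter `"\(.key)@\(.value.latest)"` applied over `to_entries` of the JSON above, the result is one plain `name@latest` string per line:

```
node-jq@1.2.0
uuid@3.2.1
```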
|
||||
|
||||
That result is then fed into the `npm install` command and voilà, I'm all upgraded (using the sledgehammer approach).
|
||||
|
||||
### Honourable mentions
|
||||
|
||||
Some of the other tools that I've started poking around with, but haven't used too often (with the exception of ponysay, which appears when I start a new terminal session!):
|
||||
|
||||
* [ponysay][32] > cowsay
|
||||
* [csvkit][33] > awk et al
|
||||
* [noti][34] > `display notification`
|
||||
* [entr][35] > watch
|
||||
|
||||
|
||||
|
||||
### What about you?
|
||||
|
||||
So that's my list. How about you? What daily command line tools have you improved? I'd love to know.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://remysharp.com/2018/08/23/cli-improved
|
||||
|
||||
作者:[Remy Sharp][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://remysharp.com
|
||||
[1]: https://remysharp.com/images/terminal-600.jpg
|
||||
[2]: https://training.leftlogic.com/buy/terminal/cli2?coupon=READERS-DISCOUNT&utm_source=blog&utm_medium=banner&utm_campaign=remysharp-discount
|
||||
[3]: https://github.com/jingweno/ccat
|
||||
[4]: https://github.com/sharkdp/bat
|
||||
[5]: https://remysharp.com/images/cli-improved/bat.gif (Sample bat output)
|
||||
[6]: https://remysharp.com/images/cli-improved/ping.gif (Sample ping output)
|
||||
[7]: http://denilson.sa.nom.br/prettyping/
|
||||
[8]: https://lifehacker.com/278888/ctrl%252Br-to-search-and-other-terminal-history-tricks
|
||||
[9]: https://github.com/junegunn/fzf
|
||||
[10]: https://remysharp.com/images/cli-improved/htop.jpg (Sample htop output)
|
||||
[11]: http://hisham.hm/htop/
|
||||
[12]: https://remysharp.com/images/cli-improved/diff-so-fancy.jpg (Sample diff output)
|
||||
[13]: https://github.com/so-fancy/diff-so-fancy
|
||||
[14]: https://www.alfredapp.com/
|
||||
[15]: https://remysharp.com/images/cli-improved/fd.png (Sample fd output)
|
||||
[16]: https://github.com/sharkdp/fd/
|
||||
[17]: https://daisydiskapp.com/
|
||||
[18]: https://www.iterm2.com/
|
||||
[19]: https://remysharp.com/images/cli-improved/ncdu.png (Sample ncdu output)
|
||||
[20]: https://github.com/jarun/nnn
|
||||
[21]: https://dev.yorhel.nl/ncdu
|
||||
[22]: https://github.com/tldr-pages/tldr#contributing
|
||||
[23]: https://remysharp.com/images/cli-improved/tldr.png (Sample tldr output for 'fd')
|
||||
[24]: http://tldr-pages.github.io/
|
||||
[25]: https://remysharp.com/images/cli-improved/ack.png (Sample ack output with grep args)
|
||||
[26]: https://beyondgrep.com
|
||||
[27]: https://github.com/ggreer/the_silver_searcher
|
||||
[28]: http://conqueringthecommandline.com/book/ack_ag
|
||||
[29]: https://stedolan.github.io/jq
|
||||
[30]: http://trentm.com/json/
|
||||
[31]: https://jqterm.com
|
||||
[32]: https://github.com/erkin/ponysay
|
||||
[33]: https://csvkit.readthedocs.io/en/1.0.3/
|
||||
[34]: https://github.com/variadico/noti
|
||||
[35]: http://www.entrproject.org/
|
@ -1,397 +0,0 @@
|
||||
translating by Flowsnow
|
||||
|
||||
How to build rpm packages
|
||||
======
|
||||
|
||||
Save time and effort installing files and scripts across multiple hosts.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1)
|
||||
|
||||
I have used rpm-based package managers to install software on Red Hat and Fedora Linux since I started using Linux more than 20 years ago. I have used the **rpm** program itself, **yum**, and **DNF**, which is a close descendant of yum, to install and update packages on my Linux hosts. The yum and DNF tools are wrappers around the rpm utility that provide additional functionality, such as the ability to find and install package dependencies.
|
||||
|
||||
Over the years I have created a number of Bash scripts, some of which have separate configuration files, that I like to install on most of my new computers and virtual machines. It reached the point that it took a great deal of time to install all of these packages, so I decided to automate that process by creating an rpm package that I could copy to the target hosts and install all of these files in their proper locations. Although the **rpm** tool was formerly used to build rpm packages, that function was removed and a new tool, `rpmbuild`, was created to build new rpms.
|
||||
|
||||
When I started this project, I found very little information about creating rpm packages, but I managed to find a book, Maximum RPM, that helped me figure it out. That book is now somewhat out of date, as is the vast majority of information I have found. It is also out of print, and used copies go for hundreds of dollars. The online version of [Maximum RPM][1] is available at no charge and is kept up to date. The [RPM website][2] also has links to other websites that have a lot of documentation about rpm. What other information there is tends to be brief and apparently assumes that you already have a good deal of knowledge about the process.
|
||||
|
||||
In addition, every one of the documents I found assumes that the code needs to be compiled from sources as in a development environment. I am not a developer. I am a sysadmin, and we sysadmins have different needs because we don’t—or we shouldn’t—compile code to use for administrative tasks; we should use shell scripts. So we have no source code in the sense that it is something that needs to be compiled into binary executables. What we have is a source that is also the executable.
|
||||
|
||||
For the most part, this project should be performed as the non-root user student. Rpms should never be built by root, but only by non-privileged users. I will indicate which parts should be performed as root and which by a non-root, unprivileged user.
|
||||
|
||||
### Preparation
|
||||
|
||||
First, open one terminal session and `su` to root. Be sure to use the `-` option to ensure that the complete root environment is enabled. I do not believe that sysadmins should use `sudo` for any administrative tasks. Find out why in my personal blog post: [Real SysAdmins don’t sudo][3].
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ su -
|
||||
Password:
|
||||
[root@testvm1 ~]#
|
||||
```
|
||||
|
||||
Create a student user that can be used for this project and set a password for that user.
|
||||
|
||||
```
|
||||
[root@testvm1 ~]# useradd -c "Student User" student
|
||||
[root@testvm1 ~]# passwd student
|
||||
Changing password for user student.
|
||||
New password: <Enter the password>
|
||||
Retype new password: <Enter the password>
|
||||
passwd: all authentication tokens updated successfully.
|
||||
[root@testvm1 ~]#
|
||||
```
|
||||
|
||||
Building rpm packages requires the `rpm-build` package, which is likely not already installed. Install it now as root. Note that this command will also install several dependencies. The number may vary, depending upon the packages already installed on your host; it installed a total of 17 packages on my test VM, which is pretty minimal.
|
||||
|
||||
```
|
||||
dnf install -y rpm-build
|
||||
```
|
||||
|
||||
The rest of this project should be performed as the user student unless otherwise explicitly directed. Open another terminal session and use `su` to switch to that user to perform the rest of these steps. Download a tarball that I have prepared of a development directory structure, utils.tar, from GitHub using the following command:
|
||||
|
||||
```
|
||||
wget https://github.com/opensourceway/how-to-rpm/raw/master/utils.tar
|
||||
```
|
||||
|
||||
This tarball includes all of the files and Bash scripts that will be installed by the final rpm. There is also a complete spec file, which you can use to build the rpm. We will go into detail about each section of the spec file.
|
||||
|
||||
As user student, using your home directory as your present working directory (pwd), untar the tarball.
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ cd ; tar -xvf utils.tar
|
||||
```
|
||||
|
||||
Use the `tree` command to verify that the directory structure of ~/development and the contained files looks like the following output:
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ tree development/
|
||||
development/
|
||||
├── license
|
||||
│ ├── Copyright.and.GPL.Notice.txt
|
||||
│ └── GPL_LICENSE.txt
|
||||
├── scripts
|
||||
│ ├── create_motd
|
||||
│ ├── die
|
||||
│ ├── mymotd
|
||||
│ └── sysdata
|
||||
└── spec
|
||||
└── utils.spec
|
||||
|
||||
3 directories, 7 files
|
||||
[student@testvm1 ~]$
|
||||
```
|
||||
|
||||
The `mymotd` script creates a “Message Of The Day” data stream that is sent to stdout. The `create_motd` script runs the `mymotd` scripts and redirects the output to the /etc/motd file. This file is used to display a daily message to users who log in remotely using SSH.
|
||||
|
||||
The `die` script is my own script that wraps the `kill` command in a bit of code that can find running programs that match a specified string and kill them. It uses `kill -9` to ensure that they cannot ignore the kill message.

The `sysdata` script can spew tens of thousands of lines of data about your computer hardware, the installed version of Linux, all installed packages, and the metadata of your hard drives. I use it to document the state of a host at a point in time, which I can refer to later. I used to do this to maintain a record of hosts that I installed for customers.
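As a miniature illustration of the kind of report such a script assembles (the real sysdata script in utils.tar gathers far more, including installed packages and drive metadata):

```shell
# Tiny illustration of a sysdata-style collector; the real script in
# utils.tar is far more extensive. The output path is arbitrary here.
outfile=/tmp/sysdata-sample.txt
{
    echo "=== Host: $(uname -n), collected on $(date) ==="
    echo "=== Kernel and OS ==="
    uname -a
    echo "=== Block devices ==="
    lsblk 2>/dev/null || echo "lsblk not available"
} > "$outfile"
wc -l "$outfile"
```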

You may need to change ownership of these files and directories to student.student. Do this, if necessary, using the following command:

```
chown -R student.student development
```

Most of the files and directories in this tree will be installed on Fedora systems by the rpm you create during this project.

### Creating the build directory structure

The `rpmbuild` command requires a very specific directory structure. You must create this directory structure yourself because no automated way is provided. Create the following directory structure in your home directory:

```
~ ─ rpmbuild
├── RPMS
│   └── noarch
├── SOURCES
├── SPECS
└── SRPMS
```
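The whole tree can be created in a single command using Bash brace expansion:

```shell
# Create the directory tree required by rpmbuild in one step
mkdir -p "$HOME"/rpmbuild/{RPMS/noarch,SOURCES,SPECS,SRPMS}
ls "$HOME"/rpmbuild
```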

We will not create the rpmbuild/RPMS/x86_64 directory because that directory is specific to 64-bit compiled binaries, and our shell scripts are not architecture-specific. In reality, we won’t be using the SRPMS directory either, which would hold source rpms.

### Examining the spec file

Each spec file has a number of sections, some of which may be ignored or omitted, depending upon the specific circumstances of the rpm build. This particular spec file is not an example of a minimal file required to work, but it is a good example of a moderately complex spec file that packages files that do not need to be compiled. If a compile were required, it would be performed in the `%build` section, which is omitted from this spec file because it is not required.

#### Preamble

This is the only section of the spec file that does not have a label. It consists of much of the information you see when the command `rpm -qi [Package Name]` is run. Each datum is a single line consisting of a tag, which identifies it, and text data for the value of the tag.

```
###############################################################################
# Spec file for utils
################################################################################
# Configured to be built by user student or other non-root user
################################################################################
#
Summary: Utility scripts for testing RPM creation
Name: utils
Version: 1.0.0
Release: 1
License: GPL
URL: http://www.both.org
Group: System
Packager: David Both
Requires: bash
Requires: screen
Requires: mc
Requires: dmidecode
BuildRoot: ~/rpmbuild/

# Build with the following syntax:
# rpmbuild --target noarch -bb utils.spec
```
Comment lines are ignored by the `rpmbuild` program. I always like to add a comment to this section that contains the exact syntax of the `rpmbuild` command required to create the package. The Summary tag is a short description of the package. The Name, Version, and Release tags are used to create the name of the rpm file, as in utils-1.0.0-1.rpm. Incrementing the release and version numbers lets you create rpms that can be used to update older ones.
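The naming convention can be illustrated with a quick shell check, using the tag values from the preamble above:

```shell
# Compose the rpm file name from the preamble tags
NAME=utils
VERSION=1.0.0
RELEASE=1
ARCH=noarch
echo "${NAME}-${VERSION}-${RELEASE}.${ARCH}.rpm"
# prints: utils-1.0.0-1.noarch.rpm
```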

The License tag defines the license under which the package is released. I always use a variation of the GPL. Specifying the license is important to clarify the fact that the software contained in the package is open source. This is also why I included the license and GPL statement in the files that will be installed.

The URL is usually the web page of the project or project owner. In this case, it is my personal web page.

The Group tag is interesting and is usually used for GUI applications. The value of the Group tag determines which group of icons in the applications menu will contain the icon for the executable in this package. Used in conjunction with the Icon tag (which we are not using here), the Group tag allows adding the icon and the required information to launch a program into the applications menu structure.

The Packager tag is used to specify the person or organization responsible for maintaining and creating the package.

The Requires statements define the dependencies for this rpm. Each is a package name. If one of the specified packages is not present, the DNF installation utility will try to locate it in one of the repositories defined in /etc/yum.repos.d and install it if it exists. If DNF cannot find one or more of the required packages, it will throw an error indicating which packages are missing and terminate.
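You can list the declared dependencies straight from the spec file. This sketch runs on an inline copy of the Requires lines so it is self-contained; point the `awk` command at ~/development/spec/utils.spec for the real file. After the package is built, `rpm -qpR utils-1.0.0-1.noarch.rpm` shows the same information plus any automatically generated dependencies.

```shell
# Pull the Requires tags out of a spec file. Demonstrated here on an
# inline copy; substitute the path to your real utils.spec.
cat > /tmp/requires-demo.spec <<'EOF'
Requires: bash
Requires: screen
Requires: mc
Requires: dmidecode
EOF
awk -F': *' '/^Requires:/ {print $2}' /tmp/requires-demo.spec
```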

The BuildRoot line specifies the top-level directory in which the `rpmbuild` tool will find the spec file and in which it will create temporary directories while it builds the package. The finished package will be stored in the noarch subdirectory that we specified earlier. The comment showing the command syntax used to build this package includes the option `--target noarch`, which defines the target architecture. Because these are Bash scripts, they are not associated with a specific CPU architecture. If this option were omitted, the build would be targeted to the architecture of the CPU on which the build is being performed.

The `rpmbuild` program can target many different architectures, and using the `--target` option allows us to build architecture-specific packages on a host with a different architecture from the one on which the build is performed. So I could build a package intended for use on an i686 architecture on an x86_64 host, and vice versa.

Change the packager name to yours and the URL to your own website if you have one.

#### %description

The `%description` section of the spec file contains a description of the rpm package. It can be very short or can contain many lines of information. Our `%description` section is rather terse.

```
%description
A collection of utility scripts for testing RPM creation.
```

#### %prep

The `%prep` section is the first script that is executed during the build process. This script is not executed during the installation of the package.

This script is just a Bash shell script. It prepares the build directory, creating directories used for the build as required and copying the appropriate files into their respective directories. This would include the sources required for a complete compile as part of the build.

The $RPM_BUILD_ROOT directory represents the root directory of an installed system. The directories created in the $RPM_BUILD_ROOT directory are fully qualified paths, such as /usr/local/share/utils, /usr/local/bin, and so on, in a live filesystem.

In the case of our package, we have no pre-compile sources, as all of our programs are Bash scripts. So we simply copy those scripts and other files into the directories where they belong in the installed system.

```
%prep
################################################################################
# Create the build tree and copy the files from the development directories   #
# into the build tree.                                                        #
################################################################################
echo "BUILDROOT = $RPM_BUILD_ROOT"
mkdir -p $RPM_BUILD_ROOT/usr/local/bin/
mkdir -p $RPM_BUILD_ROOT/usr/local/share/utils

cp /home/student/development/utils/scripts/* $RPM_BUILD_ROOT/usr/local/bin
cp /home/student/development/utils/license/* $RPM_BUILD_ROOT/usr/local/share/utils
cp /home/student/development/utils/spec/* $RPM_BUILD_ROOT/usr/local/share/utils

exit
```

Note that the exit statement at the end of this section is required.

#### %files

This section of the spec file defines the files to be installed and their locations in the directory tree. It also specifies the file attributes and the owner and group owner for each file to be installed. The file permissions and ownerships are optional, but I recommend that they be explicitly set to eliminate any chance for those attributes to be incorrect or ambiguous when installed. Directories are created as required during the installation if they do not already exist.

```
%files
%attr(0744, root, root) /usr/local/bin/*
%attr(0644, root, root) /usr/local/share/utils/*
```

#### %pre

This section is empty in our lab project’s spec file. This would be the place to put any scripts that are required to run during installation of the rpm but prior to the installation of the files.

#### %post

This section of the spec file is another Bash script. This one runs after the installation of files. This section can be pretty much anything you need or want it to be, including creating files, running system commands, and restarting services to reinitialize them after making configuration changes. The `%post` script for our rpm package performs some of those tasks.

```
%post
################################################################################
# Set up MOTD scripts                                                          #
################################################################################
cd /etc
# Save the old MOTD if it exists
if [ -e motd ]
then
    cp motd motd.orig
fi
# If not there already, add a link to create_motd in cron.daily
cd /etc/cron.daily
if [ ! -e create_motd ]
then
    ln -s /usr/local/bin/create_motd
fi
# create the MOTD for the first time
/usr/local/bin/mymotd > /etc/motd
```

The comments included in this script should make its purpose clear.

#### %postun

This section contains a script that would be run after the rpm package is uninstalled. Using rpm or DNF to remove a package removes all of the files listed in the `%files` section, but it does not remove files or links created by the `%post` section, so we need to handle that in this section.

This script usually consists of cleanup tasks that cannot be accomplished simply by erasing the files previously installed by the rpm. In the case of our package, it includes removing the link created by the `%post` script and restoring the saved original of the motd file.

```
%postun
# remove installed files and links
rm /etc/cron.daily/create_motd

# Restore the original MOTD if it was backed up
if [ -e /etc/motd.orig ]
then
    mv -f /etc/motd.orig /etc/motd
fi
```

#### %clean

This Bash script performs cleanup after the rpm build process. The two lines in the `%clean` section below remove the build directories created by the `rpmbuild` command. In many cases, additional cleanup may also be required.

```
%clean
rm -rf $RPM_BUILD_ROOT/usr/local/bin
rm -rf $RPM_BUILD_ROOT/usr/local/share/utils
```

#### %changelog

This optional text section contains a list of changes to the rpm and the files it contains. The newest changes are recorded at the top of this section.

```
%changelog
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
- The original package includes several useful scripts. it is
primarily intended to be used to illustrate the process of
building an RPM.
```

Replace the data in the header line with your own name and email address.

### Building the rpm

The spec file must be in the SPECS directory of the rpmbuild tree. I find it easiest to create a link to the actual spec file in that directory so that it can be edited in the development directory and there is no need to copy it to the SPECS directory. Make the SPECS directory your pwd, then create the link.

```
cd ~/rpmbuild/SPECS/
ln -s ~/development/spec/utils.spec
```

Run the following command to build the rpm. It should only take a moment to create the rpm if no errors occur.

```
rpmbuild --target noarch -bb utils.spec
```

Check in the ~/rpmbuild/RPMS/noarch directory to verify that the new rpm exists there.

```
[student@testvm1 ~]$ cd rpmbuild/RPMS/noarch/
[student@testvm1 noarch]$ ll
total 24
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
[student@testvm1 noarch]$
```

### Testing the rpm

As root, install the rpm to verify that it installs correctly and that the files are installed in the correct directories. The exact name of the rpm will depend upon the values you used for the tags in the Preamble section, but if you used the ones in the sample, the rpm name will be as shown in the sample command below:

```
[root@testvm1 ~]# cd /home/student/rpmbuild/RPMS/noarch/
[root@testvm1 noarch]# ll
total 24
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
[root@testvm1 noarch]# rpm -ivh utils-1.0.0-1.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:utils-1.0.0-1                    ################################# [100%]
```

Check /usr/local/bin to ensure that the new files are there. You should also verify that the create_motd link in /etc/cron.daily has been created.

Use the `rpm -q --changelog utils` command to view the changelog. View the files installed by the package using the `rpm -ql utils` command (that is a lowercase L in `ql`).

```
[root@testvm1 noarch]# rpm -q --changelog utils
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
- The original package includes several useful scripts. it is
primarily intended to be used to illustrate the process of
building an RPM.

[root@testvm1 noarch]# rpm -ql utils
/usr/local/bin/create_motd
/usr/local/bin/die
/usr/local/bin/mymotd
/usr/local/bin/sysdata
/usr/local/share/utils/Copyright.and.GPL.Notice.txt
/usr/local/share/utils/GPL_LICENSE.txt
/usr/local/share/utils/utils.spec
[root@testvm1 noarch]#
```

Remove the package.

```
rpm -e utils
```

### Experimenting

Now you will change the spec file to require a package that does not exist. This will simulate a dependency that cannot be met. Add the following line immediately under the existing Requires lines:

```
Requires: badrequire
```

Build the package and attempt to install it. What message is displayed?

We used the `rpm` command to install and delete the `utils` package. Try installing the package with yum or DNF. You must be in the same directory as the package or specify the full path to the package for this to work.

### Conclusion

There are many tags and a couple of sections that we did not cover in this look at the basics of creating an rpm package. The resources listed below can provide more information. Building rpm packages is not difficult; you just need the right information. I hope this helps you; it took me months to figure things out on my own.

We did not cover building from source code, but if you are a developer, that should be a simple step from this point.

Creating rpm packages is another good way to be a lazy sysadmin and save time and effort. It provides an easy method for distributing and installing the scripts and other files that we as sysadmins need to install on many hosts.

### Resources

* Edward C. Bailey, Maximum RPM, Sams Publishing, 2000, ISBN 0-672-31105-4

* Edward C. Bailey, [Maximum RPM][1], updated online version

* [RPM Documentation][4]: This web page lists most of the available online documentation for rpm. It includes many links to other websites and information about rpm.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/how-build-rpm-packages

作者:[David Both][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/dboth
[1]: http://ftp.rpm.org/max-rpm/
[2]: http://rpm.org/index.html
[3]: http://www.both.org/?p=960
[4]: http://rpm.org/documentation.html
@ -1,110 +0,0 @@
translating by ypingcn

Control your data with Syncthing: An open source synchronization tool
======

Decide how to store and share your personal information.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)

These days, some of our most important possessions—from pictures and videos of family and friends to financial and medical documents—are data. And even as cloud storage services are booming, so are concerns about privacy and lack of control over our personal data. From the PRISM surveillance program to Google [letting app developers scan your personal emails][1], the news is full of reports that should give us all pause regarding the security of our personal information.

[Syncthing][2] can help put your mind at ease. An open source peer-to-peer file synchronization tool that runs on Linux, Windows, Mac, Android, and others (sorry, no iOS), Syncthing uses its own protocol, called [Block Exchange Protocol][3]. In brief, Syncthing lets you synchronize your data across many devices without owning a server.

### Linux

In this post, I will explain how to install and synchronize files between a Linux computer and an Android phone.

Syncthing is readily available for most popular distributions. Fedora 28 includes the latest version.

To install Syncthing in Fedora, you can either search for it in Software Center or execute the following command:

```
sudo dnf install syncthing syncthing-gtk
```

Once it’s installed, open it. You’ll be welcomed by an assistant to help configure Syncthing. Click **Next** until it asks to configure the WebUI. The safest option is to keep the option **Listen on localhost**. That will disable the web interface and keep unauthorized users away.

![Syncthing in Setup WebUI dialog box][5]

Syncthing in Setup WebUI dialog box

Close the dialog. Now that Syncthing is installed, it’s time to share a folder, connect a device, and start syncing. But first, let’s continue with your other client.

### Android

Syncthing is available in Google Play and in F-Droid app stores.

![](https://opensource.com/sites/default/files/uploads/syncthing2.png)

Once the application is installed, you’ll be welcomed by a wizard. Grant Syncthing permissions to your storage. You might be asked to disable battery optimization for this application. It is safe to do so, as we will optimize the app to synchronize only when plugged in and connected to a wireless network.

Click on the main menu icon and go to **Settings**, then **Run Conditions**. Tick **Always run in the background**, **Run only when charging**, and **Run only on wifi**. Now your Android client is ready to exchange files with your devices.

There are two important concepts to remember in Syncthing: folders and devices. Folders are what you want to share, but you must have a device to share with. Syncthing allows you to share individual folders with different devices. Devices are added by exchanging device IDs. A device ID is a unique, cryptographically secure identifier that is created when Syncthing starts for the first time.

### Connecting devices

Now let’s connect your Linux machine and your Android client.

In your Linux computer, open Syncthing, click on the **Settings** icon and click **Show ID**. A QR code will show up.

In your Android mobile, open Syncthing. In the main screen, click the **Devices** tab and press the **+** symbol. In the first field, press the QR code symbol to open the QR scanner.

Point your mobile camera at the computer QR code. The **Device ID** field will be populated with your desktop client’s device ID. Give it a friendly name and save. Because adding a device goes two ways, you now need to confirm on the computer client that you want to add the Android mobile. It might take a couple of minutes for your computer client to ask for confirmation. When it does, click **Add**.

![](https://opensource.com/sites/default/files/uploads/syncthing6.png)

In the **New Device** window, you can verify and configure some options about your new device, like the **Device Name** and **Addresses**. If you keep dynamic, it will try to auto-discover the device IP, but if you want to force one, you can add it in this field. If you already created a folder (more on this later), you can also share it with this new device.

![](https://opensource.com/sites/default/files/uploads/syncthing7.png)

Your computer and Android are now paired and ready to exchange files. (If you have more than one computer or mobile phone, simply repeat these steps.)

### Sharing folders

Now that the devices you want to sync are connected, it’s time to share a folder. You can share folders on your computer, and the devices you add to a folder will get a copy.

To share a folder, go to **Settings** and click **Add Shared Folder**:

![](https://opensource.com/sites/default/files/uploads/syncthing8.png)

In the next window, enter the information of the folder you want to share:

![](https://opensource.com/sites/default/files/uploads/syncthing9.png)

You can use any label you want. **Folder ID** will be generated randomly and will be used to identify the folder between the clients. In **Path**, click **Browse** and locate the folder you want to share. If you want Syncthing to monitor the folder for changes (such as deletes, new files, etc.), click **Monitor filesystem for changes**.

Remember, when you share a folder, any change that happens on the other clients will be reflected on every single device. That means that if you share a folder containing pictures with other computers or mobile devices, changes in these other clients will be reflected everywhere. If this is not what you want, you can make your folder “Send Only” so it will send files to the clients, but the other clients’ changes won’t be synced.

When this is done, go to **Share with Devices** and select the hosts you want to sync with your folder:

All the devices you select will need to accept the share request; you will get a notification from the devices:

Just as when you shared the folder, you must configure the new shared folder:

![](https://opensource.com/sites/default/files/uploads/syncthing12.png)

Again, here you can define any label, but the ID must match on each client. In the folder option, select the destination for the folder and its files. Remember that any change done in this folder will be reflected on every device allowed in the folder.

These are the steps to connect devices and share folders with Syncthing. It might take a few minutes to start copying, depending on your network settings or if you are not on the same network.

Syncthing offers many more great features and options. Try it—and take control of your data.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/take-control-your-data-syncthing

作者:[Michael Zamot][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mzamot
[1]: https://gizmodo.com/google-says-it-doesnt-go-through-your-inbox-anymore-bu-1827299695
[2]: https://syncthing.net/
[3]: https://docs.syncthing.net/specs/bep-v1.html
[4]: /file/410191
[5]: https://opensource.com/sites/default/files/uploads/syncthing1.png (Syncthing in Setup WebUI dialog box)
@ -1,3 +1,5 @@
Translating by way-ww

Why Linux users should try Rust
======
@ -1,89 +0,0 @@
translating---geekpi

5 cool tiling window managers
======

![](https://fedoramagazine.org/wp-content/uploads/2018/09/tilingwindowmanagers-816x345.jpg)

The Linux desktop ecosystem offers multiple window managers (WMs). Some are developed as part of a desktop environment. Others are meant to be used as standalone applications. This is the case for tiling WMs, which offer a more lightweight, customized environment. This article presents five such tiling WMs for you to try out.

### i3

[i3][1] is one of the most popular tiling window managers. Like most other such WMs, i3 focuses on low resource consumption and customizability by the user.

You can refer to [this previous article in the Magazine][2] to get started with i3 installation details and how to configure it.

### sway

[sway][3] is a tiling Wayland compositor. It has the advantage of compatibility with an existing i3 configuration, so you can use it to replace i3 and use Wayland as the display protocol.

You can use dnf to install sway from the Fedora repository:

```
$ sudo dnf install sway
```

If you want to migrate from i3 to sway, there’s a small [migration guide][4] available.

### Qtile

[Qtile][5] is another tiling manager that also happens to be written in Python. By default, you configure Qtile in a Python script located under ~/.config/qtile/config.py. When this script is not available, Qtile uses a default [configuration][6].

One of the benefits of Qtile being in Python is you can write scripts to control the WM. For example, the following script prints the screen details:

```
>>> from libqtile.command import Client
>>> c = Client()
>>> print(c.screen.info)
{'index': 0, 'width': 1920, 'height': 1006, 'x': 0, 'y': 0}
```

To install Qtile on Fedora, use the following command:

```
$ sudo dnf install qtile
```

### dwm

The [dwm][7] window manager focuses more on being lightweight. One goal of the project is to keep dwm minimal and small. For example, the entire code base has never exceeded 2000 lines of code. On the other hand, dwm isn’t as easy to customize and configure. Indeed, the only way to change dwm’s default configuration is to [edit the source code and recompile the application][8].

If you want to try the default configuration, you can install dwm in Fedora using dnf:

```
$ sudo dnf install dwm
```

For those who want to change their dwm configuration, the dwm-user package is available in Fedora. This package automatically recompiles dwm using the configuration stored in the user home directory at ~/.dwm/config.h.

### awesome

[awesome][9] originally started as a fork of dwm, to provide configuration of the WM using an external configuration file. The configuration is done via Lua scripts, which allow you to write scripts to automate tasks or create widgets.

You can check out awesome on Fedora by installing it like this:

```
$ sudo dnf install awesome
```

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/5-cool-tiling-window-managers/

作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org
[1]: https://i3wm.org/
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
[3]: https://swaywm.org/
[4]: https://github.com/swaywm/sway/wiki/i3-Migration-Guide
[5]: http://www.qtile.org/
[6]: https://github.com/qtile/qtile/blob/develop/libqtile/resources/default_config.py
[7]: https://dwm.suckless.org/
[8]: https://dwm.suckless.org/customisation/
[9]: https://awesomewm.org/
@ -1,3 +1,4 @@
Translating by qhwdw

Lab 3: User Environments
======

### Lab 3: User Environments
@ -1,189 +0,0 @@
|
||||
translating by dianbanjiu
|
||||
Open Source Logging Tools for Linux
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs-main.jpg?itok=voNrSz4H)
|
||||
|
||||
If you’re a Linux systems administrator, one of the first tools you will turn to for troubleshooting are log files. These files hold crucial information that can go a long way to help you solve problems affecting your desktops and servers. For many sysadmins (especially those of an old-school sort), nothing beats the command line for checking log files. But for those who’d rather have a more efficient (and possibly modern) approach to troubleshooting, there are plenty of options.
|
||||
|
||||
In this article, I’ll highlight a few such tools available for the Linux platform. I won’t be getting into logging tools that might be specific to a certain service (such as Kubernetes or Apache), and instead will focus on tools that work to mine the depths of all that magical information written into /var/log.
|
||||
|
||||
Speaking of which…
|
||||
|
||||
### What is /var/log?
|
||||
|
||||
If you’re new to Linux, you might not know what the /var/log directory contains. However, the name is very telling. Within this directory is housed all of the log files from the system and any major service (such as Apache, MySQL, MariaDB, etc.) installed on the operating system. Open a terminal window and issue the command cd /var/log. Follow that with the command ls and you’ll see all of the various systems that have log files you can view (Figure 1).
|
||||
|
||||
![/var/log/][2]
|
||||
|
||||
Figure 1: Our ls command reveals the logs available in /var/log/.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
Say, for instance, you want to view the syslog log file. Issue the command less syslog and you can scroll through all of the gory details of that particular log. But what if the standard terminal isn’t for you? What options do you have? Plenty. Let’s take a look at a few such options.
|
||||
|
||||
### Logs
|
||||
|
||||
If you use the GNOME desktop (or other, as Logs can be installed on more than just GNOME), you have at your fingertips a log viewer that mainly just adds the slightest bit of GUI goodness over the log files to create something as simple as it is effective. Once installed (from the standard repositories), open Logs from the desktop menu, and you’ll be treated to an interface (Figure 2) that allows you to select from various types of logs (Important, All, System, Security, and Hardware), as well as select a boot period (from the top center drop-down), and even search through all of the available logs.
|
||||
|
||||
![Logs tool][5]
|
||||
|
||||
Figure 2: The GNOME Logs tool is one of the easiest GUI log viewers you’ll find for Linux.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
Logs is a great tool, especially if you’re not looking for too many bells and whistles getting in the way of you viewing crucial log entries, so you can troubleshoot your systems.
|
||||
|
||||
### KSystemLog
|
||||
|
||||
KSystemLog is to KDE what Logs is to GNOME, but with a few more features added to the mix. Although both make it incredibly simple to view your system log files, only KSystemLog includes colorized log lines, tabbed viewing, the ability to copy log lines to the desktop clipboard, a built-in capability for sending log messages directly to the system, detailed information for each log line, and more. KSystemLog views all the same logs found in GNOME Logs, only with a different layout.
|
||||
|
||||
From the main window (Figure 3), you can view any of the different logs (System Log, Authentication Log, X.org Log, or Journald Log), search the logs, filter by Date, Host, Process, or Message, and select log priorities.
|
||||
|
||||
![KSystemLog][7]
|
||||
|
||||
Figure 3: The KSystemLog main window.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
If you click on the Window menu, you can open a new tab, where you can select a different log/filter combination to view. From that same menu, you can even duplicate the current tab. If you want to manually add a log to a file, do the following:
|
||||
|
||||
1. Open KSystemLog.
|
||||
|
||||
2. Click File > Add Log Entry.
|
||||
|
||||
3. Create your log entry (Figure 4).
|
||||
|
||||
4. Click OK.
|
||||
|
||||
|
||||
![log entry][9]
|
||||
|
||||
Figure 4: Creating a manual log entry with KSystemLog.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
KSystemLog makes viewing logs in KDE an incredibly easy task.
|
||||
|
||||
### Logwatch
|
||||
|
||||
Logwatch isn’t a fancy GUI tool. Instead, logwatch allows you to set up a logging system that will email you important alerts. You can have those alerts emailed via an SMTP server or you can simply view them on the local machine. Logwatch can be found in the standard repositories for almost every distribution, so installation can be done with a single command, like so:
|
||||
|
||||
```
|
||||
sudo apt-get install logwatch
|
||||
```
|
||||
|
||||
Or:
|
||||
|
||||
```
|
||||
sudo dnf install logwatch
|
||||
```
|
||||
|
||||
During the installation, you will be required to select the delivery method for alerts (Figure 5). If you opt for local mail delivery only, you’ll need to install the mailutils app (so you can view mail locally, via the mail command).
|
||||
|
||||
![ Logwatch][11]
|
||||
|
||||
Figure 5: Configuring Logwatch alert sending method.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
All Logwatch configurations are handled in a single file. To edit that file, issue the command sudo nano /usr/share/logwatch/default.conf/logwatch.conf. You’ll want to edit the MailTo = option. If you’re viewing this locally, set that to the Linux username you want the logs sent to (such as MailTo = jack). If you are sending these logs to an external email address, you’ll also need to change the MailFrom = option to a legitimate email address. From within that same configuration file, you can also set the detail level and the range of logs to send. Save and close that file.
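For reference, the relevant lines of a minimal configuration might look like the following sketch (the values shown are examples, not defaults):

```
# /usr/share/logwatch/default.conf/logwatch.conf (excerpt; example values)
MailTo = jack                    # local user, or an external email address
MailFrom = logwatch@example.com  # only needed for external delivery
Detail = Med                     # Low, Med, or High
Range = yesterday                # e.g. yesterday, today, all
```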
|
||||
Once configured, you can send your first mail with a command like:
|
||||
|
||||
```
logwatch --detail Med --mailto ADDRESS --service all --range today
```

Where ADDRESS is either the local user or an email address.
|
||||
|
||||
For more information on using Logwatch, issue the command man logwatch. Read through the manual page to see the different options that can be used with the tool.
|
||||
|
||||
### Rsyslog
|
||||
|
||||
Rsyslog is a convenient way to send remote client logs to a centralized server. Say you have one Linux server you want to use to collect the logs from other Linux servers in your data center. With Rsyslog, this is easily done. Rsyslog has to be installed on all clients and the centralized server (by issuing a command like sudo apt-get install rsyslog). Once installed, create the /etc/rsyslog.d/server.conf file on the centralized server, with the contents:
|
||||
|
||||
```
|
||||
# Provide UDP syslog reception
|
||||
$ModLoad imudp
|
||||
$UDPServerRun 514
|
||||
|
||||
# Provide TCP syslog reception
|
||||
$ModLoad imtcp
|
||||
$InputTCPServerRun 514
|
||||
|
||||
# Use custom filenaming scheme
|
||||
$template FILENAME,"/var/log/remote/%HOSTNAME%.log"
|
||||
*.* ?FILENAME
|
||||
|
||||
$PreserveFQDN on
|
||||
|
||||
```
|
||||
|
||||
Save and close that file. Now, on every client machine, create the file /etc/rsyslog.d/client.conf with the contents:
|
||||
|
||||
```
|
||||
$PreserveFQDN on
|
||||
$ActionQueueType LinkedList
|
||||
$ActionQueueFileName srvrfwd
|
||||
$ActionResumeRetryCount -1
|
||||
$ActionQueueSaveOnShutdown on
|
||||
*.* @@SERVER_IP:514
|
||||
|
||||
```
|
||||
|
||||
Where SERVER_IP is the IP address of your centralized server. Save and close that file. Restart rsyslog on all machines with the command:
|
||||
|
||||
```
|
||||
sudo systemctl restart rsyslog
|
||||
|
||||
```
|
||||
|
||||
You can now view the centralized log files with the command (run on the centralized server):
|
||||
|
||||
```
|
||||
tail -f /var/log/remote/*.log
|
||||
|
||||
```
|
||||
|
||||
The tail command allows you to view those files as they are written to, in real time. You should see log entries appear that include the client hostname (Figure 6).
|
||||
|
||||
![Rsyslog][13]
|
||||
|
||||
Figure 6: Rsyslog showing entries for a connected client.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
Rsyslog is a great tool for creating a single point of entry for viewing the logs of all of your Linux servers.
|
||||
|
||||
### More where that came from
|
||||
|
||||
This article only scratched the surface of the logging tools to be found on the Linux platform. And each of the above tools is capable of more than what is outlined here. However, this overview should give you a place to start your long day's journey into the Linux log file.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux"][14] course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/10/open-source-logging-tools-linux
|
||||
|
||||
作者:[JACK WALLEN][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[1]: /files/images/logs1jpg
|
||||
[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_1.jpg?itok=8yO2q1rW (/var/log/)
|
||||
[3]: /licenses/category/used-permission
|
||||
[4]: /files/images/logs2jpg
|
||||
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_2.jpg?itok=kF6V46ZB (Logs tool)
|
||||
[6]: /files/images/logs3jpg
|
||||
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_3.jpg?itok=PhrIzI1N (KSystemLog)
|
||||
[8]: /files/images/logs4jpg
|
||||
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_4.jpg?itok=OxsGJ-TJ (log entry)
|
||||
[10]: /files/images/logs5jpg
|
||||
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_5.jpg?itok=GeAR551e (Logwatch)
|
||||
[12]: /files/images/logs6jpg
|
||||
[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_6.jpg?itok=ira8UZOr (Rsyslog)
|
||||
[14]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,61 +1,59 @@
|
||||
translating by hopefully2333
|
||||
|
||||
Play Windows games on Fedora with Steam Play and Proton
|
||||
在 Fedora 上使用 Steam Play 和 Proton 来玩 Windows 游戏
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/09/steam-proton-816x345.jpg)
|
||||
|
||||
Some weeks ago, Steam [announced][1] a new addition to Steam Play with Linux support for Windows games using Proton, a fork from WINE. This capability is still in beta, and not all games work. Here are some more details about Steam and Proton.
|
||||
几周前,Steam [宣布][1]要给 Steam Play 增加一个新组件,支持在 Linux 平台上使用 Proton(WINE 的一个分支)来玩 Windows 游戏。这个功能仍然处于测试阶段,且并非对所有游戏都有效。下面是关于 Steam 和 Proton 的更多细节。
|
||||
|
||||
According to the Steam website, there are new features in the beta release:
|
||||
据 Steam 网站称,测试版本中有以下这些新功能:
|
||||
|
||||
* Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
|
||||
* DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact.
|
||||
* Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
|
||||
* Improved game controller support. Games automatically recognize all controllers supported by Steam. Expect more out-of-the-box controller compatibility than even the original version of the game.
|
||||
* Performance for multi-threaded games has been greatly improved compared to vanilla WINE.
|
||||
* 现在没有 Linux 版本的 Windows 游戏可以直接从 Linux 上的 Steam 客户端进行安装和运行,并且有完整、原生的 Steamworks 和 OpenVR 的支持。
|
||||
* 现在 DirectX 11 和 12 的实现都基于 Vulkan,这提高了游戏的兼容性,并减小了对性能的影响。
|
||||
* 全屏支持已经得到了改进,全屏游戏可以无缝扩展到目标显示器,而不会干扰显示器本身的分辨率,也不需要使用虚拟桌面。
|
||||
* 改进了对游戏控制器的支持,游戏自动识别所有 Steam 支持的控制器,比起游戏的原始版本,能够获得更多开箱即用的控制器兼容性。
|
||||
* 和原版 WINE 比起来,游戏的多线程性能得到了极大的提高。
|
||||
|
||||
|
||||
|
||||
### Installation
|
||||
### 安装
|
||||
|
||||
If you’re interested in trying Steam with Proton out, just follow these easy steps. (Note that you can ignore the first steps to enable the Steam Beta if you have the [latest updated version of Steam installed][2]. In that case you no longer need Steam Beta to use Proton.)
|
||||
如果你有兴趣尝试 Steam 和 Proton,请按照下面这些简单的步骤进行操作。(请注意,如果你已经[安装了最新版本的 Steam][2],可以忽略启用 Steam 测试版的第一步。在这种情况下,你不再需要通过 Steam 测试版来使用 Proton。)
|
||||
|
||||
Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton.
|
||||
打开 Steam 并登录你的帐户,这个示例截图显示的是启用 Proton 之前仅支持 22 个游戏。
|
||||
|
||||
![][3]
|
||||
|
||||
Now click on Steam option on top of the client. This displays a drop down menu. Then select Settings.
|
||||
现在点击客户端顶部的 Steam 选项,这会显示一个下拉菜单。然后选择设置。
|
||||
|
||||
![][4]
|
||||
|
||||
Now the settings window pops up. Select the Account option and next to Beta participation, click on change.
|
||||
现在弹出了设置窗口,选择“帐户”选项,并在“Beta 参与”(Beta participation)旁边点击“更改”。
|
||||
|
||||
![][5]
|
||||
|
||||
Now change None to Steam Beta Update.
|
||||
现在将 None 更改为 Steam Beta Update。
|
||||
|
||||
![][6]
|
||||
|
||||
Click on OK and a prompt asks you to restart.
|
||||
点击确定,然后系统会提示你重新启动。
|
||||
|
||||
![][7]
|
||||
|
||||
Let Steam download the update. This can take a while depending on your internet speed and computer resources.
|
||||
让 Steam 下载更新,这会需要一段时间,具体需要多久这要取决于你的网络速度和电脑配置。
|
||||
|
||||
![][8]
|
||||
|
||||
After restarting, go back to the Settings window. This time you’ll see a new option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton.
|
||||
在重新启动之后,返回到上面的设置窗口,这次你会看到一个新选项。确保“为受支持的游戏启用 Steam Play”、“为所有游戏启用 Steam Play”以及“使用这个工具,而不是 Steam 中游戏特定的选项”这几个复选框都已勾选。兼容性工具应该选择 Proton。
|
||||
|
||||
![][9]
|
||||
|
||||
The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended.
|
||||
Steam 客户端会要求你重新启动,照做,然后重新登录你的 Steam 账户,你的 Linux 游戏库就能得到扩展了。
|
||||
|
||||
![][10]
|
||||
|
||||
### Installing a Windows game using Steam Play
|
||||
### 使用 Steam Play 来安装一个 Windows 游戏
|
||||
|
||||
Now that you have Proton enabled, install a game. Select the title you want and you’ll find the process is similar to installing a normal game on Steam, as shown in these screenshots.
|
||||
现在你已经启用了 Proton,可以安装游戏了。选择你想要安装的游戏,你会发现安装过程类似于在 Steam 上安装普通游戏,如下面这些截图所示。
|
||||
|
||||
![][11]
|
||||
|
||||
@ -65,13 +63,13 @@ Now that you have Proton enabled, install a game. Select the title you want and
|
||||
|
||||
![][14]
|
||||
|
||||
After the game is done downloading and installing, you can play it.
|
||||
在下载和安装完游戏后,你就可以开始玩了。
|
||||
|
||||
![][15]
|
||||
|
||||
![][16]
|
||||
|
||||
Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta and Fedora is not responsible for results. If you’d like to read further, the community has created a [Google doc][17] with a list of games that have been tested.
|
||||
一些游戏可能会受到 Proton 测试版性质的影响。本例中的游戏 Chantelise 就没有声音,并且帧率很低。请记住这个功能仍然处于测试阶段,Fedora 不对结果负责。如果你想要了解更多,社区已经创建了一个 [Google 文档][17],其中列出了已经测试过的游戏。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -80,7 +78,7 @@ via: https://fedoramagazine.org/play-windows-games-steam-play-proton/
|
||||
|
||||
作者:[Francisco J. Vergara Torres][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,76 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
How to Install GRUB on Arch Linux (UEFI)
|
||||
======
|
||||
|
||||
![](http://fasterland.net/wp-content/uploads/2018/10/Arch-Linux-Boot-Menu-750x375.jpg)
|
||||
|
||||
Some time ago, I wrote a tutorial on **[how to reinstall Grub][1] on Arch Linux after installing Windows.**
|
||||
|
||||
A few weeks ago, I had to reinstall **Arch Linux** from scratch on my laptop and I discovered installing **Grub** was not as straightforward as I remembered.
|
||||
|
||||
For this reason, I’m going to write this tutorial, since **installing Grub on a UEFI BIOS** during a new **Arch Linux** installation is not as straightforward as it sounds.
|
||||
|
||||
### Locating the EFI partition
|
||||
|
||||
The first important thing to do for installing **Grub** on **Arch Linux** is to locate the **EFI** partition.
|
||||
Let’s run the following command in order to locate this partition:
|
||||
|
||||
```
|
||||
# fdisk -l
|
||||
```
|
||||
|
||||
We need to check the partition marked as **EFI System**. In my case, it is **/dev/sda2**.
|
||||
|
||||
After that, we need to mount this partition, for example, on /boot/efi:
|
||||
|
||||
```
|
||||
# mkdir /boot/efi
|
||||
# mount /dev/sda2 /boot/efi
|
||||
```
|
||||
|
||||
Another important thing to do is adding this partition into the **/etc/fstab** file.
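A matching **/etc/fstab** entry might look like the sketch below; the UUID here is a placeholder, so substitute the value reported by the **blkid** command for your EFI partition:

```
# /etc/fstab (excerpt); replace XXXX-XXXX with your partition's UUID
UUID=XXXX-XXXX  /boot/efi  vfat  defaults  0  2
```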
|
||||
|
||||
#### Installing Grub
|
||||
|
||||
Now we can install Grub in our system:
|
||||
|
||||
```
|
||||
# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
# grub-mkconfig -o /boot/grub/grub.cfg
|
||||
```
|
||||
|
||||
#### Adding Windows Automatically into the Grub Menu
|
||||
|
||||
In order to automatically add the **Windows entry into the Grub menu** , we need to install the **os-prober** program:
|
||||
|
||||
```
|
||||
# pacman -Sy os-prober
|
||||
```
|
||||
|
||||
In order to add the entry item let’s run the following commands:
|
||||
|
||||
```
|
||||
# os-prober
|
||||
# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
# grub-mkconfig -o /boot/grub/grub.cfg
|
||||
```
|
||||
|
||||
You can find more about Grub on Arch Linux [here][2].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://fasterland.net/how-to-install-grub-on-arch-linux-uefi.html
|
||||
|
||||
作者:[Francesco Mondello][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://fasterland.net/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://fasterland.net/reinstall-grub-arch-linux.html
|
||||
[2]: https://wiki.archlinux.org/index.php/GRUB
|
@ -0,0 +1,81 @@
|
||||
An introduction to Ansible Operators in Kubernetes
|
||||
======
|
||||
The new Operator SDK makes it easy to create a Kubernetes controller to deploy and manage a service or application in a cluster.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_barnraising_2.png?itok=JOBMbjTM)
|
||||
|
||||
For years, Ansible has been a go-to choice for infrastructure automation. As Kubernetes adoption has skyrocketed, Ansible has continued to shine in the emerging container orchestration ecosystem.
|
||||
|
||||
Ansible fits naturally into a Kubernetes workflow, using YAML to describe the desired state of the world. Multiple projects, including the [Automation Broker][1], are adapting Ansible for use behind specific APIs. This article will focus on a new technique, created through a joint effort by the Ansible core team and the developers of Automation Broker, that uses Ansible to create Operators with minimal effort.
|
||||
|
||||
### What is an Operator?
|
||||
|
||||
An [Operator][2] is a Kubernetes controller that deploys and manages a service or application in a cluster. It automates human operation knowledge and best practices to keep services running and healthy. Input is received in the form of a custom resource. Let's walk through that using a Memcached Operator as an example.
|
||||
|
||||
The [Memcached Operator][3] can be deployed as a service running in a cluster, and it includes a custom resource definition (CRD) for a resource called Memcached. The end user creates an instance of that custom resource to describe how the Memcached Deployment should look. The following example requests a Deployment with three Pods.
|
||||
|
||||
```
|
||||
apiVersion: "cache.example.com/v1alpha1"
|
||||
kind: "Memcached"
|
||||
metadata:
|
||||
name: "example-memcached"
|
||||
spec:
|
||||
size: 3
|
||||
```
|
||||
|
||||
The Operator's job is called reconciliation—continuously ensuring that what is specified in the "spec" matches the real state of the world. This sample Operator delegates Pod management to a Deployment controller. So while it does not directly create or delete Pods, if you change the size, the Operator's reconciliation loop ensures that the new value is applied to the Deployment resource it created.
|
||||
|
||||
A mature Operator can deploy, upgrade, back up, repair, scale, and reconfigure an application that it manages. As you can see, not only does an Operator provide a simple way to deploy arbitrary services using only native Kubernetes APIs; it enables full day-two (post-deployment, such as updates, backups, etc.) management, limited only by what you can code.
|
||||
|
||||
### Creating an Operator
|
||||
|
||||
The [Operator SDK][4] makes it easy to get started. It lays down the skeleton of a new Operator with many of the complex pieces already handled. You can focus on defining your custom resources and coding the reconciliation logic in Go. The SDK saves you a lot of time and ongoing maintenance burden, but you will still end up owning a substantial software project.
|
||||
|
||||
Ansible was recently introduced to the Operator SDK as an even simpler way to make an Operator, with no coding required. To create an Operator, you merely:
|
||||
|
||||
* Create a CRD in the form of YAML
|
||||
* Define what reconciliation should do by creating an Ansible role or playbook
|
||||
|
||||
|
||||
|
||||
It's YAML all the way down—a familiar experience for Kubernetes users.
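Concretely, the SDK associates each CRD with a role or playbook through a watches.yaml file. A minimal sketch, in which the group, kind, and role path are illustrative, might look like this:

```
# watches.yaml: map a custom resource kind to the Ansible role that reconciles it
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: /opt/ansible/roles/memcached
```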
|
||||
|
||||
### How does it work?
|
||||
|
||||
There is a preexisting Ansible Operator base container image that includes Ansible, [ansible-runner][5], and the Operator's executable service. The SDK helps to build a layer on top that adds one or more CRDs and associates each with an Ansible role or playbook.
|
||||
|
||||
When it's running, the Operator uses a Kubernetes feature to "watch" for changes to any resource of the type defined. Upon receiving such a notification, it reconciles the resource that changed. The Operator runs the corresponding role or playbook, and information about the resource is passed to Ansible as [extra-vars][6].
|
||||
|
||||
### Using Ansible with Kubernetes
|
||||
|
||||
Following several iterations, the Ansible community has produced a remarkably easy-to-use module for working with Kubernetes. Especially if you have any experience with a Kubernetes module prior to Ansible 2.6, you owe it to yourself to have a look at the [k8s module][7]. Creating, retrieving, and updating resources is a natural experience that will feel familiar to any Kubernetes user. It makes creating an Operator that much easier.
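As a rough sketch of what that experience looks like (the resource names here are illustrative), a playbook task using the k8s module can declare a resource inline:

```
# Ensure a ConfigMap exists in the cluster; the k8s module accepts an
# inline resource definition that mirrors ordinary Kubernetes YAML
- name: Create an example ConfigMap
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config
        namespace: default
      data:
        greeting: hello
```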
|
||||
|
||||
### Give it a try
|
||||
|
||||
If you need to build a Kubernetes Operator, doing so with Ansible could save time and complexity. To learn more, head over to the Operator SDK documentation and work through the [Getting Started Guide][8] for Ansible-based Operators. Then join us on the [Operator Framework mailing list][9] and let us know what you think.
|
||||
|
||||
Michael Hrivnak will present [Automating Multi-Service Deployments on Kubernetes][10] at [LISA18][11], October 29-31 in Nashville, Tennessee, USA.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/ansible-operators-kubernetes
|
||||
|
||||
作者:[Michael Hrivnak][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mhrivnak
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/18/2/automated-provisioning-kubernetes
|
||||
[2]: https://coreos.com/operators/
|
||||
[3]: https://github.com/operator-framework/operator-sdk-samples/tree/master/memcached-operator
|
||||
[4]: https://github.com/operator-framework/operator-sdk/
|
||||
[5]: https://github.com/ansible/ansible-runner
|
||||
[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#passing-variables-on-the-command-line
|
||||
[7]: https://docs.ansible.com/ansible/2.6/modules/k8s_module.html
|
||||
[8]: https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md
|
||||
[9]: https://groups.google.com/forum/#!forum/operator-framework
|
||||
[10]: https://www.usenix.org/conference/lisa18/presentation/hrivnak
|
||||
[11]: https://www.usenix.org/conference/lisa18
|
@ -1,151 +0,0 @@
|
||||
How To Browse And Read Entire Arch Wiki As Linux Man Pages
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/arch-wiki-720x340.jpg)
|
||||
|
||||
A while ago, I wrote a guide that described how to browse the Arch Wiki from your Terminal using a command line script named [**arch-wiki-cli**][1]. Using this script, anyone can easily navigate through the entire Arch Wiki website and read it with a text browser of their choice. Obviously, an active Internet connection is required to use that script. Today, I stumbled upon a similar utility named **“Arch-wiki-man”**. As the name says, it is also used to read the Arch Wiki from the command line, but it doesn’t require an Internet connection. The Arch-wiki-man program helps you to browse and read the entire Arch Wiki as Linux man pages. It will display any article from the Arch Wiki in man page format. Also, you need not be online to browse the Arch Wiki. The entire Arch Wiki is downloaded locally, and updates are pushed automatically every two days. So, you always have an up-to-date, local copy of the Arch Wiki on your system.
|
||||
|
||||
### Installing Arch-wiki-man
|
||||
|
||||
Arch-wiki-man is available in [**AUR**][2], so you can install it using any AUR helper programs, for example [**Yay**][3].
|
||||
|
||||
```
|
||||
$ yay -S arch-wiki-man
|
||||
```
|
||||
|
||||
Alternatively, it can be installed using NPM package manager like below. Make sure you have [**installed NodeJS**][4] and run the following command to install it:
|
||||
|
||||
```
|
||||
$ npm install -g arch-wiki-man
|
||||
```
|
||||
|
||||
### Browse And Read Entire Arch Wiki As Linux Man Pages
|
||||
|
||||
The typical syntax of Arch-wiki-man is:
|
||||
|
||||
```
|
||||
$ awman <search-query>
|
||||
```
|
||||
|
||||
Let me show you some examples.
|
||||
|
||||
**Search with one or more matches**
|
||||
|
||||
Let us search for the [**Arch Linux installation guide**][5]. To do so, simply run:
|
||||
|
||||
```
|
||||
$ awman Installation guide
|
||||
```
|
||||
|
||||
The above command will search for matches that contain the search term “Installation guide” in the Arch Wiki. If there are multiple matches for the given search term, a selection menu will appear. Choose the guide you want to read using the **UP/DOWN arrows** or Vim-style keybindings ( **j/k** ) and hit ENTER to open it. The resulting guide will open in man page format like below.
|
||||
|
||||
![][6]
|
||||
|
||||
Here, awman stands for **a**rch **w**iki **m**an.
|
||||
|
||||
All man command options are supported, so you can navigate through the guide the same way you do when reading a man page. To view the help section, press **h**.
|
||||
|
||||
![][7]
|
||||
|
||||
To exit the selection menu without entering **man** , simply press **Ctrl+c**.
|
||||
|
||||
To go back and/or quit man, type **q**.
|
||||
|
||||
**Search matches in titles and descriptions**
|
||||
|
||||
By default, Awman will search for the matches in titles only. You can, however, direct it to search for the matches in both the titles and descriptions as well.
|
||||
|
||||
```
|
||||
$ awman -d vim
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ awman --desc-search vim
|
||||
```
|
||||
|
||||
**Search for matches in contents**
|
||||
|
||||
Apart from searching for matches in titles and descriptions, it is also possible to scan the contents for a match as well. Please note that this will significantly slow down the search process.
|
||||
|
||||
```
|
||||
$ awman -k emacs
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ awman --apropos emacs
|
||||
```
|
||||
|
||||
**Open the search results in web browser**
|
||||
|
||||
If you don’t want to view the Arch Wiki guides in man page format, you can open them in a web browser. To do so, run:
|
||||
|
||||
```
|
||||
$ awman -w pacman
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ awman --web pacman
|
||||
```
|
||||
|
||||
This command will open the resulting match in the default web browser rather than with **man** command. Please note that you need Internet connection to use this option.
|
||||
|
||||
**Search in other languages**
|
||||
|
||||
By default, Awman will open the Arch Wiki pages in English. If you want to view the results in another language, for example **Spanish**, simply do:
|
||||
|
||||
```
|
||||
$ awman -l spanish codecs
|
||||
```
|
||||
|
||||
![][8]
|
||||
|
||||
To view the list of available language options, run:
|
||||
|
||||
```
|
||||
|
||||
$ awman --list-languages
|
||||
|
||||
```
|
||||
|
||||
**Update the local copy of Arch Wiki**
|
||||
|
||||
Like I already said, the updates are pushed automatically every two days. If you want to update it manually, simply run:
|
||||
|
||||
```
|
||||
$ awman-update
|
||||
arch-wiki-man@1.3.0 /usr/lib/node_modules/arch-wiki-man
|
||||
└── arch-wiki-md-repo@0.10.84
|
||||
|
||||
arch-wiki-md-repo has been successfully updated or reinstalled.
|
||||
```
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-browse-and-read-entire-arch-wiki-as-linux-man-pages/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/search-arch-wiki-website-commandline/
|
||||
[2]: https://aur.archlinux.org/packages/arch-wiki-man/
|
||||
[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[4]: https://www.ostechnix.com/install-node-js-linux/
|
||||
[5]: https://www.ostechnix.com/install-arch-linux-latest-version/
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-1.gif
|
||||
[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-2.png
|
||||
[8]: https://www.ostechnix.com/wp-content/uploads/2018/10/awman-3-1.png
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
Running Linux containers as a non-root with Podman
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Final JOS project
|
||||
======
|
||||
Piazza Discussion Due: November 2, 2018
Proposals Due: November 8, 2018
Code repository Due: December 6, 2018
Check-off and in-class demos: Week of December 10, 2018
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Lab 4: Preemptive Multitasking
|
||||
======
|
||||
### Lab 4: Preemptive Multitasking
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Lab 5: File system, Spawn and Shell
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Lab 6: Network Driver
|
||||
======
|
||||
### Lab 6: Network Driver (default final project)
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
Turn Your Old PC into a Retrogaming Console with Lakka Linux
|
||||
======
|
||||
**If you have an old computer gathering dust, you can turn it into a PlayStation-like retrogaming console with the Lakka Linux distribution.**
|
||||
|
@ -0,0 +1,87 @@
|
||||
piwheels: Speedy Python package installation for the Raspberry Pi
|
||||
======
|
||||
https://opensource.com/article/18/10/piwheels-python-raspberrypi
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rainbow-pinwheel-piwheel-diversity-inclusion.png?itok=di41Wd3V)
|
||||
|
||||
One of the great things about the Python programming language is [PyPI][1], the Python Package Index, where third-party libraries are hosted, available for anyone to install and gain access to pre-existing functionality without starting from scratch. These libraries are handy utilities, written by members of the community, that aren't found within the Python standard library. But they work in much the same way—you import them into your code and have access to functions and classes you didn't write yourself.
|
||||
|
||||
### The cross-platform problem
|
||||
|
||||
Many of the 150,000+ libraries hosted on PyPI are written in Python, but that's not the only option—you can write Python libraries in C, C++, or anything with Python bindings. The usual benefit of writing a library in C or C++ is speed. The NumPy project is a good example: NumPy provides highly powerful mathematical functionality for dealing with matrix operations. It is highly optimized code that allows users to write in Python but have access to speedy mathematics operations.
|
||||
|
||||
The problem comes when trying to distribute libraries for others to use cross-platform. The standard is to create built distributions called Python wheels. While pure Python libraries are automatically compatible cross-platform, those implemented in C/C++ must be built separately for each operating system, Python version, and system architecture. So, if a library wanted to support Windows, MacOS, and Linux, for both 32-bit and 64-bit computers, and for Python 2.7, 3.4, 3.5, and 3.6, that would require 24 different versions! Some packages do this, but others rely on users building the package from the source code, which can take a long time and can often be complex.
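The arithmetic behind that 24-version figure is just the size of the support matrix; a throwaway sketch:

```python
# Reproducing the article's arithmetic: a C/C++ extension package supporting
# this matrix needs one wheel per (OS, architecture, Python version) combination.
operating_systems = ["Windows", "MacOS", "Linux"]
architectures = ["32-bit", "64-bit"]
python_versions = ["2.7", "3.4", "3.5", "3.6"]

wheel_count = len(operating_systems) * len(architectures) * len(python_versions)
print(wheel_count)  # 24
```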
|
||||
|
||||
### Raspberry Pi and Arm
|
||||
|
||||
While the Raspberry Pi runs Linux, it's not the same architecture as your regular PC—it's Arm, rather than Intel. That means the Linux wheels don't work, and Raspberry Pi users had to build from source—until the piwheels project came to fruition last year. [Piwheels][2] is an open source project that aims to build Raspberry Pi platform wheels for every package on PyPI.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/pi3b.jpg)
|
||||
|
||||
Packages are natively compiled on Raspberry Pi 3 hardware and hosted in a data center provided by UK-based [Mythic Beasts][3], which provides cloud Pis as part of its hosting service. The piwheels website hosts the wheels in a [pip][4]-compatible web server configuration so Raspberry Pi users can use them easily. Raspbian Stretch even comes preconfigured to use piwheels.org as an additional index to PyPI by default.
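On Raspbian Stretch this comes preconfigured; on other Arm systems you can opt in yourself by adding piwheels as an extra index in pip's configuration file (for example `/etc/pip.conf`), something like:

```
[global]
extra-index-url=https://www.piwheels.org/simple
```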
|
||||
|
||||
### The piwheels stack
|
||||
|
||||
The piwheels project runs (almost) entirely on Raspberry Pi hardware:
|
||||
|
||||
* **Master**
|
||||
* A Raspberry Pi web server hosts the wheel files and distributes jobs to the builder Pis.
|
||||
* **Database server**
|
||||
* All package information is stored in a [Postgres database][5].
|
||||
* The master logs build attempts and downloads.
|
||||
* **Builders**
|
||||
* Builder Pis are given build jobs to attempt, and they communicate with the database.
|
||||
* The backlog of packages on PyPI was completed using around 20 Raspberry Pis.
|
||||
* A smaller number of Pis is required to keep up with new releases. Currently, there are three with Raspbian Jessie (Python 3.4) and two with Raspbian Stretch (Python 3.5).
|
||||
|
||||
|
||||
|
||||
The database server was originally a Raspberry Pi but was moved to another server when the database got too large.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/piwheels-stack.png)
|
||||
|
||||
### Time saved
|
||||
|
||||
Around 500,000 packages are downloaded from piwheels.org every month.
|
||||
|
||||
Every time a package is built by piwheels or downloaded by a user, its status information (including build duration) is recorded in a database. Therefore, it's possible to calculate how much time has been saved with pre-compiled packages.
|
||||
|
||||
In the 10 months that the service has been running, over 25 years of build time has been saved.
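A back-of-the-envelope check of those figures (assuming roughly 500,000 downloads per month over the 10 months, and 25 years of build time saved) gives the average saving per download:

```python
# Rough estimate of build time saved per download, using the article's numbers.
SECONDS_PER_YEAR = 365 * 24 * 3600

downloads = 500_000 * 10            # ~10 months of downloads
saved_seconds = 25 * SECONDS_PER_YEAR

per_download = saved_seconds / downloads
print(round(per_download, 1))       # ~157.7 seconds saved per download
```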
|
||||
|
||||
### Great for projects
|
||||
|
||||
Raspberry Pi project tutorials requiring Python libraries often include warnings like "this step takes a few hours"—but that's no longer true, thanks to piwheels. Piwheels makes it easy for makers and developers to dive straight into their project and not get bogged down waiting for software to install. Amazing libraries are just a **pip install** away; no need to wait for compilation.
|
||||
|
||||
Piwheels has wheels for NumPy, SciPy, OpenCV, Keras, and even [Tensorflow][6], Google's machine learning framework. These libraries are great for [home projects][7], including image and facial recognition with the [camera module][8]. For inspiration, take a look at the Raspberry Pi category on [PyImageSearch][9] (one of my [favorite Raspberry Pi blogs][10]).
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/camera_0.jpg)
|
||||
|
||||
Read more about piwheels on the project's [blog][11] and the [Raspberry Pi blog][12], see the [source code on GitHub][13], and check out the [piwheels website][2]. If you want to contribute to the project, check the [missing packages tag][14] and see if you can successfully build one of them.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/piwheels-python-raspberrypi
|
||||
|
||||
作者:[Ben Nuttall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/bennuttall
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://pypi.org/
|
||||
[2]: https://www.piwheels.org/
|
||||
[3]: https://www.mythic-beasts.com/order/rpi
|
||||
[4]: https://en.wikipedia.org/wiki/Pip_(package_manager)
|
||||
[5]: https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi
|
||||
[6]: https://www.tensorflow.org/
|
||||
[7]: https://opensource.com/article/17/4/5-projects-raspberry-pi-home
|
||||
[8]: https://opensource.com/life/15/6/raspberry-pi-camera-projects
|
||||
[9]: https://www.pyimagesearch.com/category/raspberry-pi/
|
||||
[10]: https://opensource.com/article/18/8/top-10-raspberry-pi-blogs-follow
|
||||
[11]: https://blog.piwheels.org/
|
||||
[12]: https://www.raspberrypi.org/blog/piwheels/
|
||||
[13]: https://github.com/bennuttall/piwheels
|
||||
[14]: https://github.com/bennuttall/piwheels/issues?q=is%3Aissue+is%3Aopen+label%3A%22missing+package%22
|
@ -0,0 +1,327 @@
|
||||
Automating upstream releases with release-bot
|
||||
======
|
||||
All you need to do is file an issue into your upstream repository and release-bot takes care of the rest.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_robots.png?itok=TOZgajrd)
|
||||
|
||||
If you own or maintain a GitHub repo and have ever pushed a package from it into [PyPI][1] and/or [Fedora][2], you know it requires some additional work using the Fedora infrastructure.
|
||||
|
||||
Good news: We have developed a tool called [release-bot][3] that automates the process. All you need to do is file an issue into your upstream repository and release-bot takes care of the rest. But let’s not get ahead of ourselves. First, let’s look at what needs to be set up for this automation to happen. I’ve chosen the **meta-test-family** upstream repository as an example.
|
||||
|
||||
### Configuration files for release-bot
|
||||
|
||||
There are two configuration files for release-bot: **conf.yaml** and **release-conf.yaml**.
|
||||
|
||||
#### conf.yaml
|
||||
|
||||
**conf.yaml** must be accessible during bot initialization; it specifies how to access the GitHub repository. To show that, I have created a new git repository named **mtf-release-bot** , which contains **conf.yaml** and the other secret files.
|
||||
|
||||
```
|
||||
repository_name: name
|
||||
repository_owner: owner
|
||||
# https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
|
||||
github_token: xxxxxxxxxxxxxxxxxxxxxxxxx
|
||||
# time in seconds during checks for new releases
|
||||
refresh_interval: 180
|
||||
```
|
||||
|
||||
For the meta-test-family case, the configuration file looks like this:
|
||||
|
||||
```
|
||||
repository_name: meta-test-family
|
||||
repository_owner: fedora-modularity
|
||||
github_token: xxxxxxxxxxxxxxxxxxxxx
|
||||
refresh_interval: 180
|
||||
```
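To illustrate why all four keys matter, here is a hypothetical sketch of the kind of sanity check you could run on a parsed `conf.yaml` (release-bot's real validation lives in the project itself):

```python
# Hypothetical check that a parsed conf.yaml carries the keys release-bot needs.
REQUIRED_KEYS = {"repository_name", "repository_owner", "github_token", "refresh_interval"}

conf = {  # parsed form of the meta-test-family conf.yaml above
    "repository_name": "meta-test-family",
    "repository_owner": "fedora-modularity",
    "github_token": "xxxxxxxxxxxxxxxxxxxxx",
    "refresh_interval": 180,
}

missing = REQUIRED_KEYS - conf.keys()
if missing:
    raise SystemExit(f"conf.yaml is missing keys: {sorted(missing)}")
```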
|
||||
|
||||
#### release-conf.yaml
|
||||
|
||||
**release-conf.yaml** must be stored [in the repository itself][4]; it specifies how to do GitHub/PyPI/Fedora releases.
|
||||
|
||||
```
|
||||
# list of major python versions that bot will build separate wheels for
|
||||
python_versions:
|
||||
- 2
|
||||
- 3
|
||||
# optional:
|
||||
changelog:
|
||||
- Example changelog entry
|
||||
- Another changelog entry
|
||||
# this is info for the authorship of the changelog
|
||||
# if this is not set, person who merged the release PR will be used as an author
|
||||
author_name: John Doe
|
||||
author_email: johndoe@example.com
|
||||
# whether to release on fedora. False by default
|
||||
fedora: false
|
||||
# list of fedora branches bot should release on. Master is always implied
|
||||
fedora_branches:
|
||||
- f27
|
||||
```
|
||||
|
||||
For the meta-test-family case, the configuration file looks like this:
|
||||
|
||||
```
|
||||
python_versions:
|
||||
- 2
|
||||
fedora: true
|
||||
fedora_branches:
|
||||
- f29
|
||||
- f28
|
||||
trigger_on_issue: true
|
||||
```
|
||||
|
||||
#### PyPI configuration file
|
||||
|
||||
The file **.pypirc** , stored in your **mtf-release-bot** private repository, is needed for uploading the new package version into PyPI:
|
||||
|
||||
```
|
||||
[pypi]
|
||||
username = phracek
|
||||
password = xxxxxxxx
|
||||
```
|
||||
|
||||
You also need the private SSH key, **id_rsa**, that you configured in [FAS][5].
|
||||
|
||||
The final structure of the git repository, with **conf.yaml** and the others, looks like this:
|
||||
|
||||
```
|
||||
$ ls -la
|
||||
total 24
|
||||
drwxrwxr-x 3 phracek phracek 4096 Sep 24 12:38 .
|
||||
drwxrwxr-x. 20 phracek phracek 4096 Sep 24 12:37 ..
|
||||
-rw-rw-r-- 1 phracek phracek 199 Sep 24 12:26 conf.yaml
|
||||
drwxrwxr-x 8 phracek phracek 4096 Sep 24 12:38 .git
|
||||
-rw-rw-r-- 1 phracek phracek 3243 Sep 24 12:38 id_rsa
|
||||
-rw------- 1 phracek phracek 78 Sep 24 12:28 .pypirc
|
||||
```
|
||||
|
||||
### Requirements
|
||||
|
||||
Releasing to PyPI requires the [wheel package][6] for both Python 2 and Python 3, so install **wheel** with both versions of pip. You must also set up your PyPI login details in **$HOME/.pypirc**, as described in the [PyPI documentation][7]. If you are releasing to Fedora, you must have an active [Kerberos][8] ticket while the bot runs, or specify the path to the Kerberos keytab file with `-k/--keytab`. Also, **fedpkg** requires that you have an SSH key in your keyring that you uploaded to FAS.
|
||||
|
||||
### How to deploy release-bot
|
||||
|
||||
|
||||
|
||||
There are two ways to use release-bot: as a Docker image or as an OpenShift template.
|
||||
|
||||
#### Docker image
|
||||
|
||||
Let’s build the image using the `s2i` command:
|
||||
|
||||
```
|
||||
$ s2i build $CONFIGURATION_REPOSITORY_URL usercont/release-bot app-name
|
||||
```
|
||||
|
||||
where `$CONFIGURATION_REPOSITORY_URL` is a reference to the GitHub repository, like `https://<GIT_LAB_PATH>/mtf-release-conf`.
|
||||
|
||||
Let’s look at Docker images:
|
||||
|
||||
```
|
||||
$ docker images
|
||||
REPOSITORY TAG IMAGE ID CREATED SIZE
|
||||
mtf-release-bot latest 08897871e65e 6 minutes ago 705 MB
|
||||
docker.io/usercont/release-bot latest 5b34aa670639 9 days ago 705 MB
|
||||
```
|
||||
|
||||
Now let’s try to run the **mtf-release-bot** image with this command:
|
||||
|
||||
```
|
||||
$ docker run mtf-release-bot
|
||||
---> Setting up ssh key...
|
||||
Agent pid 12
|
||||
Identity added: ./.ssh/id_rsa (./.ssh/id_rsa)
|
||||
12:21:18.982 configuration.py DEBUG Loaded configuration for fedora-modularity/meta-test-family
|
||||
12:21:18.982 releasebot.py INFO release-bot v0.4.1 reporting for duty!
|
||||
12:21:18.982 github.py DEBUG Fetching release-conf.yaml
|
||||
12:21:37.611 releasebot.py DEBUG No merged release PR found
|
||||
12:21:38.282 releasebot.py INFO Found new release issue with version: 0.8.5
|
||||
12:21:42.565 releasebot.py DEBUG No more open issues found
|
||||
12:21:43.190 releasebot.py INFO Making a new PR for release of version 0.8.5 based on an issue.
|
||||
12:21:46.709 utils.py DEBUG ['git', 'clone', 'https://github.com/fedora-modularity/meta-test-family.git', '.']
|
||||
|
||||
12:21:47.401 github.py DEBUG {"message":"Branch not found","documentation_url":"https://developer.github.com/v3/repos/branches/#get-branch"}
|
||||
12:21:47.994 utils.py DEBUG ['git', 'config', 'user.email', 'the.conu.bot@gmail.com']
|
||||
|
||||
12:21:47.996 utils.py DEBUG ['git', 'config', 'user.name', 'Release bot']
|
||||
|
||||
12:21:48.009 utils.py DEBUG ['git', 'checkout', '-b', '0.8.5-release']
|
||||
|
||||
12:21:48.014 utils.py ERROR No version files found. Aborting version update.
|
||||
12:21:48.014 utils.py WARNING No CHANGELOG.md present in repository
|
||||
[Errno 2] No such file or directory: '/tmp/tmpmbvb05jq/CHANGELOG.md'
|
||||
12:21:48.020 utils.py DEBUG ['git', 'commit', '--allow-empty', '-m', '0.8.5 release']
|
||||
[0.8.5-release 7ee62c6] 0.8.5 release
|
||||
|
||||
12:21:51.342 utils.py DEBUG ['git', 'push', 'origin', '0.8.5-release']
|
||||
|
||||
12:21:51.905 github.py DEBUG No open PR's found
|
||||
12:21:51.905 github.py DEBUG Attempting a PR for 0.8.5-release branch
|
||||
12:21:53.215 github.py INFO Created PR: https://github.com/fedora-modularity/meta-test-family/pull/243
|
||||
12:21:53.216 releasebot.py INFO I just made a PR request for a release version 0.8.5
|
||||
12:21:54.154 github.py DEBUG Comment added to PR: I just made a PR request for a release version 0.8.5
|
||||
Here's a [link to the PR](https://github.com/fedora-modularity/meta-test-family/pull/243)
|
||||
12:21:54.154 github.py DEBUG Attempting to close issue #242
|
||||
12:21:54.992 github.py DEBUG Closed issue #242
|
||||
```
|
||||
|
||||
As you can see, release-bot automatically closed the following issue, requesting a new upstream release of the meta-test-family: [https://github.com/fedora-modularity/meta-test-family/issues/243][9].
|
||||
|
||||
In addition, release-bot created a new PR with the changelog. You can update the PR, for example by squashing the changelog entries, and once you merge it, the GitHub release happens automatically and the PyPI and Fedora releases will start.
|
||||
|
||||
You now have a working solution to easily release upstream versions of your package into PyPI and Fedora.
|
||||
|
||||
#### OpenShift template
|
||||
|
||||
Another option to deliver automated releases using release-bot is to deploy it in OpenShift.
|
||||
|
||||
The OpenShift template looks as follows:
|
||||
|
||||
```
|
||||
kind: Template
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: release-bot
|
||||
annotations:
|
||||
description: S2I Release-bot image builder
|
||||
tags: release-bot s2i
|
||||
iconClass: icon-python
|
||||
labels:
|
||||
template: release-bot
|
||||
role: releasebot_application_builder
|
||||
objects:
|
||||
- kind : ImageStream
|
||||
apiVersion : v1
|
||||
metadata :
|
||||
name : ${APP_NAME}
|
||||
labels :
|
||||
appid : release-bot-${APP_NAME}
|
||||
- kind : ImageStream
|
||||
apiVersion : v1
|
||||
metadata :
|
||||
name : ${APP_NAME}-s2i
|
||||
labels :
|
||||
appid : release-bot-${APP_NAME}
|
||||
spec :
|
||||
tags :
|
||||
- name : latest
|
||||
from :
|
||||
kind : DockerImage
|
||||
name : usercont/release-bot:latest
|
||||
#importPolicy:
|
||||
# scheduled: true
|
||||
- kind : BuildConfig
|
||||
apiVersion : v1
|
||||
metadata :
|
||||
name : ${APP_NAME}
|
||||
labels :
|
||||
appid : release-bot-${APP_NAME}
|
||||
spec :
|
||||
triggers :
|
||||
- type : ConfigChange
|
||||
- type : ImageChange
|
||||
source :
|
||||
type : Git
|
||||
git :
|
||||
uri : ${CONFIGURATION_REPOSITORY}
|
||||
contextDir : ${CONFIGURATION_REPOSITORY}
|
||||
sourceSecret :
|
||||
name : release-bot-secret
|
||||
strategy :
|
||||
type : Source
|
||||
sourceStrategy :
|
||||
from :
|
||||
kind : ImageStreamTag
|
||||
name : ${APP_NAME}-s2i:latest
|
||||
output :
|
||||
to :
|
||||
kind : ImageStreamTag
|
||||
name : ${APP_NAME}:latest
|
||||
- kind : DeploymentConfig
|
||||
apiVersion : v1
|
||||
metadata :
|
||||
name: ${APP_NAME}
|
||||
labels :
|
||||
appid : release-bot-${APP_NAME}
|
||||
spec :
|
||||
strategy :
|
||||
type : Rolling
|
||||
triggers :
|
||||
- type : ConfigChange
|
||||
- type : ImageChange
|
||||
imageChangeParams :
|
||||
automatic : true
|
||||
containerNames :
|
||||
- ${APP_NAME}
|
||||
from :
|
||||
kind : ImageStreamTag
|
||||
name : ${APP_NAME}:latest
|
||||
replicas : 1
|
||||
selector :
|
||||
deploymentconfig : ${APP_NAME}
|
||||
template :
|
||||
metadata :
|
||||
labels :
|
||||
appid: release-bot-${APP_NAME}
|
||||
deploymentconfig : ${APP_NAME}
|
||||
spec :
|
||||
containers :
|
||||
- name : ${APP_NAME}
|
||||
image : ${APP_NAME}:latest
|
||||
resources:
|
||||
requests:
|
||||
memory: "64Mi"
|
||||
cpu: "50m"
|
||||
limits:
|
||||
memory: "128Mi"
|
||||
cpu: "100m"
|
||||
|
||||
parameters :
|
||||
- name : APP_NAME
|
||||
description : Name of application
|
||||
value :
|
||||
required : true
|
||||
- name : CONFIGURATION_REPOSITORY
|
||||
description : Git repository with configuration
|
||||
value :
|
||||
required : true
|
||||
```
|
||||
|
||||
The easiest way to deploy the **mtf-release-bot** repository with secret files into OpenShift is to use the following two commands:
|
||||
|
||||
```
|
||||
$ curl -sLO https://github.com/user-cont/release-bot/raw/master/openshift-template.yml
|
||||
```
|
||||
|
||||
In your OpenShift instance, deploy the template by running the following command:
|
||||
|
||||
```
|
||||
oc process -p APP_NAME="mtf-release-bot" -p CONFIGURATION_REPOSITORY="git@<git_lab_path>/mtf-release-conf.git" -f openshift-template.yml | oc apply
|
||||
```
|
||||
|
||||
### Summary
|
||||
|
||||
See the [example pull request][10] in the meta-test-family upstream repository, where you'll find information about what release-bot released. Once you get to this point, you can see that release-bot is able to push new upstream versions into GitHub, PyPI, and Fedora without heavy user intervention. It automates all the steps so you don’t need to manually upload and build new upstream versions of your package.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/upstream-releases-pypi-fedora-release-bot
|
||||
|
||||
作者:[Petr Stone Hracek][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/phracek
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://pypi.org/
|
||||
[2]: https://getfedora.org/
|
||||
[3]: https://github.com/user-cont/release-bot
|
||||
[4]: https://github.com/fedora-modularity/meta-test-family
|
||||
[5]: https://admin.fedoraproject.org/accounts/
|
||||
[6]: https://pypi.org/project/wheel/
|
||||
[7]: https://packaging.python.org/tutorials/distributing-packages/#create-an-account
|
||||
[8]: https://web.mit.edu/kerberos/
|
||||
[9]: https://github.com/fedora-modularity/meta-test-family/issues/238
|
||||
[10]: https://github.com/fedora-modularity/meta-test-family/pull/243
|
@ -0,0 +1,80 @@
|
||||
Browsing the web with Min, a minimalist open source web browser
|
||||
======
|
||||
Not every web browser needs to carry every single feature. Min puts a minimalist spin on the everyday web browser.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG)
|
||||
|
||||
Does the world need another web browser? Even though the days of having a multiplicity of browsers to choose from are long gone, there still are folks out there developing new applications that help us use the web.
|
||||
|
||||
One of those new-fangled browsers is [Min][1]. As its name suggests (well, suggests to me, anyway), Min is a minimalist browser. That doesn't mean it's deficient in any significant way, and its open source, Apache 2.0 license piques my interest.
|
||||
|
||||
But is Min worth a look? Let's find out.
|
||||
|
||||
### Getting going
|
||||
|
||||
Min is one of many applications written using a development framework called [Electron][2]. (It's the same framework that brought us the [Atom text editor][3].) You can [get installers][4] for Linux, MacOS, and Windows. You can also grab the [source code from GitHub][5] and compile it if you're inclined.
|
||||
|
||||
I run Manjaro Linux, and there isn't an installer for that distro. Luckily, I was able to install Min from Manjaro's package manager.
|
||||
|
||||
Once that was done, I fired up Min by pressing Alt+F2, typing **min** in the run-application box, and pressing Enter, and I was ready to go.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/min-main.png)
|
||||
|
||||
Min is billed as a smarter, faster web browser. It definitely is fast—at the risk of drawing the ire of denizens of certain places on the web, I'll say that it starts faster than Firefox and Chrome on the laptops with which I tried it.
|
||||
|
||||
Browsing with Min is like browsing with Firefox or Chrome. Type a URL in the address bar, press Enter, and away you go.
|
||||
|
||||
### Min's features
|
||||
|
||||
While Min doesn't pack everything you'd find in browsers like Firefox or Chrome, it doesn't do too badly.
|
||||
|
||||
Like any other browser these days, Min supports multiple tabs. It also has a feature called Tasks, which lets you group your open tabs.
|
||||
|
||||
Min's default search engine is [DuckDuckGo][6]. I really like that touch because DuckDuckGo is one of my search engines of choice. If DuckDuckGo isn't your thing, you can set another search engine as the default in Min's preferences.
|
||||
|
||||
Instead of using tools like AdBlock to filter out content you don't want, Min has a built-in ad blocker. It uses the [EasyList filters][7], which were created for AdBlock. You can block scripts and images, and Min also has a built-in tracking blocker.
|
||||
|
||||
Like Firefox, Min has a reading mode called Reading List. Flipping the Reading List switch (well, clicking the icon in the address bar) removes most of the cruft from a page so you can focus on the words you're reading. Pages stay in the Reading List for 30 days.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/min-reading-list.png)
|
||||
|
||||
Speaking of focus, Min also has a Focus Mode that hides your other tabs and prevents you from opening new ones. So, if you're working in a web application, you'll need to click a few times if you feel like procrastinating.
|
||||
|
||||
Of course, Min has a number of keyboard shortcuts that can make using it a lot faster. You can find a reference for those shortcuts [on GitHub][8]. You can also change a number of them in Min's preferences.
|
||||
|
||||
I was pleasantly surprised to find Min can play videos on YouTube, Vimeo, Dailymotion, and similar sites. I also played sample tracks at music retailer 7Digital. I didn't try playing music on popular sites like Spotify or Last.fm (because I don't have accounts with them).
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/min-video.png)
|
||||
|
||||
### What's not there
|
||||
|
||||
The features that Min doesn't pack are as noticeable as the ones it does. There doesn't seem to be a way to bookmark sites. You either have to rely on Min's search history to find your favorite links, or you'll have to rely on a bookmarking service.
|
||||
|
||||
On top of that, Min doesn't support plugins. That's not a deal breaker for me—not having plugins is undoubtedly one of the reasons the browser starts and runs so quickly. I know a number of people who are … well, I wouldn't go so far as to say junkies, but they really like their plugins. Min wouldn't cut it for them.
|
||||
|
||||
### Final thoughts
|
||||
|
||||
Min isn't a bad browser. It's light and fast enough to appeal to the minimalists out there. That said, it lacks features that hardcore web browser users clamor for.
|
||||
|
||||
If you want a zippy browser that isn't weighed down by all the features of so-called modern web browsers, I suggest giving Min a serious look.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/min-web-browser
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://minbrowser.github.io/min/
|
||||
[2]: http://electron.atom.io/apps/
|
||||
[3]: https://opensource.com/article/17/5/atom-text-editor-packages-writers
|
||||
[4]: https://github.com/minbrowser/min/releases/
|
||||
[5]: https://github.com/minbrowser/min
|
||||
[6]: http://duckduckgo.com
|
||||
[7]: https://easylist.to/
|
||||
[8]: https://github.com/minbrowser/min/wiki
|
@ -0,0 +1,226 @@
|
||||
Chrony – An Alternative NTP Client And Server For Unix-like Systems
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/chrony-1-720x340.jpeg)
|
||||
|
||||
In this tutorial, we will be discussing how to install and configure **Chrony**, an alternative NTP client and server for Unix-like systems. Chrony can synchronise the system clock faster and with better time accuracy, and it can be particularly useful for systems that are not online all the time. Chrony is free, open source, and supports GNU/Linux and BSD variants such as FreeBSD, NetBSD, macOS, and Solaris.
|
||||
|
||||
### Installing Chrony
|
||||
|
||||
Chrony is available in the default repositories of most Linux distributions. If you’re on Arch Linux, run the following command to install it:
|
||||
|
||||
```
|
||||
$ sudo pacman -S chrony
|
||||
```
|
||||
|
||||
On Debian, Ubuntu, Linux Mint:
|
||||
|
||||
```
|
||||
$ sudo apt-get install chrony
|
||||
```
|
||||
|
||||
On Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install chrony
|
||||
```
|
||||
|
||||
Once installed, start the **chronyd.service** daemon if it is not running already:
|
||||
|
||||
```
|
||||
$ sudo systemctl start chronyd.service
|
||||
```
|
||||
|
||||
Make it start automatically on every reboot with the following command:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable chronyd.service
|
||||
```
|
||||
|
||||
To verify that chronyd.service has started, run:
|
||||
|
||||
```
|
||||
$ sudo systemctl status chronyd.service
|
||||
```
|
||||
|
||||
If everything is OK, you will see output similar to the following.
|
||||
|
||||
```
|
||||
● chrony.service - chrony, an NTP client/server
|
||||
Loaded: loaded (/lib/systemd/system/chrony.service; enabled; vendor preset: ena
|
||||
Active: active (running) since Wed 2018-10-17 10:34:53 UTC; 3min 15s ago
|
||||
Docs: man:chronyd(8)
|
||||
man:chronyc(1)
|
||||
man:chrony.conf(5)
|
||||
Main PID: 2482 (chronyd)
|
||||
Tasks: 1 (limit: 2320)
|
||||
CGroup: /system.slice/chrony.service
|
||||
└─2482 /usr/sbin/chronyd
|
||||
|
||||
Oct 17 10:34:53 ubuntuserver systemd[1]: Starting chrony, an NTP client/server...
|
||||
Oct 17 10:34:53 ubuntuserver chronyd[2482]: chronyd version 3.2 starting (+CMDMON
|
||||
Oct 17 10:34:53 ubuntuserver chronyd[2482]: Initial frequency -268.088 ppm
|
||||
Oct 17 10:34:53 ubuntuserver systemd[1]: Started chrony, an NTP client/server.
|
||||
Oct 17 10:35:03 ubuntuserver chronyd[2482]: Selected source 85.25.84.166
|
||||
Oct 17 10:35:03 ubuntuserver chronyd[2482]: Source 85.25.84.166 replaced with 2403
|
||||
Oct 17 10:35:03 ubuntuserver chronyd[2482]: Selected source 91.189.89.199
|
||||
Oct 17 10:35:06 ubuntuserver chronyd[2482]: Selected source 106.10.186.200
|
||||
```
|
||||
|
||||
As you can see, Chrony service is started and working!
|
||||
|
||||
### Configure Chrony
|
||||
|
||||
An NTP client needs to know which NTP servers it should contact to get the current time. We can specify the NTP servers using the **server** or **pool** directive in the NTP configuration file. Usually, the default configuration file is **/etc/chrony/chrony.conf** or **/etc/chrony.conf**, depending on the Linux distribution. For better reliability, it is recommended to specify at least three servers.
|
||||
|
||||
The following lines are just an example taken from my Ubuntu 18.04 LTS server.
|
||||
|
||||
```
|
||||
[...]
|
||||
# About using servers from the NTP Pool Project in general see (LP: #104525).
|
||||
# Approved by Ubuntu Technical Board on 2011-02-08.
|
||||
# See http://www.pool.ntp.org/join.html for more information.
|
||||
pool ntp.ubuntu.com iburst maxsources 4
|
||||
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
|
||||
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
|
||||
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
|
||||
[...]
|
||||
```
|
||||
|
||||
As you see in the above output, the [**NTP Pool Project**][1] has been set as the default time server. For those wondering, the NTP Pool Project is a cluster of time servers that provides NTP service to tens of millions of clients across the world. It is the default time server for Ubuntu and most of the other major Linux distributions.
|
||||
|
||||
Here,
|
||||
|
||||
* the **iburst** option is used to speed up the initial synchronisation.
|
||||
  * the **maxsources** option sets the maximum number of NTP sources.
|
||||
|
||||
|
||||
|
||||
Please make sure that the NTP servers you choose are well synchronised, stable, and close to your location, in order to get the most accurate time from your NTP sources.
|
||||
|
||||
### Manage Chronyd from command line
|
||||
|
||||
Chrony has a command line utility named **chronyc** to control and monitor the **chrony** daemon (chronyd).
|
||||
|
||||
To check if **chrony** is synchronized, we can use the **tracking** command as shown below.
|
||||
|
||||
```
|
||||
$ chronyc tracking
|
||||
Reference ID : 6A0ABAC8 (t1.time.sg3.yahoo.com)
|
||||
Stratum : 3
|
||||
Ref time (UTC) : Wed Oct 17 11:48:51 2018
|
||||
System time : 0.000984587 seconds slow of NTP time
|
||||
Last offset : -0.000912981 seconds
|
||||
RMS offset : 0.007983995 seconds
|
||||
Frequency : 23.704 ppm slow
|
||||
Residual freq : +0.006 ppm
|
||||
Skew : 1.734 ppm
|
||||
Root delay : 0.089718960 seconds
|
||||
Root dispersion : 0.008760406 seconds
|
||||
Update interval : 515.1 seconds
|
||||
Leap status : Normal
|
||||
```
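If you want to consume this output from a script, a minimal parsing sketch (run against a trimmed, hard-coded sample of the output above; in practice you would feed in the stdout of `chronyc tracking`):

```python
# Pull one field out of `chronyc tracking` output by splitting on the first colon.
sample = """\
Reference ID    : 6A0ABAC8 (t1.time.sg3.yahoo.com)
Stratum         : 3
RMS offset      : 0.007983995 seconds
Leap status     : Normal"""

rms_offset = None
for line in sample.splitlines():
    key, _, value = line.partition(":")
    if key.strip() == "RMS offset":
        rms_offset = float(value.split()[0])  # first token is the value in seconds

print(rms_offset)
```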
|
||||
|
||||
We can verify the current time sources that chrony uses with this command:
|
||||
|
||||
```
|
||||
$ chronyc sources
|
||||
210 Number of sources = 8
|
||||
MS Name/IP address Stratum Poll Reach LastRx Last sample
|
||||
===============================================================================
|
||||
^- chilipepper.canonical.com 2 10 377 296 +102ms[ +104ms] +/- 279ms
|
||||
^- golem.canonical.com 2 10 377 302 +105ms[ +107ms] +/- 290ms
|
||||
^+ pugot.canonical.com 2 10 377 297 +36ms[ +38ms] +/- 238ms
|
||||
^- alphyn.canonical.com 2 10 377 279 -43ms[ -42ms] +/- 238ms
|
||||
^- dadns.cdnetworks.co.kr 2 10 377 1070 +40ms[ +42ms] +/- 314ms
|
||||
^* t1.time.sg3.yahoo.com 2 10 377 169 -13ms[ -11ms] +/- 80ms
|
||||
^+ sin1.m-d.net 2 10 275 567 -9633us[-7826us] +/- 115ms
|
||||
^- ns2.pulsation.fr 2 10 377 311 -75ms[ -73ms] +/- 250ms
|
||||
```
|
||||
|
||||
The chronyc utility can also display statistics for each source, such as the drift rate and the offset estimation process, using the **sourcestats** command.
```
$ chronyc sourcestats
210 Number of sources = 8
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
chilipepper.canonical.com  32  16   89m     +6.293     14.345    +30ms    24ms
golem.canonical.com        32  17   89m     +0.312     18.887    +20ms    33ms
pugot.canonical.com        32  18   89m     +0.281     11.237  +3307us    23ms
alphyn.canonical.com       31  20   88m     -4.087      8.910    -58ms    17ms
dadns.cdnetworks.co.kr     29  16   76m     -1.094      9.895    -83ms    14ms
t1.time.sg3.yahoo.com      32  16   91m     +0.153      1.952  +2835us  4044us
sin1.m-d.net               29  13   83m     +0.049      6.060  -8466us  9940us
ns2.pulsation.fr           32  17   88m     +0.784      9.834    -62ms    22ms
```
If your system is not connected to the Internet, you need to notify Chrony of that fact. To do so, run:
```
$ sudo chronyc offline
[sudo] password for sk:
200 OK
```
To verify the status of your NTP sources, simply run:
```
$ chronyc activity
200 OK
0 sources online
8 sources offline
0 sources doing burst (return to online)
0 sources doing burst (return to offline)
0 sources with unknown address
```
As you can see, all my NTP sources are currently offline.
Once you’re connected to the Internet again, just notify Chrony that your system is back online using the command:
```
$ sudo chronyc online
200 OK
```
To view the status of NTP source(s), run:
```
$ chronyc activity
200 OK
8 sources online
0 sources offline
0 sources doing burst (return to online)
0 sources doing burst (return to offline)
0 sources with unknown address
```
For a more detailed explanation of all options and parameters, refer to the man pages.
```
$ man chronyc

$ man chronyd
```
And, that’s all for now. Hope this was useful. In subsequent tutorials, we will see how to set up a local NTP server using Chrony and configure clients to use it to synchronise time.

Stay tuned!
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/chrony-an-alternative-ntp-client-and-server-for-unix-like-systems/

作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ntppool.org/en/
@ -0,0 +1,94 @@
4 open source alternatives to Microsoft Access
======
Build simple business applications and keep track of your data with these worthy open source alternatives.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
When small businesses, community organizations, and similar-sized groups realize they need software to manage their data, they think first of Microsoft Access. That may be the right choice if you're already paying for a Microsoft Office subscription or don't care that it's proprietary. But it's far from your only option. Whether you prefer open source software on philosophical grounds or simply don't have the budget for a Microsoft Office subscription, there are several open source database applications that are worthy alternatives to proprietary software like Microsoft Access or Apple FileMaker.

If that sounds like you, here are four open source database tools for your consideration.
### LibreOffice Base

![](https://opensource.com/sites/default/files/uploads/libreoffice-base.png)

In case it's not obvious from its name, [Base][1] is part of the [LibreOffice][2] productivity suite, which includes Writer (word processing), Calc (spreadsheets), Impress (presentations), Draw (graphics), Charts (chart creation), and Math (formulas). As such, Base integrates with the other LibreOffice applications, much like Access does with the Microsoft Office suite. This means you can import and export data from Base into the suite's other applications to create financial reports, mail merges, charts, and more.

Base includes drivers that natively support multi-user database engines, including the open source MySQL, MariaDB, and PostgreSQL; Access; and other JDBC- and ODBC-compliant databases. Built-in wizards and table definitions make it easy for new users to quickly get started building tables, writing queries, and creating forms and reports (such as invoices, sales reports, and customer lists). To learn more, consult the comprehensive [user manual][3] and dive into the [user forums][4]. If you're still stuck, you can find a [certified][5] support professional to help you out.

Installers are available for Linux, MacOS, Windows, and Android. LibreOffice is available under the [Mozilla Public License v2][6]; if you'd like to join the large contributor community and help improve the software, visit the [Get Involved][7] section of LibreOffice's website.
### DB Browser for SQLite

![](https://opensource.com/sites/default/files/uploads/sqlitebrowser.png)

[DB Browser for SQLite][8] enables users to create and use SQLite database files without having to know complex SQL commands. This, plus its spreadsheet-like interface and pre-built wizards, makes it a great option for new database users to get going without much background knowledge.

The application has gone through several name changes, from the original Arca Database Browser to the SQLite Database Browser and finally to its current name (adopted in 2014 to avoid confusion with SQLite), but it has stayed true to its goal of being easy for users to operate.

Its wizards enable users to easily create and modify database files, tables, indexes, and records; import and export data to common file formats; create and issue queries and searches; and more. Installers are available for Windows, MacOS, and a variety of Linux versions, and its [wiki on GitHub][9] offers a wealth of information for users and developers.

DB Browser for SQLite is [bi-licensed][10] under the Mozilla Public License Version 2 and the GNU General Public License Version 3 or later, and you can download the source code from the project's website.
### Kexi

![](https://opensource.com/sites/default/files/uploads/kexi-3.0-table-view.png)

As the database application in the [Calligra Suite][11] productivity software for the KDE desktop, [Kexi][12] integrates with the other applications in the suite, including Words (word processing), Sheets (spreadsheet), Stage (presentations), and Plan (project management).

As a full member of the [KDE][13] project, Kexi is purpose-built for KDE Plasma, but it's not limited to KDE users: Linux, BSD, and Unix users running GNOME can run the database, as can MacOS and Windows users.

Kexi's website says its development was "motivated by the lack of rapid application development ([RAD][14]) tools for database systems that are sufficiently powerful, inexpensive, open standards driven, and portable across many operating systems and hardware platforms." It has all the standard features you'd expect: designing databases, storing data, doing queries, processing data, and so forth.

Kexi is available under the [LGPL][15] open source license and you can download its [source code][16] from its development wiki. If you'd like to learn more, take a look at its [user handbook][17], [forums][18], and [userbase wiki][17].
### nuBuilder Forte

![](https://opensource.com/sites/default/files/uploads/screenshot_from_2018-10-17_13-23-25.png)

[nuBuilder Forte][19] is designed to be as easy as possible for people to use. It's a browser-based tool for developing web-based database applications.

Its clean interface and low-code tools (including support for drag-and-drop) allow users to create and use a database quickly. Because the application is fully web-based, data is accessible from a browser anywhere. Everything is stored in MySQL and can be backed up in one database file.

It uses industry-standard coding languages (HTML, PHP, JavaScript, and SQL), making it easy for developers to get started as well.

Help is available in [videos][20] and other [documentation][21] for topics including creating forms, doing searches, building reports, and more.

nuBuilder Forte is licensed under [GPLv3.0][22] and you can download it on [GitHub][23]. You can learn more by consulting the [nuBuilder Forum][24] or watching its [demo][25] video.
Do you have a favorite open source database tool for building simple projects with little or no coding skill required? If so, please share in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/alternatives/access

作者:[Opensource.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com
[b]: https://github.com/lujun9972
[1]: https://www.libreoffice.org/discover/base/
[2]: https://www.libreoffice.org/
[3]: https://documentation.libreoffice.org/en/english-documentation/base/
[4]: http://document-foundation-mail-archive.969070.n3.nabble.com/Users-f1639498.html
[5]: https://www.libreoffice.org/get-help/professional-support/
[6]: https://www.libreoffice.org/download/license/
[7]: https://www.libreoffice.org/community/get-involved/
[8]: http://sqlitebrowser.org/
[9]: https://github.com/sqlitebrowser/sqlitebrowser/wiki
[10]: https://github.com/sqlitebrowser/sqlitebrowser/blob/master/LICENSE
[11]: https://www.calligra.org/
[12]: https://www.calligra.org/kexi/
[13]: https://www.kde.org/
[14]: http://en.wikipedia.org/wiki/Rapid_application_development
[15]: http://kexi-project.org/wiki/wikiview/index.php@KexiLicense.html
[16]: http://kexi-project.org/wiki/wikiview/index.php@Download.html
[17]: https://userbase.kde.org/Kexi/Handbook
[18]: http://forum.kde.org/kexi
[19]: https://www.nubuilder.com/
[20]: https://www.nubuilder.com/videos
[21]: https://www.nubuilder.com/wiki
[22]: https://github.com/nuSoftware/nuBuilder4/blob/master/LICENSE.txt
[23]: https://github.com/nuSoftware/nuBuilder4
[24]: https://forums.nubuilder.com/viewforum.php?f=18&sid=7036bccdc08ba0da73181bc72cd63c62
[25]: https://www.youtube.com/watch?v=tdh9ILCUAco&feature=youtu.be
@ -0,0 +1,74 @@
MidnightBSD Hits 1.0! Check Out What’s New
======
A couple of days ago, Lucas Holt announced the release of MidnightBSD 1.0. Let’s take a quick look at what is included in this new release.

### What is MidnightBSD?
![MidnightBSD][1]

[MidnightBSD][2] is a fork of FreeBSD. Lucas created MidnightBSD to be an option for desktop users and for BSD newbies. He wanted to create something that would allow people to quickly get a desktop experience on BSD. He believed that other options had too much of a focus on the server market.
### What is in MidnightBSD 1.0?

According to the [release notes][3], most of the work in 1.0 went towards updating the base system, improving the package manager and updating tools. The new release is compatible with FreeBSD 10-Stable.

Mports (MidnightBSD’s package management system) has been upgraded to support installing multiple packages with one command. The `mport upgrade` command has been fixed. Mports now tracks deprecated and expired packages. A new package format was also introduced.

<https://www.youtube.com/embed/-rlk2wFsjJ4>
Other changes include:

  * [ZFS][4] is now supported as a boot file system. Previously, ZFS could only be used for additional storage.
  * Support for NVMe SSDs.
  * AMD Ryzen and Radeon support have been improved.
  * Intel, Broadcom, and other drivers updated.
  * bhyve support has been ported from FreeBSD.
  * The sensors framework was removed because it was causing locking issues.
  * Sudo was removed and replaced with [doas][5] from OpenBSD.
  * Added support for Microsoft Hyper-V.
### Before you upgrade…

If you are a current MidnightBSD user or are thinking of trying out the new release, it would be a good idea to wait. Lucas is currently rebuilding packages to support the new package format and tooling. He also plans to upgrade packages and ports for the desktop environment over the next couple of months. He is currently working on porting Firefox 52 ESR because it is the last release that does not require Rust. He also hopes to get a newer version of Chromium ported to MidnightBSD. I would recommend keeping an eye on the MidnightBSD [Twitter][6] feed.
### What happened to 0.9?

You might notice that the previous release of MidnightBSD was 0.8.6. Now, you might be wondering, “Why the jump to 1.0?” According to Lucas, he ran into several issues while developing 0.9. In fact, he restarted it several times. He ended up taking CURRENT in a different direction than the 0.9 branch and it became 1.0. Some packages also had an issue with the 0.* numbering system.
### Help Needed

Currently, the MidnightBSD project is the work of pretty much one guy, Lucas Holt. This is the main reason why development has been slow. If you are interested in helping out, you can contact him on [Twitter][6].

In the [release announcement video][7], Lucas said that he had encountered problems with upstream projects accepting patches. They seem to think that MidnightBSD is too small. This often means that he has to port an application from scratch.
### Thoughts

I have a thing for the underdog. Of all the BSDs that I have interacted with, that moniker fits MidnightBSD the most: one guy trying to create an easy desktop experience. Currently, there is only one other BSD trying to do something similar: Project Trident. I think that this is a real barrier to BSD’s success. Linux succeeds because people can quickly and easily install it. Hopefully, MidnightBSD does that for BSD, but right now it has a long way to go.

Have you ever used MidnightBSD? If not, what is your favorite BSD? What other BSD topics should we cover? Let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News, or [Reddit][8].
--------------------------------------------------------------------------------

via: https://itsfoss.com/midnightbsd-1-0-release/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/midnightbsd-wallpaper.jpeg
[2]: https://www.midnightbsd.org/
[3]: https://www.midnightbsd.org/notes/
[4]: https://itsfoss.com/what-is-zfs/
[5]: https://man.openbsd.org/doas
[6]: https://twitter.com/midnightbsd
[7]: https://www.youtube.com/watch?v=-rlk2wFsjJ4
[8]: http://reddit.com/r/linuxusersgroup
@ -0,0 +1,82 @@
TimelineJS: An interactive, JavaScript timeline building tool
======
Learn how to tell a story with TimelineJS.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk)
[TimelineJS 3][1] is an open source storytelling tool that anyone can use to create visually rich, interactive timelines to post on their websites. To get started, simply click “Make a Timeline” on the homepage and follow the easy [step-by-step instructions][1].

TimelineJS was developed at Northwestern University’s KnightLab in Evanston, Illinois. KnightLab is a community of designers, developers, students, and educators who work on experiments designed to push journalism into new spaces. TimelineJS has been used by more than 250,000 people, according to its website, to tell stories viewed millions of times. And TimelineJS 3 is available in more than 60 languages.
Joe Germuska, the “chief nerd” who runs KnightLab’s technology, professional staff, and student fellows, explains, "TimelineJS was originally developed by Northwestern professor Zach Wise. He assigned his students a task to tell stories in a timeline format, only to find that none of the free available tools were as good as he thought they could be. KnightLab funded some of his time to develop the tool in 2012. Near the end of that year, I joined the lab, and among my early tasks was to bring TimelineJS in as a fully supported project of the lab. The next year, I helped Zach with a rewrite to address some issues. Along the way, many students have contributed. Interestingly, a group of students from Victoria University in Wellington, New Zealand, worked on TimelineJS (and some of our other tools) as part of a class project in 2016."

"In general, we designed TimelineJS to make it easy for non-technical people to tell rich, dynamic stories on the web in the context of events in time.”

Users create timelines by adding content into a Google spreadsheet. KnightLab provides a downloadable template that can be edited to create custom timelines. Experts can use their JSON skills to [create custom installations][2] while keeping TimelineJS’s core functionality.
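For those JSON-driven custom installations, the data TimelineJS consumes is a simple object with an optional title and a list of events. The sketch below follows the shape described in KnightLab's JSON-format documentation (linked above); the headline and description text are made-up examples:

```
{
  "title": {
    "text": {
      "headline": "My Timeline",
      "text": "<p>A short description of the timeline.</p>"
    }
  },
  "events": [
    {
      "start_date": { "year": "2012" },
      "text": {
        "headline": "TimelineJS development funded",
        "text": "<p>KnightLab funds initial development of the tool.</p>"
      }
    }
  ]
}
```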
This easy-to-follow [Vimeo video][3] shows how to get started with TimelineJS, and I used it myself to create my first timeline.
### Open sourcing the Adirondacks

Reid Larson, research and scholarly communication librarian at Hamilton College in Clinton, New York, began searching for ways to combine open data and visualization to chronicle the history of Essex County (a county in northern New York that makes up part of the Adirondacks) in the 1990s, when he was the director of the Essex County Historical Society/Adirondack History Center Museum.

"I wanted to take all the open data available on the history of Essex County and be able to present it to people visually. Most importantly, I wanted to make sure that the data would be available for use even if the applications used to present it are no longer available or supported," Larson explains.
Now at Hamilton College, Larson has found TimelineJS to be the ideal open source program to do just what he wanted: chronicle and present a visually appealing timeline of selected places.

"It was a professor who was working on a project that required a solution such as Timeline, and after researching the possibilities, I started using Timeline for that project and subsequent projects," Larson adds.

TimelineJS can be used via a web browser, or the source code can be downloaded from [GitHub][4] for local use.

"I’ve been using the browser version, but I push it to the limits to see how far I can go with it, such as adding my own HTML tags. I want to fully understand it so that I can educate the students and faculty at Hamilton College on its uses," Larson says.
### An open source Eagle Scout project

Not only has Larson used TimelineJS for collegiate purposes, but his son, Erik, created an [interactive historical website][5] for his Eagle Scout project in 2017 using WordPress. The project is a chronicle of places in Waterville, New York, just south of Clinton, in Oneida County. Erik explains that he wants what he started to expand beyond the 36 places in Waterville. "The site is an experiment in online community building," Erik’s website reads.

Larson says he did a lot of the “tech work” on the project so that Erik could concentrate on content. The site was created with [Omeka][6], an open source web publishing platform for sharing digital collections and creating media-rich online exhibits, and [Curatescape][7], a framework for the open source Omeka CMS.

Larson explains that a key feature of TimelineJS is that it uses Google Sheets to store and organize the data used in the timeline. "Google Sheets is a good structure for organizing data simply, and that data will be available even if TimelineJS becomes unavailable in the future."

Larson says that he prefers using [ArcGIS][8] over KnightLab’s StoryMap because it uses spreadsheets to store content, whereas [StoryMap][9] does not. Larson is looking forward to integrating augmented reality into his projects in the future.
### Create your own open source timeline

I plan on using TimelineJS to create interactive content for the Development and Alumni Relations department at Clarkson University, where I am the development communications specialist. To practice working with it, I created [a simple timeline][10] of the articles I’ve written for [Opensource.com][11]:

![](https://opensource.com/sites/default/files/uploads/google-sheet-timeline.png)

![](https://opensource.com/sites/default/files/uploads/wordpress-timeline.png)

![](https://opensource.com/sites/default/files/uploads/website-timeline.png)

As Reid Larson stated, it is very easy to use and the results are quite satisfactory. I was able to get a working timeline created and posted to my WordPress site in a matter of minutes. I used media that I had already uploaded to my Media Library in WordPress and simply copied the image address. I typed the dates, locations, and information into the other cells and used “Publish to the web” under “File” in the Google spreadsheet. That produced a link and embed code. I created a new post on my WordPress site and pasted in the embed code, and the timeline was live and working.

Of course, there is more customization I need to do, but I was able to get it working quickly and easily, much as Reid said it would.

I will continue experimenting with TimelineJS on my own site, and when I get more comfortable with it, I’ll use it for my professional projects and try out the other apps that KnightLab has created for interactive, visually appealing storytelling.

What might you use TimelineJS for?
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/create-interactive-timelines-open-source-tool

作者:[Jeff Macharyas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rikki-endsley
[b]: https://github.com/lujun9972
[1]: https://timeline.knightlab.com/
[2]: https://timeline.knightlab.com/docs/json-format.html
[3]: https://vimeo.com/knightlab/timelinejs
[4]: https://github.com/NUKnightLab/TimelineJS3
[5]: http://nysplaces.com/
[6]: https://github.com/omeka
[7]: https://github.com/CPHDH/Curatescape
[8]: https://www.arcgis.com/index.html
[9]: https://storymap.knightlab.com/
[10]: https://macharyas.com/index.php/2018/10/06/timeline/
[11]: http://opensource.com/
@ -1,131 +1,133 @@
Linux vs Mac: Linux 比 Mac 好的七个原因
======
最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问, Linux 是个非常优秀的平台。但是它和其他的操作系统一样也会有缺点。对于某些专门的领域,像是游戏, Windows 当然更好。 而对于视频编辑等任务, Mac 系统可能更为方便。这一切都取决于你的爱好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。

如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac 。

### Linux 比 Mac 好的 7 个原因

![Linux vs Mac: 为什么 Linux 更好][2]

Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令行、 bash 和其他一些命令行工具,相比于 Windows ,他们所支持的应用和游戏比较少。但缺点也仅仅如此。

平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。

那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。

#### 1\. 价格

![Linux vs Mac: 为什么 Linux 更好][3]

假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。

那在这种情况下,你觉得花费几百块买个系统完成这项工作,或者花费更多直接买个 Macbook 划算吗?当然,最终的决定权还是在你。

买个装好 Mac 系统的电脑还是买个便宜的电脑,然后自己装上免费的 Linux 系统,这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好地用,而对于音视频方面,我更倾向于使用 Final Cut Pro (专业的视频编辑软件) 和 Logic Pro X (专业的音乐制作软件)(这两款软件都是苹果公司推出的)。

#### 2\. 硬件支持

![Linux vs Mac: 为什么 Linux 更好][4]

Linux 支持多种平台. 无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了, 你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅的运行][5].

而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备连在一起的。

这是[在非苹果系统上安装 Mac OS 的教程][6]. 这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。

总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。
#### 3\. 安全性

![Linux vs Mac: 为什么 Linux 更好][7]

很多人都说 ios 和 Mac 是非常安全的平台。的确,相比于 Windows ,它确实比较安全,可并不一定有 Linux 安全。

我不是在危言耸听。 Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户 使用着非常缓慢的 Mac 电脑并且为此苦苦挣扎。一项快速调查显示[浏览器恶意劫持软件][9]是罪魁祸首.

从来没有绝对安全的操作系统,Linux 也不例外。 Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。

这可能也是一个你应该选择 Linux 而不是 Mac 的原因。

#### 4\. 可定制性与灵活性

![Linux vs Mac: 为什么 Linux 更好][10]

如果你有不喜欢的东西,自己定制或者修改它都行。

举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][11]。 你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。

除此之外你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)以创造出适合你的系统。这个在 Mac OS 上可以做吗?

另外你可以根据需要从一系列的 Linux 发行版进行选择。比如说,如果你想喜欢 Mac OS上的工作流, [Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里是一个[轻量级 Linux 发行版列表][5]。相比较而言, Mac OS 缺乏这种灵活性。

#### 5\. 使用 Linux 有助于你的职业生涯 [针对 IT 行业和科学领域的学生]

![Linux vs Mac: 为什么 Linux 更好][14]

对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。

但是当你开始使用 Linux 并且开始探索如何使用的时候,你将会获得非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行实现文件系统管理以及应用程序安装。你可能不会知道这些都是一些 IT 公司的新职员需要培训的内容。

除此之外,Linux 在就业市场上还有很大的发展空间。 Linux 相关的技术有很多( Cloud 、 Kubernetes 、Sysadmin 等),您可以学习,获得证书并获得一份相关的高薪的工作。要学习这些,你必须使用 Linux 。
#### 6\. 可靠

![Linux vs Mac: 为什么 Linux 更好][15]

想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。

但是它为什么可靠呢,相比于 Mac OS ,它的可靠体现在什么方面呢?

答案很简单——给用户更多的控制权,同时提供更好的安全性。在 Mac OS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux ,你可以做任何你想做的事情——这可能会导致(对某些人来说)糟糕的用户体验——但它确实使其更可靠。

#### 7\. 开源

![Linux vs Mac: 为什么 Linux 更好][16]

开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。下面讨论的大多数观点都是开源软件的直接优势。

简单解释一下,如果是开源软件,你可以自己查看或者修改它。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 Mac OS 的源代码。

形象点说,Mac 驱动的系统可以让你得到一辆车,但缺点是你不能打开引擎盖看里面是什么。那可能非常糟糕!

如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。

### 总结

现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢?

在下方评论让我们知道你的想法。

Note: 这里的图片是以企鹅俱乐部为原型的。
--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-vs-mac/

作者:[Ankush Das][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Ryze-Borgia](https://github.com/Ryze-Borgia)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[1]: https://itsfoss.com/linux-better-than-windows/
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg
[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg
[5]: https://itsfoss.com/lightweight-linux-beginners/
[6]: https://hackintosh.com/
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg
[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html
[9]: https://www.imore.com/how-to-remove-browser-hijack
[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg
[11]: https://www.gnome.org/
[12]: https://itsfoss.com/best-gnome-extensions/
[13]: https://elementary.io/
[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg
[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg
[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg
[17]: https://opensource.com/life/15/12/why-open-source
Linux vs Mac:Linux 比 Mac 好的 7 个原因
======

最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问,Linux 是个非常优秀的平台。但是它和其它操作系统一样也会有缺点。对于某些专门的领域,像是游戏,Windows 当然更好。而对于视频编辑等任务,Mac 系统可能更为方便。这一切都取决于你的偏好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。

如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac。

### Linux 比 Mac 好的 7 个原因

![Linux vs Mac:为什么 Linux 更好][2]

Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令行、bash 和其它 shell,相比于 Windows,它们所支持的应用和游戏比较少。但也就是这点比较相似。

平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。

那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。

#### 1、价格

![Linux vs Mac:为什么 Linux 更好][3]

假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。

那在这种情况下,你觉得花费几百美金买个系统完成这项工作,或者花费更多直接买个 Macbook 更好?当然,最终的决定权还是在你。

买个装好 Mac 系统的电脑?还是买个便宜的电脑,然后自己装上免费的 Linux 系统?这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好地用,而对于音视频方面,我更倾向于使用 Final Cut Pro(专业的视频编辑软件)和 Logic Pro X(专业的音乐制作软件)(译注:这两款软件都是苹果公司推出的)。

#### 2、硬件支持

![Linux vs Mac:为什么 Linux 更好][4]

Linux 支持多种平台。无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了,你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅的运行][5]。

而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备配套的。

这有一些[在非苹果系统上安装 Mac OS 的教程][6]。这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。

总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。
#### 3、安全性
|
||||
|
||||
![Linux vs Mac:为什么 Linux 更好][7]
|
||||
|
||||
很多人都说 iOS 和 Mac 是非常安全的平台。的确,或许相比于 Windows,它确实比较安全,可并不一定有 Linux 安全。
|
||||
|
||||
我不是在危言耸听。Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户使用着很慢的 Mac 电脑并且为此深受折磨。一项快速调查显示[浏览器恶意劫持软件][9]是罪魁祸首。
|
||||
|
||||
从来没有绝对安全的操作系统,Linux 也不例外。Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。
|
||||
|
||||
这可能也是一个你应该选择 Linux 而不是 Mac 的原因。
|
||||
|
||||
#### 4、可定制性与灵活性
|
||||
|
||||
![Linux vs Mac:为什么 Linux 更好][10]
|
||||
|
||||
如果你有不喜欢的东西,自己定制或者修改它都行。
|
||||
|
||||
举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][18]。你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。
|
||||
|
||||
除此之外,你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)来打造适合你的系统。这个在 Mac OS 上可以做吗?
|
||||
|
||||
另外你可以根据需要从一系列的 Linux 发行版进行选择。比如说,如果你喜欢 Mac OS 上的工作流,[Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里是一个[轻量级 Linux 发行版列表][5]。相比较而言,Mac OS 缺乏这种灵活性。
|
||||
|
||||
#### 5、使用 Linux 有助于你的职业生涯(针对 IT 行业和科学领域的学生)
|
||||
|
||||
![Linux vs Mac:为什么 Linux 更好][14]
|
||||
|
||||
对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。
|
||||
|
||||
但是当你开始使用 Linux 并且探索如何使用的时候,你将会积累非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行操作文件系统以及安装应用程序。你可能不会知道这些都是一些 IT 公司的新员工需要培训的内容。
|
||||
|
||||
除此之外,Linux 在就业市场上还有很大的发展空间。Linux 相关的技术有很多(Cloud、Kubernetes、Sysadmin 等),你可以学习,考取专业技能证书并获得一份相关的高薪工作。要学习这些,你必须使用 Linux 。
|
||||
|
||||
#### 6、可靠
|
||||
|
||||
![Linux vs Mac:为什么 Linux 更好][15]
|
||||
|
||||
想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。
|
||||
|
||||
但是它为什么可靠呢,相比于 Mac OS,它的可靠体现在什么方面呢?
|
||||
|
||||
答案很简单 —— 给用户更多的控制权,同时提供更好的安全性。在 Mac OS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux,你可以做任何你想做的事情 —— 这可能会导致(对某些人来说)糟糕的用户体验 —— 但它确实使其更可靠。
|
||||
|
||||
#### 7、开源
|
||||
|
||||
![Linux vs Mac:为什么 Linux 更好][16]
|
||||
|
||||
开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。下面讨论的大多数观点都是开源软件的直接优势。
|
||||
|
||||
简单解释一下,如果是开源软件,你可以自己查看或者修改源代码。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 Mac OS 的源代码。
|
||||
|
||||
打个比方,用 Mac 就像买了一辆车:能开,但你不能打开引擎盖看看里面是什么。那可太差劲了!
|
||||
|
||||
如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。
|
||||
|
||||
### 总结
|
||||
|
||||
现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢?
|
||||
|
||||
请在下方评论让我们知道你的想法。
|
||||
|
||||
提示:这里的图片是以企鹅俱乐部为原型的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/linux-vs-mac/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[Ryze-Borgia](https://github.com/Ryze-Borgia)
|
||||
校对:[pityonline](https://github.com/pityonline)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[1]: https://itsfoss.com/linux-better-than-windows/
|
||||
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png
|
||||
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg
|
||||
[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg
|
||||
[5]: https://itsfoss.com/lightweight-linux-beginners/
|
||||
[6]: https://hackintosh.com/
|
||||
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg
|
||||
[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html
|
||||
[9]: https://www.imore.com/how-to-remove-browser-hijack
|
||||
[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg
|
||||
[11]: https://www.gnome.org/
|
||||
[12]: https://itsfoss.com/best-gnome-extensions/
|
||||
[13]: https://elementary.io/
|
||||
[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg
|
||||
[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg
|
||||
[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg
|
||||
[17]: https://opensource.com/life/15/12/why-open-source
|
||||
[18]: https://www.kde.org/plasma-desktop
|
||||
|
translated/tech/20180823 CLI- improved.md
@ -0,0 +1,350 @@
|
||||
|
||||
命令行:增强版
|
||||
======
|
||||
|
||||
我不确定有多少 Web 开发者能完全避开命令行。就我来说,我从 1997 年上大学起就开始使用命令行了,那种 l33t 黑客的感觉让我着迷,同时我也觉得它很难掌握。
|
||||
|
||||
过去这些年我的命令行本领在逐步加强,我经常会去搜寻在我工作中能使用的更好的命令行工具。下面就是我现在使用的用于增强原有命令行工具的列表。
|
||||
|
||||
|
||||
### 怎么忽略我所做的命令行增强
|
||||
|
||||
通常情况下我会用别名将新的或者增强的命令行工具链接到原来的命令行(如`cat`和`ping`)。
|
||||
|
||||
|
||||
如果我需要运行原来的命令的话(有时我确实需要这么做),我会像下面这样来运行未加修改的原来的命令行。(我用的是Mac,你的输出可能不一样)
|
||||
|
||||
|
||||
```
|
||||
$ \cat # 忽略叫 "cat" 的别名 - 具体解释: https://stackoverflow.com/a/16506263/22617
|
||||
$ command cat # 忽略函数和别名
|
||||
|
||||
```
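下面用一个极简的例子演示 `command` 前缀的效果(仅为示意,文件路径 /tmp/alias_demo.txt 是临时假设的):

```shell
# 演示:即使 cat 被别名覆盖,command cat 仍会调用原始的 cat
echo 'hello' > /tmp/alias_demo.txt
alias cat='bat'                    # 交互 shell 中的别名(脚本里默认不会展开)
command cat /tmp/alias_demo.txt    # 绕过别名与函数,直接运行原始命令,输出:hello
```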
|
||||
|
||||
### bat > cat
|
||||
|
||||
`cat` 用于打印文件的内容,如果你要在命令行上花很多时间,语法高亮之类的功能就会非常有用。我首先发现了 [ccat][3] 这个有语法高亮功能的工具,然后又发现了 [bat][4],它支持语法高亮、分页、行号以及 git 集成。
|
||||
|
||||
|
||||
只要输出长度超过屏幕高度,`bat` 还能让我在输出里用 `/` 键来搜索(和 `less` 的搜索功能一样)。
|
||||
|
||||
|
||||
![Simple bat output][5]
|
||||
|
||||
我将别名`cat`链接到了`bat`命令:
|
||||
|
||||
|
||||
|
||||
```
|
||||
alias cat='bat'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][4]
|
||||
|
||||
### prettyping > ping
|
||||
|
||||
`ping` 非常有用,当我碰到“糟了,是不是什么服务挂了?/我的网不通了?”这种情况时,它是我最先想到的工具。但是 `prettyping`(是 “pretty-ping” 而不是 “pretty-typing”)在 `ping` 的基础上加上了友好的输出,这让我觉得命令行友好了很多。
|
||||
|
||||
|
||||
![/images/cli-improved/ping.gif][6]
|
||||
|
||||
我也将`ping`用别名链接到了`prettyping`命令:
|
||||
|
||||
|
||||
```
|
||||
alias ping='prettyping --nolegend'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][7]
|
||||
|
||||
### fzf > ctrl+r
|
||||
|
||||
在命令行上使用`ctrl+r`将允许你在命令历史里[反向搜索][8]使用过的命令,这是个挺好的小技巧,但是它需要你给出非常精确的输入才能正常运行。
|
||||
|
||||
`fzf`这个工具相比于`ctrl+r`有了**巨大的**进步。它能针对命令行历史进行模糊查询,并且提供了对可能的合格结果进行全面交互式预览。
|
||||
|
||||
|
||||
除了搜索命令历史,`fzf`还能预览和打开文件,我在下面的视频里展示了这些功能。
|
||||
|
||||
|
||||
为了实现这个预览效果,我创建了一个叫 `preview` 的别名,它把 `fzf` 和前文提到的 `bat` 组合起来完成预览功能,还绑定了一个定制热键 Ctrl+o 来在 VS Code 中打开选中的文件:
|
||||
|
||||
|
||||
```
|
||||
alias preview="fzf --preview 'bat --color \"always\" {}'"
|
||||
# 支持在 VS Code 里用ctrl+o 来打开选择的文件
|
||||
export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'"
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][9]
|
||||
|
||||
### htop > top
|
||||
|
||||
`top` 是当我想快速诊断机器上的 CPU 为什么那么忙、或者风扇为什么突然呼呼大作时首先会想到的工具。我在生产环境里也会使用它。讨厌的是,Mac 上的 `top` 和 Linux 上的 `top` 有着极大的不同(恕我直言,是差得多)。
|
||||
|
||||
|
||||
不过,`htop`是对 Linux 上的`top`和 Mac 上蹩脚的`top`的极大改进。它增加了包括颜色输出编码,键盘热键绑定以及不同的视图输出,这极大的帮助了我来理解进程之间的父子关系。
|
||||
|
||||
|
||||
方便的热键绑定包括:
|
||||
|
||||
* P - CPU使用率排序
|
||||
* M - 内存使用排序
|
||||
* F4 - 用字符串过滤进程(例如只看包括"node"的进程)
|
||||
* space - 锚定一个单独进程,这样我能观察它是否有尖峰状态
|
||||
|
||||
|
||||
![htop output][10]
|
||||
|
||||
在 Mac Sierra 上 `htop` 有个奇怪的 bug,不过可以通过以 root 身份运行来绕过(我实在记不清这个 bug 是什么了,但这个别名能搞定它;有点讨厌的是每次都得输入 root 密码)。
|
||||
|
||||
|
||||
```
|
||||
alias top="sudo htop" # 给 top 加上别名并绕过 Sierra 上的 bug
|
||||
```
|
||||
|
||||
💾 [Installation directions][11]
|
||||
|
||||
### diff-so-fancy > diff
|
||||
|
||||
我非常确定这个技巧是几年前从 Paul Irish 那儿学来的。尽管我很少直接使用 `diff`,但我的 git 命令行会一直用到它。`diff-so-fancy` 提供了语法着色和字符级更改高亮的功能。
|
||||
|
||||
|
||||
![diff so fancy][12]
|
||||
|
||||
在我的`~/.gitconfig`文件里我有下面的选项来打开`git diff`和`git show`的`diff-so-fancy`功能。
|
||||
|
||||
|
||||
```
|
||||
[pager]
|
||||
diff = diff-so-fancy | less --tabs=1,5 -RFX
|
||||
show = diff-so-fancy | less --tabs=1,5 -RFX
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][13]
|
||||
|
||||
### fd > find
|
||||
|
||||
尽管我使用 Mac,但我从来不是 Spotlight 的拥趸:我觉得它性能很差、关键字难记,更新自己的数据库时还会拖慢 CPU,简直一无是处。我经常使用 [Alfred][14],但是它的搜索功能也工作得不是很好。
|
||||
|
||||
|
||||
我倾向于在命令行中搜索文件,但 `find` 的难用在于很难记住描述目标文件所需的那些表达式。(而且 Mac 上的 `find` 命令和非 Mac 的 `find` 命令还有些许不同,这更加深了我的失望。)
|
||||
|
||||
`fd`是一个很好的替代品(它的作者和`bat`的作者是同一个人)。它非常快而且对于我经常要搜索的命令非常好记。
|
||||
|
||||
|
||||
|
||||
几个使用方便的例子:
|
||||
|
||||
```
|
||||
$ fd cli # 所有包含"cli"的文件名
|
||||
$ fd -e md # 所有以.md作为扩展名的文件
|
||||
$ fd cli -x wc -w # 搜索"cli"并且在每个搜索结果上运行`wc -w`
|
||||
|
||||
|
||||
```
|
||||
|
||||
![fd output][15]
|
||||
|
||||
💾 [Installation directions][16]
|
||||
|
||||
### ncdu > du
|
||||
|
||||
对我来说,了解当前的磁盘空间使用情况是非常重要的任务。我用过 Mac 上的 [DaisyDisk][17],但是我觉得它产生结果有点慢。
|
||||
|
||||
|
||||
`du -sh`命令是我经常会跑的命令(`-sh`是指结果以`总结`和`人类可读`的方式显示),我经常会想要深入挖掘那些占用了大量磁盘空间的目录,看看到底是什么在占用空间。
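顺带一提,“深入挖掘”也可以只用 `du` 自己做个粗略排序(以下写法假设使用 GNU coreutils;macOS 上请把 `--max-depth=1` 换成 `-d 1`):

```shell
# 列出当前目录下一级子目录的占用,按大小排序,取最大的 5 个
du -h --max-depth=1 . 2>/dev/null | sort -h | tail -n 5
```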
|
||||
|
||||
`ncdu` 是一个非常棒的替代品。它提供了一个交互式界面,可以快速扫描出占用大量磁盘空间的目录和文件,又快又准。(尽管无论用哪个工具,扫描我 550GB 的家目录都要很长时间。)
|
||||
|
||||
|
||||
一旦找到想要“处理”(如删除、移动或压缩文件)的目录,我会在 [iTerm2][18] 中按住 Cmd 并点击屏幕上部的目录名,来打开那个目录。
|
||||
|
||||
|
||||
![ncdu output][19]
|
||||
|
||||
还有另外一个选择:[nnn][20],它提供了更漂亮的界面,也能显示文件尺寸和磁盘使用情况;实际上它更像一个全功能的文件管理器。
|
||||
|
||||
|
||||
我的`ncdu`使用下面的别名链接:
|
||||
|
||||
```
|
||||
alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules"
|
||||
|
||||
```
|
||||
|
||||
|
||||
选项有:
|
||||
|
||||
* `--color dark` 使用颜色方案
|
||||
* `-rr` 只读模式(防止误删和运行新的登陆程序)
|
||||
* `--exclude` 忽略不想操作的目录
|
||||
|
||||
|
||||
|
||||
💾 [Installation directions][21]
|
||||
|
||||
### tldr > man
|
||||
|
||||
几乎每个命令行工具都有一份配套手册,可以用 `man <命令名>` 调出,但要在 `man` 的输出里找到想要的内容有点让人困惑,在一篇塞满了所有技术细节的文档里翻找也挺吓人的。
|
||||
|
||||
|
||||
这就是 TL;DR(译注:英文里“文档太长,没空去读”的缩写)项目创建的初衷。这是一个由社区驱动的文档系统,专门针对命令行。就我目前的使用来看,还没碰到过哪个命令缺少相应的文档;当然,你[也可以做贡献][22]。
|
||||
|
||||
|
||||
![TLDR output for 'fd'][23]
|
||||
|
||||
作为一个小技巧,我把 `help` 设为 `tldr` 的别名(这样输入会快一点……)
|
||||
|
||||
```
|
||||
alias help='tldr'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][24]
|
||||
|
||||
### ack || ag > grep
|
||||
|
||||
`grep`毫无疑问是一个命令行上的强力工具,但是这些年来它已经被一些工具超越了,其中两个叫`ack`和`ag`。
|
||||
|
||||
|
||||
我个人对`ack`和`ag`都尝试过,而且没有非常明显的个人偏好,(那也就是说他们都很棒,并且很相似)。我倾向于默认只使用`ack`,因为这三个字符就在指尖,很好打。并且,`ack`有大量的`ack --`参数可以使用,(你一定会体会到这一点。)
|
||||
|
||||
|
||||
`ack` 和 `ag` 都默认使用正则表达式来搜索,这非常契合我的工作;我还能用类似 `--js` 或 `--html` 这样的标识来指定要搜索的文件类型(不过 `ag` 的文件类型过滤器比 `ack` 涵盖的文件类型更多)。
|
||||
|
||||
|
||||
两个工具都支持常见的`grep`选项,如`-B`和`-A`用于在搜索的上下文里指代`之前`和`之后`。
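用系统自带的 `grep` 就能直观体会这两个选项的效果(示例文件路径为临时假设):

```shell
# 构造一个四行的示例文件,然后搜索 three 并带出前后各一行上下文
printf 'one\ntwo\nthree\nfour\n' > /tmp/ctx_demo.txt
grep -B1 -A1 'three' /tmp/ctx_demo.txt
# 输出:
# two
# three
# four
```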
|
||||
|
||||
|
||||
![ack in action][25]
|
||||
|
||||
因为 `ack` 默认不识别 Markdown 文件(而我恰好写很多 Markdown),我在 `~/.ackrc` 文件里加了如下定制语句:
|
||||
|
||||
|
||||
|
||||
```
|
||||
--type-set=md=.md,.mkd,.markdown
|
||||
--pager=less -FRX
|
||||
|
||||
```
|
||||
|
||||
💾 Installation directions: [ack][26], [ag][27]
|
||||
|
||||
[关于 ack 和 ag 的延伸阅读][28]
|
||||
|
||||
### jq > grep et al
|
||||
|
||||
我是 [jq][29] 的粉丝之一。当然,一开始我也在它的语法里苦苦挣扎,好在我对查询语言还算有些使用心得,现在我可以说每天都在用 `jq`。(从前我要么使用 grep,要么使用一个叫 [json][30] 的工具,相比之下后者的功能非常基础。)
|
||||
|
||||
|
||||
我甚至开始撰写一个 `jq` 教程系列(已有 2500 字并且还在增加),还发布了一个[网页工具][31]和一个 Mac 应用(后者还没有发布)。
|
||||
|
||||
|
||||
`jq` 允许我传入 JSON,并非常简单地对其做变换,让 JSON 结果正好符合我的要求。下面这个例子让我用一条命令更新所有的 node 依赖(为了阅读方便,我把它分成了多行):
|
||||
|
||||
|
||||
```
|
||||
$ npm i $(echo $(\
|
||||
npm outdated --json | \
|
||||
jq -r 'to_entries | .[] | "\(.key)@\(.value.latest)"' \
|
||||
))
|
||||
|
||||
```
|
||||
上面的命令利用 npm 的 JSON 输出格式列出所有过期的 node 依赖,也就是如下的源 JSON:
|
||||
|
||||
|
||||
```
|
||||
{
|
||||
"node-jq": {
|
||||
"current": "0.7.0",
|
||||
"wanted": "0.7.0",
|
||||
"latest": "1.2.0",
|
||||
"location": "node_modules/node-jq"
|
||||
},
|
||||
"uuid": {
|
||||
"current": "3.1.0",
|
||||
"wanted": "3.2.1",
|
||||
"latest": "3.2.1",
|
||||
"location": "node_modules/uuid"
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
转换结果为:(译注:原文此处并未给出结果)
|
||||
|
||||
上面的结果会被作为`npm install`的输入,你瞧,我的升级就这样全部搞定了。(当然,这里有点小题大做了。)
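如果想直观验证这个过滤器的行为,可以取前面那段 JSON 的一个精简子集直接喂给 `jq`(假设系统已安装 jq):

```shell
# to_entries 把对象转成 {key, value} 数组,.[] 逐项展开,再拼成 "包名@最新版本"
echo '{"node-jq":{"latest":"1.2.0"},"uuid":{"latest":"3.2.1"}}' |
  jq -r 'to_entries | .[] | "\(.key)@\(.value.latest)"'
# 输出:
# node-jq@1.2.0
# uuid@3.2.1
```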
|
||||
|
||||
|
||||
### 很荣幸提及一些其他的工具
|
||||
|
||||
我也开始尝试一些别的工具,但还没有完全掌握它们。(除了 `ponysay`,每当我新启动一个命令行会话时,它都会出现。)
|
||||
|
||||
|
||||
* [ponysay][32] > cowsay
|
||||
* [csvkit][33] > awk et al
|
||||
* [noti][34] > `display notification`
|
||||
* [entr][35] > watch
|
||||
|
||||
|
||||
|
||||
### 你有什么好点子吗?
|
||||
|
||||
|
||||
上面是我的命令行清单。能告诉我们你的吗?你有没有试着去增强一些你每天都会用到的命令呢?请告诉我,我非常乐意知道。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://remysharp.com/2018/08/23/cli-improved
|
||||
|
||||
作者:[Remy Sharp][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:DavidChenLiang(https://github.com/DavidChenLiang)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://remysharp.com
|
||||
[1]: https://remysharp.com/images/terminal-600.jpg
|
||||
[2]: https://training.leftlogic.com/buy/terminal/cli2?coupon=READERS-DISCOUNT&utm_source=blog&utm_medium=banner&utm_campaign=remysharp-discount
|
||||
[3]: https://github.com/jingweno/ccat
|
||||
[4]: https://github.com/sharkdp/bat
|
||||
[5]: https://remysharp.com/images/cli-improved/bat.gif (Sample bat output)
|
||||
[6]: https://remysharp.com/images/cli-improved/ping.gif (Sample ping output)
|
||||
[7]: http://denilson.sa.nom.br/prettyping/
|
||||
[8]: https://lifehacker.com/278888/ctrl%252Br-to-search-and-other-terminal-history-tricks
|
||||
[9]: https://github.com/junegunn/fzf
|
||||
[10]: https://remysharp.com/images/cli-improved/htop.jpg (Sample htop output)
|
||||
[11]: http://hisham.hm/htop/
|
||||
[12]: https://remysharp.com/images/cli-improved/diff-so-fancy.jpg (Sample diff output)
|
||||
[13]: https://github.com/so-fancy/diff-so-fancy
|
||||
[14]: https://www.alfredapp.com/
|
||||
[15]: https://remysharp.com/images/cli-improved/fd.png (Sample fd output)
|
||||
[16]: https://github.com/sharkdp/fd/
|
||||
[17]: https://daisydiskapp.com/
|
||||
[18]: https://www.iterm2.com/
|
||||
[19]: https://remysharp.com/images/cli-improved/ncdu.png (Sample ncdu output)
|
||||
[20]: https://github.com/jarun/nnn
|
||||
[21]: https://dev.yorhel.nl/ncdu
|
||||
[22]: https://github.com/tldr-pages/tldr#contributing
|
||||
[23]: https://remysharp.com/images/cli-improved/tldr.png (Sample tldr output for 'fd')
|
||||
[24]: http://tldr-pages.github.io/
|
||||
[25]: https://remysharp.com/images/cli-improved/ack.png (Sample ack output with grep args)
|
||||
[26]: https://beyondgrep.com
|
||||
[27]: https://github.com/ggreer/the_silver_searcher
|
||||
[28]: http://conqueringthecommandline.com/book/ack_ag
|
||||
[29]: https://stedolan.github.io/jq
|
||||
[30]: http://trentm.com/json/
|
||||
[31]: https://jqterm.com
|
||||
[32]: https://github.com/erkin/ponysay
|
||||
[33]: https://csvkit.readthedocs.io/en/1.0.3/
|
||||
[34]: https://github.com/variadico/noti
|
||||
[35]: http://www.entrproject.org/
|
translated/tech/20180912 How to build rpm packages.md
@ -0,0 +1,390 @@
|
||||
如何构建 rpm 包
|
||||
======
|
||||
|
||||
节省跨多个主机安装文件和脚本的时间和精力。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1)
|
||||
|
||||
自 20 多年前开始使用 Linux 以来,我一直在 Red Hat 和 Fedora Linux 系统上用基于 rpm 的软件包管理器安装软件。我用过 **rpm** 程序本身,也用过 **yum** 和 **DNF**(DNF 是 yum 的近亲后代)在我的 Linux 主机上安装和更新软件包。yum 和 DNF 是 rpm 实用程序的包装器,提供了诸如查找并安装包依赖项之类的附加功能。
|
||||
|
||||
多年来,我创建了许多 Bash 脚本,其中一些脚本具有单独的配置文件,我希望在大多数新计算机和虚拟机上安装这些脚本。这也能解决安装所有这些软件包需要花费大量时间的难题,因此我决定通过创建一个 rpm 软件包来自动执行该过程,我可以将其复制到目标主机并将所有这些文件安装在适当的位置。虽然 **rpm** 工具以前用于构建 rpm 包,但该功能已被删除,并且创建了一个新工具来构建新的 rpm。
|
||||
|
||||
当我开始这个项目时,我发现很少有关于创建 rpm 包的信息,但我找到了一本书,名为《Maximum RPM》,这本书才帮我弄明白了。这本书现在已经过时了,我发现的绝大多数信息都是如此。它也已经绝版,使用复印件需要花费数百美元。[Maximum RPM][1] 的在线版本是免费提供的,并保持最新。 [RPM 网站][2]还有其他网站的链接,这些网站上有很多关于 rpm 的文档。其他的信息往往是简短的,显然都是假设你已经对该过程有了很多了解。
|
||||
|
||||
此外,我发现的每个文档都假定代码需要在开发环境中从源代码编译。我不是开发人员。我是一个系统管理员,我们系统管理员有不同的需求,因为我们不需要或者我们不应该为了管理任务而去编译代码;我们应该使用 shell 脚本。所以我们没有源代码,因为它需要被编译成二进制可执行文件。我们拥有的是一个也是可执行的源代码。
|
||||
|
||||
在大多数情况下,此项目应作为非 root 用户执行。 Rpm 包永远不应该由 root 用户构建,而只能由非特权普通用户构建。我将指出哪些部分应该以 root 身份执行,哪些部分应由非 root,非特权用户执行。
|
||||
|
||||
### 准备
|
||||
|
||||
首先,打开一个终端会话,然后 `su` 到 root 用户。 请务必使用 `-` 选项以确保启用完整的 root 环境。 我不认为系统管理员应该使用 `sudo` 来执行任何管理任务。 在我的个人博客文章中可以找出为什么:[真正的系统管理员不要使用 sudo][3]。
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ su -
|
||||
Password:
|
||||
[root@testvm1 ~]#
|
||||
```
|
||||
|
||||
创建可用于此项目的普通用户 student,并为该用户设置密码。
|
||||
|
||||
```
|
||||
[root@testvm1 ~]# useradd -c "Student User" student
|
||||
[root@testvm1 ~]# passwd student
|
||||
Changing password for user student.
|
||||
New password: <Enter the password>
|
||||
Retype new password: <Enter the password>
|
||||
passwd: all authentication tokens updated successfully.
|
||||
[root@testvm1 ~]#
|
||||
```
|
||||
|
||||
构建 rpm 包需要 `rpm-build` 包,该包可能尚未安装。现在以 root 身份安装它。请注意,此命令还会安装多个依赖项,数量取决于主机上已安装的软件包;它在我的测试虚拟机上总共安装了 17 个软件包,占用空间非常小。
|
||||
|
||||
```
|
||||
dnf install -y rpm-build
|
||||
```
|
||||
|
||||
除非另有明确指示,否则本项目的剩余部分应以普通用户用户 student 来执行。 打开另一个终端会话并使用 `su` 切换到该用户以执行其余步骤。 使用以下命令从 GitHub 下载我准备好的开发目录结构 utils.tar 这个<ruby>tar 包<rt>tarball</rt></ruby>(LCTT 译注:tarball 是以 tar 命令来打包和压缩的文件的统称):
|
||||
|
||||
```
|
||||
wget https://github.com/opensourceway/how-to-rpm/raw/master/utils.tar
|
||||
```
|
||||
|
||||
此 tar 包包含将由最终 rpm 程序安装的所有文件和 Bash 脚本。 还有一个完整的 spec 文件,你可以使用它来构建 rpm。 我们将详细介绍 spec 文件的每个部分。
|
||||
|
||||
以普通用户 student 的身份,在家目录(当前工作目录)下解压缩 tar 包。
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ cd ; tar -xvf utils.tar
|
||||
```
|
||||
|
||||
使用 `tree` 命令验证~/development 的目录结构和包含的文件,如下所示:
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ tree development/
|
||||
development/
|
||||
├── license
|
||||
│ ├── Copyright.and.GPL.Notice.txt
|
||||
│ └── GPL_LICENSE.txt
|
||||
├── scripts
|
||||
│ ├── create_motd
|
||||
│ ├── die
|
||||
│ ├── mymotd
|
||||
│ └── sysdata
|
||||
└── spec
|
||||
└── utils.spec
|
||||
|
||||
3 directories, 7 files
|
||||
[student@testvm1 ~]$
|
||||
```
|
||||
|
||||
`mymotd` 脚本创建一个发送到标准输出的“当日消息”数据流。 `create_motd` 脚本运行 `mymotd` 脚本并将输出重定向到 /etc/motd 文件。 此文件用于向使用SSH远程登录的用户显示每日消息。
|
||||
|
||||
`die` 脚本是我自己的脚本,它将 `kill` 命令包装在一些代码中,这些代码可以找到与指定字符串匹配的运行程序并将其终止。 它使用 `kill -9` 来确保kill命令一定会执行。
|
||||
|
||||
`sysdata` 脚本可以显示有关计算机硬件,还有已安装的 Linux 版本,所有已安装的软件包以及硬盘驱动器元数据的数万行数据。 我用它来记录某个时间点的主机状态。 我以后可以用它作为参考。 我曾经这样做是为了维护我为客户安装的主机记录。
|
||||
|
||||
你可能需要将这些文件和目录的所有权更改为 student:student 。 如有必要,使用以下命令执行此操作:
|
||||
|
||||
```
|
||||
chown -R student:student development
|
||||
```
|
||||
|
||||
此文件树中的大多数文件和目录将通过你在此项目期间创建的 rpm 包安装在 Fedora 系统上。
|
||||
|
||||
### 创建构建目录结构
|
||||
|
||||
`rpmbuild` 命令需要非常特定的目录结构。 你必须自己创建此目录结构,因为没有提供自动方式。 在家目录中创建以下目录结构:
|
||||
|
||||
```
|
||||
~ ─ rpmbuild
|
||||
├── RPMS
|
||||
│ └── noarch
|
||||
├── SOURCES
|
||||
├── SPECS
|
||||
└── SRPMS
|
||||
```
|
||||
|
||||
我们不会创建 rpmbuild/RPMS/x86_64 目录,因为那是针对编译出的 64 位二进制文件的体系结构专用目录,而我们的 shell 脚本并不特定于某种体系结构。实际上,我们也不会用到 SRPMS 目录,它本来是存放源代码包的。
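上面的目录树可以用一条 `mkdir -p` 命令一次性创建(在家目录下执行;下面用 `$HOME` 显式写出路径,仅为示意):

```shell
# 一次性创建 rpmbuild 所需的全部目录
mkdir -p "$HOME/rpmbuild/RPMS/noarch" \
         "$HOME/rpmbuild/SOURCES" \
         "$HOME/rpmbuild/SPECS" \
         "$HOME/rpmbuild/SRPMS"
```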
|
||||
|
||||
### 检查 spec 文件
|
||||
|
||||
每个 spec 文件都有许多部分,根据 rpm 构建的具体情况,其中一些部分可能会被忽略或省略。这个 spec 文件并不是能工作的最小示例,但它很好地展示了一个中等复杂度、且所含文件无需编译的 spec 文件。如果需要编译,会在 `构建` 部分中执行;该部分在此 spec 文件中省略掉了,因为这里不需要。
|
||||
|
||||
#### 前言
|
||||
|
||||
这是 spec 文件中唯一没有标签的部分。 它包含运行命令 `rpm -qi [Package Name]` 时看到的大部分信息。 每个数据都是一行,由标签和标签值的文本数据组成。
|
||||
|
||||
```
|
||||
###############################################################################
|
||||
# Spec file for utils
|
||||
################################################################################
|
||||
# Configured to be built by user student or other non-root user
|
||||
################################################################################
|
||||
#
|
||||
Summary: Utility scripts for testing RPM creation
|
||||
Name: utils
|
||||
Version: 1.0.0
|
||||
Release: 1
|
||||
License: GPL
|
||||
URL: http://www.both.org
|
||||
Group: System
|
||||
Packager: David Both
|
||||
Requires: bash
|
||||
Requires: screen
|
||||
Requires: mc
|
||||
Requires: dmidecode
|
||||
BuildRoot: ~/rpmbuild/
|
||||
|
||||
# Build with the following syntax:
|
||||
# rpmbuild --target noarch -bb utils.spec
|
||||
```
|
||||
|
||||
`rpmbuild` 程序会忽略注释行。我总是喜欢在本节中添加注释,写明创建这个包所需的 `rpmbuild` 命令的确切语法。Summary 标签是包的简短描述。Name、Version 和 Release 标签用于构成 rpm 文件的名称,如 utils-1.0.0-1.rpm 所示。通过增加版本号和发行号,你可以创建用于更新旧版本的 rpm 包。
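这几个标签与最终文件名的对应关系,可以用一个简单的字符串拼接来示意(纯演示,不调用 rpmbuild):

```shell
# rpm 文件名的构成:Name-Version-Release.Arch.rpm
NAME=utils; VERSION=1.0.0; RELEASE=1; ARCH=noarch
echo "${NAME}-${VERSION}-${RELEASE}.${ARCH}.rpm"
# 输出:utils-1.0.0-1.noarch.rpm
```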
|
||||
|
||||
许可证标签定义了发布包的许可证。我总是使用 GPL 的一个变体。指定许可证对于澄清包中包含的软件是开源的这一事实非常重要。这也是我将许可证和 GPL 语句包含在将要安装的文件中的原因。
|
||||
|
||||
URL 通常是项目或项目所有者的网页。在这种情况下,它是我的个人网页。
|
||||
|
||||
Group 标签很有趣,通常用于 GUI 应用程序。 Group 标签的值决定了应用程序菜单中的哪一组图标将包含此包中可执行文件的图标。与 Icon 标签(我们此处未使用)一起使用时,Group 标签允许添加图标和所需信息用于将程序启动到应用程序菜单结构中。
|
||||
|
||||
Packager 标签用于指定负责维护和创建包的人员或组织。
|
||||
|
||||
Requires 语句定义此 rpm 包的依赖项。每个都是包名。如果其中一个指定的软件包不存在,DNF 安装实用程序将尝试在 /etc/yum.repos.d 中定义的某个已定义的存储库中找到它,如果存在则安装它。如果 DNF 找不到一个或多个所需的包,它将抛出一个错误,指出哪些包丢失并终止。
|
||||
|
||||
BuildRoot 行指定顶级目录,`rpmbuild` 工具将在其中找到 spec 文件,并在构建包时在其中创建临时目录。完成的包会存储在我们之前指定的 noarch 子目录中。注释里给出了构建此程序包的命令语法,包括定义目标体系结构的 `--target noarch` 选项。因为这些是 Bash 脚本,与特定的 CPU 体系结构无关;如果省略此选项,构建将以执行构建的那台 CPU 的体系结构为目标。
|
||||
|
||||
`rpmbuild` 程序可以针对许多不同的体系结构,并且使用 `--target` 选项允许我们在不同的体系结构主机上构建特定体系结构的包,其具有与执行构建的体系结构不同的体系结构。所以我可以在 x86_64 主机上构建一个用于 i686 架构的软件包,反之亦然。
|
||||
|
||||
如果你有自己的网站,请将打包者的名称更改为你自己的网站。
|
||||
|
||||
#### 描述
|
||||
|
||||
spec 文件的 `描述` 部分包含 rpm 包的描述。 它可以很短,也可以包含许多信息。 我们的 `描述` 部分相当简洁。
|
||||
|
||||
```
|
||||
%description
|
||||
A collection of utility scripts for testing RPM creation.
|
||||
```
|
||||
|
||||
#### 准备
|
||||
|
||||
`准备` 部分是在构建过程中执行的第一个脚本。 在安装程序包期间不会执行此脚本。
|
||||
|
||||
这个脚本只是一个 Bash shell 脚本。 它准备构建目录,根据需要创建用于构建的目录,并将相应的文件复制到各自的目录中。 这将包括完整编译作为构建的一部分所需的源。
|
||||
|
||||
$RPM_BUILD_ROOT 目录表示已安装系统的根目录。在 $RPM_BUILD_ROOT 目录中创建的目录对应实际文件系统中的绝对路径,例如 /usr/local/share/utils、/usr/local/bin 等。
|
||||
|
||||
对于我们的包,我们没有预编译源,因为我们的所有程序都是 Bash 脚本。 因此,我们只需将这些脚本和其他文件复制到已安装系统的目录中。
|
||||
|
||||
```
|
||||
%prep
|
||||
################################################################################
|
||||
# Create the build tree and copy the files from the development directories #
|
||||
# into the build tree. #
|
||||
################################################################################
|
||||
echo "BUILDROOT = $RPM_BUILD_ROOT"
|
||||
mkdir -p $RPM_BUILD_ROOT/usr/local/bin/
|
||||
mkdir -p $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
|
||||
cp /home/student/development/utils/scripts/* $RPM_BUILD_ROOT/usr/local/bin
|
||||
cp /home/student/development/utils/license/* $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
cp /home/student/development/utils/spec/* $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
|
||||
exit
|
||||
```
|
||||
|
||||
请注意,本节末尾的 exit 语句是必需的。
|
||||
|
||||
#### 文件
|
||||
|
||||
spec 文件的这一部分定义了要安装的文件及其在目录树中的位置。 它还指定了要安装的每个文件的文件属性以及所有者和组所有者。 文件权限和所有权是可选的,但我建议明确设置它们以消除这些属性在安装时不正确或不明确的任何可能性。 如果目录尚不存在,则会在安装期间根据需要创建目录。
|
||||
|
||||
```
|
||||
%files
|
||||
%attr(0744, root, root) /usr/local/bin/*
|
||||
%attr(0644, root, root) /usr/local/share/utils/*
|
||||
```
|
||||
|
||||
#### 安装前
|
||||
|
||||
在我们这个实验项目的 spec 文件中,此部分为空。此处本应放置需要在安装 rpm 之前执行的脚本。
|
||||
|
||||
#### 安装后
|
||||
|
||||
spec 文件的这一部分是另一个 Bash 脚本。 这个在安装文件后运行。 此部分几乎可以是你需要或想要的任何内容,包括创建文件,运行系统命令以及重新启动服务以在进行配置更改后重新初始化它们。 我们的 rpm 包的 `安装后` 脚本执行其中一些任务。
|
||||
|
||||
```
|
||||
%post
|
||||
################################################################################
|
||||
# Set up MOTD scripts #
|
||||
################################################################################
|
||||
cd /etc
|
||||
# Save the old MOTD if it exists
|
||||
if [ -e motd ]
|
||||
then
|
||||
cp motd motd.orig
|
||||
fi
|
||||
# If not there already, Add link to create_motd to cron.daily
|
||||
cd /etc/cron.daily
|
||||
if [ ! -e create_motd ]
|
||||
then
|
||||
ln -s /usr/local/bin/create_motd
|
||||
fi
|
||||
# create the MOTD for the first time
|
||||
/usr/local/bin/mymotd > /etc/motd
|
||||
```
|
||||
|
||||
此脚本中包含的注释应明确其用途。
|
||||
|
||||
#### 卸载后
|
||||
|
||||
此部分包含将在卸载 rpm 软件包后运行的脚本。 使用 rpm 或 DNF 删除包会删除文件部分中列出的所有文件,但它不会删除安装后部分创建的文件或链接,因此我们需要在本节中处理。
|
||||
|
||||
此脚本通常由清理任务组成,用于清除以前由 rpm 安装、但 rpm 本身无法自动删除的文件。对于我们的包,它包括删除 `安装后` 脚本创建的链接,并恢复之前保存的 motd 文件原件。
|
||||
|
||||
```
|
||||
%postun
|
||||
# remove installed files and links
|
||||
rm /etc/cron.daily/create_motd
|
||||
|
||||
# Restore the original MOTD if it was backed up
|
||||
if [ -e /etc/motd.orig ]
|
||||
then
|
||||
mv -f /etc/motd.orig /etc/motd
|
||||
fi
|
||||
```
|
||||
|
||||
#### 清理
|
||||
|
||||
这个 Bash 脚本在 rpm 构建过程之后进行清理。下面 `清理` 部分中的两行删除了 `rpmbuild` 命令创建的构建目录。在许多情况下,可能还需要额外的清理。
|
||||
|
||||
```
|
||||
%clean
|
||||
rm -rf $RPM_BUILD_ROOT/usr/local/bin
|
||||
rm -rf $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
```
|
||||
|
||||
#### 更新日志
|
||||
|
||||
此可选的文本部分包含 rpm 及其包含的文件的更改列表。 最新的更改记录在本部分顶部。
|
||||
|
||||
```
|
||||
%changelog
|
||||
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
|
||||
- The original package includes several useful scripts. it is
|
||||
primarily intended to be used to illustrate the process of
|
||||
building an RPM.
|
||||
```
|
||||
|
||||
使用你自己的姓名和电子邮件地址替换标题行中的数据。
|
||||
|
||||
### 构建 rpm
|
||||
|
||||
spec 文件必须位于 rpmbuild 目录树的 SPECS 目录中。 我发现最简单的方法是创建一个指向该目录中实际 spec 文件的链接,以便可以在开发目录中对其进行编辑,而无需将其复制到 SPECS 目录。 将 SPECS 目录设为当前工作目录,然后创建链接。
|
||||
|
||||
```
|
||||
cd ~/rpmbuild/SPECS/
|
||||
ln -s ~/development/spec/utils.spec
|
||||
```
|
||||
|
||||
运行以下命令以构建 rpm 。 如果没有错误发生,只需要花一点时间来创建 rpm 。
|
||||
|
||||
```
|
||||
rpmbuild --target noarch -bb utils.spec
|
||||
```
|
||||
|
||||
检查 ~/rpmbuild/RPMS/noarch 目录以验证新的 rpm 是否存在。
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ cd rpmbuild/RPMS/noarch/
|
||||
[student@testvm1 noarch]$ ll
|
||||
total 24
|
||||
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
|
||||
[student@testvm1 noarch]$
|
||||
```
|
||||
|
||||
### 测试 rpm
|
||||
|
||||
以 root 用户身份安装 rpm 以验证它是否正确安装并且文件是否安装在正确的目录中。 rpm 的确切名称将取决于你在 Preamble 部分中标签的值,但如果你使用了示例中的值,则 rpm 名称将如下面的示例命令所示:
|
||||
|
||||
```
|
||||
[root@testvm1 ~]# cd /home/student/rpmbuild/RPMS/noarch/
|
||||
[root@testvm1 noarch]# ll
|
||||
total 24
|
||||
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
|
||||
[root@testvm1 noarch]# rpm -ivh utils-1.0.0-1.noarch.rpm
|
||||
Preparing... ################################# [100%]
|
||||
Updating / installing...
|
||||
1:utils-1.0.0-1 ################################# [100%]
|
||||
```
|
||||
|
||||
检查 /usr/local/bin 以确保新文件存在。 你还应验证是否已创建 /etc/cron.daily 中的 create_motd 链接。
|
||||
|
||||
使用 `rpm -q --changelog utils` 命令查看更改日志。使用 `rpm -ql utils` 命令(`ql` 中的 l 是小写的 L)查看程序包安装的文件。
|
||||
|
||||
```
|
||||
[root@testvm1 noarch]# rpm -q --changelog utils
|
||||
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
|
||||
- The original package includes several useful scripts. it is
|
||||
primarily intended to be used to illustrate the process of
|
||||
building an RPM.
|
||||
|
||||
[root@testvm1 noarch]# rpm -ql utils
|
||||
/usr/local/bin/create_motd
|
||||
/usr/local/bin/die
|
||||
/usr/local/bin/mymotd
|
||||
/usr/local/bin/sysdata
|
||||
/usr/local/share/utils/Copyright.and.GPL.Notice.txt
|
||||
/usr/local/share/utils/GPL_LICENSE.txt
|
||||
/usr/local/share/utils/utils.spec
|
||||
[root@testvm1 noarch]#
|
||||
```
|
||||
|
||||
删除包。
|
||||
|
||||
```
|
||||
rpm -e utils
|
||||
```
|
||||
|
||||
### 试验
|
||||
|
||||
现在,你将更改 spec 文件,让它依赖一个不存在的包,以此模拟无法满足的依赖关系。紧接着现有的依赖行添加以下内容:
|
||||
|
||||
```
|
||||
Requires: badrequire
|
||||
```
|
||||
|
||||
构建包并尝试安装它。 显示什么消息?
|
||||
|
||||
我们使用 `rpm` 命令来安装和删除 `utils` 包。 尝试使用 yum 或 DNF 安装软件包。 你必须与程序包位于同一目录中,或指定程序包的完整路径才能使其正常工作。
|
||||
|
||||
### 总结
|
||||
|
||||
本文介绍了创建 rpm 包的基础知识,还有许多标签和章节没有涉及。下面列出的资源可以提供更多信息。构建 rpm 包并不困难,你只需要正确的信息。我希望这对你有所帮助——我自己当初花了几个月才摸索明白。
|
||||
|
||||
我们没有涵盖源代码构建,但如果你是开发人员,那么从这一点开始应该是一个简单的步骤。
|
||||
|
||||
创建 rpm 包是另一种成为懒惰系统管理员的好方法,可以节省时间和精力。 它提供了一种简单的方法来分发和安装那些我们作为系统管理员需要在许多主机上安装的脚本和其他文件。
|
||||
|
||||
### 资料
|
||||
|
||||
- Edward C. Bailey,《Maximum RPM》,Sams 出版,2000 年,ISBN 0-672-31105-4
|
||||
- Edward C. Bailey,[Maximum RPM][1],持续更新的在线版本
|
||||
- [RPM文档][4]:此网页列出了 rpm 的大多数可用在线文档。 它包括许多其他网站的链接和有关 rpm 的信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/9/how-build-rpm-packages
|
||||
|
||||
作者:[David Both][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dboth
|
||||
[1]: http://ftp.rpm.org/max-rpm/
|
||||
[2]: http://rpm.org/index.html
|
||||
[3]: http://www.both.org/?p=960
|
||||
[4]: http://rpm.org/documentation.html
|
@ -0,0 +1,111 @@
|
||||
使用 Syncthing —— 一个开源同步工具来把握你数据的控制权
|
||||
|
||||
决定如何存储和共享您的个人信息。
|
||||
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
|
||||
|
||||
如今,我们的一些最重要的财产——从家人和朋友的照片和视频到财务和医疗文件——都是数据。
|
||||
即便云存储服务发展迅猛,我们仍担忧隐私以及对个人数据缺乏控制的问题。从 PRISM 监控计划到谷歌[让 APP 开发者扫描你的个人邮件][1],这些新闻报道应该会让我们对个人信息的安全性有所警惕。
|
||||
|
||||
[Syncthing][2] 可以让你放下心来。它是一款开源的点对点文件同步工具,可以运行在 Linux、Windows、Mac、Android 等平台上(抱歉,没有 iOS)。Syncthing 使用自定义的协议,叫做[块交换协议][3]。简而言之,Syncthing 能让你无需服务器就在多台设备间同步数据。
|
||||
|
||||
### Linux
|
||||
|
||||
在这篇文章中,我将解释如何在 Linux 电脑和安卓手机之间安装和同步文件。
|
||||
|
||||
Syncthing 在大多数流行的发行版上都能安装。Fedora 28 包含其最新版本。
|
||||
|
||||
要在 Fedora 上安装 Syncthing,你能在软件中心搜索,或者执行以下命令:
|
||||
|
||||
```
|
||||
sudo dnf install syncthing syncthing-gtk
|
||||
```
|
||||
|
||||
一旦安装好后,打开它。你将会看到一个助手帮你配置 Syncthing。点击 **下一步** 直到它要求配置 WebUI。最安全的选项是选择**监听本地地址**。那将会禁止 Web 接口并且阻止未经授权的用户。
|
||||
|
||||
![Syncthing in Setup WebUI dialog box][5]
|
||||
|
||||
Syncthing 安装时的 WebUI 对话框
|
||||
|
||||
关闭对话框。现在 Syncthing 已经安装好了,是时候分享一个文件夹、连接一台设备开始同步了。不过,我们先把另一个客户端装好。
|
||||
|
||||
### Android
|
||||
|
||||
Syncthing 在 Google Play 和 F-Droid 应用商店都能下载。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing2.png)
|
||||
|
||||
安装应用程序后,会显示欢迎界面。给 Syncthing 授予你设备存储的权限。
|
||||
系统可能会要求你为此应用禁用电池优化。这样做是安全的,因为我们会把应用设置为仅在充电并连接到无线网络时同步。
|
||||
|
||||
点击主菜单图标进入**设置**,然后选择**运行条件**。勾选**总是在后台运行**、**仅在充电时运行**和**仅在 WiFi 下运行**。现在你的安卓客户端已经准备好与其他设备交换文件了。
|
||||
|
||||
Syncthing 中有两个需要记住的重要概念:文件夹和设备。文件夹是你想要分享的内容,而设备则是分享的对象。Syncthing 允许你与不同的设备分享各自独立的文件夹。设备之间通过交换设备 ID 来互相添加;设备 ID 是 Syncthing 首次启动时创建的一个唯一的、具备密码学安全性的标识符。
|
||||
|
||||
### 连接设备
|
||||
|
||||
现在让我们连接你的 Linux 机器和 Android 客户端。
|
||||
|
||||
在你的 Linux 计算机上,打开 Syncthing,单击**设置**图标,然后单击**显示 ID**,就会显示一个二维码。
|
||||
|
||||
在你的安卓手机上,打开 Syncthing。在主界面上,点击 **设备** 页后点击 **+** 。在第一个区域内点击二维码符号来启动二维码扫描。
|
||||
|
||||
将你手机的摄像头对准电脑上的二维码,设备 ID 字段就会自动填入桌面客户端的设备 ID。起一个合适的名字并保存。由于添加设备需要双方确认,现在你还要在电脑客户端上确认要添加这台安卓手机。电脑客户端可能要过几分钟才会弹出确认请求;出现提示时,点击**添加**。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing6.png)
|
||||
|
||||
在**新设备**窗口,你可以确认并配置一些关于该设备的选项,比如**设备名**和**地址**。如果在地址一栏选择 dynamic(动态),客户端将自动探测设备的 IP 地址;但如果你想固定使用某个 IP 地址,也可以把它直接填进这一栏。如果你已经创建了文件夹(或者之后再创建),也可以把它分享给这个新设备。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing7.png)
|
||||
|
||||
你的电脑和安卓设备已经配对,可以交换文件了。(如果你有多台电脑或手机,只需重复这些步骤。)
|
||||
|
||||
### 分享文件夹
|
||||
|
||||
既然您想要同步的设备之间已经连接,现在是时候共享一个文件夹了。您可以在电脑上共享文件夹,添加了该文件夹中的设备将获得一份副本。
|
||||
|
||||
若要共享文件夹,请转至**设置**并单击**添加共享文件夹**:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing8.png)
|
||||
|
||||
在下一个窗口中,输入要共享的文件夹的信息:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing9.png)
|
||||
|
||||
你可以使用任何想要的标签。**文件夹 ID** 将随机生成,用于在客户端之间识别这个文件夹。在**路径**一栏,点击**浏览**即可定位到想要分享的文件夹。如果你想让 Syncthing 监控文件夹的变化(例如删除、新建文件等),点击**监控文件系统变化**。
|
||||
|
||||
记住,当你分享一个文件夹后,在其他客户端上的任何改动都会反映到每一台设备上。这意味着如果你在多台电脑和手机之间分享了一个包含图片的文件夹,任何一个客户端上的改动都会同步到所有设备。如果这不是你想要的,你可以把文件夹设为“仅发送”模式,这样其他客户端上的改动就不会被同步回来。
|
||||
|
||||
完成后,转至**与设备共享**页并选择要与之同步文件夹的主机:
|
||||
|
||||
您选择的所有设备都需要接受共享请求;您将在设备上收到通知。
|
||||
|
||||
正如共享文件夹时一样,您必须配置新的共享文件夹:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing12.png)
|
||||
|
||||
同样,在这里您可以定义任何标签,但是 ID 必须匹配每个客户端。在文件夹选项中,选择文件夹及其文件的位置。请记住,此文件夹中所做的任何更改都将反映到文件夹所允许同步的每个设备上。
|
||||
|
||||
以上就是连接设备并用 Syncthing 共享文件夹的步骤。开始复制可能需要几分钟,这取决于你的网络设置,以及设备是否处于同一网络中。
|
||||
|
||||
Syncthing 还提供了更多出色的功能和选项。试试看,把握住你数据的控制权。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/9/take-control-your-data-syncthing
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[ypingcn](https://github.com/ypingcn)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[1]: https://gizmodo.com/google-says-it-doesnt-go-through-your-inbox-anymore-bu-1827299695
|
||||
[2]: https://syncthing.net/
|
||||
[3]: https://docs.syncthing.net/specs/bep-v1.html
|
||||
[4]: /file/410191
|
||||
[5]: https://opensource.com/sites/default/files/uploads/syncthing1.png "Syncthing in Setup WebUI dialog box"
|
87
translated/tech/20180927 5 cool tiling window managers.md
Normal file
87
translated/tech/20180927 5 cool tiling window managers.md
Normal file
@ -0,0 +1,87 @@
|
||||
5 个很酷的平铺窗口管理器
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/09/tilingwindowmanagers-816x345.jpg)
|
||||
Linux 桌面生态中有多种窗口管理器(WM)。有些是作为桌面环境的一部分开发的,有的则被用作独立程序。平铺 WM 属于后一种情况,它提供了一个更轻量、可自定义的环境。本文介绍五种这样的平铺 WM 供你试用。
|
||||
|
||||
### i3
|
||||
|
||||
[i3][1] 是最受欢迎的平铺窗口管理器之一。与大多数其他此类 WM 一样,i3 专注于低资源消耗和用户可定制性。
|
||||
|
||||
您可以参考[Magazine 上的这篇文章][2]了解 i3 安装细节以及如何配置它。
|
||||
|
||||
### sway
|
||||
|
||||
[sway][3] 是一个平铺 Wayland 合成器。它有与现有 i3 配置兼容的优点,因此你可以使用它来替换 i3 并使用 Wayland 作为显示协议。
|
||||
|
||||
您可以使用 dnf 从 Fedora 仓库安装 sway:
|
||||
|
||||
```
|
||||
$ sudo dnf install sway
|
||||
```
|
||||
|
||||
如果你想从 i3 迁移到 sway,这里有一个[迁移指南][4]。
|
||||
|
||||
### Qtile
|
||||
|
||||
[Qtile][5] 是另一个平铺管理器,也恰好是用 Python 编写的。默认情况下,你在位于 ~/.config/qtile/config.py 下的 Python 脚本中配置 Qtile。当此脚本不存在时,Qtile 会使用默认[配置][6]。
|
||||
|
||||
Qtile 使用 Python 的一个好处是你可以编写脚本来控制 WM。例如,以下脚本打印屏幕详细信息:
|
||||
|
||||
```
|
||||
> from libqtile.command import Client
|
||||
> c = Client()
|
||||
> print(c.screen.info)
|
||||
{'index': 0, 'width': 1920, 'height': 1006, 'x': 0, 'y': 0}
|
||||
```
|
||||
|
||||
要在 Fedora 上安装 Qtile,请使用以下命令:
|
||||
|
||||
```
|
||||
$ sudo dnf install qtile
|
||||
```
|
||||
|
||||
### dwm
|
||||
|
||||
[dwm][7] 窗口管理器更侧重于轻量级。该项目的一个目标是保持 dwm 最小。例如,整个代码库从未超过 2000 行代码。另一方面,dwm 不容易定制和配置。实际上,改变 dwm 默认配置的唯一方法是[编辑源代码并重新编译程序][8]。
|
||||
|
||||
如果你想尝试默认配置,你可以使用 dnf 在 Fedora 中安装 dwm:
|
||||
|
||||
```
|
||||
$ sudo dnf install dwm
|
||||
```
|
||||
|
||||
对于那些想要改变 dwm 配置的人,Fedora 中有一个 dwm-user 包。该软件包使用用户主目录中 ~/.dwm/config.h 的配置自动重新编译 dwm。
|
||||
|
||||
### awesome
|
||||
|
||||
[awesome][9] 最初是作为 dwm 的一个分支开发的,它改用外部配置文件来配置 WM。配置通过 Lua 脚本完成,你可以编写脚本来自动执行任务或创建小部件。
|
||||
|
||||
你可以使用这个命令在 Fedora 上安装 awesome:
|
||||
|
||||
```
|
||||
$ sudo dnf install awesome
|
||||
```
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/5-cool-tiling-window-managers/
|
||||
|
||||
作者:[Clément Verna][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org
|
||||
[1]: https://i3wm.org/
|
||||
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
|
||||
[3]: https://swaywm.org/
|
||||
[4]: https://github.com/swaywm/sway/wiki/i3-Migration-Guide
|
||||
[5]: http://www.qtile.org/
|
||||
[6]: https://github.com/qtile/qtile/blob/develop/libqtile/resources/default_config.py
|
||||
[7]: https://dwm.suckless.org/
|
||||
[8]: https://dwm.suckless.org/customisation/
|
||||
[9]: https://awesomewm.org/
|
@ -1,74 +0,0 @@
|
||||
如何在家中使用 SSH 和 SFTP 协议
|
||||
======
|
||||
|
||||
通过 SSH 和 SFTP 协议,我们能够访问其他设备,高效而安全地传输文件,以及做到更多。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)
|
||||
|
||||
多年前,我决定配置一台额外的电脑,以便我在工作时能够访问它来传输所需的文件。最基本的一步,是让你的网络提供商(ISP)提供一个固定的 IP 地址。
|
||||
|
||||
保证你系统访问的安全,是一个并非必需但很重要的步骤。在这种特殊情况下,我计划只在工作时访问它,所以我可以限制访问的 IP 地址。即便如此,你依然要尽可能多地采取安全措施。一旦你把它架设起来,全世界的人马上都能尝试访问你的系统,这既令人惊奇又令人恐慌。你可以通过日志文件发现这一点。我推测有探测机器人在尽其所能地搜寻那些没有安全措施的系统。
|
||||
|
||||
在建好系统后不久,我觉得这种访问更像一个玩具,而不是我真正的需求,于是我把它关闭了,不再为它操心。尽管如此,SSH 和 SFTP 在家庭网络中还有另一种用途,而且它差不多已经为你准备好了。
|
||||
|
||||
一个必备条件是,你家的另一台电脑必须已经开机,至于这台电脑有多旧并没有影响。你还需要知道那台电脑的 IP 地址。有两个方法可以做到:一个是通过浏览器访问你的路由器,路由器的地址格式一般类似于 **192.168.1.254**。只需一些搜索,就能轻松找出当前开机的、与 eth0 或者 WiFi 关联的设备。真正有挑战的,是认出哪一台才是你感兴趣的电脑。
|
||||
|
||||
直接在那台电脑上查询则简单得多。打开 shell,输入:
|
||||
|
||||
```
|
||||
ifconfig
|
||||
|
||||
```
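如果 `ifconfig` 的输出太长,也可以用文本过滤工具直接提取地址。下面用一段示例文本演示这种 awk 过滤(离线即可运行;其中的接口名和地址都是演示用的假设值):

```shell
# 用示例文本模拟 ifconfig 的输出,演示如何用 awk 提取 inet 后面的地址
printf 'eth0: flags=4163<UP>\n        inet 192.168.1.234  netmask 255.255.255.0\n' \
  | awk '/inet /{print $2}'
# 输出:192.168.1.234
```

在真实系统上,把 `printf …` 换成 `ifconfig` 即可得到同样的过滤效果。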
|
||||
|
||||
命令会输出一些信息,你所需要的在 `inet` 后面,看起来类似 **192.168.1.234**。找到它之后,回到你的客户端电脑,在命令行中输入:
|
||||
|
||||
```
|
||||
ssh gregp@192.168.1.234
|
||||
|
||||
```
|
||||
|
||||
要让上面的命令正常执行,**gregp** 必须是主机系统中正确的用户名,并且还需要输入该用户的密码。如果你键入的用户名和密码都正确,你就通过 shell 环境连接上了另一台电脑。坦白说,我并不经常使用 SSH,只是偶尔用它在另一台电脑上运行 `dnf` 更新系统。通常,我用的是 SFTP:
|
||||
|
||||
```
|
||||
sftp gregp@192.168.1.234
|
||||
|
||||
```
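如果你经常连接同一台主机,可以在 ~/.ssh/config 里加一个别名条目,省得每次都输入 IP 和用户名。下面是一个示例条目,主机别名是假设的,地址与用户名沿用正文中的示例:

```
# ~/.ssh/config 中的一个示例条目(别名 homebox 为假设)
Host homebox
    HostName 192.168.1.234
    User gregp
```

之后就可以直接运行 `ssh homebox` 或 `sftp homebox`。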
|
||||
|
||||
我强烈需要一种更简单的方法,把文件从一台电脑传到另一台电脑。相比闪存盘和额外的设备,SFTP 更方便、更省时。
|
||||
|
||||
一旦连接建立成功,SFTP 有两个基本的命令:`get`,从主机接收文件;`put`,向主机发送文件。在连接之前,我通常会先在客户端切换到我想接收或传输文件的文件夹。连接之后,你将处于顶层目录 **home/gregp**。连接成功后,你可以像在客户端一样使用 `cd`,不过这改变的是你在主机上的工作路径。你可能需要用 `ls` 来确认自己的位置。
|
||||
|
||||
如果你想改变客户端的工作路径,用 `lcd` 命令(local change directory)。同样,用 `lls` 来显示客户端工作目录的内容。
|
||||
|
||||
如果你不喜欢主机上的工作目录,该怎么办?用 `mkdir` 在主机上创建一个新的文件夹,或者把整个目录拷贝到主机:
|
||||
|
||||
```
|
||||
put -r thisDir/
|
||||
|
||||
```
|
||||
|
||||
在主机上创建文件夹、传输文件及其子文件夹都非常快,能达到硬件允许的上限,也不会遇到网络传输的瓶颈。要查看 SFTP 的全部可用功能,参看:
|
||||
|
||||
```
|
||||
man sftp
|
||||
|
||||
```
|
||||
|
||||
在我的电脑上,我也可以在 Windows 虚拟机中使用 SFTP,这是配置虚拟机而不是双系统的另一个优势。它让我可以把文件移入或移出系统的 Linux 部分。到目前为止,我只用过 Windows 中的客户端。
|
||||
|
||||
你也能访问连接到你路由器的任何设备。暂时,我使用一个叫做 [SSHDroid][1] 的应用,它能够在被动模式下运行 SSH。换句话说,你可以用你的电脑访问作为主机的安卓设备。最近我还发现了另一个应用 [Admin Hands][2],不管你的客户端是桌面还是手机,都能使用 SSH 或者 SFTP 操作。这个应用对于备份和分享手机照片是极好的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/ssh-sftp-home-network
|
||||
|
||||
作者:[Greg Pittman][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[singledo](https://github.com/singledo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/greg-p
|
||||
[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid
|
||||
[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US
|
@ -1,24 +1,25 @@
|
||||
在 Linux 命令行中使用 ls 列出文件的提示
|
||||
======
|
||||
学习一些 Linux "ls" 命令最有用的变化。
|
||||
|
||||
学习一些 Linux `ls` 命令最有用的变化。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
|
||||
|
||||
我在 Linux 中最先学到的命令之一就是 `ls`。了解系统中文件所在目录中的内容非常重要。能够查看和修改不仅仅是一些文件还要所有文件也很重要。
|
||||
我在 Linux 中最先学到的命令之一就是 `ls`。了解系统中文件所在目录中的内容非常重要。能够查看和修改不仅仅是一些文件还有所有文件也很重要。
|
||||
|
||||
我的第一个 Linux 备忘录是[单页 Linux 手册][1],它于 1999 年发布,它成为我的首选参考资料。当我开始探索 Linux 时,我把它贴在桌子上并经常参考它。它的第一页第一列的底部有使用 `ls -l` 列出文件的命令。
|
||||
我的第一个 Linux 备忘录是[单页 Linux 手册][1],它于 1999 年发布,成了我的首选参考资料。当我开始探索 Linux 时,我把它贴在桌子上并经常参考它。它在第一页第一列的底部介绍了 `ls -l` 列出文件的命令。
|
||||
|
||||
之后,我将学习这个最基本命令的其他迭代。通过 `ls` 命令,我开始了解 Linux 文件权限的复杂性以及哪些是我的文件,哪些需要 root 或者 root 权限来修改。随着时间的推移,我习惯使用命令行,虽然我仍然使用 `ls -l` 来查找目录中的文件,但我经常使用 `ls -al`,这样我就可以看到可能需要更改的隐藏文件,比如那些配置文件。
|
||||
之后,我将学习这个最基本命令的其它迭代。通过 `ls` 命令,我开始了解 Linux 文件权限的复杂性,以及哪些是我的文件,哪些需要 root 或者 sudo 权限来修改。随着时间的推移,我习惯了使用命令行,虽然我仍然使用 `ls -l` 来查找目录中的文件,但我经常使用 `ls -al`,这样我就可以看到可能需要更改的隐藏文件,比如那些配置文件。
|
||||
|
||||
根据 Eric Fischer 在[Linux 文档项目][2]中关于 `ls` 命令的文章,该命令的根源可以追溯到 1961年 MIT 的相容分时系统 (CTSS
|
||||
) 上的 `listf` 命令。当 CTSS 被 [Multics][3] 代替时,命令变为 `list`,并有像 `list -all` 的开关。根据[维基百科][4],“ls” 出现在 AT&T Unix 的原始版本中。我们今天在 Linux 系统上使用的 `ls` 命令来自 [GNU Core Utilities][5]。
|
||||
根据 Eric Fischer 在 [Linux 文档项目][2]中关于 `ls` 命令的文章,该命令的起源可以追溯到 1961 年 MIT 的<ruby>相容分时系统<rt>Compatible Time-Sharing System</rt></ruby>(CTSS)上的 `listf` 命令。当 CTSS 被 [Multics][3] 代替时,命令变为 `list`,并有像 `list -all` 的开关。根据[维基百科][4],`ls` 出现在 AT&T Unix 的原始版本中。我们今天在 Linux 系统上使用的 `ls` 命令来自 [GNU Core Utilities][5]。
|
||||
|
||||
大多数时候,我只使用几个迭代的命令。使用 `ls` 或 `ls -al` 查看目录内部是我通常使用该命令的方法,但是你还应该熟悉许多其他选项。
|
||||
大多数时候,我只使用几个迭代的命令。我通常用 `ls` 或 `ls -al` 查看目录内容,但是你还应该熟悉许多其它选项。
|
||||
|
||||
`$ ls -l` 提供了一个简单的目录列表:
|
||||
`ls -l` 提供了一个简单的目录列表:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png)
|
||||
|
||||
使用我的 Fedora 28 系统中的手册页,我发现 `ls` 还有许多其他选项,所有这些选项都提供了有关 Linux 文件系统的有趣且有用的信息。通过在命令提示符下输入 `man ls`,我们可以开始探索其他一些选项:
|
||||
在我的 Fedora 28 系统的手册页中,我发现 `ls` 还有许多其它选项,所有这些选项都提供了有关 Linux 文件系统的有趣且有用的信息。通过在命令提示符下输入 `man ls`,我们可以开始探索其它一些选项:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png)
|
||||
|
||||
@ -38,18 +39,16 @@
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png)
|
||||
|
||||
以下是我认为有用且有趣的一些其他选项:
|
||||
以下是我认为有用且有趣的一些其它选项:
|
||||
|
||||
* 仅列出目录中的 .txt 文件:`ls * .txt`
|
||||
* 按文件大小列出:`ls -s`
|
||||
* 按时间和日期排序:`ls -d`
|
||||
* 按扩展名排序:`ls -X`
|
||||
* 按文件大小排序:`ls -S`
|
||||
* 带有文件大小的长格式:`ls -ls`
|
||||
* 仅列出目录中的 .txt 文件:`ls *.txt`
|
||||
* 按文件大小列出:`ls -s`
|
||||
* 按时间和日期排序:`ls -t`
|
||||
* 按扩展名排序:`ls -X`
|
||||
* 按文件大小排序:`ls -S`
|
||||
* 带有文件大小的长格式:`ls -ls`
|
||||
|
||||
|
||||
|
||||
要生成指定格式的目录列表并将其定向到文件供以后查看,请输入 `ls -al> mydirectorylist`。最后,我找到的一个更奇特的命令是 `ls -R`,它提供了计算机上所有目录及其内容的递归列表。
|
||||
要生成指定格式的目录列表并将其定向到文件供以后查看,请输入 `ls -al > mydirectorylist`。最后,我找到的一个更奇特的命令是 `ls -R`,它提供了计算机上所有目录及其内容的递归列表。
|
||||
|
||||
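正文提到的这些选项,可以在一个临时目录里安全地动手试验,不会弄乱真实目录。下面是一个小演示,其中的文件名都是演示用的假设:

```shell
# 在临时目录中试验几个 ls 变体
demo=$(mktemp -d)
cd "$demo"
touch alpha.txt beta.log .hidden

ls *.txt                  # 仅列出 .txt 文件
ls -a | grep hidden       # -a(或 -al)才会显示 .hidden 这样的隐藏文件
ls -X                     # 按扩展名排序
ls -R > mydirectorylist   # 把递归列表定向到文件供以后查看
```

试验完后删除临时目录即可。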
有关 `ls` 命令的所有迭代的完整列表,请参阅 [GNU Core Utilities][6]。
|
||||
|
||||
@ -60,7 +59,7 @@ via: https://opensource.com/article/18/10/ls-command
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[pityonline](https://github.com/pityonline)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
|
187
translated/tech/20181005 Open Source Logging Tools for Linux.md
Normal file
187
translated/tech/20181005 Open Source Logging Tools for Linux.md
Normal file
@ -0,0 +1,187 @@
|
||||
Linux 上的开源日志工具
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs-main.jpg?itok=voNrSz4H)
|
||||
|
||||
如果你是一位 Linux 系统管理员,你进行故障排除的第一个工具就是日志文件。这些文件保留着关键信息,长期以来可以帮你解决影响桌面和服务器的问题。对于许多系统管理员(特别是老手)来说,没有什么比在命令行上检查日志文件更好的方式了。但是对于那些想要更高效(也可能是更现代)地排除故障的人来说,还有许多其他选择。
|
||||
|
||||
在这篇文章中,我将罗列一些在 Linux 平台上值得一用的工具。特定于某项服务(例如 Kubernetes 或者 Apache)的日志工具不会出现在该清单中;取而代之的,是那些能够深入挖掘写入 /var/log 的全部“神奇”信息的工具。
|
||||
|
||||
|
||||
### 什么是 /var/log?
|
||||
|
||||
如果你刚开始使用 Linux,你可能不知道 /var/log 目录都包含了些什么。然而,这个名字已经说明了一切。这个目录存放着系统以及安装在系统上的主要服务(如 Apache、MySQL、MariaDB 等)的日志信息。打开一个终端窗口,键入 `cd /var/log` 命令,接着键入 `ls` 命令,你将看到所有可以查看的日志文件(图 1)。
|
||||
|
||||
![/var/log/][2]
|
||||
|
||||
图 1:`ls` 命令显示在 /var/log 下可用的日志。
|
||||
|
||||
[经许可使用][3]
|
||||
|
||||
如果你想查看 syslog 日志文件,运行 `less syslog` 命令,就可以滚动查看该日志的所有细节。但如果标准的终端方式不适合你,还有什么选择呢?其实有很多,让我们来看看以下几个。
|
||||
|
||||
### Logs
|
||||
|
||||
如果你使用 GNOME 桌面(或其他桌面,Logs 并非只能安装在 GNOME 上),你就已经有了这样一个工具。它只是在日志文件上加了一个轻量的图形界面,却让查看日志的过程更加简单高效。(从标准软件库)安装完成后,从桌面菜单打开 Logs,你将看到一个界面(图 2),它允许你从各种类型的日志(重要、所有、系统、安全和硬件)中进行选择,也可以(从顶部中间的下拉菜单)选择一个启动时段,甚至可以在所有可用的日志中搜索。
|
||||
|
||||
![Logs tool][5]
|
||||
|
||||
图 2:GNOME 的 Logs 工具是 Linux 上最简易的日志图形化软件之一。
|
||||
|
||||
[经许可使用][3]
|
||||
|
||||
Logs 是一个非常好的工具,特别是当你不想要太多妨碍你查看日志的花哨功能时。对于查看系统日志来说,Logs 就是一个很好的选择。
|
||||
|
||||
### KSystemLog
|
||||
|
||||
KSystemLog 之于 KDE,就像 Logs 之于 GNOME,不过融入了更多功能。尽管两者都让查看系统日志文件变得非常简单,但只有 KSystemLog 提供彩色的日志行、分页查看、把日志行复制到剪贴板、内置直接向系统发送日志信息的能力、详细阅读每行日志的信息,等等。KSystemLog 能查看的所有日志都可以在 GNOME 的 Logs 中找到,只是格式各有不同。
|
||||
|
||||
从主窗口(图 3),你可以查看许多不同的日志(系统日志、认证日志、X.org 日志、Journald 日志),并通过日期、拥有者、进程、信息或日志优先级来过滤日志。
|
||||
|
||||
![KSystemLog][7]
|
||||
|
||||
图 3:KSystemLog 主界面
|
||||
|
||||
[经许可使用][3]
|
||||
|
||||
如果你点击“窗口”菜单,你可以打开一个新标签,在里面选择不同的日志/过滤器组合来查看。在同一个菜单中,你甚至可以复制当前标签。你还可以通过以下操作,手动向某个日志中添加一条记录:
|
||||
|
||||
1. 打开 KSystemLog。
|
||||
|
||||
2. 点击“文件” > “添加日志条目”。
|
||||
|
||||
3. 创建你的日志条目(图 4)。
|
||||
|
||||
4. 点击 OK。
|
||||
|
||||
|
||||
|
||||
![log entry][9]
|
||||
|
||||
图 4:使用 KSystemLog 手动创建一条日志记录。
|
||||
|
||||
[经许可使用][3]
|
||||
|
||||
KSystemLog 使得在 KDE 下查看日志变成一个非常简单的操作。
|
||||
|
||||
### Logwatch
|
||||
|
||||
Logwatch 不是一个花哨的图形化工具。相反,Logwatch 允许你设置一个日志记录系统,把重要的警告用 e-mail 发给你。你可以通过 SMTP 服务发送这些重要通知,或者只在本地机器上查看。几乎所有发行版的标准软件库中都有 Logwatch,所以可以用一条简单的命令完成安装:
|
||||
|
||||
```
|
||||
sudo apt-get install logwatch
|
||||
```
|
||||
|
||||
或者:
|
||||
|
||||
```
|
||||
sudo dnf install logwatch
|
||||
```
|
||||
|
||||
在安装期间,你需要选择发送警告的方式(图 5)。如果你选择仅以本地邮件的方式发送,就需要安装 mailutils(这样你才能通过 `mail` 命令查看本地邮件)。
|
||||
|
||||
![ Logwatch][11]
|
||||
|
||||
图 5:配置 Logwatch 警告发送方式
|
||||
|
||||
[经许可使用][3]
|
||||
|
||||
所有的 Logwatch 配置都放在一个文件中,可以使用 `sudo nano /usr/share/logwatch/default.conf/logwatch.conf` 命令编辑该文件。你可能想要编辑其中的 `MailTo =` 选项:如果你是在本地查看日志,把这一项设置为你想用来接收日志的 Linux 用户名(例如 `MailTo = jack`);如果你想把这些日志发送到一个外部邮件地址,则把 `MailTo =` 修改为一个正确的邮件地址。在同一个配置文件中,你还可以设置日志的细节层级和涵盖范围。保存并关闭该文件。设置成功后,你就可以用以下命令发送你的第一封邮件:
|
||||
|
||||
```
|
||||
logwatch --detail Med --mailto ADDRESS --service all --range today
|
||||
# 其中 ADDRESS 是本地用户名或者一个邮件地址
|
||||
|
||||
```
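上一段提到要编辑的 logwatch.conf,其中相关的几个配置项大致长这样,各项的值均为示例,请按自己的情况修改:

```
# /usr/share/logwatch/default.conf/logwatch.conf 中的示例片段(值为假设)
MailTo = jack        # 接收报告的本地用户或邮件地址
Detail = Med         # 细节层级:Low / Med / High
Range = yesterday    # 涵盖的时间范围
Service = All        # 纳入报告的服务
```

修改后无需重启任何服务,下次运行 `logwatch` 时即生效。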
|
||||
|
||||
使用 `man logwatch` 可以查看更多有关使用 Logwatch 的信息。通过阅读手册页可以了解这个工具的各种选项。
|
||||
|
||||
### Rsyslog
|
||||
|
||||
Rsyslog 是一种把远程客户端的日志发送到一台集中服务器的简便方式。假设你有一台服务器,想收集数据中心里其他 Linux 服务器的日志,有了 Rsyslog,这可以很方便地实现。Rsyslog 必须安装在所有客户端和这台集中服务器上(通过运行 `sudo apt-get install rsyslog`)。安装完成后,在集中服务器上创建 /etc/rsyslog.d/server.conf 文件,包含以下内容:
|
||||
|
||||
```
|
||||
# Provide UDP syslog reception
|
||||
$ModLoad imudp
|
||||
$UDPServerRun 514
|
||||
|
||||
# Provide TCP syslog reception
|
||||
$ModLoad imtcp
|
||||
$InputTCPServerRun 514
|
||||
|
||||
# Use custom filenaming scheme
|
||||
$template FILENAME,"/var/log/remote/%HOSTNAME%.log"
|
||||
*.* ?FILENAME
|
||||
|
||||
$PreserveFQDN on
|
||||
|
||||
```
|
||||
|
||||
保存并退出这个文件。现在,在每一台客户端机器上,创建文件 /etc/rsyslog.d/client.conf,包含以下内容:
|
||||
|
||||
```
|
||||
$PreserveFQDN on
|
||||
$ActionQueueType LinkedList
|
||||
$ActionQueueFileName srvrfwd
|
||||
$ActionResumeRetryCount -1
|
||||
$ActionQueueSaveOnShutdown on
|
||||
*.* @@SERVER_IP:514
|
||||
|
||||
```
|
||||
|
||||
其中 SERVER_IP 是你的集中服务器的 IP 地址。保存并关闭该文件。使用以下命令在所有机器上重启 rsyslog:
|
||||
|
||||
```
|
||||
sudo systemctl restart rsyslog
|
||||
|
||||
```
|
||||
|
||||
现在,你可以(在集中服务器上)使用以下命令查看汇总的日志文件:
|
||||
|
||||
```
|
||||
tail -f /var/log/remote/*.log
|
||||
|
||||
```
|
||||
|
||||
`tail` 命令允许你实时查看写入这些文件中的内容。你应该可以看到包含主机名的日志条目(图 6)。
|
||||
|
||||
![Rsyslog][13]
|
||||
|
||||
图 6:Rsyslog 显示来自一个已连接客户端的日志条目。
|
||||
|
||||
[经许可使用][3]
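顺带一提,`tail` 不加 `-f` 时只打印文件末尾几行。可以用一个临时文件模拟日志,离线体会它的行为(文件内容为演示用的假设):

```shell
# 用临时文件模拟日志,演示 tail 只取末尾几行
log=$(mktemp)
printf 'line1\nline2\nline3\nline4\n' > "$log"
tail -n 2 "$log"
# 输出:
# line3
# line4
```

加上 `-f` 之后,`tail` 会停在文件末尾持续等待,新写入的行会立即显示出来,这正是上面查看远程日志时用到的方式。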
|
||||
|
||||
如果你想为你的众多 Linux 服务器创建一个单一的日志查看入口,Rsyslog 是一个非常好的工具。
|
||||
|
||||
### 了解更多
|
||||
|
||||
这篇文章仅仅触及了为 Linux 平台打造的日志工具的皮毛。上述每个工具都能做到比这里概述的更多。这篇梗概应该能帮助你开启漫长的日志记录之旅。
|
||||
|
||||
你可以从 Linux 基金会和 edX 的免费课程 [“Introduction to Linux”][14] 了解更多。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/10/open-source-logging-tools-linux
|
||||
|
||||
作者:[JACK WALLEN][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[dianbanjiu](https://github.com/dianbanjiu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[1]: /files/images/logs1jpg
|
||||
[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_1.jpg?itok=8yO2q1rW (/var/log/)
|
||||
[3]: /licenses/category/used-permission
|
||||
[4]: /files/images/logs2jpg
|
||||
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_2.jpg?itok=kF6V46ZB (Logs tool)
|
||||
[6]: /files/images/logs3jpg
|
||||
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_3.jpg?itok=PhrIzI1N (KSystemLog)
|
||||
[8]: /files/images/logs4jpg
|
||||
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_4.jpg?itok=OxsGJ-TJ (log entry)
|
||||
[10]: /files/images/logs5jpg
|
||||
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_5.jpg?itok=GeAR551e (Logwatch)
|
||||
[12]: /files/images/logs6jpg
|
||||
[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_6.jpg?itok=ira8UZOr (Rsyslog)
|
||||
[14]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,123 +0,0 @@
|
||||
Minikube 入门:笔记本上的 Kubernetes
|
||||
======
|
||||
运行 Minikube 的分步指南。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ)
|
||||
|
||||
在 [Hello Minikube][1] 教程页面上,Minikube 被宣传为基于 Docker 运行 Kubernetes 的一种简单方法。虽然该文档非常有用,但它主要是为 MacOS 编写的。你可以深入挖掘在 Windows 或某个 Linux 发行版上的使用说明,但它们不是很清楚。许多文档都是针对 Debian / Ubuntu 用户的,比如[安装 Minikube 的驱动程序][2]。
|
||||
|
||||
### 先决条件
|
||||
|
||||
1. 你已经[安装了Docker][3]。
|
||||
2. 你的计算机是基于 RHEL / CentOS / Fedora 的工作站。
|
||||
3. 你已经[安装了正常运行的KVM2虚拟机管理程序][4]。
|
||||
4. 你有一个可用的 **docker-machine-driver-kvm2**。以下命令将安装该驱动程序:
|
||||
|
||||
```
|
||||
curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
|
||||
&& chmod +x docker-machine-driver-kvm2 \
|
||||
&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
|
||||
&& rm docker-machine-driver-kvm2
|
||||
```
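在满足第 3、4 条先决条件之前,可以先确认 CPU 是否支持硬件虚拟化(这是 KVM 的前提)。下面的命令统计 /proc/cpuinfo 中的虚拟化标志,输出大于 0 即表示支持:

```shell
# vmx 对应 Intel VT-x,svm 对应 AMD-V;输出 0 表示 CPU 不支持或未在 BIOS 中开启
grep -E -o 'vmx|svm' /proc/cpuinfo | wc -l
```

如果输出为 0,则需要换用 Virtualbox 驱动或在 BIOS 中开启虚拟化支持。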
|
||||
|
||||
### 下载,安装和启动Minikube
|
||||
|
||||
1. 为你即将下载的两个文件创建一个目录,这两个文件分别是:[minikube][5] 和 [kubectl][6]。
|
||||
|
||||
|
||||
2. 打开终端窗口,运行以下命令安装 minikube:
|
||||
|
||||
```
|
||||
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
|
||||
```
|
||||
|
||||
请注意,minikube 的版本(例如 minikube-linux-amd64)可能因计算机的规格而有所不同。
|
||||
|
||||
3. 用 **chmod** 加上可执行权限:
|
||||
|
||||
```
|
||||
chmod +x minikube
|
||||
```
|
||||
|
||||
4. 将文件移动到**/usr/local/bin**路径下,以便你能将其作为命令运行。
|
||||
|
||||
```
|
||||
mv minikube /usr/local/bin
|
||||
```
|
||||
|
||||
5. 使用以下命令安装kubectl(类似于minikube的安装过程)。
|
||||
|
||||
```
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
|
||||
```
|
||||
|
||||
其中嵌套的 **curl** 命令用于确定 Kubernetes 的最新版本。
|
||||
|
||||
6. **chmod**给kubectl加执行权限。
|
||||
|
||||
```
|
||||
chmod +x kubectl
|
||||
```
|
||||
|
||||
7. 将kubectl移动到**/usr/local/bin**路径下作为命令运行。
|
||||
|
||||
```
|
||||
mv kubectl /usr/local/bin
|
||||
```
|
||||
|
||||
8. 运行 **minikube start** 命令。为此,你需要有虚拟机管理程序。我使用的是 KVM2,你也可以使用 Virtualbox。确保以普通用户而非 root 身份运行以下命令,以便配置存储在普通用户名下而不是 root 名下。
|
||||
|
||||
```
|
||||
minikube start --vm-driver=kvm2
|
||||
```
|
||||
|
||||
这可能需要一段时间,请耐心等待。
|
||||
|
||||
9. Minikube 应该已经下载并启动了。使用以下命令确认是否成功:
|
||||
|
||||
```
|
||||
cat ~/.kube/config
|
||||
```
|
||||
|
||||
10. 执行以下命令,将 Minikube 设为当前上下文。上下文决定了 kubectl 与哪个集群交互。所有可用的上下文都可以在 ~/.kube/config 文件中查看。
|
||||
|
||||
```
|
||||
kubectl config use-context minikube
|
||||
```
|
||||
|
||||
11. 再次查看 **config** 文件,检查其中是否存在 Minikube 上下文:
|
||||
|
||||
```
|
||||
cat ~/.kube/config
|
||||
```
|
||||
|
||||
12. 最后,运行以下命令打开浏览器查看Kubernetes仪表板。
|
||||
|
||||
```
|
||||
minikube dashboard
|
||||
```
|
||||
|
||||
本指南旨在让 RHEL / Fedora / CentOS 操作系统的用户操作更轻松。
|
||||
|
||||
现在Minikube已启动并运行,请阅读[通过Minikube在本地运行Kubernetes][7]这篇官网教程开始使用它。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/getting-started-minikube
|
||||
|
||||
作者:[Bryant Son][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://kubernetes.io/docs/tutorials/hello-minikube
|
||||
[2]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md
|
||||
[3]: https://docs.docker.com/install
|
||||
[4]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver
|
||||
[5]: https://github.com/kubernetes/minikube/releases
|
||||
[6]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl
|
||||
[7]: https://kubernetes.io/docs/setup/minikube
|
@ -0,0 +1,72 @@
|
||||
如何在 Arch Linux(UEFI)上安装 GRUB
|
||||
======
|
||||
|
||||
![](http://fasterland.net/wp-content/uploads/2018/10/Arch-Linux-Boot-Menu-750x375.jpg)
|
||||
|
||||
前段时间,我写了一篇在安装 Windows 后如何在 Arch Linux 上[重新安装 Grub][1] 的教程。
|
||||
|
||||
几周前,我不得不在我的笔记本上从头开始重新安装 **Arch Linux**,同时我发现安装 **Grub** 并不像我想的那么简单。
|
||||
|
||||
出于这个原因,鉴于在新安装 **Arch Linux** 时**在 UEFI BIOS 中安装 Grub** 并不容易,我决定写这篇教程。
|
||||
|
||||
### 定位 EFI 分区
|
||||
|
||||
在 **Arch Linux** 上安装 **Grub** 的第一件重要事情是定位 **EFI** 分区。让我们运行以下命令以找到此分区:
|
||||
|
||||
```
|
||||
# fdisk -l
|
||||
```
|
||||
|
||||
我们需要检查标记为 **EFI System** 的分区,我这里是 **/dev/sda2**。
|
||||
|
||||
之后,我们需要在例如 /boot/efi 上挂载这个分区:
|
||||
|
||||
```
|
||||
# mkdir /boot/efi
|
||||
# mount /dev/sda2 /boot/efi
|
||||
```
|
||||
|
||||
另一件重要的事情是将此分区添加到 **/etc/fstab** 中。
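下面是 /etc/fstab 中这样一条条目的示意,其中的 UUID 是占位值,请先用 `blkid /dev/sda2` 查出实际的 UUID 再填写:

```
# <设备>        <挂载点>   <类型> <选项>    <dump> <pass>
UUID=XXXX-XXXX  /boot/efi  vfat   defaults  0      2
```

这样系统每次启动时都会自动挂载 EFI 分区。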
|
||||
|
||||
#### 安装 Grub
|
||||
|
||||
现在我们可以在我们的系统中安装 Grub:
|
||||
|
||||
```
|
||||
# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
|
||||
# grub-mkconfig -o /boot/grub/grub.cfg
|
||||
```
|
||||
|
||||
#### 自动将 Windows 添加到 Grub 菜单中
|
||||
|
||||
为了自动将 **Windows 条目添加到 Grub 菜单**,我们需要安装 **os-prober**:
|
||||
|
||||
```
|
||||
# pacman -Sy os-prober
|
||||
```
|
||||
|
||||
要添加它,让我们运行以下命令:
|
||||
|
||||
```
|
||||
# os-prober
|
||||
# grub-mkconfig -o /boot/grub/grub.cfg
|
||||
# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
|
||||
```
|
||||
|
||||
你可以在[这里][2]找到更多关于在 Arch Linux 上 Grub 的信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://fasterland.net/how-to-install-grub-on-arch-linux-uefi.html
|
||||
|
||||
作者:[Francesco Mondello][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://fasterland.net/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://fasterland.net/reinstall-grub-arch-linux.html
|
||||
[2]: https://wiki.archlinux.org/index.php/GRUB
|
@ -0,0 +1,151 @@
|
||||
在 Linux 手册页中查看整个 Arch Linux Wiki
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/arch-wiki-720x340.jpg)
|
||||
|
||||
不久之前,我写了篇关于一个名叫 [**arch-wiki-cli**][1] 的命令行脚本的文章,使用它可以在终端中查看整个 Arch Linux Wiki。使用这个脚本,你可以很轻松地用你喜欢的文本浏览器查看整个 Arch Wiki。显然,使用这个脚本需要可用的网络连接。我今天偶然发现了一个名为 **Arch-wiki-man** 的程序,有着相同的功能。就跟名字说的一样,它可以让你在命令行中查看 Arch Wiki,而且无需联网。它能以手册页的形式显示来自 Arch Wiki 的任何文章。它会下载整个 Arch Wiki,本地副本每两天自动更新一次。因此,你的系统上总能有一份最新的 Arch Wiki 副本。
|
||||
|
||||
### 安装 Arch-wiki-man
|
||||
|
||||
Arch-wiki-man 可以在 [**AUR**][2] 中找到,所以你可以通过 [**Yay**][3] 之类的 AUR 帮助程序安装它:
|
||||
|
||||
```
|
||||
$ yay -S arch-wiki-man
|
||||
```
|
||||
|
||||
另外,它也可以使用 NPM 安装。首先确保你已经[**安装了 NodeJS**][4],然后使用以下命令安装它:
|
||||
|
||||
```
|
||||
$ npm install -g arch-wiki-man
|
||||
```
|
||||
|
||||
### 以手册页的形式查看整个 Arch Wiki
|
||||
|
||||
Arch-wiki-man 的典型语法如下:
|
||||
|
||||
```
|
||||
$ awman <search-query>
|
||||
```
|
||||
|
||||
下面看一些具体的例子:
|
||||
|
||||
**搜索一个或多个匹配项**
|
||||
|
||||
只需要下面的命令,就可以搜索 [**Arch Linux 安装指南**][5]。
|
||||
|
||||
```
|
||||
$ awman Installation guide
|
||||
```
|
||||
|
||||
上面的命令将会从 Arch Wiki 中搜索所有包含“安装指南”的条目。如果给出的搜索条目有很多匹配项,将会展示为一个选择菜单。使用**上下方向键**或 Vim 风格的方向键(**j/k**)移动到你想查看的指南上,按回车打开。然后就会像下面这样,以手册页的形式展示指南的内容。
|
||||
|
||||
![][6]
|
||||
|
||||
awman 是 arch wiki man 的缩写。
|
||||
|
||||
它支持手册页的所有操作,所以你可以像使用手册页一样使用它。按 **h** 查看帮助选项。
|
||||
|
||||
![][7]
|
||||
|
||||
要退出选择菜单而不进入手册页,只需按 **Ctrl+c**。
|
||||
|
||||
输入 **q** 返回或者/并且退出手册页。
|
||||
|
||||
**在标题或者概述中搜索匹配项**
|
||||
|
||||
Awman 默认只会在标题中搜索匹配项。但是你也可以指定它同时在标题和概述中搜索匹配项。
|
||||
|
||||
```
|
||||
$ awman -d vim
|
||||
```
|
||||
|
||||
或者,
|
||||
|
||||
```
|
||||
$ awman --desc-search vim
|
||||
```
|
||||
|
||||
**在目录中搜索匹配项**
|
||||
|
||||
不同于仅在标题和概述中搜索,这个选项能够扫描整个文章内容来寻找匹配项。不过请注意,这会使搜索过程明显变慢。
|
||||
|
||||
```
|
||||
$ awman -k emacs
|
||||
```
|
||||
|
||||
或者,
|
||||
|
||||
```
|
||||
$ awman --apropos emacs
|
||||
```
|
||||
|
||||
**在 web 浏览器中打开搜索结果**
|
||||
|
||||
如果你不想以手册页的形式查看 Arch Wiki 指南,你也可以像下面这样在 web 浏览器中打开它。
|
||||
|
||||
```
|
||||
$ awman -w pacman
|
||||
```
|
||||
|
||||
或者,
|
||||
|
||||
```
|
||||
$ awman --web pacman
|
||||
```
|
||||
|
||||
这条命令将会在 web 浏览器中打开匹配结果。请注意,使用这个选项需要网络连接。
|
||||
|
||||
**在其他语言中搜索**
|
||||
|
||||
Awman 默认打开的是英文的 Arch Wiki 页面。如果你想用其他的语言查看搜索结果,例如 **西班牙语**,只需要像这样做:
|
||||
|
||||
```
|
||||
$ awman -l spanish codecs
|
||||
```
|
||||
|
||||
![][8]
|
||||
|
||||
使用以下命令查看可用的语言:
|
||||
|
||||
```
|
||||
|
||||
$ awman --list-languages
|
||||
|
||||
```
|
||||
|
||||
**更新本地的 Arch Wiki 副本**
|
||||
|
||||
就像前面说过的,本地副本会每两天自动更新一次。你也可以使用以下命令手动更新:
|
||||
|
||||
```
|
||||
$ awman-update
|
||||
arch-wiki-man@1.3.0 /usr/lib/node_modules/arch-wiki-man
|
||||
└── arch-wiki-md-repo@0.10.84
|
||||
|
||||
arch-wiki-md-repo has been successfully updated or reinstalled.
|
||||
```
|
||||
|
||||
:)
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-browse-and-read-entire-arch-wiki-as-linux-man-pages/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[dianbanjiu](https://github.com/dianbanjiu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/search-arch-wiki-website-commandline/
|
||||
[2]: https://aur.archlinux.org/packages/arch-wiki-man/
|
||||
[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[4]: https://www.ostechnix.com/install-node-js-linux/
|
||||
[5]: https://www.ostechnix.com/install-arch-linux-latest-version/
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-1.gif
|
||||
[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-2.png
|
||||
[8]: https://www.ostechnix.com/wp-content/uploads/2018/10/awman-3-1.png
|
Loading…
Reference in New Issue
Block a user