Merge branch 'master' of https://github.com/LCTT/TranslateProject
commit f1da0e6828
@ -5,15 +5,16 @@ Linux 下如何通过两个或多个输出设备播放声音
|
||||
|
||||
在 Linux 上处理音频是一件很痛苦的事情。Pulseaudio 的出现则是利弊参半。虽然有些事情 Pluseaudio 能够做的更好,但有些事情则反而变得更复杂了。处理音频的输出就是这么一件事情。
|
||||
|
||||
如果你想要在 Linux PC 上启用多个音频输出,你只需要利用一个简单的工具就能在一个虚拟界面上启用另一个发音设备。这比看起来要简单的多。
|
||||
如果你想要在 Linux PC 上启用多个音频输出,你只需要利用一个简单的工具就能在一个虚拟接口上启用另一个声音设备。这比看起来要简单得多。
|
||||
|
||||
你可能会好奇为什么要这么做,一个很常见的情况是用电脑在电视上播放视频,你可以同时使用电脑和电视上的扬声器。
|
||||
|
||||
### 安装 Paprefs
|
||||
|
||||
实现从多个来源启用音频播放的最简单的方法是是一款名为 "paprefs" 的简单图形化工具。它是 PulseAudio Preferences 的缩写。
|
||||
实现从多个来源启用音频播放的最简单的方法,是使用一款名为 “paprefs” 的简单图形化工具。它是 PulseAudio Preferences 的缩写。
|
||||
|
||||
该软件包含在 Ubuntu 仓库中,可以直接用 apt 来进行安装。
|
||||
|
||||
```
|
||||
sudo apt install paprefs
|
||||
```
|
||||
@ -24,17 +25,17 @@ sudo apt install paprefs
|
||||
|
||||
虽然这款工具是图形化的,但作为普通用户在命令行中输入 `paprefs` 来启动它恐怕还是要更容易一些。
|
||||
|
||||
打开的窗口中有一些标签页,这些标签页内有一些可以调整的设置项。我们这里选择最后那个标签页,"Simultaneous Output。"
|
||||
打开的窗口中有一些标签页,这些标签页内有一些可以调整的设置项。我们这里选择最后那个标签页 “Simultaneous Output”。
|
||||
|
||||
![Paprefs on Ubuntu][1]
|
||||
|
||||
这个标签页中没有什么内容,只是一个复选框用来启用设置。
|
||||
这个标签页中没有什么内容,只是一个复选框用来启用该设置。
|
||||
|
||||
下一步,打开常规的声音首选项。这在不同的发行版中位于不同的位置。在 Ubuntu 上,它位于 GNOME 系统设置内。
|
||||
|
||||
![Enable Simultaneous Audio][2]
|
||||
|
||||
打开声音首选项后,选择 "output" 标签页。勾选 "Simultaneous output" 单选按钮。现在它就成了你的默认输出了。
|
||||
打开声音首选项后,选择 “output” 标签页。勾选 “Simultaneous output” 单选按钮。现在它就成了你的默认输出了。
|
||||
|
||||
### 测试一下
|
||||
|
||||
@ -42,7 +43,7 @@ sudo apt install paprefs
|
||||
|
||||
一切顺利的话,你就能从所有连接的设备中听到有声音传出了。
|
||||
|
||||
这就是所有要做的事了。此功能最适用于有多个设备(如 HDMI 端口和标准模拟输出)时。你当然也可以试一下其他配置。你还需要注意,只有一个音量控制器存在,因此你需要根据实际情况调整物理输出设备。
|
||||
这就是所有要做的事了。此功能最适用于有多个设备(如 HDMI 端口和标准模拟输出)时。你当然也可以试一下其他配置。你还需要注意,只有一个音量控制器,因此你需要根据实际情况调整物理输出设备。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -51,7 +52,7 @@ via: https://www.maketecheasier.com/play-sound-through-multiple-devices-linux/
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,28 +1,25 @@
|
||||
使用 Kafka 和 MongoDB 进行 Go 异步处理
|
||||
============================================================
|
||||
|
||||
在我前面的博客文章 ["使用 MongoDB 和 Docker 多阶段构建我的第一个 Go 微服务][9] 中,我创建了一个 Go 微服务示例,它发布一个 REST 式的 http 端点,并将从 HTTP POST 中接收到的数据保存到 MongoDB 数据库。
|
||||
在我前面的博客文章 “[我的第一个 Go 微服务:使用 MongoDB 和 Docker 多阶段构建][9]” 中,我创建了一个 Go 微服务示例,它发布一个 REST 式的 http 端点,并将从 HTTP POST 中接收到的数据保存到 MongoDB 数据库。
|
||||
|
||||
在这个示例中,我将保存数据到 MongoDB 和创建另一个微服务去处理它解耦了。我还添加了 Kafka 为消息层服务,这样微服务就可以异步地处理它自己关心的东西了。
|
||||
在这个示例中,我将数据的保存和 MongoDB 分离,并创建另一个微服务去处理它。我还添加了 Kafka 为消息层服务,这样微服务就可以异步处理它自己关心的东西了。
|
||||
|
||||
> 如果你有时间去看,我将这个博客文章的整个过程录制到 [这个视频中了][1] :)
|
||||
|
||||
下面是这个使用了两个微服务的简单的异步处理示例的高级架构。
|
||||
下面是这个使用了两个微服务的简单的异步处理示例的上层架构图。
|
||||
|
||||
![rest-kafka-mongo-microservice-draw-io](https://www.melvinvivas.com/content/images/2018/04/rest-kafka-mongo-microservice-draw-io.jpg)
|
||||
|
||||
微服务 1 —— 是一个 REST 式微服务,它从一个 /POST http 调用中接收数据。接收到请求之后,它从 http 请求中检索数据,并将它保存到 Kafka。保存之后,它通过 /POST 发送相同的数据去响应调用者。
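微服务 1 的核心动作就是把收到的数据写入 Kafka。下面是一个极简的 Go 示意(假设使用 confluent-kafka-go 客户端、Kafka 运行在 localhost:9092、主题名为 `jobs`,这些都只是示例取值,并非原文代码):

```
package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// saveToKafka 将一段数据写入指定主题(仅作示意)
func saveToKafka(payload []byte) {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	topic := "jobs"
	// 异步写入;真实代码中应读取 p.Events() 确认投递结果
	if err := p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          payload,
	}, nil); err != nil {
		log.Println(err)
	}
	p.Flush(5000) // 最多等待 5 秒把缓冲区里的消息发出去
}

func main() {
	saveToKafka([]byte(`{"title": "test"}`))
}
```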
|
||||
|
||||
微服务 2 —— 是一个在 Kafka 中订阅一个主题的微服务,在这里就是微服务 1 保存的数据。一旦消息被微服务消费之后,它接着保存数据到 MongoDB 中。
|
||||
微服务 2 —— 是一个订阅了 Kafka 中的一个主题的微服务,微服务 1 的数据保存在该主题。一旦消息被微服务消费之后,它接着保存数据到 MongoDB 中。
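微服务 2 的主体则是一个消费循环。下面同样给出一个极简的 Go 示意(假设条件与上面相同,消费者组名 `mongo-writer` 为虚构值;真正落库的逻辑对应原文中的 `saveJobToMongo`):

```
package main

import (
	"fmt"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"group.id":          "mongo-writer",
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	if err := c.SubscribeTopics([]string{"jobs"}, nil); err != nil {
		log.Fatal(err)
	}

	for {
		msg, err := c.ReadMessage(-1) // 阻塞等待新消息
		if err != nil {
			log.Println(err)
			continue
		}
		fmt.Printf("consumed: %s\n", msg.Value)
		// 在这里调用类似 saveJobToMongo(string(msg.Value)) 的函数落库
	}
}
```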
|
||||
|
||||
在你继续之前,我们需要能够去运行这些微服务的几件东西:
|
||||
|
||||
1. [下载 Kafka][2] —— 我使用的版本是 kafka_2.11-1.1.0
|
||||
|
||||
2. 安装 [librdkafka][3] —— 不幸的是,这个库需要预先安装在目标系统中
|
||||
|
||||
3. 安装 [Kafka Go 客户端][4]
|
||||
|
||||
4. 运行 MongoDB。你可以去看我的 [以前的文章][5] 中关于这一块的内容,那篇文章中我使用了一个 MongoDB docker 镜像。
|
||||
|
||||
我们开始吧!
|
||||
@ -32,14 +29,12 @@
|
||||
```
|
||||
$ cd /<download path>/kafka_2.11-1.1.0
|
||||
$ bin/zookeeper-server-start.sh config/zookeeper.properties
|
||||
|
||||
```
|
||||
|
||||
接着运行 Kafka —— 我使用 9092 端口连接到 Kafka。如果你需要改变端口,只需要在 `config/server.properties` 中配置即可。如果你像我一样是个新手,我建议你现在还是使用默认端口。
|
||||
|
||||
```
|
||||
$ bin/kafka-server-start.sh config/server.properties
|
||||
|
||||
```
|
||||
|
||||
Kafka 跑起来之后,我们需要 MongoDB。它很简单,只需要使用这个 `docker-compose.yml` 即可。
|
||||
@ -61,17 +56,15 @@ volumes:
|
||||
|
||||
networks:
|
||||
network1:
|
||||
|
||||
```
|
||||
|
||||
使用 Docker Compose 去运行 MongoDB docker 容器。
|
||||
|
||||
```
|
||||
docker-compose up
|
||||
|
||||
```
|
||||
|
||||
这里是微服务 1 的相关代码。我只是修改了我前面的示例去保存到 Kafka 而不是 MongoDB。
|
||||
这里是微服务 1 的相关代码。我只是修改了我前面的示例去保存到 Kafka 而不是 MongoDB:
|
||||
|
||||
[rest-to-kafka/rest-kafka-sample.go][10]
|
||||
|
||||
@ -133,15 +126,13 @@ func saveJobToKafka(job Job) {
|
||||
}, nil)
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
这里是微服务 2 的代码。在这个代码中最重要的东西是从 Kafka 中消耗数据,保存部分我已经在前面的博客文章中讨论过了。这里代码的重点部分是从 Kafka 中消费数据。
|
||||
这里是微服务 2 的代码。在这个代码中最重要的东西是从 Kafka 中消费数据,保存部分我已经在前面的博客文章中讨论过了。这里代码的重点部分是从 Kafka 中消费数据:
|
||||
|
||||
[kafka-to-mongo/kafka-mongo-sample.go][11]
|
||||
|
||||
```
|
||||
|
||||
func main() {
|
||||
|
||||
//Create MongoDB session
|
||||
@ -206,14 +197,12 @@ func saveJobToMongo(jobString string) {
|
||||
fmt.Printf("Saved to MongoDB : %s", jobString)
|
||||
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
我们来演示一下,运行微服务 1。确保 Kafka 已经运行了。
|
||||
我们来演示一下,运行微服务 1。确保 Kafka 已经运行了。
|
||||
|
||||
```
|
||||
$ go run rest-kafka-sample.go
|
||||
|
||||
```
|
||||
|
||||
我使用 Postman 向微服务 1 发送数据。
|
||||
@ -228,7 +217,6 @@ $ go run rest-kafka-sample.go
|
||||
|
||||
```
|
||||
$ go run kafka-mongo-sample.go
|
||||
|
||||
```
|
||||
|
||||
现在,你将看到微服务 2 消费了这条数据,并将它保存到了 MongoDB 中。
|
||||
@ -239,27 +227,26 @@ $ go run kafka-mongo-sample.go
|
||||
|
||||
![Screenshot-2018-04-29-22.26.39](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.26.39.png)
|
||||
|
||||
完整的源代码可以在这里找到
|
||||
完整的源代码可以在这里找到:
|
||||
|
||||
[https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice][12]
|
||||
|
||||
现在是广告时间:如果你喜欢这篇文章,请在 Twitter [@donvito][6] 上关注我。我的 Twitter 上有关于 Docker、Kubernetes、GoLang、Cloud、DevOps、Agile 和 Startups 的内容。欢迎你们在 [GitHub][7] 和 [LinkedIn][8] 关注我。
|
||||
|
||||
[视频](https://youtu.be/xa0Yia1jdu8)
|
||||
|
||||
开心地玩吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/
|
||||
|
||||
作者:[Melvin Vivas ][a]
|
||||
作者:[Melvin Vivas][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.melvinvivas.com/author/melvin/
|
||||
[1]:https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/#video1
|
||||
[1]:https://youtu.be/xa0Yia1jdu8
|
||||
[2]:https://kafka.apache.org/downloads
|
||||
[3]:https://github.com/confluentinc/confluent-kafka-go
|
||||
[4]:https://github.com/confluentinc/confluent-kafka-go
|
@ -0,0 +1,194 @@
|
||||
Yaourt 已死!在 Arch 上使用这些替代品
|
||||
======
|
||||
|
||||
**简介:Yaourt 曾是最流行的 AUR 助手,但现已停止开发。在这篇文章中,我们会为 Arch 及其衍生发行版列出 Yaourt 最佳的替代品。**
|
||||
|
||||
[Arch User Repository][1] (常被称作 AUR),是一个为 Arch 用户而生的社区驱动软件仓库。Debian/Ubuntu 用户的对应类比是 PPA。
|
||||
|
||||
AUR 包含了不直接被 [Arch Linux][2] 官方所背书的软件。如果有人想在 Arch 上发布软件或者包,它可以通过这个社区仓库提供。这让最终用户们可以使用到比默认仓库里更多的软件。
|
||||
|
||||
所以你该如何使用 AUR 呢?简单来说,你需要另外的工具以从 AUR 中安装软件。Arch 的包管理器 [pacman][3] 不直接支持 AUR。那些支持 AUR 的“特殊工具”我们称之为 [AUR 助手][4]。
|
||||
|
||||
Yaourt (Yet AnOther User Repository Tool)(曾经)是 `pacman` 的一个封装,便于用户在 Arch Linux 上安装 AUR 软件。它基本上采用和 `pacman` 一样的语法。Yaourt 对于 AUR 的搜索、安装,乃至冲突解决和包依赖关系维护都有着良好的支持。
|
||||
|
||||
然而,Yaourt 的开发进度近来十分缓慢,甚至在 Arch Wiki 上已经被[列为][5]“停止或有问题”。[许多 Arch 用户认为它不安全][6],进而开始寻找其它的 AUR 助手。
|
||||
|
||||
![Yaourt 以外的 AUR Helpers][7]
|
||||
|
||||
在这篇文章中,我们会介绍 Yaourt 最佳的替代品以便于你从 AUR 安装软件。
|
||||
|
||||
### 最好的 AUR 助手
|
||||
|
||||
我刻意忽略掉了例如 Trizen 和 Packer 这样的流行的选择,因为它们也被列为“停止或有问题”的了。
|
||||
|
||||
#### 1、 aurman
|
||||
|
||||
[aurman][8] 是最好的 AUR 助手之一,也能胜任 Yaourt 替代品的地位。它有非常类似于 `pacman` 的语法,可以支持所有的 `pacman` 操作。你可以搜索 AUR、解决包依赖,在构建 AUR 包前检查 PKGBUILD 的内容等等。
|
||||
|
||||
aurman 的特性:
|
||||
|
||||
* aurman 支持所有 `pacman` 操作,并且引入了可靠的包依赖解决方案、冲突判定和<ruby>分包<rt>split package</rt></ruby>支持
|
||||
* 线程化的 sudo 循环会在后台运行,所以你每次安装只需要输入一次管理员密码
|
||||
* 提供开发包支持,并且可以区分显性安装和隐性安装的包
|
||||
* 支持搜索 AUR 包和仓库
|
||||
* 在构建 AUR 包之前,你可以检视并编辑 PKGBUILD 的内容
|
||||
* 可以用作单独的 [包依赖解决工具][9]
|
||||
|
||||
安装 aurman:
|
||||
|
||||
```
|
||||
git clone https://aur.archlinux.org/aurman.git
|
||||
cd aurman
|
||||
makepkg -si
|
||||
```
|
||||
|
||||
使用 aurman:
|
||||
|
||||
用名字搜索:
|
||||
|
||||
```
|
||||
aurman -Ss <package-name>
|
||||
```
|
||||
|
||||
安装:
|
||||
|
||||
```
|
||||
aurman -S <package-name>
|
||||
```
|
||||
|
||||
#### 2、 yay
|
||||
|
||||
[yay][10] 是下一个最好的 AUR 助手。它使用 Go 语言写成,宗旨是提供最少化用户输入的 `pacman` 界面、yaourt 式的搜索,而几乎没有任何依赖软件。
|
||||
|
||||
yay 的特性:
|
||||
|
||||
* `yay` 提供 AUR 的 Tab 补全,并且从 ABS 或 AUR 下载 PKGBUILD
|
||||
* 支持收窄搜索,并且不需要引用 PKGBUILD 源
|
||||
* `yay` 的二进制文件除了 `pacman` 以外别无依赖
|
||||
* 提供先进的包依赖解决方案,以及在编译安装之后移除编译时的依赖
|
||||
* 当在 `/etc/pacman.conf` 文件配置中启用了色彩时支持色彩输出
|
||||
* `yay` 可被配置成只支持 AUR 或者 repo 里的软件包
|
||||
|
||||
安装 yay:
|
||||
|
||||
你可以从 `git` 克隆并编译安装。
|
||||
|
||||
```
|
||||
git clone https://aur.archlinux.org/yay.git
|
||||
cd yay
|
||||
makepkg -si
|
||||
```
|
||||
|
||||
使用 yay:
|
||||
|
||||
搜索:
|
||||
|
||||
```
|
||||
yay -Ss <package-name>
|
||||
```
|
||||
|
||||
安装:
|
||||
|
||||
```
|
||||
yay -S <package-name>
|
||||
```
|
||||
|
||||
#### 3、 pakku
|
||||
|
||||
[Pakku][11] 是另一个 pacman 封装。虽然它还处于开发早期,但这并不意味着它逊于其它任何 AUR 助手。Pakku 能很好地支持从 AUR 搜索和安装,并且也可以在安装后移除不必要的编译依赖。
|
||||
|
||||
pakku 的特性:
|
||||
|
||||
* 从 AUR 搜索和安装软件
|
||||
* 检视不同构建之间的文件和变化
|
||||
* 从官方仓库编译,并事后移除编译依赖
|
||||
* 获取 PKGBUILD 以及 pacman 整合
|
||||
* 类 pacman 的用户界面和选项支持
|
||||
* 支持 pacman 配置文件,并且无需引用 PKGBUILD 源
|
||||
|
||||
安装 pakku:
|
||||
|
||||
```
|
||||
git clone https://aur.archlinux.org/pakku.git
|
||||
cd pakku
|
||||
makepkg -si
|
||||
```
|
||||
|
||||
使用 pakku:
|
||||
|
||||
搜索:
|
||||
|
||||
```
|
||||
pakku -Ss spotify
|
||||
```
|
||||
|
||||
安装:
|
||||
|
||||
```
|
||||
pakku -S spotify
|
||||
```
|
||||
|
||||
#### 4、 aurutils
|
||||
|
||||
[aurutils][12] 本质上是一堆使用 AUR 的自动化脚本的集合。它可以搜索 AUR、检查更新,并且解决包依赖。
|
||||
|
||||
aurutils 的特性:
|
||||
|
||||
* aurutils 使用本地仓库以支持 pacman 文件,所有的包都支持 `--asdeps`
|
||||
* 不同的任务可以有多个仓库
|
||||
* `aursync -u` 一键同步本地代码库
|
||||
* `aursearch` 搜索提供 pkgbase、长格式和 raw 支持
|
||||
* 能忽略指定包
|
||||
|
||||
安装 aurutils:
|
||||
|
||||
```
|
||||
git clone https://aur.archlinux.org/aurutils.git
|
||||
cd aurutils
|
||||
makepkg -si
|
||||
```
|
||||
|
||||
使用 aurutils:
|
||||
|
||||
搜索:
|
||||
|
||||
```
|
||||
aurutils -Ss <package-name>
|
||||
```
|
||||
|
||||
安装:
|
||||
|
||||
```
|
||||
aurutils -S <package-name>
|
||||
```
|
||||
|
||||
所有这些包,在有 Yaourt 或者其它 AUR 助手的情况下都可以直接安装。
|
||||
|
||||
### 写在最后
|
||||
|
||||
Arch Linux 有着[很多 AUR 助手][4] 可以自动完成 AUR 各方面的日常任务。很多用户依然使用 Yaourt 来完成 AUR 相关任务,每个人都有自己不一样的偏好,欢迎留言告诉我们你在 Arch 里使用什么,又有什么心得?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/best-aur-helpers/
|
||||
|
||||
作者:[Ambarish Kumar][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[Moelf](https://github.com/Moelf)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ambarish/
|
||||
[1]:https://wiki.archlinux.org/index.php/Arch_User_Repository
|
||||
[2]:https://www.archlinux.org/
|
||||
[3]:https://wiki.archlinux.org/index.php/pacman
|
||||
[4]:https://wiki.archlinux.org/index.php/AUR_helpers
|
||||
[5]:https://wiki.archlinux.org/index.php/AUR_helpers#Comparison_table
|
||||
[6]:https://www.reddit.com/r/archlinux/comments/4azqyb/whats_so_bad_with_yaourt/
|
||||
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/no-yaourt-arch-800x450.jpeg
|
||||
[8]:https://github.com/polygamma/aurman
|
||||
[9]:https://github.com/polygamma/aurman/wiki/Using-aurman-as-dependency-solver
|
||||
[10]:https://github.com/Jguer/yay
|
||||
[11]:https://github.com/kitsunyan/pakku
|
||||
[12]:https://github.com/AladW/aurutils
|
published/20180806 What is CI-CD.md
@ -0,0 +1,209 @@
|
||||
什么是 CI/CD?
|
||||
======
|
||||
|
||||
在软件开发中经常会提到<ruby>持续集成<rt>Continuous Integration</rt></ruby>(CI)和<ruby>持续交付<rt>Continuous Delivery</rt></ruby>(CD)这几个术语。但它们真正的意思是什么呢?
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh)
|
||||
|
||||
在谈论软件开发时,经常会提到<ruby>持续集成<rt>Continuous Integration</rt></ruby>(CI)和<ruby>持续交付<rt>Continuous Delivery</rt></ruby>(CD)这几个术语。但它们真正的意思是什么呢?在本文中,我将解释这些和相关术语背后的含义和意义,例如<ruby>持续测试<rt>Continuous Testing</rt></ruby>和<ruby>持续部署<rt>Continuous Deployment</rt></ruby>。
|
||||
|
||||
### 概览
|
||||
|
||||
工厂里的装配线以快速、自动化、可重复的方式从原材料生产出消费品。同样,软件交付管道以快速、自动化和可重复的方式从源代码生成发布版本。如何完成这项工作的总体设计称为“持续交付”(CD)。启动装配线的过程称为“持续集成”(CI)。确保质量的过程称为“持续测试”,将最终产品提供给用户的过程称为“持续部署”。一些专家让这一切简单、顺畅、高效地运行,这些人被称为<ruby>运维开发<rt>DevOps</rt></ruby>践行者。
|
||||
|
||||
### “持续”是什么意思?
|
||||
|
||||
“持续”一词用于描述我在此提到的许多不同的流程实践。这并不意味着“一直在运行”,而是“随时可运行”。在软件开发领域,它还包括几个核心概念/最佳实践。这些是:
|
||||
|
||||
* **频繁发布**:持续实践背后的目标是能够频繁地交付高质量的软件。此处的交付频率是可变的,可由开发团队或公司定义。对于某些产品,一季度、一个月、一周或一天交付一次可能已经足够频繁了。对于另一些来说,一天可能需要多次交付也是可行的。所谓持续也有“偶尔、按需”的方面。最终目标是相同的:在可重复、可靠的过程中为最终用户提供高质量的软件更新。通常,这可以通过很少甚至无需用户的交互或掌握的知识来完成(想想设备更新)。
|
||||
* **自动化流程**:实现此频率的关键是用自动化流程来处理软件生产中的方方面面。这包括构建、测试、分析、版本控制,以及在某些情况下的部署。
|
||||
* **可重复**:如果我们使用的自动化流程在给定相同输入的情况下始终具有相同的行为,则这个过程应该是可重复的。也就是说,如果我们把某个历史版本的代码作为输入,我们应该得到对应相同的可交付产出。这也假设我们有相同版本的外部依赖项(即我们不创建该版本代码使用的其它交付物)。理想情况下,这也意味着可以对管道中的流程进行版本控制和重建(请参阅稍后的 DevOps 讨论)。
|
||||
* **快速迭代**:“快速”在这里是个相对术语,但无论软件更新/发布的频率如何,预期的持续过程都会以高效的方式将源代码转换为交付物。自动化负责大部分工作,但自动化处理的过程可能仍然很慢。例如,对于每天需要多次发布候选版更新的产品来说,一轮<ruby>集成测试<rt>integrated testing</rt></ruby>下来耗时就要大半天可能就太慢了。
|
||||
|
||||
### 什么是“持续交付管道”?
|
||||
|
||||
将源代码转换为可发布产品的多个不同的<ruby>任务<rt>task</rt></ruby>和<ruby>作业<rt>job</rt></ruby>通常串联成一个软件“管道”,一个自动流程成功完成后会启动管道中的下一个流程。这些管道有许多不同的叫法,例如持续交付管道、部署管道和软件开发管道。大体上讲,程序管理者在管道执行时管理管道各部分的定义、运行、监控和报告。
|
||||
|
||||
### 持续交付管道是如何工作的?
|
||||
|
||||
软件交付管道的实际实现可以有很大不同。有许多程序可用在管道中,用于源代码跟踪、构建、测试、指标采集,版本管理等各个方面。但整体工作流程通常是相同的。单个业务流程/工作流应用程序管理整个管道,每个流程作为独立的作业运行或由该应用程序进行阶段管理。通常,在业务流程中,这些独立作业是以应用程序可理解并可作为工作流程管理的语法和结构定义的。
|
||||
|
||||
这些作业被用于一个或多个功能(构建、测试、部署等)。每个作业可能使用不同的技术或多种技术。关键是作业是自动化的、高效的,并且可重复的。如果作业成功,则工作流管理器将触发管道中的下一个作业。如果作业失败,工作流管理器会向开发人员、测试人员和其他人发出警报,以便他们尽快纠正问题。这个过程是自动化的,所以比手动运行一组过程可更快地找到错误。这种快速排错称为<ruby>快速失败<rt>fail fast</rt></ruby>,并且在抵达管道端点方面同样有价值。
|
||||
|
||||
### “快速失败”是什么意思?
|
||||
|
||||
管道的工作之一就是快速处理变更。另一个是监视创建发布的不同任务/作业。由于编译失败或测试未通过的代码可以阻止管道继续运行,因此快速通知用户此类情况非常重要。快速失败指的是在管道流程中尽快发现问题并快速通知用户的方式,这样可以及时修正问题并重新提交代码以便使管道再次运行。通常在管道流程中可通过查看历史记录来确定是谁做了那次修改并通知此人及其团队。
|
||||
|
||||
### 所有持续交付管道都应该被自动化吗?
|
||||
|
||||
管道的几乎所有部分都是应该自动化的。对于某些部分,有一些人为干预/互动的地方可能是有意义的。一个例子可能是<ruby>用户验收测试<rt>user-acceptance testing</rt></ruby>(让最终用户试用软件并确保它能达到他们想要/期望的水平)。另一种情况可能是部署到生产环境时用户希望拥有更多的人为控制。当然,如果代码不正确或不能运行,则需要人工干预。
|
||||
|
||||
有了对“持续”含义理解的背景,让我们看看不同类型的持续流程以及它们在软件管道上下文中的含义。
|
||||
|
||||
### 什么是“持续集成”?
|
||||
|
||||
持续集成(CI)是在源代码变更后自动检测、拉取、构建和(在大多数情况下)进行单元测试的过程。持续集成是启动管道的环节(尽管某些预验证 —— 通常称为<ruby>上线前检查<rt>pre-flight checks</rt></ruby> —— 有时会被归在持续集成之前)。
|
||||
|
||||
持续集成的目标是快速确保开发人员新提交的变更是好的,并且适合在代码库中进一步使用。
|
||||
|
||||
### 持续集成是如何工作的?
|
||||
|
||||
持续集成的基本思想是让一个自动化过程监测一个或多个源代码仓库是否有变更。当变更被推送到仓库时,它会监测到更改、下载副本、构建并运行任何相关的单元测试。
|
||||
|
||||
### 持续集成如何监测变更?
|
||||
|
||||
目前,监测程序通常是像 [Jenkins][1] 这样的应用程序,它还协调管道中运行的所有(或大多数)进程,监视变更是其功能之一。监测程序可以以几种不同方式监测变更。这些包括:
|
||||
|
||||
* **轮询**:监测程序反复询问代码管理系统,“代码仓库里有什么我感兴趣的新东西吗?”当代码管理系统有新的变更时,监测程序会“唤醒”并完成其工作以获取新代码并构建/测试它。
|
||||
* **定期**:监测程序配置为定期启动构建,无论源码是否有变更。理想情况下,如果没有变更,则不会构建任何新内容,因此这不会增加额外的成本。
|
||||
* **推送**:这与用于代码管理系统检查的监测程序相反。在这种情况下,代码管理系统被配置为提交变更到仓库时将“推送”一个通知到监测程序。最常见的是,这可以以 webhook 的形式完成 —— 在新代码被推送时一个<ruby>挂勾<rt>hook</rt></ruby>的程序通过互联网向监测程序发送通知。为此,监测程序必须具有可以通过网络接收 webhook 信息的开放端口。
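以推送方式为例,接收 webhook 的监测程序本质上只是一个开放了端口的 HTTP 服务。下面是一个用 Go 写的极简示意(端口、路径均为假设值;收到通知后这里只是打印日志,真实系统会在此处触发一次构建):

```
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		defer r.Body.Close()
		// 真实的监测程序会解析载荷并触发流水线,这里仅记录日志
		log.Printf("received push notification, %d bytes", len(body))
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```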
|
||||
|
||||
### 什么是“预检查”(又称“上线前检查”)?
|
||||
|
||||
在将代码引入仓库并触发持续集成之前,可以进行其它验证。这遵循了最佳实践,例如<ruby>测试构建<rt>test build</rt></ruby>和<ruby>代码审查<rt>code review</rt></ruby>。它们通常在代码引入管道之前构建到开发过程中。但是一些管道也可能将它们作为其监控流程或工作流的一部分。
|
||||
|
||||
例如,一个名为 [Gerrit][2] 的工具允许在开发人员推送代码之后但在允许进入([Git][3] 远程)仓库之前进行正式的代码审查、验证和测试构建。Gerrit 位于开发人员的工作区和 Git 远程仓库之间。它会“接收”来自开发人员的推送,并且可以执行通过/失败验证以确保它们在被允许进入仓库之前的检查是通过的。这可以包括检测新变更并启动构建测试(CI 的一种形式)。它还允许开发者在那时进行正式的代码审查。这种方式有一种额外的可信度评估机制,即当变更的代码被合并到代码库中时不会破坏任何内容。
|
||||
|
||||
### 什么是“单元测试”?
|
||||
|
||||
单元测试(也称为“提交测试”),是由开发人员编写的小型的专项测试,以确保新代码独立工作。“独立”这里意味着不依赖或调用其它不可直接访问的代码,也不依赖外部数据源或其它模块。如果运行代码需要这样的依赖关系,那么这些资源可以用<ruby>模拟<rt>mock</rt></ruby>来表示。模拟是指使用看起来像资源的<ruby>代码存根<rt>code stub</rt></ruby>,可以返回值,但不实现任何功能。
|
||||
|
||||
在大多数组织中,开发人员负责创建单元测试以证明其代码正确。事实上,一种称为<ruby>测试驱动开发<rt>test-driven development</rt></ruby>(TDD)的模型要求首先设计单元测试,作为清楚地验证代码功能的基础。因为这样的代码更改速度快且改动量大,所以单元测试也必须执行得很快。
|
||||
|
||||
由于这与持续集成工作流有关,因此开发人员在本地工作环境中编写或更新代码,并通过单元测试来确保新开发的功能或方法正确。通常,这些测试采用断言形式,即函数或方法的给定输入集产生给定的输出集;它们通常还会确保出错条件被正确地标记和处理。有很多单元测试框架都很有用,例如用于 Java 开发的 [JUnit][4]。
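为了更直观一些,下面用 Go 内置的 `testing` 包给出一个同样思路的断言式单元测试示意(函数与数值都是虚构的例子,保存为 `sum_test.go` 后用 `go test` 运行):

```
package job

import "testing"

// Sum 是被测函数:给定输入返回确定的输出
func Sum(a, b int) int { return a + b }

// 单元测试以断言的形式表达“给定输入应产生给定输出”
func TestSum(t *testing.T) {
	if got := Sum(2, 3); got != 5 {
		t.Errorf("Sum(2, 3) = %d, want 5", got)
	}
}
```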
|
||||
|
||||
### 什么是“持续测试”?
|
||||
|
||||
持续测试是指在代码通过持续交付管道时运行扩展范围的自动化测试的实践。单元测试通常与构建过程集成,作为持续集成阶段的一部分,并专注于和其它与之交互的代码隔离的测试。
|
||||
|
||||
除此之外,可以有或者应该有各种形式的测试。这些可包括:
|
||||
|
||||
* **集成测试** 验证组件和服务组合在一起是否正常。
|
||||
* **功能测试** 验证产品中执行功能的结果是否符合预期。
|
||||
* **验收测试** 根据可接受的标准验证产品的某些特征。如性能、可伸缩性、抗压能力和容量。
|
||||
|
||||
所有这些可能不存在于自动化的管道中,并且一些不同类型的测试分类界限也不是很清晰。但是,在交付管道中持续测试的目标始终是相同的:通过持续的测试级别证明代码的质量可以在正在进行的发布中使用。在持续集成快速的原则基础上,第二个目标是快速发现问题并提醒开发团队。这通常被称为快速失败。
|
||||
|
||||
### 除了测试之外,还可以对管道中的代码进行哪些其它类型的验证?
|
||||
|
||||
除了测试是否通过之外,还有一些应用程序可以告诉我们测试用例执行(覆盖)的源代码行数。这是一个可以衡量代码量指标的例子。这个指标称为<ruby>代码覆盖率<rt>code-coverage</rt></ruby>,可以通过工具(例如用于 Java 的 [JaCoCo][5])进行统计。
|
||||
|
||||
还有很多其它类型的指标统计,例如代码行数、复杂度以及代码结构对比分析等。诸如 [SonarQube][6] 之类的工具可以检查源代码并计算这些指标。此外,用户还可以为他们可接受的“合格”范围的指标设置阈值。然后可以在管道中针对这些阈值设置一个检查,如果结果不在可接受范围内,则流程会终止。SonarQube 等应用程序具有很高的可配置性,可以设置仅检查团队感兴趣的内容。
|
||||
|
||||
### 什么是“持续交付”?
|
||||
|
||||
持续交付(CD)通常是指整个流程链(管道),它自动监测源代码变更并通过构建、测试、打包和相关操作运行它们以生成可部署的版本,基本上没有任何人为干预。
|
||||
|
||||
持续交付在软件开发过程中的目标是自动化、效率、可靠性、可重复性和质量保障(通过持续测试)。
|
||||
|
||||
持续交付包含持续集成(自动检测源代码变更、执行构建过程、运行单元测试以验证变更),持续测试(对代码运行各种测试以保障代码质量),和(可选)持续部署(通过管道发布版本自动提供给用户)。
|
||||
|
||||
### 如何在管道中识别/跟踪多个版本?
|
||||
|
||||
版本控制是持续交付和管道的关键概念。持续意味着能够经常集成新代码并提供更新版本。但这并不意味着每个人都想要“最新、最好的”。对于想要开发或测试已知的稳定版本的内部团队来说尤其如此。因此,管道能够创建这些版本化对象,并且可以轻松地存储和访问它们,这一点非常重要。
|
||||
|
||||
在管道中从源代码创建的对象通常可以称为<ruby>工件<rt>artifact</rt></ruby>。工件在构建时应该有应用于它们的版本。将版本号分配给工件的推荐策略称为<ruby>语义化版本控制<rt>semantic versioning</rt></ruby>。(这也适用于从外部源引入的依赖工件的版本。)
|
||||
|
||||
语义版本号有三个部分:<ruby>主要版本<rt>major</rt></ruby>、<ruby>次要版本<rt>minor</rt></ruby> 和 <ruby>补丁版本<rt>patch</rt></ruby>。(例如,1.4.3 反映了主要版本 1,次要版本 4 和补丁版本 3。)这个想法是,其中一个部分的更改表示工件中的更新级别。主要版本仅针对不兼容的 API 更改而递增。当以<ruby>向后兼容<rt>backward-compatible</rt></ruby>的方式添加功能时,次要版本会增加。当进行向后兼容的版本 bug 修复时,补丁版本会增加。这些是建议的指导方针,但只要团队在整个组织内以一致且易于理解的方式这样做,团队就可以自由地改变这种方法。例如,每次为发布完成构建时增加的数字可以放在补丁字段中。
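下面用一小段 Go 代码示意这一递增规则(纯属演示,不对应任何具体工具):

```
package main

import "fmt"

// Version 表示一个语义化版本号:主要.次要.补丁
type Version struct{ Major, Minor, Patch int }

func (v Version) String() string { return fmt.Sprintf("%d.%d.%d", v.Major, v.Minor, v.Patch) }

// Bump 根据变更类型递增相应字段:不兼容的 API 变更递增主要版本,
// 向后兼容的新功能递增次要版本,向后兼容的修复递增补丁版本
func (v Version) Bump(change string) Version {
	switch change {
	case "breaking":
		return Version{v.Major + 1, 0, 0}
	case "feature":
		return Version{v.Major, v.Minor + 1, 0}
	default: // "fix"
		return Version{v.Major, v.Minor, v.Patch + 1}
	}
}

func main() {
	v := Version{1, 4, 3}
	fmt.Println(v.Bump("fix"))      // 1.4.4
	fmt.Println(v.Bump("feature"))  // 1.5.0
	fmt.Println(v.Bump("breaking")) // 2.0.0
}
```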
|
||||
|
||||
### 如何“分销”工件?
|
||||
|
||||
团队可以为工件分配<ruby>分销<rt>promotion</rt></ruby>级别以指示适用于测试、生产等环境或用途。有很多方法。可以用 Jenkins 或 [Artifactory][7] 等应用程序进行分销。或者一个简单的方案可以在版本号字符串的末尾添加标签。例如,`-snapshot` 可以指示用于构建工件的代码的最新版本(快照)。可以使用各种分销策略或工具将工件“提升”到其它级别,例如 `-milestone` 或 `-production`,作为工件稳定性和完备性版本的标记。
|
||||
|
||||
### 如何存储和访问多个工件版本?
|
||||
|
||||
从源代码构建的版本化工件可以通过管理<ruby>工件仓库<rt>artifact repository</rt></ruby>的应用程序进行存储。工件仓库就像构建工件的版本控制工具一样。像 Artifactory 或 [Nexus][8] 这类应用可以接受版本化工件,存储和跟踪它们,并提供检索的方法。
|
||||
|
||||
管道用户可以指定他们想要使用的版本,并在这些版本中使用管道。
|
||||
|
||||
### 什么是“持续部署”?
|
||||
|
||||
持续部署(CD)是指能够自动提供持续交付管道中发布版本给最终用户使用的想法。根据用户的安装方式,可能是在云环境中自动部署、app 升级(如手机上的应用程序)、更新网站或只更新可用版本列表。
|
||||
|
||||
这里的一个重点是,仅仅因为可以进行持续部署并不意味着始终部署来自管道的每组可交付成果。它实际上指,通过管道每套可交付成果都被证明是“可部署的”。这在很大程度上是由持续测试的连续级别完成的(参见本文中的持续测试部分)。
|
||||
|
||||
管道构建的发布成果是否被部署可以通过人工决策,或利用在完全部署之前“试用”发布的各种方法来进行控制。
|
||||
|
||||
### 在完全部署到所有用户之前,有哪些方法可以测试部署?
|
||||
|
||||
由于必须回滚/撤消对所有用户的部署可能是一种代价高昂的情况(无论是技术上还是用户的感知),已经有许多技术允许“尝试”部署新功能并在发现问题时轻松“撤消”它们。这些包括:
|
||||
|
||||
#### 蓝/绿测试/部署
|
||||
|
||||
在这种部署软件的方法中,维护了两个相同的主机环境 —— 一个“蓝色”和一个“绿色”。(颜色并不重要,仅作为标识。)在任一时刻,其中一个是“生产环境”,另一个是“预发布环境”。
|
||||
|
||||
在这些实例的前面是调度系统,它们充当产品或应用程序的客户“网关”。通过将调度系统指向蓝色或绿色实例,可以将客户流量引流到期望的部署环境。通过这种方式,切换指向哪个部署实例(蓝色或绿色)对用户来说是快速,简单和透明的。
|
||||
|
||||
当新版本准备好进行测试时,可以将其部署到非生产环境中。在经过测试和批准后,可以更改调度系统设置以将传入的线上流量指向它(因此它将成为新的生产站点)。现在,原来的生产环境实例就可以供下一个候选发布版本使用了。
|
||||
|
||||
同理,如果在最新部署中发现问题并且之前的生产实例仍然可用,则简单的更改可以将客户流量引流回到之前的生产实例 —— 有效地将问题实例“下线”并且回滚到以前的版本。然后有问题的新实例可以在其它区域中修复。
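作为示意,下面的 Go 代码用一个极简的反向代理来扮演上文中的“网关”角色:一个开关决定线上流量被转发到蓝色还是绿色实例(两个地址和切换接口都只是假设的演示值):

```
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	blue, _ := url.Parse("http://blue.internal:8080")   // 当前生产实例(地址为假设值)
	green, _ := url.Parse("http://green.internal:8080") // 预发布实例(地址为假设值)

	var active atomic.Value
	active.Store(blue)

	// /switch 把“网关”指向另一套环境,对用户透明
	http.HandleFunc("/switch", func(w http.ResponseWriter, r *http.Request) {
		if active.Load() == blue {
			active.Store(green)
		} else {
			active.Store(blue)
		}
	})

	// 其余流量都被转发到当前处于生产状态的实例
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		target := active.Load().(*url.URL)
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":80", nil))
}
```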
|
||||
|
||||
#### 金丝雀测试/部署
|
||||
|
||||
在某些情况下,通过蓝/绿发布切换整个部署可能不可行或不是期望的那样。另一种方法是进行<ruby>金丝雀<rt>canary</rt></ruby>测试/部署。在这种模型中,一部分客户流量被重新引流到新的版本部署中。例如,新版本的搜索服务可以与当前服务的生产版本一起部署。然后,可以将 10% 的搜索查询引流到新版本,以在生产环境中对其进行测试。
|
||||
|
||||
如果服务那些流量的新版本没问题,那么可能会有更多的流量会被逐渐引流过去。如果仍然没有问题出现,那么随着时间的推移,可以对新版本增量部署,直到 100% 的流量都调度到新版本。这有效地“更替”了以前版本的服务,并让新版本对所有客户生效。
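引流比例的判断逻辑可以非常简单。下面是一个假设性的 Go 写法:对用户 ID 做一次稳定的哈希,再与金丝雀流量百分比比较,这样同一个用户总是落在同一侧:

```
package main

import (
	"fmt"
	"hash/fnv"
)

// routeToCanary 决定一个请求(按用户 ID)是否进入新版本:
// 相同用户总是得到相同结果,便于逐步把一定比例的流量引向金丝雀版本
func routeToCanary(userID string, canaryPercent uint32) bool {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return h.Sum32()%100 < canaryPercent
}

func main() {
	canary, stable := 0, 0
	for i := 0; i < 1000; i++ {
		if routeToCanary(fmt.Sprintf("user-%d", i), 10) {
			canary++
		} else {
			stable++
		}
	}
	fmt.Printf("canary: %d, stable: %d\n", canary, stable)
}
```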
|
||||
|
||||
#### 功能开关
|
||||
|
||||
对于可能需要轻松关掉的新功能(如果发现问题),开发人员可以添加<ruby>功能开关<rt>feature toggles</rt></ruby>。这是代码中的 `if-then` 软件功能开关,仅在设置数据值时才激活新代码。此数据值可以是全局可访问的位置,部署的应用程序将检查该位置是否应执行新代码。如果设置了数据值,则执行代码;如果没有,则不执行。
|
||||
|
||||
这为开发人员提供了一个远程“终止开关”,以便在部署到生产环境后发现问题时关闭新功能。
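一个最简单的功能开关在代码里大致是下面这样(这里用环境变量示意那个“全局可访问的位置”,实际中它也可以放在配置服务或数据库里):

```
package main

import (
	"fmt"
	"os"
)

// featureEnabled 检查一个全局可访问的开关值,决定是否执行新代码路径
func featureEnabled(name string) bool {
	return os.Getenv("FEATURE_"+name) == "on"
}

func main() {
	if featureEnabled("NEW_SEARCH") {
		fmt.Println("执行新代码路径")
	} else {
		fmt.Println("执行旧代码路径") // 发现问题时,运维只需把开关关掉
	}
}
```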
|
||||
|
||||
#### 暗箱发布
|
||||
|
||||
在<ruby>暗箱发布<rt>dark launch</rt></ruby>中,代码被逐步测试/部署到生产环境中,但是用户不会看到更改(因此名称中有<ruby>暗箱<rt>dark</rt></ruby>一词)。例如,在生产版本中,网页查询的某些部分可能会重定向到查询新数据源的服务。开发人员可收集此信息进行分析,而不会将有关接口、事务或结果的任何信息暴露给用户。
|
||||
|
||||
这个想法是想获取候选版本在生产环境负载下如何执行的真实信息,而不会影响用户或改变他们的经验。随着时间的推移,可以调度更多负载,直到遇到问题或认为新功能已准备好供所有人使用。实际上功能开关标志可用于这种暗箱发布机制。
|
||||
|
||||
### 什么是“运维开发”?
|
||||
|
||||
<ruby>[运维开发][9]<rt>DevOps</rt></ruby> 是关于如何使开发和运维团队更容易合作开发和发布软件的一系列想法和推荐的实践。从历史上看,开发团队研发了产品,但没有像客户那样以常规、可重复的方式安装/部署它们。在整个周期中,这组安装/部署任务(以及其它支持任务)留给运维团队负责。这经常导致很多混乱和问题,因为运维团队在后期才开始介入,并且必须在短时间内完成他们的工作。同样,开发团队经常处于不利地位 —— 因为他们没有充分测试产品的安装/部署功能,他们可能会对该过程中出现的问题感到惊讶。
|
||||
|
||||
这往往导致开发和运维团队之间严重脱节和缺乏合作。DevOps 理念主张是贯穿整个开发周期的开发和运维综合协作的工作方式,就像持续交付那样。
|
||||
|
||||
### 持续交付如何与运维开发相交?
|
||||
|
||||
持续交付管道是几个 DevOps 理念的实现。产品开发的后期阶段(如打包和部署)始终可以在管道的每次运行中完成,而不是等待产品开发周期中的特定时间。同样,从开发到部署过程中,开发和运维都可以清楚地看到事情何时起作用,何时不起作用。要使持续交付管道循环成功,不仅要通过与开发相关的流程,还要通过与运维相关的流程。
|
||||
|
||||
说得更远一些,DevOps 建议实现管道的基础架构也会被视为代码。也就是说,它应该自动配置、可跟踪、易于修改,并在管道发生变化时触发新一轮运行。这可以通过将管道实现为代码来完成。
|
||||
|
||||
### 什么是“管道即代码”?
|
||||
|
||||
<ruby>管道即代码<rt>pipeline-as-code</rt></ruby>是通过编写代码创建管道作业/任务的通用术语,就像开发人员编写代码一样。它的目标是将管道实现表示为代码,以便它可以与代码一起存储、评审、跟踪,如果出现问题并且必须终止管道,则可以轻松地重建。有几个工具允许这样做,如 [Jenkins 2][1]。
|
||||
|
||||
### DevOps 如何影响生产软件的基础设施?
|
||||
|
||||
传统意义上,管道中使用的各个硬件系统都有配套的软件(操作系统、应用程序、开发工具等)。在极端情况下,每个系统都是手工设置来定制的。这意味着当系统出现问题或需要更新时,这通常也是一项自定义任务。这种方法违背了持续交付的基本理念,即具有易于重现和可跟踪的环境。
|
||||
|
||||
多年来,很多应用被开发用于标准化交付(安装和配置)系统。同样,<ruby>虚拟机<rt>virtual machine</rt></ruby>被开发为模拟在其它计算机之上运行的计算机程序。这些 VM 要有管理程序才能在底层主机系统上运行,并且它们需要自己的操作系统副本才能运行。
|
||||
|
||||
后来有了<ruby>容器<rt>container</rt></ruby>。容器虽然在概念上与 VM 类似,但工作方式不同。它们只需使用一些现有的操作系统结构来划分隔离空间,而不需要运行单独的程序和操作系统的副本。因此,它们的行为类似于 VM 以提供隔离但不需要过多的开销。
|
||||
|
||||
VM 和容器是根据配置定义创建的,因此可以轻易地销毁和重建,而不会影响运行它们的主机系统。这允许运行管道的系统也可重建。此外,对于容器,我们可以跟踪其构建定义文件的更改 —— 就像对源代码一样。
|
||||
|
||||
因此,如果遇到 VM 或容器中的问题,我们可以更容易、更快速地销毁和重建它们,而不是在当前环境尝试调试和修复。
|
||||
|
||||
这也意味着对管道代码的任何更改都可以触发管道新一轮运行(通过 CI),就像对代码的更改一样。这是 DevOps 关于基础架构的核心理念之一。
|
||||
|
||||
---
|
||||
|
||||
via: https://opensource.com/article/18/8/what-cicd
|
||||
|
||||
作者:[Brent Laster][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[pityonline](https://github.com/pityonline)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/bclaster
|
||||
[1]: https://jenkins.io
|
||||
[2]: https://www.gerritcodereview.com
|
||||
[3]: https://opensource.com/resources/what-is-git
|
||||
[4]: https://junit.org/junit5/
|
||||
[5]: https://www.eclemma.org/jacoco/
|
||||
[6]: https://www.sonarqube.org/
|
||||
[7]: https://jfrog.com/artifactory/
|
||||
[8]: https://www.sonatype.com/nexus-repository-sonatype
|
||||
[9]: https://opensource.com/resources/devops
|
@ -1,111 +0,0 @@
|
||||
My Lisp Experiences and the Development of GNU Emacs
|
||||
======
|
||||
|
||||
> (Transcript of Richard Stallman's Speech, 28 Oct 2002, at the International Lisp Conference).
|
||||
|
||||
Since none of my usual speeches have anything to do with Lisp, none of them were appropriate for today. So I'm going to have to wing it. Since I've done enough things in my career connected with Lisp I should be able to say something interesting.
|
||||
|
||||
My first experience with Lisp was when I read the Lisp 1.5 manual in high school. That's when I had my mind blown by the idea that there could be a computer language like that. The first time I had a chance to do anything with Lisp was when I was a freshman at Harvard and I wrote a Lisp interpreter for the PDP-11. It was a very small machine — it had something like 8k of memory — and I managed to write the interpreter in a thousand instructions. This gave me some room for a little bit of data. That was before I got to see what real software was like, that did real system jobs.
|
||||
|
||||
I began doing work on a real Lisp implementation with JonL White once I started working at MIT. I got hired at the Artificial Intelligence Lab not by JonL, but by Russ Noftsker, which was most ironic considering what was to come — he must have really regretted that day.
|
||||
|
||||
During the 1970s, before my life became politicized by horrible events, I was just going along making one extension after another for various programs, and most of them did not have anything to do with Lisp. But, along the way, I wrote a text editor, Emacs. The interesting idea about Emacs was that it had a programming language, and the user's editing commands would be written in that interpreted programming language, so that you could load new commands into your editor while you were editing. You could edit the programs you were using and then go on editing with them. So, we had a system that was useful for things other than programming, and yet you could program it while you were using it. I don't know if it was the first one of those, but it certainly was the first editor like that.
|
||||
|
||||
This spirit of building up gigantic, complicated programs to use in your own editing, and then exchanging them with other people, fueled the spirit of free-wheeling cooperation that we had at the AI Lab then. The idea was that you could give a copy of any program you had to someone who wanted a copy of it. We shared programs to whomever wanted to use them, they were human knowledge. So even though there was no organized political thought relating the way we shared software to the design of Emacs, I'm convinced that there was a connection between them, an unconscious connection perhaps. I think that it's the nature of the way we lived at the AI Lab that led to Emacs and made it what it was.
|
||||
|
||||
The original Emacs did not have Lisp in it. The lower level language, the non-interpreted language — was PDP-10 Assembler. The interpreter we wrote in that actually wasn't written for Emacs, it was written for TECO. It was our text editor, and was an extremely ugly programming language, as ugly as could possibly be. The reason was that it wasn't designed to be a programming language, it was designed to be an editor and command language. There were commands like ‘5l’, meaning ‘move five lines’, or ‘i’ and then a string and then an ESC to insert that string. You would type a string that was a series of commands, which was called a command string. You would end it with ESC ESC, and it would get executed.
|
||||
|
||||
Well, people wanted to extend this language with programming facilities, so they added some. For instance, one of the first was a looping construct, which was < >. You would put those around things and it would loop. There were other cryptic commands that could be used to conditionally exit the loop. To make Emacs, we (1) added facilities to have subroutines with names. Before that, it was sort of like Basic, and the subroutines could only have single letters as their names. That was hard to program big programs with, so we added code so they could have longer names. Actually, there were some rather sophisticated facilities; I think that Lisp got its unwind-protect facility from TECO.
|
||||
|
||||
We started putting in rather sophisticated facilities, all with the ugliest syntax you could ever think of, and it worked — people were able to write large programs in it anyway. The obvious lesson was that a language like TECO, which wasn't designed to be a programming language, was the wrong way to go. The language that you build your extensions on shouldn't be thought of as a programming language in afterthought; it should be designed as a programming language. In fact, we discovered that the best programming language for that purpose was Lisp.
|
||||
|
||||
It was Bernie Greenberg, who discovered that it was (2). He wrote a version of Emacs in Multics MacLisp, and he wrote his commands in MacLisp in a straightforward fashion. The editor itself was written entirely in Lisp. Multics Emacs proved to be a great success — programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was a programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.
|
||||
|
||||
So Bernie saw that an application — a program that does something useful for you — which has Lisp inside it and which you could extend by rewriting the Lisp programs, is actually a very good way for people to learn programming. It gives them a chance to write small programs that are useful for them, which in most arenas you can't possibly do. They can get encouragement for their own practical use — at the stage where it's the hardest — where they don't believe they can program, until they get to the point where they are programmers.
|
||||
|
||||
At that point, people began to wonder how they could get something like this on a platform where they didn't have full service Lisp implementation. Multics MacLisp had a compiler as well as an interpreter — it was a full-fledged Lisp system — but people wanted to implement something like that on other systems where they had not already written a Lisp compiler. Well, if you didn't have the Lisp compiler you couldn't write the whole editor in Lisp — it would be too slow, especially redisplay, if it had to run interpreted Lisp. So we developed a hybrid technique. The idea was to write a Lisp interpreter and the lower level parts of the editor together, so that parts of the editor were built-in Lisp facilities. Those would be whatever parts we felt we had to optimize. This was a technique that we had already consciously practiced in the original Emacs, because there were certain fairly high level features which we re-implemented in machine language, making them into TECO primitives. For instance, there was a TECO primitive to fill a paragraph (actually, to do most of the work of filling a paragraph, because some of the less time-consuming parts of the job would be done at the higher level by a TECO program). You could do the whole job by writing a TECO program, but that was too slow, so we optimized it by putting part of it in machine language. We used the same idea here (in the hybrid technique), that most of the editor would be written in Lisp, but certain parts of it that had to run particularly fast would be written at a lower level.
|
||||
|
||||
Therefore, when I wrote my second implementation of Emacs, I followed the same kind of design. The low level language was not machine language anymore, it was C. C was a good, efficient language for portable programs to run in a Unix-like operating system. There was a Lisp interpreter, but I implemented facilities for special purpose editing jobs directly in C — manipulating editor buffers, inserting leading text, reading and writing files, redisplaying the buffer on the screen, managing editor windows.
|
||||
|
||||
Now, this was not the first Emacs that was written in C and ran on Unix. The first was written by James Gosling, and was referred to as GosMacs. A strange thing happened with him. In the beginning, he seemed to be influenced by the same spirit of sharing and cooperation of the original Emacs. I first released the original Emacs to people at MIT. Someone wanted to port it to run on Twenex — it originally only ran on the Incompatible Timesharing System we used at MIT. They ported it to Twenex, which meant that there were a few hundred installations around the world that could potentially use it. We started distributing it to them, with the rule that “you had to send back all of your improvements” so we could all benefit. No one ever tried to enforce that, but as far as I know people did cooperate.
|
||||
|
||||
Gosling did, at first, seem to participate in this spirit. He wrote in a manual that he called the program Emacs hoping that others in the community would improve it until it was worthy of that name. That's the right approach to take towards a community — to ask them to join in and make the program better. But after that he seemed to change the spirit, and sold it to a company.
|
||||
|
||||
At that time I was working on the GNU system (a free software Unix-like operating system that many people erroneously call “Linux”). There was no free software Emacs editor that ran on Unix. I did, however, have a friend who had participated in developing Gosling's Emacs. Gosling had given him, by email, permission to distribute his own version. He proposed to me that I use that version. Then I discovered that Gosling's Emacs did not have a real Lisp. It had a programming language that was known as ‘mocklisp’, which looks syntactically like Lisp, but didn't have the data structures of Lisp. So programs were not data, and vital elements of Lisp were missing. Its data structures were strings, numbers and a few other specialized things.
|
||||
|
||||
I concluded I couldn't use it and had to replace it all, the first step of which was to write an actual Lisp interpreter. I gradually adapted every part of the editor based on real Lisp data structures, rather than ad hoc data structures, making the data structures of the internals of the editor exposable and manipulable by the user's Lisp programs.
|
||||
|
||||
The one exception was redisplay. For a long time, redisplay was sort of an alternate world. The editor would enter the world of redisplay and things would go on with very special data structures that were not safe for garbage collection, not safe for interruption, and you couldn't run any Lisp programs during that. We've changed that since — it's now possible to run Lisp code during redisplay. It's quite a convenient thing.
|
||||
|
||||
This second Emacs program was ‘free software’ in the modern sense of the term — it was part of an explicit political campaign to make software free. The essence of this campaign was that everybody should be free to do the things we did in the old days at MIT, working together on software and working with whomever wanted to work with us. That is the basis for the free software movement — the experience I had, the life that I've lived at the MIT AI lab — to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge.
|
||||
|
||||
At the time, you could make a computer that was about the same price range as other computers that weren't meant for Lisp, except that it would run Lisp much faster than they would, and with full type checking in every operation as well. Ordinary computers typically forced you to choose between execution speed and good typechecking. So yes, you could have a Lisp compiler and run your programs fast, but when they tried to take `car` of a number, it got nonsensical results and eventually crashed at some point.
|
||||
|
||||
The Lisp machine was able to execute instructions about as fast as those other machines, but each instruction — a car instruction would do data typechecking — so when you tried to get the car of a number in a compiled program, it would give you an immediate error. We built the machine and had a Lisp operating system for it. It was written almost entirely in Lisp, the only exceptions being parts written in the microcode. People became interested in manufacturing them, which meant they should start a company.
|
||||
|
||||
There were two different ideas about what this company should be like. Greenblatt wanted to start what he called a “hacker” company. This meant it would be a company run by hackers and would operate in a way conducive to hackers. Another goal was to maintain the AI Lab culture (3). Unfortunately, Greenblatt didn't have any business experience, so other people in the Lisp machine group said they doubted whether he could succeed. They thought that his plan to avoid outside investment wouldn't work.
|
||||
|
||||
Why did he want to avoid outside investment? Because when a company has outside investors, they take control and they don't let you have any scruples. And eventually, if you have any scruples, they also replace you as the manager.
|
||||
|
||||
So Greenblatt had the idea that he would find a customer who would pay in advance to buy the parts. They would build machines and deliver them; with profits from those parts, they would then be able to buy parts for a few more machines, sell those and then buy parts for a larger number of machines, and so on. The other people in the group thought that this couldn't possibly work.
|
||||
|
||||
Greenblatt then recruited Russell Noftsker, the man who had hired me, who had subsequently left the AI Lab and created a successful company. Russell was believed to have an aptitude for business. He demonstrated this aptitude for business by saying to the other people in the group, “Let's ditch Greenblatt, forget his ideas, and we'll make another company.” Stabbing in the back, clearly a real businessman. Those people decided they would form a company called Symbolics. They would get outside investment, not have scruples, and do everything possible to win.
|
||||
|
||||
But Greenblatt didn't give up. He and the few people loyal to him decided to start Lisp Machines Inc. anyway and go ahead with their plans. And what do you know, they succeeded! They got the first customer and were paid in advance. They built machines and sold them, and built more machines and more machines. They actually succeeded even though they didn't have the help of most of the people in the group. Symbolics also got off to a successful start, so you had two competing Lisp machine companies. When Symbolics saw that LMI was not going to fall flat on its face, they started looking for ways to destroy it.
|
||||
|
||||
Thus, the abandonment of our lab was followed by “war” in our lab. The abandonment happened when Symbolics hired away all the hackers, except me and the few who worked at LMI part-time. Then they invoked a rule and eliminated people who worked part-time for MIT, so they had to leave entirely, which left only me. The AI lab was now helpless. And MIT had made a very foolish arrangement with these two companies. It was a three-way contract where both companies licensed the use of Lisp machine system sources. These companies were required to let MIT use their changes. But it didn't say in the contract that MIT was entitled to put them into the MIT Lisp machine systems that both companies had licensed. Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was.
|
||||
|
||||
So Symbolics came up with a plan (4). They said to the lab, “We will continue making our changes to the system available for you to use, but you can't put it into the MIT Lisp machine system. Instead, we'll give you access to Symbolics' Lisp machine system, and you can run it, but that's all you can do.”
|
||||
|
||||
This, in effect, meant that they demanded that we had to choose a side, and use either the MIT version of the system or the Symbolics version. Whichever choice we made determined which system our improvements went to. If we worked on and improved the Symbolics version, we would be supporting Symbolics alone. If we used and improved the MIT version of the system, we would be doing work available to both companies, but Symbolics saw that we would be supporting LMI because we would be helping them continue to exist. So we were not allowed to be neutral anymore.
|
||||
|
||||
Up until that point, I hadn't taken the side of either company, although it made me miserable to see what had happened to our community and the software. But now, Symbolics had forced the issue. So, in an effort to help keep Lisp Machines Inc. going (5) — I began duplicating all of the improvements Symbolics had made to the Lisp machine system. I wrote the equivalent improvements again myself (i.e., the code was my own).
|
||||
|
||||
After a while (6), I came to the conclusion that it would be best if I didn't even look at their code. When they made a beta announcement that gave the release notes, I would see what the features were and then implement them. By the time they had a real release, I did too.
|
||||
|
||||
In this way, for two years, I prevented them from wiping out Lisp Machines Incorporated, and the two companies went on. But, I didn't want to spend years and years punishing someone, just thwarting an evil deed. I figured they had been punished pretty thoroughly because they were stuck with competition that was not leaving or going to disappear (7). Meanwhile, it was time to start building a new community to replace the one that their actions and others had wiped out.
|
||||
|
||||
The Lisp community in the 70s was not limited to the MIT AI Lab, and the hackers were not all at MIT. The war that Symbolics started was what wiped out MIT, but there were other events going on then. There were people giving up on cooperation, and together this wiped out the community and there wasn't much left.
|
||||
|
||||
Once I stopped punishing Symbolics, I had to figure out what to do next. I had to make a free operating system, that was clear — the only way that people could work together and share was with a free operating system.
|
||||
|
||||
At first, I thought of making a Lisp-based system, but I realized that wouldn't be a good idea technically. To have something like the Lisp machine system, you needed special purpose microcode. That's what made it possible to run programs as fast as other computers would run their programs and still get the benefit of typechecking. Without that, you would be reduced to something like the Lisp compilers for other machines. The programs would be faster, but unstable. Now that's okay if you're running one program on a timesharing system — if one program crashes, that's not a disaster, that's something your program occasionally does. But that didn't make it good for writing the operating system in, so I rejected the idea of making a system like the Lisp machine.
|
||||
|
||||
I decided instead to make a Unix-like operating system that would have Lisp implementations to run as user programs. The kernel wouldn't be written in Lisp, but we'd have Lisp. So the development of that operating system, the GNU operating system, is what led me to write the GNU Emacs. In doing this, I aimed to make the absolute minimal possible Lisp implementation. The size of the programs was a tremendous concern.
|
||||
|
||||
There were people in those days, in 1985, who had one-megabyte machines without virtual memory. They wanted to be able to use GNU Emacs. This meant I had to keep the program as small as possible.
|
||||
|
||||
For instance, at the time the only looping construct was ‘while’, which was extremely simple. There was no way to break out of the ‘while’ statement, you just had to do a catch and a throw, or test a variable that ran the loop. That shows how far I was pushing to keep things small. We didn't have ‘caar’ and ‘cadr’ and so on; “squeeze out everything possible” was the spirit of GNU Emacs, the spirit of Emacs Lisp, from the beginning.
|
||||
|
||||
Obviously, machines are bigger now, and we don't do it that way any more. We put in ‘caar’ and ‘cadr’ and so on, and we might put in another looping construct one of these days. We're willing to extend it some now, but we don't want to extend it to the level of common Lisp. I implemented Common Lisp once on the Lisp machine, and I'm not all that happy with it. One thing I don't like terribly much is keyword arguments (8). They don't seem quite Lispy to me; I'll do it sometimes but I minimize the times when I do that.
|
||||
|
||||
That was not the end of the GNU projects involved with Lisp. Later on around 1995, we were looking into starting a graphical desktop project. It was clear that for the programs on the desktop, we wanted a programming language to write a lot of it in to make it easily extensible, like the editor. The question was what it should be.
|
||||
|
||||
At the time, TCL was being pushed heavily for this purpose. I had a very low opinion of TCL, basically because it wasn't Lisp. It looks a tiny bit like Lisp, but semantically it isn't, and it's not as clean. Then someone showed me an ad where Sun was trying to hire somebody to work on TCL to make it the “de-facto standard extension language” of the world. And I thought, “We've got to stop that from happening.” So we started to make Scheme the standard extensibility language for GNU. Not Common Lisp, because it was too large. The idea was that we would have a Scheme interpreter designed to be linked into applications in the same way TCL was linked into applications. We would then recommend that as the preferred extensibility package for all GNU programs.
|
||||
|
||||
There's an interesting benefit you can get from using such a powerful language as a version of Lisp as your primary extensibility language. You can implement other languages by translating them into your primary language. If your primary language is TCL, you can't very easily implement Lisp by translating it into TCL. But if your primary language is Lisp, it's not that hard to implement other things by translating them. Our idea was that if each extensible application supported Scheme, you could write an implementation of TCL or Python or Perl in Scheme that translates that program into Scheme. Then you could load that into any application and customize it in your favorite language and it would work with other customizations as well.
|
||||
|
||||
As long as the extensibility languages are weak, the users have to use only the language you provided them. Which means that people who love any given language have to compete for the choice of the developers of applications — saying “Please, application developer, put my language into your application, not his language.” Then the users get no choices at all — whichever application they're using comes with one language and they're stuck with [that language]. But when you have a powerful language that can implement others by translating into it, then you give the user a choice of language and we don't have to have a language war anymore. That's what we're hoping ‘Guile’, our scheme interpreter, will do. We had a person working last summer finishing up a translator from Python to Scheme. I don't know if it's entirely finished yet, but for anyone interested in this project, please get in touch. So that's the plan we have for the future.
|
||||
|
||||
I haven't been speaking about free software, but let me briefly tell you a little bit about what that means. Free software does not refer to price; it doesn't mean that you get it for free. (You may have paid for a copy, or gotten a copy gratis.) It means that you have freedom as a user. The crucial thing is that you are free to run the program, free to study what it does, free to change it to suit your needs, free to redistribute the copies of others and free to publish improved, extended versions. This is what free software means. If you are using a non-free program, you have lost crucial freedom, so don't ever do that.
|
||||
|
||||
The purpose of the GNU project is to make it easier for people to reject freedom-trampling, user-dominating, non-free software by providing free software to replace it. For those who don't have the moral courage to reject the non-free software, when that means some practical inconvenience, what we try to do is give a free alternative so that you can move to freedom with less of a mess and less of a sacrifice in practical terms. The less sacrifice the better. We want to make it easier for you to live in freedom, to cooperate.
|
||||
|
||||
This is a matter of the freedom to cooperate. We're used to thinking of freedom and cooperation with society as if they are opposites. But here they're on the same side. With free software you are free to cooperate with other people as well as free to help yourself. With non-free software, somebody is dominating you and keeping people divided. You're not allowed to share with them, you're not free to cooperate or help society, anymore than you're free to help yourself. Divided and helpless is the state of users using non-free software.
|
||||
|
||||
We've produced a tremendous range of free software. We've done what people said we could never do; we have two operating systems of free software. We have many applications and we obviously have a lot farther to go. So we need your help. I would like to ask you to volunteer for the GNU project; help us develop free software for more jobs. Take a look at [http://www.gnu.org/help][1] to find suggestions for how to help. If you want to order things, there's a link to that from the home page. If you want to read about philosophical issues, look in /philosophy. If you're looking for free software to use, look in /directory, which lists about 1900 packages now (which is a fraction of all the free software out there). Please write more and contribute to us. My book of essays, “Free Software and Free Society”, is on sale and can be purchased at [www.gnu.org][2]. Happy hacking!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.gnu.org/gnu/rms-lisp.html
|
||||
|
||||
作者:[Richard Stallman][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.gnu.org
|
||||
[1]:https://www.gnu.org/help/
|
||||
[2]:http://www.gnu.org/
|
@ -0,0 +1,63 @@
|
||||
Tips for Success with Open Source Certification
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/desktop_1.jpg?itok=Nf2yTUar)
|
||||
|
||||
In today’s technology arena, open source is pervasive. The [2018 Open Source Jobs Report][1] found that hiring open source talent is a priority for 83 percent of hiring managers, and half are looking for candidates holding certifications. And yet, 87 percent of hiring managers also cite difficulty in finding the right open source skills and expertise. This article is the second in a weekly series on the growing importance of open source certification.
|
||||
|
||||
In the [first article][2], we focused on why certification matters now more than ever. Here, we’ll focus on the kinds of certifications that are making a difference, and what is involved in completing necessary training and passing the performance-based exams that lead to certification, with tips from Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation.
|
||||
|
||||
### Performance-based exams
|
||||
|
||||
So, what are the details on getting certified and what are the differences between major types of certification? Most types of open source credentials and certification that you can obtain are performance-based. In many cases, trainees are required to demonstrate their skills directly from the command line.
|
||||
|
||||
“You're going to be asked to do something live on the system, and then at the end, we're going to evaluate that system to see if you were successful in accomplishing the task,” said Seepersad. This approach obviously differs from multiple choice exams and other tests where candidate answers are put in front of you. Often, certification programs involve online self-paced courses, so you can learn at your own speed, but the exams can be tough and require demonstration of expertise. That’s part of why the certifications that they lead to are valuable.
|
||||
|
||||
### Certification options
|
||||
|
||||
Many people are familiar with the certifications offered by The Linux Foundation, including the [Linux Foundation Certified System Administrator][3] (LFCS) and [Linux Foundation Certified Engineer][4] (LFCE) certifications. The Linux Foundation intentionally maintains separation between its training and certification programs and uses an independent proctoring solution to monitor candidates. It also requires that all certifications be renewed every two years, which gives potential employers confidence that skills are current and have been recently demonstrated.
|
||||
|
||||
“Note that there are no prerequisites,” Seepersad said. “What that means is that if you're an experienced Linux engineer, and you think the LFCE, the certified engineer credential, is the right one for you…, you're allowed to do what we call ‘challenge the exams.’ If you think you're ready for the LFCE, you can sign up for the LFCE without having to have gone through and taken and passed the LFCS.”
|
||||
|
||||
Seepersad noted that the LFCS credential is great for people starting their careers, and the LFCE credential is valuable for many people who have experience with Linux such as volunteer experience, and now want to demonstrate the breadth and depth of their skills for employers. He also said that the LFCS and LFCE coursework prepares trainees to work with various Linux distributions. Other certification options, such as the [Kubernetes Fundamentals][5] and [Essentials of OpenStack Administration][6]courses and exams, have also made a difference for many people, as cloud adoption has increased around the world.
|
||||
|
||||
Seepersad added that certification can make a difference if you are seeking a promotion. “Being able show that you're over the bar in terms of certification at the engineer level can be a great way to get yourself into the consideration set for that next promotion,” he said.
|
||||
|
||||
### Tips for Success
|
||||
|
||||
In terms of practical advice for taking an exam, Seepersad offered a number of tips:
|
||||
|
||||
* Set the date, and don’t procrastinate.
|
||||
|
||||
* Look through the online exam descriptions and get any training needed to be able to show fluency with the required skill sets.
|
||||
|
||||
* Practice on a live Linux system. This can involve downloading a free terminal emulator or other software and actually performing tasks that you will be tested on.
|
||||
|
||||
|
||||
|
||||
|
||||
Seepersad also noted some common mistakes that people make when taking their exams. These include spending too long on a small set of questions, wasting too much time looking through documentation and reference tools, and applying changes without testing them in the work environment.
|
||||
|
||||
With open source certification playing an increasingly important role in securing a rewarding career, stay tuned for more certification details in this article series, including how to prepare for certification.
|
||||
|
||||
[Learn more about Linux training and certification.][7]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/sysadmin-cert/2018/7/tips-success-open-source-certification
|
||||
|
||||
作者:[Sam Dean][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/sam-dean
|
||||
[1]:https://www.linuxfoundation.org/publications/open-source-jobs-report-2018/
|
||||
[2]:https://www.linux.com/blog/sysadmin-cert/2018/7/5-reasons-open-source-certification-matters-more-ever
|
||||
[3]:https://training.linuxfoundation.org/certification/lfcs
|
||||
[4]:https://training.linuxfoundation.org/certification/lfce
|
||||
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals
|
||||
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/openstack-administration-fundamentals
|
||||
[7]:https://training.linuxfoundation.org/certification
|
@ -1,186 +0,0 @@
|
||||
FSSlc translating
|
||||
|
||||
View The Contents Of An Archive Or Compressed File Without Extracting It
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/07/View-The-Contents-Of-An-Archive-Or-Compressed-File-720x340.png)
|
||||
|
||||
In this tutorial, we are going to learn how to view the contents of an archive and/or compressed file without actually extracting it in Unix-like operating systems. Before going further, let us be clear about archive and compressed files, because there is a significant difference between the two. Archiving is the process of combining multiple files and/or folders into a single file; in this case, the resulting file is not compressed. Compressing combines multiple files and/or folders into a single file and then compresses the resulting file. An archive is not necessarily a compressed file, but a compressed file can be an archive. Clear? Well, let us get to the topic.
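For instance, these illustrative commands (using the sample ostechnix/ directory that appears in the examples below) show the difference between the two:

```
# Archiving only: combine a directory into a single, uncompressed file
$ tar -cf ostechnix.tar ostechnix/

# Compressing: gzip the archive, which produces ostechnix.tar.gz
$ gzip ostechnix.tar
```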
|
||||
|
||||
### View The Contents Of An Archive Or Compressed File Without Extracting It
|
||||
|
||||
Thanks to the Linux community, there are many command line applications available to do this. Let us see some of them with examples.
|
||||
|
||||
**1\. Using Vim Editor**
|
||||
|
||||
Vim is not just an editor. Using Vim, we can do numerous things. The following command displays the contents of a compressed archive file without decompressing it.
|
||||
```
|
||||
$ vim ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
You can even browse through the archive and open any text files it contains. To open a text file, just move the cursor in front of the file using the arrow keys and hit ENTER to open it.
|
||||
|
||||
|
||||
**2\. Using Tar command**
|
||||
|
||||
To list the contents of a tar archive file, run:
|
||||
```
|
||||
$ tar -tf ostechnix.tar
|
||||
ostechnix/
|
||||
ostechnix/image.jpg
|
||||
ostechnix/file.pdf
|
||||
ostechnix/song.mp3
|
||||
|
||||
```
|
||||
|
||||
Or, use the **-v** flag to view the detailed properties of the files in the archive, such as permissions, file owner, group, creation date, etc.
|
||||
```
|
||||
$ tar -tvf ostechnix.tar
|
||||
drwxr-xr-x sk/users 0 2018-07-02 19:30 ostechnix/
|
||||
-rw-r--r-- sk/users 53632 2018-06-29 15:57 ostechnix/image.jpg
|
||||
-rw-r--r-- sk/users 156831 2018-06-04 12:37 ostechnix/file.pdf
|
||||
-rw-r--r-- sk/users 9702219 2018-04-25 20:35 ostechnix/song.mp3
|
||||
|
||||
```
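The same idea should work for compressed tarballs as well; with GNU tar you can pass the **-z** flag (or rely on automatic detection in recent versions) to list a gzip-compressed archive such as ostechnix.tar.gz:

```
$ tar -tzf ostechnix.tar.gz
```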
|
||||
|
||||
|
||||
**3\. Using Rar command**
|
||||
|
||||
To view the contents of a rar file, simply do:
|
||||
```
|
||||
$ rar v ostechnix.rar
|
||||
|
||||
RAR 5.60 Copyright (c) 1993-2018 Alexander Roshal 24 Jun 2018
|
||||
Trial version Type 'rar -?' for help
|
||||
|
||||
Archive: ostechnix.rar
|
||||
Details: RAR 5
|
||||
|
||||
Attributes Size Packed Ratio Date Time Checksum Name
|
||||
----------- --------- -------- ----- ---------- ----- -------- ----
|
||||
-rw-r--r-- 53632 52166 97% 2018-06-29 15:57 70260AC4 ostechnix/image.jpg
|
||||
-rw-r--r-- 156831 139094 88% 2018-06-04 12:37 C66C545E ostechnix/file.pdf
|
||||
-rw-r--r-- 9702219 9658527 99% 2018-04-25 20:35 DD875AC4 ostechnix/song.mp3
|
||||
----------- --------- -------- ----- ---------- ----- -------- ----
|
||||
9912682 9849787 99% 3
|
||||
|
||||
```
|
||||
|
||||
**4\. Using Unrar command**
|
||||
|
||||
You can also do the same using the **Unrar** command with the **l** flag, as shown below.
|
||||
```
|
||||
$ unrar l ostechnix.rar
|
||||
|
||||
UNRAR 5.60 freeware Copyright (c) 1993-2018 Alexander Roshal
|
||||
|
||||
Archive: ostechnix.rar
|
||||
Details: RAR 5
|
||||
|
||||
Attributes Size Date Time Name
|
||||
----------- --------- ---------- ----- ----
|
||||
-rw-r--r-- 53632 2018-06-29 15:57 ostechnix/image.jpg
|
||||
-rw-r--r-- 156831 2018-06-04 12:37 ostechnix/file.pdf
|
||||
-rw-r--r-- 9702219 2018-04-25 20:35 ostechnix/song.mp3
|
||||
----------- --------- ---------- ----- ----
|
||||
9912682 3
|
||||
|
||||
```
|
||||
|
||||
**5\. Using Zip command**
|
||||
|
||||
To view the contents of a zip file without extracting it, use the following zip command:
|
||||
```
|
||||
$ zip -sf ostechnix.zip
|
||||
Archive contains:
|
||||
Life advices.jpg
|
||||
Total 1 entries (597219 bytes)
|
||||
|
||||
```
|
||||
|
||||
**6\. Using Unzip command**
|
||||
|
||||
You can also use the Unzip command with the -l flag to display the contents of a zip file, as shown below.
|
||||
```
|
||||
$ unzip -l ostechnix.zip
|
||||
Archive: ostechnix.zip
|
||||
Length Date Time Name
|
||||
--------- ---------- ----- ----
|
||||
597219 2018-04-09 12:48 Life advices.jpg
|
||||
--------- -------
|
||||
597219 1 file
|
||||
|
||||
```
|
||||
|
||||
|
||||
**7\. Using Zipinfo command**
|
||||
```
|
||||
$ zipinfo ostechnix.zip
|
||||
Archive: ostechnix.zip
|
||||
Zip file size: 584859 bytes, number of entries: 1
|
||||
-rw-r--r-- 6.3 unx 597219 bx defN 18-Apr-09 12:48 Life advices.jpg
|
||||
1 file, 597219 bytes uncompressed, 584693 bytes compressed: 2.1%
|
||||
|
||||
```
|
||||
|
||||
As you can see, the above command displays the contents of the zip file, along with their permissions, creation date, compression percentage, etc.
|
||||
|
||||
**8\. Using Zcat command**
|
||||
|
||||
To view the contents of a compressed archive file without extracting it using the **zcat** command, run:
|
||||
```
|
||||
$ zcat ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
The zcat command is the same as the “gunzip -c” command. So, you can also use the following command to view the contents of the archive/compressed file:
|
||||
```
|
||||
$ gunzip -c ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
**9\. Using Zless command**
|
||||
|
||||
To view the contents of an archive/compressed file using the zless command, simply do:
|
||||
```
|
||||
$ zless ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
This command is similar to the “less” command in that it displays the output page by page.
|
||||
|
||||
**10\. Using Less command**
|
||||
|
||||
As you might already know, the **less** command can be used to open a file for interactive reading, allowing scrolling and search.
|
||||
|
||||
Run the following command to view the contents of an archive/compressed file using less command:
|
||||
```
|
||||
$ less ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
And, that’s all for now. You now know how to view the contents of an archive or compressed file using various commands in Linux. Hope you find this useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-view-the-contents-of-an-archive-or-compressed-file-without-extracting-it/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/07/vim.png
|
@ -0,0 +1,70 @@
|
||||
3 cool productivity apps for Fedora 28
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/07/3-productivity-apps-2018-816x345.jpg)
|
||||
|
||||
Productivity apps are especially popular on mobile devices. But when you sit down to do work, you’re often at a laptop or desktop computer. Let’s say you use a Fedora system for your platform. Can you find apps that help you get your work done? Of course! Read on for tips on apps to help you focus on your goals.
|
||||
|
||||
All these apps are available for free on your Fedora system. And they also respect your freedom. (Many also let you use existing services where you may have an account.)
|
||||
|
||||
### FocusWriter
|
||||
|
||||
FocusWriter is simply a full screen word processor. The app makes you more productive because it covers everything else on your screen. When you use FocusWriter, you have nothing between you and your text. With this app at work, you can focus on your thoughts with fewer distractions.
|
||||
|
||||
[![Screenshot of FocusWriter][1]][2]
|
||||
|
||||
FocusWriter lets you adjust fonts, colors, and theme to best suit your preferences. It also remembers your last document and location. This feature lets you jump right back into focusing on writing without delay.
|
||||
|
||||
To install FocusWriter, use the Software app in your Fedora Workstation. Or run this command in a terminal [using sudo][3]:
|
||||
```
|
||||
sudo dnf install focuswriter
|
||||
|
||||
```
|
||||
|
||||
### GNOME ToDo
|
||||
|
||||
This unique app is designed, as you can guess, for the GNOME desktop environment. It’s a great fit for your Fedora Workstation for that reason. ToDo has a simple purpose: it lets you make lists of things you need to get done.
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-15-18-08-59.png)
|
||||
|
||||
Using ToDo, you can prioritize and schedule deadlines for all your tasks. You can also build as many task lists as you want. ToDo has numerous extensions for useful functions to boost your productivity. These include GNOME Shell notifications, and list management with a todo.txt file. ToDo can even interface with a Todoist or Google account if you use one. It synchronizes tasks so you can share across your devices.
|
||||
|
||||
To install, search for ToDo in Software, or at the command line run:
|
||||
```
|
||||
sudo dnf install gnome-todo
|
||||
|
||||
```
|
||||
|
||||
### Zanshin
|
||||
|
||||
If you are a KDE-using productivity fan, you may enjoy [Zanshin][4]. This organizer helps you plan your actions across multiple projects. It has a full-featured interface, and lets you browse across your various tasks to see what’s most important to do next.
|
||||
|
||||
[![Screenshot of Zanshin on Fedora 28][5]][6]
|
||||
|
||||
Zanshin is extremely keyboard friendly, so you can be efficient during hacking sessions. It also integrates across numerous KDE applications as well as the Plasma Desktop. You can use it inline with KMail, KOrganizer, and KRunner.
|
||||
|
||||
To install, run this command:
|
||||
```
|
||||
sudo dnf install zanshin
|
||||
|
||||
```
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/3-cool-productivity-apps/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/pfrields/
|
||||
[1]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-15-18-10-18-1024x768.png
|
||||
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-15-18-10-18.png
|
||||
[3]:https://fedoramagazine.org/howto-use-sudo/
|
||||
[4]:https://zanshin.kde.org/
|
||||
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot_20180715_192216-1024x653.png
|
||||
[6]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot_20180715_192216.png
|
@ -0,0 +1,46 @@
|
||||
Confessions of a recovering Perl hacker
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
|
||||
|
||||
My name's MikeCamel, and I'm a Perl hacker.
|
||||
|
||||
There, I've said it. That's the first step.
|
||||
|
||||
My handle on IRC, Twitter and pretty much everywhere else in the world is "MikeCamel." This is because, back in the day, when there were no chat apps—no apps at all, in fact—I was in a technical "chatroom" and the name "Mike" had been taken. I looked around, and the first thing I noticed on my desk was the [Camel Book][1], the O'Reilly Perl Bible.
|
||||
|
||||
I have the second edition now, but this was the first edition. Yesterday, I happened to pick up the second edition, the really thick one, to show someone on a video conference call, and it had a thin layer of dust on it. I was a little bit ashamed, but a little bit relieved as well.
|
||||
|
||||
For years, I was a sysadmin. Just bits and pieces, from time to time. Nothing serious, you understand—mainly my systems, my friends' systems. Sometimes I'd admin systems owned by other people—even at work. I always had it under control, and I was always able to step away. There were whole weeks—well days—when I didn't administer a system at all. With the exception of remote systems, which felt different, somehow less serious.
|
||||
|
||||
What pushed it over the edge, on reflection, was the Perl. This was the '90s—the 1990s, just to be clear—when Perl was young, and free, and didn't even pretend to be object-oriented. We all know it still isn't, but those youngsters—they like to pretend, and we old lags, well, we play along.
|
||||
|
||||
The thing about Perl is that it just starts small, with a regexp here, a text-file line counter there. Nothing that couldn't have been managed quite easily in Bash or Sed or Awk. But once you've written a couple of scripts, you're in—there's no going back. Long-term Perl users remember how we started, and we see the newbs going the same way.
|
||||
|
||||
I taught myself Perl in order to collate static web pages from five disparate FoxPro databases. I did it by starting at the beginning of the Camel Book and reading as much of it as I could before my brain started to hurt, then picking up a few pages back and carrying on. And then writing some Perl, which always failed, mainly because of lack of semicolons to start with, and then because I didn't really understand much of what I was doing. But I kept with it until I wasn't just writing scripts to collate databases, but scripts to load data into a single database and using CGI to serve pages in real time. My wife knew, and some of my colleagues knew, but I don't think they fully understood how deep I was in.
|
||||
|
||||
You know that Perl has you when you start looking for admin tasks to automate with it. Tasks that don't need automating and that would be much, much faster if you performed them by hand. When you start scouring the web for three- or four-character commands that, when executed, alphabetise, spell-check, and decrypt three separate files in parallel and output them to STDERR, ROT13ed.
|
||||
|
||||
I was lucky: I escaped in time. I always insisted on commenting my Perl. I never got to the very end of the Camel Book. Not in one reading, anyway. I never experimented with the darker side-effects; three or four separate operations per line was always enough for me. Over time, as my responsibilities moved more to programming, I cut back on the sysadmin tasks. Of course, that didn't stop the Perl use completely—it's amazing how often you can find an excuse to automate a task and how often Perl is the answer. But it reduced my Perl to manageable levels, levels that didn't affect my day-to-day functioning.
|
||||
|
||||
I'd like to pretend that I've stopped, but you never really give up on Perl, and it never gives up on you.
|
||||
|
||||
I'd like to pretend that I've stopped, but you never really give up on Perl, and it never gives up on you. My Camel Book (2nd ed.) is still around, even if it's a little dusty. I always check that the core modules are installed on any systems I run. And about five months ago, I found that my 10-year-old daughter had some mathematics homework that was susceptible to brute-forcing. Just a few lines. A couple of loops. No more than that. Nothing that I didn't feel went out of scope.
|
||||
|
||||
I discovered after she handed in the results that it hadn't produced the correct results, but I didn't mind. It was tight, it was elegant, it was beautiful. It was Perl. My Perl.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/confessions-recovering-perl-hacker
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mikecamel
|
||||
[1]:https://en.wikipedia.org/wiki/Programming_Perl
|
@ -0,0 +1,259 @@
|
||||
How To Find The Mounted Filesystem Type In Linux
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/07/filesystem-720x340.png)
|
||||
|
||||
As you may already know, Linux supports numerous filesystems, such as Ext4, ext3, ext2, sysfs, securityfs, FAT16, FAT32, NTFS, and many more. The most commonly used filesystem is Ext4. Ever wondered what type of filesystem you are currently using on your Linux system? No? Worry not! We’ve got your back. This guide explains how to find the type of a mounted filesystem in Unix-like operating systems.
|
||||
|
||||
### Find The Mounted Filesystem Type In Linux
|
||||
|
||||
There are many ways to find the filesystem type in Linux. Here, I have given 8 different methods. Let us get started, shall we?
|
||||
|
||||
#### Method 1 – Using findmnt command
|
||||
|
||||
This is the most commonly used method to find out the type of a filesystem. The **findmnt** command will list all mounted filesystems or search for a filesystem. The findmnt command can search in **/etc/fstab**, **/etc/mtab** or **/proc/self/mountinfo**.
|
||||
|
||||
The findmnt command comes pre-installed in most Linux distributions, because it is part of the package named **util-linux**. Just in case it is not available, simply install this package and you’re good to go. For instance, you can install the **util-linux** package on Debian-based systems using the command:
|
||||
```
|
||||
$ sudo apt install util-linux
|
||||
|
||||
```
|
||||
|
||||
Let us go ahead and see how to use findmnt command to find out the mounted filesystems.
|
||||
|
||||
If you run it without any arguments/options, it will list all mounted filesystems in a tree-like format as shown below.
|
||||
```
|
||||
$ findmnt
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
![][2]
|
||||
|
||||
As you can see, the findmnt command displays the target mount point (TARGET), source device (SOURCE), file system type (FSTYPE), and relevant mount options (OPTIONS), such as whether the filesystem is mounted read/write or read-only. In my case, my root (/) filesystem type is EXT4.
|
||||
|
||||
If you don’t want to display the output in a tree-like format, use the **-l** flag to display it in a simple, plain format.
|
||||
```
|
||||
$ findmnt -l
|
||||
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
You can also list a particular type of filesystem, for example **ext4**, using the **-t** option.
|
||||
```
|
||||
$ findmnt -t ext4
|
||||
TARGET SOURCE FSTYPE OPTIONS
|
||||
/ /dev/sda2 ext4 rw,relatime,commit=360
|
||||
└─/boot /dev/sda1 ext4 rw,relatime,commit=360,data=ordered
|
||||
|
||||
```
|
||||
|
||||
Findmnt can produce df-style output as well.
|
||||
```
|
||||
$ findmnt --df
|
||||
|
||||
```
|
||||
|
||||
Or
|
||||
```
|
||||
$ findmnt -D
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
```
|
||||
SOURCE FSTYPE SIZE USED AVAIL USE% TARGET
|
||||
dev devtmpfs 3.9G 0 3.9G 0% /dev
|
||||
run tmpfs 3.9G 1.1M 3.9G 0% /run
|
||||
/dev/sda2 ext4 456.3G 342.5G 90.6G 75% /
|
||||
tmpfs tmpfs 3.9G 32.2M 3.8G 1% /dev/shm
|
||||
tmpfs tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
|
||||
bpf bpf 0 0 0 - /sys/fs/bpf
|
||||
tmpfs tmpfs 3.9G 8.4M 3.9G 0% /tmp
|
||||
/dev/loop0 squashfs 82.1M 82.1M 0 100% /var/lib/snapd/snap/core/4327
|
||||
/dev/sda1 ext4 92.8M 55.7M 30.1M 60% /boot
|
||||
tmpfs tmpfs 788.8M 32K 788.8M 0% /run/user/1000
|
||||
gvfsd-fuse fuse.gvfsd-fuse 0 0 0 - /run/user/1000/gvfs
|
||||
|
||||
```
|
||||
|
||||
You can also display filesystems for a specific device or mountpoint.
|
||||
|
||||
Search for a device:
|
||||
```
|
||||
$ findmnt /dev/sda1
|
||||
TARGET SOURCE FSTYPE OPTIONS
|
||||
/boot /dev/sda1 ext4 rw,relatime,commit=360,data=ordered
|
||||
|
||||
```
|
||||
|
||||
Search for a mountpoint:
|
||||
```
|
||||
$ findmnt /
|
||||
TARGET SOURCE FSTYPE OPTIONS
|
||||
/ /dev/sda2 ext4 rw,relatime,commit=360
|
||||
|
||||
```
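If you only need the filesystem type itself, for example inside a script, findmnt can print just that column. A minimal sketch, assuming the root (/) mountpoint used above:

```
$ findmnt -n -o FSTYPE /
ext4
```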
|
||||
|
||||
You can even find filesystems with a specific label:
|
||||
```
|
||||
$ findmnt LABEL=Storage
|
||||
|
||||
```
|
||||
|
||||
For more details, refer to the man pages.
|
||||
```
|
||||
$ man findmnt
|
||||
|
||||
```
|
||||
|
||||
The findmnt command alone is enough to find the type of a mounted filesystem in Linux; it was created for that specific purpose. However, there are also a few other ways to find out the filesystem type. If you’re interested, read on.
|
||||
|
||||
#### Method 2 – Using blkid command
|
||||
|
||||
The **blkid** command is used to locate and print block device attributes. It is also part of the util-linux package, so you don’t need to install it separately.
|
||||
|
||||
To find out the type of a filesystem using blkid command, run:
|
||||
```
|
||||
$ blkid /dev/sda1
|
||||
|
||||
```
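If you want only the filesystem type rather than all attributes, blkid can restrict the output to the TYPE tag. A small example, assuming the same /dev/sda1 partition (you may need sudo to probe the device directly):

```
$ sudo blkid -s TYPE -o value /dev/sda1
ext4
```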
|
||||
|
||||
#### Method 3 – Using df command
|
||||
|
||||
The **df** command is used to report filesystem disk space usage in Unix-like operating systems. To find the type of all mounted filesystems, simply run:
|
||||
```
|
||||
$ df -T
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
![][4]
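You can also restrict df to a single filesystem by passing its mount point, for example the root filesystem; the Type column will then show only that entry:

```
$ df -T /
```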
|
||||
|
||||
For details about the df command, refer to the following guide.
|
||||
|
||||
Also, check man pages.
|
||||
```
|
||||
$ man df
|
||||
|
||||
```
|
||||
|
||||
#### Method 4 – Using file command
|
||||
|
||||
The **file** command determines the type of a specified file. It works just fine for files with no file extension.
|
||||
|
||||
Run the following command to find the filesystem type of a partition:
|
||||
```
|
||||
$ sudo file -sL /dev/sda1
|
||||
[sudo] password for sk:
|
||||
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=83a1dbbf-1e15-4b45-94fe-134d3872af96 (needs journal recovery) (extents) (large files) (huge files)
|
||||
|
||||
```
|
||||
|
||||
Check man pages for more details:
|
||||
```
|
||||
$ man file
|
||||
|
||||
```
|
||||
|
||||
#### Method 5 – Using fsck command
|
||||
|
||||
The **fsck** command is used to check the integrity of a filesystem or repair it. You can find the type of a filesystem by passing the partition as an argument like below.
|
||||
```
|
||||
$ fsck -N /dev/sda1
|
||||
fsck from util-linux 2.32
|
||||
[/usr/bin/fsck.ext4 (1) -- /boot] fsck.ext4 /dev/sda1
|
||||
|
||||
```
|
||||
|
||||
For more details, refer to the man pages.
|
||||
```
|
||||
$ man fsck
|
||||
|
||||
```
|
||||
|
||||
#### Method 6 – Using the fstab file
|
||||
|
||||
**fstab** is a file that contains static information about the filesystems. This file usually contains the mount point, filesystem type and mount options.
|
||||
|
||||
To view the type of a filesystem, simply run:
|
||||
```
|
||||
$ cat /etc/fstab
|
||||
|
||||
```
|
||||
|
||||
![][5]
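If you only want the device-to-type mapping from that file, a small awk filter (a sketch; adjust it to your own fstab layout) can skip comments and blank lines and print the first and third fields:

```
$ awk '!/^#/ && NF {print $1, $3}' /etc/fstab
```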
|
||||
|
||||
For more details, refer to the man pages.
|
||||
```
|
||||
$ man fstab
|
||||
|
||||
```
|
||||
|
||||
#### Method 7 – Using lsblk command
|
||||
|
||||
The **lsblk** command displays information about block devices.
|
||||
|
||||
To display info about mounted filesystems, simply run:
|
||||
```
|
||||
$ lsblk -f
|
||||
NAME FSTYPE LABEL UUID MOUNTPOINT
|
||||
loop0 squashfs /var/lib/snapd/snap/core/4327
|
||||
sda
|
||||
├─sda1 ext4 83a1dbbf-1e15-4b45-94fe-134d3872af96 /boot
|
||||
├─sda2 ext4 4d25ddb0-5b20-40b4-ae35-ef96376d6594 /
|
||||
└─sda3 swap 1f8f5e2e-7c17-4f35-97e6-8bce7a4849cb [SWAP]
|
||||
sr0
|
||||
|
||||
```
|
||||
|
||||
For more details, refer to the man pages.
|
||||
```
|
||||
$ man lsblk
|
||||
|
||||
```
|
||||
|
||||
#### Method 8 – Using mount command
|
||||
|
||||
The **mount** command is used to mount local or remote filesystems in Unix-like systems.
|
||||
|
||||
To find out the type of a filesystem using mount command, do:
|
||||
```
|
||||
$ mount | grep "^/dev"
|
||||
/dev/sda2 on / type ext4 (rw,relatime,commit=360)
|
||||
/dev/sda1 on /boot type ext4 (rw,relatime,commit=360,data=ordered)
|
||||
|
||||
```
|
||||
|
||||
For more details, refer to the man pages.
|
||||
```
|
||||
$ man mount
|
||||
|
||||
```
|
||||
|
||||
And, that’s all for now, folks. You now know 8 different Linux commands to find out the type of a mounted Linux filesystem. If you know any other methods, feel free to let me know in the comment section below. I will check and update this guide accordingly.
|
||||
|
||||
More good stuffs to come. Stay tuned!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-find-the-mounted-filesystem-type-in-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/07/findmnt-1.png
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2018/07/findmnt-2.png
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2018/07/df.png
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2018/07/fstab.png
|
@ -0,0 +1,110 @@
|
||||
Users, Groups and Other Linux Beasts: Part 2
|
||||
======
|
||||
![](https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2)
|
||||
In this ongoing tour of Linux, we’ve looked at [how to manipulate folders/directories][1], and now we’re continuing our discussion of _permissions_ , _users_ and _groups_ , which are necessary to establish who can manipulate which files and directories. [Last time,][2] we showed how to create new users, and now we’re going to dive right back in:
|
||||
|
||||
You can create new groups with the `groupadd` command and then add users to them at will. For example, using:
|
||||
```
|
||||
sudo groupadd photos
|
||||
|
||||
```
|
||||
|
||||
will create the _photos_ group.
|
||||
|
||||
You’ll need to [create a directory][1] hanging off the root directory:
|
||||
```
|
||||
sudo mkdir /photos
|
||||
|
||||
```
|
||||
|
||||
If you run `ls -l /`, one of the lines will be:
|
||||
```
|
||||
drwxr-xr-x 1 root root 0 jun 26 21:14 photos
|
||||
|
||||
```
|
||||
|
||||
The first _root_ in the output is the user owner and the second _root_ is the group owner.
|
||||
|
||||
To transfer the ownership of the _/photos_ directory to the _photos_ group, use
|
||||
```
|
||||
sudo chgrp photos /photos
|
||||
|
||||
```
|
||||
|
||||
The `chgrp` command typically takes two parameters. The first parameter is the group that will take ownership of the file or directory, and the second is the file or directory you want to give over to the group.
|
||||
|
||||
Next, run `ls -l /` and you'll see the line has changed to:
|
||||
```
|
||||
drwxr-xr-x 1 root photos 0 jun 26 21:14 photos
|
||||
|
||||
```
|
||||
|
||||
You have successfully transferred the ownership of your new directory over to the _photos_ group.
|
||||
|
||||
Then, add your own user and the _guest_ user to the _photos_ group:
|
||||
```
|
||||
sudo usermod <your username here> -a -G photos
|
||||
sudo usermod guest -a -G photos
|
||||
|
||||
```
|
||||
|
||||
You may have to log out and log back in to see the changes, but, when you do, running `groups` will show _photos_ as one of the groups you belong to.
|
||||
|
||||
A couple of things to point out about the `usermod` command shown above. First: Be careful not to use the `-g` option instead of `-G`. The `-g` option changes your primary group and could lock you out of your stuff if you use it by accident. `-G`, on the other hand, _adds_ you to the groups listed and doesn't mess with the primary group. If you want to add your user to more groups than one, list them one after another, separated by commas, no spaces, after `-G`:
|
||||
```
|
||||
sudo usermod <your username> -a -G photos,pizza,spaceforce
|
||||
|
||||
```
|
||||
|
||||
Second: Be careful not to forget the `-a` parameter. The `-a` parameter stands for _append_ and attaches the list of groups you pass to `-G` to the ones you already belong to. This means that, if you don't include `-a`, the list of groups you already belong to will be overwritten, again locking you out of stuff you need.
|
||||
|
||||
Neither of these is a catastrophic problem, but it will mean you have to manually add your user back to all the groups it belonged to, which can be a pain, especially if you have lost access to the _sudo_ and _wheel_ groups.
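A quick way to double-check which groups a user belongs to before and after running `usermod` is the `id` command; for example, for the _guest_ user:

```
id -nG guest
```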
|
||||
|
||||
### Permits, Please!
|
||||
|
||||
There is still one more thing to do before you can copy images to the _/photos_ directory. Notice how, when you did `ls -l /` above, permissions for that folder came back as _drwxr-xr-x_.
|
||||
|
||||
If you read [the article I recommended at the beginning of this post][3], you'll know that the first _d_ indicates that the entry in the file system is a directory, and then you have three sets of three characters ( _rwx_ , _r-x_ , _r-x_ ) that indicate the permissions for the user owner ( _rwx_ ) of the directory, then the group owner ( _r-x_ ), and finally the rest of the users ( _r-x_ ). This means that the only person who has write permissions so far, that is, the only person who can copy or create files in the _/photos_ directory, is the _root_ user.
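If you want a compact view of just those ownership and permission details for the directory, GNU `stat` can print them directly (an illustrative example; the format string shown is only one possible choice):

```
stat -c '%A %U %G %n' /photos
```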
|
||||
|
||||
But [that article I mentioned also tells you how to change the permissions for a directory or file][3]:
|
||||
```
|
||||
sudo chmod g+w /photos
|
||||
|
||||
```
|
||||
|
||||
Running `ls -l /` after that will give you _/photos_ permissions as _drwxrwxr-x_ which is what you want: group members can now write into the directory.
|
||||
|
||||
Now you can try and copy an image or, indeed, any other file to the directory and it should go through without a problem:
|
||||
```
|
||||
cp image.jpg /photos
|
||||
|
||||
```
|
||||
|
||||
The _guest_ user will also be able to read from and write to the directory, and even move or delete files created by other users within the shared directory.
|
||||
|
||||
### Conclusion
|
||||
|
||||
The permissions and privileges system in Linux has been honed over decades, inherited as it is from the old Unix systems of yore. As such, it works very well and is well thought out. Becoming familiar with it is essential for any Linux sysadmin. In fact, you can't do much admining at all unless you understand it. But it's not that hard.
|
||||
|
||||
Next time, we'll dive into files and see the different ways of creating, manipulating, and destroying them in creative ways. Always fun, that last one.
|
||||
|
||||
See you then!
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux"][4] course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2
|
||||
|
||||
作者:[Paul Brown][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/bro66
|
||||
[1]:https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux
|
||||
[2]:https://www.linux.com/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts
|
||||
[3]:https://www.linux.com/learn/understanding-linux-file-permissions
|
||||
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -0,0 +1,95 @@
|
||||
Getting started with Etcher.io
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)
|
||||
|
||||
Bootable USB drives are a great way to try out a new Linux distribution to see if you like it before you install. While some Linux distributions, like [Fedora][1], make it easy to create bootable media, most others provide the ISOs or image files and leave the media creation decisions up to the user. There's always the option to use `dd` to create media on the command line—but let's face it, even for the most experienced user, that's still a pain. There are other utilities—like UnetBootIn, Disk Utility on MacOS, and Win32DiskImager on Windows—that create bootable USBs.
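For comparison, the `dd` route typically looks something like the following sketch. Here distro.iso and /dev/sdX are placeholders for your own image file and USB device, and picking the wrong device can destroy data on another disk:

```
# distro.iso and /dev/sdX are placeholders -- substitute your own image and device
$ sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress conv=fsync
```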
|
||||
|
||||
### Installing Etcher
|
||||
|
||||
About 18 months ago, I came upon [Etcher.io][2] , a great open source project that allows easy and foolproof media creation on Linux, Windows, or MacOS. Etcher.io has become my "go-to" application for creating bootable media for Linux. I can easily download ISO or IMG files and burn them to flash drives and SD cards. It's an open source project licensed under [Apache 2.0][3] , and the [source code][4] is available on GitHub.
|
||||
|
||||
Go to the [Etcher.io][5] website and click on the download link for your operating system—32- or 64-bit Linux, 32- or 64-bit Windows, or MacOS.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/etcher_1.png)
|
||||
|
||||
Etcher provides great instructions in its GitHub repository for adding Etcher to your collection of Linux utilities.
|
||||
|
||||
If you are on Debian or Ubuntu, add the Etcher Debian repository:
|
||||
```
|
||||
$ echo "deb https://dl.bintray.com/resin-io/debian stable etcher" | sudo tee /etc/apt/sources.list.d/etcher.list
|
||||
|
||||
|
||||
|
||||
# Trust the Bintray.com GPG key
|
||||
|
||||
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61
|
||||
|
||||
```
|
||||
|
||||
Then update your system and install:
|
||||
```
|
||||
$ sudo apt-get update
|
||||
|
||||
$ sudo apt-get install etcher-electron
|
||||
|
||||
```
|
||||
|
||||
If you are using Fedora or Red Hat Enterprise Linux, add the Etcher RPM repository:
|
||||
```
|
||||
$ sudo wget https://bintray.com/resin-io/redhat/rpm -O /etc/yum.repos.d/bintray-resin-io-redhat.repo
|
||||
|
||||
```
|
||||
|
||||
Update and install using either:
|
||||
```
|
||||
$ sudo yum install -y etcher-electron
|
||||
|
||||
```
|
||||
|
||||
or:
|
||||
```
|
||||
$ sudo dnf install -y etcher-electron
|
||||
|
||||
```
|
||||
|
||||
### Creating bootable drives
|
||||
|
||||
In addition to creating bootable images for Ubuntu, EndlessOS, and other flavors of Linux, I have used Etcher to [create SD card images][6] for the Raspberry Pi. Here's how to create bootable media.
|
||||
|
||||
First, download to your computer the ISO or image you want to use. Then, launch Etcher and insert your USB or SD card into the computer.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/etcher_2.png)
|
||||
|
||||
Click on **Select Image**. In this example, I want to create a bootable USB drive to install Ubermix on a new computer. Once I have selected my Ubermix image file and inserted my USB drive into the computer, Etcher.io "sees" the drive, and I can begin the process of installing Ubermix on my USB.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/etcher_3.png)
|
||||
|
||||
Once I click on **Flash** , the installation process begins. The time required depends on the image's size. After the image is installed on the drive, the software verifies the installation; at the end, a banner announces my media creation is complete.
|
||||
|
||||
If you need [help with Etcher][7], contact the community through its [Discourse][8] forum. Etcher is very easy to use, and it has replaced all my other media creation tools because none of them do the job as easily or as well as Etcher.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/getting-started-etcherio
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://getfedora.org/en_GB/workstation/download/
|
||||
[2]:http://etcher.io
|
||||
[3]:https://github.com/resin-io/etcher/blob/master/LICENSE
|
||||
[4]:https://github.com/resin-io/etcher
|
||||
[5]:https://etcher.io/
|
||||
[6]:https://www.raspberrypi.org/magpi/pi-sd-etcher/
|
||||
[7]:https://github.com/resin-io/etcher/blob/master/SUPPORT.md
|
||||
[8]:https://forums.resin.io/c/etcher
|
@ -1,225 +0,0 @@
|
||||
translating by pityonline
|
||||
|
||||
What is CI/CD?
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh)
|
||||
|
||||
Continuous integration (CI) and continuous delivery (CD) are extremely common terms used when talking about producing software. But what do they really mean? In this article, I'll explain the meaning and significance behind these and related terms, such as continuous testing and continuous deployment.
|
||||
|
||||
### Quick summary
|
||||
|
||||
An assembly line in a factory produces consumer goods from raw materials in a fast, automated, reproducible manner. Similarly, a software delivery pipeline produces releases from source code in a fast, automated, and reproducible manner. The overall design for how this is done is called "continuous delivery." The process that kicks off the assembly line is referred to as "continuous integration." The process that ensures quality is called "continuous testing" and the process that makes the end product available to users is called "continuous deployment." And the overall efficiency experts that make everything run smoothly and simply for everyone are known as "DevOps" practitioners.
|
||||
|
||||
### What does "continuous" mean?
|
||||
|
||||
Continuous is used to describe many different processes that follow the practices I describe here. It doesn't mean "always running." It does mean "always ready to run." In the context of creating software, it also includes several core concepts/best practices. These are:
|
||||
|
||||
* **Frequent releases:** The goal behind continuous practices is to enable delivery of quality software at frequent intervals. Frequency here is variable and can be defined by the team or company. For some products, once a quarter, month, week, or day may be frequent enough. For others, multiple times a day may be desired and doable. Continuous can also take on an "occasional, as-needed" aspect. The end goal is the same: Deliver software updates of high quality to end users in a repeatable, reliable process. Often this may be done with little to no interaction or even knowledge of the users (think device updates).
|
||||
|
||||
* **Automated processes:** A key part of enabling this frequency is having automated processes to handle nearly all aspects of software production. This includes building, testing, analysis, versioning, and, in some cases, deployment.
|
||||
|
||||
* **Repeatable:** If we are using automated processes that always have the same behavior given the same inputs, then processing should be repeatable. That is, if we go back and enter the same version of code as an input, we should get the same set of deliverables. This also assumes we have the same versions of external dependencies (i.e., other deliverables we don't create that our code uses). Ideally, this also means that the processes in our pipelines can be versioned and re-created (see the DevOps discussion later on).
|
||||
|
||||
* **Fast processing:** "Fast" is a relative term here, but regardless of the frequency of software updates/releases, continuous processes are expected to process changes from source code to deliverables in an efficient manner. Automation takes care of much of this, but automated processes may still be slow. For example, integrated testing across all aspects of a product that takes most of the day may be too slow for product updates that have a new candidate release multiple times per day.
|
||||
|
||||
|
||||
|
||||
|
||||
### What is a "continuous delivery pipeline"?
|
||||
|
||||
The different tasks and jobs that handle transforming source code into a releasable product are usually strung together into a software "pipeline" where successful completion of one automatic process kicks off the next process in the sequence. Such pipelines go by many different names, such as continuous delivery pipeline, deployment pipeline, and software development pipeline. An overall supervisor application manages the definition, running, monitoring, and reporting around the different pieces of the pipeline as they are executed.
|
||||
|
||||
### How does a continuous delivery pipeline work?
|
||||
|
||||
The actual implementation of a software delivery pipeline can vary widely. There are a large number and variety of applications that may be used in a pipeline for the various aspects of source tracking, building, testing, gathering metrics, managing versions, etc. But the overall workflow is generally the same. A single orchestration/workflow application manages the overall pipeline, and each of the processes runs as a separate job or is stage-managed by that application. Typically, the individual "jobs" are defined in a syntax and structure that the orchestration application understands and can manage as a workflow.
|
||||
|
||||
Jobs are created to do one or more functions (building, testing, deploying, etc.). Each job may use a different technology or multiple technologies. The key is that the jobs are automated, efficient, and repeatable. If a job is successful, the workflow manager application triggers the next job in the pipeline. If a job fails, the workflow manager alerts developers, testers, and others so they can correct the problem as quickly as possible. Because of the automation, errors can be found much more quickly than by running a set of manual processes. This quick identification of errors is called "fail fast" and can be just as valuable in getting to the pipeline's endpoint.
|
||||
|
||||
### What is meant by "fail fast"?
|
||||
|
||||
One of a pipeline's jobs is to quickly process changes. Another is to monitor the different tasks/jobs that create the release. Since code that doesn't compile or fails a test can hold up the pipeline, it's important for the users to be notified quickly of such situations. Fail fast refers to the idea that the pipeline processing finds problems as soon as possible and quickly notifies users so the problems can be corrected and code resubmitted for another run through the pipeline. Often, the pipeline process can look at the history to determine who made that change and notify the person and their team.
|
||||
|
||||
### Do all parts of a continuous delivery pipeline have to be automated?
|
||||
|
||||
Nearly all parts of the pipeline should be automated. For some parts, it may make sense to have a spot for human intervention/interaction. An example might be for user-acceptance testing (having end users try out the software and make sure it does what they want/expect). Another case might be deployment to production environments where groups want to have more human control. And, of course, human intervention is required if the code isn't correct and breaks.
|
||||
|
||||
With that background on the meaning of continuous, let's look at the different types of continuous processing and what each means in the context of a software pipeline.
|
||||
|
||||
### What is continuous integration?
|
||||
|
||||
Continuous integration (CI) is the process of automatically detecting, pulling, building, and (in most cases) doing unit testing as source code is changed for a product. CI is the activity that starts the pipeline (although certain pre-validations—often called "pre-flight checks"—are sometimes incorporated ahead of CI).
|
||||
|
||||
The goal of CI is to quickly make sure a new change from a developer is "good" and suitable for further use in the code base.
|
||||
|
||||
### How does continuous integration work?
|
||||
|
||||
The basic idea is having an automated process "watching" one or more source code repositories for changes. When a change is pushed to the repositories, the watching process detects the change, pulls down a copy, builds it, and runs any associated unit tests.
|
||||
|
||||
### How does continuous integration detect changes?
|
||||
|
||||
These days, the watching process is usually an application like [Jenkins][1] that also orchestrates all (or most) of the processes running in the pipeline and monitors for changes as one of its functions. The watching application can monitor for changes in several different ways. These include:
|
||||
|
||||
* **Polling:** The monitoring program repeatedly asks the source management system, "Do you have anything new in the repositories I'm interested in?" When the source management system has new changes, the monitoring program "wakes up" and does its work to pull the new code and build/test it.
|
||||
|
||||
* **Periodic:** The monitoring program is configured to periodically kick off a build regardless of whether there are changes or not. Ideally, if there are no changes, then nothing new is built, so this doesn't add much additional cost.
|
||||
|
||||
* **Push:** This is the inverse of the monitoring application checking with the source management system. In this case, the source management system is configured to "push out" a notification to the monitoring application when a change is committed into a repository. Most commonly, this can be done in the form of a "webhook"—a program that is "hooked" to run when new code is pushed and sends a notification over the internet to the monitoring program. For this to work, the monitoring program must have an open port that can receive the webhook information over the internet.
|
||||
|
||||
|
||||
|
||||
|
||||
### What are "pre-checks" (aka pre-flight checks)?
|
||||
|
||||
Additional validations may be done before code is introduced into the source repository and triggers continuous integration. These follow best practices such as test builds and code reviews. They are usually built into the development process before the code is introduced in the pipeline. But some pipelines may also include them as part of their monitored processes or workflows.
|
||||
|
||||
As an example, a tool called [Gerrit][2] allows for formal code reviews, validations, and test builds after a developer has pushed code but before it is allowed into the ([Git][3] remote) repository. Gerrit sits between the developer's workspace and the Git remote repository. It "catches" pushes from the developer and can do pass/fail validations to ensure they pass before being allowed to make it into the repository. This can include detecting the proposed change and kicking off a test build (a form of CI). It also allows for groups to do formal code reviews at that point. In this way, there is an extra measure of confidence that the change will not break anything when it is merged into the codebase.
|
||||
|
||||
### What are "unit tests"?
|
||||
|
||||
Unit tests (also known as "commit tests") are small, focused tests written by developers to ensure new code works in isolation. "In isolation" here means not depending on or making calls to other code that isn't directly accessible nor depending on external data sources or other modules. If such a dependency is required for the code to run, those resources can be represented by mocks. Mocks refer to using a code stub that looks like the resource and can return values but doesn't implement any functionality.
|
||||
|
||||
In most organizations, developers are responsible for creating unit tests to prove their code works. In fact, one model (known as test-driven development [TDD]) requires unit tests to be designed first as a basis for clearly identifying what the code should do. Because such code changes can be fast and numerous, they must also be fast to execute.
|
||||
|
||||
As they relate to the continuous integration workflow, a developer creates or updates the source in their local working environment and uses the unit tests to ensure the newly developed function or method works. Typically, these tests take the form of asserting that a given set of inputs to a function or method produces a given set of outputs. They generally test to ensure that error conditions are properly flagged and handled. Various unit-testing frameworks, such as [JUnit][4] for Java development, are available to assist.
|
||||
|
||||
### What is continuous testing?
|
||||
|
||||
Continuous testing refers to the practice of running automated tests of broadening scope as code goes through the CD pipeline. Unit testing is typically integrated with the build processes as part of the CI stage and focused on testing code in isolation from other code interacting with it.
|
||||
|
||||
Beyond that, there are various forms of testing that can/should occur. These can include:
|
||||
|
||||
* **Integration testing** validates that groups of components and services all work together.
|
||||
|
||||
* **Functional testing** validates the result of executing functions in the product are as expected.
|
||||
|
||||
* **Acceptance testing** measures some characteristic of the system against acceptable criteria. Examples include performance, scalability, stress, and capacity.
|
||||
|
||||
|
||||
|
||||
|
||||
All of these may not be present in the automated pipeline, and the lines between some of the different types can be blurred. But the goal of continuous testing in a delivery pipeline is always the same: to prove by successive levels of testing that the code is of a quality that it can be used in the release that's in progress. Building on the continuous principle of being fast, a secondary goal is to find problems quickly and alert the development team. This is usually referred to as fail fast.
|
||||
|
||||
### Besides testing, what other kinds of validations can be done against code in the pipeline?
|
||||
|
||||
In addition to the pass/fail aspects of tests, applications exist that can also tell us the number of source code lines that are exercised (covered) by our test cases. This is an example of a metric that can be computed across the source code. This metric is called code-coverage and can be measured by tools (such as [JaCoCo][5] for Java source).
|
||||
|
||||
Many other types of metrics exist, such as counting lines of code, measuring complexity, and comparing coding structures against known patterns. Tools such as [SonarQube][6] can examine source code and compute these metrics. Beyond that, users can set thresholds for what kind of ranges they are willing to accept as "passing" for these metrics. Then, processing in the pipeline can be set to check the computed values against the thresholds, and if the values aren't in the acceptable range, processing can be stopped. Applications such as SonarQube are highly configurable and can be tuned to check only for the things that a team is interested in.
|
||||
|
||||
### What is continuous delivery?
|
||||
|
||||
Continuous delivery (CD) generally refers to the overall chain of processes (pipeline) that automatically gets source code changes and runs them through build, test, packaging, and related operations to produce a deployable release, largely without any human intervention.
|
||||
|
||||
The goals of CD in producing software releases are automation, efficiency, reliability, reproducibility, and verification of quality (through continuous testing).
|
||||
|
||||
CD incorporates CI (automatically detecting source code changes, executing build processes for the changes, and running unit tests to validate), continuous testing (running various kinds of tests on the code to gain successive levels of confidence in the quality of the code), and (optionally) continuous deployment (making releases from the pipeline automatically available to users).
|
||||
|
||||
### How are multiple versions identified/tracked in pipelines?
|
||||
|
||||
Versioning is a key concept in working with CD and pipelines. Continuous implies the ability to frequently integrate new code and make updated releases available. But that doesn't imply that everyone always wants the "latest and greatest." This may be especially true for internal teams that want to develop or test against a known, stable release. So, it is important that the pipeline versions objects that it creates and can easily store and access those versioned objects.
|
||||
|
||||
The objects created in the pipeline processing from the source code can generally be called artifacts. Artifacts should have versions applied to them when they are built. The recommended strategy for assigning version numbers to artifacts is called semantic versioning. (This also applies to versions of dependent artifacts that are brought in from external sources.)
|
||||
|
||||
Semantic version numbers have three parts: major, minor, and patch. (For example, 1.4.3 reflects major version 1, minor version 4, and patch version 3.) The idea is that a change in one of these parts represents a level of update in the artifact. The major version is incremented only for incompatible API changes. The minor version is incremented when functionality is added in a backward-compatible manner. And the patch version is incremented when backward-compatible bug fixes are made. These are recommended guidelines, but teams are free to vary from this approach, as long as they do so in a consistent and well-understood manner across the organization. For example, a number that increases each time a build is done for a release may be put in the patch field.
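As a purely illustrative sketch (the version string is a made-up example), a pipeline step could derive the next patch-level version from the current one like this:

```
# Bump the patch field of a semantic version string
version="1.4.3"
next=$(echo "$version" | awk -F. '{printf "%d.%d.%d", $1, $2, $3 + 1}')
echo "$next"   # prints 1.4.4
```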
|
||||
|
||||
### How are artifacts "promoted"?
|
||||
|
||||
Teams can assign a promotion "level" to artifacts to indicate suitability for testing, production, etc. There are various approaches. Applications such as Jenkins or [Artifactory][7] can be enabled to do promotion. Or a simple scheme can be to add a label to the end of the version string. For example, -snapshot can indicate the latest version (snapshot) of the code was used to build the artifact. Various promotion strategies or tools can be used to "promote" the artifact to other levels such as -milestone or -production as an indication of the artifact's stability and readiness for release.
|
||||
|
||||
### How are multiple versions of artifacts stored and accessed?
|
||||
|
||||
Versioned artifacts built from source can be stored via applications that manage "artifact repositories." Artifact repositories are like source management for built artifacts. The application (such as Artifactory or [Nexus][8]) can accept versioned artifacts, store and track them, and provide ways for them to be retrieved.
|
||||
|
||||
Pipeline users can specify the versions they want to use and have the pipeline pull in those versions.
|
||||
|
||||
### What is continuous deployment?
|
||||
|
||||
Continuous deployment (CD) refers to the idea of being able to automatically take a release of code that has come out of the CD pipeline and make it available for end users. Depending on the way the code is "installed" by users, that may mean automatically deploying something in a cloud, making an update available (such as for an app on a phone), updating a website, or simply updating the list of available releases.
|
||||
|
||||
An important point here is that just because continuous deployment can be done doesn't mean that every set of deliverables coming out of a pipeline is always deployed. It does mean that, via the pipeline, every set of deliverables is proven to be "deployable." This is accomplished in large part by the successive levels of continuous testing (see the section on Continuous Testing in this article).
|
||||
|
||||
Whether or not a release from a pipeline run is deployed may be gated by human decisions and various methods employed to "try out" a release before fully deploying it.
|
||||
|
||||
### What are some ways to test out deployments before fully deploying to all users?
|
||||
|
||||
Since having to rollback/undo a deployment to all users can be a costly situation (both technically and in the users' perception), numerous techniques have been developed to allow "trying out" deployments of new functionality and easily "undoing" them if issues are found. These include:
|
||||
|
||||
#### Blue/green testing/deployments
|
||||
|
||||
In this approach to deploying software, two identical hosting environments are maintained — a _blue_ one and a _green_ one. (The colors are not significant and serve only as identifiers.) At any given point, one of these is the _production_ deployment and the other is the _candidate_ deployment.
|
||||
|
||||
In front of these instances is a router or other system that serves as the customer “gateway” to the product or application. By pointing the router to the desired blue or green instance, customer traffic can be directed to the desired deployment. In this way, swapping out which deployment instance is pointed to (blue or green) is quick, easy, and transparent to the user.
|
||||
|
||||
When a new release is ready for testing, it can be deployed to the non-production environment. After it’s been tested and approved, the router can be changed to point the incoming production traffic to it (so it becomes the new production site). Now the hosting environment that was production is available for the next candidate.
|
||||
|
||||
Likewise, if a problem is found with the latest deployment and the previous production instance is still deployed in the other environment, a simple change can point the customer traffic back to the previous production instance — effectively taking the instance with the problem “offline” and rolling back to the previous version. The new deployment with the problem can then be fixed in the other area.
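A minimal Go sketch of the gateway idea: a reverse proxy holds a swappable reference to whichever environment is currently live, and flipping that reference redirects all traffic. The hostnames, port, and the /switch endpoint are placeholders for illustration, not part of any real deployment tool:

```
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// blue is the current production deployment, green is the candidate.
	// Both hostnames are placeholders for this sketch.
	blue, _ := url.Parse("http://blue.internal:8080")
	green, _ := url.Parse("http://green.internal:8080")

	var live atomic.Value // holds the *url.URL currently serving production
	live.Store(blue)

	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Rewrite each incoming request to whichever environment is live.
			target := live.Load().(*url.URL)
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	// A (hypothetical) protected endpoint that swaps production traffic
	// to the other color, or back again after a rollback decision.
	http.HandleFunc("/switch", func(w http.ResponseWriter, r *http.Request) {
		if live.Load().(*url.URL) == blue {
			live.Store(green)
		} else {
			live.Store(blue)
		}
		w.Write([]byte("switched\n"))
	})

	http.Handle("/", proxy)
	http.ListenAndServe(":8888", nil)
}
```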
|
||||
|
||||
#### Canary testing/deployment
|
||||
|
||||
In some cases, swapping out the entire deployment via a blue/green environment may not be workable or desired. Another approach is known as _canary_ testing/deployment. In this model, a portion of customer traffic is rerouted to new pieces of the product. For example, a new version of a search service in a product may be deployed alongside the current production version of the service. Then, 10% of search queries may be routed to the new version to test it out in a production environment.
|
||||
|
||||
If the new service handles the limited traffic with no problems, then more traffic may be routed to it over time. If no problems arise, then over time, the amount of traffic routed to the new service can be increased until 100% of the traffic is going to it. This effectively “retires” the previous version of the service and puts the new version into effect for all customers.
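As an illustration only, here is a small Go sketch of that routing decision: a fixed percentage of /search requests goes to the canary handler, and that percentage can be raised as confidence grows. The handler names and port are invented for the example:

```
package main

import (
	"fmt"
	"math/rand"
	"net/http"
)

// currentSearch and canarySearch stand in for the stable and new versions
// of the search service in this sketch.
func currentSearch(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "result from current search service")
}

func canarySearch(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "result from canary search service")
}

func main() {
	canaryPercent := 10 // raise this over time as the canary proves itself

	http.HandleFunc("/search", func(w http.ResponseWriter, r *http.Request) {
		// Roughly canaryPercent% of queries exercise the new version.
		if rand.Intn(100) < canaryPercent {
			canarySearch(w, r)
			return
		}
		currentSearch(w, r)
	})

	http.ListenAndServe(":8888", nil)
}
```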
|
||||
|
||||
#### Feature toggles
|
||||
|
||||
For new functionality that may need to be easily backed out (in case a problem is found), developers can add a feature toggle. This is a software if-then switch in the code that only activates the new code if a data value is set. This data value can be stored in a globally accessible place that the deployed application checks to see whether it should execute the new code. If the data value is set, it executes the code; if not, it doesn't.
|
||||
|
||||
This gives developers a remote "kill switch" to turn off the new functionality if a problem is found after deployment to production.
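A minimal Go sketch of such a toggle, assuming purely for illustration that the globally accessible place is an environment variable (in practice it might be a config service or a database entry); the variable and function names are invented:

```
package main

import (
	"fmt"
	"os"
)

// newCheckoutEnabled reads the toggle from a globally accessible place.
// An environment variable is used here only to keep the sketch self-contained.
func newCheckoutEnabled() bool {
	return os.Getenv("FEATURE_NEW_CHECKOUT") == "on"
}

func checkout() {
	if newCheckoutEnabled() {
		fmt.Println("running the new checkout code path")
		return
	}
	fmt.Println("running the existing checkout code path")
}

func main() {
	// Flipping FEATURE_NEW_CHECKOUT off acts as the remote "kill switch".
	checkout()
}
```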
|
||||
|
||||
#### Dark launch
|
||||
|
||||
In this practice, code is incrementally tested/deployed into production, but changes are not made visible to users (thus the "dark" name). For example, in the production release, some portion of web queries might be redirected to a service that queries a new data source. This information can be collected by development for analysis—without exposing any information about the interface, transaction, or results back to users.
|
||||
|
||||
The idea here is to get real information on how a candidate change would perform under a production load without impacting users or changing their experience. Over time, more load can be redirected until either a problem is found or the new functionality is deemed ready for all to use. Feature flags can actually be used to handle the mechanics of dark launches.
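One possible shape of that mirroring, sketched in Go: the production handler answers the user as usual, while a background goroutine sends a copy of the query to a candidate service and discards the response (this is where timing or metrics collection would happen). The candidate URL and endpoint are placeholders:

```
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// handleQuery answers users from the current production path while silently
// mirroring the same query to a candidate service. The candidate's response
// is read for analysis but never returned to the user.
func handleQuery(w http.ResponseWriter, r *http.Request) {
	q := r.URL.Query().Get("q")

	// "Dark" traffic: send a copy of the query in the background.
	go func() {
		resp, err := http.Get("http://candidate.internal:8080/search?q=" + url.QueryEscape(q))
		if err != nil {
			fmt.Println("dark launch request failed:", err)
			return
		}
		defer resp.Body.Close()
		io.Copy(io.Discard, resp.Body) // timing/metrics collection would go here
	}()

	// The user still sees only the production result.
	fmt.Fprintln(w, "production result for:", q)
}

func main() {
	http.HandleFunc("/search", handleQuery)
	http.ListenAndServe(":8888", nil)
}
```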
|
||||
|
||||
### What is DevOps?
|
||||
|
||||
[DevOps][9] is a set of ideas and recommended practices around how to make it easier for development and operational teams to work together on developing and releasing software. Historically, development teams created products but did not install/deploy them in a regular, repeatable way, as customers would do. That set of install/deploy tasks (as well as other support tasks) were left to the operations teams to sort out late in the cycle. This often resulted in a lot of confusion and problems, since the operations team was brought into the loop late in the cycle and had to make what they were given work in a short timeframe. As well, development teams were often left in a bad position—because they had not sufficiently tested the product's install/deploy functionality, they could be surprised by problems that emerged during that process.
|
||||
|
||||
This often led to a serious disconnect and lack of cooperation between development and operations teams. The DevOps ideals advocate ways of doing things that involve both development and operations staff from the start of the cycle through the end, such as CD.
|
||||
|
||||
### How does CD intersect with DevOps?
|
||||
|
||||
The CD pipeline is an implementation of several DevOps ideals. The later stages of a product, such as packaging and deployment, can always be done on each run of the pipeline rather than waiting for a specific point in the product development cycle. As well, both development and operations staff can clearly see when things work and when they don't, from development to deployment. For a cycle of a CD pipeline to be successful, it must pass through not only the processes associated with development but also the ones associated with operations.
|
||||
|
||||
Carried to the next level, DevOps suggests that even the infrastructure that implements the pipeline be treated like code. That is, it should be automatically provisioned, trackable, easy to change, and spawn a new run of the pipeline if it changes. This can be done by implementing the pipeline as code.
|
||||
|
||||
### What is "pipeline-as-code"?
|
||||
|
||||
Pipeline-as-code is a general term for creating pipeline jobs/tasks via programming code, just as developers work with source code for products. The goal is to have the pipeline implementation expressed as code so it can be stored with the code, reviewed, tracked over time, and easily spun up again if there is a problem and the pipeline must be stopped. Several tools allow this, including [Jenkins 2][1].
|
||||
|
||||
### How does DevOps impact infrastructure for producing software?
|
||||
|
||||
Traditionally, individual hardware systems used in pipelines were configured with software (operating systems, applications, development tools, etc.) one at a time. At the extreme, each system was a custom, hand-crafted setup. This meant that when a system had problems or needed to be updated, that was frequently a custom task as well. This kind of approach goes against the fundamental CD ideal of having an easily reproducible and trackable environment.
|
||||
|
||||
Over the years, applications have been developed to standardize provisioning (installing and configuring) systems. As well, virtual machines were developed as programs that emulate computers running on top of other computers. These VMs require a supervisory program to run them on the underlying host system. And they require their own operating system copy to run.
|
||||
|
||||
Next came containers. Containers, while similar in concept to VMs, work differently. Instead of requiring a separate program and a copy of an OS to run, they simply use some existing OS constructs to carve out isolated space in the operating system. Thus, they behave similarly to a VM to provide the isolation but don't require the overhead.
|
||||
|
||||
Because VMs and containers are created from stored definitions, they can be destroyed and re-created easily with no impact to the host systems where they are running. This allows a re-creatable system to run pipelines on. Also, for containers, we can track changes to the definition file they are built from—just as we would for source code.
|
||||
|
||||
Thus, if we run into a problem in a VM or container, it may be easier and quicker to just destroy and re-create it instead of trying to debug and make a fix to the existing one.
|
||||
|
||||
This also implies that any change to the code for the pipeline can trigger a new run of the pipeline (via CI) just as a change to code would. This is one of the core ideals of DevOps regarding infrastructure.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/8/what-cicd
|
||||
|
||||
作者:[Brent Laster][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/bclaster
|
||||
[1]:https://jenkins.io
|
||||
[2]:https://www.gerritcodereview.com
|
||||
[3]:https://opensource.com/resources/what-is-git
|
||||
[4]:https://junit.org/junit5/
|
||||
[5]:https://www.eclemma.org/jacoco/
|
||||
[6]:https://www.sonarqube.org/
|
||||
[7]:https://jfrog.com/artifactory/
|
||||
[8]:https://www.sonatype.com/nexus-repository-sonatype
|
||||
[9]:https://opensource.com/resources/devops
|
@ -0,0 +1,231 @@
|
||||
How to display data in a human-friendly way on Linux
|
||||
======
|
||||
|
||||
![](https://images.idgesg.net/images/article/2018/08/smile-face-on-hand-100767756-large.jpg)
|
||||
|
||||
Not everyone thinks in binary or wants to mentally insert commas into large numbers to come to grips with the sizes of their files. So, it's not surprising that Linux commands have evolved over several decades to incorporate more human-friendly ways of displaying information to their users. In today’s post, we look at some of the options provided by various commands that make digesting data just a little easier.
|
||||
|
||||
### Why not default to friendly?
|
||||
|
||||
If you’re wondering why human-friendliness isn’t the default –- we humans are, after all, the default users of computers — you might be asking yourself, “Why do we have to go out of our way to get command responses that will make sense to everyone?” The answer is primarily that changing the default output of commands would likely interfere with numerous other processes that were built to expect the default responses. Other tools, as well as scripts, that have been developed over the decades might break in some very ugly ways if they were suddenly fed output in a very different format than what they were built to expect.
|
||||
|
||||
It’s probably also true that some of us might prefer to see all of the digits in our file sizes — 1338277310 instead of 1.3G. In any case, switching defaults could be very disruptive, while promoting some easy options for more human-friendly responses only involves us learning some command options.
|
||||
|
||||
### Commands for displaying human-friendly data
|
||||
|
||||
What are some of the easy options for making the output of Unix commands a little easier to parse? Let's check some, command by command.
|
||||
|
||||
#### top
|
||||
|
||||
You may not have noticed this, but you can change the display of overall memory usage in top by typing " **E** " (i.e., capital E) once top is running. Successive presses will change the numeric display from KiB to MiB to GiB to TiB to PiB to EiB and back to KiB.
|
||||
|
||||
OK with those units? These and a couple more are defined here:
|
||||
```
|
||||
2**10 = 1,024 = 1 KiB (kibibyte)
|
||||
2**20 = 1,048,576 = 1 MiB (mebibyte)
|
||||
2**30 = 1,073,741,824 = 1 GiB (gibibyte)
|
||||
2**40 = 1,099,511,627,776 = 1 TiB (tebibyte)
|
||||
2**50 = 1,125,899,906,842,624 = 1 PiB (pebibyte)
|
||||
2**60 = 1,152,921,504,606,846,976 = 1 EiB (exbibyte)
|
||||
2**70 = 1,180,591,620,717,411,303,424 = 1 ZiB (zebibyte)
|
||||
2**80 = 1,208,925,819,614,629,174,706,176 = 1 YiB (yobibyte)
|
||||
|
||||
```
|
||||
|
||||
These units are closely related to kilobytes, megabytes, and gigabytes, etc. But, while close, there's still a significant difference between them. One set is based on powers of 10 and the other powers of 2. Comparing kilobytes and kibibytes, for example, we can see how they diverge:
|
||||
```
|
||||
KB = 1000 = 10**3
|
||||
KiB = 1024 = 2**10
|
||||
|
||||
```
|
||||
|
||||
Here's an example of top output using the default display in KiB:
|
||||
```
|
||||
top - 10:49:06 up 5 days, 35 min, 1 user, load average: 0.05, 0.04, 0.01
|
||||
Tasks: 158 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
|
||||
%Cpu(s): 0.0 us, 0.2 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
|
||||
KiB Mem : 6102680 total, 4634980 free, 392244 used, 1075456 buff/cache
|
||||
KiB Swap: 2097148 total, 2097148 free, 0 used. 5407432 avail Mem
|
||||
|
||||
```
|
||||
|
||||
After one press of an E, it changes to MiB:
|
||||
```
|
||||
top - 10:49:31 up 5 days, 36 min, 1 user, load average: 0.03, 0.04, 0.01
|
||||
Tasks: 158 total, 2 running, 118 sleeping, 0 stopped, 0 zombie
|
||||
%Cpu(s): 0.0 us, 0.6 sy, 0.0 ni, 99.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
|
||||
MiB Mem : 5959.648 total, 4526.348 free, 383.055 used, 1050.246 buff/cache
|
||||
MiB Swap: 2047.996 total, 2047.996 free, 0.000 used. 5280.684 avail Mem
|
||||
|
||||
```
|
||||
|
||||
After a second E, we get GiB:
|
||||
```
|
||||
top - 10:49:49 up 5 days, 36 min, 1 user, load average: 0.02, 0.03, 0.01
|
||||
Tasks: 158 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
|
||||
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
|
||||
GiB Mem : 5.820 total, 4.420 free, 0.374 used, 1.026 buff/cache
|
||||
GiB Swap: 2.000 total, 2.000 free, 0.000 used. 5.157 avail Mem
|
||||
|
||||
```
|
||||
|
||||
You can also change the numbers displaying per-process memory usage by pressing the letter “ **e** ”. It will change from the default of KiB to MiB to GiB to TiB to PiB (expect to see LOTS of zeroes!) and back. Here's some top output after one press of an " **e** ":
|
||||
```
|
||||
top - 08:45:28 up 4 days, 22:32, 1 user, load average: 0.02, 0.03, 0.00
|
||||
Tasks: 167 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
|
||||
%Cpu(s): 0.2 us, 0.0 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
|
||||
KiB Mem : 6102680 total, 4641836 free, 393348 used, 1067496 buff/cache
|
||||
KiB Swap: 2097148 total, 2097148 free, 0 used. 5406396 avail Mem
|
||||
|
||||
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
|
||||
784 root 20 0 543.2m 26.8m 16.1m S 0.9 0.5 0:22.20 snapd
|
||||
733 root 20 0 107.8m 2.0m 1.8m S 0.4 0.0 0:18.49 irqbalance
|
||||
22574 shs 20 0 107.5m 5.5m 4.6m S 0.4 0.1 0:00.09 sshd
|
||||
1 root 20 0 156.4m 9.3m 6.7m S 0.0 0.2 0:05.59 systemd
|
||||
|
||||
```
|
||||
|
||||
#### du
|
||||
|
||||
The du command, which shows how much disk space files or directories use, adjusts the sizes to the most appropriate measurement if the **-h** option is used. By default, it reports in kilobytes.
|
||||
```
|
||||
$ du camper*
|
||||
360 camper_10.jpg
|
||||
5684 camper.jpg
|
||||
240 camper_small.jpg
|
||||
$ du -h camper*
|
||||
360K camper_10.jpg
|
||||
5.6M camper.jpg
|
||||
240K camper_small.jpg
|
||||
|
||||
```
|
||||
|
||||
#### df
|
||||
|
||||
The df command also offers a **-h** option. Note in the example below how sizes are reported in both gigabytes and megabytes.
|
||||
```
|
||||
$ df -h | grep -v loop
|
||||
Filesystem Size Used Avail Use% Mounted on
|
||||
udev 2.9G 0 2.9G 0% /dev
|
||||
tmpfs 596M 1.7M 595M 1% /run
|
||||
/dev/sda1 110G 9.0G 95G 9% /
|
||||
tmpfs 3.0G 0 3.0G 0% /dev/shm
|
||||
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
|
||||
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
|
||||
tmpfs 596M 16K 596M 1% /run/user/121
|
||||
/dev/sdb2 457G 73M 434G 1% /apps
|
||||
tmpfs 596M 0 596M 0% /run/user/1000
|
||||
|
||||
```
|
||||
|
||||
The command below uses the **-h** option, but also includes **-T** to display the type of file system we are looking at.
|
||||
```
|
||||
$ df -hT /mnt2
|
||||
Filesystem Type Size Used Avail Use% Mounted on
|
||||
/dev/sdb2 ext4 457G 73M 434G 1% /apps
|
||||
|
||||
```
|
||||
|
||||
#### ls
|
||||
|
||||
Even ls gives us the option of adjusting size displays to the measurements that are the most reasonable.
|
||||
```
|
||||
$ ls -l camper*
|
||||
-rw-rw-r-- 1 shs shs 365091 Jul 14 19:42 camper_10.jpg
|
||||
-rw-rw-r-- 1 shs shs 5818597 Jul 14 19:41 camper.jpg
|
||||
-rw-rw-r-- 1 shs shs 241844 Jul 14 19:45 camper_small.jpg
|
||||
$ ls -lh camper*
|
||||
-rw-rw-r-- 1 shs shs 357K Jul 14 19:42 camper_10.jpg
|
||||
-rw-rw-r-- 1 shs shs 5.6M Jul 14 19:41 camper.jpg
|
||||
-rw-rw-r-- 1 shs shs 237K Jul 14 19:45 camper_small.jpg
|
||||
|
||||
```
|
||||
|
||||
#### free
|
||||
|
||||
The free command allows you to report memory usage in bytes, kilobytes, megabytes, and gigabytes.
|
||||
```
|
||||
$ free -b
|
||||
total used free shared buff/cache available
|
||||
Mem: 6249144320 393076736 4851625984 1654784 1004441600 5561253888
|
||||
Swap: 2147479552 0 2147479552
|
||||
$ free -k
|
||||
total used free shared buff/cache available
|
||||
Mem: 6102680 383836 4737924 1616 980920 5430932
|
||||
Swap: 2097148 0 2097148
|
||||
$ free -m
|
||||
total used free shared buff/cache available
|
||||
Mem: 5959 374 4627 1 957 5303
|
||||
Swap: 2047 0 2047
|
||||
$ free -g
|
||||
total used free shared buff/cache available
|
||||
Mem: 5 0 4 0 0 5
|
||||
Swap: 1 0 1
|
||||
|
||||
```
|
||||
|
||||
#### tree
|
||||
|
||||
While not related to file or memory measurements, the tree command also provides a very human-friendly view of files by presenting them in a hierarchical display that illustrates how they are organized. This kind of display can be very useful when trying to get an idea of how the contents of a directory are arranged.
|
||||
```
|
||||
$ tree
|
||||
.
|
||||
├── 123
|
||||
├── appended.png
|
||||
├── appts
|
||||
├── arrow.jpg
|
||||
├── arrow.png
|
||||
├── bin
|
||||
│ ├── append
|
||||
│ ├── cpuhog1
|
||||
│ ├── cpuhog2
|
||||
│ ├── loop
|
||||
│ ├── mkhome
|
||||
│ ├── runme
|
||||
|
||||
```
|
||||
|
||||
#### stat
|
||||
|
||||
The stat command is another that displays information in a very human-friendly format. It provides a lot more metadata on files, including the file sizes in bytes and blocks, the file types, device and inode, owner and group (names and numeric IDs), file permissions in both numeric and rwx format and the dates the file was last accessed and modified. In some circumstances, it might also display when the file was initially created.
|
||||
```
|
||||
$ stat camper*
|
||||
File: camper_10.jpg
|
||||
Size: 365091 Blocks: 720 IO Block: 4096 regular file
|
||||
Device: 801h/2049d Inode: 796059 Links: 1
|
||||
Access: (0664/-rw-rw-r--) Uid: ( 1000/ shs) Gid: ( 1000/ shs)
|
||||
Access: 2018-07-19 18:56:31.841013385 -0400
|
||||
Modify: 2018-07-14 19:42:25.230519509 -0400
|
||||
Change: 2018-07-14 19:42:25.230519509 -0400
|
||||
Birth: -
|
||||
File: camper.jpg
|
||||
Size: 5818597 Blocks: 11368 IO Block: 4096 regular file
|
||||
Device: 801h/2049d Inode: 796058 Links: 1
|
||||
Access: (0664/-rw-rw-r--) Uid: ( 1000/ shs) Gid: ( 1000/ shs)
|
||||
Access: 2018-07-19 18:56:31.845013872 -0400
|
||||
Modify: 2018-07-14 19:41:46.882024039 -0400
|
||||
Change: 2018-07-14 19:41:46.882024039 -0400
|
||||
Birth: -
|
||||
|
||||
```
|
||||
|
||||
### Wrap-up
|
||||
|
||||
Linux provides many command options that can make their output easier for users to understand or compare. For many commands, the **-h** option brings up the friendlier output format. For others, you might have to specify how you'd prefer to see your output by using some specific option or pressing a key as with **top**. I hope that some of these choices will make your Linux systems seem just a little friendlier.
|
||||
|
||||
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3296631/linux/displaying-data-in-a-human-friendly-way-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:https://www.facebook.com/NetworkWorld/
|
||||
[2]:https://www.linkedin.com/company/network-world
|
@ -1,3 +1,5 @@
|
||||
translating by ypingcn
|
||||
|
||||
Tips for using the top command in Linux
|
||||
======
|
||||
|
||||
|
@ -0,0 +1,674 @@
|
||||
HTTP request routing and validation with gorilla/mux
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr)
|
||||
|
||||
The Go networking library includes the `http.ServeMux` structure type, which supports HTTP request multiplexing (routing): A web server routes an HTTP request for a hosted resource, with a URI such as /sales4today, to a code handler; the handler performs the appropriate logic before sending an HTTP response, typically an HTML page. Here’s a sketch of the architecture:
|
||||
```
|
||||
+------------+ +--------+ +---------+
|
||||
HTTP request---->| web server |---->| router |---->| handler |
|
||||
+------------+ +--------+ +---------+
|
||||
```
|
||||
|
||||
In a call to the `ListenAndServe` method to start an HTTP server
|
||||
```
|
||||
http.ListenAndServe(":8888", nil) // args: port & router
|
||||
```
|
||||
|
||||
a second argument of `nil` means that the `DefaultServeMux` is used for request routing.
|
||||
|
||||
The `gorilla/mux` package has a `mux.Router` type as an alternative to either the `DefaultServeMux` or a customized request multiplexer. In the `ListenAndServe` call, a `mux.Router` instance would replace `nil` as the second argument. What makes the `mux.Router` so appealing is best shown through a code example:
|
||||
|
||||
### 1\. A sample crud web app
|
||||
|
||||
The crud web application (see below) supports the four CRUD (Create Read Update Delete) operations, which match four HTTP request methods: POST, GET, PUT, and DELETE, respectively. In the crud app, the hosted resource is a list of cliche pairs, each a cliche and a conflicting cliche such as this pair:
|
||||
```
|
||||
Out of sight, out of mind. Absence makes the heart grow fonder.
|
||||
|
||||
```
|
||||
|
||||
New cliche pairs can be added, and existing ones can be edited or deleted.
|
||||
|
||||
**The crud web app**
|
||||
```
|
||||
package main
|
||||
|
||||
import (
|
||||
"gorilla/mux"
|
||||
"net/http"
|
||||
"fmt"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
const GETALL string = "GETALL"
|
||||
const GETONE string = "GETONE"
|
||||
const POST string = "POST"
|
||||
const PUT string = "PUT"
|
||||
const DELETE string = "DELETE"
|
||||
|
||||
type clichePair struct {
|
||||
Id int
|
||||
Cliche string
|
||||
Counter string
|
||||
}
|
||||
|
||||
// Message sent to goroutine that accesses the requested resource.
|
||||
type crudRequest struct {
|
||||
verb string
|
||||
cp *clichePair
|
||||
id int
|
||||
cliche string
|
||||
counter string
|
||||
confirm chan string
|
||||
}
|
||||
|
||||
var clichesList = []*clichePair{}
|
||||
var masterId = 1
|
||||
var crudRequests chan *crudRequest
|
||||
|
||||
// GET /
|
||||
// GET /cliches
|
||||
func ClichesAll(res http.ResponseWriter, req *http.Request) {
|
||||
cr := &crudRequest{verb: GETALL, confirm: make(chan string)}
|
||||
completeRequest(cr, res, "read all")
|
||||
}
|
||||
|
||||
// GET /cliches/id
|
||||
func ClichesOne(res http.ResponseWriter, req *http.Request) {
|
||||
id := getIdFromRequest(req)
|
||||
cr := &crudRequest{verb: GETONE, id: id, confirm: make(chan string)}
|
||||
completeRequest(cr, res, "read one")
|
||||
}
|
||||
|
||||
// POST /cliches
|
||||
|
||||
func ClichesCreate(res http.ResponseWriter, req *http.Request) {
|
||||
|
||||
cliche, counter := getDataFromRequest(req)
|
||||
|
||||
cp := new(clichePair)
|
||||
|
||||
cp.Cliche = cliche
|
||||
|
||||
cp.Counter = counter
|
||||
|
||||
cr := &crudRequest{verb: POST, cp: cp, confirm: make(chan string)}
|
||||
|
||||
completeRequest(cr, res, "create")
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
// PUT /cliches/id
|
||||
|
||||
func ClichesEdit(res http.ResponseWriter, req *http.Request) {
|
||||
|
||||
id := getIdFromRequest(req)
|
||||
|
||||
cliche, counter := getDataFromRequest(req)
|
||||
|
||||
cr := &crudRequest{verb: PUT, id: id, cliche: cliche, counter: counter, confirm: make(chan string)}
|
||||
|
||||
completeRequest(cr, res, "edit")
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
// DELETE /cliches/id
|
||||
|
||||
func ClichesDelete(res http.ResponseWriter, req *http.Request) {
|
||||
|
||||
id := getIdFromRequest(req)
|
||||
|
||||
cr := &crudRequest{verb: DELETE, id: id, confirm: make(chan string)}
|
||||
|
||||
completeRequest(cr, res, "delete")
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
func completeRequest(cr *crudRequest, res http.ResponseWriter, logMsg string) {
|
||||
|
||||
crudRequests<-cr
|
||||
|
||||
msg := <-cr.confirm
|
||||
|
||||
res.Write([]byte(msg))
|
||||
|
||||
logIt(logMsg)
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
func main() {
|
||||
|
||||
populateClichesList()
|
||||
|
||||
|
||||
|
||||
// From now on, this gorountine alone accesses the clichesList.
|
||||
|
||||
crudRequests = make(chan *crudRequest, 8)
|
||||
|
||||
go func() { // resource manager
|
||||
|
||||
for {
|
||||
|
||||
select {
|
||||
|
||||
case req := <-crudRequests:
|
||||
|
||||
if req.verb == GETALL {
|
||||
|
||||
req.confirm<-readAll()
|
||||
|
||||
} else if req.verb == GETONE {
|
||||
|
||||
req.confirm<-readOne(req.id)
|
||||
|
||||
} else if req.verb == POST {
|
||||
|
||||
req.confirm<-addPair(req.cp)
|
||||
|
||||
} else if req.verb == PUT {
|
||||
|
||||
req.confirm<-editPair(req.id, req.cliche, req.counter)
|
||||
|
||||
} else if req.verb == DELETE {
|
||||
|
||||
req.confirm<-deletePair(req.id)
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
}()
|
||||
|
||||
startServer()
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
func startServer() {
|
||||
|
||||
router := mux.NewRouter()
|
||||
|
||||
|
||||
|
||||
// Dispatch map for CRUD operations.
|
||||
|
||||
router.HandleFunc("/", ClichesAll).Methods("GET")
|
||||
|
||||
router.HandleFunc("/cliches", ClichesAll).Methods("GET")
|
||||
|
||||
router.HandleFunc("/cliches/{id:[0-9]+}", ClichesOne).Methods("GET")
|
||||
|
||||
|
||||
|
||||
router.HandleFunc("/cliches", ClichesCreate).Methods("POST")
|
||||
|
||||
router.HandleFunc("/cliches/{id:[0-9]+}", ClichesEdit).Methods("PUT")
|
||||
|
||||
router.HandleFunc("/cliches/{id:[0-9]+}", ClichesDelete).Methods("DELETE")
|
||||
|
||||
|
||||
|
||||
http.Handle("/", router) // enable the router
|
||||
|
||||
|
||||
|
||||
// Start the server.
|
||||
|
||||
port := ":8888"
|
||||
|
||||
fmt.Println("\nListening on port " + port)
|
||||
|
||||
http.ListenAndServe(port, router); // mux.Router now in play
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
// Return entire list to requester.
|
||||
|
||||
func readAll() string {
|
||||
|
||||
msg := "\n"
|
||||
|
||||
for _, cliche := range clichesList {
|
||||
|
||||
next := strconv.Itoa(cliche.Id) + ": " + cliche.Cliche + " " + cliche.Counter + "\n"
|
||||
|
||||
msg += next
|
||||
|
||||
}
|
||||
|
||||
return msg
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
// Return specified clichePair to requester.
|
||||
|
||||
func readOne(id int) string {
|
||||
|
||||
msg := "\n" + "Bad Id: " + strconv.Itoa(id) + "\n"
|
||||
|
||||
|
||||
|
||||
index := findCliche(id)
|
||||
|
||||
if index >= 0 {
|
||||
|
||||
cliche := clichesList[index]
|
||||
|
||||
msg = "\n" + strconv.Itoa(id) + ": " + cliche.Cliche + " " + cliche.Counter + "\n"
|
||||
|
||||
}
|
||||
|
||||
return msg
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
// Create a new clichePair and add to list
|
||||
|
||||
func addPair(cp *clichePair) string {
|
||||
|
||||
cp.Id = masterId
|
||||
|
||||
masterId++
|
||||
|
||||
clichesList = append(clichesList, cp)
|
||||
|
||||
return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n"
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
// Edit an existing clichePair
|
||||
|
||||
func editPair(id int, cliche string, counter string) string {
|
||||
|
||||
msg := "\n" + "Bad Id: " + strconv.Itoa(id) + "\n"
|
||||
|
||||
index := findCliche(id)
|
||||
|
||||
if index >= 0 {
|
||||
|
||||
clichesList[index].Cliche = cliche
|
||||
|
||||
clichesList[index].Counter = counter
|
||||
|
||||
msg = "\nCliche edited: " + cliche + " " + counter + "\n"
|
||||
|
||||
}
|
||||
|
||||
return msg
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
// Delete a clichePair
|
||||
|
||||
func deletePair(id int) string {
|
||||
|
||||
idStr := strconv.Itoa(id)
|
||||
|
||||
msg := "\n" + "Bad Id: " + idStr + "\n"
|
||||
|
||||
index := findCliche(id)
|
||||
|
||||
if index >= 0 {
|
||||
|
||||
clichesList = append(clichesList[:index], clichesList[index + 1:]...)
|
||||
|
||||
msg = "\nCliche " + idStr + " deleted\n"
|
||||
|
||||
}
|
||||
|
||||
return msg
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
//*** utility functions
|
||||
|
||||
func findCliche(id int) int {
|
||||
|
||||
for i := 0; i < len(clichesList); i++ {
|
||||
|
||||
if id == clichesList[i].Id {
|
||||
|
||||
return i;
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
return -1 // not found
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
func getIdFromRequest(req *http.Request) int {
|
||||
|
||||
vars := mux.Vars(req)
|
||||
|
||||
id, _ := strconv.Atoi(vars["id"])
|
||||
|
||||
return id
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
func getDataFromRequest(req *http.Request) (string, string) {
|
||||
|
||||
// Extract the user-provided data for the new clichePair
|
||||
|
||||
req.ParseForm()
|
||||
|
||||
form := req.Form
|
||||
|
||||
cliche := form["cliche"][0] // 1st and only member of a list
|
||||
|
||||
counter := form["counter"][0] // ditto
|
||||
|
||||
return cliche, counter
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
func logIt(msg string) {
|
||||
|
||||
fmt.Println(msg)
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
func populateClichesList() {
|
||||
|
||||
var cliches = []string {
|
||||
|
||||
"Out of sight, out of mind.",
|
||||
|
||||
"A penny saved is a penny earned.",
|
||||
|
||||
"He who hesitates is lost.",
|
||||
|
||||
}
|
||||
|
||||
var counterCliches = []string {
|
||||
|
||||
"Absence makes the heart grow fonder.",
|
||||
|
||||
"Penny-wise and dollar-foolish.",
|
||||
|
||||
"Look before you leap.",
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
for i := 0; i < len(cliches); i++ {
|
||||
|
||||
cp := new(clichePair)
|
||||
|
||||
cp.Id = masterId
|
||||
|
||||
masterId++
|
||||
|
||||
cp.Cliche = cliches[i]
|
||||
|
||||
cp.Counter = counterCliches[i]
|
||||
|
||||
clichesList = append(clichesList, cp)
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
To focus on request routing and validation, the crud app does not use HTML pages as responses to requests. Instead, requests result in plaintext response messages: A list of the cliche pairs is the response to a GET request, confirmation that a new cliche pair has been added to the list is a response to a POST request, and so on. This simplification makes it easy to test the app, in particular, the `gorilla/mux` components, with a command-line utility such as [curl][1].
|
||||
|
||||
The `gorilla/mux` package can be installed from [GitHub][2]. The crud app runs indefinitely; hence, it should be terminated with a Control-C or equivalent. The code for the crud app, together with a README and sample curl tests, is available on [my website][3].
|
||||
|
||||
### 2\. Request routing
|
||||
|
||||
The `mux.Router` extends REST-style routing, which gives equal weight to the HTTP method (e.g., GET) and the URI or path at the end of a URL (e.g., /cliches). The URI serves as the noun for the HTTP verb (method). For example, in an HTTP request a startline such as
|
||||
```
|
||||
GET /cliches
|
||||
|
||||
```
|
||||
|
||||
means get all of the cliche pairs, whereas a startline such as
|
||||
```
|
||||
POST /cliches
|
||||
|
||||
```
|
||||
|
||||
means create a cliche pair from data in the HTTP body.
|
||||
|
||||
In the crud web app, there are five functions that act as request handlers for five variations of an HTTP request:
|
||||
```
|
||||
ClichesAll(...) # GET: get all of the cliche pairs
|
||||
|
||||
ClichesOne(...) # GET: get a specified cliche pair
|
||||
|
||||
ClichesCreate(...) # POST: create a new cliche pair
|
||||
|
||||
ClichesEdit(...) # PUT: edit an existing cliche pair
|
||||
|
||||
ClichesDelete(...) # DELETE: delete a specified cliche pair
|
||||
|
||||
```
|
||||
|
||||
Each function takes two arguments: an `http.ResponseWriter` for sending a response back to the requester, and a pointer to an `http.Request`, which encapsulates information from the underlying HTTP request. The `gorilla/mux` package makes it easy to register these request handlers with the web server, and to perform regex-based validation.
|
||||
|
||||
The `startServer` function in the crud app registers the request handlers. Consider this pair of registrations, with `router` as a `mux.Router` instance:
|
||||
```
|
||||
router.HandleFunc("/", ClichesAll).Methods("GET")
|
||||
|
||||
router.HandleFunc("/cliches", ClichesAll).Methods("GET")
|
||||
|
||||
```
|
||||
|
||||
These statements mean that a GET request for either the single slash / or /cliches should be routed to the `ClichesAll` function, which then handles the request. For example, the curl request (with % as the command-line prompt)
|
||||
```
|
||||
% curl --request GET localhost:8888/
|
||||
|
||||
```
|
||||
|
||||
produces this response:
|
||||
```
|
||||
1: Out of sight, out of mind. Absence makes the heart grow fonder.
|
||||
|
||||
2: A penny saved is a penny earned. Penny-wise and dollar-foolish.
|
||||
|
||||
3: He who hesitates is lost. Look before you leap.
|
||||
|
||||
```
|
||||
|
||||
The three cliche pairs are the initial data in the crud app.
|
||||
|
||||
In this pair of registration statements
|
||||
```
|
||||
router.HandleFunc("/cliches", ClichesAll).Methods("GET")
|
||||
|
||||
router.HandleFunc("/cliches", ClichesCreate).Methods("POST")
|
||||
|
||||
```
|
||||
|
||||
the URI is the same (/cliches) but the verbs differ: GET in the first case, and POST in the second. This registration exemplifies REST-style routing because the difference in the verbs alone suffices to dispatch the requests to two different handlers.
|
||||
|
||||
More than one HTTP method is allowed in a registration, although this strains the spirit of REST-style routing:
|
||||
```
|
||||
router.HandleFunc("/cliches", DoItAll).Methods("POST", "GET")
|
||||
|
||||
```
|
||||
|
||||
HTTP requests can be routed on features besides the verb and the URI. For example, the registration
|
||||
```
|
||||
router.HandleFunc("/cliches", ClichesCreate).Schemes("https").Methods("POST")
|
||||
|
||||
```
|
||||
|
||||
requires HTTPS access for a POST request to create a new cliche pair. In similar fashion, a registration might require a request to have a specified HTTP header element (e.g., an authentication credential).
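For instance, gorilla/mux routes can also match on headers; the header name and value below are only illustrative and are not part of the crud app:

```
router.HandleFunc("/cliches", ClichesCreate).
    Headers("X-Auth-Token", "secret123").   // hypothetical credential header
    Methods("POST")
```

With such a registration, a POST to /cliches that lacks the required header would simply not be routed to the `ClichesCreate` handler.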
|
||||
|
||||
### 3\. Request validation
|
||||
|
||||
The `gorilla/mux` package takes an easy, intuitive approach to request validation through regular expressions. Consider this request handler for a get one operation:
|
||||
```
|
||||
router.HandleFunc("/cliches/{id:[0-9]+}", ClichesOne).Methods("GET")
|
||||
|
||||
```
|
||||
|
||||
This registration rules out HTTP requests such as
|
||||
```
|
||||
% curl --request GET localhost:8888/cliches/foo
|
||||
|
||||
```
|
||||
|
||||
because foo is not a decimal numeral. The request results in the familiar 404 (Not Found) status code. Including the regex pattern in this handler registration ensures that the `ClichesOne` function is called to handle a request only if the request URI ends with a decimal integer value:
|
||||
```
|
||||
% curl --request GET localhost:8888/cliches/3 # ok
|
||||
|
||||
```
|
||||
|
||||
As a second example, consider the request
|
||||
```
|
||||
% curl --request PUT --data "..." localhost:8888/cliches
|
||||
|
||||
```
|
||||
|
||||
This request results in a status code of 405 (Method Not Allowed) because the /cliches URI is registered, in the crud app, only for GET and POST requests. A PUT request, like a GET one request, must include a numeric id at the end of the URI:
|
||||
```
|
||||
router.HandleFunc("/cliches/{id:[0-9]+}", ClichesEdit).Methods("PUT")
|
||||
|
||||
```
|
||||
|
||||
### 4\. Concurrency issues
|
||||
|
||||
The `gorilla/mux` router executes each call to a registered request handler as a separate goroutine, which means that concurrency is baked into the package. For example, if there are ten simultaneous requests such as
|
||||
```
|
||||
% curl --request POST --data "..." localhost:8888/cliches
|
||||
|
||||
```
|
||||
|
||||
then the `mux.Router` launches ten goroutines to execute the `ClichesCreate` handler.
|
||||
|
||||
Of the five request operations GET all, GET one, POST, PUT, and DELETE, the last three alter the requested resource, the shared `clichesList` that houses the cliche pairs. Accordingly, the crud app needs to guarantee safe concurrency by coordinating access to the `clichesList`. In different but equivalent terms, the crud app must prevent a race condition on the `clichesList`. In a production environment, a database system might be used to store a resource such as the `clichesList`, and safe concurrency then could be managed through database transactions.
|
||||
|
||||
The crud app takes the recommended Go approach to safe concurrency:
|
||||
|
||||
* Only a single goroutine, the resource manager started in the crud app `startServer` function, has access to the `clichesList` once the web server starts listening for requests.
|
||||
* The request handlers such as `ClichesCreate` and `ClichesAll` send a (pointer to) a `crudRequest` instance to a Go channel (thread-safe by default), and the resource manager alone reads from this channel. The resource manager then performs the requested operation on the `clichesList`.
|
||||
|
||||
|
||||
|
||||
The safe-concurrency architecture can be sketched as follows:
|
||||
```
|
||||
crudRequest read/write
|
||||
|
||||
request handlers------------->resource manager------------>clichesList
|
||||
|
||||
```
|
||||
|
||||
With this architecture, no explicit locking of the `clichesList` is needed because only one goroutine, the resource manager, accesses the `clichesList` once CRUD requests start coming in.
|
||||
|
||||
To keep the crud app as concurrent as possible, it’s essential to have an efficient division of labor between the request handlers, on the one side, and the single resource manager, on the other side. Here, for review, is the `ClichesCreate` request handler:
|
||||
```
|
||||
func ClichesCreate(res http.ResponseWriter, req *http.Request) {
|
||||
|
||||
cliche, counter := getDataFromRequest(req)
|
||||
|
||||
cp := new(clichePair)
|
||||
|
||||
cp.Cliche = cliche
|
||||
|
||||
cp.Counter = counter
|
||||
|
||||
cr := &crudRequest{verb: POST, cp: cp, confirm: make(chan string)}
|
||||
|
||||
completeRequest(cr, res, "create")
|
||||
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
`ClichesCreate` calls the utility function `getDataFromRequest`, which extracts the new cliche and counter-cliche from the POST request. The `ClichesCreate` function then creates a new `ClichePair`, sets two fields, and creates a `crudRequest` to be sent to the single resource manager. This request includes a confirmation channel, which the resource manager uses to return information back to the request handler. All of the setup work can be done without involving the resource manager because the `clichesList` is not being accessed yet.
|
||||
|
||||
|
||||
|
||||
The `completeRequest` utility function called at the end of the `ClichesCreate` function and the other request handlers
|
||||
```
|
||||
completeRequest(cr, res, "create") // shown above
|
||||
|
||||
```
|
||||
|
||||
brings the resource manager into play by putting a `crudRequest` into the `crudRequests` channel:
|
||||
```
|
||||
func completeRequest(cr *crudRequest, res http.ResponseWriter, logMsg string) {
|
||||
|
||||
crudRequests<-cr // send request to resource manager
|
||||
|
||||
msg := <-cr.confirm // await confirmation string
|
||||
|
||||
res.Write([]byte(msg)) // send confirmation back to requester
|
||||
|
||||
logIt(logMsg) // print to the standard output
|
||||
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
For a POST request, the resource manager calls the utility function `addPair`, which changes the `clichesList` resource:
|
||||
```
|
||||
func addPair(cp *clichePair) string {
|
||||
|
||||
cp.Id = masterId // assign a unique ID
|
||||
|
||||
masterId++ // update the ID counter
|
||||
|
||||
clichesList = append(clichesList, cp) // update the list
|
||||
|
||||
return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n"
|
||||
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
The resource manager calls similar utility functions for the other CRUD operations. It’s worth repeating that the resource manager is the only goroutine to read or write the `clichesList` once the web server starts accepting requests.
|
||||
|
||||
For web applications of any type, the `gorilla/mux` package provides request routing, request validation, and related services in a straightforward, intuitive API. The crud web app highlights the package’s main features. Give the package a test drive, and you’ll likely be a buyer.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/8/http-request-routing-validation-gorillamux
|
||||
|
||||
作者:[Marty Kalin][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mkalindepauledu
|
||||
[1]:https://curl.haxx.se/
|
||||
[2]:https://github.com/gorilla/mux
|
||||
[3]:http://condor.depaul.edu/mkalin
|
@ -0,0 +1,71 @@
|
||||
Happy birthday, GNOME: 6 reasons to love this Linux desktop
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/happy_birthday_anniversary_celebrate_hats_cake.jpg?itok=Zfsv6DE_)
|
||||
|
||||
GNOME has been my favorite [desktop environment][1] for quite some time. While I always make it a point to check out other environments from time to time, there are some aspects of the GNOME desktop that are hard to live without. While there are many great desktop environments out there, [GNOME][2] feels like home to me. Here are some of the features I enjoy most about GNOME.
|
||||
|
||||
### Stability
|
||||
|
||||
Having a stable working environment is the most important aspect of a desktop for me. After all, the feature set of an environment doesn't matter at all if it crashes constantly and you lose work. For me, GNOME is rock-solid. I have heard of others experiencing crashes and instability, but it always seems to be due to either the user running GNOME on unsupported hardware or due to faulty extensions (more on that later). On my end, I run GNOME primarily on hardware that is known to be well-supported in Linux ([System76][3], for example). I also have a few systems that are not as well supported (a custom-built desktop and a Dell Latitude laptop), and I actually don't have any issues there either. For me, GNOME is rock-solid. I have compared stability in other well-known desktop environments, and I had unfortunate results. Nothing comes close to GNOME when it comes to stability.
|
||||
|
||||
### Extensions
|
||||
|
||||
I really enjoy being able to add additional functionality to my environment. I don't necessarily require any extensions, because I am perfectly fine with stock GNOME with no extensions whatsoever. However, having the ability to add a few things here and there is welcome. GNOME features various extensions to do things such as add a weather display to your panel, and much more. This adds a level of customization that is not typical of other environments. That said, proceed with caution. Sometimes extensions are of varying quality and may lead to stability issues. I find, though, that if you only install extensions you absolutely need, and you make sure they're kept up to date (and aren't abandoned by the developer), you'll generally be in good shape.
|
||||
|
||||
### Activities overview
|
||||
|
||||
Activities overview is quite possibly the easiest feature to use in GNOME, and it's barely detailed enough to justify its own section in this article. However, when I use other desktop environments, I miss this feature the most.
|
||||
|
||||
The thing is, I am very busy, with multiple projects going on at any one time, and dozens of different windows open. To access the activities overview, I simply press the Super key. Immediately, my workspace is "zoomed out" and I see all of my windows side-by-side. This is often a faster way to locate a window that is hidden behind others, and a good way overall to see what exactly is running on any given workspace.
|
||||
|
||||
When using other desktop environments, I will often find myself pressing the Super key out of habit, only to remember that I'm not using GNOME at the time. There are ways of achieving similar behavior in other environments (such as installing and tweaking Compiz), but in GNOME this feature is built-in.
|
||||
|
||||
### Dynamic workspaces
|
||||
|
||||
While working, I am not sure up-front how many workspaces I will need. Sometimes I can be working on three projects at a time, or as many as ten. With most desktop environments, I can access the settings screen and add or remove workspaces as needed. But with GNOME, I have exactly as many workspaces as I need at any given time. Every time I open applications on a workspace, I am given another blank one that I can switch to in order to start another project. Typically, I keep all windows related to a specific project on their own workspace, so it makes it very easy to locate my workflow for a given project.
|
||||
|
||||
Other desktop environments have really good implementations of the concept of workspaces, but GNOME's implementation works best for me.
|
||||
|
||||
### Simplicity
|
||||
|
||||
Another thing I love about GNOME is that it's simple and straight to the point. By default, there is only one panel, and it's at the top of the screen. This panel shows you a small amount of information, such as the date, time, and battery usage. GNOME 2 had two panels, so seeing GNOME stripped down to a single panel is welcome and saves room on the screen. Most of the things you don't need to see all the time are hidden within the Activities overview, leaving you with the maximum amount of screen space for the application(s) you are working on. GNOME just stays out of the way and lets you focus on getting your work done, and stays away from fancy widgets and desktop gadgets that just aren't necessary.
|
||||
|
||||
|
||||
In addition, GNOME has really great support for keyboard shortcuts. I can access most of GNOME's features without needing to touch my mouse, such as Super+Page Up and Super+Page Down to switch workspaces, Super+Up arrow to maximize windows, etc. I am also able to easily create my own keyboard shortcuts for all of my favorite applications.
|
||||
|
||||
### GNOME Boxes
|
||||
|
||||
GNOME's Boxes app is an underrated gem. This utility makes it very easy to spin up a virtual machine, which is a godsend among developers and those that like to test configurations on multiple distributions and platforms. With Boxes, you can spin up a virtual machine at any time, and it will even automate the installation process for you. For example, if you want a new Ubuntu VM, you simply choose Ubuntu as your desired platform, fill out your username and any related information, and you will have a new Ubuntu VM in a few minutes. When you're done with it, you can power it down or trash it.
|
||||
|
||||
For me, I do a lot of DevOps-style work as well as system administration. Being able to test a configuration on a virtual machine before deploying to another environment is great. Sure, you can do the exact same thing in VirtualBox, and VirtualBox is a great piece of software. However, Boxes is built right into GNOME, and desktop environments generally don't offer their own solution for virtualization.
|
||||
|
||||
### GNOME Music
|
||||
|
||||
While I work, I have difficulty tuning out noise in my environment. Therefore, I like to listen to music while I complete projects and tune out the rest of the world. GNOME's Music app is very simplistic and works very well. With most of the music industry gravitating toward streaming music online, and many once-popular [open source music players][7] becoming abandoned projects, it's nice to see GNOME support a built-in music player that can play my music collection. It's great to listen to my music collection while I work, and it helps me zone in on what I am doing.
|
||||
|
||||
### GNOME Games
|
||||
|
||||
When work is done for the day, it's time to play! There's nothing like playing a classic game such as Final Fantasy VI or Super Metroid after a hard day's work. The thing is, I am a huge fan of classic gaming, and I have 22 working gaming consoles and somewhere near 1,000 physical games in my collection. But I may not always have a moment to hook up one of my retro-consoles, so GNOME Games allows me quick-access to emulated versions of my collection. In addition to that, it also works with Libretro cores as well, so it seems to me that the developers of this application have really thought-out what fans of classic gaming like me are looking for in a frontend for gaming.
|
||||
|
||||
These are the major features I enjoy most in the GNOME desktop. What are some of yours?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/8/what-i-love-about-gnome
|
||||
|
||||
作者:[Jay LaCroix][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jlacroix
|
||||
[1]:https://opensource.com/article/18/8/how-navigate-your-gnome-linux-desktop-only-keyboard
|
||||
[2]:https://opensource.com/article/17/8/reasons-i-come-back-gnome
|
||||
[3]:https://opensource.com/article/16/12/open-gaming-news-december-31
|
||||
[4]:https://opensource.com/file/407221
|
||||
[5]:https://opensource.com/sites/default/files/uploads/gnome3-cheatsheet.png (GNOME 3 Cheat Sheet)
|
||||
[6]:https://opensource.com/downloads/cheat-sheet-gnome-3
|
||||
[7]:https://opensource.com/article/18/6/open-source-music-players
|
@ -0,0 +1,53 @@
|
||||
How the L1 Terminal Fault vulnerability affects Linux systems
|
||||
======
|
||||
|
||||
![](https://images.idgesg.net/images/article/2018/08/l1tf-copy-100768129-large.jpg)
|
||||
|
||||
Announced just yesterday in security advisories from Intel, Microsoft and Red Hat, a newly discovered vulnerability affecting Intel processors (and, thus, Linux) called L1TF or “L1 Terminal Fault” is grabbing the attention of Linux users and admins. Exactly what is this vulnerability and who should be worrying about it?
|
||||
|
||||
### L1TF, L1 Terminal Fault, and Foreshadow
|
||||
|
||||
The processor vulnerability goes by L1TF, L1 Terminal Fault, and Foreshadow. Researchers who discovered the problem back in January and reported it to Intel called it "Foreshadow". It is similar to vulnerabilities discovered in the past (such as Spectre).
|
||||
|
||||
This vulnerability is Intel-specific. Other processors are not affected. And like some other vulnerabilities, it exists because of design choices that were implemented to optimize kernel processing speed but exposed data in ways that allowed access by other processes.
|
||||
|
||||
**[ Read also:[22 essential Linux security commands][1] ]**
|
||||
|
||||
Three CVEs have been assigned to this issue:
|
||||
|
||||
* CVE-2018-3615 for Intel Software Guard Extensions (Intel SGX)
|
||||
* CVE-2018-3620 for operating systems and System Management Mode (SMM)
|
||||
* CVE-2018-3646 for impacts to virtualization
|
||||
|
||||
|
||||
|
||||
An Intel spokesman made this statement regarding this issue: _"L1 Terminal Fault is addressed by microcode updates released earlier this year, coupled with corresponding updates to operating system and hypervisor software that are available starting today. We’ve provided more information on our web site and continue to encourage everyone to keep their systems up-to-date, as it's one of the best ways to stay protected. We’d like to extend our thanks to the researchers at imec-DistriNet, KU Leuven, Technion-Israel Institute of Technology, University of Michigan, University of Adelaide and Data61 and our industry partners for their collaboration in helping us identify and address this issue."_
|
||||
|
||||
### Does L1TF affect your Linux system?
|
||||
|
||||
The short answer is "probably not." You should be safe if you’ve patched your system since the earlier [Spectre and Meltdown vulnerabilities][2] were exposed back in January. As with Spectre and Meltdown, Intel claims that no real-world cases of systems being affected have been reported or detected. They also have said that the changes are unlikely to incur noticeable performance hits on individual systems, but they might represent significant performance hits for data centers using virtualized operating systems.
|
||||
|
||||
Even so, frequent patches are always recommended. To check your current kernel level, use the **uname -r** command:
|
||||
```
|
||||
$ uname -r
|
||||
4.18.0-041800-generic
|
||||
|
||||
```
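On kernels that already carry the L1TF patches, you can also query the mitigation status directly through sysfs. This is a quick additional check (the vulnerabilities directory only exists on kernels new enough to report this class of issue):

```
$ cat /sys/devices/system/cpu/vulnerabilities/l1tf
```

If the file is present, it reports either the active mitigation or that the processor is not affected; if it is missing, the kernel predates L1TF reporting and is worth updating.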
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3298157/linux/linux-and-l1tf.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:https://www.networkworld.com/article/3272286/open-source-tools/22-essential-security-commands-for-linux.html
|
||||
[2]:https://www.networkworld.com/article/3245813/security/meltdown-and-spectre-exploits-cutting-through-the-fud.html
|
||||
[3]:https://www.facebook.com/NetworkWorld/
|
||||
[4]:https://www.linkedin.com/company/network-world
|
@ -0,0 +1,109 @@
|
||||
我的 Lisp 体验和 GNU Emacs 的开发
|
||||
(Richard Stallman的演讲稿,2002年10月28日,国际Lisp会议)。
|
||||
|
||||
由于我平常的演讲都与 Lisp 没有任何关系,它们都不适用于今天,所以我只好即兴发挥。好在我在职业生涯中与 Lisp 相关的事情做得足够多,应该还是能讲出一些有趣的东西。
|
||||
|
||||
我第一次接触 Lisp,是在高中时阅读《Lisp 1.5 手册》。正是在那时,“居然可以有这样一种计算机语言”这个想法让我大开眼界。我第一次有机会用 Lisp 做点什么,是在哈佛读大一的时候,我为 PDP-11 写了一个 Lisp 解释器。那是一台非常小的机器,内存大约只有 8K,而我设法用一千条指令写出了这个解释器,这样还能剩下一点空间存放数据。那是在我见识到真正的、做实际系统工作的软件之前的事了。
|
||||
|
||||
进入麻省理工学院工作之后,我开始和 JonL White 一起做真正的 Lisp 实现。不过,把我招进人工智能实验室的不是 JonL,而是 Russ Noftsker。考虑到后来发生的事情,这实在是莫大的讽刺,他想必对此后悔不已。
|
||||
|
||||
在20世纪70年代,在我的生活因可怕的事件而变得政治化之前,我只是为了各种程序而一个接一个地进行扩展,其中大多数与Lisp没有任何关系。但是,在此过程中,我写了一个文本编辑器,Emacs。关于Emacs的有趣想法是它有一种编程语言,用户的编辑命令将用这种解释的编程语言编写,这样你就可以在编辑时将新命令加载到编辑器中。您可以编辑正在使用的程序,然后继续编辑它们。所以,我们有一个对编程以外的东西有用的系统,但你可以在使用它的时候对它进行编程。我不知道它是否是第一个,但它肯定是第一个这样的编辑。
|
||||
|
||||
这种为自己的编辑工作构建庞大而复杂的程序、再与他人交换这些程序的精神,助长了我们在 AI 实验室里自由合作的风气。我们的想法是,你可以把任何程序的副本送给想要它的人;我们与所有想使用程序的人分享程序,它们是人类的知识。因此,尽管当时并没有成体系的政治思想把我们分享软件的方式与 Emacs 的设计联系在一起,但我确信两者之间存在联系,也许是一种无意识的联系。我认为,正是我们在 AI 实验室里这种生活方式的本质,催生了 Emacs 并使它成为现实。
|
||||
|
||||
最初的Emacs里面没有Lisp。较低级别的语言,非解释性语言 - 是PDP-10汇编程序。我们写的解释实际上并不是为Emacs编写的,它是为TECO编写的。这是我们的文本编辑器,并且是一种非常难看的编程语言,尽可能难看。原因是它不是一种编程语言,它被设计成一种编辑器和命令语言。有一些命令,如'5l',意思是'移动五行',或'i'然后是一个字符串,然后是一个ESC来插入该字符串。您将键入一个字符串,该字符串是一系列命令,称为命令字符串。你会用ESC ESC结束它,它会被执行。
|
||||
|
||||
好吧,人们想用编程工具扩展这种语言,所以他们添加了一些。例如,第一个是循环结构,它是<>。你会把它们放在周围,它会循环。还有其他神秘的命令可用于有条件地退出循环。为了制作Emacs,我们(1)添加了具有名称的子程序的工具。在此之前,它有点像Basic,子程序只能用单个字母作为名称。这很难用大型程序编程,因此我们添加了代码以便它们可以有更长的名称。实际上,有一些相当复杂的设施; 我认为Lisp得到了TECO的放松保护设施。
|
||||
|
||||
我们开始使用相当复杂的设施,所有这些都是你能想到的最丑陋的语法,并且它起作用了 - 人们无论如何都能够在其中编写大型程序。显而易见的教训是,像TECO这样的语言并没有被设计成编程语言,这是错误的方法。您构建扩展的语言不应该被认为是事后的编程语言; 它应该被设计为编程语言。实际上,我们发现用于此目的的最佳编程语言是Lisp。
|
||||
|
||||
是伯尼格林伯格,他发现它是(2)。他在Multics MacLisp中编写了一个版本的Emacs,并且他以直截了当的方式在MacLisp中编写了他的命令。编辑器本身完全是用Lisp编写的。事实证明,Multics Emacs取得了巨大成功 - 编写新的编辑命令非常方便,甚至他办公室的秘书也开始学习如何使用它。他们使用了某人编写的手册,其中展示了如何扩展Emacs,但并没有说这是一个编程。所以那些认为自己无法编程的秘书并没有被吓跑。他们阅读了手册,发现他们可以做有用的事情并且他们学会了编程。
|
||||
|
||||
因此,伯尼看到了这样一种应用:一个对你有用的程序,里面带有 Lisp,你可以通过重写其中的 Lisp 程序来扩展它,而这实际上是人们学习编程的一种非常好的方式。它让人们有机会编写对自己有用的小程序,这在大多数领域是根本做不到的。在最困难的阶段,也就是还不相信自己会编程的时候,他们能从自己写出的、实际有用的程序中获得鼓励,直到成长为真正的程序员。
|
||||
|
||||
那时,人们开始想,如何在没有完整 Lisp 实现的平台上也能得到这样的东西。Multics MacLisp 既有编译器也有解释器,是一个成熟的 Lisp 系统,但人们希望在那些还没有写出 Lisp 编译器的系统上实现类似的东西。如果没有 Lisp 编译器,你就没法用 Lisp 写整个编辑器:如果必须运行解释执行的 Lisp,那就太慢了,尤其是重新显示。于是我们发展出一种混合技术:把 Lisp 解释器和编辑器的底层部分写在一起,让编辑器的一部分成为内建的 Lisp 功能,而这些正是我们认为必须优化的部分。这其实是我们在最初的 Emacs 中就已经有意识地实践过的技术,因为我们曾把某些相当高层的功能用机器语言重新实现,使它们成为 TECO 原语。例如,有一个 TECO 原语用来填充段落(实际上只是完成填充段落的大部分工作,因为其中一些不太耗时的部分会由更高层的 TECO 程序来做)。你完全可以写一个 TECO 程序来做整件事,但那样太慢了,所以我们把其中一部分放进机器语言来优化它。我们在这里(混合技术中)用的是同样的思路:编辑器的大部分用 Lisp 来写,而必须跑得特别快的某些部分则在更低的层次上实现。
|
||||
|
||||
因此,当我编写第二个Emacs实现时,我遵循了同样的设计。低级语言不再是机器语言,它是C. C是便携式程序在类Unix操作系统中运行的一种好的,高效的语言。有一个Lisp解释器,但我直接在C中实现了专用编辑工作的工具 - 操作编辑缓冲区,插入前导文本,读取和写入文件,重新显示屏幕上的缓冲区,管理编辑器窗口。
|
||||
|
||||
现在,这不是第一个用C编写并在Unix上运行的Emacs。第一部由James Gosling撰写,被称为GosMacs。他身上发生了一件奇怪的事。起初,他似乎受到原始Emacs的共享和合作精神的影响。我首先向麻省理工学院的人们发布了最初的Emacs。有人希望将它移植到Twenex上运行 - 它最初只运行在我们在麻省理工学院使用的不兼容的分时系统。他们将它移植到Twenex,这意味着全世界有几百个可能会使用它的安装。我们开始将它分发给他们,其规则是“您必须将所有改进发回”,这样我们才能受益。没有人试图强制执行,但据我所知,人们确实合作。
|
||||
|
||||
起初,戈斯林似乎也秉持这种精神。他在手册中写道,他把这个程序叫做 Emacs,是希望社区里的其他人不断改进它,直到它配得上这个名字。这是对待社区的正确做法:请大家加入进来,一起把程序做得更好。但在那之后,他似乎改变了态度,把它卖给了一家公司。
|
||||
|
||||
那时我正在研究GNU系统(一种类似Unix的自由软件操作系统,许多人错误称之为“Linux”)。没有在Unix上运行的免费软件Emacs编辑器。然而,我确实有一位朋友曾参与开发Gosling的Emacs。戈斯林通过电子邮件允许他分发自己的版本。他向我建议我使用那个版本。然后我发现Gosling的Emacs没有真正的Lisp。它有一种被称为'mocklisp'的编程语言,它在语法上看起来像Lisp,但没有Lisp的数据结构。所以程序不是数据,而且缺少Lisp的重要元素。它的数据结构是字符串,数字和一些其他专门的东西。
|
||||
|
||||
我总结说我无法使用它并且必须全部替换它,第一步是编写一个实际的Lisp解释器。我逐渐调整了编辑器的每个部分,基于真正的Lisp数据结构,而不是ad hoc数据结构,使得编辑器内部的数据结构可以由用户的Lisp程序公开和操作。
|
||||
|
||||
唯一的例外是重新显示。很长一段时间,重新显示是一个替代世界。编辑器将进入重新显示的世界,并且会继续使用非常特殊的数据结构,这些数据结构对于垃圾收集是不安全的,不会安全中断,并且在此期间您无法运行任何Lisp程序。我们已经改变了 - 因为现在可以在重新显示期间运行Lisp代码。这是一件非常方便的事情。
|
||||
|
||||
第二个Emacs计划是现代意义上的“自由软件” - 它是使软件免费的明确政治运动的一部分。这次活动的实质是每个人都应该自由地做我们过去在麻省理工学院做的事情,共同开发软件并与想与我们合作的任何人一起工作。这是自由软件运动的基础 - 我拥有的经验,我在麻省理工学院人工智能实验室的生活 - 致力于人类知识,而不是阻碍任何人进一步使用和进一步传播人类知识。
|
||||
|
||||
当时,你可以造出一台和其他并非为 Lisp 设计的计算机价格差不多的计算机,但它运行 Lisp 的速度要快得多,而且每个操作都带有完整的类型检查。普通计算机通常迫使你在执行速度和良好的类型检查之间做出选择。所以,没错,你可以拥有一个 Lisp 编译器并快速地运行程序,但是当程序试图对一个数字取 car 时,只会得到无意义的结果,并最终在某个时刻崩溃。
|
||||
|
||||
Lisp 机器能够以和其他机器相当的速度执行指令,但每条指令,例如 car 指令,都会进行数据类型检查,所以当你在编译后的程序中试图对一个数字取 car 时,它会立即报错。我们造出了这种机器,并为它配备了 Lisp 操作系统。它几乎完全是用 Lisp 写成的,唯一的例外是写在微码里的部分。后来有人开始对制造这种机器感兴趣,这就意味着他们要去创办公司。
|
||||
|
||||
关于这家公司应该是什么样的,有两种不同的想法。格林布莱特希望开始他所谓的“黑客”公司。这意味着它将成为一家由黑客运营的公司,并以有利于黑客的方式运营。另一个目标是维持AI Lab文化(3)。不幸的是,Greenblatt没有任何商业经验,所以Lisp机器组的其他人说他们怀疑自己能否成功。他们认为他避免外来投资的计划是行不通的。
|
||||
|
||||
他为什么要避免外来投资?因为当一家公司有外部投资者时,他们会接受控制,他们不会让你有任何顾忌。最后,如果你有任何顾忌,他们也会取代你作为经理。
|
||||
|
||||
所以Greenblatt认为他会找到一个会提前支付购买零件的顾客。他们会建造机器并交付它们; 通过这些零件的利润,他们将能够为更多的机器购买零件,销售这些零件,然后为更多的机器购买零件,等等。小组中的其他人认为这不可行。
|
||||
|
||||
然后,Greenblatt 找来了当初聘用我的 Russell Noftsker,Russell 早已离开 AI 实验室,并创办了一家成功的公司,大家都认为他有商业头脑。他对小组里的其他人说:“我们甩开 Greenblatt,别管他的想法,我们另外开一家公司。”以此展示了他的“商业头脑”:背后捅刀子,确实是个地道的商人。于是那些人决定成立一家名叫 Symbolics 的公司,接受外部投资,毫无顾忌,不择手段地去赢。
|
||||
|
||||
但格林布拉特没有放弃。无论如何,他和忠于他的少数人决定启动Lisp Machines Inc.并继续他们的计划。你知道什么,他们成功了!他们得到了第一个客户,并提前付款。他们建造机器并出售它们,并建造了更多的机器和更多的机器。尽管他们没有得到团队中大多数人的帮助,但他们确实取得了成功。Symbolics也取得了成功的开始,所以你有两个竞争的Lisp机器公司。当Symbolics看到LMI不会掉在脸上时,他们开始寻找破坏它的方法。
|
||||
|
||||
于是,在我们的实验室遭到抛弃之后,实验室里又打起了“战争”。所谓抛弃,是指 Symbolics 把所有黑客都雇走了,只剩下我和少数在 LMI 兼职的人。后来他们又搬出一条规定,清退了为麻省理工学院兼职工作的人,于是那些人只好彻底离开,最后只剩下我一个。人工智能实验室从此一蹶不振。而麻省理工学院还与这两家公司签订了非常愚蠢的协议:这是一份三方合同,两家公司都获得了 Lisp 机器系统源代码的使用许可,并且必须允许麻省理工学院使用它们所做的修改。但合同里没有写明,麻省理工学院有权把这些修改放进两家公司都获得了许可的 MIT Lisp 机器系统中。谁也没有料到 AI 实验室的黑客团队会被连根拔起,但事实就是如此。
|
||||
|
||||
因此,Symbolics提出了一个计划(4)。他们对实验室说:“我们将继续对可供您使用的系统进行更改,但您无法将其置于MIT Lisp机器系统中。相反,我们将允许您访问Symbolics的Lisp机器系统,您可以运行它,但这就是您所能做的一切。“
|
||||
|
||||
实际上,这意味着他们要求我们必须选择一个侧面,并使用MIT版本的系统或Symbolics版本。无论我们做出哪种选择,都决定了我们改进的系统。如果我们研究并改进了Symbolics版本,我们就会单独支持Symbolics。如果我们使用并改进了MIT版本的系统,我们将为两家公司提供工作,但是Symbolics认为我们将支持LMI,因为我们将帮助它们继续存在。所以我们不再被允许保持中立。
|
||||
|
||||
直到那时,我还没有站在任何一家公司的一边,尽管让我很难看到我们的社区和软件发生了什么。但现在,Symbolics迫使这个问题。因此,为了帮助保持Lisp Machines Inc.(5) - 我开始复制Symbolics对Lisp机器系统所做的所有改进。我自己再次写了相同的改进(即代码是我自己的)。
|
||||
|
||||
过了一会儿(6),我得出结论,如果我甚至不看他们的代码那将是最好的。当他们发布了发布说明的beta版时,我会看到这些功能是什么然后实现它们。当他们真正发布时,我也做了。
|
||||
|
||||
通过这种方式,两年来,我阻止他们消灭Lisp Machines Incorporated,两家公司继续进行。但是,我不想花费数年和数年来惩罚某人,只是挫败了一个邪恶的行为。我认为他们受到了相当彻底的惩罚,因为他们被那些没有离开或将要消失的竞争所困扰(7)。与此同时,现在是时候开始建立一个新社区来取代他们的行动和其他人已经消灭的社区。
|
||||
|
||||
70年代的Lisp社区不仅限于麻省理工学院人工智能实验室,而且黑客并非都在麻省理工学院。Symbolics开始的战争是麻省理工学院的战争,但当时还有其他事件正在发生。有人放弃了合作,共同消灭了社区,没有多少人离开。
|
||||
|
||||
一旦我不再惩罚 Symbolics,我就得想清楚接下来该做什么。很明显,我必须做出一个自由的操作系统:只有依靠自由的操作系统,人们才能一起协作并彼此共享。
|
||||
|
||||
起初,我想过制作一个基于Lisp的系统,但我意识到这在技术上不是一个好主意。要拥有像Lisp机器系统这样的东西,你需要特殊用途的微码。这使得以其他计算机运行程序的速度运行程序成为可能,并且仍然可以获得类型检查的好处。没有它,你将被简化为类似其他机器的Lisp编译器。程序会更快,但不稳定。如果你在分时系统上运行一个程序就好了 - 如果一个程序崩溃,那不是灾难,这是你的程序偶尔会做的事情。但这并不能很好地编写操作系统,所以我拒绝了制作像Lisp机器这样的系统的想法。
|
||||
|
||||
我决定做一个类 Unix 的操作系统,让 Lisp 实现作为用户程序运行。内核不会用 Lisp 编写,但我们会有 Lisp。于是,GNU 操作系统的开发促使我编写了 GNU Emacs。在这个过程中,我的目标是做出一个尽可能小的 Lisp 实现,因为程序的大小在当时是个非常现实的问题。
|
||||
|
||||
那些日子里,1985年有人拥有一台没有虚拟内存的1兆字节机器。他们希望能够使用GNU Emacs。这意味着我必须保持程序尽可能小。
|
||||
|
||||
例如,当时唯一的循环结构是 `while`,而且极其简单:没有办法从 `while` 循环中跳出来,你只能使用 catch 和 throw,或者自己测试控制循环的变量。这说明我为了把东西做小付出了多大的努力。我们没有 `caar`、`cadr` 之类的函数;“把能省的都省掉”从一开始就是 GNU Emacs 的精神,也就是 Emacs Lisp 的精神。
|
||||
|
||||
显然,机器现在变大了,我们不再这样做了。我们放入'caar'和'cadr'等等,我们可能会在其中一天进行另一个循环构造。我们现在愿意扩展它,但我们不想将它扩展到常见的Lisp级别。我在Lisp机器上实现了一次Common Lisp,我对此并不满意。我不太喜欢的一件事是关键字参数(8)。他们对我来说似乎不太好看; 我有时候会这样做,但是当我这样做时,我会尽量减少。
|
||||
|
||||
这不是与Lisp有关的GNU项目的结束。后来在1995年左右,我们开始考虑启动一个图形桌面项目。很明显,对于桌面上的程序,我们希望编程语言能够编写大量内容以使其易于扩展,就像编辑器一样。问题是应该是什么。
|
||||
|
||||
当时,为了这个目的,TCL正在被大力推动。我对TCL的看法很低,主要是因为它不是Lisp。它看起来有点像Lisp,但在语义上它不是,而且它不那么干净。然后有人向我展示了一则广告,其中Sun试图聘请某人在TCL工作,使其成为世界“事实上的标准扩展语言”。我想,“我们必须阻止这种情况发生。”因此我们开始将Scheme作为GNU的标准可扩展性语言。不是Common Lisp,因为它太大了。我们的想法是,我们将有一个Scheme解释器,设计为以与TCL链接到应用程序相同的方式链接到应用程序中。然后我们建议将其作为所有GNU程序的首选可扩展性包。
|
||||
|
||||
使用像Lisp这样强大的语言作为主要的可扩展性语言,可以获得一个有趣的好处。您可以通过将其翻译成主要语言来实现其他语言。如果您的主要语言是TCL,则无法通过将其翻译成TCL来轻松实现它。但是如果你的主要语言是Lisp,那么通过翻译来实现其他东西并不困难。我们的想法是,如果每个可扩展的应用程序都支持Scheme,您可以在Scheme中编写TCL或Python或Perl的实现,将该程序转换为Scheme。然后,您可以将其加载到任何应用程序中,并使用您喜欢的语言进行自定义,它也可以与其他自定义项一起使用。
|
||||
|
||||
只要可扩展性语言很弱,用户就必须只使用您提供的语言。这意味着喜欢任何特定语言的人必须竞争应用程序开发人员的选择 - 说“请,应用程序开发人员,将我的语言放入您的应用程序,而不是他的语言。”然后用户根本没有选择 - 无论哪个他们正在使用的应用程序带有一种语言,并且它们被[该语言]所困扰。但是,如果你有一种强大的语言可以通过翻译来实现其他语言,那么你就可以让用户选择语言了,我们不再需要语言大战了。这就是我们希望'Guile',我们的计划翻译,会做的。我们去年夏天有一个人在完成从Python到Scheme的翻译工作。我不知道是不是' 已完全完成,但对于对此项目感兴趣的任何人,请与我们联系。这就是我们未来的计划。
|
||||
|
||||
我没有谈过自由软件,但让我简要地告诉你一些关于这意味着什么的信息。自由软件不涉及价格; 这并不意味着你是免费获得的。(您可能已经支付了一份副本,或者免费获得了一份副本。)这意味着您拥有作为用户的自由。关键是你可以自由运行程序,自由学习它的功能,可以随意改变它以满足你的需求,可以自由地重新发布其他人的副本并免费发布改进的扩展版本。这就是自由软件的含义。如果你使用非免费程序,你就失去了至关重要的自由,所以不要这样做。
|
||||
|
||||
GNU 项目的目的,就是通过提供自由软件来取代那些践踏自由、支配用户的非自由软件,让人们更容易拒绝它们。对于那些没有道德勇气、在意味着实际不便时便不肯拒绝非自由软件的人,我们尝试提供一个自由的替代品,让你可以少一些麻烦、少一些实际代价地走向自由。牺牲越少越好。我们希望让你更容易地生活在自由与合作之中。
|
||||
|
||||
这是合作自由的问题。我们习惯于思考自由和与社会的合作,好像他们是对立的。但在这里他们是在同一边。使用免费软件,您可以自由地与其他人合作,也可以自由地帮助自己。使用非自由软件,有人会主宰你并让人们分裂。你不能与他们分享,你不能自由地合作或帮助社会,而不是你可以自由地帮助自己。分裂和无助的是使用非自由软件的用户状态。
|
||||
|
||||
我们已经写出了大量的自由软件。我们做到了人们曾说我们永远做不到的事情:我们有了两个自由软件操作系统,也有了很多应用程序,当然,我们还有很长的路要走。所以我们需要你的帮助。我希望你能成为 GNU 项目的志愿者,帮助我们开发更多的自由软件。请到 http://www.gnu.org/help 查看关于如何提供帮助的建议。如果你想订购物品,主页上有相关链接。如果你想阅读有关哲学理念的文章,请看 /philosophy 目录。如果你在寻找可用的自由软件,请看 /directory,其中列出了大约 1900 个软件包(这只是全部自由软件的一部分)。请为我们编写更多的自由软件并做出贡献。我的文集《自由软件与自由社会》正在发售,可以在 www.gnu.org 上购买。祝各位黑客愉快!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.gnu.org/gnu/rms-lisp.html
|
||||
|
||||
作者:[Richard Stallman][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekmar](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.gnu.org
|
||||
[1]:https://www.gnu.org/help/
|
||||
[2]:http://www.gnu.org/
|
@ -1,199 +0,0 @@
|
||||
不要安装 Yaourt!在 Arch 上使用以下这些替代品。
|
||||
======
|
||||
**前略:Yaourt 曾是最流行的 AUR 助手,但现已停止开发。在这篇文章中,我们会为 Arch 衍生发行版们列出 Yaourt 最佳的替代品。**
|
||||
|
||||
[Arch User Repository][1] 或者叫 AUR,是一个为 Arch 用户而生的社区驱动软件仓库。Debian/Ubuntu 用户的对应类比是 PPA。
|
||||
|
||||
AUR 包含了不直接被 [Arch Linux][2] 官方所背书的软件。如果有人想在 Arch 上发布软件或者包,他可以通过 AUR 提供给客户。这让末端用户们可以使用到比默认仓库里更多的软件。
|
||||
|
||||
所以你该如何使用 AUR 呢?简单来说,你需要不同的工具以从 AUR 中安装软件。Arch 的包管理器 [pacman][3] 不直接支持 AUR。那些支持 AUR 的特殊工具我们称之为 [AUR Helpers][4]。
|
||||
|
||||
Yaourt(Yet AnOther User Repository Tool)曾是一个对 pacman 的封装,让用户可以方便地从 AUR 下载软件,它的语法与 pacman 基本一致。Yaourt 对 AUR 的搜索、安装,乃至冲突解决和包依赖处理都有着良好的支持。
|
||||
|
||||
然而,Yaourt 的开发进度近来十分缓慢,甚至在 Arch Wiki 上已经被[列为][5]“停止或有问题”。[许多 Arch 用户认为它不安全][6] 进而开始寻找其他的 AUR 助手。
|
||||
|
||||
![Yaourt 以外的 AUR Helpers][7]
|
||||
|
||||
在这篇文章中,我们会介绍 Yaourt 的最佳替代品,以便于你从 AUR 下载和安装软件。
|
||||
|
||||
### AUR Helper 最好的选择
|
||||
|
||||
我刻意忽略掉了例如 Trizen 和 Packer 这样的选择,因为他们也被列为“停止或有问题”的了。
|
||||
|
||||
#### 1\. aurman
|
||||
|
||||
[aurman][8] 是最好的 AUR 助手之一,完全可以胜任 Yaourt 替代品的角色。它的操作语法和 pacman 完全一样,你可以搜索 AUR、解决包依赖、在安装前检查 PKGBUILD 的内容等等。
|
||||
|
||||
##### aurman 的特性
|
||||
|
||||
* aurman 支持所有 pacman 操作并且引入了可靠的包依赖解决,冲突判定和分包(split package)支持
|
||||
* sudo 循环在单独的线程中运行,因此你每次安装只需要输入一次管理员密码
|
||||
* 提供开发者包支持并且可以区分显性安装和隐性安装的包
|
||||
* 支持搜索 AUR
|
||||
* 你可以检视并编辑 PKGBUILD 的内容
|
||||
* 可以用作单独的 [包依赖解决][9]
|
||||
|
||||
|
||||
|
||||
##### 安装 aurman
|
||||
```
|
||||
git clone https://aur.archlinux.org/aurman.git
|
||||
cd aurman
|
||||
makepkg -si
|
||||
|
||||
```
|
||||
|
||||
##### 使用 aurman
|
||||
|
||||
用名字搜索:
|
||||
```
|
||||
aurman -Ss <package-name>
|
||||
|
||||
```
|
||||
安装:
|
||||
```
|
||||
aurman -S <package-name>
|
||||
|
||||
```
|
||||
|
||||
#### 2\. yay
|
||||
|
||||
[yay][10] 是我们列表上下一个选项。它使用 Go 语言写成,宗旨是提供 pacman 的界面并且让用户输入最少化,yay 自己几乎没有任何依赖软件。
|
||||
|
||||
##### yay 的特性
|
||||
|
||||
* yay 提供 AUR 的 Tab 补全,并可以从 ABS 或 AUR 下载 PKGBUILD
|
||||
* 支持收窄搜索,并且不需要引用 PKGBUILD 源
|
||||
* yay 的二进制文件除了 pacman 以外别无依赖
|
||||
* 提供先进的包依赖解决以及在编译安装之后移除编译时的依赖
|
||||
* 支持彩色输出,依照 /etc/pacman.conf 文件进行配置
|
||||
* yay 可被配置成只支持 AUR 或者 repo 里的软件包
|
||||
|
||||
|
||||
|
||||
##### 安装 yay
|
||||
|
||||
你可以从 git 克隆并编译安装
|
||||
```
|
||||
git clone https://aur.archlinux.org/yay.git
|
||||
cd yay
|
||||
makepkg -si
|
||||
|
||||
```
|
||||
|
||||
##### 使用 yay
|
||||
|
||||
搜索:
|
||||
```
|
||||
yay -Ss <package-name>
|
||||
|
||||
```
|
||||
|
||||
安装:
|
||||
```
|
||||
yay -S <package-name>
|
||||
|
||||
```
|
||||
|
||||
#### 3\. pakku
|
||||
|
||||
[Pakku][11] 是另一个尚处于开发早期的 pacman 封装。不过,处于开发早期并不意味着它逊色于其他 AUR 助手。Pakku 能很好地支持搜索和安装,并且可以在安装后移除不再需要的编译依赖。
|
||||
|
||||
##### pakku 的特性
|
||||
|
||||
* 从 AUR 搜索安装软件
|
||||
* 检视不同 build 之间的文件变化
|
||||
* 从官方仓库编译并事后移除编译依赖
|
||||
* 获取 PKGBUILD 以及 pacman 整合
|
||||
* 类 pacman 的用户界面和选项支持
|
||||
* 支持 pacman 配置文件,并且无需手动 source PKGBUILD
|
||||
|
||||
|
||||
|
||||
##### 安装 pakku
|
||||
```
|
||||
git clone https://aur.archlinux.org/pakku.git
|
||||
cd pakku
|
||||
makepkg -si
|
||||
|
||||
```
|
||||
|
||||
##### 使用 pakku
|
||||
|
||||
搜索:
|
||||
```
|
||||
pakku -Ss spotify
|
||||
|
||||
```
|
||||
|
||||
安装:
|
||||
```
|
||||
pakku -S spotify
|
||||
|
||||
```
|
||||
|
||||
#### 4\. aurutils
|
||||
|
||||
[aurutils][12] 本质上是一堆自动化脚本的集合。他可以搜索 AUR,检查更新,并且解决包依赖。
|
||||
|
||||
##### aurutils 的特性
|
||||
|
||||
* 不同的任务可以有多个仓库
|
||||
* aursync -u 一键同步所有本地代码库
|
||||
* aursearch 搜索提供 pkgbase,long format 和 raw 支持
|
||||
* 能忽略指定包
|
||||
|
||||
|
||||
|
||||
##### 安装 aurutils
|
||||
```
|
||||
git clone https://aur.archlinux.org/aurutils.git
|
||||
cd aurutils
|
||||
makepkg -si
|
||||
|
||||
```
|
||||
|
||||
##### 使用 aurutils
|
||||
|
||||
搜索:
|
||||
```
|
||||
aurutils -Ss <package-name>
|
||||
|
||||
```
|
||||
|
||||
安装:
|
||||
```
|
||||
aurutils -S <package-name>
|
||||
|
||||
```
|
||||
|
||||
所有这些包,在有 Yaourt 或者其他 AUR 助手的情况下都可以直接安装。
|
||||
|
||||
#### 写在最后
|
||||
|
||||
Arch Linux 有着[很多 AUR 助手][4] 可以自动完成 AUR 各方面的日常任务。很多用户依然使用 Yaourt 来完成 AUR 相关任务,每个人都有自己不一样的偏好,欢迎留言告诉我们你在 Arch 里使用什么,又有什么心得?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/best-aur-helpers/
|
||||
|
||||
作者:[Ambarish Kumar][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[Moelf](https://github.com/Moelf)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ambarish/
|
||||
[1]:https://wiki.archlinux.org/index.php/Arch_User_Repository
|
||||
[2]:https://www.archlinux.org/
|
||||
[3]:https://wiki.archlinux.org/index.php/pacman
|
||||
[4]:https://wiki.archlinux.org/index.php/AUR_helpers
|
||||
[5]:https://wiki.archlinux.org/index.php/AUR_helpers#Comparison_table
|
||||
[6]:https://www.reddit.com/r/archlinux/comments/4azqyb/whats_so_bad_with_yaourt/
|
||||
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/no-yaourt-arch-800x450.jpeg
|
||||
[8]:https://github.com/polygamma/aurman
|
||||
[9]:https://github.com/polygamma/aurman/wiki/Using-aurman-as-dependency-solver
|
||||
[10]:https://github.com/Jguer/yay
|
||||
[11]:https://github.com/kitsunyan/pakku
|
||||
[12]:https://github.com/AladW/aurutils
|
@ -0,0 +1,178 @@
|
||||
查看一个归档或压缩文件的内容而无需解压它
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/07/View-The-Contents-Of-An-Archive-Or-Compressed-File-720x340.png)
|
||||
|
||||
在本教程中,我们将学习如何在类 Unix 系统中查看一个归档或者压缩文件的内容而无需实际解压它。在深入之前,让我们先厘清归档和压缩文件的概念,它们之间有显著不同。归档是将多个文件或者目录归并到一个文件的过程,因此这个生成的文件是没有被压缩过的。而压缩则是结合多个文件或者目录到一个文件并最终压缩这个文件的方法。归档文件不是一个压缩文件,但压缩文件可以是一个归档文件,清楚了吗?好,那就让我们进入今天的主题。
|
||||
|
||||
### 查看一个归档或者压缩文件的内容而无需解压它
|
||||
|
||||
得益于 Linux 社区,有很多命令行工具可以来达成上面的目标。下面就让我们来看看使用它们的一些示例。
|
||||
|
||||
**1 使用 Vim 编辑器**
|
||||
|
||||
Vim 不只是一个编辑器,使用它我们可以干很多事情。下面的命令展示的是在没有解压的情况下使用 Vim 查看一个压缩的归档文件的内容:
|
||||
|
||||
```
|
||||
$ vim ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
你甚至还可以浏览归档文件的内容,打开其中的文本文件(假如有的话)。要打开一个文本文件,只需要用方向键将光标移动到文件名前面,然后敲 ENTER 键来打开它。
|
||||
|
||||
**2 使用 Tar 命令**
|
||||
|
||||
为了列出一个 tar 归档文件的内容,可以运行:
|
||||
```
|
||||
$ tar -tf ostechnix.tar
|
||||
ostechnix/
|
||||
ostechnix/image.jpg
|
||||
ostechnix/file.pdf
|
||||
ostechnix/song.mp3
|
||||
|
||||
```
|
||||
|
||||
或者使用 **-v** 选项来查看归档文件的具体属性,例如它的文件所有者、属组、创建日期等等。
|
||||
```
|
||||
$ tar -tvf ostechnix.tar
|
||||
drwxr-xr-x sk/users 0 2018-07-02 19:30 ostechnix/
|
||||
-rw-r--r-- sk/users 53632 2018-06-29 15:57 ostechnix/image.jpg
|
||||
-rw-r--r-- sk/users 156831 2018-06-04 12:37 ostechnix/file.pdf
|
||||
-rw-r--r-- sk/users 9702219 2018-04-25 20:35 ostechnix/song.mp3
|
||||
|
||||
```
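如果 tar 包经过了 gzip 压缩(例如前面用到的 ostechnix.tar.gz),只需加上 `-z` 选项即可直接列出其内容。下面是一个简单的示例:

```
$ tar -tzf ostechnix.tar.gz
```

对于 bzip2 或 xz 压缩的归档,相应地换成 `-j` 或 `-J` 选项即可。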
|
||||
|
||||
**3 使用 Rar 命令**
|
||||
|
||||
要查看一个 rar 文件的内容,只需要执行:
|
||||
```
|
||||
$ rar v ostechnix.rar
|
||||
|
||||
RAR 5.60 Copyright (c) 1993-2018 Alexander Roshal 24 Jun 2018
|
||||
Trial version Type 'rar -?' for help
|
||||
|
||||
Archive: ostechnix.rar
|
||||
Details: RAR 5
|
||||
|
||||
Attributes Size Packed Ratio Date Time Checksum Name
|
||||
----------- --------- -------- ----- ---------- ----- -------- ----
|
||||
-rw-r--r-- 53632 52166 97% 2018-06-29 15:57 70260AC4 ostechnix/image.jpg
|
||||
-rw-r--r-- 156831 139094 88% 2018-06-04 12:37 C66C545E ostechnix/file.pdf
|
||||
-rw-r--r-- 9702219 9658527 99% 2018-04-25 20:35 DD875AC4 ostechnix/song.mp3
|
||||
----------- --------- -------- ----- ---------- ----- -------- ----
|
||||
9912682 9849787 99% 3
|
||||
|
||||
```
|
||||
|
||||
**4 使用 Unrar 命令**
|
||||
|
||||
你也可以使用带有 **l** 选项的 **Unrar** 来做到与上面相同的事情,展示如下:
|
||||
```
|
||||
$ unrar l ostechnix.rar
|
||||
|
||||
UNRAR 5.60 freeware Copyright (c) 1993-2018 Alexander Roshal
|
||||
|
||||
Archive: ostechnix.rar
|
||||
Details: RAR 5
|
||||
|
||||
Attributes Size Date Time Name
|
||||
----------- --------- ---------- ----- ----
|
||||
-rw-r--r-- 53632 2018-06-29 15:57 ostechnix/image.jpg
|
||||
-rw-r--r-- 156831 2018-06-04 12:37 ostechnix/file.pdf
|
||||
-rw-r--r-- 9702219 2018-04-25 20:35 ostechnix/song.mp3
|
||||
----------- --------- ---------- ----- ----
|
||||
9912682 3
|
||||
|
||||
```
|
||||
|
||||
**5 使用 Zip 命令**
|
||||
|
||||
为了查看一个 zip 文件的内容而无需解压它,可以使用下面的 **zip** 命令:
|
||||
```
|
||||
$ zip -sf ostechnix.zip
|
||||
Archive contains:
|
||||
Life advices.jpg
|
||||
Total 1 entries (597219 bytes)
|
||||
|
||||
```
|
||||
|
||||
**6 使用 Unzip 命令**
|
||||
|
||||
你也可以像下面这样使用 **-l** 选项的 **Unzip** 命令来呈现一个 zip 文件的内容:
|
||||
```
|
||||
$ unzip -l ostechnix.zip
|
||||
Archive: ostechnix.zip
|
||||
Length Date Time Name
|
||||
--------- ---------- ----- ----
|
||||
597219 2018-04-09 12:48 Life advices.jpg
|
||||
--------- -------
|
||||
597219 1 file
|
||||
|
||||
```
|
||||
|
||||
**7 使用 Zipinfo 命令**

**zipinfo** 命令同样可以在不解压的情况下列出 zip 文件的内容及其权限、日期等信息:
|
||||
|
||||
```
|
||||
$ zipinfo ostechnix.zip
|
||||
Archive: ostechnix.zip
|
||||
Zip file size: 584859 bytes, number of entries: 1
|
||||
-rw-r--r-- 6.3 unx 597219 bx defN 18-Apr-09 12:48 Life advices.jpg
|
||||
1 file, 597219 bytes uncompressed, 584693 bytes compressed: 2.1%
|
||||
|
||||
```
|
||||
|
||||
如你所见,上面的命令展示了一个 zip 文件的内容、它的权限、创建日期和压缩百分比等等信息。
|
||||
|
||||
**8 使用 Zcat 命令**
|
||||
|
||||
要查看一个压缩的归档文件的内容而不解压它,可以使用 **zcat** 命令:
|
||||
```
|
||||
$ zcat ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
zcat 和 `gunzip -c` 命令相同。所以你可以使用下面的命令来查看归档或者压缩文件的内容:
|
||||
```
|
||||
$ gunzip -c ostechnix.tar.gz
|
||||
|
||||
```
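需要注意的是,对 .tar.gz 这类归档直接使用 zcat 会把原始的 tar 字节流打印到终端,可读性很差。一个常见的做法(这里仅作示意)是把 zcat 的输出通过管道交给 tar,用来列出其中的文件:

```
$ zcat ostechnix.tar.gz | tar -tf -
```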
|
||||
|
||||
**9 使用 Zless 命令**
|
||||
|
||||
要使用 Zless 命令来查看一个归档或者压缩文件的内容,只需:
|
||||
```
|
||||
$ zless ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
这个命令类似于 `less` 命令,它将一页一页地展示其输出。
|
||||
|
||||
**10 使用 Less 命令**
|
||||
|
||||
可能你已经知道 **less** 命令可以打开文件来交互式地阅读它,并且它支持滚动和搜索。
|
||||
|
||||
运行下面的命令来使用 less 命令查看一个归档或者压缩文件的内容:
|
||||
```
|
||||
$ less ostechnix.tar.gz
|
||||
|
||||
```
|
||||
|
||||
上面便是全部的内容了。现在你知道了如何在 Linux 中使用各种命令查看一个归档或者压缩文件的内容了。希望本文对你有用。更多好的内容将呈现给大家,希望继续关注我们!
|
||||
|
||||
干杯!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-view-the-contents-of-an-archive-or-compressed-file-without-extracting-it/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/07/vim.png
|
@ -1,224 +1,131 @@
|
||||
A gawk script to convert smart quotes
|
||||
一个转换花引号的 gawk 脚本
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourceprescription.png?itok=gFrc_GTH)
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourceprescription.png?itok=gFrc_GTH)
|
||||
|
||||
I manage a personal website and edit the web pages by hand. Since I don't have many pages on my site, this works well for me, letting me "scratch the itch" of getting into the site's code.
|
||||
我管理着一个个人网站,同时手工编辑网站上的网页。由于网站上的页面并不多,这种方法对我很适合,可以让我对网站代码的细节一清二楚。
|
||||
|
||||
When I updated my website's design recently, I decided to turn all the plain quotes into "smart quotes," or quotes that look like those used in print material: “” instead of "".
|
||||
最近我升级了网站的设计样式,我决定把所有的普通引号都转换成“花引号”,即印刷材料中使用的那种引号:用 “” 来代替 ""。
|
||||
|
||||
Editing all of the quotes by hand would take too long, so I decided to automate the process of converting the quotes in all of my HTML files. But doing so via a script or program requires some intelligence. The script needs to know when to convert a plain quote to a smart quote, and which quote to use.
|
||||
手工修改所有的引号太耗时了,因此我决定将转换所有 HTML 文件中引号的过程自动化。不过通过程序或脚本来实现该功能需要费点劲。这个脚本需要知道何时将普通引号转换成花引号,并决定使用哪种引号(译注:左引号还是右引号,单引号还是双引号)。
|
||||
|
||||
You can use different methods to convert quotes. Greg Pittman wrote a [Python script][1] for fixing smart quotes in text. I wrote mine in GNU [awk][2] (gawk).
|
||||
有多种方法可以转换引号。Greg Pittman 写过一个 [Python 脚本 ][1] 来修正文本中的花引号。而我自己使用 GNU [awk][2] (gawk) 来实现。
|
||||
|
||||
> Get our awk cheat sheet. [Free download][3].
|
||||
> 下载我的 awk 备忘录。[免费下载 ][3]。
|
||||
|
||||
To start, I wrote a simple gawk function to evaluate a single character. If that character is a quote, the function determines if it should output a plain quote or a smart quote. The function looks at the previous character; if the previous character is a space, the function outputs a left smart quote. Otherwise, the function outputs a right smart quote. The script does the same for single quotes.
|
||||
开始之前,我写了一个简单的 gawk 函数来检查单个字符。若该字符是一个引号,该函数就判断应该输出普通引号还是花引号。函数会查看前一个字符:若前一个字符是空格,则输出左花引号;否则输出右花引号。脚本对单引号的处理方式也一样。
|
||||
```
|
||||
function smartquote (char, prevchar) {
|
||||
|
||||
# print smart quotes depending on the previous character
|
||||
|
||||
# otherwise just print the character as-is
|
||||
|
||||
|
||||
|
||||
if (prevchar ~ /\s/) {
|
||||
|
||||
# prev char is a space
|
||||
|
||||
if (char == "'") {
|
||||
|
||||
printf("‘");
|
||||
|
||||
}
|
||||
|
||||
else if (char == "\"") {
|
||||
|
||||
printf("“");
|
||||
|
||||
}
|
||||
|
||||
else {
|
||||
|
||||
printf("%c", char);
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
else {
|
||||
|
||||
# prev char is not a space
|
||||
|
||||
if (char == "'") {
|
||||
|
||||
printf("’");
|
||||
|
||||
}
|
||||
|
||||
else if (char == "\"") {
|
||||
|
||||
printf("”");
|
||||
|
||||
}
|
||||
|
||||
else {
|
||||
|
||||
printf("%c", char);
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
With that function, the body of the gawk script processes the HTML input file character by character. The script prints all text verbatim when inside an HTML tag (for example, `<html lang="en">`. Outside any HTML tags, the script uses the `smartquote()` function to print text. The `smartquote()` function does the work of evaluating when to print plain quotes or smart quotes.
|
||||
这个 gawk 脚本的主体部分借助该函数逐个字符地处理 HTML 输入文件。在 HTML 标签内部(例如 `<html lang="en">`),脚本逐字原样输出所有内容;在 HTML 标签之外,脚本则调用 `smartquote()` 函数来输出文本,由它判断应该输出普通引号还是花引号。
|
||||
```
|
||||
function smartquote (char, prevchar) {
|
||||
|
||||
...
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
BEGIN {htmltag = 0}
|
||||
|
||||
|
||||
|
||||
{
|
||||
|
||||
# for each line, scan one letter at a time:
|
||||
|
||||
|
||||
|
||||
linelen = length($0);
|
||||
|
||||
|
||||
|
||||
prev = "\n";
|
||||
|
||||
|
||||
|
||||
for (i = 1; i <= linelen; i++) {
|
||||
|
||||
char = substr($0, i, 1);
|
||||
|
||||
|
||||
|
||||
if (char == "<") {
|
||||
|
||||
htmltag = 1;
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
if (htmltag == 1) {
|
||||
|
||||
printf("%c", char);
|
||||
|
||||
}
|
||||
|
||||
else {
|
||||
|
||||
smartquote(char, prev);
|
||||
|
||||
prev = char;
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
if (char == ">") {
|
||||
|
||||
htmltag = 0;
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
# add trailing newline at end of each line
|
||||
|
||||
printf ("\n");
|
||||
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Here's an example:
|
||||
下面是一个例子:
|
||||
```
|
||||
gawk -f quotes.awk test.html > test2.html
|
||||
|
||||
```
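原文提到要转换所有的 HTML 文件。下面是一个批量处理的简单示意(并非原文给出的做法;脚本文件名 quotes.awk 沿用上面的例子,处理前先为每个文件做备份):

```
for f in *.html; do
    cp "$f" "$f.bak"                    # 先备份原始文件
    gawk -f quotes.awk "$f.bak" > "$f"  # 将转换结果写回原文件名
done
```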
|
||||
|
||||
Sample input:
|
||||
其输入为:
|
||||
```
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
|
||||
<title>Test page</title>
|
||||
|
||||
<link rel="stylesheet" type="text/css" href="/test.css" />
|
||||
|
||||
<meta charset="UTF-8">
|
||||
|
||||
<meta name="viewport" content="width=device-width" />
|
||||
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<h1><a href="/"><img src="logo.png" alt="Website logo" /></a></h1>
|
||||
|
||||
<p>"Hi there!"</p>
|
||||
|
||||
<p>It's and its.</p>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
其输出为:
|
||||
```
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
|
||||
<title>Test page</title>
|
||||
|
||||
<link rel="stylesheet" type="text/css" href="/test.css" />
|
||||
|
||||
<meta charset="UTF-8">
|
||||
|
||||
<meta name="viewport" content="width=device-width" />
|
||||
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<h1><a href="/"><img src="logo.png" alt="Website logo" /></a></h1>
|
||||
|
||||
<p>“Hi there!”</p>
|
||||
|
||||
<p>It’s and its.</p>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
||||
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -227,12 +134,12 @@ via: https://opensource.com/article/18/8/gawk-script-convert-smart-quotes
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jim-hall
|
||||
[1]:https://opensource.com/article/17/3/python-scribus-smart-quotes
|
||||
[2]:/downloads/cheat-sheet-awk-features
|
||||
[2]:https://opensource.com/downloads/cheat-sheet-awk-features
|
||||
[3]:https://opensource.com/downloads/cheat-sheet-awk-features
|