mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-02-03 23:40:14 +08:00
commit cff7b1f53b: Merge remote-tracking branch 'LCTT/master'
@ -1,28 +1,25 @@
Asynchronous processing with Go using Kafka and MongoDB
============================================================

In my previous blog post, "[My first Go microservice using MongoDB and Docker multi-stage builds][9]", I created a Go microservice sample which exposes a REST-style http endpoint and saves the data received from an HTTP POST to a MongoDB database.

In this example, I decoupled the saving of the data to MongoDB and created another microservice to handle it. I also added Kafka to serve as the messaging layer, so the microservices can work asynchronously on their own concerns.

> In case you have time to watch, I recorded a walkthrough of this blog post in [this video][1] :)

Here is the high-level architecture of this simple asynchronous processing example using two microservices.

![rest-kafka-mongo-microservice-draw-io](https://www.melvinvivas.com/content/images/2018/04/rest-kafka-mongo-microservice-draw-io.jpg)

Microservice 1: a REST-style microservice which receives data from a /POST http call. After receiving the request, it retrieves the data from the http request and saves it to Kafka. After saving, it responds to the caller with the same data sent via /POST.

Microservice 2: a microservice which subscribes to a topic in Kafka, where Microservice 1 saves the data. Once a message is consumed by the microservice, it then saves the data to MongoDB.

Before you proceed, we need a few things to be able to run these microservices:

1. [Download Kafka][2]. I used version kafka_2.11-1.1.0
2. Install [librdkafka][3]. Unfortunately, this library needs to be present on the target system
3. Install the [Kafka Go client][4]
4. Run MongoDB. You can check my [previous article][5] on this, where I used a MongoDB docker image.

Let's get started!

@ -32,14 +29,12 @@
```
$ cd /<download path>/kafka_2.11-1.1.0
$ bin/zookeeper-server-start.sh config/zookeeper.properties

```

Next, start Kafka. I used port 9092 to connect to it. If you need to change the port, just configure it in `config/server.properties`. If you are a beginner like me, I suggest you stick with the default port for now.

```
$ bin/kafka-server-start.sh config/server.properties

```

Once Kafka is running, we need MongoDB. It's easy: just use this `docker-compose.yml`.

@ -61,17 +56,15 @@ volumes:

networks:
  network1:

```

Run the MongoDB docker container using Docker Compose.

```
docker-compose up

```

Here is the relevant code of Microservice 1. I just modified my previous example so it saves to Kafka rather than MongoDB:

[rest-to-kafka/rest-kafka-sample.go][10]

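For orientation before the diff fragment below, here is a minimal, hedged sketch of the producing side. It is not the author's exact code: it assumes the confluent-kafka-go v1 import path listed above, a broker on localhost:9092, and an illustrative topic name and `Job` struct; `produceJob` only plays the role of the sample's `saveJobToKafka`.

```
package main

import (
	"encoding/json"
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// Job is an illustrative payload; the real sample defines its own struct.
type Job struct {
	Title       string `json:"title"`
	Description string `json:"description"`
}

// produceJob plays the role of the article's saveJobToKafka function.
func produceJob(job Job) error {
	// Connect to the broker started earlier (default port 9092).
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	if err != nil {
		return err
	}
	defer p.Close()

	payload, err := json.Marshal(job)
	if err != nil {
		return err
	}

	topic := "jobs-topic1" // assumed topic name
	if err := p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          payload,
	}, nil); err != nil {
		return err
	}

	p.Flush(15 * 1000) // wait up to 15 seconds for delivery
	fmt.Println("Saved to Kafka:", string(payload))
	return nil
}

func main() {
	_ = produceJob(Job{Title: "hello", Description: "sample job"})
}
```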
@ -133,15 +126,13 @@ func saveJobToKafka(job Job) {

	}, nil)
	}
}

```

Here is the code of Microservice 2. The most important thing in this code is consuming the data from Kafka; the saving part I have already discussed in my previous blog post. The key part of the code here is consuming from Kafka:

[kafka-to-mongo/kafka-mongo-sample.go][11]

```
func main() {

	//Create MongoDB session
@ -206,14 +197,12 @@ func saveJobToMongo(jobString string) {

	fmt.Printf("Saved to MongoDB : %s", jobString)

}

```

Let's do a demo. Run Microservice 1. Make sure Kafka is up and running.

```
$ go run rest-kafka-sample.go

```

I used Postman to send data to Microservice 1.
@ -228,7 +217,6 @@ $ go run rest-kafka-sample.go

```
$ go run kafka-mongo-sample.go

```

Now you will see on Microservice 2 that the data is consumed and saved to MongoDB.
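As a rough illustration of what Microservice 2 does on the consuming side, here is a minimal sketch (again, not the author's exact code). It assumes the same confluent-kafka-go v1 client, and the group id and topic name are invented for the example; the MongoDB insert is replaced by a stub standing in for the `saveJobToMongo` discussed in the previous post.

```
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// saveJobToMongo stands in for the MongoDB insert from the previous post.
func saveJobToMongo(jobString string) {
	fmt.Printf("Saved to MongoDB : %s\n", jobString)
}

func main() {
	// Join a consumer group and read from the topic Microservice 1 produces to.
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"group.id":          "kafka-mongo-group", // assumed group id
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		panic(err)
	}
	defer c.Close()

	if err := c.SubscribeTopics([]string{"jobs-topic1"}, nil); err != nil { // assumed topic
		panic(err)
	}

	for {
		// Block until a message arrives, then hand it off for persistence.
		msg, err := c.ReadMessage(-1)
		if err != nil {
			fmt.Println("consumer error:", err)
			continue
		}
		saveJobToMongo(string(msg.Value))
	}
}
```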
@ -239,27 +227,26 @@ $ go run kafka-mongo-sample.go

![Screenshot-2018-04-29-22.26.39](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.26.39.png)

The complete source code can be found here:

[https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice][12]

Now for a shameless plug: if you like this article, please follow me on Twitter [@donvito][6]. My Twitter feed has content about Docker, Kubernetes, GoLang, Cloud, DevOps, Agile and Startups. You are also welcome to follow me on [GitHub][7] and [LinkedIn][8].

[Video](https://youtu.be/xa0Yia1jdu8)

Have fun!

--------------------------------------------------------------------------------

via: https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/

Author: [Melvin Vivas][a]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.melvinvivas.com/author/melvin/
[1]:https://youtu.be/xa0Yia1jdu8
[2]:https://kafka.apache.org/downloads
[3]:https://github.com/confluentinc/confluent-kafka-go
[4]:https://github.com/confluentinc/confluent-kafka-go
@ -1,52 +1,54 @@
What is CI/CD?
======

The terms continuous integration (CI) and continuous delivery (CD) come up frequently in software development. But what do they really mean?

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh)

When talking about software development, the terms continuous integration (CI) and continuous delivery (CD) come up frequently. But what do they really mean? In this article, I'll explain the meaning and significance behind these and related terms, such as continuous testing and continuous deployment.

### Overview

An assembly line in a factory produces consumer goods from raw materials in a fast, automated, reproducible manner. Similarly, a software delivery pipeline produces releases from source code in a fast, automated, and reproducible manner. The overall design of how this is done is called "continuous delivery" (CD). The process that kicks off the assembly line is called "continuous integration" (CI). The process that ensures quality is called "continuous testing", and the process that makes the end product available to users is called "continuous deployment". The experts who make all of this run simply, smoothly, and efficiently are known as DevOps practitioners.

### What does "continuous" mean?

"Continuous" is used to describe many of the different process practices mentioned here. It does not mean "always running", but rather "always ready to run". In the realm of software development, it also carries several core concepts/best practices. These are:

* **Frequent releases**: The goal behind continuous practices is to be able to deliver quality software frequently. The delivery frequency here is variable and can be defined by the development team or the company. For some products, delivering once a quarter, a month, a week, or a day may be frequent enough. For others, multiple deliveries a day may be desired and doable. "Continuous" also has an "occasional, on-demand" aspect. The end goal is the same: deliver high-quality software updates to end users in a repeatable, reliable process. Often this can be done with little or no interaction from, or even knowledge of, the users (think of device updates).
* **Automated processes**: The key to enabling this frequency is having automated processes handle nearly all aspects of software production. This includes building, testing, analysis, versioning, and, in some cases, deployment.
* **Repeatable**: If the automated processes we use always have the same behavior given the same inputs, the process should be repeatable. That is, if we take a historical version of the code as input, we should get the same corresponding set of deliverables. This also assumes we have the same versions of external dependencies (i.e., we don't re-create the other deliverables that that version of the code uses). Ideally, this also means that the processes in our pipelines can be versioned and re-created (see the DevOps discussion later on).
* **Fast iteration**: "Fast" is a relative term here, but regardless of the frequency of software updates/releases, the expectation is that continuous processes turn source code into deliverables efficiently. Automation takes care of most of the work, but an automated process may still be slow. For example, a round of integrated testing that takes most of the day may be too slow for a product that needs release-candidate updates several times a day.

### What is a "continuous delivery pipeline"?

The multiple different tasks and jobs that turn source code into a releasable product are usually strung together into a software "pipeline", where the successful completion of one automated process kicks off the next process in the sequence. Such pipelines go by many different names, such as continuous delivery pipeline, deployment pipeline, and software development pipeline. Broadly speaking, an orchestrator manages the definition, running, monitoring, and reporting of the different parts of the pipeline as it executes.

### How does a continuous delivery pipeline work?

The actual implementation of a software delivery pipeline can vary widely. A large number of programs may be used in a pipeline for the various aspects of source tracking, building, testing, gathering metrics, managing versions, and so on. But the overall workflow is generally the same. A single orchestration/workflow application manages the entire pipeline, and each of its processes runs as a separate job or is stage-managed by that application. Typically, the individual jobs are defined in a syntax and structure that the orchestration application understands and can manage as a workflow.

The jobs are used for one or more functions (building, testing, deploying, etc.). Each job may use a different technology or multiple technologies. The key is that the jobs are automated, efficient, and repeatable. If a job succeeds, the workflow manager triggers the next job in the pipeline. If a job fails, the workflow manager alerts developers, testers, and others so they can correct the problem as quickly as possible. Because the process is automated, errors can be found much more quickly than by running a set of processes manually. This quick identification of errors is called "fail fast" and is just as valuable in getting to the end of the pipeline.

### What does "fail fast" mean?

Part of a pipeline's job is to process changes quickly. Another is to monitor the different tasks/jobs that create the release. Since code that fails to compile or does not pass a test can hold up the pipeline, it is important that users are quickly notified of such situations. Fail fast refers to the idea of finding problems as early as possible in the pipeline process and quickly notifying users, so the problems can be fixed and the code resubmitted for another run through the pipeline. Often the pipeline process can look at the history to determine who made the change and notify that person and their team.

### Should every part of a continuous delivery pipeline be automated?

Nearly every part of a pipeline should be automated. For some parts, a spot for human intervention/interaction may make sense. An example might be user-acceptance testing (having end users try out the software and make sure it is at the level they want/expect). Another case might be deployment to production environments, where teams want more human control. And, of course, human intervention is required if the code is not correct and breaks.

With this background on what "continuous" means, let's look at the different types of continuous processes and what each means in the context of a software pipeline.

### What is "continuous integration"?

Continuous integration (CI) is the process of automatically detecting, pulling, building, and (in most cases) unit-testing source code as it is changed. CI is the part that kicks off the pipeline (although certain pre-validations, often called "pre-flight checks", are sometimes incorporated before CI).

The goal of CI is to quickly make sure a new change from a developer is good and suitable for further use in the code base.

### How does continuous integration work?

The basic idea of CI is to have an automated process monitor one or more source code repositories for changes. When a change is pushed to a repository, the process detects the change, downloads a copy, builds it, and runs any associated unit tests.

### How does continuous integration detect changes?
@ -54,27 +56,27 @@ Continuous 用于描述遵循我在此提到的许多不同流程实践。这并

* **Polling**: The monitor repeatedly asks the code management system, "Do you have anything new in the repositories I'm interested in?" When the code management system has new changes, the monitor "wakes up" and does its work to get the new code and build/test it.
* **Periodic**: The monitor is configured to kick off a build periodically, whether or not the source has changed. Ideally, if there are no changes, nothing new is built, so this doesn't add extra cost.
* **Push**: This is the reverse of the monitor checking the code management system. In this case, the code management system is configured to "push" a notification to the monitor when a change is committed into a repository. Most commonly, this is done in the form of a webhook: a hooked program that sends a notification to the monitor over the internet when new code is pushed. For this to work, the monitor must have an open port that can receive the webhook information over the network. (A small sketch of such a receiver follows this list.)
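To make the "push" option concrete, here is a hedged sketch of a tiny webhook receiver written in Go. The port, path, and the build-triggering step are assumptions for illustration and are not tied to any particular CI tool; a real monitor would also verify a shared-secret signature on the payload.

```
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func triggerBuild() {
	// Placeholder: fetch the new code, build it, and run unit tests.
	log.Println("starting build and unit tests...")
}

func main() {
	// The CI monitor listens on an open port; the code management system
	// is configured to POST a notification here when new commits arrive.
	http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		// In a real setup, verify the payload signature and parse out
		// which branch/commit changed before acting on it.
		fmt.Printf("change notification received: %d bytes\n", len(body))
		go triggerBuild() // kick off the pipeline asynchronously
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```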

### What are "pre-checks" (a.k.a. "pre-flight checks")?

Additional validations may be done before code is introduced into the repository and triggers continuous integration. These follow best practices such as test builds and code reviews. They are usually built into the development process before the code is introduced into the pipeline, but some pipelines may also include them as part of their monitored processes or workflows.

As an example, a tool called [Gerrit][2] allows for formal code review, validation, and test builds after a developer pushes code but before it is allowed into the ([Git][3] remote) repository. Gerrit sits between the developer's workspace and the Git remote repository. It "catches" pushes from the developer and can do pass/fail validations to ensure they pass before being allowed into the repository. This can include detecting the new change and kicking off a test build (a form of CI). It also allows developers to do formal code reviews at that point. In this way, there is an extra measure of confidence that the change will not break anything when it is merged into the codebase.

### What are "unit tests"?

Unit tests (also known as "commit tests") are small, focused tests written by developers to ensure new code works in isolation. "In isolation" here means not depending on, or making calls to, other code that is not directly accessible, and not depending on external data sources or other modules. If such a dependency is required for the code to run, those resources can be represented by mocks. A mock refers to a code stub that looks like the resource and can return values, but does not implement any functionality.

In most organizations, developers are responsible for creating unit tests to prove their code is correct. In fact, one model, known as test-driven development (TDD), requires unit tests to be designed first, as a basis for clearly verifying what the code is supposed to do. Because such code can change quickly and often, the tests must also execute quickly.

As this relates to the continuous integration workflow, a developer creates or updates code in their local working environment and uses unit tests to ensure the newly developed function or method works correctly. Typically, these tests take the form of assertions that a given set of inputs to a function or method produces a given set of outputs. They also generally test that error conditions are properly flagged and handled. Many useful unit testing frameworks are available, such as [JUnit][4] for Java development.
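As a small illustration of the assertion style described above, here is a sketch using Go's built-in `testing` package rather than JUnit; the `Add` function under test is invented purely for the example.

```
// add_test.go — a minimal table-driven unit test sketch.
package mathutil

import "testing"

// Add is the trivial function under test.
func Add(a, b int) int { return a + b }

// TestAdd asserts that given sets of inputs produce the expected outputs.
func TestAdd(t *testing.T) {
	cases := []struct{ a, b, want int }{
		{1, 2, 3},
		{0, 0, 0},
		{-1, 1, 0},
	}
	for _, c := range cases {
		if got := Add(c.a, c.b); got != c.want {
			t.Errorf("Add(%d, %d) = %d, want %d", c.a, c.b, got, c.want)
		}
	}
}
```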

### What is "continuous testing"?

Continuous testing refers to the practice of running automated tests of broadening scope as code goes through the continuous delivery pipeline. Unit testing is typically integrated with the build processes as part of the CI stage and focused on testing code in isolation from other code interacting with it.

Beyond that, there are various forms of testing that can or should occur. These can include:

* **Integration testing** validates that groups of components and services all work together.
* **Functional testing** validates that the result of executing functions in the product is as expected.
|
|||||||
|
|
||||||
除了测试是否通过之外,还有一些应用程序可以告诉我们测试用例执行(覆盖)的源代码行数。这是一个可以衡量代码量指标的例子。这个指标称为<ruby>代码覆盖率<rt>code-coverage</rt></ruby>,可以通过工具(例如用于 Java 的 [JaCoCo][5])进行统计。
|
除了测试是否通过之外,还有一些应用程序可以告诉我们测试用例执行(覆盖)的源代码行数。这是一个可以衡量代码量指标的例子。这个指标称为<ruby>代码覆盖率<rt>code-coverage</rt></ruby>,可以通过工具(例如用于 Java 的 [JaCoCo][5])进行统计。
|
||||||
|
|
||||||
还有很多其它类型的指标统计,例如代码行数,复杂度以及代码结构对比分析等。诸如 [SonarQube][6] 之类的工具可以检查源代码并计算这些指标。此外,用户还可以为他们可接受的“合格”范围的指标设置阈值。然后可以在管道中针对这些阈值设置一个检查,如果结果不在可接受范围内,则流程终端上。SonarQube 等应用程序具有很高的可配置性,可以设置仅检查团队感兴趣的内容。
|
还有很多其它类型的指标统计,例如代码行数、复杂度以及代码结构对比分析等。诸如 [SonarQube][6] 之类的工具可以检查源代码并计算这些指标。此外,用户还可以为他们可接受的“合格”范围的指标设置阈值。然后可以在管道中针对这些阈值设置一个检查,如果结果不在可接受范围内,则流程终端上。SonarQube 等应用程序具有很高的可配置性,可以设置仅检查团队感兴趣的内容。
|
||||||
|
|
||||||
### 什么是持续交付(continuous delivery)?
|
### 什么是“持续交付”?
|
||||||
|
|
||||||
持续交付(CD)通常是指整个流程链(管道),它自动监测源代码变更并通过构建,测试,打包和相关操作运行它们以生成可部署的版本,基本上没有任何人为干预。
|
持续交付(CD)通常是指整个流程链(管道),它自动监测源代码变更并通过构建、测试、打包和相关操作运行它们以生成可部署的版本,基本上没有任何人为干预。
|
||||||
|
|
||||||
持续交付在软件开发过程中的目标是自动化、效率、可靠性、可重复性和质量保障(通过持续测试)。
|
持续交付在软件开发过程中的目标是自动化、效率、可靠性、可重复性和质量保障(通过持续测试)。
|
||||||
|
|
||||||
持续交付包含持续集成(自动检测源代码变更,执行构建过程,运行单元测试以验证变更),持续测试(对代码运行各种测试以保障代码质量),和(可选)持续部署(通过管道发布版本自动提供给用户)。
|
持续交付包含持续集成(自动检测源代码变更、执行构建过程、运行单元测试以验证变更),持续测试(对代码运行各种测试以保障代码质量),和(可选)持续部署(通过管道发布版本自动提供给用户)。
|
||||||
|
|
||||||
### 如何在管道中识别/跟踪多个版本?
|
### 如何在管道中识别/跟踪多个版本?
|
||||||
|
|
||||||
版本控制是持续交付和管道的关键概念。持续意味着能够经常集成新代码并提供更新版本。但这并不意味着每个人都想要“最新,最好的”。对于想要开发或测试已知的稳定版本的内部团队来说尤其如此。因此,管道创建并轻松存储和访问的这些版本化对象非常重要。
|
版本控制是持续交付和管道的关键概念。持续意味着能够经常集成新代码并提供更新版本。但这并不意味着每个人都想要“最新、最好的”。对于想要开发或测试已知的稳定版本的内部团队来说尤其如此。因此,管道创建并轻松存储和访问的这些版本化对象非常重要。
|
||||||
|
|
||||||
在管道中从源代码创建的对象通常可以称为<ruby>工件<rt>artifacts</rt></ruby>。工件在构建时应该有应用于它们的版本。将版本号分配给工件的推荐策略称为<ruby>语义化版本控制<rt>semantic versioning</rt></ruby>。(这也适用于从外部源引入的依赖工件的版本。)
|
在管道中从源代码创建的对象通常可以称为<ruby>工件<rt>artifact</rt></ruby>。工件在构建时应该有应用于它们的版本。将版本号分配给工件的推荐策略称为<ruby>语义化版本控制<rt>semantic versioning</rt></ruby>。(这也适用于从外部源引入的依赖工件的版本。)
|
||||||
|
|
||||||
语义版本号有三个部分:major,minor 和 patch。(例如,1.4.3 反映了主要版本 1,次要版本 4 和补丁版本 3。)这个想法是,其中一个部分的更改表示工件中的更新级别。主要版本仅针对不兼容的 API 更改而递增。当以<ruby>向后兼容<rt>backward-compatible</rt></ruby>的方式添加功能时,次要版本会增加。当进行向后兼容的版本 bug 修复时,补丁版本会增加。这些是建议的指导方针,但只要团队在整个组织内以一致且易于理解的方式这样做,团队就可以自由地改变这种方法。例如,每次为发布完成构建时增加的数字可以放在补丁字段中。
|
语义版本号有三个部分:<ruby>主要版本<rt>major</rt></ruby>、<ruby>次要版本<rt>minor</rt></ruby> 和 <ruby>补丁版本<rt>patch</rt></ruby>。(例如,1.4.3 反映了主要版本 1,次要版本 4 和补丁版本 3。)这个想法是,其中一个部分的更改表示工件中的更新级别。主要版本仅针对不兼容的 API 更改而递增。当以<ruby>向后兼容<rt>backward-compatible</rt></ruby>的方式添加功能时,次要版本会增加。当进行向后兼容的版本 bug 修复时,补丁版本会增加。这些是建议的指导方针,但只要团队在整个组织内以一致且易于理解的方式这样做,团队就可以自由地改变这种方法。例如,每次为发布完成构建时增加的数字可以放在补丁字段中。
|
||||||
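A tiny sketch of the bump rules just described, written in Go (illustrative only; real projects usually rely on an existing semver library, and the change-type labels here are invented):

```
package semver

import "fmt"

// Version holds the three semantic-versioning components.
type Version struct{ Major, Minor, Patch int }

// Bump applies the rules described above: incompatible API changes bump the
// major version, backward-compatible features bump the minor version, and
// backward-compatible bug fixes bump the patch version.
func Bump(v Version, change string) Version {
	switch change {
	case "breaking":
		return Version{v.Major + 1, 0, 0}
	case "feature":
		return Version{v.Major, v.Minor + 1, 0}
	default: // "fix"
		return Version{v.Major, v.Minor, v.Patch + 1}
	}
}

func (v Version) String() string {
	return fmt.Sprintf("%d.%d.%d", v.Major, v.Minor, v.Patch)
}
```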

### How do you "promote" artifacts?

Teams can assign a promotion level to artifacts to indicate their suitability for testing, production, or other environments/uses. There are various approaches. Applications such as Jenkins or [Artifactory][7] can handle promotion. Or a simple scheme can be to add a label to the end of the version string. For example, `-snapshot` can indicate that the artifact was built from the latest version (snapshot) of the code. Various promotion strategies or tools can be used to "promote" the artifact to other levels, such as `-milestone` or `-production`, as a marker of the artifact's stability and readiness for release.

### How are multiple versions of artifacts stored and accessed?
|
|||||||
|
|
||||||
管道用户可以指定他们想要使用的版本,并在这些版本中使用管道。
|
管道用户可以指定他们想要使用的版本,并在这些版本中使用管道。
|
||||||
|
|
||||||
### 什么是持续部署(continuous deployment)?
|
### 什么是“持续部署”?
|
||||||
|
|
||||||
持续部署(CD)是指能够自动提供持续交付管道中发布版本给最终用户使用的想法。根据用户的安装方式,可能是在云环境中自动部署,app 升级(如手机上的应用程序),更新网站或只更新可用版本列表。
|
持续部署(CD)是指能够自动提供持续交付管道中发布版本给最终用户使用的想法。根据用户的安装方式,可能是在云环境中自动部署、app 升级(如手机上的应用程序)、更新网站或只更新可用版本列表。
|
||||||
|
|
||||||
这里的一个重点是,仅仅因为可以进行持续部署并不意味着始终部署来自管道的每组可交付成果。它实际上指,通过管道每套可交付成果都被证明是“可部署的”。这在很大程度上是由持续测试的连续级别完成的(参见本文中的持续测试部分)。
|
这里的一个重点是,仅仅因为可以进行持续部署并不意味着始终部署来自管道的每组可交付成果。它实际上指,通过管道每套可交付成果都被证明是“可部署的”。这在很大程度上是由持续测试的连续级别完成的(参见本文中的持续测试部分)。
|
||||||
|
|
||||||
@ -126,9 +128,9 @@ Continuous 用于描述遵循我在此提到的许多不同流程实践。这并
|
|||||||
|
|
||||||
由于必须回滚/撤消对所有用户的部署可能是一种代价高昂的情况(无论是技术上还是用户的感知),已经有许多技术允许“尝试”部署新功能并在发现问题时轻松“撤消”它们。这些包括:
|
由于必须回滚/撤消对所有用户的部署可能是一种代价高昂的情况(无论是技术上还是用户的感知),已经有许多技术允许“尝试”部署新功能并在发现问题时轻松“撤消”它们。这些包括:
|
||||||
|
|
||||||
#### 蓝/绿测试/部署(blue/green testing/deployments)
|
#### 蓝/绿测试/部署
|
||||||
|
|
||||||
在这种部署软件的方法中,维护了两个相同的主机环境 —— 一个 _蓝色_ 和一个 _绿色_。(颜色并不重要,仅作为标识。)对应来说,其中一个是 _生产环境_,另一个是 _预发布环境_。
|
在这种部署软件的方法中,维护了两个相同的主机环境 —— 一个“蓝色” 和一个“绿色”。(颜色并不重要,仅作为标识。)对应来说,其中一个是“生产环境”,另一个是“预发布环境”。
|
||||||
|
|
||||||
在这些实例的前面是调度系统,它们充当产品或应用程序的客户“网关”。通过将调度系统指向蓝色或绿色实例,可以将客户流量引流到期望的部署环境。通过这种方式,切换指向哪个部署实例(蓝色或绿色)对用户来说是快速,简单和透明的。
|
在这些实例的前面是调度系统,它们充当产品或应用程序的客户“网关”。通过将调度系统指向蓝色或绿色实例,可以将客户流量引流到期望的部署环境。通过这种方式,切换指向哪个部署实例(蓝色或绿色)对用户来说是快速,简单和透明的。
|
||||||
|
|
||||||
@ -136,27 +138,27 @@ Continuous 用于描述遵循我在此提到的许多不同流程实践。这并
|
|||||||
|
|
||||||
同理,如果在最新部署中发现问题并且之前的生产实例仍然可用,则简单的更改可以将客户流量引流回到之前的生产实例 —— 有效地将问题实例“下线”并且回滚到以前的版本。然后有问题的新实例可以在其它区域中修复。
|
同理,如果在最新部署中发现问题并且之前的生产实例仍然可用,则简单的更改可以将客户流量引流回到之前的生产实例 —— 有效地将问题实例“下线”并且回滚到以前的版本。然后有问题的新实例可以在其它区域中修复。
|
||||||
|
|
||||||
#### 金丝雀测试/部署(canary testing/deployment)
|
#### 金丝雀测试/部署
|
||||||
|
|
||||||
在某些情况下,通过蓝/绿发布切换整个部署可能不可行或不是期望的那样。另一种方法是为 _金丝雀_ 测试/部署。在这种模型中,一部分客户流量被重新引流到新的版本部署中。例如,新版本的搜索服务可以与当前服务的生产版本一起部署。然后,可以将 10% 的搜索查询引流到新版本,以在生产环境中对其进行测试。
|
在某些情况下,通过蓝/绿发布切换整个部署可能不可行或不是期望的那样。另一种方法是为<ruby>金丝雀<rt>canary</rt></ruby>测试/部署。在这种模型中,一部分客户流量被重新引流到新的版本部署中。例如,新版本的搜索服务可以与当前服务的生产版本一起部署。然后,可以将 10% 的搜索查询引流到新版本,以在生产环境中对其进行测试。
|
||||||
|
|
||||||
如果服务那些流量的新版本没问题,那么可能会有更多的流量会被逐渐引流过去。如果仍然没有问题出现,那么随着时间的推移,可以对新版本增量部署,直到 100% 的流量都调度到新版本。这有效地“更替”了以前版本的服务,并让新版本对所有客户生效。
|
如果服务那些流量的新版本没问题,那么可能会有更多的流量会被逐渐引流过去。如果仍然没有问题出现,那么随着时间的推移,可以对新版本增量部署,直到 100% 的流量都调度到新版本。这有效地“更替”了以前版本的服务,并让新版本对所有客户生效。
|
||||||
|
|
||||||
#### 功能开关(feature toggles)
|
#### 功能开关
|
||||||
|
|
||||||
对于可能需要轻松关掉的新功能(如果发现问题),开发人员可以添加功能开关。这是代码中的 `if-then` 软件功能开关,仅在设置数据值时才激活新代码。此数据值可以是全局可访问的位置,部署的应用程序将检查该位置是否应执行新代码。如果设置了数据值,则执行代码;如果没有,则不执行。
|
对于可能需要轻松关掉的新功能(如果发现问题),开发人员可以添加<ruby>功能开关<rt>feature toggles</rt></ruby>。这是代码中的 `if-then` 软件功能开关,仅在设置数据值时才激活新代码。此数据值可以是全局可访问的位置,部署的应用程序将检查该位置是否应执行新代码。如果设置了数据值,则执行代码;如果没有,则不执行。
|
||||||
|
|
||||||
这为开发人员提供了一个远程“终止开关”,以便在部署到生产环境后发现问题时关闭新功能。
|
这为开发人员提供了一个远程“终止开关”,以便在部署到生产环境后发现问题时关闭新功能。
|
||||||
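A minimal sketch of such an `if-then` switch in Go. The flag name and the environment-variable lookup are assumptions; real systems often fetch the value from a configuration service or database so it can be changed without redeploying.

```
package main

import (
	"fmt"
	"os"
)

// newSearchEnabled checks a globally accessible data value; here it is an
// environment variable, but it could just as well be a row in a database
// or a key in a configuration service.
func newSearchEnabled() bool {
	return os.Getenv("FEATURE_NEW_SEARCH") == "on"
}

func handleSearch(query string) {
	if newSearchEnabled() {
		fmt.Println("using the new search implementation for:", query)
		// new code path
	} else {
		fmt.Println("using the existing search implementation for:", query)
		// old, known-good code path
	}
}

func main() {
	handleSearch("kafka tutorials")
}
```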

#### Dark launch

In a dark launch, code is incrementally tested/deployed into production, but changes are not made visible to users (hence the "dark" in the name). For example, in a production release, some portion of web page queries might be redirected to a service that queries a new data source. The developers can collect this information for analysis without exposing any information about the interface, transactions, or results to users.

The idea is to get real information on how a candidate version performs under production load without impacting users or changing their experience. Over time, more load can be routed to it until either a problem is hit or the new functionality is deemed ready for everyone. Feature flags can actually be used as the mechanism for such a dark launch.

### What is "DevOps"?

[DevOps][9] is a set of ideas and recommended practices around making it easier for development and operations teams to work together on developing and releasing software. Historically, development teams created products but did not install/deploy them in a routine, repeatable way, as customers would. That set of install/deploy tasks (as well as other support tasks) was left to the operations teams to sort out late in the cycle. This often resulted in a lot of confusion and problems, since the operations team was brought in late and had to do its work in a short timeframe. Likewise, development teams were often at a disadvantage: because they had not sufficiently tested the product's install/deploy functionality, they could be surprised by problems that showed up during that process.

This frequently led to a serious disconnect and lack of cooperation between development and operations teams. The DevOps ideal is that development and operations work together in an integrated way throughout the entire development cycle, just like continuous delivery.
|
|||||||
|
|
||||||
说得更远一些,DevOps 建议实现管道的基础架构也会被视为代码。也就是说,它应该自动配置、可跟踪、易于修改,并在管道发生变化时触发新一轮运行。这可以通过将管道实现为代码来完成。
|
说得更远一些,DevOps 建议实现管道的基础架构也会被视为代码。也就是说,它应该自动配置、可跟踪、易于修改,并在管道发生变化时触发新一轮运行。这可以通过将管道实现为代码来完成。
|
||||||
|
|
||||||
### 什么是管道即代码(pipeline-as-code)?
|
### 什么是“管道即代码”?
|
||||||
|
|
||||||
<ruby>管道即代码<rt>pipeline-as-code</rt></ruby>是通过编写代码创建管道作业/任务的通用术语,就像开发人员编写代码一样。它的目标是将管道实现表示为代码,以便它可以与代码一起存储、评审、跟踪,如果出现问题并且必须终止管道,则可以轻松地重建。有几个工具允许这样做,如 [Jenkins 2][1]。
|
<ruby>管道即代码<rt>pipeline-as-code</rt></ruby>是通过编写代码创建管道作业/任务的通用术语,就像开发人员编写代码一样。它的目标是将管道实现表示为代码,以便它可以与代码一起存储、评审、跟踪,如果出现问题并且必须终止管道,则可以轻松地重建。有几个工具允许这样做,如 [Jenkins 2][1]。
|
||||||
|
|
||||||
### DevOps 如何影响生产软件的基础设施?
|
### DevOps 如何影响生产软件的基础设施?
|
||||||
|
|
||||||
传统意义上,管道中使用的各个硬件系统都有配套的软件(操作系统,应用程序,开发工具等)。在极端情况下,每个系统都是手工设置来定制的。这意味着当系统出现问题或需要更新时,这通常也是一项自定义任务。这种方法违背了持续交付的基本理念,即具有易于重现和可跟踪的环境。
|
传统意义上,管道中使用的各个硬件系统都有配套的软件(操作系统、应用程序、开发工具等)。在极端情况下,每个系统都是手工设置来定制的。这意味着当系统出现问题或需要更新时,这通常也是一项自定义任务。这种方法违背了持续交付的基本理念,即具有易于重现和可跟踪的环境。
|
||||||
|
|
||||||
多年来,很多应用被开发用于标准化交付(安装和配置)系统。同样,<ruby>虚拟机<rt>virtual machine</rt></ruby>被开发为模拟在其它计算机之上运行的计算机程序。这些 VM 要有管理程序才能在底层主机系统上运行,并且它们需要自己的操作系统副本才能运行。
|
多年来,很多应用被开发用于标准化交付(安装和配置)系统。同样,<ruby>虚拟机<rt>virtual machine</rt></ruby>被开发为模拟在其它计算机之上运行的计算机程序。这些 VM 要有管理程序才能在底层主机系统上运行,并且它们需要自己的操作系统副本才能运行。
|
||||||
|
|
||||||
@ -191,7 +193,7 @@ via: https://opensource.com/article/18/8/what-cicd
|
|||||||
作者:[Brent Laster][a]
|
作者:[Brent Laster][a]
|
||||||
选题:[lujun9972](https://github.com/lujun9972)
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
译者:[pityonline](https://github.com/pityonline)
|
译者:[pityonline](https://github.com/pityonline)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[wxy](https://github.com/wxy)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
91
sources/talk/20180719 Finding Jobs in Software.md
Normal file
91
sources/talk/20180719 Finding Jobs in Software.md
Normal file
@ -0,0 +1,91 @@
|
|||||||
|
translating by lujun9972
|
||||||
|
Finding Jobs in Software
|
||||||
|
======
|
||||||
|
|
||||||
|
A [PDF of this article][1] is available.
|
||||||
|
|
||||||
|
I was back home in Lancaster last week, chatting with a [friend from grad school][2] who’s remained in academia, and naturally we got to talking about what advice he could give his computer science students to better prepare them for their probable future careers.
|
||||||
|
|
||||||
|
In some later follow-up emails we got to talking about how engineers find jobs. I’ve fielded this question about a dozen times over the last couple years, so I thought it was about time to crystallize it into a blog post for future linking.
|
||||||
|
|
||||||
|
Here are some strategies for finding jobs, ordered roughly from most to least useful:
|
||||||
|
|
||||||
|
### Friend-of-a-friend networking
|
||||||
|
|
||||||
|
Many of the best jobs never make it to the open market at all, and it’s all about who you know. This makes sense for employers, since good engineers are hard to find and a reliable reference can be invaluable.
|
||||||
|
|
||||||
|
In the case of my current job at Iterable, for example, a mutual colleague from thoughtbot (a previous employer) suggested that I should talk to Iterable’s VP of engineering, since he’d worked with both of us and thought we’d get along well. We did, and I liked the team, so I went through the interview process and took the job.
|
||||||
|
|
||||||
|
Like many companies, thoughtbot has an alumni Slack group with a `#job-board` channel. Those sorts of semi-formal corporate alumni networks can definitely be useful, but you’ll probably find yourself relying more on individual connections.
|
||||||
|
|
||||||
|
“Networking” isn’t a dirty word, and it’s not about handing out business cards at a hotel bar. It’s about getting to know people in a friendly and sincere way, being interested in them, and helping them out (by, say, writing a lengthy blog post about how their students might find jobs). I’m not the type to throw around words like karma, but if I were, I would.
|
||||||
|
|
||||||
|
Go to (and speak at!) [meetups][3], offer help and advice when you can, and keep in touch with friends and ex-colleagues. In a couple of years you’ll have a healthy network. Easy-peasy.
|
||||||
|
|
||||||
|
This strategy doesn’t usually work at the beginning of a career, of course, but new grads and students should know that it’s eventually how things happen.
|
||||||
|
|
||||||
|
### Applying directly to specific companies
|
||||||
|
|
||||||
|
I keep a text file of companies where I might want to work. As I come across companies that catch my eye, I add ‘em to the list. When I’m on the hunt for a new job I just consult my list.
|
||||||
|
|
||||||
|
Lots of things might convince me to add a company to the list. They might have an especially appealing mission or product, use some particular technology, or employ some specific people that I’d like to work with and learn from.
|
||||||
|
|
||||||
|
One shockingly good heuristic that identifies a great workplace is whether a company sponsors or organizes meetups, and specifically if they sponsor groups related to minorities in tech. Plenty of great companies don’t do that, and they still may be terrific, but if they do it’s an extremely good sign.
|
||||||
|
|
||||||
|
### Job boards
|
||||||
|
|
||||||
|
I generally don’t use job boards, myself, because I find networking and targeted applications to be more valuable.
|
||||||
|
|
||||||
|
The big sites like Indeed and Dice are rarely useful. While some genuinely great companies do cross-post jobs there, there are so many atrocious jobs mixed in that I don’t bother with them.
|
||||||
|
|
||||||
|
However, smaller and more targeted job boards can be really handy. Someone has created a job site for any given technology (language, framework, database, whatever). If you’re really interested in working with a specific tool or in a particular market niche, it might be worthwhile for you to track down the appropriate board.
|
||||||
|
|
||||||
|
Similarly, if you’re interested in remote work, there are a few boards that cater specifically to that. [We Work Remotely][4] is a prominent and reputable one.
|
||||||
|
|
||||||
|
The enormously popular tech news site [Hacker News][5] posts a monthly “Who’s Hiring?” thread ([an example][6]). HN focuses mainly on startups and is almost adorably obsessed with trends, tech-wise, so it’s a thoroughly biased sample, but it’s still a huge selection of relatively high-quality jobs. Browsing it can also give you an idea of what technologies are currently in vogue. Some folks have also built [sites that make it easier to filter][7] those listings.
|
||||||
|
|
||||||
|
### Recruiters
|
||||||
|
|
||||||
|
These are the folks that message you on LinkedIn. Recruiters fall into two categories: internal and external.
|
||||||
|
|
||||||
|
An internal recruiter is an employee of a specific company and hires engineers to work for that company. They’re almost invariably non-technical, but they often have a fairly clear idea of what technical skills they’re looking for. They have no idea who you are, or what your goals are, but they’re encouraged to find a good fit for the company and are generally harmless.
|
||||||
|
|
||||||
|
It’s normal to work with an internal recruiter as part of the application process at a software company, especially a larger one.
|
||||||
|
|
||||||
|
An external recruiter works independently or for an agency. They’re market makers; they have a stable of companies who have contracted with them to find employees, and they get a placement fee for every person that one of those companies hires. As such, they have incentives to make as many matches as possible as quickly as possible, and they rarely have to deal with the fallout if the match isn’t a good one.
|
||||||
|
|
||||||
|
In my experience they add nothing to the job search process and, at best, just gum up the works as unnecessary middlemen. Less reputable ones may edit your resume without your approval, forward it along to companies that you’d never want to work with, and otherwise mangle your reputation. I avoid them.
|
||||||
|
|
||||||
|
Helpful and ethical external recruiters are a bit like UFOs. I’m prepared to acknowledge that they might, possibly, exist, but I’ve never seen one myself or spoken directly with anyone who’s encountered one, and I’ve only heard about them through confusing and doubtful chains of testimonials (and such testimonials usually make me question the testifier more than my assumptions).
|
||||||
|
|
||||||
|
### University career services
|
||||||
|
|
||||||
|
I’ve never found these to be of any use. The software job market is extraordinarily specialized, and it’s virtually impossible for a career services employee (who needs to be able to place every sort of student in every sort of job) to be familiar with it.
|
||||||
|
|
||||||
|
A recruiter, whose purview is limited to the software world, will often try to estimate good matches by looking at resume keywords like “Python” or “natural language processing.” A university career services employee needs to rely on even more amorphous keywords like “software” or “programming.” It’s hard for a non-technical person to distinguish a job engineering compilers from one hooking up printers.
|
||||||
|
|
||||||
|
Exceptions exist, of course (MIT and Stanford, for example, have predictably excellent software-specific career services), but they’re thoroughly exceptional.
|
||||||
|
|
||||||
|
There are plenty of other ways to find jobs, of course (job fairs at good industrial conferences—like [PyCon][8] or [Strange Loop][9]—aren’t bad, for example, though I’ve never taken a job through one). But the avenues above are the most common ways that job-finding happens. Good luck!
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://harryrschwartz.com/2018/07/19/finding-jobs-in-software.html
|
||||||
|
|
||||||
|
作者:[Harry R. Schwartz][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[lujun9972](https://github.com/lujun9972)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://harryrschwartz.com/
|
||||||
|
[1]:https://harryrschwartz.com/assets/documents/articles/finding-jobs-in-software.pdf
|
||||||
|
[2]:https://www.fandm.edu/ed-novak
|
||||||
|
[3]:https://meetup.com
|
||||||
|
[4]:https://weworkremotely.com
|
||||||
|
[5]:https://news.ycombinator.com
|
||||||
|
[6]:https://news.ycombinator.com/item?id=13764728
|
||||||
|
[7]:https://www.hnhiring.com
|
||||||
|
[8]:https://us.pycon.org
|
||||||
|
[9]:https://thestrangeloop.com
|
@ -0,0 +1,116 @@
|
|||||||
|
Debian Turns 25! Here are Some Interesting Facts About Debian Linux
|
||||||
|
======
|
||||||
|
One of the oldest Linux distribution still in development, Debian has just turned 25. Let’s have a look at some interesting facts about this awesome FOSS project.
|
||||||
|
|
||||||
|
### 10 Interesting facts about Debian Linux
|
||||||
|
|
||||||
|
![Interesting facts about Debian Linux][1]
|
||||||
|
|
||||||
|
The facts presented here have been collected from various sources available from the internet. They are true to my knowledge, but in case of any error, please remind me to update the article.
|
||||||
|
|
||||||
|
#### 1\. One of the oldest Linux distributions still under active development
|
||||||
|
|
||||||
|
[Debian project][2] was announced on 16th August 1993 by Ian Murdock, Debian Founder. Like Linux creator [Linus Torvalds][3], Ian was a college student when he announced Debian project.
|
||||||
|
|
||||||
|
![](https://farm6.staticflickr.com/5710/20006308374_7f51ae2a5c_z.jpg)
|
||||||
|
|
||||||
|
#### 2\. Some people get tattoo while some name their project after their girlfriend’s name
|
||||||
|
|
||||||
|
The project was named by combining the name of Ian and his then-girlfriend Debra Lynn. Ian and Debra got married and had three children. Debra and Ian got divorced in 2008.
|
||||||
|
|
||||||
|
#### 3\. Ian Murdock: The Maverick behind the creation of Debian project
|
||||||
|
|
||||||
|
![Debian Founder Ian Murdock][4]
|
||||||
|
Ian Murdock
|
||||||
|
|
||||||
|
[Ian Murdock][5] led the Debian project from August 1993 until March 1996. He shaped Debian into a community project based on the principals of Free Software. The [Debian Manifesto][6] and the [Debian Social Contract][7] are still governing the project.
|
||||||
|
|
||||||
|
He founded a commercial Linux company called [Progeny Linux Systems][8] and worked for a number of Linux related companies such as Sun Microsystems, Linux Foundation and Docker.
|
||||||
|
|
||||||
|
Sadly, [Ian committed suicide in December 2015][9]. His contribution to Debian is certainly invaluable.
|
||||||
|
|
||||||
|
#### 4\. Debian is a community project in the true sense
|
||||||
|
|
||||||
|
Debian is a community based project in true sense. No one ‘owns’ Debian. Debian is being developed by volunteers from all over the world. It is not a commercial project, backed by corporates like many other Linux distributions.
|
||||||
|
|
||||||
|
Debian Linux distribution is composed of Free Software only. It’s one of the few Linux distributions that is true to the spirit of [Free Software][10] and takes proud in being called a GNU/Linux distribution.
|
||||||
|
|
||||||
|
Debian has its non-profit organization called [Software in Public Interest][11] (SPI). Along with Debian, SPI supports many other open source projects financially.
|
||||||
|
|
||||||
|
#### 5\. Debian and its 3 branches
|
||||||
|
|
||||||
|
Debian has three branches or versions: Debian Stable, Debian Unstable (Sid) and Debian Testing.
|
||||||
|
|
||||||
|
Debian Stable, as the name suggests, is the stable branch that has all the software and packages well tested to give you a rock solid stable system. Since it takes time before a well-tested software lands in the stable branch, Debian Stable often contains older versions of programs and hence people joke that Debian Stable means stale.
|
||||||
|
|
||||||
|
[Debian Unstable][12] codenamed Sid is the version where all the development of Debian takes place. This is where the new packages first land or developed. After that, these changes are propagated to the testing version.
|
||||||
|
|
||||||
|
[Debian Testing][13] is the next release after the current stable release. If the current stable release is N, Debian testing would be the N+1 release. The packages from Debian Unstable are tested in this version. After all the new changes are well tested, Debian Testing is then ‘promoted’ as the new Stable version.
|
||||||
|
|
||||||
|
There is no strict release schedule for Debian.
|
||||||
|
|
||||||
|
#### 7\. There was no Debian 1.0 release
|
||||||
|
|
||||||
|
Debian 1.0 was never released. The CD vendor, InfoMagic, accidentally shipped a development release of Debian and entitled it 1.0 in 1996. To prevent confusion between the CD version and the actual Debian release, the Debian Project renamed its next release to “Debian 1.1”.
|
||||||
|
|
||||||
|
#### 8\. Debian releases are codenamed after Toy Story characters
|
||||||
|
|
||||||
|
![Toy Story Characters][14]
|
||||||
|
|
||||||
|
Debian releases are codenamed after the characters from Pixar’s hit animation movie series [Toy Story][15].
|
||||||
|
|
||||||
|
Debian 1.1 was the first release with a codename. It was named Buzz after the Toy Story character Buzz Lightyear.
|
||||||
|
|
||||||
|
It was in 1996 and [Bruce Perens][16] had taken over leadership of the Project from Ian Murdock. Bruce was working at Pixar at the time.
|
||||||
|
|
||||||
|
This trend continued and all the subsequent releases had codenamed after Toy Story characters. For example, the current stable release is Stretch while the upcoming release has been codenamed Buster.
|
||||||
|
|
||||||
|
The unstable Debian version is codenamed Sid. This character in Toy Story is a kid with emotional problems and he enjoys breaking toys. This is symbolic in the sense that Debian Unstable might break your system with untested packages.
|
||||||
|
|
||||||
|
#### 9\. Debian also has a BSD ditribution
|
||||||
|
|
||||||
|
Debian is not limited to Linux. Debian also has a distribution based on FreeBSD kernel. It is called [Debian GNU/kFreeBSD][17].
|
||||||
|
|
||||||
|
#### 10\. Google uses Debian
|
||||||
|
|
||||||
|
[Google uses Debian][18] as its in-house development platform. Earlier, Google used a customized version of Ubuntu as its development platform. Recently they opted for Debian based gLinux.
|
||||||
|
|
||||||
|
#### Happy 25th birthday Debian
|
||||||
|
|
||||||
|
![Happy 25th birthday Debian][19]
|
||||||
|
|
||||||
|
I hope you liked these little facts about Debian. Stuff like these are reasons why people love Debian.
|
||||||
|
|
||||||
|
I wish a very happy 25th birthday to Debian. Please continue to be awesome. Cheers :)
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://itsfoss.com/debian-facts/
|
||||||
|
|
||||||
|
作者:[Abhishek Prakash][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]: https://itsfoss.com/author/abhishek/
|
||||||
|
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/Interesting-facts-about-debian.jpeg
|
||||||
|
[2]:https://www.debian.org
|
||||||
|
[3]:https://itsfoss.com/linus-torvalds-facts/
|
||||||
|
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/ian-murdock.jpg
|
||||||
|
[5]:https://en.wikipedia.org/wiki/Ian_Murdock
|
||||||
|
[6]:https://www.debian.org/doc/manuals/project-history/ap-manifesto.en.html
|
||||||
|
[7]:https://www.debian.org/social_contract
|
||||||
|
[8]:https://en.wikipedia.org/wiki/Progeny_Linux_Systems
|
||||||
|
[9]:https://itsfoss.com/ian-murdock-dies-mysteriously/
|
||||||
|
[10]:https://www.fsf.org/
|
||||||
|
[11]:https://www.spi-inc.org/
|
||||||
|
[12]:https://www.debian.org/releases/sid/
|
||||||
|
[13]:https://www.debian.org/releases/testing/
|
||||||
|
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/toy-story-characters.jpeg
|
||||||
|
[15]:https://en.wikipedia.org/wiki/Toy_Story_(franchise)
|
||||||
|
[16]:https://perens.com/about-bruce-perens/
|
||||||
|
[17]:https://wiki.debian.org/Debian_GNU/kFreeBSD
|
||||||
|
[18]:https://itsfoss.com/goobuntu-glinux-google/
|
||||||
|
[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/happy-25th-birthday-Debian.jpeg
|
@ -1,601 +0,0 @@
|
|||||||
Translating by DavidChenLiang
|
|
||||||
|
|
||||||
The evolution of package managers
|
|
||||||
======
|
|
||||||
|
|
||||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY)
|
|
||||||
|
|
||||||
Every computerized device uses some form of software to perform its intended tasks. In the early days of software, products were stringently tested for bugs and other defects. For the last decade or so, software has been released via the internet with the intent that any bugs would be fixed by applying new versions of the software. In some cases, each individual application has its own updater. In others, it is left up to the user to figure out how to obtain and upgrade software.
|
|
||||||
|
|
||||||
Linux adopted early the practice of maintaining a centralized location where users could find and install software. In this article, I'll discuss the history of software installation on Linux and how modern operating systems are kept up to date against the never-ending torrent of [CVEs][1].
|
|
||||||
|
|
||||||
### How was software on Linux installed before package managers?
|
|
||||||
|
|
||||||
Historically, software was provided either via FTP or mailing lists (eventually this distribution would grow to include basic websites). Only a few small files contained the instructions to create a binary (normally in a tarfile). You would untar the files, read the readme, and as long as you had GCC or some other form of C compiler, you would then typically run a `./configure` script with some list of attributes, such as pathing to library files, location to create new binaries, etc. In addition, the `configure` process would check your system for application dependencies. If any major requirements were missing, the configure script would exit and you could not proceed with the installation until all the dependencies were met. If the configure script completed successfully, a `Makefile` would be created.
|
|
||||||
|
|
||||||
Once a `Makefile` existed, you would then proceed to run the `make` command (this command is provided by whichever compiler you were using). The `make` command has a number of options called make flags, which help optimize the resulting binaries for your system. In the earlier days of computing, this was very important because hardware struggled to keep up with modern software demands. Today, compilation options can be much more generic as most hardware is more than adequate for modern software.
|
|
||||||
|
|
||||||
Finally, after the `make` process had been completed, you would need to run `make install` (or `sudo make install`) in order to actually install the software. As you can imagine, doing this for every single piece of software was time-consuming and tedious—not to mention the fact that updating software was a complicated and potentially very involved process.
|
|
||||||
|
|
||||||
### What is a package?
|
|
||||||
|
|
||||||
Packages were invented to combat this complexity. Packages collect multiple data files together into a single archive file for easier portability and storage, or simply compress files to reduce storage space. The binaries included in a package are precompiled with according to the sane defaults the developer chosen. Packages also contain metadata, such as the software's name, a description of its purpose, a version number, and a list of dependencies necessary for the software to run properly.
|
|
||||||
|
|
||||||
Several flavors of Linux have created their own package formats. Some of the most commonly used package formats include:
|
|
||||||
|
|
||||||
* .deb: This package format is used by Debian, Ubuntu, Linux Mint, and several other derivatives. It was the first package type to be created.
|
|
||||||
* .rpm: This package format was originally called Red Hat Package Manager. It is used by Red Hat, Fedora, SUSE, and several other smaller distributions.
|
|
||||||
* .tar.xz: While it is just a compressed tarball, this is the format that Arch Linux uses.
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
While packages themselves don't manage dependencies directly, they represented a huge step forward in Linux software management.
|
|
||||||
|
|
||||||
### What is a software repository?
|
|
||||||
|
|
||||||
A few years ago, before the proliferation of smartphones, the idea of a software repository was difficult for many users to grasp if they were not involved in the Linux ecosystem. To this day, most Windows users still seem to be hardwired to open a web browser to search for and install new software. However, those with smartphones have gotten used to the idea of a software "store." The way smartphone users obtain software and the way package managers work are not dissimilar. While there have been several attempts at making an attractive UI for software repositories, the vast majority of Linux users still use the command line to install packages. Software repositories are a centralized listing of all of the available software for any repository the system has been configured to use. Below are some examples of searching a repository for a specifc package (note that these have been truncated for brevity):
|
|
||||||
|
|
||||||
Arch Linux with aurman
|
|
||||||
```
|
|
||||||
user@arch ~ $ aurman -Ss kate
|
|
||||||
|
|
||||||
extra/kate 18.04.2-2 (kde-applications kdebase)
|
|
||||||
Advanced Text Editor
|
|
||||||
aur/kate-root 18.04.0-1 (11, 1.139399)
|
|
||||||
Advanced Text Editor, patched to be able to run as root
|
|
||||||
aur/kate-git r15288.15d26a7-1 (1, 1e-06)
|
|
||||||
An advanced editor component which is used in numerous KDE applications requiring a text editing component
|
|
||||||
```
|
|
||||||
|
|
||||||
CentOS 7 using YUM
|
|
||||||
```
|
|
||||||
[user@centos ~]$ yum search kate
|
|
||||||
|
|
||||||
kate-devel.x86_64 : Development files for kate
|
|
||||||
kate-libs.x86_64 : Runtime files for kate
|
|
||||||
kate-part.x86_64 : Kate kpart plugin
|
|
||||||
```
|
|
||||||
|
|
||||||
Ubuntu using APT
|
|
||||||
```
|
|
||||||
user@ubuntu ~ $ apt search kate
|
|
||||||
Sorting... Done
|
|
||||||
Full Text Search... Done
|
|
||||||
|
|
||||||
kate/xenial 4:15.12.3-0ubuntu2 amd64
|
|
||||||
powerful text editor
|
|
||||||
|
|
||||||
kate-data/xenial,xenial 4:4.14.3-0ubuntu4 all
|
|
||||||
shared data files for Kate text editor
|
|
||||||
|
|
||||||
kate-dbg/xenial 4:15.12.3-0ubuntu2 amd64
|
|
||||||
debugging symbols for Kate
|
|
||||||
|
|
||||||
kate5-data/xenial,xenial 4:15.12.3-0ubuntu2 all
|
|
||||||
shared data files for Kate text editor
|
|
||||||
```
|
|
||||||
|
|
||||||
### What are the most prominent package managers?
|
|
||||||
|
|
||||||
As suggested in the above output, package managers are used to interact with software repositories. The following is a brief overview of some of the most prominent package managers.
|
|
||||||
|
|
||||||
#### RPM-based package managers
|
|
||||||
|
|
||||||
Updating RPM-based systems, particularly those based on Red Hat technologies, has a very interesting and detailed history. In fact, the current versions of [yum][2] (for enterprise distributions) and [DNF][3] (for community) combine several open source projects to provide their current functionality.
|
|
||||||
|
|
||||||
Initially, Red Hat used a package manager called [RPM][4] (Red Hat Package Manager), which is still in use today. However, its primary use is to install RPMs, which you have locally, not to search software repositories. The package manager named `up2date` was created to inform users of updates to packages and enable them to search remote repositories and easily install dependencies. While it served its purpose, some community members felt that `up2date` had some significant shortcomings.
|
|
||||||
|
|
||||||
The current incantation of yum came from several different community efforts. Yellowdog Updater (YUP) was developed in 1999-2001 by folks at Terra Soft Solutions as a back-end engine for a graphical installer of [Yellow Dog Linux][5]. Duke University liked the idea of YUP and decided to improve upon it. They created [Yellowdog Updater, Modified (yum)][6] which was eventually adapted to help manage the university's Red Hat Linux systems. Yum grew in popularity, and by 2005 it was estimated to be used by more than half of the Linux market. Today, almost every distribution of Linux that uses RPMs uses yum for package management (with a few notable exceptions).
|
|
||||||
|
|
||||||
#### Working with yum
|
|
||||||
|
|
||||||
In order for yum to download and install packages out of an internet repository, files must be located in `/etc/yum.repos.d/` and they must have the extension `.repo`. Here is an example repo file:
|
|
||||||
```
|
|
||||||
[local_base]
|
|
||||||
name=Base CentOS (local)
|
|
||||||
baseurl=http://7-repo.apps.home.local/yum-repo/7/
|
|
||||||
enabled=1
|
|
||||||
gpgcheck=0
|
|
||||||
```
|
|
||||||
|
|
||||||
This is for one of my local repositories, which explains why the GPG check is off. If this check was on, each package would need to be signed with a cryptographic key and a corresponding key would need to be imported into the system receiving the updates. Because I maintain this repository myself, I trust the packages and do not bother signing them.
|
|
||||||
|
|
||||||
Once a repository file is in place, you can start installing packages from the remote repository. The most basic command is `yum update`, which will update every package currently installed. This does not require a specific step to refresh the information about repositories; this is done automatically. A sample of the command is shown below:
|
|
||||||
```
|
|
||||||
[user@centos ~]$ sudo yum update
|
|
||||||
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
|
|
||||||
local_base | 3.6 kB 00:00:00
|
|
||||||
local_epel | 2.9 kB 00:00:00
|
|
||||||
local_rpm_forge | 1.9 kB 00:00:00
|
|
||||||
local_updates | 3.4 kB 00:00:00
|
|
||||||
spideroak-one-stable | 2.9 kB 00:00:00
|
|
||||||
zfs | 2.9 kB 00:00:00
|
|
||||||
(1/6): local_base/group_gz | 166 kB 00:00:00
|
|
||||||
(2/6): local_updates/primary_db | 2.7 MB 00:00:00
|
|
||||||
(3/6): local_base/primary_db | 5.9 MB 00:00:00
|
|
||||||
(4/6): spideroak-one-stable/primary_db | 12 kB 00:00:00
|
|
||||||
(5/6): local_epel/primary_db | 6.3 MB 00:00:00
|
|
||||||
(6/6): zfs/x86_64/primary_db | 78 kB 00:00:00
|
|
||||||
local_rpm_forge/primary_db | 125 kB 00:00:00
|
|
||||||
Determining fastest mirrors
|
|
||||||
Resolving Dependencies
|
|
||||||
--> Running transaction check
|
|
||||||
```
|
|
||||||
|
|
||||||
If you are sure you want yum to execute any command without stopping for input, you can put the `-y` flag in the command, such as `yum update -y`.
|
|
||||||
|
|
||||||
Installing a new package is just as easy. First, search for the name of the package with `yum search`:
|
|
||||||
```
|
|
||||||
[user@centos ~]$ yum search kate
|
|
||||||
|
|
||||||
artwiz-aleczapka-kates-fonts.noarch : Kates font in Artwiz family
|
|
||||||
ghc-highlighting-kate-devel.x86_64 : Haskell highlighting-kate library development files
|
|
||||||
kate-devel.i686 : Development files for kate
|
|
||||||
kate-devel.x86_64 : Development files for kate
|
|
||||||
kate-libs.i686 : Runtime files for kate
|
|
||||||
kate-libs.x86_64 : Runtime files for kate
|
|
||||||
kate-part.i686 : Kate kpart plugin
|
|
||||||
```
|
|
||||||
|
|
||||||
Once you have the name of the package, you can simply install the package with `sudo yum install kate-devel -y`. If you installed a package you no longer need, you can remove it with `sudo yum remove kate-devel -y`. By default, yum will remove the package plus its dependencies.
|
|
||||||
|
|
||||||
There may be times when you do not know the name of the package, but you know the name of the utility. For example, suppose you are looking for the utility `updatedb`, which creates/updates the database used by the `locate` command. Attempting to install `updatedb` returns the following results:
|
|
||||||
```
|
|
||||||
[user@centos ~]$ sudo yum install updatedb
|
|
||||||
Loaded plugins: fastestmirror, langpacks
|
|
||||||
Loading mirror speeds from cached hostfile
|
|
||||||
No package updatedb available.
|
|
||||||
Error: Nothing to do
|
|
||||||
```
|
|
||||||
|
|
||||||
You can find out what package the utility comes from by running:
|
|
||||||
```
|
|
||||||
[user@centos ~]$ yum whatprovides *updatedb
|
|
||||||
Loaded plugins: fastestmirror, langpacks
|
|
||||||
Loading mirror speeds from cached hostfile
|
|
||||||
|
|
||||||
bacula-director-5.2.13-23.1.el7.x86_64 : Bacula Director files
|
|
||||||
Repo : local_base
|
|
||||||
Matched from:
|
|
||||||
Filename : /usr/share/doc/bacula-director-5.2.13/updatedb
|
|
||||||
|
|
||||||
mlocate-0.26-8.el7.x86_64 : An utility for finding files by name
|
|
||||||
Repo : local_base
|
|
||||||
Matched from:
|
|
||||||
Filename : /usr/bin/updatedb
|
|
||||||
```
|
|
||||||
|
|
||||||
The reason I have used an asterisk `*` in front of the command is because `yum whatprovides` uses the path to the file in order to make a match. Since I was not sure where the file was located, I used an asterisk to indicate any path.
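Conversely, if you do know the full path, you can pass it directly and skip the wildcard, which should return only the package that actually owns the file:

```
[user@centos ~]$ yum whatprovides /usr/bin/updatedb
```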
|
|
||||||
|
|
||||||
There are, of course, many more options available to yum. I encourage you to view the man page for yum for additional options.
|
|
||||||
|
|
||||||
[Dandified Yum (DNF)][7] is a newer iteration on yum. Introduced in Fedora 18, it has not yet been adopted in the enterprise distributions, and as such is predominantly used in Fedora (and derivatives). Its usage is almost exactly the same as that of yum, but it was built to address poor performance, undocumented APIs, slow/broken dependency resolution, and occasional high memory usage. DNF is meant as a drop-in replacement for yum, and therefore I won't repeat the commands—wherever you would use `yum`, simply substitute `dnf`.
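For reference, the `yum` commands shown above map directly onto `dnf`. On a Fedora system the equivalents would look like this (assuming the same `kate-devel` package is available in your configured repositories):

```
$ sudo dnf update
$ dnf search kate
$ sudo dnf install kate-devel -y
$ sudo dnf remove kate-devel -y
```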
|
|
||||||
|
|
||||||
#### Working with Zypper
|
|
||||||
|
|
||||||
[Zypper][8] is another package manager meant to help manage RPMs. This package manager is most commonly associated with [SUSE][9] (and [openSUSE][10]) but has also seen adoption by [MeeGo][11], [Sailfish OS][12], and [Tizen][13]. It was originally introduced in 2006 and has been iterated upon ever since. There is not a whole lot to say other than Zypper is used as the back end for the system administration tool [YaST][14] and some users find it to be faster than yum.
|
|
||||||
|
|
||||||
Zypper's usage is very similar to that of yum. To search for, update, install or remove a package, simply use the following:
|
|
||||||
```
|
|
||||||
zypper search kate
|
|
||||||
zypper update
|
|
||||||
zypper install kate
|
|
||||||
zypper remove kate
|
|
||||||
```
|
|
||||||
Some major differences come into play in how repositories are added to the system with `zypper`. Unlike the package managers discussed above, `zypper` adds repositories using the package manager itself. The most common way is via a URL, but `zypper` also supports importing from repo files.
|
|
||||||
```
|
|
||||||
suse:~ # zypper addrepo http://download.videolan.org/pub/vlc/SuSE/15.0 vlc
|
|
||||||
Adding repository 'vlc' [done]
|
|
||||||
Repository 'vlc' successfully added
|
|
||||||
|
|
||||||
Enabled : Yes
|
|
||||||
Autorefresh : No
|
|
||||||
GPG Check : Yes
|
|
||||||
URI : http://download.videolan.org/pub/vlc/SuSE/15.0
|
|
||||||
Priority : 99
|
|
||||||
```
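As mentioned above, `zypper` can also ingest `.repo` files directly. A hedged example, assuming you have already downloaded a repository definition to a hypothetical path such as `/tmp/vlc.repo`:

```
suse:~ # zypper addrepo /tmp/vlc.repo
```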
|
|
||||||
|
|
||||||
You remove repositories in a similar manner:
|
|
||||||
```
|
|
||||||
suse:~ # zypper removerepo vlc
|
|
||||||
Removing repository 'vlc' ...................................[done]
|
|
||||||
Repository 'vlc' has been removed.
|
|
||||||
```
|
|
||||||
|
|
||||||
Use the `zypper repos` command to see the status of the repositories on your system:
|
|
||||||
```
|
|
||||||
suse:~ # zypper repos
|
|
||||||
Repository priorities are without effect. All enabled repositories share the same priority.
|
|
||||||
|
|
||||||
# | Alias | Name | Enabled | GPG Check | Refresh
|
|
||||||
---|---------------------------|-----------------------------------------|---------|-----------|--------
|
|
||||||
1 | repo-debug | openSUSE-Leap-15.0-Debug | No | ---- | ----
|
|
||||||
2 | repo-debug-non-oss | openSUSE-Leap-15.0-Debug-Non-Oss | No | ---- | ----
|
|
||||||
3 | repo-debug-update | openSUSE-Leap-15.0-Update-Debug | No | ---- | ----
|
|
||||||
4 | repo-debug-update-non-oss | openSUSE-Leap-15.0-Update-Debug-Non-Oss | No | ---- | ----
|
|
||||||
5 | repo-non-oss | openSUSE-Leap-15.0-Non-Oss | Yes | ( p) Yes | Yes
|
|
||||||
6 | repo-oss | openSUSE-Leap-15.0-Oss | Yes | ( p) Yes | Yes
|
|
||||||
```
|
|
||||||
|
|
||||||
`zypper` even has a similar ability to determine which package contains a given file or binary. Unlike YUM, it uses a hyphen in the command (although this method of searching is deprecated):
|
|
||||||
```
|
|
||||||
localhost:~ # zypper what-provides kate
|
|
||||||
Command 'what-provides' is replaced by 'search --provides --match-exact'.
|
|
||||||
See 'help search' for all available options.
|
|
||||||
Loading repository data...
|
|
||||||
Reading installed packages...
|
|
||||||
|
|
||||||
S | Name | Summary | Type
|
|
||||||
---|------|----------------------|------------
|
|
||||||
i+ | Kate | Advanced Text Editor | application
|
|
||||||
i | kate | Advanced Text Editor | package
|
|
||||||
```
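As the warning in the output indicates, the non-deprecated way to ask the same question is:

```
localhost:~ # zypper search --provides --match-exact kate
```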
|
|
||||||
|
|
||||||
As with YUM and DNF, Zypper has a much richer feature set than covered here. Please consult with the official documentation for more in-depth information.
|
|
||||||
|
|
||||||
#### Debian-based package managers
|
|
||||||
|
|
||||||
Debian is one of the oldest Linux distributions currently maintained, and its system is very similar to RPM-based systems. It uses `.deb` packages, which can be managed by a tool called dpkg. dpkg is very similar to rpm in that it was designed to manage packages that are available locally. It does no dependency resolution (although it does dependency checking) and has no reliable way to interact with remote repositories. In order to improve the user experience and ease of use, the Debian project commissioned a project called Deity. This codename was eventually abandoned and changed to [Advanced Package Tool (APT)][15].
|
|
||||||
|
|
||||||
APT was released as test builds in 1998 (before making an appearance in Debian 2.1 in 1999), and many users consider it one of the defining features of Debian-based systems. It makes use of repositories in a similar fashion to RPM-based systems, but instead of the individual `.repo` files that `yum` uses, `apt` has historically used `/etc/apt/sources.list` to manage repositories. More recently, it also ingests files from `/etc/apt/sources.list.d/`. Following the examples in the RPM-based package managers, to accomplish the same thing on Debian-based distributions you have a few options. You can edit/create the files manually in the aforementioned locations from the terminal, or in some cases you can use a UI front end (such as `Software & Updates` provided by Ubuntu et al.). To provide the same treatment to all distributions, I will cover only the command-line options. To add a repository without directly editing a file, you can do something like this:
|
|
||||||
```
|
|
||||||
user@ubuntu:~$ sudo apt-add-repository "deb http://APT.spideroak.com/ubuntu-spideroak-hardy/ release restricted"
|
|
||||||
|
|
||||||
```
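Under the hood, this simply writes the quoted line into a new list file; the contents would look roughly like this (a sketch based on the argument passed above):

```
deb http://APT.spideroak.com/ubuntu-spideroak-hardy/ release restricted
```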
|
|
||||||
|
|
||||||
This will create a `spideroakone.list` file in `/etc/apt/sources.list.d`. Obviously, these lines change depending on the repository being added. If you are adding a Personal Package Archive (PPA), you can do this:
|
|
||||||
```
|
|
||||||
user@ubuntu:~$ sudo apt-add-repository ppa:gnome-desktop
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
NOTE: Debian does not support PPAs natively.
|
|
||||||
|
|
||||||
After a repository has been added, Debian-based systems need to be made aware that there is a new location to search for packages. This is done via the `apt-get update` command:
|
|
||||||
```
|
|
||||||
user@ubuntu:~$ sudo apt-get update
|
|
||||||
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
|
|
||||||
Hit:2 http://APT.spideroak.com/ubuntu-spideroak-hardy release InRelease
|
|
||||||
Hit:3 http://ca.archive.ubuntu.com/ubuntu xenial InRelease
|
|
||||||
Get:4 http://ca.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
|
|
||||||
Get:5 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [517 kB]
|
|
||||||
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [455 kB]
|
|
||||||
Get:7 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [221 kB]
|
|
||||||
...
|
|
||||||
|
|
||||||
Fetched 6,399 kB in 3s (2,017 kB/s)
|
|
||||||
Reading package lists... Done
|
|
||||||
```
|
|
||||||
|
|
||||||
Now that the new repository is added and updated, you can search for a package using the `apt-cache` command:
|
|
||||||
```
|
|
||||||
user@ubuntu:~$ apt-cache search kate
|
|
||||||
aterm-ml - Afterstep XVT - a VT102 emulator for the X window system
|
|
||||||
frescobaldi - Qt4 LilyPond sheet music editor
|
|
||||||
gitit - Wiki engine backed by a git or darcs filestore
|
|
||||||
jedit - Plugin-based editor for programmers
|
|
||||||
kate - powerful text editor
|
|
||||||
kate-data - shared data files for Kate text editor
|
|
||||||
kate-dbg - debugging symbols for Kate
|
|
||||||
katepart - embeddable text editor component
|
|
||||||
```
|
|
||||||
|
|
||||||
To install `kate`, simply run the corresponding install command:
|
|
||||||
```
|
|
||||||
user@ubuntu:~$ sudo apt-get install kate
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
To remove a package, use `apt-get remove`:
|
|
||||||
```
|
|
||||||
user@ubuntu:~$ sudo apt-get remove kate
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
When it comes to package discovery, APT does not provide any functionality that is similar to `yum whatprovides`. There are a few ways to get this information if you are trying to find where a specific file on disk has come from.
|
|
||||||
|
|
||||||
Using dpkg
|
|
||||||
```
|
|
||||||
user@ubuntu:~$ dpkg -S /bin/ls
|
|
||||||
coreutils: /bin/ls
|
|
||||||
```
|
|
||||||
|
|
||||||
Using apt-file
|
|
||||||
```
|
|
||||||
user@ubuntu:~$ sudo apt-get install apt-file -y
|
|
||||||
|
|
||||||
user@ubuntu:~$ sudo apt-file update
|
|
||||||
|
|
||||||
user@ubuntu:~$ apt-file search kate
|
|
||||||
```
|
|
||||||
|
|
||||||
The problem with `apt-file search` is that, unlike `yum whatprovides`, it is overly verbose unless you know the exact path, and it automatically adds a wildcard search, so you end up with results for anything with the word kate in it:
|
|
||||||
```
|
|
||||||
kate: /usr/bin/kate
|
|
||||||
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katebacktracebrowserplugin.so
|
|
||||||
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katebuildplugin.so
|
|
||||||
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katecloseexceptplugin.so
|
|
||||||
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katectagsplugin.so
|
|
||||||
```
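If you do know the exact path, you can pass it to `apt-file search` to narrow things down considerably, much like `yum whatprovides` with a full path:

```
user@ubuntu:~$ apt-file search /usr/bin/kate
kate: /usr/bin/kate
```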
|
|
||||||
|
|
||||||
Most of these examples have used `apt-get`. Note that most of the current tutorials for Ubuntu specifically have taken to simply using `apt`. The single `apt` command was designed to implement only the most commonly used commands in the APT arsenal. Since functionality is split between `apt-get`, `apt-cache`, and other commands, `apt` looks to unify these into a single command. It also adds some niceties such as colorization, progress bars, and other odds and ends. Most of the commands noted above can be replaced with `apt`, but not all Debian-based distributions currently receiving security patches support using `apt` by default, so you may need to install additional packages.
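On systems where the unified `apt` front end is available, the earlier examples translate directly; for instance:

```
user@ubuntu:~$ sudo apt update
user@ubuntu:~$ apt search kate
user@ubuntu:~$ sudo apt install kate
user@ubuntu:~$ sudo apt remove kate
```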
|
|
||||||
|
|
||||||
#### Arch-based package managers
|
|
||||||
|
|
||||||
[Arch Linux][16] uses a package manager called [pacman][17]. Unlike `.deb` or `.rpm` files, pacman uses a more traditional tarball with the LZMA2 compression (`.tar.xz`). This enables Arch Linux packages to be much smaller than other forms of compressed archives (such as gzip). Initially released in 2002, pacman has been steadily iterated and improved. One of the major benefits of pacman is that it supports the [Arch Build System][18], a system for building packages from source. The build system ingests a file called a PKGBUILD, which contains metadata (such as version numbers, revisions, dependencies, etc.) as well as a shell script with the required flags for compiling a package conforming to the Arch Linux requirements. The resulting binaries are then packaged into the aforementioned `.tar.xz` file for consumption by pacman.
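To give a feel for the format, here is a minimal, hypothetical PKGBUILD for a fictional `hello-arch` package. The field names and the `build()`/`package()` functions are the real convention; the package itself, its URL, and its tarball are made up for illustration:

```
# A toy PKGBUILD for a fictional "hello-arch" package -- illustration only
pkgname=hello-arch
pkgver=1.0.0
pkgrel=1
pkgdesc="A made-up package used to illustrate the PKGBUILD format"
arch=('x86_64')
url="https://example.com/hello-arch"                      # hypothetical upstream
license=('MIT')
depends=('glibc')
source=("https://example.com/hello-arch-$pkgver.tar.gz")  # hypothetical tarball
sha256sums=('SKIP')                                       # real PKGBUILDs should pin a checksum

build() {
  # compile the software using the upstream Makefile
  cd "$pkgname-$pkgver"
  make
}

package() {
  # install into $pkgdir, which makepkg then archives into the .tar.xz package
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running `makepkg -si` in a directory containing a PKGBUILD like this builds the package and hands the result to pacman for installation.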
|
|
||||||
|
|
||||||
This system led to the creation of the [Arch User Repository][19] (AUR) which is a community-driven repository containing PKGBUILD files and supporting patches or scripts. This allows for a virtually endless amount of software to be available in Arch. The obvious advantage of this system is that if a user (or maintainer) wishes to make software available to the public, they do not have to go through official channels to get it accepted in the main repositories. The downside is that it relies on community curation similar to [Docker Hub][20], Canonical's Snap packages, or other similar mechanisms. There are numerous AUR-specific package managers that can be used to download, compile, and install from the PKGBUILD files in the AUR (we will look at this later).
|
|
||||||
|
|
||||||
#### Working with pacman and official repositories
|
|
||||||
|
|
||||||
Arch's main package manager, pacman, uses flags instead of command words like `yum` and `apt`. For example, to search for a package, you would use `pacman -Ss`. As with most commands on Linux, you can find both a man page and inline help. Most of the commands for `pacman` use the sync (`-S`) flag. For example:
|
|
||||||
```
|
|
||||||
user@arch ~ $ pacman -Ss kate
|
|
||||||
|
|
||||||
extra/kate 18.04.2-2 (kde-applications kdebase)
|
|
||||||
Advanced Text Editor
|
|
||||||
extra/libkate 0.4.1-6 [installed]
|
|
||||||
A karaoke and text codec for embedding in ogg
|
|
||||||
extra/libtiger 0.3.4-5 [installed]
|
|
||||||
A rendering library for Kate streams using Pango and Cairo
|
|
||||||
extra/ttf-cheapskate 2.0-12
|
|
||||||
TTFonts collection from dustimo.com
|
|
||||||
community/haskell-cheapskate 0.1.1-100
|
|
||||||
Experimental markdown processor.
|
|
||||||
```
|
|
||||||
|
|
||||||
Arch also uses repositories similar to other package managers. In the output above, search results are prefixed with the repository they are found in (`extra/` and `community/` in this case). Similar to both Red Hat and Debian-based systems, Arch relies on the user to add the repository information into a specific file. The location for these repositories is `/etc/pacman.conf`. The example below is fairly close to a stock system. I have enabled the `[multilib]` repository for Steam support:
|
|
||||||
```
|
|
||||||
[options]
|
|
||||||
Architecture = auto
|
|
||||||
|
|
||||||
Color
|
|
||||||
CheckSpace
|
|
||||||
|
|
||||||
SigLevel = Required DatabaseOptional
|
|
||||||
LocalFileSigLevel = Optional
|
|
||||||
|
|
||||||
[core]
|
|
||||||
Include = /etc/pacman.d/mirrorlist
|
|
||||||
|
|
||||||
[extra]
|
|
||||||
Include = /etc/pacman.d/mirrorlist
|
|
||||||
|
|
||||||
[community]
|
|
||||||
Include = /etc/pacman.d/mirrorlist
|
|
||||||
|
|
||||||
[multilib]
|
|
||||||
Include = /etc/pacman.d/mirrorlist
|
|
||||||
```
|
|
||||||
|
|
||||||
It is possible to specify a specific URL in `pacman.conf`. This functionality can be used to make sure all packages come from a specific point in time. If, for example, a package has a bug that affects you severely and it has several dependencies, you can roll back to a specific point in time by adding a specific URL into your `pacman.conf` and then running the commands to downgrade the system:
|
|
||||||
```
|
|
||||||
[core]
|
|
||||||
Server=https://archive.archlinux.org/repos/2017/12/22/$repo/os/$arch
|
|
||||||
```
|
|
||||||
|
|
||||||
Like Debian-based systems, Arch does not update its local repository information until you tell it to do so. You can refresh the package database by issuing the following command:
|
|
||||||
```
|
|
||||||
user@arch ~ $ sudo pacman -Sy
|
|
||||||
|
|
||||||
:: Synchronizing package databases...
|
|
||||||
core 130.2 KiB 851K/s 00:00 [##########################################################] 100%
|
|
||||||
extra 1645.3 KiB 2.69M/s 00:01 [##########################################################] 100%
|
|
||||||
community 4.5 MiB 2.27M/s 00:02 [##########################################################] 100%
|
|
||||||
multilib is up to date
|
|
||||||
```
|
|
||||||
|
|
||||||
As you can see in the above output, `pacman` thinks that the multilib package database is up to date. You can force a refresh if you think this is incorrect by running `pacman -Syy`. If you want to update your entire system (excluding packages installed from the AUR), you can run `pacman -Syu`:
|
|
||||||
```
|
|
||||||
user@arch ~ $ sudo pacman -Syu
|
|
||||||
|
|
||||||
:: Synchronizing package databases...
|
|
||||||
core is up to date
|
|
||||||
extra is up to date
|
|
||||||
community is up to date
|
|
||||||
multilib is up to date
|
|
||||||
:: Starting full system upgrade...
|
|
||||||
resolving dependencies...
|
|
||||||
looking for conflicting packages...
|
|
||||||
|
|
||||||
Packages (45) ceph-13.2.0-2 ceph-libs-13.2.0-2 debootstrap-1.0.105-1 guile-2.2.4-1 harfbuzz-1.8.2-1 harfbuzz-icu-1.8.2-1 haskell-aeson-1.3.1.1-20
|
|
||||||
haskell-attoparsec-0.13.2.2-24 haskell-tagged-0.8.6-1 imagemagick-7.0.8.4-1 lib32-harfbuzz-1.8.2-1 lib32-libgusb-0.3.0-1 lib32-systemd-239.0-1
|
|
||||||
libgit2-1:0.27.2-1 libinput-1.11.2-1 libmagick-7.0.8.4-1 libmagick6-6.9.10.4-1 libopenshot-0.2.0-1 libopenshot-audio-0.1.6-1 libosinfo-1.2.0-1
|
|
||||||
libxfce4util-4.13.2-1 minetest-0.4.17.1-1 minetest-common-0.4.17.1-1 mlt-6.10.0-1 mlt-python-bindings-6.10.0-1 ndctl-61.1-1 netctl-1.17-1
|
|
||||||
nodejs-10.6.0-1
|
|
||||||
|
|
||||||
Total Download Size: 2.66 MiB
|
|
||||||
Total Installed Size: 879.15 MiB
|
|
||||||
Net Upgrade Size: -365.27 MiB
|
|
||||||
|
|
||||||
:: Proceed with installation? [Y/n]
|
|
||||||
```
|
|
||||||
|
|
||||||
In the scenario mentioned earlier regarding downgrading a system, you can force a downgrade by issuing `pacman -Syyuu`. It is important to note that this should not be undertaken lightly. This should not cause a problem in most cases; however, there is a chance that downgrading of a package or several packages will cause a cascading failure and leave your system in an inconsistent state. USE WITH CAUTION!
|
|
||||||
|
|
||||||
To install a package, simply use `pacman -S kate`:
|
|
||||||
```
|
|
||||||
user@arch ~ $ sudo pacman -S kate
|
|
||||||
|
|
||||||
resolving dependencies...
|
|
||||||
looking for conflicting packages...
|
|
||||||
|
|
||||||
Packages (7) editorconfig-core-c-0.12.2-1 kactivities-5.47.0-1 kparts-5.47.0-1 ktexteditor-5.47.0-2 syntax-highlighting-5.47.0-1 threadweaver-5.47.0-1
|
|
||||||
kate-18.04.2-2
|
|
||||||
|
|
||||||
Total Download Size: 10.94 MiB
|
|
||||||
Total Installed Size: 38.91 MiB
|
|
||||||
|
|
||||||
:: Proceed with installation? [Y/n]
|
|
||||||
```
|
|
||||||
|
|
||||||
To remove a package, you can run `pacman -R kate`. This removes only the package and not its dependencies:
|
|
||||||
```
|
|
||||||
user@arch ~ $ sudo pacman -R kate
|
|
||||||
|
|
||||||
checking dependencies...
|
|
||||||
|
|
||||||
Packages (1) kate-18.04.2-2
|
|
||||||
|
|
||||||
Total Removed Size: 20.30 MiB
|
|
||||||
|
|
||||||
:: Do you want to remove these packages? [Y/n]
|
|
||||||
```
|
|
||||||
|
|
||||||
If you want to remove the dependencies that are not required by other packages, you can run `pacman -Rs`:
|
|
||||||
```
|
|
||||||
user@arch ~ $ sudo pacman -Rs kate
|
|
||||||
|
|
||||||
checking dependencies...
|
|
||||||
|
|
||||||
Packages (7) editorconfig-core-c-0.12.2-1 kactivities-5.47.0-1 kparts-5.47.0-1 ktexteditor-5.47.0-2 syntax-highlighting-5.47.0-1 threadweaver-5.47.0-1
|
|
||||||
kate-18.04.2-2
|
|
||||||
|
|
||||||
Total Removed Size: 38.91 MiB
|
|
||||||
|
|
||||||
:: Do you want to remove these packages? [Y/n]
|
|
||||||
```
|
|
||||||
|
|
||||||
Pacman, in my opinion, offers the most succinct way of searching for the name of a package for a given utility. As shown above, `yum` and `apt` both rely on pathing in order to find useful results. Pacman makes some intelligent guesses as to which package you are most likely looking for:
|
|
||||||
```
|
|
||||||
user@arch ~ $ sudo pacman -Fs updatedb
|
|
||||||
core/mlocate 0.26.git.20170220-1
|
|
||||||
usr/bin/updatedb
|
|
||||||
|
|
||||||
user@arch ~ $ sudo pacman -Fs kate
|
|
||||||
extra/kate 18.04.2-2
|
|
||||||
usr/bin/kate
|
|
||||||
```
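One caveat: the `-F` operations read from a separate files database, so if a search like the above comes back empty, you likely need to sync that database first:

```
user@arch ~ $ sudo pacman -Fy
```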
|
|
||||||
|
|
||||||
#### Working with the AUR
|
|
||||||
|
|
||||||
There are several popular AUR package manager helpers. Of these, `yaourt` and `pacaur` are fairly prolific. However, both projects are listed as discontinued or problematic on the [Arch Wiki][21]. For that reason, I will discuss `aurman`. It works almost exactly like `pacman`, except it searches the AUR and includes some helpful, albeit potentially dangerous, options. Installing a package from the AUR will initiate use of the package maintainer's build scripts. You will be prompted several times for permission to continue (I have truncated the output for brevity):
|
|
||||||
```
|
|
||||||
aurman -S telegram-desktop-bin
|
|
||||||
~~ initializing aurman...
|
|
||||||
~~ the following packages are neither in known repos nor in the aur
|
|
||||||
...
|
|
||||||
~~ calculating solutions...
|
|
||||||
|
|
||||||
:: The following 1 package(s) are getting updated:
|
|
||||||
aur/telegram-desktop-bin 1.3.0-1 -> 1.3.9-1
|
|
||||||
|
|
||||||
?? Do you want to continue? Y/n: Y
|
|
||||||
|
|
||||||
~~ looking for new pkgbuilds and fetching them...
|
|
||||||
Cloning into 'telegram-desktop-bin'...
|
|
||||||
|
|
||||||
remote: Counting objects: 301, done.
|
|
||||||
remote: Compressing objects: 100% (152/152), done.
|
|
||||||
remote: Total 301 (delta 161), reused 286 (delta 147)
|
|
||||||
Receiving objects: 100% (301/301), 76.17 KiB | 639.00 KiB/s, done.
|
|
||||||
Resolving deltas: 100% (161/161), done.
|
|
||||||
?? Do you want to see the changes of telegram-desktop-bin? N/y: N
|
|
||||||
|
|
||||||
[sudo] password for user:
|
|
||||||
|
|
||||||
...
|
|
||||||
==> Leaving fakeroot environment.
|
|
||||||
==> Finished making: telegram-desktop-bin 1.3.9-1 (Thu 05 Jul 2018 11:22:02 AM EDT)
|
|
||||||
==> Cleaning up...
|
|
||||||
loading packages...
|
|
||||||
resolving dependencies...
|
|
||||||
looking for conflicting packages...
|
|
||||||
|
|
||||||
Packages (1) telegram-desktop-bin-1.3.9-1
|
|
||||||
|
|
||||||
Total Installed Size: 88.81 MiB
|
|
||||||
Net Upgrade Size: 5.33 MiB
|
|
||||||
|
|
||||||
:: Proceed with installation? [Y/n]
|
|
||||||
```
|
|
||||||
|
|
||||||
Sometimes you will be prompted for more input, depending on the complexity of the package you are installing. To avoid this tedium, `aurman` allows you to pass both the `--noconfirm` and `--noedit` options. This is equivalent to saying "accept all of the defaults, and trust that the package maintainer's scripts will not be malicious." **USE THIS OPTION WITH EXTREME CAUTION!** While these options are unlikely to break your system on their own, you should never blindly accept someone else's scripts.
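For completeness, the fully non-interactive form (again, only for packages whose PKGBUILDs you already trust) would look like this:

```
aurman -S telegram-desktop-bin --noconfirm --noedit
```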
|
|
||||||
|
|
||||||
### Conclusion
|
|
||||||
|
|
||||||
This article, of course, only scratches the surface of what package managers can do. There are also many other package managers available that I could not cover in this space. Some distributions, such as Ubuntu or Elementary OS, have gone to great lengths to provide a graphical approach to package management.
|
|
||||||
|
|
||||||
If you are interested in some of the more advanced functions of package managers, please post your questions or comments below and I would be glad to write a follow-up article.
|
|
||||||
|
|
||||||
### Appendix
|
|
||||||
```
|
|
||||||
# search for packages
|
|
||||||
yum search <package>
|
|
||||||
dnf search <package>
|
|
||||||
zypper search <package>
|
|
||||||
apt-cache search <package>
|
|
||||||
apt search <package>
|
|
||||||
pacman -Ss <package>
|
|
||||||
|
|
||||||
# install packages
|
|
||||||
yum install <package>
|
|
||||||
dnf install <package>
|
|
||||||
zypper install <package>
|
|
||||||
apt-get install <package>
|
|
||||||
apt install <package>
|
|
||||||
pacman -S <package>
|
|
||||||
|
|
||||||
# update package database, not required by yum, dnf and zypper
|
|
||||||
apt-get update
|
|
||||||
apt update
|
|
||||||
pacman -Sy
|
|
||||||
|
|
||||||
# update all system packages
|
|
||||||
yum update
|
|
||||||
dnf update
|
|
||||||
zypper update
|
|
||||||
apt-get upgrade
|
|
||||||
apt upgrade
|
|
||||||
pacman -Su
|
|
||||||
|
|
||||||
# remove an installed package
|
|
||||||
yum remove <package>
|
|
||||||
dnf remove <package>
|
|
||||||
zypper remove <package>
apt-get remove <package>
|
|
||||||
apt remove <package>
|
|
||||||
pacman -R <package>
|
|
||||||
pacman -Rs <package>
|
|
||||||
|
|
||||||
# search for the package containing a specific file or binary
|
|
||||||
yum whatprovides *<binary>
|
|
||||||
dnf whatprovides *<binary>
|
|
||||||
zypper what-provides <binary>
|
|
||||||
zypper search --provides <binary>
|
|
||||||
apt-file search <binary>
|
|
||||||
pacman -Fs <binary>
|
|
||||||
```
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: https://opensource.com/article/18/7/evolution-package-managers
|
|
||||||
|
|
||||||
作者:[Steve Ovens][a]
|
|
||||||
选题:[lujun9972](https://github.com/lujun9972)
|
|
||||||
译者:[译者ID](https://github.com/译者ID)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[a]:https://opensource.com/users/stratusss
|
|
||||||
[1]:https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures
|
|
||||||
[2]:https://en.wikipedia.org/wiki/Yum_(software)
|
|
||||||
[3]:https://fedoraproject.org/wiki/DNF
|
|
||||||
[4]:https://en.wikipedia.org/wiki/Rpm_(software)
|
|
||||||
[5]:https://en.wikipedia.org/wiki/Yellow_Dog_Linux
|
|
||||||
[6]:https://searchdatacenter.techtarget.com/definition/Yellowdog-Updater-Modified-YUM
|
|
||||||
[7]:https://en.wikipedia.org/wiki/DNF_(software)
|
|
||||||
[8]:https://en.opensuse.org/Portal:Zypper
|
|
||||||
[9]:https://www.suse.com/
|
|
||||||
[10]:https://www.opensuse.org/
|
|
||||||
[11]:https://en.wikipedia.org/wiki/MeeGo
|
|
||||||
[12]:https://sailfishos.org/
|
|
||||||
[13]:https://www.tizen.org/
|
|
||||||
[14]:https://en.wikipedia.org/wiki/YaST
|
|
||||||
[15]:https://en.wikipedia.org/wiki/APT_(Debian)
|
|
||||||
[16]:https://www.archlinux.org/
|
|
||||||
[17]:https://wiki.archlinux.org/index.php/pacman
|
|
||||||
[18]:https://wiki.archlinux.org/index.php/Arch_Build_System
|
|
||||||
[19]:https://aur.archlinux.org/
|
|
||||||
[20]:https://hub.docker.com/
|
|
||||||
[21]:https://wiki.archlinux.org/index.php/AUR_helpers#Discontinued_or_problematic
|
|
@ -1,130 +0,0 @@
|
|||||||
translating---geekpi
|
|
||||||
|
|
||||||
How To Switch Between Multiple PHP Versions In Ubuntu
|
|
||||||
======
|
|
||||||
|
|
||||||
![](https://www.ostechnix.com/wp-content/uploads/2018/08/php-720x340.png)
|
|
||||||
|
|
||||||
Sometimes, the most recent version of an installed package might not work as you expected. Your application may not be compatible with the updated package and may support only a specific older version. In such cases, you can simply downgrade the problematic package to its earlier working version in no time. Refer to our old guides on how to downgrade a package in Ubuntu and its variants [**here**][1] and how to downgrade a package in Arch Linux and its derivatives [**here**][2]. However, you don’t always need to downgrade: some packages can coexist, so we can use multiple versions at the same time. For instance, let us say you are testing a PHP application on a [**LAMP stack**][3] deployed on Ubuntu 18.04 LTS. After a while you find out that the application works fine with PHP 5.6, but not with PHP 7.2 (Ubuntu 18.04 LTS installs PHP 7.x by default). Are you going to reinstall PHP or the whole LAMP stack? That is not necessary. You don’t even have to downgrade PHP to its earlier version. In this brief tutorial, I will show you how to switch between multiple PHP versions in Ubuntu 18.04 LTS. It’s not as difficult as you may think. Read on.
|
|
||||||
|
|
||||||
### Switch Between Multiple PHP Versions
|
|
||||||
|
|
||||||
To check the default installed version of PHP, run:
|
|
||||||
```
|
|
||||||
$ php -v
|
|
||||||
PHP 7.2.7-0ubuntu0.18.04.2 (cli) (built: Jul 4 2018 16:55:24) ( NTS )
|
|
||||||
Copyright (c) 1997-2018 The PHP Group
|
|
||||||
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
|
|
||||||
with Zend OPcache v7.2.7-0ubuntu0.18.04.2, Copyright (c) 1999-2018, by Zend Technologies
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
As you can see, the installed version of PHP is 7.2.7. After testing your application for a couple of days, you find out that it doesn’t support PHP 7.2. In such cases, it is a good idea to have both a PHP 5.x and a PHP 7.x version installed, so that you can easily switch between any supported versions at any time.
|
|
||||||
|
|
||||||
You don’t need to remove PHP 7.x or reinstall the LAMP stack. You can use both PHP 5.x and 7.x versions together.
|
|
||||||
|
|
||||||
I assume you haven’t uninstalled PHP 5.6 from your system yet. In case you have already removed it, you can install it again from a PPA as shown below.

You can install PHP 5.6 from a PPA:
|
|
||||||
```
|
|
||||||
$ sudo add-apt-repository -y ppa:ondrej/php
|
|
||||||
$ sudo apt update
|
|
||||||
$ sudo apt install php5.6
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Switch from PHP7.x to PHP5.x
|
|
||||||
|
|
||||||
First disable PHP7.2 module using command:
|
|
||||||
```
|
|
||||||
$ sudo a2dismod php7.2
|
|
||||||
Module php7.2 disabled.
|
|
||||||
To activate the new configuration, you need to run:
|
|
||||||
systemctl restart apache2
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Next, enable PHP5.6 module:
|
|
||||||
```
|
|
||||||
$ sudo a2enmod php5.6
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Set PHP5.6 as default version:
|
|
||||||
```
|
|
||||||
$ sudo update-alternatives --set php /usr/bin/php5.6
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Alternatively, you can run the following command to set which system-wide version of PHP you want to use by default.
|
|
||||||
```
|
|
||||||
$ sudo update-alternatives --config php
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Enter the selection number to set it as default version or simply press ENTER to keep the current choice.
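If you only want to see which PHP binaries are currently registered as alternatives, without changing anything, you can list them (the exact entries depend on what you have installed; the output below assumes only 5.6 and 7.2 are present):

```
$ update-alternatives --list php
/usr/bin/php5.6
/usr/bin/php7.2
```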
|
|
||||||
|
|
||||||
In case you have installed other PHP extensions, set them as default as well.
|
|
||||||
```
|
|
||||||
$ sudo update-alternatives --set phar /usr/bin/phar5.6
|
|
||||||
|
|
||||||
```
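Depending on which PHP packages are installed, there may be more alternatives in the same family (for example `phar.phar`); the pattern is the same, assuming the corresponding 5.6 binary exists on your system:

```
$ sudo update-alternatives --set phar.phar /usr/bin/phar.phar5.6
```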
|
|
||||||
|
|
||||||
Finally, restart your Apache web server:
|
|
||||||
```
|
|
||||||
$ sudo systemctl restart apache2
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Now, check if PHP5.6 is the default version or not:
|
|
||||||
```
|
|
||||||
$ php -v
|
|
||||||
PHP 5.6.37-1+ubuntu18.04.1+deb.sury.org+1 (cli)
|
|
||||||
Copyright (c) 1997-2016 The PHP Group
|
|
||||||
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
|
|
||||||
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Switch from PHP5.x to PHP7.x
|
|
||||||
|
|
||||||
Likewise, you can switch from PHP5.x to PHP7.x version as shown below.
|
|
||||||
```
|
|
||||||
$ sudo a2enmod php7.2
|
|
||||||
|
|
||||||
$ sudo a2dismod php5.6
|
|
||||||
|
|
||||||
$ sudo update-alternatives --set php /usr/bin/php7.2
|
|
||||||
|
|
||||||
$ sudo systemctl restart apache2
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
**A word of caution:**
|
|
||||||
|
|
||||||
The final stable PHP 5.6 release reached the [**end of active support**][4] on 19 Jan 2017. However, PHP 5.6 will continue to receive support for critical security issues until 31 Dec 2018. So it is recommended to upgrade all your PHP applications to be compatible with PHP 7.x as soon as possible.
|
|
||||||
|
|
||||||
If you want to prevent PHP from being automatically upgraded in the future, refer to the following guide.
|
|
||||||
|
|
||||||
And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned!
|
|
||||||
|
|
||||||
Cheers!
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: https://www.ostechnix.com/how-to-switch-between-multiple-php-versions-in-ubuntu/
|
|
||||||
|
|
||||||
作者:[SK][a]
|
|
||||||
选题:[lujun9972](https://github.com/lujun9972)
|
|
||||||
译者:[译者ID](https://github.com/译者ID)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[a]:https://www.ostechnix.com/author/sk/
|
|
||||||
[1]:https://www.ostechnix.com/how-to-downgrade-a-package-in-ubuntu/
|
|
||||||
[2]:https://www.ostechnix.com/downgrade-package-arch-linux/
|
|
||||||
[3]:https://www.ostechnix.com/install-apache-mariadb-php-lamp-stack-ubuntu-16-04/
|
|
||||||
[4]:http://php.net/supported-versions.php
|
|
@ -1,3 +1,6 @@
|
|||||||
|
Translating by MjSeven
|
||||||
|
|
||||||
|
|
||||||
How to display data in a human-friendly way on Linux
======
|
||||||
|
|
||||||
|
@ -0,0 +1,95 @@
|
|||||||
|
Top Linux developers' recommended programming books
|
||||||
|
======
|
||||||
|
Without question, Linux was created by brilliant programmers who employed good computer science knowledge. Let the Linux programmers whose names you know share the books that got them started and the technology references they recommend for today's developers. How many of them have you read?
|
||||||
|
|
||||||
|
Linux is, arguably, the operating system of the 21st century. While Linus Torvalds made a lot of good business and community decisions in building the open source community, the primary reason networking professionals and developers adopted Linux is the quality of its code and its usefulness. While Torvalds is a programming genius, he has been assisted by many other brilliant developers.
|
||||||
|
|
||||||
|
I asked Torvalds and other top Linux developers which books helped them on their road to programming excellence. This is what they told me.
|
||||||
|
|
||||||
|
### By shining C
|
||||||
|
|
||||||
|
Linux was developed in the 1990s, as were other fundamental open source applications. As a result, the tools and languages the developers used reflected the times, which meant a lot of C programming language. While [C is no longer as popular][1], for many established developers it was their first serious language, which is reflected in their choice of influential books.
|
||||||
|
|
||||||
|
“You shouldn't start programming with the languages I started with or the way I did,” says Torvalds. He started with BASIC, moved on to machine code (“not even assembly language, actual ‘just numbers’ machine code,” he explains), then assembly language and C.
|
||||||
|
|
||||||
|
“None of those languages are what anybody should begin with anymore,” Torvalds says. “Some of them make no sense at all today (BASIC and machine code). And while C is still a major language, I don't think you should begin with it.”
|
||||||
|
|
||||||
|
It's not that he dislikes C. After all, Linux is written in [GNU C][2]. "I still think C is a great language with a pretty simple syntax and is very good for many things,” he says. But the effort to get started with it is much too high for it to be a good beginner language by today's standards. “I suspect you'd just get frustrated. Going from your first ‘Hello World’ program to something you might actually use is just too big of a step."
|
||||||
|
|
||||||
|
From that era, the only programming book that stood out for Torvalds is Brian W. Kernighan and Dennis M. Ritchie's [C Programming Language][3], known in serious programming circles as K&R. “It was small, clear, concise,” he explains. “But you need to already have a programming background to appreciate it."
|
||||||
|
|
||||||
|
Torvalds is not the only open source developer to recommend K&R. Several others cite their well-thumbed copies as influential references, among them Wim Coekaerts, senior vice president for Linux and virtualization development at Oracle; Linux developer Alan Cox; Google Cloud CTO Brian Stevens; and Pete Graner, Canonical's vice president of technical operations.
|
||||||
|
|
||||||
|
If you want to tackle C today, Jeremy Allison, co-founder of Samba, recommends [21st Century C][4]. Then, Allison suggests, follow it up with the older but still thorough [Expert C Programming][5] as well as the 20-year-old [Programming with POSIX Threads][6].
|
||||||
|
|
||||||
|
### If not C, what?
|
||||||
|
|
||||||
|
Linux developers’ recommendations for current programming books naturally are an offshoot of the tools and languages they think are most suitable for today’s development projects. They also reflect the developers’ personal preferences. For example, Allison thinks young developers would be well served by learning Go with the help of [The Go Programming Language][7] and Rust with [Programming Rust][8].
|
||||||
|
|
||||||
|
But it may make sense to think beyond programming languages (and thus books to teach you their techniques). To do something meaningful today, “start from some environment with a toolkit that does 99 percent of the obscure details for you, so that you can script things around it," Torvalds recommends.
|
||||||
|
|
||||||
|
"Honestly, the language itself isn't nearly as important as the infrastructure around it,” he continues. “Maybe you'd start with Java or Kotlin—not because of those languages per se, but because you want to write an app for your phone and the Android SDK ends up making those better choices. Or, maybe you're interested in games, so you start with one of the game engines, which often have some scripting language of their own."
|
||||||
|
|
||||||
|
That infrastructure includes programming books specific to the operating system itself. Graner followed K&R by reading W. Richard Stevens' [Unix Network Programming][10] books. In particular, Stevens' [TCP/IP Illustrated, Volume 1: The Protocols][11] is considered still relevant even though it's almost 30 years old. Because Linux development is largely [relevant to networking infrastructure][12], Graner also recommends the many O’Reilly books on [Sendmail][13], [Bash][14], [DNS][15], and [IMAP/POP][16].
|
||||||
|
|
||||||
|
Coekaerts is also fond of Maurice Bach's [The Design of the Unix Operating System][17]. So is James Bottomley, a Linux kernel developer who used Bach's tome to pull apart Linux when the OS was new.
|
||||||
|
|
||||||
|
### Design knowledge never goes stale
|
||||||
|
|
||||||
|
But even that may be too tech-specific. "All developers should start with design before syntax,” says Stevens. “[The Design of Everyday Things][18] is one of my favorites.”
|
||||||
|
|
||||||
|
Coekaerts likes Kernighan and Rob Pike's [The Practice of Programming][19]. The design-practice book wasn't around when Coekaerts was in school, “but I recommend it to everyone to read," he says.
|
||||||
|
|
||||||
|
Whenever you ask serious long-term developers about their favorite books, sooner or later someone's going to mention Donald Knuth’s [The Art of Computer Programming][20]. Dirk Hohndel, VMware's chief open source officer, considers it timeless though, admittedly, “not necessarily super-useful today."
|
||||||
|
|
||||||
|
### Read code. Lots of code
|
||||||
|
|
||||||
|
While programming books can teach you a lot, don’t miss another opportunity that is unique to the open source community: [reading the code][21]. There are untold megabytes of examples of how to solve a given programming problem—and how you can get in trouble, too. Stevens says his No. 1 “book” for honing programming skills is having access to the Unix source code.
|
||||||
|
|
||||||
|
Don’t overlook the opportunity to learn in person, too. “I learned BASIC by being in a computer club with other people all learning together,” says Cox. “In my opinion, that is still by far the best way to learn." He learned machine code from [Mastering Machine Code on Your ZX81][22] and the Honeywell L66 B compiler manuals, but working with other developers made a big difference.
|
||||||
|
|
||||||
|
“I still think the way to learn best remains to be with a group of people having fun and trying to solve a problem you care about together,” says Cox. “It doesn't matter if you are 5 or 55."
|
||||||
|
|
||||||
|
What struck me the most about these recommendations is how often the top Linux developers started at a low level—not just C or assembly language but machine language. Obviously, it’s been very useful in helping developers understand how computing works at a very basic level.
|
||||||
|
|
||||||
|
So, ready to give hard-core Linux development a try? Greg Kroah-Hartman, the Linux stable branch kernel maintainer, recommends Steve Oualline's [Practical C Programming][23] and Samuel Harbison and Guy Steele's [C: A Reference Manual][24]. Next, read "[HOWTO do Linux kernel development][25]." Then, says Kroah-Hartman, you'll be ready to start.
|
||||||
|
|
||||||
|
In the meantime, study hard, program lots, and best of luck to you in following the footsteps of Linux's top programmers.
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://www.hpe.com/us/en/insights/articles/top-linux-developers-recommended-programming-books-1808.html
|
||||||
|
|
||||||
|
作者:[Steven Vaughan-Nichols][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html
|
||||||
|
[1]:https://www.codingdojo.com/blog/7-most-in-demand-programming-languages-of-2018/
|
||||||
|
[2]:https://www.gnu.org/software/gnu-c-manual/
|
||||||
|
[3]:https://amzn.to/2nhyjEO
|
||||||
|
[4]:https://amzn.to/2vsL8k9
|
||||||
|
[5]:https://amzn.to/2KBbWn9
|
||||||
|
[6]:https://amzn.to/2M0rfeR
|
||||||
|
[7]:https://amzn.to/2nhyrnMe
|
||||||
|
[8]:http://shop.oreilly.com/product/0636920040385.do
|
||||||
|
[9]:https://www.hpe.com/us/en/resources/storage/containers-for-dummies.html?jumpid=in_510384402_linuxbooks_containerebook0818
|
||||||
|
[10]:https://amzn.to/2MfpbyC
|
||||||
|
[11]:https://amzn.to/2MpgrTn
|
||||||
|
[12]:https://www.hpe.com/us/en/insights/articles/how-to-see-whats-going-on-with-your-linux-system-right-now-1807.html
|
||||||
|
[13]:http://shop.oreilly.com/product/9780596510299.do
|
||||||
|
[14]:http://shop.oreilly.com/product/9780596009656.do
|
||||||
|
[15]:http://shop.oreilly.com/product/9780596100575.do
|
||||||
|
[16]:http://shop.oreilly.com/product/9780596000127.do
|
||||||
|
[17]:https://amzn.to/2vsCJgF
|
||||||
|
[18]:https://amzn.to/2APzt3Z
|
||||||
|
[19]:https://www.amazon.com/Practice-Programming-Addison-Wesley-Professional-Computing/dp/020161586X/ref=as_li_ss_tl?ie=UTF8&linkCode=sl1&tag=thegroovycorpora&linkId=e6bbdb1ca2182487069bf9089fc8107e&language=en_US
|
||||||
|
[20]:https://amzn.to/2OknFsJ
|
||||||
|
[21]:https://amzn.to/2M4VVL3
|
||||||
|
[22]:https://amzn.to/2OjccJA
|
||||||
|
[23]:http://shop.oreilly.com/product/9781565923065.do
|
||||||
|
[24]:https://amzn.to/2OjzgrT
|
||||||
|
[25]:https://www.kernel.org/doc/html/v4.16/process/howto.html
|
@ -1,3 +1,5 @@
|
|||||||
|
translating---geekpi
|
||||||
|
|
||||||
How the L1 Terminal Fault vulnerability affects Linux systems
======
|
||||||
|
|
||||||
|
File diff suppressed because it is too large
121 sources/tech/20180816 Garbage collection in Perl 6.md Normal file
@ -0,0 +1,121 @@
|
|||||||
|
Garbage collection in Perl 6
|
||||||
|
======
|
||||||
|
|
||||||
|
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/garbage-trash-waste.png?itok=2jisoOXn)
|
||||||
|
|
||||||
|
In the [first article][1] in this series on migrating Perl 5 code to Perl 6, we looked into some of the issues you might encounter when porting your code. In this second article, we’ll get into how garbage collection differs in Perl 6.
|
||||||
|
|
||||||
|
There is no timely destruction of objects in Perl 6. This revelation usually comes as quite a shock to people used to the semantics of object destruction in Perl 5. But worry not, there are other ways in Perl 6 to get the same behavior, albeit requiring a little more thought by the developer. Let’s first examine a little background on the situation in Perl 5.
|
||||||
|
|
||||||
|
### Reference counting
|
||||||
|
|
||||||
|
In Perl 5, timely destruction of objects “going out of scope” is achieved by [reference counting][2]. When something is created in Perl 5, it has a reference count of 1 or more, which keeps it alive. In its simplest case it looks like this:
|
||||||
|
```
|
||||||
|
{
|
||||||
|
|
||||||
|
my $a = 42; # reference count of $a = 1, because lives in lexical pad
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
# lexical pad is gone, reference count to 0
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
In Perl 5, if the value is an object (aka blessed), the `DESTROY` method will be called on it.
|
||||||
|
```
|
||||||
|
{
|
||||||
|
|
||||||
|
my $a = Foo->new;
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
# $a->DESTROY called
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
If no external resources are involved, timely destruction is just another way of managing memory used by a program. And you, as a programmer, shouldn’t need to care about how and when things get recycled. Having said that, timely destruction is a very nice feature to have if you need to deal with external resources, such as database handles (of which there are generally only a limited number provided by the database server). And reference counting can provide that.
|
||||||
|
|
||||||
|
However, reference counting has several drawbacks. It has taken Perl 5 core developers many years to get reference counting working correctly. And if you’re working in [XS][3], you always need to be aware of reference counting to prevent memory leakage or premature destruction.
|
||||||
|
|
||||||
|
Keeping things in sync gets more difficult in a multi-threaded environment, as you do not want to lose any updates to references made from multiple threads at the same time (as that would cause memory leakage and/or external resources to not be released). To circumvent that, some kind of locking or atomic updates would be needed, neither of which are cheap.
|
||||||
|
|
||||||
|
> Please note that Perl 5 ithreads are more like an in-memory fork with unshared memory between interpreters than threads in programming languages such as C. So, it still doesn’t need any locking for its reference counting.
|
||||||
|
|
||||||
|
Reference counting also has the basic drawback that if two objects contain references to each other, they will never be destroyed as they keep each other’s reference count above 0 (a circular reference). In practice, this often goes much deeper, more like `A -> B -> C -> A`, where A, B, and C are all keeping each other alive.
|
||||||
|
|
||||||
|
The concept of a weak reference was developed to circumvent these situations in Perl 5. Although this can fix the circular reference issue, it has performance implications and doesn’t fix the problem of having (and finding) circular references in the first place. You need to be able to find out where a weak reference can be used in the best way; otherwise, you might get unwanted premature object destruction.
|
||||||
|
|
||||||
|
### Reachability analysis
|
||||||
|
|
||||||
|
Since Perl 6 is multi-threaded in its core, it was decided at a very early stage that reference counting would be problematic performance-wise and maintenance-wise. Instead, objects are evicted from memory when more memory is needed and the object can be safely removed.
|
||||||
|
|
||||||
|
In Perl 6, you can create a `DESTROY` method, just as you can in Perl 5. But you cannot be sure when (if ever) it will be called.
|
||||||
|
|
||||||
|
Without getting into [too much detail][4], objects in Perl 6 are destroyed only when a garbage collection run is initiated, e.g., when a certain memory limit has been reached. Only then, if an object cannot be reached anymore by other objects in memory and it has a `DESTROY` method, will it be called just prior to the object being removed.
|
||||||
|
|
||||||
|
No garbage collection is done by Perl 6 when a program exits. Applicable [phasers][5] (such as `LEAVE` and `END`) will get called, but no garbage collection will be done other than what is (indirectly) initiated by the code run in the phasers.
|
||||||
|
|
||||||
|
If you always need an orderly shutdown of external resources used by your program (such as database handles), you can use a phaser to make sure the external resource is freed in a proper and timely manner.
|
||||||
|
|
||||||
|
For example, you can use the `END` phaser (known as an `END` block in Perl 5) to disconnect properly from a database when the program exits (for whatever reason):
|
||||||
|
```
|
||||||
|
my $dbh = DBIish.connect( ... ) or die "Couldn't connect";
|
||||||
|
|
||||||
|
END $dbh.disconnect;
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Note that the `END` phaser does not need to have a block (like `{ ... }`) in Perl 6. If it doesn’t, the code in the phaser shares the lexical pad (lexpad) with the surrounding code.
|
||||||
|
|
||||||
|
There is one flaw in the code above: If the program exits before the database connection is made or if the database connection failed for whatever reason, it will still attempt to call the `.disconnect` method on whatever is in `$dbh`, which will result in an execution error. There is however a simple idiom to circumvent this situation in Perl 6 [using with][6].
|
||||||
|
```
|
||||||
|
END .disconnect with $dbh;
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
The postfix `with` matches only if the given value is defined (generally, an instantiated object) and then topicalizes it to `$_`. The `.disconnect` is short for `$_.disconnect`.
|
||||||
|
|
||||||
|
If you would like to have an external resource clean up whenever a specific scope is exited, you can use the `LEAVE` phaser inside that scope.
|
||||||
|
```
|
||||||
|
if DBIish.connect( ... ) -> $dbh {
|
||||||
|
|
||||||
|
LEAVE $dbh.disconnect; # no need for `with` here
|
||||||
|
|
||||||
|
# do your stuff with the database
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
else {
|
||||||
|
|
||||||
|
say "Could not do the stuff that needed to be done";
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Whenever the scope of the `if` is left, any `LEAVE` phaser will be executed. Thus the database resource will be freed whenever the code has run in that scope.
|
||||||
|
|
||||||
|
### Summary
|
||||||
|
|
||||||
|
Even though Perl 6 does not have the timely destruction of objects that Perl 5 users are used to, it does have easy-to-use alternative ways to ensure management of external resources, similar to those in Perl 5.
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://opensource.com/article/18/8/garbage-collection-perl-6
|
||||||
|
|
||||||
|
作者:[Elizabeth Mattijsen][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://opensource.com/users/lizmat
|
||||||
|
[1]:https://opensource.com/article/18/7/migrating-perl-5-perl-6
|
||||||
|
[2]:https://en.wikipedia.org/wiki/Reference_counting
|
||||||
|
[3]:https://en.wikipedia.org/wiki/XS_%28Perl%29
|
||||||
|
[4]:https://github.com/MoarVM/MoarVM/blob/master/docs/gc.markdown
|
||||||
|
[5]:https://docs.perl6.org/language/phasers
|
||||||
|
[6]:https://docs.perl6.org/syntax/with%20orwith%20without
|
634
translated/tech/20180726 The evolution of package managers.md
Normal file
634
translated/tech/20180726 The evolution of package managers.md
Normal file
@ -0,0 +1,634 @@
|
|||||||
|
包管理器的进化
|
||||||
|
======
|
||||||
|
|
||||||
|
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY)
|
||||||
|
|
||||||
|
今天,每个可计算设备都会使用某种软件来完成预定的任务。在软件开发的上古时期,为了找出软件中的“虫”和其他缺陷,软件会被严格的测试。在近十年间,人们希望通过持续不断的安装新版本的软件来解决软件的缺陷问题,软件被通过互联网来频繁分发,而不是在软件尚未发布前进行过多的测试。在很多情况下,每个独立的应用软件都有其自带的更新器。而其他一些软件则让用户自己去搞明白如何获取和升级软件。
|
||||||
|
|
||||||
|
Linux较早的采用了维护一个中心化的软件仓库来发布软件更新这种做法,用户可以在这个软件仓库里查找并安装软件。在这篇文章里, 笔者将回顾在Linux上的如何进行软件安装的崎岖历史,以及现代操作系统如何在软件安全漏洞不断被曝光中保持软件始终得到更新。
|
||||||
|
|
||||||
|
### 那么在包管理器出现之前在Linux上是如何安装软件的呢?
|
||||||
|
曾几何时,软件都是通过FTP下载到本地或邮件列表(译注:即通过邮件列表发布源代码的补丁包)来分发的(最终这些发布方式在互联网的迅猛发展下都演化成为一个个现今常见的软件发布网站)。通常几个小补丁文件会被压缩成一个Tar格式的包,你需要做的是先解压这个包,然后仔细阅读当中的README文件, 如果你的系统上恰好有GCC(译注:GNU C Compiler)或者其他厂商的C编译器的话,你得首先运行./configure脚本,并在脚本后添加相应的参数如库函数的路径,创建可执行文件的路径等等,除此之外,configure脚本也会检查你操作系统上的软件依赖是否满足安装要求,如果configure正常执行完毕,一个Makefile文件将会被创建。
|
||||||
|
|
||||||
|
如果一个Makefile文件被成功创建, 你就可以接下去执行‘make’ 命令(这由你的编译器提供)。make命令也有很多参数,被称为make标识,这些标识能帮助优化最终生成出来的二进制可执行文件。在计算机世界的早期,这些优化是非常重要的,因为彼时的计算机硬件正在为了跟上软件迅速的发展而疲于奔命。今日今时,编译标识变得更加通用而不是为了优化哪些具体的硬件型号,这得益于现代硬件和现代软件相比已经变得成本低廉,唾手可得。
|
||||||
|
|
||||||
|
|
||||||
|
最后,在make 完成之后, 你需要运行'make install'(或'sudo make install')(译注:依赖于你的用户权限) 来‘真正’将这个软件安装到你的系统上。可以想象,为你系统上的每一个软件都执行上述的流程将是多么无聊费时,更不用说如果更新一个已经安装的软件将会多复杂,多么需要精力投入。
|
||||||
|
(译注:上述流程也称CMMI安装, 即Configure, Make, Make Install)
|
||||||
|
### 那么软件包是什么?
|
||||||
|
|
||||||
|
’软件包‘(译注:下文简称包)这个概念是用来解决在软件安装升级过程中的复杂性的。包将软件安装升级中需要的多个数据文件合并成一个单独的文件,这将极大的提高可移植性和减小存储空间(译注:减少存储空间这一点在现在已经不再重要),包中的二进制可执行文件已经预先用安装开发者所选择的编译标识预编译。包本身包括了所有需要的元数据,如软件的名字,软件的说明,版本号,以及要运行这个软件所需要的依赖包等等。
|
||||||
|
|
||||||
|
各个不同的Linux发行版都创造了他们自己的包格式,其中做常用的包格式有:
|
||||||
|
|
||||||
|
* .deb: 这种包格式由Debian, Ubuntu, Linux Mint以及其他的变种使用。这是最早被发明的包类型。
|
||||||
|
* .rpm: 这种包格式源自红帽子包管理器(译注: 取自英文的首字母)。使用这种包的Linux发行版有Red Hat, Fedora, SUSE以及其他一些较小的发行版。
|
||||||
|
* .tar.xz: 这种包格式只是一个软件压缩包而已,Arch Linux使用这种发行版中立的格式来安装软件。
|
||||||
|
|
||||||
|
|
||||||
|
尽管上述的包格式自身并不能管理软件的依赖问题,但是他们的出现将Linux软件包管理向前推进了一大步。
|
||||||
|
|
||||||
|
### 软件仓库到底是什么?
|
||||||
|
|
||||||
|
多年以前(当智能电话还没有像现在这样流行时),非Linux世界的用户是很难理解软件仓库的概念的。甚至今时今日,大多数完全工作在Windows下的用户还是习惯于打开浏览器,搜索要安装的软件(或升级包),下载然后安装。但是,智能电话传播了软件’商店’(译注: 对应Linux里的软件仓库)这样一个概念。智能电话用户获取软件和包管理器的工作方式已经非常相近了。些许不同的是,尽管大多数软件商店还在费力美化它的图形界面来吸引用户,大多数Linux用户还是愿意使用命令行来安装软件。总而言之,软件仓库是一个中心化的可安装软件列表,上面列举了在当前系统中预先配置好的软件仓库里所有可以安装的软件。下面我们举一些例子来说在各个不同的Linux发行版下如何在对应的软件仓库里搜寻某个特定的软件。
|
||||||
|
|
||||||
|
在Arch Linux下使用aurman
|
||||||
|
|
||||||
|
```
|
||||||
|
user@arch ~ $ aurman -Ss kate
|
||||||
|
|
||||||
|
extra/kate 18.04.2-2 (kde-applications kdebase)
|
||||||
|
Advanced Text Editor
|
||||||
|
aur/kate-root 18.04.0-1 (11, 1.139399)
|
||||||
|
Advanced Text Editor, patched to be able to run as root
|
||||||
|
aur/kate-git r15288.15d26a7-1 (1, 1e-06)
|
||||||
|
An advanced editor component which is used in numerous KDE applications requiring a text editing component
|
||||||
|
```
|
||||||
|
|
||||||
|
在CentOS 7下使用 YUM
|
||||||
|
|
||||||
|
```
|
||||||
|
[user@centos ~]$ yum search kate
|
||||||
|
|
||||||
|
kate-devel.x86_64 : Development files for kate
|
||||||
|
kate-libs.x86_64 : Runtime files for kate
|
||||||
|
kate-part.x86_64 : Kate kpart plugin
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
在Ubuntu下使用APT
|
||||||
|
|
||||||
|
```
|
||||||
|
user@ubuntu ~ $ apt search kate
|
||||||
|
Sorting... Done
|
||||||
|
Full Text Search... Done
|
||||||
|
|
||||||
|
kate/xenial 4:15.12.3-0ubuntu2 amd64
|
||||||
|
powerful text editor
|
||||||
|
|
||||||
|
kate-data/xenial,xenial 4:4.14.3-0ubuntu4 all
|
||||||
|
shared data files for Kate text editor
|
||||||
|
|
||||||
|
kate-dbg/xenial 4:15.12.3-0ubuntu2 amd64
|
||||||
|
debugging symbols for Kate
|
||||||
|
|
||||||
|
kate5-data/xenial,xenial 4:15.12.3-0ubuntu2 all
|
||||||
|
shared data files for Kate text editor
|
||||||
|
```
|
||||||
|
|
||||||
|
### 最好用的包管理器有哪些?
|
||||||
|
|
||||||
|
如上示例的输出,包管理器用来和相应的软件仓库交互,获取软件的相应信息。下面对他们做一个简短介绍。
|
||||||
|
|
||||||
|
### 基于 RPM 包格式的包管理器
|
||||||
|
|
||||||
|
更新基于RPM的系统,特别是那些基于Red Hat技术的系统,有着非常有趣而又详细的历史。实际上,现在的[YUM][2]版本(企业级发布版)和[DNF][3](社区版)就融合了好几个开源项目来提供他们现在的功能。
|
||||||
|
|
||||||
|
Red Hat最初使用的包管理器,即[RPM][4](红帽包管理器),今时今日还在广泛使用着。不过,它的主要作用是安装本地的RPM包,而不是去在软件仓库搜索软件。一个叫'up2date'的包管理器被开发出来,它被用来通知用户包的最新更新,还能让用户在远程仓库里搜索软件并便捷的安装软件的依赖。尽管这个包管理器尽职尽责,一些社区成员还是感觉'up2date'有着明显的不足。
|
||||||
|
|
||||||
|
现在的YUM来自于好几个不同社区的努力。1999-2001年一群在Terra Soft Solution的伙计们开发了黄狗更新器(YUP),将其作为[Yellow Dog Linux][5]图形安装器的后端。杜克大学喜欢这个主意就决定去增强它的功能,他们开发了[黄狗更新器--修改版(YUM)][16],这最终被用来帮助管理杜克大学的Red Hat系统。Yum壮大的很快,到2005年,它已经被超过一半的Linux市场所采用。今日,几乎所有的使用RPM的的Linux都会使用YUM来进行包管理(当然也有一些例外)。
|
||||||
|
|
||||||
|
### 使用 YUM

为了能让 YUM 正常工作,比如从一个软件仓库里下载和安装包,仓库说明文件必须放在 /etc/yum.repos.d/ 目录下,且必须以 .repo 作为扩展名。如下是一个 repo 文件的内容:

```
[local_base]
name=Base CentOS (local)
baseurl=http://7-repo.apps.home.local/yum-repo/7/
enabled=1
gpgcheck=0
```

这是笔者的本地仓库之一,这也是为什么 gpgcheck 的值为 0 的原因。如果这个值为 1 的话,每个包都需要用密钥签名,相应的密钥也要导入到安装软件的系统上。因为这个软件仓库是笔者本人维护的,且笔者信任这个仓库里的包,所以就不去对它们一一签名了。

当一个仓库文件准备好后,你就可以从远程软件仓库安装软件包了。最基本的命令是 `yum update`,这将会更新所有已安装的包。你不需要用特殊的命令来更新仓库本身,所有这一切都已自动完成了。运行命令示例如下:

```
[user@centos ~]$ sudo yum update
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
local_base | 3.6 kB 00:00:00
local_epel | 2.9 kB 00:00:00
local_rpm_forge | 1.9 kB 00:00:00
local_updates | 3.4 kB 00:00:00
spideroak-one-stable | 2.9 kB 00:00:00
zfs | 2.9 kB 00:00:00
(1/6): local_base/group_gz | 166 kB 00:00:00
(2/6): local_updates/primary_db | 2.7 MB 00:00:00
(3/6): local_base/primary_db | 5.9 MB 00:00:00
(4/6): spideroak-one-stable/primary_db | 12 kB 00:00:00
(5/6): local_epel/primary_db | 6.3 MB 00:00:00
(6/6): zfs/x86_64/primary_db | 78 kB 00:00:00
local_rpm_forge/primary_db | 125 kB 00:00:00
Determining fastest mirrors
Resolving Dependencies
--> Running transaction check
```

如果你确定想让 YUM 在执行任何命令时都不要停下来等待用户输入,你可以在命令里加上 -y 标志,如 `yum update -y`。

安装一个新包很简单。首先,用 `yum search` 搜索包的名字:

```
[user@centos ~]$ yum search kate

artwiz-aleczapka-kates-fonts.noarch : Kates font in Artwiz family
ghc-highlighting-kate-devel.x86_64 : Haskell highlighting-kate library development files
kate-devel.i686 : Development files for kate
kate-devel.x86_64 : Development files for kate
kate-libs.i686 : Runtime files for kate
kate-libs.x86_64 : Runtime files for kate
kate-part.i686 : Kate kpart plugin
```

当你找到你要安装的包后,你可以用 `sudo yum install kate-devel -y` 来安装它。如果你安装了你不需要的软件,可以用 `sudo yum remove kate-devel -y` 来从系统上删除它。默认情况下,YUM 会删除这个软件包以及它的依赖。

有些时候你甚至不清楚要安装的包的名称,只知道某个实用程序的名字(译注:可以理解为实用程序是包的一部分)。例如,你想找实用程序 updatedb(它是用来创建/更新供 `locate` 命令使用的数据库的),直接试图安装 updatedb 会返回下面的结果:

```
[user@centos ~]$ sudo yum install updatedb
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
No package updatedb available.
Error: Nothing to do
```

这时你可以搜索这个实用程序来自哪个包:

```
[user@centos ~]$ yum whatprovides *updatedb
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

bacula-director-5.2.13-23.1.el7.x86_64 : Bacula Director files
Repo : local_base
Matched from:
Filename : /usr/share/doc/bacula-director-5.2.13/updatedb

mlocate-0.26-8.el7.x86_64 : An utility for finding files by name
Repo : local_base
Matched from:
Filename : /usr/bin/updatedb
```

笔者使用星号的原因是 `yum whatprovides` 使用路径去匹配文件。笔者不确定文件在哪里,所以使用星号去指代任意路径。

当然,YUM 还有很多其他的选项,这里笔者希望你能够自己查看 YUM 的手册来了解其他的选项。

[Dandified Yum(DNF)][7]是 YUM 的下一代接班人。从 Fedora 18 开始,它被作为包管理器引入系统。不过,它并没有被企业版所采用,所以它只在 Fedora(以及衍生版)上占据了主导地位。DNF 的用法和 YUM 几乎一模一样,它主要是用来解决 YUM 的性能问题、缺乏文档的 API、缓慢的依赖解析,以及偶尔的高内存占用。DNF 是作为 YUM 的直接替代品来开发的,因此这里笔者就不重复它的用法了,你只需简单地将 yum 替换为 dnf 就行了。

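正如上文所说,DNF 的用法和 YUM 基本一一对应,下面是几个与前面 YUM 示例相对应的 DNF 命令(适用于 Fedora 等使用 DNF 的系统):

```
sudo dnf update               # 更新所有已安装的包
dnf search kate               # 在仓库里搜索包
sudo dnf install kate-devel   # 安装包
sudo dnf remove kate-devel    # 删除包
```
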
### 使用 Zypper

[Zypper][8] 是用来管理 RPM 包的另外一个包管理器。这个包管理器主要用于 [SUSE][9](和 [openSUSE][10]),在 [MeeGo][11]、[Sailfish OS][12] 和 [Tizen][13] 上也有使用。它最初开发于 2006 年,至今已经经过了多次迭代。除了它是 [YaST][14] 系统管理工具所使用的后端,以及有些用户认为它比 YUM 更快之外,没有什么太多可说的。

Zypper 的使用与 YUM 非常相像。要搜索、更新、安装或删除包,使用如下命令即可:

```
zypper search kate
zypper update
zypper install kate
zypper remove kate
```

使用 Zypper 的系统在添加软件仓库的做法上有些许不同:与上面讨论的包管理器不同,Zypper 使用包管理器本身来添加软件仓库。最通用的方法是通过一个 URL,但是 Zypper 也支持从仓库文件里导入。

```
suse:~ # zypper addrepo http://download.videolan.org/pub/vlc/SuSE/15.0 vlc
Adding repository 'vlc' [done]
Repository 'vlc' successfully added

Enabled : Yes
Autorefresh : No
GPG Check : Yes
URI : http://download.videolan.org/pub/vlc/SuSE/15.0
Priority : 99
```

你也能用相似的手段来删除软件仓库:

```
suse:~ # zypper removerepo vlc
Removing repository 'vlc' ...................................[done]
Repository 'vlc' has been removed.
```

使用 `zypper repos` 命令来查看当前系统上软件仓库的状态:

```
suse:~ # zypper repos
Repository priorities are without effect. All enabled repositories share the same priority.

# | Alias | Name | Enabled | GPG Check | Refresh
---|---------------------------|-----------------------------------------|---------|-----------|--------
1 | repo-debug | openSUSE-Leap-15.0-Debug | No | ---- | ----
2 | repo-debug-non-oss | openSUSE-Leap-15.0-Debug-Non-Oss | No | ---- | ----
3 | repo-debug-update | openSUSE-Leap-15.0-Update-Debug | No | ---- | ----
4 | repo-debug-update-non-oss | openSUSE-Leap-15.0-Update-Debug-Non-Oss | No | ---- | ----
5 | repo-non-oss | openSUSE-Leap-15.0-Non-Oss | Yes | ( p) Yes | Yes
6 | repo-oss | openSUSE-Leap-15.0-Oss | Yes | ( p) Yes | Yes
```

Zypper 甚至还有和 YUM 类似的功能:确定某个文件或实用程序来自哪个包。和 YUM 有所不同的是,它在命令里使用的是连字符(不过这个搜索方法已经被废弃了……):

```
localhost:~ # zypper what-provides kate
Command 'what-provides' is replaced by 'search --provides --match-exact'.
See 'help search' for all available options.
Loading repository data...
Reading installed packages...

S | Name | Summary | Type
---|------|----------------------|------------
i+ | Kate | Advanced Text Editor | application
i | kate | Advanced Text Editor | package
```

YUM、DNF 和 Zypper 三者所拥有的功能比这篇小文里讨论的要多得多,请查看官方文档来获得更深入的信息。

### 基于 Debian 的包管理器

作为一个现今仍在被积极维护的、历史最悠久的 Linux 发行版之一,Debian 的包管理系统和基于 RPM 的系统的包管理系统非常类似。它使用扩展名为 .deb 的包,这种文件能被一个叫做 dpkg 的工具所管理。dpkg 同 RPM 非常相似,它被设计成用来管理存在于本地(硬盘)上的包。它不会去做包依赖关系解析(它会做依赖关系检查,不过仅此而已),而且在同远程软件仓库交互上也并无可靠的途径。为了改善用户体验并便于使用,Debian 项目开始了一个软件项目:Deity,最终这个代号被丢弃,改成了现在的 [Advanced Package Tool(APT)][15]。

1998 年,APT 的测试版本发布(甚至早于 1999 年的 Debian 2.1 发布),许多用户认为 APT 是基于 Debian 的系统的默认包管理器。APT 使用了和 RPM 系统一样的风格来管理仓库,不过和 YUM 使用单独的 .repo 文件不同,APT 曾经使用 /etc/apt/sources.list 文件来管理软件仓库,后来也可以使用放在 /etc/apt/sources.list.d 目录中的文件来管理。如同基于 RPM 的系统一样,你有很多种方式来完成同样的事情:你可以编辑/创建前述的文件,或者使用图形界面来完成上述工作(如 Ubuntu 的 “Software & Updates”)。为了对所有的 Linux 发行版一视同仁,笔者将只介绍命令行的选项。

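一个典型的 /etc/apt/sources.list 条目大致如下(这里以 Ubuntu 18.04(bionic)的官方仓库为例,仅作格式示意):

```
# 格式:deb <仓库 URL> <发行版代号> <组件...>
deb http://archive.ubuntu.com/ubuntu/ bionic main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
```
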
要想增加软件仓库而不直接编辑这些文件的话,可以用如下命令:

```
user@ubuntu:~$ sudo apt-add-repository "deb http://APT.spideroak.com/ubuntu-spideroak-hardy/ release restricted"
```

这个命令将会在 /etc/apt/sources.list.d 目录里创建一个 spideroakone.list 文件。显而易见,文件里的内容取决于所添加的软件仓库。如果你想添加的是一个个人软件包存档(PPA)的话,你可以用如下的办法:

```
user@ubuntu:~$ sudo apt-add-repository ppa:gnome-desktop
```

注意:Debian 并不原生支持 PPA。

在添加了一个软件仓库后,需要通知 Debian 有一个新的仓库可以用来搜索包,这可以通过运行 `apt-get update` 来完成:

```
user@ubuntu:~$ sudo apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Hit:2 http://APT.spideroak.com/ubuntu-spideroak-hardy release InRelease
Hit:3 http://ca.archive.ubuntu.com/ubuntu xenial InRelease
Get:4 http://ca.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Get:5 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [517 kB]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [455 kB]
Get:7 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [221 kB]
...

Fetched 6,399 kB in 3s (2,017 kB/s)
Reading package lists... Done
```

现在,新的软件仓库已经添加到你的系统里并更新好了,你可以用 `apt-cache` 来搜索你想要的包:

```
user@ubuntu:~$ apt-cache search kate
aterm-ml - Afterstep XVT - a VT102 emulator for the X window system
frescobaldi - Qt4 LilyPond sheet music editor
gitit - Wiki engine backed by a git or darcs filestore
jedit - Plugin-based editor for programmers
kate - powerful text editor
kate-data - shared data files for Kate text editor
kate-dbg - debugging symbols for Kate
katepart - embeddable text editor component
```

要安装 kate,只需运行下面的命令:

```
user@ubuntu:~$ sudo apt-get install kate
```

要删除一个包,使用 `apt-get remove`:

```
user@ubuntu:~$ sudo apt-get remove kate
```

APT 并没有提供一个类似于 `yum whatprovides` 的功能。如果你想确定某个特定的文件属于哪个包,也有一些别的方法能帮你完成这个目标。

例如,用 dpkg:

```
user@ubuntu:~$ dpkg -S /bin/ls
coreutils: /bin/ls
```

或者,用 apt-file:

```
user@ubuntu:~$ sudo apt-get install apt-file -y

user@ubuntu:~$ sudo apt-file update

user@ubuntu:~$ apt-file search kate
```

`apt-file search` 的问题是,它不像 `yum whatprovides` 那样精确:由于它会自动添加通配符进行搜索,输出会非常详细,结果里会包括所有含有 “kate” 字样的文件:

```
kate: /usr/bin/kate
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katebacktracebrowserplugin.so
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katebuildplugin.so
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katecloseexceptplugin.so
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katectagsplugin.so
```

上面这些例子大部分都使用了 `apt-get`。请注意,现今大多数的 Ubuntu 教程里都直接使用 `apt`。单独的 `apt` 命令是被设计用来实现那些最常用的 APT 操作的,它整合了原本分散在 `apt-get`、`apt-cache` 以及其他一些命令中的功能,还加上了一些额外的改进,如彩色输出、进度条以及其他一些小功能。上述的常用操作都可以用 `apt` 来完成,但是并不是所有基于 Debian 的系统都能通过 `apt` 接收安全补丁,你有可能需要安装额外的包才能实现上述功能。

### 基于 Arch 的包管理器

[Arch Linux][16] 使用称为 [pacman][17] 的包管理器。和 .deb 以及 .rpm 不同,它使用更为传统的压缩包形式 .tar.xz,这使得 Arch Linux 的包体积更小。自从 2002 年发布以来,pacman 一直在稳定发布和改善。使用它最大的好处之一是它支持 [Arch Build System][18],这是一个从源代码构建软件包的构建系统。该构建系统借助一个叫 PKGBUILD 的文件,这个文件包含了诸如版本号、发布号、依赖等等的元数据,以及一个带有必要编译选项的脚本,用来编译出符合 Arch Linux 需求的包。而编译的结果就是前文所提的被 pacman 所使用的 .tar.xz 文件。

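为了让你对 PKGBUILD 有个大致的印象,下面是一个极度简化的示意(包名 hello-demo 及其下载地址均为虚构,真实的 PKGBUILD 请以 Arch 官方文档为准):

```
# Maintainer: 示例维护者 <demo@example.com>
pkgname=hello-demo
pkgver=1.0
pkgrel=1
pkgdesc="一个用于演示 PKGBUILD 结构的虚构小程序"
arch=('x86_64')
url="https://example.com/hello-demo"
license=('MIT')
depends=('glibc')
source=("https://example.com/${pkgname}-${pkgver}.tar.gz")
sha256sums=('SKIP')

build() {
  cd "${pkgname}-${pkgver}"
  ./configure --prefix=/usr
  make
}

package() {
  cd "${pkgname}-${pkgver}"
  make DESTDIR="${pkgdir}" install
}
```
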
上述的这套系统从技术上促成了 [Arch 用户仓库(Arch User Repository,AUR)][19]的产生,这是一个社区驱动的软件仓库,仓库里包含的是 PKGBUILD 文件以及相关的补丁或脚本。这给 Arch Linux 带来了无穷无尽的软件资源。最明显的好处是,如果一个用户(或开发者)希望他开发的软件能被大众获取到,他不必通过官方渠道去获得进入主流软件仓库的许可;而不利之处则是,它必须依赖社区的管理流程,类似于 [Docker Hub][20]、Canonical 的 Snap 包(译注:Canonical 是 Ubuntu 的发行公司)或者其他相似的机制。有很多专用于 AUR 的包管理器,它们能被用来从 AUR 里的 PKGBUILD 文件下载、编译和安装软件,下面我们来仔细看看怎么做。

### 使用 pacman 和官方软件仓库

Arch 的主要包管理器 pacman 使用标志位,而不是像 yum 或 apt 一样使用命令词。例如,要搜索一个包,你要用 `pacman -Ss`。和 Linux 上别的命令一样,你可以查阅 pacman 的手册页和在线帮助。pacman 大多数的命令都用到了同步(-S)这个标志位。例如:

```
user@arch ~ $ pacman -Ss kate

extra/kate 18.04.2-2 (kde-applications kdebase)
Advanced Text Editor
extra/libkate 0.4.1-6 [installed]
A karaoke and text codec for embedding in ogg
extra/libtiger 0.3.4-5 [installed]
A rendering library for Kate streams using Pango and Cairo
extra/ttf-cheapskate 2.0-12
TTFonts collection from dustimo.com
community/haskell-cheapskate 0.1.1-100
Experimental markdown processor.
```

Arch 也使用和别的包管理器类似的软件仓库。在上面的输出中,搜索结果前面标明了它是从哪个仓库里搜索到的(这里是 extra/ 和 community/)。同 Red Hat 和 Debian 系统一样,Arch 依靠用户将软件仓库的信息加入到一个特定的文件里:/etc/pacman.conf。下面的例子非常接近系统的默认配置,笔者还启用了 [multilib] 仓库来支持 Steam:

```
[options]
Architecture = auto

Color
CheckSpace

SigLevel = Required DatabaseOptional
LocalFileSigLevel = Optional

[core]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist

[community]
Include = /etc/pacman.d/mirrorlist

[multilib]
Include = /etc/pacman.d/mirrorlist
```

你也可以在 pacman.conf 里指定具体的 URL。这个功能可以用来确保所有的包都来自某一个确定时间点的仓库。比如,如果最新的某个包存在严重的缺陷,而且很不幸它还有好几个依赖包,你就能通过在 pacman.conf 里加入一个具体(带日期)的 URL,及时回滚到一个安全的时间点,然后降级你的系统:

```
[core]
Server=https://archive.archlinux.org/repos/2017/12/22/$repo/os/$arch
```

和 Debian 系统一样,Arch 并不会自动更新它的本地仓库数据库。你可以用下面的命令来刷新包管理器的数据库:

```
user@arch ~ $ sudo pacman -Sy

:: Synchronizing package databases...
core 130.2 KiB 851K/s 00:00 [##########################################################] 100%
extra 1645.3 KiB 2.69M/s 00:01 [##########################################################] 100%
community 4.5 MiB 2.27M/s 00:02 [##########################################################] 100%
multilib is up to date
```

你可以看到,在上面的输出中,pacman 认为 multilib 包数据库已经是最新状态。如果你认为这个结果不正确,可以运行 `pacman -Syy` 来强制刷新。如果你想升级整个系统(不包括从 AUR 安装的包),可以运行 `pacman -Syu`:

```
user@arch ~ $ sudo pacman -Syu

:: Synchronizing package databases...
core is up to date
extra is up to date
community is up to date
multilib is up to date
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...

Packages (45) ceph-13.2.0-2 ceph-libs-13.2.0-2 debootstrap-1.0.105-1 guile-2.2.4-1 harfbuzz-1.8.2-1 harfbuzz-icu-1.8.2-1 haskell-aeson-1.3.1.1-20
haskell-attoparsec-0.13.2.2-24 haskell-tagged-0.8.6-1 imagemagick-7.0.8.4-1 lib32-harfbuzz-1.8.2-1 lib32-libgusb-0.3.0-1 lib32-systemd-239.0-1
libgit2-1:0.27.2-1 libinput-1.11.2-1 libmagick-7.0.8.4-1 libmagick6-6.9.10.4-1 libopenshot-0.2.0-1 libopenshot-audio-0.1.6-1 libosinfo-1.2.0-1
libxfce4util-4.13.2-1 minetest-0.4.17.1-1 minetest-common-0.4.17.1-1 mlt-6.10.0-1 mlt-python-bindings-6.10.0-1 ndctl-61.1-1 netctl-1.17-1
nodejs-10.6.0-1

Total Download Size: 2.66 MiB
Total Installed Size: 879.15 MiB
Net Upgrade Size: -365.27 MiB

:: Proceed with installation? [Y/n]
```

在前面提到的降级系统的情景中,你可以运行 `pacman -Syyuu` 来强制降级系统。你必须重视这一点:虽然在大多数情况下这不会引起问题,但降级一个或几个包仍有可能引起级联性的失败,并使你的系统处于不一致的状态(译注:即系统无法正常使用),请务必小心!

运行 `pacman -S kate` 来安装一个包:

```
user@arch ~ $ sudo pacman -S kate

resolving dependencies...
looking for conflicting packages...

Packages (7) editorconfig-core-c-0.12.2-1 kactivities-5.47.0-1 kparts-5.47.0-1 ktexteditor-5.47.0-2 syntax-highlighting-5.47.0-1 threadweaver-5.47.0-1
kate-18.04.2-2

Total Download Size: 10.94 MiB
Total Installed Size: 38.91 MiB

:: Proceed with installation? [Y/n]
```

你可以运行 `pacman -R kate` 来删除一个包。这只会删除这个包自身,而不会去删除它的依赖包:

```
user@arch ~ $ sudo pacman -R kate

checking dependencies...

Packages (1) kate-18.04.2-2

Total Removed Size: 20.30 MiB

:: Do you want to remove these packages? [Y/n]
```

如果你想把不再被其他包所需要的依赖包一并删除,可以运行 `pacman -Rs`:

```
user@arch ~ $ sudo pacman -Rs kate

checking dependencies...

Packages (7) editorconfig-core-c-0.12.2-1 kactivities-5.47.0-1 kparts-5.47.0-1 ktexteditor-5.47.0-2 syntax-highlighting-5.47.0-1 threadweaver-5.47.0-1
kate-18.04.2-2

Total Removed Size: 38.91 MiB

:: Do you want to remove these packages? [Y/n]
```

在笔者看来,pacman 是用来查找“某个实用程序由哪个包提供”的最完善的工具。如上面所示,YUM 和 APT 都要依赖“路径”才能搜索到有用的结果,而 pacman 会做一些智能的猜测,猜出你最有可能想查找的包:

```
user@arch ~ $ sudo pacman -Fs updatedb
core/mlocate 0.26.git.20170220-1
usr/bin/updatedb

user@arch ~ $ sudo pacman -Fs kate
extra/kate 18.04.2-2
usr/bin/kate
```

### 使用 AUR

有很多流行的 AUR 包管理器助手,其中 yaourt 和 pacaur 颇为流行。不过,这两个项目已经在 [Arch Wiki][21] 上被列为“已停止开发或存在问题”。因为这个原因,这里直接讨论 aurman。除了它还会搜索 AUR,以及多了几个有用(但也有潜在危险)的选项之外,它的工作机制和 pacman 极其类似。从 AUR 安装一个包将会调用包维护者提供的构建脚本,期间你会被要求数次输入确认,以便让程序继续进行下去(为了简短起见,笔者截断了输出):

```
aurman -S telegram-desktop-bin
~~ initializing aurman...
~~ the following packages are neither in known repos nor in the aur
...
~~ calculating solutions...

:: The following 1 package(s) are getting updated:
aur/telegram-desktop-bin 1.3.0-1 -> 1.3.9-1

?? Do you want to continue? Y/n: Y

~~ looking for new pkgbuilds and fetching them...
Cloning into 'telegram-desktop-bin'...

remote: Counting objects: 301, done.
remote: Compressing objects: 100% (152/152), done.
remote: Total 301 (delta 161), reused 286 (delta 147)
Receiving objects: 100% (301/301), 76.17 KiB | 639.00 KiB/s, done.
Resolving deltas: 100% (161/161), done.
?? Do you want to see the changes of telegram-desktop-bin? N/y: N

[sudo] password for user:

...
==> Leaving fakeroot environment.
==> Finished making: telegram-desktop-bin 1.3.9-1 (Thu 05 Jul 2018 11:22:02 AM EDT)
==> Cleaning up...
loading packages...
resolving dependencies...
looking for conflicting packages...

Packages (1) telegram-desktop-bin-1.3.9-1

Total Installed Size: 88.81 MiB
Net Upgrade Size: 5.33 MiB

:: Proceed with installation? [Y/n]
```

依照你所安装的包的复杂程度,有时你会被要求给出进一步的确认。为了避免这些反复的输入,aurman 允许你使用 --noconfirm 和 --noedit 选项。这相当于说“接受所有的默认设置,并相信包管理器不会干坏事”。**使用这两个选项时请务必小心!!** 虽然这些选项本身不太可能破坏你的系统,但你也不应该盲目地接受并运行他人的脚本。

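如果你不想依赖任何 AUR 助手,也可以手动克隆 AUR 上的构建脚本,审阅其中的 PKGBUILD 之后,再用 pacman 自带的 makepkg 来构建并安装(这里沿用上文的 telegram-desktop-bin 作为例子):

```
# 安装构建所需的基础工具链和 git
sudo pacman -S --needed base-devel git

# 克隆 AUR 上该包对应的 PKGBUILD 仓库
git clone https://aur.archlinux.org/telegram-desktop-bin.git
cd telegram-desktop-bin

# 先人工检查 PKGBUILD,再构建并安装(-s 自动解决仓库依赖,-i 构建完成后安装)
makepkg -si
```
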
### 总结

这篇文章当然只能触及包管理器的皮毛,还有很多别的包管理器笔者没有在这篇文章里谈及。有些 Linux 发行版,如 Ubuntu 或 Elementary OS,已经在图形化的包管理器的开发上有了长足的进展。

如果你对包管理器的更高级功能有进一步的兴趣,请在评论区留言,笔者很乐意进一步写一写相关的文章。

### 附录

```
# 搜索包
yum search <package>
dnf search <package>
zypper search <package>
apt-cache search <package>
apt search <package>
pacman -Ss <package>

# 安装包
yum install <package>
dnf install <package>
zypper install <package>
apt-get install <package>
apt install <package>
pacman -S <package>

# 更新包数据库(yum、dnf 和 zypper 不需要)
apt-get update
apt update
pacman -Sy

# 更新系统上所有的包
yum update
dnf update
zypper update
apt-get upgrade
apt upgrade
pacman -Su

# 删除一个已安装的包
yum remove <package>
dnf remove <package>
apt-get remove <package>
apt remove <package>
pacman -R <package>
pacman -Rs <package>

# 搜索包含特定文件或目录的包名
yum whatprovides *<binary>
dnf whatprovides *<binary>
zypper what-provides <binary>
zypper search --provides <binary>
apt-file search <binary>
pacman -Fs <binary>
```


--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/evolution-package-managers

作者:[Steve Ovens][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[DavidChenLiang](https://github.com/davidchenliang)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/stratusss
[1]:https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures
[2]:https://en.wikipedia.org/wiki/Yum_(software)
[3]:https://fedoraproject.org/wiki/DNF
[4]:https://en.wikipedia.org/wiki/Rpm_(software)
[5]:https://en.wikipedia.org/wiki/Yellow_Dog_Linux
[6]:https://searchdatacenter.techtarget.com/definition/Yellowdog-Updater-Modified-YUM
[7]:https://en.wikipedia.org/wiki/DNF_(software)
[8]:https://en.opensuse.org/Portal:Zypper
[9]:https://www.suse.com/
[10]:https://www.opensuse.org/
[11]:https://en.wikipedia.org/wiki/MeeGo
[12]:https://sailfishos.org/
[13]:https://www.tizen.org/
[14]:https://en.wikipedia.org/wiki/YaST
[15]:https://en.wikipedia.org/wiki/APT_(Debian)
[16]:https://www.archlinux.org/
[17]:https://wiki.archlinux.org/index.php/pacman
[18]:https://wiki.archlinux.org/index.php/Arch_Build_System
[19]:https://aur.archlinux.org/
[20]:https://hub.docker.com/
[21]:https://wiki.archlinux.org/index.php/AUR_helpers#Discontinued_or_problematic

如何在 Ubuntu 中切换多个 PHP 版本
======

![](https://www.ostechnix.com/wp-content/uploads/2018/08/php-720x340.png)

有时,最新版本的安装包可能无法按预期工作。你的程序可能与更新后的软件包不兼容,而只支持特定的旧版本软件包。在这种情况下,你可以立即将有问题的软件包降级到其早期的可正常工作的版本。请参阅我们的旧指南:[**这篇**][1]介绍了如何降级 Ubuntu 及其衍生版中的软件包,[**这篇**][2]介绍了如何降级 Arch Linux 及其衍生版中的软件包。但是,有时你并不需要降级软件包,我们可以同时使用多个版本。例如,假设你在测试部署在 Ubuntu 18.04 LTS 的 [**LAMP 栈**][3]上的 PHP 程序。过了一段时间,你发现这个程序在 PHP 5.6 中工作正常,但在 PHP 7.2 中不正常(Ubuntu 18.04 LTS 默认安装 PHP 7.x)。你打算重新安装 PHP 或整个 LAMP 栈吗?没有必要,你甚至不必将 PHP 降级到其早期版本。在这个简短的教程中,我将向你展示如何在 Ubuntu 18.04 LTS 中切换多个 PHP 版本。它没你想的那么难,请继续阅读。

### 在多个 PHP 版本之间切换

要查看默认安装的 PHP 版本,请运行:

```
$ php -v
PHP 7.2.7-0ubuntu0.18.04.2 (cli) (built: Jul 4 2018 16:55:24) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.2.7-0ubuntu0.18.04.2, Copyright (c) 1999-2018, by Zend Technologies
```

如你所见,已安装的 PHP 版本为 7.2.7。在测试你的程序几天后,你发现它不支持 PHP 7.2。在这种情况下,同时安装 PHP 5.x 和 PHP 7.x 是个不错的主意,这样你就可以随时轻松地在这两个版本之间切换。

你不必删除 PHP 7.x 或重新安装 LAMP 栈,你可以同时使用 PHP 5.x 和 7.x 版本。

我假设你还没有从系统中卸载 PHP 5.6。万一你已将其删除,你可以使用下面的 PPA 再次安装它。

你可以从 PPA 中安装 PHP 5.6:

```
$ sudo add-apt-repository -y ppa:ondrej/php
$ sudo apt update
$ sudo apt install php5.6
```

#### 从 PHP 7.x 切换到 PHP 5.x

首先,使用如下命令禁用 PHP 7.2 模块:

```
$ sudo a2dismod php7.2
Module php7.2 disabled.
To activate the new configuration, you need to run:
systemctl restart apache2
```

接下来,启用 PHP 5.6 模块:

```
$ sudo a2enmod php5.6
```

将 PHP 5.6 设置为默认版本:

```
$ sudo update-alternatives --set php /usr/bin/php5.6
```

或者,你也可以运行以下命令来交互式地选择全局默认的 PHP 版本:

```
$ sudo update-alternatives --config php
```

输入对应的编号将其设置为默认版本,或者直接按回车键保持当前选择。

如果你已安装其他 PHP 相关工具(如 phar),也请将它们设置为对应的默认版本:

```
$ sudo update-alternatives --set phar /usr/bin/phar5.6
```

最后,重启 Apache Web 服务器:

```
$ sudo systemctl restart apache2
```

现在,检查 PHP 5.6 是否已是默认版本:

```
$ php -v
PHP 5.6.37-1+ubuntu18.04.1+deb.sury.org+1 (cli)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies
```

#### 从 PHP 5.x 切换到 PHP 7.x

同样地,你可以像下面这样从 PHP 5.x 切换回 PHP 7.x 版本:

```
$ sudo a2enmod php7.2

$ sudo a2dismod php5.6

$ sudo update-alternatives --set php /usr/bin/php7.2

$ sudo systemctl restart apache2
```

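如果你需要经常来回切换,也可以把上面的几步写成一个小脚本。下面是一个基于本文命令的简单示意脚本(假设 5.6 和 7.2 两个版本都已安装,脚本名 switch-php.sh 为虚构):

```
#!/bin/bash
# 用法:sudo ./switch-php.sh 5.6  或  sudo ./switch-php.sh 7.2
set -e

TARGET="$1"
case "$TARGET" in
  5.6) OTHER="7.2" ;;
  7.2) OTHER="5.6" ;;
  *) echo "用法: $0 5.6|7.2"; exit 1 ;;
esac

a2dismod "php${OTHER}"                                  # 禁用另一个版本的 Apache 模块
a2enmod "php${TARGET}"                                  # 启用目标版本的模块
update-alternatives --set php "/usr/bin/php${TARGET}"   # 切换命令行默认的 php
systemctl restart apache2                               # 重启 Apache 使改动生效
echo "已切换到 PHP ${TARGET}"
```
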
**提醒一句:**

PHP 5.6 的最终稳定版已于 2017 年 1 月 19 日[**结束了积极支持**][4]。不过,在 2018 年 12 月 31 日之前,PHP 5.6 还会继续获得针对关键安全问题的支持。所以,建议尽快升级你所有的 PHP 程序,使其与 PHP 7.x 兼容。

如果你希望防止 PHP 将来被自动升级,请参阅相关指南。

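原文在这里引用的是另一篇指南;一个常见的做法是用 apt-mark 把相关的包“锁定”在当前版本,防止它们被自动升级(具体包名请按你系统上的实际情况调整):

```
# 锁定 php7.2 相关的包,阻止它们被自动升级
sudo apt-mark hold php7.2 php7.2-cli php7.2-common

# 以后需要恢复升级时再解除锁定
sudo apt-mark unhold php7.2 php7.2-cli php7.2-common
```
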
就是这些了。希望这篇指南对你有帮助。还有更多的好东西,敬请关注!

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-switch-between-multiple-php-versions-in-ubuntu/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/how-to-downgrade-a-package-in-ubuntu/
[2]:https://www.ostechnix.com/downgrade-package-arch-linux/
[3]:https://www.ostechnix.com/install-apache-mariadb-php-lamp-stack-ubuntu-16-04/
[4]:http://php.net/supported-versions.php