published/20160512 Rapid prototyping with docker-compose.md
通过 docker-compose 进行快速原型设计
|
||||
========================================
|
||||
|
||||
在这篇文章中,我们将考察一个 Node.js 开发原型,该原型用于从英国三个主要折扣网店查找“Raspberry PI Zero”的库存。
|
||||
|
||||
我写好了代码,然后经过一晚的鼓捣把它部署在 Azure 上的 Ubuntu 虚拟机上。Docker 和 docker-compose 工具使得部署和更新过程非常快。
|
||||
|
||||
### 还记得链接指令(link)吗?
|
||||
|
||||
如果你已经阅读过 [Hands-on Docker tutorial][1],那么你应该已经可以使用命令行链接 Docker 容器。通过命令行将 Node.js 的计数器链接到 Redis 服务器,其命令可能如下所示:
|
||||
|
||||
```
|
||||
$ docker run -d -P --name redis1 redis
$ docker run -d -p 3000:3000 --link redis1:redis hit_counter
|
||||
```
|
||||
|
||||
现在假设你的应用程序分为三层:
|
||||
|
||||
- Web 前端
|
||||
- 处理长时间运行任务的批处理层
|
||||
- Redis 或者 mongo 数据库
|
||||
|
||||
在只需管理几个容器时,通过 `--link` 做显式链接还行得通;但随着我们向应用程序添加更多的层或容器,这种方式就可能失控。
|
||||
|
||||
### 加入 docker-compose
|
||||
|
||||

|
||||
|
||||
*Docker Compose logo*
|
||||
|
||||
docker-compose 工具是标准 Docker 工具箱的一部分,也可以单独下载。 它提供了一组丰富的功能,通过纯文本 YAML 文件配置所有应用程序的部件。
|
||||
|
||||
上面的例子看起来像这样:
|
||||
|
||||
```
version: "2.0"
services:
  redis1:
    image: redis
  hit_counter:
    build: ./hit_counter
    ports:
      - 3000:3000
```
|
||||
|
||||
从 Docker 1.10 开始,我们可以利用覆盖网络(network overlay)来帮助我们在多个主机上进行扩展;在此之前,链接仅能工作在单个主机上。需要更多计算能力时,可以用 `docker-compose scale` 命令进行扩容。
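举个例子(服务名沿用下文库存警示示例中的 `stock_fetch`,仅作示意),需要更多处理能力时可以这样扩容:

```
$ docker-compose scale stock_fetch=3
```

Compose 会启动或停止容器,把该服务的副本数调整到指定值。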
|
||||
|
||||
> 查看 docker.com 上的 [docker-compose][2] 参考
|
||||
|
||||
### 真实工作示例:Raspberry PI 库存警示
|
||||
|
||||

|
||||
|
||||
*新的 Raspberry PI Zero v1.3 图片,由 Pimoroni 提供*
|
||||
|
||||
Raspberry PI Zero 正当红:它是一个极小的微型计算机,具有 1GHz CPU 和 512MB RAM,可以运行完整的 Linux、Docker、Node.js、Ruby 和其他许多流行的开源工具。PI Zero 最大的优点之一就是它只卖 5 美元,这也意味着它销售的速度非常之快。
|
||||
|
||||
*如果你想在 PI 上尝试 Docker 和 Swarm,请查看下面的教程:[Docker Swarm on the PI Zero][3]*
|
||||
|
||||
### 原始网站:whereismypizero.com
|
||||
|
||||
我发现一个网页,它使用屏幕抓取以找出 4-5 个最受欢迎的折扣网店是否有库存。
|
||||
|
||||
- 网站包含静态 HTML 网页
|
||||
- 向每个折扣网店发出一个 XMLHttpRequest 访问 /public/api/
|
||||
- 服务器向每个网店发出 HTTP 请求并执行抓屏
|
||||
|
||||
每一次对 /public/api/ 的调用,其执行花 3 秒钟,而使用 Apache Bench(ab),我每秒只能完成 0.25 个请求。
|
||||
|
||||
### 重新发明轮子
|
||||
|
||||
零售商似乎并不介意 whereismypizero.com 抓取他们的网站的商品库存信息,所以我开始从头写一个类似的工具。 我尝试通过缓存和解耦 web 层来处理更多的抓取请求。 Redis 是执行这项工作的完美工具。 它允许我设置一个自动过期的键/值对(即一个简单的缓存),还可以通过 pub/sub 在 Node.js 进程之间传输消息。
|
||||
|
||||
> 复刻或者追踪放在 github 上的代码: [alexellis/pi_zero_stock][4]
|
||||
|
||||
如果之前使用过 Node.js,你肯定知道它是单线程的,并且任何 CPU 密集型任务,如解析 HTML 或 JSON 都可能导致速度放缓。一种缓解这种情况的方法是使用一个工作进程和 Redis 消息通道作为它和 web 层之间的连接组织。
|
||||
|
||||
- Web 层
|
||||
- 使用 200 代表缓存命中(该商店的 Redis 键存在)
|
||||
- 使用 202 代表缓存未命中(该商店的 Redis 键不存在,因此发出消息)
|
||||
- 因为我们只是读一个 Redis 键,响应时间非常快。
|
||||
- 库存抓取器
|
||||
- 执行 HTTP 请求
|
||||
- 用于在不同类型的网店上抓屏
|
||||
- 更新 Redis 键的缓存失效时间为 60 秒
|
||||
- 另外,锁定一个 Redis 键,以防止对网店过多的 HTTP 请求。
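Web 层与库存抓取器之间的这种配合,可以用 `redis-cli` 大致演示一下(下面的键名和频道名只是示意,并非项目中的实际名称):

```
$ redis-cli SET stock:pimoroni "in stock" EX 60   # 抓取器写入 60 秒后自动过期的缓存键
$ redis-cli GET stock:pimoroni                    # 键存在:缓存命中,Web 层返回 200
$ redis-cli PUBLISH stock_check pimoroni          # 键不存在:发布消息,让抓取器去工作
```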
|
||||
|
||||
```
version: "2.0"
services:
  web:
    build: ./web/
    ports:
      - "3000:3000"
  stock_fetch:
    build: ./stock_fetch/
  redis:
    image: redis
```
|
||||
|
||||
*来自示例的 docker-compose.yml 文件*
|
||||
|
||||
一旦本地正常工作,再部署到 Azure 的 Ubuntu 16.04 云主机上就轻车熟路,只花了不到 5 分钟。我登录、克隆仓库并键入 `docker-compose up -d`,这就是全部的工作:快速实现整个系统的原型不会比这几个步骤更多。任何人(包括 whereismypizero.com 的所有者)只需两行命令就可以部署新解决方案:
|
||||
|
||||
```
|
||||
$ git clone https://github.com/alexellis/pi_zero_stock
|
||||
$ docker-compose up -d
|
||||
```
|
||||
|
||||
更新网站很容易:只需执行一次 `git pull`,然后运行带 `--build` 参数的 `docker-compose up -d` 命令。
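也就是说,整个更新流程只有两条命令:

```
$ git pull
$ docker-compose up -d --build
```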
|
||||
|
||||
如果你仍然在手动链接你的 Docker 容器,请尝试一下 Docker Compose,可以用你自己的代码,也可以用下面我的代码:
|
||||
|
||||
> 复刻或者追踪在 github 上的代码: [alexellis/pi_zero_stock][5]
|
||||
|
||||
### 一睹测试网站芳容
|
||||
|
||||
目前测试网站使用 docker-compose 部署:[stockalert.alexellis.io][6]
|
||||
|
||||

|
||||
|
||||
*预览于 2016 年 5 月 16 日*
|
||||
|
||||
----------
|
||||
via: http://blog.alexellis.io/rapid-prototype-docker-compose/
|
||||
|
||||
作者:[Alex Ellis][a]
|
||||
译者:[firstadream](https://github.com/firstadream)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://blog.alexellis.io/author/alex/
|
||||
[1]: http://blog.alexellis.io/handsondocker
|
||||
[2]: https://docs.docker.com/compose/compose-file/
|
||||
[3]: http://blog.alexellis.io/dockerswarm-pizero/
|
||||
[4]: https://github.com/alexellis/pi_zero_stock
|
||||
[5]: https://github.com/alexellis/pi_zero_stock
|
||||
[6]: http://stockalert.alexellis.io/
|
||||
|
published/20160531 Linux vs. Windows device driver model.md
Linux 与 Windows 的设备驱动模型比对:架构、API 和开发环境比较
|
||||
============================================================================================
|
||||
|
||||
> 名词缩写:
|
||||
> - API 应用程序接口(Application Program Interface )
|
||||
> - ABI 应用二进制接口(Application Binary Interface)
|
||||
|
||||
设备驱动是操作系统的一部分,它能够通过一些特定的编程接口便于硬件设备的使用,这样软件就可以控制并且运行那些设备了。因为每个驱动都对应不同的操作系统,所以你就需要不同的 Linux、Windows 或 Unix 设备驱动,以便能够在不同的计算机上使用你的设备。这就是为什么当你雇佣一个驱动开发者或者选择一个研发服务商提供者的时候,查看他们为各种操作系统平台开发驱动的经验是非常重要的。
|
||||
|
||||

|
||||
|
||||
驱动开发的第一步是理解每个操作系统处理它的驱动的不同方式、底层驱动模型、它使用的架构、以及可用的开发工具。例如,Linux 驱动程序模型就与 Windows 非常不同。虽然 Windows 提倡驱动程序开发和操作系统开发分别进行,并通过一组 ABI 调用来结合驱动程序和操作系统,但是 Linux 设备驱动程序开发不依赖任何稳定的 ABI 或 API,所以它的驱动代码并没有被纳入内核中。每一种模型都有自己的优点和缺点,但是如果你想为你的设备提供全面支持,那么重要的是要全面的了解它们。
|
||||
|
||||
在本文中,我们将比较 Windows 和 Linux 设备驱动程序,探索不同的架构,API,构建开发和分发,希望让您比较深入的理解如何开始为每一个操作系统编写设备驱动程序。
|
||||
|
||||
### 1. 设备驱动架构
|
||||
|
||||
Windows 设备驱动程序的体系结构和 Linux 中使用的不同,它们各有优缺点。差异主要受以下原因的影响:Windows 是闭源操作系统,而 Linux 是开源操作系统。比较 Linux 和 Windows 设备驱动程序架构将帮助我们理解 Windows 和 Linux 驱动程序背后的核心差异。
|
||||
|
||||
#### 1.1. Windows 驱动架构
|
||||
|
||||
与 Linux 内核分发时自带 Linux 驱动不同,Windows 内核并不包括设备驱动程序。现代 Windows 设备驱动程序使用 Windows 驱动模型(WDM)编写,这是一种完全支持即插即用和电源管理的模型,可以根据需要加载和卸载驱动程序。
|
||||
|
||||
处理来自应用的请求,是由 Windows 内核中被称为 I/O 管理器的部分来完成的。I/O 管理器的作用是把这些请求转换为 I/O 请求数据包(IO Request Packets)(IRP),IRP 可以被用来在驱动层识别请求并且传输数据。
|
||||
|
||||
Windows 驱动模型 WDM 提供三种驱动, 它们形成了三个层:
|
||||
|
||||
- 过滤(Filter)驱动提供关于 IRP 的可选附加处理。
|
||||
- 功能(Function)驱动是实现接口和每个设备通信的主要驱动。
|
||||
- 总线(Bus)驱动服务于不同的适配器和总线控制器,以实现主机对设备的控制。
|
||||
|
||||
一个 IRP 依次经过这些层,就像它从 I/O 管理器一路传递到底层硬件那样。每个层都能独立地处理一个 IRP,并把它送回 I/O 管理器。在硬件底层有硬件抽象层(HAL),它提供访问物理设备的通用接口。
|
||||
|
||||
#### 1.2. Linux 驱动架构
|
||||
|
||||
相比于 Windows 设备驱动,Linux 设备驱动架构根本性的不同就是 Linux 没有一个标准的驱动模型也没有一个干净分隔的层。每一个设备驱动都被当做一个能够自动的从内核中加载和卸载的模块来实现。Linux 为即插即用设备和电源管理设备提供一些方式,以便那些驱动可以使用它们来正确地管理这些设备,但这并不是必须的。
|
||||
|
||||
模块导出它们提供的函数,并通过调用这些函数、传入任意定义的数据结构来相互沟通。来自用户应用的请求经由文件系统或网络层到达,并被转化为所需的数据结构。模块能够按层堆叠,一个模块处理之后交给另外一个处理,有些模块还提供了对一类设备的公共调用接口,例如 USB 设备。
|
||||
|
||||
Linux 设备驱动程序支持三种设备:
|
||||
|
||||
- 实现一个字节流接口的字符(Character)设备。
|
||||
- 用于存放文件系统和处理多字节数据块 IO 的块(Block)设备。
|
||||
- 用于通过网络传输数据包的网络(Network)接口。
|
||||
|
||||
Linux 也有一个硬件抽象层(HAL),它实际扮演了物理硬件的设备驱动接口。
|
||||
|
||||
### 2. 设备驱动 API
|
||||
|
||||
Linux 和 Windows 驱动 API 都属于事件驱动类型:只有当某些事件发生的时候,驱动代码才执行——当用户的应用程序希望从设备获取一些东西,或者当设备有某些请求需要告知操作系统。
|
||||
|
||||
#### 2.1. 初始化
|
||||
|
||||
在 Windows 上,驱动被表示为 `DriverObject` 结构,它在 `DriverEntry` 函数的执行过程中被初始化。这个入口函数还会注册一些回调函数,用来响应设备的添加和移除、驱动卸载以及新传入的 IRP。当一个设备连接的时候,Windows 会创建一个设备对象,设备驱动通过这个设备对象来处理所有应用请求。
|
||||
|
||||
相比于 Windows,Linux 设备驱动的生命周期由内核模块的 `module_init` 和 `module_exit` 函数负责管理,它们分别用于模块的加载和卸载。模块通过内核接口注册自己来处理设备请求:它需要创建一个设备文件(或者一个网络接口),为其所希望管理的设备指定一个数字识别号,并注册一些当用户与设备文件交互时使用的回调函数。
|
||||
|
||||
#### 2.2. 命名和声明设备
|
||||
|
||||
##### 在 Windows 上注册设备
|
||||
|
||||
Windows 设备驱动在新连接设备时是由回调函数 `AddDevice` 通知的。它接下来就去创建一个设备对象(device object),用于识别该设备的特定的驱动实例。取决于驱动的类型,设备对象可以是物理设备对象(Physical Device Object)(PDO),功能设备对象(Function Device Object)(FDO),或者过滤设备对象(Filter Device Object )(FIDO)。设备对象能够堆叠,PDO 在底层。
|
||||
|
||||
设备对象在这个设备连接在计算机期间一直存在。`DeviceExtension` 结构能够被用于关联到一个设备对象的全局数据。
|
||||
|
||||
设备对象可以有如下形式的名字 `\Device\DeviceName`,这被系统用来识别和定位它们。应用可以使用 `CreateFile` API 函数来打开一个有上述名字的文件,获得一个可以用于和设备交互的句柄。
|
||||
|
||||
然而,通常只有 PDO 有自己的名字。未命名的设备能够通过设备级接口来访问。设备驱动注册一个或多个接口,以 128 位全局唯一标识符(GUID)来标示它们。用户应用能够使用已知的 GUID 来获取一个设备的句柄。
|
||||
|
||||
##### 在 Linux 上注册设备
|
||||
|
||||
在 Linux 平台上,用户应用通过文件系统入口访问设备,它们通常位于 `/dev` 目录。模块在初始化的时候,通过调用内核函数 `register_chrdev` 创建所有需要的入口。应用可以发起 `open` 系统调用来获取一个文件描述符,用于与设备进行交互;这个调用(以及后续对所返回的文件描述符的 `read`、`write` 或 `close` 等调用)会被分派到该模块安装在 `file_operations` 或者 `block_device_operations` 这样的数据结构中的回调函数。
|
||||
|
||||
设备驱动模块负责分配和保持任何需要用于操作的数据结构。传送进文件系统回调函数的 `file` 结构有一个 `private_data` 字段,它可以被用来存放指向具体驱动数据的指针。块设备和网络接口 API 也提供类似的字段。
|
||||
|
||||
虽然应用使用文件系统的节点来定位设备,但是 Linux 在内部使用一个主设备号(major numbers)和次设备号(minor numbers)的概念来识别设备及其驱动。主设备号被用来识别设备驱动,而次设备号由驱动使用来识别它所管理的设备。驱动为了去管理一个或多个固定的主设备号,必须首先注册自己或者让系统来分配未使用的设备号给它。
|
||||
|
||||
目前,Linux 为主次设备对(major-minor pair)使用一个 32 位的值,其中 12 位分配给主设备号,可以表示多达 4096 种不同的设备驱动。字符设备和块设备的主次设备对是相互独立的,所以一个字符设备和一个块设备可以使用相同的设备对而不导致冲突。网络接口则通过像 `eth0` 这样的符号名来识别,这与字符设备和块设备的主次设备号机制不同。
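文中所说的 12/20 位划分可以用一个小脚本来演示(注意:这只是按照本文描述做的位运算示意,真实内核中设备号的具体编码布局更为复杂):

```shell
#!/bin/sh
# 假设高 12 位是主设备号、低 20 位是次设备号(仅为示意)
dev=$(( (8 << 20) | 17 ))         # 构造一个 major=8、minor=17 的 32 位设备号
major=$(( (dev >> 20) & 0xFFF ))  # 取高 12 位
minor=$(( dev & 0xFFFFF ))        # 取低 20 位
echo "major=$major minor=$minor"
```

运行后会输出 `major=8 minor=17`。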
|
||||
|
||||
#### 2.3. 交换数据
|
||||
|
||||
Linux 和 Windows 都支持在用户级应用程序和内核级驱动程序之间传输数据的三种方式:
|
||||
|
||||
- **缓冲型输入输出(Buffered Input-Output)** 它使用由内核管理的缓冲区。对于写操作,内核从用户空间缓冲区中拷贝数据到内核分配的缓冲区,并且把它传送到设备驱动中。读操作也一样,由内核将数据从内核缓冲区中拷贝到应用提供的缓冲区中。
|
||||
- **直接型输入输出(Direct Input-Output)** 它不使用拷贝功能,而是由内核把用户分配的缓冲区固定(pin)在物理内存中,使其在数据传输过程中一直留在那里,不会被交换出去。
|
||||
- **内存映射(Memory mapping)** 它也能够由内核管理,这样内核和用户空间应用就能够通过不同的地址访问同样的内存页。
|
||||
|
||||
##### **Windows 上的驱动程序 I/O 模式**
|
||||
|
||||
支持缓冲型 I/O 是 WDM 的内置功能。缓冲区能够被设备驱动通过在 IRP 结构中的 `AssociatedIrp.SystemBuffer` 字段访问。当需要和用户空间通讯的时候,驱动只需从这个缓冲区中进行读写操作。
|
||||
|
||||
Windows 上的直接 I/O 由内存描述符列表(memory descriptor lists)(MDL)介导。这种半透明的结构是通过在 IRP 中的 `MdlAddress` 字段来访问的。它们被用来定位由用户应用程序分配的缓冲区的物理地址,并在 I/O 请求期间钉死不动。
|
||||
|
||||
在 Windows 上进行数据传输的第三个选项称为 `METHOD_NEITHER`。 在这种情况下,内核需要传送用户空间的输入输出缓冲区的虚拟地址给驱动,而不需要确定它们有效或者保证它们映射到一个可以由设备驱动访问的物理内存地址。设备驱动负责处理这些数据传输的细节。
|
||||
|
||||
##### **Linux 上的驱动程序 I/O 模式**
|
||||
|
||||
Linux 提供了许多函数,例如 `clear_user`、`copy_to_user`、`strncpy_from_user` 等,用来在内核和用户内存之间进行缓冲区数据传输。这些函数保证了指向数据缓冲区的指针的有效性,并通过在内存区域之间安全地拷贝数据来处理数据传输的所有细节。
|
||||
|
||||
然而,块设备的驱动对已知大小的整个数据块进行操作,数据块可以在内核和用户地址空间之间快速移动而无需拷贝。Linux 内核会为所有块设备驱动自动处理这种情况:块请求队列不经多余拷贝地传送数据块,而 Linux 系统调用接口负责把文件系统请求转换为块请求。
|
||||
|
||||
最终,设备驱动能够从内核地址区域分配一些存储页面(不可交换的)并且使用 `remap_pfn_range` 函数来直接映射这些页面到用户进程的地址空间。然后应用能获取这些缓冲区的虚拟地址并且使用它来和设备驱动交流。
|
||||
|
||||
### 3. 设备驱动开发环境
|
||||
|
||||
#### 3.1. 设备驱动框架
|
||||
|
||||
##### Windows 驱动程序工具包
|
||||
|
||||
Windows 是一个闭源操作系统。Microsoft 提供 Windows 驱动程序工具包以方便非 Microsoft 供应商开发 Windows 设备驱动。工具包中包含开发、调试、检验和打包 Windows 设备驱动等所需的所有内容。
|
||||
|
||||
Windows 驱动模型(Windows Driver Model)(WDM)为设备驱动定义了一个干净的接口框架。Windows 保持这些接口的源代码和二进制的兼容性。编译好的 WDM 驱动通常是前向兼容的:也就是说,一个较旧的驱动能够在没有重新编译的情况下在较新的系统上运行,但是它当然不能够访问系统提供的新功能。但是,驱动不保证后向兼容性。
|
||||
|
||||
##### **Linux 源代码**
|
||||
|
||||
和 Windows 相对比,Linux 是一个开源操作系统,因此 Linux 的整个源代码是用于驱动开发的 SDK。没有驱动设备的正式框架,但是 Linux 内核包含许多提供了如驱动注册这样的通用服务的子系统。这些子系统的接口在内核头文件中描述。
|
||||
|
||||
尽管 Linux 有定义接口,但这些接口在设计上并不稳定。Linux 不提供有关前向和后向兼容的任何保证。设备驱动对于不同的内核版本需要重新编译。没有稳定性的保证可以让 Linux 内核进行快速开发,因为开发人员不必去支持旧的接口,并且能够使用最好的方法解决手头的这些问题。
|
||||
|
||||
当为 Linux 写树内(in-tree)(指当前 Linux 内核开发主干)驱动程序时,这种不断变化的环境不会造成任何问题,因为它们作为内核源代码的一部分,与内核本身同步更新。然而,闭源驱动必须单独开发,并且在树外(out-of-tree),必须维护它们以支持不同的内核版本。因此,Linux 鼓励设备驱动程序开发人员在树内维护他们的驱动。
|
||||
|
||||
#### 3.2. 为设备驱动构建系统
|
||||
|
||||
Windows 驱动程序工具包为 Microsoft Visual Studio 添加了驱动开发支持,并包括用来构建驱动程序代码的编译器。开发 Windows 设备驱动程序与在 IDE 中开发用户空间应用程序没有太大的区别。Microsoft 还提供了企业版 Windows 驱动程序工具包,它提供了类似于 Linux 的命令行构建环境。
|
||||
|
||||
Linux 使用 Makefile 作为树内和树外设备驱动程序的构建系统。Linux 的构建系统非常发达,通常一个设备驱动程序只需要少数几行就能产生一个可工作的二进制文件。开发人员可以使用任何 [IDE][5],只要它可以处理 Linux 源代码库和运行 `make`;他们也可以很容易地从终端手动编译驱动程序。
|
||||
|
||||
#### 3.3. 文档支持
|
||||
|
||||
Windows 对于驱动程序的开发有良好的文档支持。Windows 驱动程序工具包包括文档和示例驱动程序代码,通过 MSDN 可获得关于内核接口的大量信息,并存在大量的有关驱动程序开发和 Windows 底层的参考和指南。
|
||||
|
||||
Linux 的文档则没有那么详尽,但整个 Linux 源代码可供驱动开发人员参考,这缓解了这一问题。源代码树中的 Documentation 目录描述了一些 Linux 的子系统,此外还有[几本书][4]更详细地介绍了 Linux 设备驱动程序开发和 Linux 内核概览。
|
||||
|
||||
Linux 没有提供设备驱动程序的指定样本,但现有生产级驱动程序的源代码可用,可以用作开发新设备驱动程序的参考。
|
||||
|
||||
#### 3.4. 调试支持
|
||||
|
||||
Linux 和 Windows 都有可用于追踪调试驱动程序代码的日志机制。在 Windows 上将使用 `DbgPrint` 函数,而在 Linux 上使用的函数称为 `printk`。然而,并不是每个问题都可以通过只使用日志记录和源代码来解决。有时断点更有用,因为它们允许检查驱动代码的动态行为。交互式调试对于研究崩溃的原因也是必不可少的。
|
||||
|
||||
Windows 通过其内核级调试器 `WinDbg` 支持交互式调试。这需要通过一个串行端口连接两台机器:一台计算机运行被调试的内核,另一台运行调试器和控制被调试的操作系统。Windows 驱动程序工具包包括 Windows 内核的调试符号,因此 Windows 的数据结构将在调试器中部分可见。
|
||||
|
||||
Linux 还支持通过 `KDB` 和 `KGDB` 进行的交互式调试。调试支持可以内置到内核,并可在启动时启用。之后,可以直接通过物理键盘调试系统,或通过串行端口从另一台计算机连接到它。KDB 提供了一个简单的命令行界面,这是唯一的在同一台机器上来调试内核的方法。然而,KDB 缺乏源代码级调试支持。KGDB 通过串行端口提供了一个更复杂的接口。它允许使用像 GDB 这样标准的应用程序调试器来调试 Linux 内核,就像任何其它用户空间应用程序一样。
|
||||
|
||||
### 4. 设备驱动分发
|
||||
|
||||
#### 4.1. 安装设备驱动
|
||||
|
||||
在 Windows 上,驱动程序的安装由被称为 INF 的文本文件描述,这些文件通常存储在 `C:\Windows\INF` 目录中。它们由驱动供应商提供,定义了哪些设备由该驱动程序服务、哪里可以找到驱动程序的二进制文件、驱动程序的版本等。
|
||||
|
||||
当一个新设备插入计算机时,Windows 通过查看已经安装的驱动程序并且选择适当的一个加载;当设备被移除的时候,Windows 会自动卸载对应的驱动。
|
||||
|
||||
在 Linux 上,一些驱动被构建到内核中并且保持永久的加载。非必要的驱动被构建为内核模块,它们通常是存储在 `/lib/modules/kernel-version` 目录中。这个目录还包含各种配置文件,如 `modules.dep`,用于描述内核模块之间的依赖关系。
|
||||
|
||||
虽然 Linux 内核可以在自身启动时加载一些模块,但通常模块加载由用户空间应用程序监督。例如,`init` 进程可能在系统初始化期间加载一些模块,`udev` 守护程序负责跟踪新插入的设备并为它们加载适当的模块。
|
||||
|
||||
#### 4.2. 更新设备驱动
|
||||
|
||||
Windows 为设备驱动程序提供了稳定的二进制接口,因此在某些情况下,无需与系统一起更新驱动程序二进制文件。任何必要的更新由 Windows Update 服务处理,它负责定位、下载和安装适用于系统的最新版本的驱动程序。
|
||||
|
||||
然而,Linux 不提供稳定的二进制接口,因此有必要在每次内核更新时重新编译和更新所有必需的设备驱动程序。显然,内置在内核中的设备驱动程序会自动更新,但是树外模块会产生轻微的问题。 维护最新的模块二进制文件的任务通常用 [DKMS][3] 来解决:这是一个当安装新的内核版本时自动重建所有注册的内核模块的服务。
|
||||
|
||||
#### 4.3. 安全方面的考虑
|
||||
|
||||
所有 Windows 设备驱动程序在 Windows 加载它们之前必须被数字签名。在开发期间可以使用自签名证书,但是分发给终端用户的驱动程序包必须使用 Microsoft 信任的有效证书进行签名。供应商可以从 Microsoft 授权的任何受信任的证书颁发机构获取软件出版商证书(Software Publisher Certificate)。然后,此证书由 Microsoft 交叉签名,并且生成的交叉证书用于在发行之前签署驱动程序包。
|
||||
|
||||
Linux 内核也能配置为在内核模块被加载前校验签名,并禁止不可信的内核模块。被内核所信任的公钥集在构建时固定,并且是完全可配置的。内核所执行检查的严格程度在构建时同样可配置,范围从简单地为不可信模块发出警告,到拒绝加载任何有效性可疑的模块。
|
||||
|
||||
### 5. 结论
|
||||
|
||||
如上所示,Windows 和 Linux 设备驱动程序基础设施有一些共同点,例如调用 API 的方式,但更多的细节是相当不同的。最突出的差异源于 Windows 是由商业公司开发的闭源操作系统这一事实。这使得 Windows 上有良好的、文档化的、稳定的驱动 ABI 和正式框架,而在 Linux 上,起到这种作用的更多是开放的源代码本身。文档支持在 Windows 环境中也更加完善,因为 Microsoft 具有维护它所需的资源。
|
||||
|
||||
另一方面,Linux 不会使用框架来限制设备驱动程序开发人员,并且内核和产品级设备驱动程序的源代码可以在需要的时候有所帮助。缺乏接口稳定性也有其作用,因为它意味着最新的设备驱动程序总是使用最新的接口,内核本身承载较小的后向兼容性负担,这带来了更干净的代码。
|
||||
|
||||
了解这些差异以及每个系统的具体情况是为您的设备提供有效的驱动程序开发和支持的关键的第一步。我们希望这篇文章对 Windows 和 Linux 设备驱动程序开发做的对比,有助于您理解它们,并在设备驱动程序开发过程的研究中,将此作为一个伟大的起点。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/linux-vs-windows-device-driver-model.html
|
||||
|
||||
作者:[Dennis Turpitka][a]
|
||||
译者:[FrankXinqi &YangYang](https://github.com/FrankXinqi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://xmodulo.com/author/dennis
|
||||
[1]: http://xmodulo.com/linux-vs-windows-device-driver-model.html?format=pdf
|
||||
[2]: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=PBHS9R4MB9RX4
|
||||
[3]: http://xmodulo.com/build-kernel-module-dkms-linux.html
|
||||
[4]: http://xmodulo.com/go/linux_device_driver_books
|
||||
[5]: https://linux.cn/article-7704-1.html
|
Wire:一个极酷、专注于个人隐私的开源聊天应用程序已经来到了 Linux 上
|
||||
===========
|
||||
|
||||
[][21]
|
||||
|
||||
|
||||
回到大约两年前,一些曾开发 [Skype][20] 的开发人员发行了一个漂亮的新聊天应用个程序:[Wire][19]。当我说它漂亮的时候,只是谈论它的“外貌”。Wire 具有一个许多其他聊天应用程序所没有的整洁优美的“外貌”,但这并不是它最大的卖点。
|
||||
|
||||
从一开始,Wire 就推销自己是[世界上最注重隐私的聊天应用程序][18]。无论是文本、语音电话,还是图表、图像等基本的内容,它都提供端到端的加密。
|
||||
|
||||
WhatsApp 也提供‘端到端加密’,但是考虑一下它的所有者 [Facebook 为了吸引用户而把 WhatsApp 的数据分享出去][17]。我不太相信 WhatsApp 以及它的加密手段。
|
||||
|
||||
使 Wire 对于我们这些 FOSS(自由/开源软件)爱好者来说更加重要的是,几个月前 [Wire 开源了][16]。几个月下来我们见到了一个用于 Linux 的 beta 版本 Wire 桌面应用程序。
|
||||
|
||||
除了一个包装器以外,桌面版的 Wire 并没有比 web 版多任何东西。感谢 [Electron 开源项目][15]提供了一种开发跨平台桌面应用程序的简单方式。许多其他应用程序也通过使用 Electron 为 Linux 带去了一个本地桌面应用程序,包括 [Skype][14]。
|
||||
|
||||
### Wire 的特性:
|
||||
|
||||
Wire 有一些更棒的特性,尤其是和 [Snapchat][13] 类似的音频过滤。
|
||||
|
||||
在安装 Wire 到 Linux 上之前,让我先警告你它目前还处于 beta 阶段。所以,如果你遇到一些故障,请不要生气。
|
||||
|
||||
Wire 有一个 64 位系统可使用的 .deb 客户端。如果你有一台 [32 位或者 64 位系统][12]的电脑,你可以使用这些技巧来找到它。你可以从下面的链接下载 .deb 文件。
|
||||
|
||||
- [下载 Linux 版 Wire [Beta]][11]
|
||||
|
||||
如果感兴趣的话,你也可以看一看它的源代码:
|
||||
|
||||
- [桌面版 Wire 源代码][10]
|
||||
|
||||
这是 Wire 的默认界面,看起来像 [elementary OS Loki][9]:
|
||||
|
||||
[][8]
|
||||

|
||||
|
||||
你看,他们甚至还弄了一个机器人:)
|
||||
|
||||
你已经开始使用 Wire 了吗?如果是,你的体验是什么样的?如果没有,你将尝试一下吗?因为它现在是[开源的][7]并且可以在 Linux 上使用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/wire-messaging-linux/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
|
||||
怎样在 RHEL、CentOS 和 Fedora 上安装 Git 及设置 Git 账号
|
||||
=========
|
||||
|
||||
对于新手来说,Git 是一个自由、开源、高效的分布式版本控制系统(VCS),它是基于速度、高性能以及数据一致性而设计的,以支持从小规模到大体量的软件开发项目。
|
||||
|
||||
Git 是一个可以让你追踪软件改动、版本回滚以及创建另外一个版本的目录和文件的软件仓库。
|
||||
|
||||
Git 主要是用 C 语言来写的,混杂了少量的 Perl 脚本和各种 shell 脚本。它主要在 Linux 内核上运行,并且有以下列举的卓越的性能:
|
||||
|
||||
- 易于上手
|
||||
- 运行速度飞快,且大部分操作在本地进行,因此,它极大的提升了那些需要与远程服务器通信的集中式系统的速度。
|
||||
- 高效
|
||||
- 提供数据一致性检查
|
||||
- 支持低开销的本地分支
|
||||
- 提供非常便利的暂存区
|
||||
- 可以集成其它工具来支持多种工作流
|
||||
|
||||
在这篇操作指南中,我们将介绍在 CentOS/RHEL 7/6 和 Fedora 20-24 Linux 发行版上安装 Git 的必要步骤以及怎么配置 Git,以便于你可以快速开始工作。
|
||||
|
||||
### 使用 Yum 安装 Git
|
||||
|
||||
我们将从系统默认的仓库安装 Git,并通过运行以下 [YUM 包管理器][8] 的更新命令来确保你系统的软件包都是最新的:
|
||||
|
||||
```
|
||||
# yum update
|
||||
|
||||
```
|
||||
|
||||
接着,通过以下命令来安装 Git:
|
||||
|
||||
```
|
||||
# yum install git
|
||||
|
||||
```
|
||||
|
||||
在 Git 成功安装之后,你可以通过以下命令来显示 Git 安装的版本:
|
||||
|
||||
```
|
||||
# git --version
|
||||
|
||||
```
|
||||
|
||||
[][7]
|
||||

|
||||
|
||||
*检查 Git 安装的版本*
|
||||
|
||||
注意:从系统默认仓库安装的 Git 会是比较旧的版本。如果你想拥有最新版的 Git,请考虑使用以下说明来编译源代码进行安装。
|
||||
|
||||
```
|
||||
# yum groupinstall "Development Tools"
|
||||
# yum install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel
|
||||
|
||||
```
|
||||
|
||||
安装所需的软件依赖包之后,转到官方的 [Git 发布页面][6],抓取最新版的 Git 并使用下列命令编译它的源代码:
|
||||
```
|
||||
# ./configure --prefix=/usr/local
|
||||
# make install
|
||||
# git --version
|
||||
|
||||
```
|
||||
|
||||
[][5]
|
||||

|
||||
|
||||
*检查 Git 的安装版本*
|
||||
|
||||
**推荐阅读:** [Linux 下 11 个最好用的 Git 客户端和 Git 仓库查看器][4]。
|
||||
|
||||
### 在 Linux 设置 Git 账户
|
||||
|
||||
在这个环节中,我们将介绍如何使用正确的用户信息(如:姓名、邮件地址)和 `git config` 命令来设置 Git 账户,以避免出现提交错误。
|
||||
|
||||
注意:确保将下面的 `username` 替换为在你的系统上创建和使用的 Git 用户的真实名称。
|
||||
|
||||
你可以使用下面的 [useradd 命令][3] 创建一个 Git 用户,其中 `-m` 选项用于在 `/home` 目录下创建用户主目录,`-s` 选项用于指定用户默认的 shell。
|
||||
|
||||
```
|
||||
# useradd -m -s /bin/bash username
|
||||
# passwd username
|
||||
|
||||
```
|
||||
|
||||
现在,将新用户添加到 `wheel` 用户组以启用其使用 `sudo` 命令的权限:
|
||||
|
||||
```
|
||||
# usermod username -aG wheel
|
||||
|
||||
```
|
||||
|
||||
[][2]
|
||||

|
||||
|
||||
*创建 Git 用户账号*
|
||||
|
||||
然后通过以下命令使用新用户配置 Git:
|
||||
|
||||
```
|
||||
# su username
|
||||
$ sudo git config --global user.name "Your Name"
|
||||
$ sudo git config --global user.email "you@example.com"
|
||||
|
||||
```
|
||||
|
||||
现在通过下面的命令校验 Git 的配置。
|
||||
|
||||
```
|
||||
$ sudo git config --list
|
||||
|
||||
```
|
||||
|
||||
如果配置没有错误的话,你应该能够看到类似以下详细信息的输出:
|
||||
|
||||
```
|
||||
user.name=username
|
||||
user.email= username@some-domian.com
|
||||
|
||||
```
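上面的配置步骤也可以先在一个临时仓库里验证效果(下面是可直接运行的示意脚本,其中的姓名和邮箱都是占位值;这里写入的是仓库本地配置,不会影响全局的 `~/.gitconfig`):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)       # 建一个临时目录作为演示仓库
cd "$tmp"
git init -q .
git config user.name "Your Name"          # 只写入当前仓库的配置
git config user.email "you@example.com"
git config --list | grep '^user\.'        # 应能看到刚写入的 user.* 配置
```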
|
||||
|
||||
[][1]
|
||||
>在 Linux 设置 Git 用户
|
||||

|
||||
|
||||
*在 Linux 设置 Git 用户*
|
||||
|
||||
### 总结
|
||||
|
||||
在这个简单的教程中,我们已经了解怎么在你的 Linux 系统上安装 Git 以及配置它。我相信你应该可以驾轻就熟。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
via: http://www.tecmint.com/install-git-centos-fedora-redhat/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[OneNewLife](https://github.com/OneNewLife)
|
||||
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[5]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Git-Source-Version.png
|
||||
[6]:https://github.com/git/git/releases
|
||||
[7]:http://www.tecmint.com/wp-content/uploads/2016/10/Check-Git-Version.png
|
||||
[8]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
|
||||
[8]:https://linux.cn/article-2272-1.html
|
怎样在 CentOS 里下载 RPM 包及其所有依赖包
|
||||
========
|
||||
|
||||

|
||||

|
||||
|
||||
前几天我尝试去创建一个仅包含了我们经常在 CentOS 7 下使用的软件的本地仓库。当然,我们可以使用 curl 或者 wget 下载任何软件包,然而这些命令并不能下载要求的依赖软件包。你必须去花一些时间而且手动的去寻找和下载被安装的软件所依赖的软件包。然而,我们并不是必须这样。在这个简短的教程中,我将会带领你以两种方式下载软件包及其所有依赖包。我已经在 CentOS 7 下进行了测试,不过这些相同的步骤或许在其他基于 RPM 管理系统的发行版上也可以工作,例如 RHEL,Fedora 和 Scientific Linux。
|
||||
|
||||
### 方法 1 利用 “Downloadonly” 插件下载 RPM 软件包及其所有依赖包
|
||||
|
||||
我们可以通过 yum 命令的 “Downloadonly” 插件下载 RPM 软件包及其所有依赖包。
|
||||
|
||||
为了安装 Downloadonly 插件,以 root 身份运行以下命令。
|
||||
|
||||
```
|
||||
yum install yum-plugin-downloadonly
|
||||
```
|
||||
|
||||
现在,运行以下命令去下载一个 RPM 软件包。
|
||||
|
||||
```
|
||||
yum install --downloadonly <package-name>
|
||||
```
|
||||
|
||||
默认情况下,这个命令将会下载并把软件包保存到 `/var/cache/yum/` 的 `rhel-{arch}-channel/packageslocation` 目录,不过,你也可以下载和保存软件包到任何位置,你可以通过 `–downloaddir` 选项来指定。
|
||||
|
||||
```
|
||||
yum install --downloadonly --downloaddir=<directory> <package-name>
|
||||
Total 331 kB/s | 3.0 MB 00:00:09
|
||||
exiting because "Download Only" specified
|
||||
```
|
||||
|
||||
[][6]
|
||||

|
||||
|
||||
现在去你指定的目录位置下,你将会看到那里有下载好的软件包和依赖的软件。在我这种情况下,我已经把软件包下载到 `/root/mypackages/` 目录下。
|
||||
|
||||
让我们来查看一下内容。
|
||||
|
||||
|
||||
|
||||
[][5]
|
||||
|
||||
正如你在上面输出所看到的, httpd软件包已经被依据所有依赖性下载完成了.
|
||||
正如你在上面输出所看到的, httpd软件包已经被依据所有依赖性下载完成了 。
|
||||
|
||||
|
||||
请注意,这个插件适用于 `yum install/yum update`, 但是并不适用于 `yum groupinstall`。默认情况下,这个插件将会下载仓库中最新可用的软件包。然而你可以通过指定版本号来下载某个特定的软件版本。
|
||||
|
||||
例子:
|
||||
|
||||
```
|
||||
yum install --downloadonly --downloaddir=/root/mypackages/ httpd-2.2.6-40.el7
|
||||
```
|
||||
|
||||
此外,你也可以如下一次性下载多个包:
|
||||
|
||||
```
|
||||
yum install --downloadonly --downloaddir=/root/mypackages/ httpd vsftpd
|
||||
```
|
||||
|
||||
### 方法 2 使用 “Yumdownloader” 工具来下载 RPM 软件包及其所有依赖包
|
||||
|
||||
“Yumdownloader” 是一款简单,但是却十分有用的命令行工具,它可以一次性下载任何 RPM 软件包及其所有依赖包。
|
||||
|
||||
|
||||
以 root 身份运行如下命令安装 “Yumdownloader” 工具。
|
||||
|
||||
```
|
||||
yum install yum-utils
|
||||
```
|
||||
一旦安装完成,运行如下命令去下载一个软件包,例如 httpd。
|
||||
|
||||
```
|
||||
yumdownloader httpd
|
||||
```
|
||||
为了根据所有依赖性下载软件包,我们使用 `--resolve` 参数:
|
||||
|
||||
```
|
||||
yumdownloader --resolve httpd
|
||||
```
|
||||
|
||||
默认情况下,Yumdownloader 将会下载软件包到当前工作目录下。
|
||||
|
||||
为了将软件下载到一个特定的目录下,我们使用 `--destdir` 参数:
|
||||
|
||||
```
|
||||
yumdownloader --resolve --destdir=/root/mypackages/ httpd
|
||||
```
|
||||
|
||||
或者,
|
||||
|
||||
```
|
||||
yumdownloader --resolve --destdir /root/mypackages/ httpd
|
||||
```
|
||||
|
||||
终端输出:
|
||||
|
||||
```
|
||||
Loaded plugins: fastestmirror
|
||||
Loading mirror speeds from cached hostfile
|
||||
(5/5): httpd-2.4.6-40.el7.centos.4.x86_64.rpm | 2.7 MB 00:00:19
|
||||
```
|
||||
|
||||
[][3]
|
||||

|
||||
|
||||
让我们确认一下软件包是否被下载到我们指定的目录下。
|
||||
|
||||
```
|
||||
ls /root/mypackages/
|
||||
```
|
||||
```
httpd-tools-2.4.6-40.el7.centos.4.x86_64.rpm
|
||||
mailcap-2.1.41-2.el7.noarch.rpm
|
||||
```
|
||||
|
||||
[][2]
|
||||

|
||||
|
||||
不像 Downloadonly 插件,Yumdownload 可以下载一组相关的软件包。
|
||||
|
||||
```
|
||||
yumdownloader "@Development Tools" --resolve --destdir /root/mypackages/
|
||||
```
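回到文章开头“建立本地仓库”的目标:软件包下载完成后,还可以把该目录制作成一个可用的 yum 仓库(假设系统中已安装 createrepo 工具):

```
yum install createrepo
createrepo /root/mypackages/
```

之后在客户端添加一个指向该目录(或其 HTTP 导出)的 .repo 配置,就可以直接用 yum 从这个本地仓库安装软件包了。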
|
||||
|
||||
在我看来,我喜欢 Yumdownloader 更胜于 Yum 的 Downloadonly 插件。但是,两者都是十分简单易懂而且可以完成相同的工作。
|
||||
|
||||
这就是今天所有的内容,如果你觉得这份引导教程有用,请在你的社交媒体上分享一下,让更多的人知道。
|
||||
|
||||
via: https://www.ostechnix.com/download-rpm-package-dependencies-centos/
|
||||
|
||||
作者:[SK][a]
|
||||
|
||||
译者:[LinuxBars](https://github.com/LinuxBars)
|
||||
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
如何在 Arch Linux 的终端里设定 WiFi 网络
|
||||
===
|
||||
|
||||

|
||||
|
||||
如果你使用的是其他 [Linux 发行版][16] 而不是 [Arch][15] CLI,那么可能会不习惯在终端里设置 WiFi。尽管整个过程有点简单,不过我还是要讲一下。在这篇文章里,我将带领新手们通过一步步的设置向导,把你们的 Arch Linux 接入到你的 WiFi 网络里。
|
||||
|
||||
在 Linux 里有很多程序来设置无线连接,我们可以用 **ip** 和 **iw** 来配置因特网连接,但是对于新手来说有点复杂。所以我们会使用 **netctl** 命令,这是一个基于命令行的工具,用来通过配置文件来设置和管理网络连接。
|
||||
|
||||
注意:所有的设定都需要 root 权限,或者你也可以使用 sudo 命令来完成。
|
||||
|
||||
### 搜索网络
|
||||
|
||||
运行下面的命令来查看你的网络接口:
|
||||
|
||||
```
|
||||
iwconfig
|
||||
```
|
||||
|
||||
运行如下命令启用你的网络接口,如果没有启用的话:
|
||||
|
||||
```
|
||||
ip link set interface up
|
||||
```
|
||||
|
||||
运行下面的命令搜索可用的 WiFi 网络。可以向下翻页来查看。
|
||||
|
||||
```
|
||||
iwlist interface scan | less
|
||||
```
|
||||
|
||||
**注意:** 命令里的 interface 是之前用 **iwconfig** 获取到的实际网络接口。
|
||||
|
||||
扫描完,如果不使用该接口可以运行如下命令关闭:
|
||||
|
||||
```
|
||||
ip link set interface down
|
||||
```
|
||||
|
||||
### 使用 netctl 配置 Wi-Fi:
|
||||
|
||||
在使用 `netctl` 设置连接之前,你必须先检查一下你的网卡在 Linux 下的兼容性。
|
||||
|
||||
运行命令:
|
||||
|
||||
```
|
||||
lspci -k
|
||||
```
|
||||
|
||||
这条命令是用来检查内核是否加载了你的无线网卡驱动。输出必须是像这样的:
|
||||
|
||||

|
||||
|
||||
如果内核没有加载驱动,你就必须使用有线连接来安装一下。这里是 Linux 无线网络的官方维基页面:[https://wireless.wiki.kernel.org/][11]。
|
||||
|
||||
如果你的无线网卡和 Linux 兼容,你就可以开始使用 `netctl` 进行配置了。
|
||||
|
||||
`netctl` 使用配置文件,这是一个包含连接信息的文件。创建这个文件有简单和困难两种方式。
|
||||
|
||||
#### 简单方式 – Wifi-menu
|
||||
|
||||
如果你想用 `wifi-menu`,必须安装 `dialog`。
|
||||
|
||||
1. 运行命令: `wifi-menu`
|
||||
2. 选择你的网络
|
||||
|
||||

|
||||
3. 输入正确的密码并等待
|
||||
|
||||

|
||||
|
||||
如果没有连接失败的信息,你可以用下面的命令确认下:
|
||||
|
||||
```
|
||||
ping -c 3 www.google.com
|
||||
```
|
||||
|
||||
哇!如果你看到正在 ping,意味着网络设置成功。你现在已经在 Arch Linux 下连上 WiFi 了。如果有任何问题,可以倒回去重来。也许漏了什么。
|
||||
|
||||
#### 困难方式
|
||||
|
||||
比起上面的 `wifi-menu` 命令,这种方式会难一点点,所以我叫做困难方式。在上面的命令里,网络配置会自动生成。而在困难方式里,我们将手动修改配置文件。不过不要担心,也没那么难。那我们开始吧!
|
||||
|
||||
1. 首先第一件事,你必须要知道网络接口的名字,通常会是 `wlan0` 或 `wlp2s0`,但是也有很多例外。要确认你自己的网络接口,输入 `iwconfig` 命令并记下来。
|
||||
|
||||
[][8]
|
||||
|
||||
2. 运行命令:
|
||||
|
||||
```
|
||||
cd /etc/netctl/examples
|
||||
```
|
||||
|
||||
在这个目录里,有很多不同的配置文件例子。
|
||||
|
||||
3. 拷贝将用到的配置文件例子到 `/etc/netctl/your_profile`
|
||||
|
||||
```
|
||||
cp /etc/netctl/examples/wireless-wpa /etc/netctl/your_profile
|
||||
```
|
||||
|
||||
4. 你可以用这个命令来查看配置文件内容: `cat /etc/netctl/your_profile`
|
||||
|
||||

|
||||
|
||||
5. 用 `vi` 或者 `nano` 编辑你的配置文件的下面几个部分:
|
||||
|
||||
```
|
||||
nano /etc/netctl/your_profile
|
||||
```
|
||||
|
||||
- `Interface`:比如说 `wlan0`
|
||||
- `ESSID`:你的无线网络名字
|
||||
- `key`:你的无线网络密码
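编辑完成后,一个基于 wireless-wpa 例子的配置文件大致如下(其中的接口名、网络名和密码均为示意):

```
Description='示例无线网络配置'
Interface=wlan0
Connection=wireless
Security=wpa
ESSID='MyNetwork'
Key='MyPassphrase'
IP=dhcp
```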
|
||||
|
||||
**注意:**
|
||||
|
||||
如果你不知道怎么用 `nano`,打开文件后,编辑要修改的地方,完了按 `ctrl+o`,然后回车,然后按 `ctrl+x`。
|
||||
|
||||

|
||||
|
||||
### 运行 netctl
|
||||
|
||||
1. 运行命令:
|
||||
```
|
||||
cd /etc/netctl
|
||||
ls
|
||||
```
|
||||
|
||||
你一定会看到 `wifi-menu` 生成的配置文件,比如 `wlan0-SSID`;或者你选择了困难方式,你一定会看到你自己创建的配置文件。
|
||||
|
||||
2. 运行命令启动连接配置:`netctl start your_profile`。
|
||||
|
||||
3. 用下面的命令测试连接:
|
||||
|
||||
```
|
||||
ping -c 3 www.google.com
|
||||
```
|
||||
输出看上去像这样:
|
||||

|
||||
|
||||
4. 最后,你必须运行下面的命令:`netctl enable your_profile`。
|
||||
|
||||
```
|
||||
netctl enable your_profile
|
||||
```
|
||||
这样将创建并激活一个 systemd 服务,然后开机时自动启动。然后欢呼吧!你在你的 Arch Linux 里配置好 wifi 网络啦。
|
||||
|
||||
### 其他工具
|
||||
|
||||
你还可以使用其他程序来设置无线连接:
|
||||
|
||||
iw:
|
||||
|
||||
1. `iw dev wlan0 link` – 状态
|
||||
2. `iw dev wlan0 scan` – 搜索网络
|
||||
3. `iw dev wlan0 connect your_essid` – 连接到开放网络
|
||||
4. `iw dev wlan0 connect your_essid key your_key` - 使用 16 进制密钥连接到 WEP 加密的网络
|
||||
|
||||
wpa_supplicant
|
||||
|
||||
- [https://wiki.archlinux.org/index.php/WPA_supplicant][4]
|
||||
|
||||
Wicd
|
||||
|
||||
- [https://wiki.archlinux.org/index.php/wicd][3]
|
||||
|
||||
NetworkManager
|
||||
|
||||
- [https://wiki.archlinux.org/index.php/NetworkManager][2]
|
||||
|
||||
### 总结
|
||||
|
||||
会了吧!我提供了在 **Arch Linux** 里接入 WiFI 网络的三种方式。这里有一件事我再强调一下,当你执行第一条命令的时候,请记住你的网络接口名字。在接下来搜索网络的命令里,请使用你的网络接口名字比如 `wlan0` 或 `wlp2s0`(上一个命令里得到的),而不是用 interface 这个词。如果你碰到任何问题,可以在下面的评论区里直接留言给我。然后别忘了在你的朋友圈里和大家分享这篇文章哦。谢谢!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxandubuntu.com/home/how-to-setup-a-wifi-in-arch-linux-using-terminal
|
||||
|
||||
作者:[Mohd Sohail][a]
|
||||
译者:[zpl1025](https://github.com/zpl1025)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.linuxandubuntu.com/contact-us.html
|
||||
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-to-setup-wifi-in-arch_orig.png
|
||||
[2]:https://wiki.archlinux.org/index.php/NetworkManager
|
||||
[3]:https://wiki.archlinux.org/index.php/wicd
|
||||
[4]:https://wiki.archlinux.org/index.php/WPA_supplicant
|
||||
[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/check-internet-connection-in-arch-linux_orig.png
|
||||
[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edit-network-profile-in-arch_orig.png
|
||||
[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/view-network-profile-in-arch-linux_orig.png
|
||||
[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/scan-wifi-networks-in-arch-linux-cli_orig.png
|
||||
[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-type-wifi-password-in-arch_orig.png?605
|
||||
[10]:http://www.linuxandubuntu.com/home/5-best-arch-linux-based-linux-distributions
|
||||
[11]:https://wireless.wiki.kernel.org/
|
||||
[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-wifi-find-kernel-compatibility_orig.png
|
||||
[13]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
|
||||
[14]:http://www.linuxandubuntu.com/home/category/arch-linux
|
||||
[15]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
|
||||
[16]:http://linuxandubuntu.com/home/category/distros
|
@ -38,35 +38,31 @@ pwgen 14 1

你也可以使用以下标记:

```
- -c 或 --capitalize
生成的密码中至少包含一个大写字母
- -A 或 --no-capitalize
生成的密码中不含大写字母
- -n 或 --numerals
生成的密码中至少包含一个数字
- -0 或 --no-numerals
生成的密码中不含数字
- -y 或 --symbols
生成的密码中至少包含一个特殊字符
- -s 或 --secure
生成一个完全随机的密码
- -B 或 --ambiguous
生成的密码中不含易混淆字符(ambiguous characters)
- -h 或 --help
输出帮助信息
- -H 或 --sha1=path/to/file[#seed]
使用指定文件的 sha1 哈希值作为随机生成器
- -C
按列输出生成的密码
- -1
不按列输出生成的密码
- -v 或 --no-vowels
不使用任何元音,以免意外生成让人讨厌的单词
```

### 使用 gpg 生成高强度密码

@ -76,10 +72,9 @@ pwgen 14 1

gpg --gen-random --armor 1 14
```

### 其它方法

当然,可能还有很多方法可以生成一个高强度密码。比方说,你可以添加以下 bash shell 函数到 `~/.bashrc` 文件:

```
genpasswd() {

@ -94,10 +89,8 @@ genpasswd() {

via: https://www.rosehosting.com/blog/generate-password-linux-command-line/

作者:[RoseHosting][a]

译者:[GHLandy](https://github.com/GHLandy)

校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
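如果手边暂时没有 pwgen,也可以用 Python 标准库里的 `secrets` 模块写一个类似 `pwgen -s 14 1` 效果的小示意(仅为假设性示例,并非 pwgen 的实现):

```python
import secrets
import string

def gen_password(length=14):
    # 类似 `pwgen -s`:从大小写字母和数字中做完全随机取样
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(gen_password())
```

`secrets` 使用操作系统提供的加密级随机源,因此适合生成密码;`random` 模块则不适合。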
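上面的命令会输出 14 个随机字节的 base64(armor)编码。如果想理解这个输出的形式,可以用 Python 做个对照(假设性示例,并非 gpg 的实现):

```python
import base64
import os

def gpg_like_random(nbytes=14):
    # 与 `gpg --gen-random --armor 1 14` 类似:
    # 取 nbytes 个随机字节,再做 base64 编码
    return base64.b64encode(os.urandom(nbytes)).decode()

print(gpg_like_random())
```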
@ -0,0 +1,34 @@

标志性文本编辑器 Vim 迎来其 25 周年纪念日
===============

*图片由 [Oscar Cortez][8] 提供。由 Opensource.com 编辑,遵循 [CC BY-SA 2.0][7] 协议发布。*

让我们把时钟往回拨一点。不,别停…再来一点……好了!在 25 年前,当你的某些专家同事还在蹒跚学步时,Bram Moolenaar 已经开始为他的 Amiga 计算机开发一款文本编辑器。他曾经是 Unix 上的 vi 用户,但 Amiga 上却没有与其类似的编辑器。在三年的开发之后,1991 年 11 月 2 日,他发布了“仿 vi 编辑器(Vi IMitation)”(也就是 [Vim][6])的第一个版本。

两年之后,随着 2.0 版本的发布,Vim 的功能集已经超过了 vi,所以这款编辑器的全称也被改为了“vi 增强版(Vi IMproved)”。现在,刚刚度过 25 岁生日的 Vim,已经可以在绝大部分平台上运行——Windows、OS/2、OpenVMS、BSD、Android、iOS——并且已在 OS X 及很多 Linux 发行版上成为标配软件。它受到了许多赞誉,也受到了许多批评,不同组织或开发者也会围绕它来发起争论,甚至有些面试官会问:“你用 Emacs 还是 Vim?”Vim 采用自由许可证,它基于[慈善软件证书(charityware license)][5](“帮助乌干达的可怜儿童”)发布,该证书与 GPL 兼容。

Vim 是一个灵活的、可扩展的编辑器,带有一个强大的插件系统,被深入集成到许多开发工具,并且可以编辑上百种编程语言或文件类型的文件。在 Vim 诞生二十五年后,Bram Moolenaar 仍然在主导开发并维护它——这真是一个壮举!Vim 曾在十多年里基本处于维护模式,但在 2016 年 9 月,它发布了 [8.0 版本](https://linux.cn/article-7766-1.html),添加了许多新功能,为现代程序员用户提供了更多方便。很多年来,Vim 在官网上售卖 T 恤及 logo 贴纸,所得的销售款为支持 [ICCF][4]——一个帮助乌干达儿童的荷兰慈善机构——做出了巨大贡献。Vim 所支持的慈善项目深受 Moolenaar 喜爱,Moolenaar 本人也去过乌干达,在基巴莱的一个儿童中心做志愿者。

Vim 在开源历史上记下了有趣的一笔:一个工程,在 25 年中,坚持由一个稳定的核心贡献者维护,拥有超多的用户,但很多人从未了解过它的历史。如果你想简单地了解 Vim,[请访问它的网站][3],或者你可以读一读 [Vim 入门的一些小技巧][2],或者在 Opensource.com 阅读一位 [Vim 新用户的故事][1]。

--------------------------------------------------------------------------------

via: https://opensource.com/life/16/11/happy-birthday-vim-25

作者:[D Ruth Bavousett][a]
译者:[StdioA](https://github.com/StdioA)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/business/16/8/7-reasons-love-vim
[2]:https://opensource.com/life/16/7/tips-getting-started-vim
[3]:http://www.vim.org/
[4]:http://iccf-holland.org/
[5]:http://vimdoc.sourceforge.net/htmldoc/uganda.html#license
[6]:http://www.vim.org/
[7]:https://creativecommons.org/licenses/by-sa/2.0/
[8]:https://www.flickr.com/photos/drastudio/7161064915/in/photolist-bUNk7t-d1eWDm-7X6nmx-7AUchG-7AQpe8-9ob1yW-YZfUi-5LqxMi-9rye8j-7xptiR-9AE5Pe-duq1Wu-7DvLFt-7Mt7TN-7xRZHa-e19sFi-7uc6u3-dV7YuK-9DRH37-6oQE3u-9u3TG9-9jbg3J-7ELgDS-5Sgp87-8NXn1u-7ZSBk7-9kytY5-7f1cMC-3sdkMh-8SWLRX-8ebBMm-pfRPHJ-9wsSQW-8iZj4Z-pCaSMa-ejZLDj-7NnKCZ-9PjXb1-92hxHD-7LbXSZ-7cAZuB-7eJgiE-7VKc9d-8Yuun8-9tZReM-dxp7r8-9NH1R4-7QfoWB-7RGWtU-7NCPf9
@ -1,3 +1,5 @@

Rusking Translating...

# Arch Linux: In a world of polish, DIY never felt so good
@ -1,34 +0,0 @@

# The iconic text editor Vim celebrates 25 years

>Image by [Oscar Cortez][8]. Modified by Opensource.com. [CC BY-SA 2.0][7].

Turn back the dial of time a bit. No, keep turning... a little more... there! Over 25 years ago, when some of your professional colleagues were still toddlers, Bram Moolenaar started working on a text editor for his Amiga. He was a user of vi on Unix, but the Amiga didn't have anything quite like it. On November 2, 1991, after three years in development, he released the first version of the "Vi IMitation" editor, or [Vim][6].

Two years later, with the 2.0 release, Vim's feature set had exceeded that of vi, so the acronym was changed to "Vi IMproved." Today, having just marked its 25th birthday, Vim is available on a wide array of platforms—Windows, OS/2, OpenVMS, BSD, Android, iOS—and it comes shipped standard with OS X and many Linux distros. It is praised by many, reviled by many, and is a central player in the ongoing conflicts between groups of developers. Interview questions have even been asked: "Emacs or Vim?" Vim is licensed freely, under a [charityware license][5] compatible with the GPL.

Vim is a flexible, extensible text editor with a powerful plugin system, rock-solid integration with many development tools, and support for hundreds of programming languages and file formats. Twenty-five years after its creation, Bram Moolenaar still leads development and maintenance of the project—a feat in itself! Vim had been chugging along in maintenance mode for more than a decade, but in September 2016 version 8.0 was released, adding new features to the editor of use to modern programmers. For many years now, sales of T-shirts and other Vim logo gear on the website has supported [ICCF][4], a Dutch charity that supports children in Uganda. It's a favorite project of Moolenaar, who has made trips to Uganda to volunteer at the children's center in Kibaale.

Vim is one of the interesting tidbits of open source history: a project that, for 25 years, has kept the same core contributor and is used by vast numbers of people, many without ever knowing its history. If you'd like to learn a little more about Vim, [check out the website][3], or you can read [some tips for getting started with Vim][2] or [the story of a Vim convert][1] right here on Opensource.com.

--------------------------------------------------------------------------------

via: https://opensource.com/life/16/11/happy-birthday-vim-25

作者:[D Ruth Bavousett][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/business/16/8/7-reasons-love-vim
[2]:https://opensource.com/life/16/7/tips-getting-started-vim
[3]:http://www.vim.org/
[4]:http://iccf-holland.org/
[5]:http://vimdoc.sourceforge.net/htmldoc/uganda.html#license
[6]:http://www.vim.org/
[7]:https://creativecommons.org/licenses/by-sa/2.0/
[8]:https://www.flickr.com/photos/drastudio/7161064915/in/photolist-bUNk7t-d1eWDm-7X6nmx-7AUchG-7AQpe8-9ob1yW-YZfUi-5LqxMi-9rye8j-7xptiR-9AE5Pe-duq1Wu-7DvLFt-7Mt7TN-7xRZHa-e19sFi-7uc6u3-dV7YuK-9DRH37-6oQE3u-9u3TG9-9jbg3J-7ELgDS-5Sgp87-8NXn1u-7ZSBk7-9kytY5-7f1cMC-3sdkMh-8SWLRX-8ebBMm-pfRPHJ-9wsSQW-8iZj4Z-pCaSMa-ejZLDj-7NnKCZ-9PjXb1-92hxHD-7LbXSZ-7cAZuB-7eJgiE-7VKc9d-8Yuun8-9tZReM-dxp7r8-9NH1R4-7QfoWB-7RGWtU-7NCPf9
@ -1,144 +0,0 @@

Translating by Firstadream

Rapid prototyping with docker-compose
========================================

In this write-up we'll look at a Node.js prototype for **finding stock of the Raspberry PI Zero** from three major outlets in the UK.

I wrote the code and deployed it to an Ubuntu VM in Azure within a single evening of hacking. Docker and the docker-compose tool made the deployment and update process extremely quick.

### Remember linking?

If you've already been through the [Hands-On Docker tutorial][1] then you will have experience linking Docker containers on the command line. Linking a Node hit counter to a Redis server on the command line may look like this:

```
$ docker run -d -P --name redis1 redis
$ docker run -d -p 3000:3000 --link redis1:redis hit_counter
```

Now imagine your application has three tiers

- Web front-end
- Batch tier for processing long running tasks
- Redis or mongo database

Explicit linking through `--link` is just about manageable with a couple of containers, but can get out of hand as we add more tiers or containers to the application.

### Enter docker-compose

>Docker Compose logo

The docker-compose tool is part of the standard Docker Toolbox and can also be downloaded separately. It provides a rich set of features to configure all of an application's parts through a plain-text YAML file.

The above example would look like this:

```
version: "2.0"
services:
  redis1:
    image: redis
  hit_counter:
    build: ./hit_counter
    ports:
      - 3000:3000
```

From Docker 1.10 onwards we can take advantage of network overlays to help us scale out across multiple hosts. Prior to this, linking only worked across a single host. The `docker-compose scale` command can be used to bring on more computing power as the need arises.

>View the [docker-compose][2] reference on docker.com

### Real-world example: Raspberry PI Stock Alert

>The new Raspberry PI Zero v1.3 image courtesy of Pimoroni

There is a huge buzz around the Raspberry PI Zero - a tiny microcomputer with a 1GHz CPU and 512MB RAM capable of running full Linux, Docker, Node.js, Ruby and many other popular open-source tools. One of the best things about the PI Zero is that it costs only 5 USD. That also means that stock gets snapped up really quickly.

*If you want to try Docker or Swarm on the PI check out the tutorial below.*

>[Docker Swarm on the PI Zero][3]

### Original site: whereismypizero.com

I found a webpage which used screen scraping to find whether 4-5 of the most popular outlets had stock.

- The site contained a static HTML page
- Issued one XMLHttpRequest per outlet accessing /public/api/
- The server issued the HTTP request to each shop and performed the scraping

Every call to /public/api/ took 3 seconds to execute and using Apache Bench (ab) I was only able to get through 0.25 requests per second.

### Reinventing the wheel

The retailers didn't seem to mind whereismypizero.com scraping their sites for stock, so I set about writing a similar tool from the ground up. I had the intention of handling a much higher number of requests per second through caching and de-coupling the scrape from the web tier. Redis was the perfect tool for the job. It allowed me to set an automatically expiring key/value pair (i.e. a simple cache) and also to transmit messages between Node processes through pub/sub.

>Fork or star the code on Github: [alexellis/pi_zero_stock][4]

If you've worked with Node.js before then you will know it is single-threaded and that any CPU intensive tasks such as parsing HTML or JSON could lead to a slow-down. One way to mitigate that is to use a second worker process and a Redis messaging channel as connective tissue between this and the web tier.

- Web tier
  - Gives 200 for cache hit (Redis key exists for store)
  - Gives 202 for cache miss (Redis key doesn't exist, so issues message)
  - Since we are only ever reading a Redis key the response time is very quick.
- Stock Fetcher
  - Performs HTTP request
  - Scrapes for different types of web stores
  - Updates a Redis key with a cache expire of 60 seconds
  - Also locks a Redis key to prevent too many in-flight HTTP requests to the web stores.

```
version: "2.0"
services:
  web:
    build: ./web/
    ports:
      - "3000:3000"
  stock_fetch:
    build: ./stock_fetch/
  redis:
    image: redis
```

*The docker-compose.yml file from the example.*

Once I had this working locally, deploying to an Ubuntu 16.04 image in the cloud (Azure) took less than 5 minutes. I logged in, cloned the repository and typed in `docker-compose up -d`. That was all it took - rapid prototyping a whole system doesn't get much better. Anyone (including the owner of whereismypizero.com) can deploy the new solution with just two lines:

```
$ git clone https://github.com/alexellis/pi_zero_stock
$ docker-compose up -d
```

Updating the site is easy and just involves a `git pull` followed by a `docker-compose up -d` with the `--build` argument passed along.

If you are still linking your Docker containers manually, try Docker Compose for yourself or my code below:

>Fork or star the code on Github: [alexellis/pi_zero_stock][5]

### Check out the test site

The test site is currently deployed using docker-compose.

>[stockalert.alexellis.io][6]

Preview as of 16th of May 2016

----------
via: http://blog.alexellis.io/rapid-prototype-docker-compose/

作者:[Alex Ellis][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://blog.alexellis.io/author/alex/
[1]: http://blog.alexellis.io/handsondocker
[2]: https://docs.docker.com/compose/compose-file/
[3]: http://blog.alexellis.io/dockerswarm-pizero/
[4]: https://github.com/alexellis/pi_zero_stock
[5]: https://github.com/alexellis/pi_zero_stock
[6]: http://stockalert.alexellis.io/
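The web tier's hit/miss behaviour can be sketched in a few lines. This is a hypothetical illustration using an in-memory dict in place of Redis (the real project uses Node.js and Redis):

```python
import time

cache = {}  # stand-in for Redis: store -> (value, expiry timestamp)

def set_stock(store, value, ttl=60):
    # what the stock fetcher does: write a key that expires after 60 seconds
    cache[store] = (value, time.time() + ttl)

def check_stock(store):
    entry = cache.get(store)
    if entry and entry[1] > time.time():
        return 200, entry[0]   # cache hit: answer straight from the cache
    # cache miss: the real system would publish a pub/sub message asking
    # the stock fetcher to scrape, then return 202 Accepted
    return 202, None

set_stock("pimoroni", "in stock")
print(check_stock("pimoroni"))
print(check_stock("thepihut"))
```

Because the web tier only ever reads a key, its response time stays fast no matter how slow the scraping is.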
sources/tech/20161018 Suspend to Idle.md
Normal file
148
sources/tech/20161018 Suspend to Idle.md
Normal file
@ -0,0 +1,148 @@
|
||||
bjwrkj 翻译中..
|
||||
# Suspend to Idle
|
||||
|
||||
### Introduction
|
||||
|
||||
The Linux kernel supports a variety of sleep states. These states provide power savings by placing the various parts of the system into low power modes. The four sleep states are suspend to idle, power-on standby (standby), suspend to ram, and suspend to disk. These are also referred to sometimes by their ACPI state: S0, S1, S3, and S4, respectively. Suspend to idle is purely software driven and involves keeping the CPUs in their deepest idle state as much as possible. Power-on standby involves placing devices in low power states and powering off all non-boot CPUs. Suspend to ram takes this further by powering off all CPUs and putting the memory into self-refresh. Lastly, suspend to disk gets the greatest power savings through powering off as much of the system as possible, including the memory. The contents of memory are written to disk, and on resume this is read back into memory.
|
||||
|
||||
This blog post focuses on the implementation of suspend to idle. As described above, suspend to idle is a software implemented sleep state. The system goes through a normal platform suspend where it freezes the user space and puts peripherals into low-power states. However, instead of powering off and hotplugging out CPUs, the system is quiesced and forced into an idle cpu state. With peripherals in low power mode, no IRQs should occur, aside from wake related irqs. These wake irqs could be timers set to wake the system (RTC, generic timers, etc), or other sources like power buttons, USB, and other peripherals.
|
||||
|
||||
During freeze, a special cpuidle function is called as processors enter idle. This enter_freeze() function can be as simple as calling the cpuidle enter() function, or can be much more complex. The complexity of the function is dependent on the SoCs requirements and methods for placing the SoC into lower power modes.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
### Platform suspend_ops
|
||||
|
||||
Typically, to support S2I, a system must implement a platform_suspend_ops and provide at least minimal suspend support. This meant filling in at least the valid() function in the platform_suspend_ops. If suspend-to-idle and suspend-to-ram was to be supported, the suspend_valid_only_mem would be used for the valid function.
|
||||
|
||||
Recently, however, automatic support for S2I was added to the kernel. Sudeep Holla proposed a change that would provide S2I support on systems without requiring the implementation of platform_suspend_ops. This patch set was accepted and will be part of the 4.9 release. The patch can be found at: [https://lkml.org/lkml/2016/8/19/474][1]
|
||||
|
||||
With suspend_ops defined, the system will report the valid platform suspend states when the /sys/power/state is read.
|
||||
|
||||
```
|
||||
# cat /sys/power/state
|
||||
```
|
||||
|
||||
freeze mem_
|
||||
|
||||
This example shows that both S0 (suspend to idle) and S3 (suspend to ram) are supported on this platform. With Sudeep’s change, only freeze will show up for platforms which do not implement platform_suspend_ops.
|
||||
|
||||
### Wake IRQ support
|
||||
|
||||
Once the system is placed into a sleep state, the system must receive wake events which will resume the system. These wake events are generated from devices on the system. It is important to make sure that device drivers utilize wake irqs and configure themselves to generate wake events upon receiving wake irqs. If wake devices are not identified properly, the system will take the interrupt and then go back to sleep and will not resume.
|
||||
|
||||
Once devices implement proper wake API usage, they can be used to generate wake events. Make sure DT files also specify wake sources properly. An example of configuring a wakeup-source is the following (arch/arm/boot/dst/am335x-evm.dts):
|
||||
|
||||
```
|
||||
gpio_keys: volume_keys@0 {__
|
||||
compatible = “gpio-keys”;
|
||||
#address-cells = <1>;
|
||||
#size-cells = <0>;
|
||||
autorepeat;
|
||||
|
||||
switch@9 {
|
||||
label = “volume-up”;
|
||||
linux,code = <115>;
|
||||
gpios = <&gpio0 2 GPIO_ACTIVE_LOW>;
|
||||
wakeup-source;
|
||||
};
|
||||
|
||||
switch@10 {
|
||||
label = “volume-down”;
|
||||
linux,code = <114>;
|
||||
gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
|
||||
wakeup-source;
|
||||
};
|
||||
};
|
||||
```
|
||||
|
||||
As you can see, two gpio keys are defined to be wakeup-sources. Either of these keys, when pressed, would generate a wake event during suspend.
|
||||
|
||||
An alternative to DT configuration is if the device driver itself configures wake support in the code using the typical wakeup facilities.
|
||||
|
||||
### Implementation
|
||||
|
||||
### Freeze function
|
||||
|
||||
Systems should define a enter_freeze() function in their cpuidle driver if they want to take full advantage of suspend to idle. The enter_freeze() function uses a slightly different function prototype than the enter() function. As such, you can’t just specify the enter() for both enter and enter_freeze. At a minimum, it will directly call the enter() function. If no enter_freeze() is specified, the suspend will occur, but the extra things that would have occurred if enter_freeze() was present, like tick_freeze() and stop_critical_timings(), will not occur. This results in timer IRQs waking up the system. This will not result in a resume, as the system will go back into suspend after handling the IRQ.
|
||||
|
||||
During suspend, minimal interrupts should occur (ideally none).
|
||||
|
||||
The picture below shows a plot of power usage vs time. The two spikes on the graph are the suspend and the resume. The small periodic spikes before and after the suspend are the system exiting idle to do bookkeeping operations, scheduling tasks, and handling timers. It takes a certain period of time for the system to go back into the deeper idle state due to latency.
|
||||
|
||||

|
||||
Power Usage Time Progression
|
||||
|
||||
The ftrace capture shown below displays the activity on the 4 CPUs before, during, and after the suspend/resume operation. As you can see, during the suspend, no IPIs or IRQs are handled.
|
||||
|
||||

|
||||
|
||||
Ftrace capture of Suspend/Resume
|
||||
|
||||
### Idle State Support
|
||||
|
||||
You must determine which idle states support freeze. During freeze, the power code will determine the deepest idle state that supports freeze. This is done by iterating through the idle states and looking for which states have defined enter_freeze(). The cpuidle driver or SoC specific suspend code must determine which idle states should implement freeze and it must configure them by specifying the freeze function for all applicable idle states for each cpu.
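The selection logic described above can be sketched roughly as follows. This is a hypothetical illustration rather than the kernel code: states are assumed to be ordered shallowest to deepest, and the deepest one providing enter_freeze() wins:

```python
# Each idle state records whether it defines an enter_freeze() callback.
states = [
    {"name": "wfi", "has_enter_freeze": True},
    {"name": "retention", "has_enter_freeze": False},
    {"name": "power-collapse", "has_enter_freeze": True},
]

def deepest_freeze_state(states):
    # iterate shallow -> deep, remembering the last state that supports freeze
    chosen = None
    for state in states:
        if state["has_enter_freeze"]:
            chosen = state
    return chosen

print(deepest_freeze_state(states)["name"])
```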
|
||||
|
||||
As an example, the Qualcomm platform will set the enter_freeze function during the suspend init function in the platform suspend code. This is done after the cpuidle driver is initialized so that all structures are defined and in place.
|
||||
|
||||
### Driver support for Suspend/Resume
|
||||
|
||||
You may encounter buggy drivers during your first successful suspend operation. Many drivers have not had robust testing of suspend/resume paths. You may even find that suspend may not have much to do because pm_runtime has already done everything you would have done in the suspend. Because the user space is frozen, the devices should already be idled and pm_runtime disabled.
|
||||
|
||||
### Testing
|
||||
|
||||
Testing for suspend to idle can be done either manually, or through using something that does an auto suspend (script/process/etc), auto sleep or through something like Android where if a wakelock is not held the system continuously tried to suspend. If done manually, the following will place the system in freeze:
|
||||
|
||||
```
|
||||
/ # echo freeze > /sys/power/state
|
||||
[ 142.580832] PM: Syncing filesystems … done.
|
||||
[ 142.583977] Freezing user space processes … (elapsed 0.001 seconds) done.
|
||||
[ 142.591164] Double checking all user space processes after OOM killer disable… (elapsed 0.000 seconds)
|
||||
[ 142.600444] Freezing remaining freezable tasks … (elapsed 0.001 seconds) done._
|
||||
_[ 142.608073] Suspending console(s) (use no_console_suspend to debug)
|
||||
[ 142.708787] mmc1: Reset 0x1 never completed.
|
||||
[ 142.710608] msm_otg 78d9000.phy: USB in low power mode
|
||||
[ 142.711379] PM: suspend of devices complete after 102.883 msecs
|
||||
[ 142.712162] PM: late suspend of devices complete after 0.773 msecs
|
||||
[ 142.712607] PM: noirq suspend of devices complete after 0.438 msecs
|
||||
< system suspended >
|
||||
….
|
||||
< wake irq triggered >
|
||||
[ 147.700522] PM: noirq resume of devices complete after 0.216 msecs
|
||||
[ 147.701004] PM: early resume of devices complete after 0.353 msecs
|
||||
[ 147.701636] msm_otg 78d9000.phy: USB exited from low power mode
|
||||
[ 147.704492] PM: resume of devices complete after 3.479 msecs
|
||||
[ 147.835599] Restarting tasks … done.
|
||||
/ #
|
||||
```
|
||||
|
||||
In the above example, it should be noted that the MMC driver was responsible for 100ms of that 102.883ms. Some device drivers will still have work to do when suspending. This may be flushing of data out to disk or other tasks which take some time.
|
||||
|
||||
If the system has freeze defined, it will try to suspend the system. If it does not have freeze capabilities, you will see the following:
|
||||
|
||||
```
|
||||
/ # echo freeze > /sys/power/state
|
||||
sh: write error: Invalid argument
|
||||
/ #
|
||||
```
|
||||
|
||||
### Future Developments
|
||||
|
||||
There are two areas where work is currently being done on Suspend to Idle on ARM platforms. The first area was mentioned earlier in the platform_suspend_ops prerequisite section. The work to always allow for the freeze state was accepted and will be part of the 4.9 kernel. The other area that is being worked on is the freeze_function support.
|
||||
|
||||
The freeze_function implementation is currently required if you want the best response/performance. However, since most SoCs will use the ARM cpuidle driver, it makes sense for the ARM cpuidle driver to implement its own generic freeze_function. And in fact, ARM is working to add this generic support. A SoC vendor should only have to implement specialized freeze_functions if they implement their own cpuidle driver or require additional provisioning before entering their deepest freezable idle state.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linaro.org/blog/suspend-to-idle/
|
||||
|
||||
作者:[Andy Gross][a]
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linaro.org/author/andygross/
|
||||
[1]:https://lkml.org/lkml/2016/8/19/474
|
@ -1,201 +0,0 @@
|
||||
zpl1025
|
||||
How To Setup A WiFi Network In Arch Linux Using Terminal
|
||||
===
|
||||
|
||||

|
||||
|
||||
If you're using [Linux distro][16] other than [Arch][15] CLI then it's one of the toughest tasks to setup WiFi on [Arch Linux][14] using terminal. Though the process is slightly straight forward. In this article, I'll walk you newbies through the step-by-step setup guide to connect your Arch Linux to your WiFi network.There are a lot of programs to setup a wireless connection in Linux, we could use **ip** and **iw** to configure an Internet connection, but it would be a little complicated for newbies. So we'll use **netctl**, it's a cli-based tool used to configure and manage network connections via profiles.
|
||||
|
||||
Note: You must be root for all the configurations, also you can use sudo.
|
||||
|
||||
### Scanning Network
|
||||
|
||||
Run the command to know the name of your network interface -
|
||||
|
||||
```
|
||||
iwconfig
|
||||
```
|
||||
|
||||
Run the command -
|
||||
|
||||
```
|
||||
ip link set _interface_ up
|
||||
```
|
||||
|
||||
Run the command to search the available WiFi networks. Now move down to look for your WiFi network.
|
||||
|
||||
```
|
||||
iwlist interface scan | less
|
||||
```
|
||||
|
||||
**Note:** Where interface is your network interface that you found using the above **iwconfig** command.
|
||||
|
||||
Run the command -
|
||||
|
||||
```
|
||||
ip link set interface down
|
||||
```
|
||||
|
||||
### Setup A Wi-Fi Using netctl:
|
||||
|
||||
Before configure a connection with netctl you must check the compatibility of your network card with Linux.
|
||||
|
||||
1. Run the command:
|
||||
|
||||
```
|
||||
lspci -k
|
||||
```
|
||||
|
||||
This command is to check if kernel loaded the driver for wireless card. The output must look like this:
|
||||
|
||||
[][12]
|
||||
|
||||
If the kernel didn't load the driver, you must install it using an Ethernet connection. Here is the Official Linux Wireless Wiki: [https://wireless.wiki.kernel.org/][11]
|
||||
|
||||
If your wireless card is compatible with Linux, you can start with **netctl configuration**.
|
||||
|
||||
**netctl** works with profiles, profile is a file that contains information about connection. A profile can be created by the hard way or the easy way.
|
||||
|
||||
### The Easy Way – Wifi-menu
|
||||
|
||||
If you want use wifi-menu, dialog must be installed.
|
||||
|
||||
1. Run the command: wifi-menu
|
||||
2. Select your Network
|
||||
[][1] |
|
||||
3. Type the correct password and wait.
|
||||
[][9]
|
||||
|
||||
If you don't have a failed connection message, then you can prove it typing the command:
|
||||
|
||||
```
|
||||
ping -c 3 www.google.com
|
||||
```
|
||||
|
||||
Hurrah! If you're watching it pinging, then the network is setup successfully. You're now connected to WiFi network in Arch Linux. If you've any error then follow the above steps again. Perhaps you've missed something to do.
|
||||
|
||||
### The Hard Way

Compared to the wifi-menu method above, this method is a little harder; that's why I call it the hard way. With the command above, the network profile was set up automatically. With this method, we'll set up the profile manually. But don't worry, it is not going to be much more complicated. So let's get started!

1. The first thing you must do is find the name of your interface. It is usually wlan0 or wlp2s0, but there are many exceptions. To find the name of your interface, type the command `iwconfig` and note it down.

[][8]
2. Run the command:

```
cd /etc/netctl/examples
```

In this subdirectory you can see different profile examples.

3. Copy your profile example to **_/etc/netctl/your_profile_**:

```
cp /etc/netctl/examples/wireless-wpa /etc/netctl/your_profile
```

4. You can view the profile content by typing `cat /etc/netctl/your_profile`.

[][7]
5. Edit the following fields of your profile using vi or nano:

```
nano /etc/netctl/your_profile
```

 1. **Interface**: for example, wlan0
 2. **ESSID**: the name of your Internet connection
 3. **key**: the password of your Internet connection

**Note:** If you don't know how to use nano: just edit your text; when you finish, press Ctrl+O and Enter to save, then Ctrl+X to exit.

[][6]
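For reference, an edited profile based on the wireless-wpa example might look roughly like this (the values below are placeholders — substitute your own interface name, ESSID, and key):

```
Description='A simple WPA encrypted wireless connection'
Interface=wlan0
Connection=wireless
Security=wpa
ESSID='MyNetwork'
Key='MyPassphrase'
IP=dhcp
```

Quoting the ESSID and Key protects names that contain spaces or special characters.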
### Running netctl

1. Run the commands:

```
cd /etc/netctl
ls
```

You should see the profile created by wifi-menu, for example wlan0-SSID; or, if you used the hard way, you should see the profile you created yourself.
2. Start your connection profile by typing the command `netctl start your_profile`.

3. Test your connection by typing:

```
ping -c 3 www.google.com
```

The output should look like this:

[][5]
4. Finally, run the following command to enable the profile at boot:

```
netctl enable your_profile
```

This will create and enable a systemd service that starts when the computer boots. So it's time to shout hurrah! You've set up a WiFi network in your Arch Linux.
### Other Utilities

You can also use other programs to set up a wireless connection. For example:

iw

1. `iw dev wlan0 link` – status
2. `iw dev wlan0 scan` – scan for networks
3. `iw dev wlan0 connect your_essid` – connect to an open network
4. `iw dev wlan0 connect your_essid key your_key` – connect to a WEP-encrypted network using a hexadecimal key

wpa_supplicant

[https://wiki.archlinux.org/index.php/WPA_supplicant][4]

Wicd

[https://wiki.archlinux.org/index.php/wicd][3]

NetworkManager

[https://wiki.archlinux.org/index.php/NetworkManager][2]
### Conclusion

So there you go! I have covered three ways to connect to a WiFi network in **Arch Linux**. One thing I want to stress here: when you execute the first command, note down the name of the interface. In the later commands where we scan for networks, don't type "interface" literally but the name of your interface, such as wlan0 or wlp2s0 (the one you got from the previous command). If you have any problem, talk to me in the comment section below. Also, don't forget to share this article with your friends on social media. Thank you!
--------------------------------------------------------------------------------

via: http://www.linuxandubuntu.com/home/how-to-setup-a-wifi-in-arch-linux-using-terminal

作者:[Mohd Sohail][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.linuxandubuntu.com/contact-us.html
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-to-setup-wifi-in-arch_orig.png
[2]:https://wiki.archlinux.org/index.php/NetworkManager
[3]:https://wiki.archlinux.org/index.php/wicd
[4]:https://wiki.archlinux.org/index.php/WPA_supplicant
[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/check-internet-connection-in-arch-linux_orig.png
[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edit-network-profile-in-arch_orig.png
[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/view-network-profile-in-arch-linux_orig.png
[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/scan-wifi-networks-in-arch-linux-cli_orig.png
[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/wifi-menu-type-wifi-password-in-arch_orig.png?605
[10]:http://www.linuxandubuntu.com/home/5-best-arch-linux-based-linux-distributions
[11]:https://wireless.wiki.kernel.org/
[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/arch-wifi-find-kernel-compatibility_orig.png
[13]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
[14]:http://www.linuxandubuntu.com/home/category/arch-linux
[15]:http://www.linuxandubuntu.com/home/arch-linux-take-your-linux-knowledge-to-next-level-review
[16]:http://linuxandubuntu.com/home/category/distros
# Applying the Linus Torvalds “Good Taste” Coding Requirement

In [a recent interview with Linus Torvalds][1], the creator of Linux, at approximately 14:20 in the interview, he makes a quick point about coding with “good taste”. Good taste? The interviewer prodded him for details, and Linus came prepared with illustrations.

He presented a code snippet. But this wasn’t “good taste” code. This snippet was an example of poor taste, in order to provide some initial contrast.



It’s a function, written in C, that removes an object from a linked list. It contains 10 lines of code.

He called attention to the if-statement at the bottom. It was _this_ if-statement that he criticized.

I paused the video and studied the slide. I had recently written very similar code. Linus was effectively saying I had poor taste. I swallowed my pride and continued the video.
Linus explained to the audience, as I already knew, that when removing an object from a linked list, there are two cases to consider. If the object is at the start of the list, there is a different process for its removal than if it is in the middle of the list. And this is the reason for the “poor taste” if-statement.

But if he admits it is necessary, then why is it so bad?

Next he revealed a second slide to the audience. This was his example of the same function, but written with “good taste”.



The original 10 lines of code had now been reduced to 4.

But it wasn’t the line count that mattered. It was that if-statement. It’s gone. No longer needed. The code has been refactored so that, regardless of the object’s position in the list, the same process is applied to remove it.

Linus explained the new code, the elimination of the edge case, and that was it. The interview then moved on to the next topic.

I studied the code for a moment. Linus was right. The second slide _was_ better. If this was a test to determine good taste from poor taste, I would have failed. The thought that it might be possible to eliminate that conditional statement had never occurred to me. And I had written it more than once, since I commonly work with linked lists.
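The two slides are embedded as images above, but the technique they illustrate is well known. Here is a sketch of both versions; the names and exact code are mine, not Linus's:

```c
#include <assert.h>
#include <stddef.h>

typedef struct node {
    int value;
    struct node *next;
} node;

/* "Poor taste": removing the head is a special case. */
void remove_entry_plain(node **head, node *entry) {
    node *prev = NULL;
    node *walk = *head;
    while (walk != entry) {          /* find the entry, remembering its predecessor */
        prev = walk;
        walk = walk->next;
    }
    if (prev == NULL)
        *head = entry->next;         /* entry was the first element */
    else
        prev->next = entry->next;    /* entry was in the middle */
}

/* "Good taste": an indirect pointer makes both cases identical. */
void remove_entry_tasteful(node **head, node *entry) {
    node **indirect = head;          /* points at the slot we may rewrite */
    while (*indirect != entry)
        indirect = &(*indirect)->next;
    *indirect = entry->next;         /* works for head and middle alike */
}
```

The trick is to walk a pointer-to-pointer: the slot that holds the pointer to the entry is updated directly, so the head pointer is no longer a special case.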
What’s good about this illustration isn’t just that it teaches you a better way to remove an item from a linked list, but that it makes you consider that the code you’ve written, the little algorithms you’ve sprinkled throughout the program, may have room for improvement in ways you’ve never considered.

So this was my focus as I went back and reviewed the code in my most recent project. Perhaps it was serendipitous that it also happened to be written in C.

To the best of my ability to discern, the crux of the “good taste” requirement is the elimination of edge cases, which tend to reveal themselves as conditional statements. The fewer conditions you test for, the better your code “tastes”.

Here is one particular example of an improvement I made that I wanted to share.
### Initializing Grid Edges

Below is an algorithm I wrote to initialize the points along the edge of a grid, which is represented as a multidimensional array: grid[rows][cols].

Again, the purpose of this code was to initialize only the values of the points that reside on the edge of the grid: the top row, bottom row, left column, and right column.

To accomplish this, I initially looped over every point in the grid and used conditionals to test for the edges. This is what it looked like:
```
for (r = 0; r < GRID_SIZE; ++r) {
    for (c = 0; c < GRID_SIZE; ++c) {

        // Top Edge
        if (r == 0)
            grid[r][c] = 0;

        // Left Edge
        if (c == 0)
            grid[r][c] = 0;

        // Right Edge
        if (c == GRID_SIZE - 1)
            grid[r][c] = 0;

        // Bottom Edge
        if (r == GRID_SIZE - 1)
            grid[r][c] = 0;
    }
}
```
Even though it works, in hindsight, there are some issues with this construct.

1. Complexity: the use of 4 conditional statements inside 2 nested loops seems overly complex.
2. Efficiency: given that GRID_SIZE has a value of 64, this loop performs 4,096 iterations in order to set values for only the 256 edge points.

Linus would probably agree, this is not very _tasty_.

So I did some tinkering with it. After a little bit I was able to reduce the complexity to a single for-loop containing four conditionals. It was only a slight improvement in complexity, but a large improvement in performance, because it performed only 256 loop iterations, one for each point along the edge.
```
for (i = 0; i < GRID_SIZE * 4; ++i) {

    // Top Edge
    if (i < GRID_SIZE)
        grid[0][i] = 0;

    // Right Edge
    else if (i < GRID_SIZE * 2)
        grid[i - GRID_SIZE][GRID_SIZE - 1] = 0;

    // Left Edge
    else if (i < GRID_SIZE * 3)
        grid[i - (GRID_SIZE * 2)][0] = 0;

    // Bottom Edge
    else
        grid[GRID_SIZE - 1][i - (GRID_SIZE * 3)] = 0;
}
```
An improvement, yes. But it looked really ugly. It’s not exactly code that is easy to follow. Based on that alone, I wasn’t satisfied.

I continued to tinker. Could this really be improved further? In fact, the answer was _YES_. And what I eventually came up with was so astoundingly simple and elegant that I honestly couldn’t believe it took me this long to find it.

Below is the final version of the code. It has _one for-loop_ and _no conditionals_. Moreover, the loop performs only 64 iterations. It vastly improves both complexity and efficiency.
```
for (i = 0; i < GRID_SIZE; ++i) {

    // Top Edge
    grid[0][i] = 0;

    // Bottom Edge
    grid[GRID_SIZE - 1][i] = 0;

    // Left Edge
    grid[i][0] = 0;

    // Right Edge
    grid[i][GRID_SIZE - 1] = 0;
}
```
This code initializes four different edge points on each loop iteration. It’s not complex. It’s highly efficient. It’s easy to read. Compared to the original version, and even the second version, it is like night and day.

I was quite satisfied.
--------------------------------------------------------------------------------

via: https://medium.com/@bartobri/applying-the-linus-tarvolds-good-taste-coding-requirement-99749f37684a

作者:[Brian Barto][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://medium.com/@bartobri?source=post_header_lockup
[1]:https://www.ted.com/talks/linus_torvalds_the_mind_behind_linux
209
sources/tech/20161030 I dont understand Pythons Asyncio.md
Normal file
Translating by firstadream

# I don't understand Python's Asyncio

Recently I started looking into Python's new [asyncio][4] module a bit more. The reason for this is that I needed to do something that works better with evented IO, and I figured I might give the new hot thing in the Python world a try. Primarily what I learned from this exercise is that it's a much more complex system than I expected, and I am now at the point where I am very confident that I do not know how to use it properly.

It's not conceptually hard to understand, and it borrows a lot from Twisted, but it has so many elements that play into it that I'm no longer sure how the individual bits and pieces are supposed to go together. Since I'm not clever enough to actually propose anything better, I just figured I'd share my thoughts about what confuses me instead, so that others might be able to use that in some capacity to understand it.
### The Primitives

`asyncio` is supposed to implement asynchronous IO with the help of coroutines. Originally implemented as a library around the `yield` and `yield from` expressions, it's now a much more complex beast, as the language evolved at the same time. So here is the current set of things that you need to know exist:

* event loops
* event loop policies
* awaitables
* coroutine functions
* old style coroutine functions
* coroutines
* coroutine wrappers
* generators
* futures
* concurrent futures
* tasks
* handles
* executors
* transports
* protocols
In addition the language gained a few special methods that are new:

* `__aenter__` and `__aexit__` for asynchronous `with` blocks
* `__aiter__` and `__anext__` for asynchronous iterators (async loops and async comprehensions). For extra fun, that protocol has already changed once: in 3.5 it returns an awaitable (a coroutine); in Python 3.6 it will return a newfangled async generator.
* `__await__` for custom awaitables

That's quite a bit to know, and the documentation covers those parts. However, here are some notes I made on some of those things to understand them better:
### Event Loops

The event loop in asyncio is a bit different than you would expect at first look. On the surface it looks like each thread has one event loop, but that's not really how it works. Here is how I think this works:

* if you are on the main thread, an event loop is created when you call `asyncio.get_event_loop()`
* if you are on any other thread, a runtime error is raised from `asyncio.get_event_loop()`
* you can at any point call `asyncio.set_event_loop()` to bind an event loop to the current thread. Such an event loop can be created with the `asyncio.new_event_loop()` function.
* event loops can be used without being bound to the current thread.
* `asyncio.get_event_loop()` returns the thread-bound event loop; it does not return the currently running event loop.

The combination of these behaviors is super confusing for a few reasons. First of all, you need to know that these functions are delegates to the underlying event loop policy, which is set globally. The default is to bind the event loop to the thread. Alternatively one could, in theory, bind the event loop to a greenlet or something similar if one so desired. However, it's important to know that library code does not control the policy and as such cannot assume that asyncio will scope to a thread.
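A minimal sketch of the thread-binding behaviors above, using only the calls just mentioned: create a loop, bind it to the current thread, run a coroutine on it, and clean up.

```python
import asyncio

async def compute():
    # a trivial coroutine; real code would await actual IO here
    await asyncio.sleep(0)
    return 42

# Create a fresh loop, bind it to this thread, run, then close it.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    result = loop.run_until_complete(compute())
    print(result)
finally:
    loop.close()
```

Note that nothing here forces the loop and the thread binding to agree; `run_until_complete` would work on an unbound loop just as well, which is part of the confusion described above.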
Secondly, asyncio does not require event loops to be bound to a context through the policy. An event loop can work just fine in isolation. However, this is the first problem for library code, as a coroutine or something similar does not know which event loop is responsible for scheduling it. This means that if you call `asyncio.get_event_loop()` from within a coroutine, you might not get back the event loop that is running you. This is also the reason why all APIs take an optional explicit loop parameter. So, for instance, to figure out which coroutine is currently running, one cannot invoke something like this:
```
def get_task():
    loop = asyncio.get_event_loop()
    try:
        # Task.current_task() is the 3.5-era classmethod for this lookup.
        return asyncio.Task.current_task(loop)
    except RuntimeError:
        return None
```
Instead, the loop has to be passed explicitly. This furthermore requires you to pass the loop through explicitly everywhere in library code, or very strange things will happen. I'm not sure what the thinking behind that design is, but if this is not fixed (for instance, by having `get_event_loop()` return the actually running loop), then the only other change that makes sense is to explicitly disallow explicit loop passing and require the loop to be bound to the current context (thread etc.).

Since the event loop policy does not provide an identifier for the current context, it is also impossible for a library to "key" to the current context in any way. There are also no callbacks that would permit hooking the teardown of such a context, which further limits what can realistically be done.
### Awaitables and Coroutines

In my humble opinion, the biggest design mistake of Python was to overload iterators so much. They are now being used not just for iteration but also for various types of coroutines. One of the biggest design mistakes of iterators in Python is that `StopIteration` bubbles up if not caught. This can cause very frustrating problems where an exception somewhere can cause a generator or coroutine elsewhere to abort. This is a long-running issue that Jinja, for instance, has to fight with. The template engine internally renders into a generator, and when a template for some reason raises a `StopIteration`, the rendering just ends there.

Python is slowly learning the lesson of overloading this system so much. First of all, in 3.something, the asyncio module landed and did not have language support. So it was decorators and generators all the way down. To implement the `yield from` support and more, `StopIteration` was overloaded once more. This led to surprising behavior like this:
```
>>> def foo(n):
...     if n in (0, 1):
...         return [1]
...     for item in range(n):
...         yield item * 2
...
>>> list(foo(0))
[]
>>> list(foo(1))
[]
>>> list(foo(2))
[0, 2]
```
No error, no warning. Just not the behavior you expect. This is because a `return` with a value from a function that is a generator actually raises a `StopIteration` with a single arg that is not picked up by the iterator protocol, but is just handled in the coroutine code.
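You can watch the return value being smuggled through `StopIteration` by driving the generator by hand; the `list()` calls above simply swallow it:

```python
def foo(n):
    if n in (0, 1):
        return [1]          # inside a generator, this raises StopIteration([1])
    for item in range(n):
        yield item * 2

gen = foo(0)
try:
    next(gen)
    value = None
except StopIteration as exc:
    value = exc.value       # the "lost" return value rides on the exception
print(value)
```

This is exactly the channel that `yield from` (and the coroutine machinery built on it) uses to deliver a coroutine's result.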
With 3.5 and 3.6 a lot changed, because now, in addition to generators, we have coroutine objects. Instead of making a coroutine by wrapping a generator, there is now a separate object type which is a coroutine directly. It's created by prefixing a function with `async`. For instance, `async def x()` will make such a coroutine. Now in 3.6 there will be separate async generators that will raise `StopAsyncIteration` to keep things apart. Additionally, with Python 3.5 and later there is now a future import (`generator_stop`) that will raise a `RuntimeError` if code raises `StopIteration` in an iteration step.

Why am I mentioning all this? Because the old stuff does not really go away. Generators still have `send` and `throw`, and coroutines still largely behave like generators. That is a lot of stuff you need to know for quite some time going forward.
To unify a lot of this duplication, we have a few more concepts in Python now:

* awaitable: an object with an `__await__` method. This is, for instance, implemented by native coroutines and old-style coroutines, and some others.
* coroutine function: a function that returns a native coroutine. Not to be confused with a function returning a coroutine.
* coroutine: a native coroutine. Note that old asyncio coroutines are not considered coroutines by the current documentation, as far as I can tell. At the very least, `inspect.iscoroutine` does not consider them coroutines. They are, however, picked up by the future/awaitable branches.

Particularly confusing is that `asyncio.iscoroutinefunction` and `inspect.iscoroutinefunction` do different things. The same goes for `inspect.iscoroutine` and `inspect.iscoroutinefunction`. Note that even though inspect does not know anything about asyncio legacy coroutine functions in the type check, it is apparently aware of them when you check for awaitable status, even though they do not conform to `__await__`.
### Coroutine Wrappers

Whenever you run `async def`, Python invokes a thread-local coroutine wrapper. It's set with `sys.set_coroutine_wrapper`, and it's a function that can wrap the coroutine. It looks a bit like this:
```
>>> import sys
>>> sys.set_coroutine_wrapper(lambda x: 42)
>>> async def foo():
...     pass
...
>>> foo()
__main__:1: RuntimeWarning: coroutine 'foo' was never awaited
42
```
In this case I never actually invoke the original function, and this just gives you a hint of what the wrapper can do. As far as I can tell, this is always thread-local, so if you swap out the event loop policy, you need to figure out separately how to make this coroutine wrapper sync up with the same context, if that's something you want to do. New threads spawned will not inherit that flag from the parent thread.

This is not to be confused with the asyncio coroutine wrapping code.
### Awaitables and Futures

Some things are awaitables. As far as I can see, the following things are considered awaitable:

* native coroutines
* generators that have the fake `CO_ITERABLE_COROUTINE` flag set (we will cover that)
* objects with an `__await__` method

Essentially these are all objects with an `__await__` method, except that the generators don't have one, for legacy reasons. Where does the `CO_ITERABLE_COROUTINE` flag come from? It comes from a coroutine wrapper (not to be confused with `sys.set_coroutine_wrapper`), namely `@asyncio.coroutine`. That, through some indirection, will wrap the generator with `types.coroutine` (not to be confused with `types.CoroutineType` or `asyncio.coroutine`), which will re-create the internal code object with the additional flag `CO_ITERABLE_COROUTINE`.
So now that we know what those things are, what are futures? First we need to clear up one thing: there are actually two (completely incompatible) types of futures in Python 3: `asyncio.futures.Future` and `concurrent.futures.Future`. One came before the other, but both are still used, even within asyncio. For instance `asyncio.run_coroutine_threadsafe()` will dispatch a coroutine to an event loop running in another thread, but it will then return a `concurrent.futures.Future` object instead of an `asyncio.futures.Future` object. This makes sense because only the `concurrent.futures.Future` object is thread safe.
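A minimal sketch of that behavior (the `add` coroutine and the second thread are my own illustration, not part of asyncio):

```python
import asyncio
import concurrent.futures
import threading

async def add(a, b):
    return a + b

# Run an event loop in a second thread.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

# Dispatch a coroutine into that loop from this thread; note the return type.
fut = asyncio.run_coroutine_threadsafe(add(1, 2), loop)
print(isinstance(fut, concurrent.futures.Future))  # True: the thread safe kind
print(fut.result(timeout=5))                       # 3

loop.call_soon_threadsafe(loop.stop)
```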
So now that we know there are two incompatible futures we should clarify what futures are in asyncio. Honestly I'm not entirely sure where the differences are, but I'm going to call this an "eventual" for the moment: an object that will eventually hold a value, and that lets you do some handling with that eventual result while it's still being computed. Some variations of this are called deferreds, others are called promises. What the exact difference is is above my head.
What can you do with a future? You can attach a callback that will be invoked once it's ready, or a callback that will be invoked if the future fails. Additionally you can `await` it (it implements `__await__` and is thus awaitable). Finally, futures can be cancelled.
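Sketched with a hand-made future (the names here are my own):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    seen = []
    # Callback invoked once the future is ready.
    fut.add_done_callback(lambda f: seen.append(f.result()))
    loop.call_soon(fut.set_result, 42)
    value = await fut        # works because Future implements __await__
    await asyncio.sleep(0)   # let the done-callback run
    return value, seen

value, seen = asyncio.run(main())
print(value, seen)  # 42 [42]
```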
So how do you get such a future? By calling `asyncio.ensure_future` on an awaitable object. This will also turn a good old generator into such a future. However if you read the docs you will read that `asyncio.ensure_future` actually returns a `Task`. So what's a task?
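A quick check of that (the coroutine name is illustrative):

```python
import asyncio

async def work():
    return "done"

async def main():
    fut = asyncio.ensure_future(work())
    is_task = isinstance(fut, asyncio.Task)  # the docs are right: it's a Task
    result = await fut                       # and a Task is itself awaitable
    return is_task, result

is_task, result = asyncio.run(main())
print(is_task, result)  # True done
```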
### Tasks

A task is a future that wraps a coroutine in particular. It works like a future, but it also has some extra methods to extract the current stack of the contained coroutine. We already saw tasks mentioned earlier because they are the main way to figure out what an event loop is currently doing, via `Task.current_task`.

There is also a difference in how cancellation works for tasks and futures, but that's beyond the scope of this. Cancellation is its own entire beast. If you are in a coroutine and you know you are currently running, you can get your own task through `Task.current_task` as mentioned, but this requires knowledge of what event loop you are dispatched on, which might or might not be the thread-bound one.

It's not possible for a coroutine to know which loop goes with it. Also the `Task` does not provide that information through a public API. However if you did manage to get hold of a task you can currently access `task._loop` to find your way back to the event loop.
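For instance (a minimal sketch; note that in current Python the spelling is `asyncio.current_task()`, while in the 3.5/3.6 era the same information came from the `Task.current_task()` classmethod):

```python
import asyncio

async def main():
    # Ask the loop which task is currently running us.
    task = asyncio.current_task()
    return isinstance(task, asyncio.Task)

ok = asyncio.run(main())
print(ok)  # True
```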
### Handles

In addition to all of this there are handles. Handles are opaque objects for pending executions that cannot be awaited but can be cancelled. In particular, if you schedule the execution of a call with `call_soon` or `call_soon_threadsafe` (and some others) you get back a handle that you can then use to cancel the execution as a best-effort attempt, but you can't wait for the call to actually take place.
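The best-effort cancellation looks like this (names are my own):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fired = []
    handle = loop.call_soon(fired.append, "ran")
    handle.cancel()          # the call had not run yet, so it never will
    await asyncio.sleep(0)   # give the loop a chance to (not) run it
    return fired

fired = asyncio.run(main())
print(fired)  # []
```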
### Executors

Since you can have multiple event loops, but it's not obvious what the use of more than one of those things per thread would be, the obvious assumption is that a common setup is to have N threads with an event loop each. So how do you inform another event loop about doing some work? You cannot schedule a callback into an event loop in another thread _and_ get the result back. For that you need to use executors instead.

Executors come from `concurrent.futures` for instance, and they allow you to schedule work into threads that are themselves not evented. For instance, you can use `run_in_executor` on the event loop to schedule a function to be called in another thread; the result is then returned as an awaitable asyncio future instead of a concurrent future, unlike what `run_coroutine_threadsafe` would produce. I did not yet have enough mental capacity to figure out why those APIs exist, how you are supposed to use them, and when to use which one. The documentation suggests that the executor stuff could be used to build multiprocess things.
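A small sketch of `run_in_executor` (the blocking function is my own example):

```python
import asyncio
import threading

def blocking_work():
    # Runs in a worker thread of the default executor, not in the event loop.
    return threading.current_thread().name

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, blocking_work)  # None = default executor
    worker = await fut   # awaitable inside the loop, unlike a concurrent future
    return worker

worker = asyncio.run(main())
print(worker != threading.main_thread().name)  # True: it ran elsewhere
```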
### Transports and Protocols

I always thought those would be the confusing things, but they are basically a verbatim copy of the same concepts in Twisted. So read those docs if you want to understand them.
### How to use asyncio

Now that we roughly understand asyncio, here are a few patterns that people seem to use when they write asyncio code:
* pass the event loop to all coroutines. That appears to be what a part of the community is doing. Giving a coroutine knowledge about what loop is going to schedule it makes it possible for the coroutine to learn about its task.
* alternatively you require that the loop is bound to the thread. That also lets a coroutine learn about that. Ideally support both. Sadly the community is already torn about what to do.
* If you want to use contextual data (think thread locals) you are a bit out of luck currently. The most popular workaround is apparently Atlassian's `aiolocals`, which basically requires you to manually propagate contextual information into spawned coroutines since the interpreter does not provide support for this. This means that if you have a utility library spawning coroutines you will lose context.
* Ignore that the old coroutine stuff in Python exists. Use 3.5 only with the new `async def` keyword and co. In particular you will need that anyway to somewhat enjoy the experience, because with older versions you do not have async context managers, which turn out to be very necessary for resource management.
* Learn to restart the event loop for cleanup. This is something that took me longer to realize than I wish it did, but the sanest way to deal with cleanup logic that is written in async code is to restart the event loop a few times until nothing pending is left. Since sadly there is no common pattern to deal with this, you will end up with some ugly workarounds at times. For instance `aiohttp`'s web support also does this pattern, so if you want to combine two cleanup logics you will probably have to reimplement the utility helper that it provides, since that helper completely tears down the loop when it's done. This is also not the first library I saw do this :(
* Working with subprocesses is non-obvious. You need to have an event loop running in the main thread, which I suppose is listening in on signal events and then dispatches them to other event loops. This requires that the loop is notified via `asyncio.get_child_watcher().attach_loop(...)`.
* Writing code that supports both async and sync is somewhat of a lost cause. It also gets dangerous quickly when you start being clever and try to support `with` and `async with` on the same object, for instance.
* If you want to give a coroutine a better name to figure out why it was not being awaited, setting `__name__` doesn't help. You need to set `__qualname__` instead, which is what the error message printer uses.
* Sometimes internal type conversions can screw you over. In particular the `asyncio.wait()` function will make sure all things passed are futures, which means that if you pass coroutines instead you will have a hard time finding out if your coroutine finished or is pending, since the input objects no longer match the output objects. In that case the only really sane thing to do is to ensure that everything is a future upfront.
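The last point can be sketched like this (the `work` coroutine is my own example):

```python
import asyncio

async def work(x):
    return x * 2

async def main():
    fut = asyncio.ensure_future(work(21))  # convert upfront, keep the reference
    done, pending = await asyncio.wait([fut])
    # Because we kept a future, the input and output objects match up again.
    return fut in done, fut.result()

matched, result = asyncio.run(main())
print(matched, result)  # True 42
```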
### Context Data

Aside from the insane complexity and my lack of understanding of how to best write APIs for it, my biggest issue is the complete lack of consideration for context-local data. This is something that the node community learned by now. `continuation-local-storage` exists, but it is widely accepted as having been implemented too late. Continuation-local storage and similar concepts are regularly used to enforce security policies in a concurrent environment, and corruption of that information can cause severe security issues.
The fact that Python does not even have any store at all for this is more than disappointing. I was looking into this in particular because I'm investigating how to best support [Sentry's breadcrumbs][3] for asyncio, and I do not see a sane way to do it. There is no concept of context in asyncio, there is no way to figure out which event loop you are working with from generic code, and without monkeypatching the world this information will not be available.
Node is currently going through the process of [finding a long term solution for this problem][2]. That this is not something to be left ignored can be seen by it being a recurring issue in all ecosystems. It comes up in JavaScript, Python and the .NET environment. The problem [is named async context propagation][1] and solutions go by many names. In Go the context package needs to be used and explicitly passed to all goroutines (not a perfect solution, but at least one). .NET has the best solution in the form of local call contexts. It can be a thread context, a web request context, or something similar, and it propagates automatically unless suppressed. This is the gold standard of what to aim for. Microsoft has had this solved for more than 15 years now, I believe.

I don't know if the ecosystem is still young enough that logical call contexts can be added, but now might still be the time.
### Personal Thoughts

Man, that thing is complex and it keeps getting more complex. I do not have the mental capacity to casually work with asyncio. It requires constantly updating your knowledge with all the language changes, and it has tremendously complicated the language. It's impressive that an ecosystem is evolving around it, but I can't help but get the impression that it will take quite a few more years for it to become a particularly enjoyable and stable development experience.

What landed in 3.5 (the actual new coroutine objects) is great. In particular with the changes that will come up there is a sensible base that I wish had been in earlier versions. The entire mess with overloading generators to be coroutines was a mistake in my mind. With regards to what's in asyncio I'm not sure of anything. It's an incredibly complex thing and super messy internally. It's hard to comprehend how it works in all details: when you can pass a generator, when it has to be a real coroutine, what futures are, what tasks are, how the loop works. And that does not even get to the actual I/O part.

The worst part is that asyncio is not even particularly fast. David Beazley's hacked-up asyncio replacement from his live demo is twice as fast as it. There is an enormous amount of complexity that's hard to understand and reason about, and then it fails on its main promise. I'm not sure what to think about it, but I know at least that I don't understand asyncio enough to feel confident about giving people advice about how to structure code for it.
--------------------------------------------------------------------------------

via: http://lucumr.pocoo.org/2016/10/30/i-dont-understand-asyncio/

Author: [Armin Ronacher][a]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]: http://lucumr.pocoo.org/about/
[1]: https://docs.google.com/document/d/1tlQ0R6wQFGqCS5KeIw0ddoLbaSYx6aU7vyXOkv-wvlM/edit
[2]: https://github.com/nodejs/node-eps/pull/18
[3]: https://docs.sentry.io/learn/breadcrumbs/
[4]: https://docs.python.org/3/library/asyncio.html
@ -1,195 +0,0 @@
Finish Translated
Linux vs. Windows Device Driver Model: Comparing Architectures, APIs, and Development Environments
============================================================================================

API: Application Program Interface
ABI: Application Binary Interface
Device drivers are parts of the operating system that facilitate usage of hardware devices via certain programming interfaces so that software applications can control and operate the devices. As each driver is specific to a particular operating system, you need separate Linux, Windows, or Unix device drivers to enable the use of your device on different computers. This is why, when hiring a driver developer or choosing an R&D service provider, it is important to look at their experience developing drivers for various operating system platforms.

![](http://dz2cdn1.dzone.com/storage/article-thumb/233910-thumb.jpg)

The first step in driver development is to understand the differences in the way each operating system handles its drivers, the underlying driver model and architecture it uses, and the available development tools. For example, the Linux driver model is very different from the Windows one. While Windows facilitates splitting driver development from OS development and combines drivers and the OS via a set of ABI calls, Linux device driver development does not rely on any stable ABI or API, and the driver code is instead incorporated into the kernel. Each of these models has its own advantages and drawbacks, but it is important to know them all if you want to provide comprehensive support for your device.

In this article we will compare Windows and Linux device drivers and explore the differences in architecture, APIs, development, and distribution, in hopes of providing you with a deeper insight into how to start writing device drivers for each operating system.
### 1. Device Driver Architecture

The Windows device driver architecture is different from the one used in Linux, and each has its own pros and cons. The differences are mainly influenced by the fact that Windows is a closed-source OS while Linux is open source. A comparison of the Linux and Windows device driver architectures will help us understand the core differences behind Windows and Linux drivers.

#### 1.1. Windows driver architecture
While the Linux kernel is distributed with its drivers, the Windows kernel does not include device drivers. Instead, modern Windows device drivers are written using the Windows Driver Model (WDM), a model that fully supports plug-and-play and power management, so drivers can be loaded and unloaded as necessary.

Requests from applications are handled by a part of the Windows kernel called the I/O manager. The job of the I/O manager is to transform these requests into I/O Request Packets (IRPs), which are used to identify the request and convey data between the driver layers.

The Windows Driver Model (WDM) provides three kinds of drivers, which form three layers:

- Filter drivers provide optional additional processing of IRPs.
- Function drivers are the main drivers that implement interfaces to communicate with individual devices.
- Bus drivers service the various adapters and bus controllers that host the devices.

An IRP passes through these layers as it travels from the I/O manager down to the hardware. Each layer can handle an IRP on its own and send it back to the I/O manager. At the bottom there is the Hardware Abstraction Layer (HAL), which provides a common interface to physical devices.
#### 1.2. Linux driver architecture

The core difference in the Linux device driver architecture, as compared to the Windows one, is that Linux does not have a standard driver model or cleanly separated layers. Each device driver is implemented as a module that can be loaded into and unloaded from the kernel dynamically. Linux provides means for plug-and-play support and power management so that drivers can use them to manage devices correctly, but this is not a requirement.

Modules export the functions they provide and communicate by calling those functions and passing around arbitrary data structures. Requests from user applications actually come from the filesystem or networking level and are transformed into the necessary data structures. Modules can be stacked in layers, processing requests one after another, with some modules providing a common interface to a whole device family, for example USB devices.

Linux device drivers support three kinds of devices:

- Character devices, which implement a byte-stream interface.
- Block devices, which host filesystems and perform I/O with multibyte blocks of data.
- Network interfaces, used for transmitting data packets through the network.

Linux also has a Hardware Abstraction Layer (HAL) that acts as an interface connecting the hardware and the device drivers.
### 2. Device Driver APIs

Both the Linux and Windows driver APIs are event-driven: the driver code executes only when some event happens, either when a user application wants something from the device, or when the device has some request to report to the operating system.

#### 2.1. Initialization
On Windows, a driver is represented by a DriverObject structure, which is initialized during the execution of the driver entry function. These entry points also register a number of callbacks to react to device addition and removal, driver unloading, and handling of newly incoming IRPs. When a device is connected, Windows creates a device object, which handles all application requests on behalf of the device driver.

As compared to Windows, the lifetime of a Linux device driver is managed by the kernel module's `module_init` and `module_exit` functions, which are called when the module is loaded or unloaded. They are responsible for registering the module to handle device requests using the internal kernel interfaces. The module has to create a device file (or a network interface), specify the numerical identifiers of the devices it wishes to manage, and register a number of callbacks to be called when the user interacts with the device file.
#### 2.2. Naming and claiming devices

##### **Registering devices on Windows**

A Windows device driver is notified about newly connected devices via its AddDevice callback. It then proceeds to create a device object used to identify this particular driver instance for the device. Depending on the driver kind, a device object can be a Physical Device Object (PDO), a Function Device Object (FDO), or a Filter Device Object (FIDO). Device objects can be stacked, with a PDO at the bottom.

Device objects exist for the whole time the device is connected to the computer. A device extension structure can be used to associate global data with a device object.

Device objects can have names of the form **\Device\DeviceName**, which are used by the system to identify and locate them. An application opens a file with such a name using the CreateFile API function, obtaining a handle that can then be used to interact with the device.

However, usually only PDOs have distinct names. Unnamed devices can be accessed via device class interfaces. The device driver registers one or more interfaces identified by 128-bit globally unique identifiers (GUIDs). User applications can then obtain a handle to such a device using the GUIDs.
##### **Registering devices on Linux**

On the Linux platform, user applications access devices via filesystem entries, usually located in the /dev directory. The module creates all the necessary entries during module initialization by calling kernel functions such as `register_chrdev`. This call specifies the callbacks that are invoked in response to an application opening the file (and to further system calls on the returned descriptor, such as read, write, or close); the callbacks are installed by the module in structures like `file_operations` or `block_device_operations`.

The device driver module is responsible for allocating and maintaining any data structures necessary for its operation. A file structure passed into the filesystem callbacks has a `private_data` field that can be used to store a pointer to driver-specific data. The block device and network interface APIs provide similar fields.
While applications on other systems use filesystem nodes to locate devices, Linux internally uses the concept of major and minor device numbers to identify devices and their drivers. A major number identifies the device driver, while a minor number is used by the driver to identify the devices it manages. To manage one or more fixed major numbers, a driver has to register itself first, or ask the system to allocate an unused number for it.

Currently, Linux uses 32-bit values for major-minor pairs, with 12 bits allocated for the major number, allowing up to 4096 distinct drivers. Major-minor pairs are distinct for character and block devices, so a character device and a block device can use the same pair without conflict. Network interfaces are identified by symbolic names like eth0, which are again distinct from the major-minor numbers of both character and block devices.
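The major/minor pair behind a device node can be inspected from user space; a small sketch (assuming `/dev/null` exists, which on Linux is a character device):

```python
import os
import stat

# st_rdev encodes the major/minor pair of the device node.
st = os.stat("/dev/null")
print(stat.S_ISCHR(st.st_mode))                    # True: a character device
print(os.major(st.st_rdev), os.minor(st.st_rdev))  # e.g. 1 3 on typical systems
```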
#### 2.3. Exchanging data

Both Linux and Windows support three ways of transferring data between user-level applications and kernel-level drivers:

- **Buffered I/O**, which uses buffers managed by the kernel. For write operations, the kernel copies data from a user-space buffer into a kernel-allocated buffer and passes it to the device driver. Reads work the same way, with the kernel copying data from a kernel buffer into the buffer provided by the application.
- **Direct I/O**, which does not involve copying. Instead, the kernel pins a user-allocated buffer in physical memory so that it stays there, without being swapped out, while the data transfer is in progress.
- **Memory mapping**, which can also be arranged by the kernel so that the kernel and user-space applications access the same pages of memory through different addresses.
##### **Driver I/O modes on Windows**

Support for buffered I/O is a built-in feature of WDM. The buffer is accessible to the device driver via the `AssociatedIrp.SystemBuffer` field of the IRP structure. The driver simply reads from or writes to this buffer when it needs to communicate with user space.

Direct I/O on Windows is mediated by Memory Descriptor Lists (MDLs). These semi-opaque structures are accessed via the `MdlAddress` field of an IRP. They are used to locate the physical addresses of the buffer allocated by the user application and pinned for the duration of the I/O request.

The third option for data transfer on Windows is called METHOD_NEITHER. In this case the kernel simply passes the virtual addresses of the user-space input and output buffers to the driver, without validating them or guaranteeing that they are mapped into physical memory accessible by the device driver. The device driver is then responsible for handling the details of the data transfer itself.
##### **Driver I/O modes on Linux**

Linux provides a number of functions, such as `clear_user`, `copy_to_user`, `strncpy_from_user`, and some others, to perform buffered data transfers between the kernel and user memory. These functions validate the pointers to the data buffers and handle all the details of the transfer by safely copying the data buffers between memory regions.

However, drivers for block devices operate on entire data blocks of known size, which can be simply moved between the kernel and user address spaces without copying them. This case is handled automatically by the Linux kernel for all block device drivers. The block request queue takes care of transferring data blocks without excess copying, and the Linux system call interface transforms filesystem requests into block requests.

Finally, a device driver can allocate some pages in kernel address space (which are not swappable) and then use the `remap_pfn_range` function to map those pages directly into the address space of the user process. The application can then obtain the virtual address of this buffer and use it to communicate with the device driver.
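The user-space half of that idea can be sketched with Python's `mmap` module (a temporary file stands in here for a driver-exported buffer; the names are my own):

```python
import mmap
import tempfile

# Two user-space mappings of the same file see the same kernel-managed pages,
# so a write through one view is visible through the other without any copy.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"\x00" * mmap.PAGESIZE)
    f.flush()
    a = mmap.mmap(f.fileno(), mmap.PAGESIZE)
    b = mmap.mmap(f.fileno(), mmap.PAGESIZE)
    a[0:5] = b"hello"
    shared = bytes(b[0:5])
    a.close()
    b.close()
print(shared)  # b'hello'
```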
### 3. Device Driver Development Environment

#### 3.1. Device driver frameworks

##### **Windows Driver Kit**

Windows is a closed-source operating system. Microsoft provides the Windows Driver Kit to facilitate Windows device driver development by non-Microsoft vendors. The kit contains everything needed to develop, debug, verify, and package Windows device drivers.

The Windows Driver Model defines a clean framework of interfaces for device drivers. Windows maintains source and binary compatibility for these interfaces. Compiled WDM drivers are generally forward-compatible: that is, an older driver can run on a newer system without recompilation, although of course it cannot access the new features provided by the system. However, drivers are not guaranteed to be backward-compatible.
##### **Linux source code**

In comparison to Windows, Linux is an open-source operating system, so the entire source code of Linux is the SDK for driver development. There is no formal framework for device drivers, but the Linux kernel includes numerous subsystems that provide common services like driver registration. The interfaces to these subsystems are described in kernel header files.

While Linux does have defined interfaces, these interfaces are not stable by design. Linux does not provide any guarantees about forward or backward compatibility. Device drivers have to be recompiled for different kernel versions. The lack of stability guarantees allows rapid development of the Linux kernel, as developers do not have to support older interfaces and can use the best approach to solve the problems at hand.

Such an ever-changing environment does not pose any problems when writing in-tree drivers for Linux, as they are part of the kernel source and are updated in sync with the kernel itself. However, closed-source drivers must be developed separately, out-of-tree, and must be maintained to support different kernel versions. Thus Linux encourages device driver developers to maintain their drivers in-tree.
#### 3.2. Build systems for device drivers

The Windows Driver Kit adds driver development support to Microsoft Visual Studio and includes a compiler used to build driver code. Developing Windows device drivers is not much different from developing a user-space application in the IDE. Microsoft also provides the Enterprise Windows Driver Kit, which enables command-line build environments similar to those of Linux.

Linux uses Makefiles as the build system for both in-tree and out-of-tree device drivers. The Linux build system is quite developed, and usually a device driver needs no more than a handful of lines to produce a working binary. Developers can use any [IDE][5] as long as it can handle the Linux source code base and run `make`, or they can easily compile drivers manually from the terminal.
#### 3.3. Documentation support

Windows has very good documentation support for driver development. The Windows Driver Kit includes documentation and sample driver code, abundant information about kernel interfaces is available via MSDN, and there exist numerous reference books and guides on driver development and Windows internals.

The Linux documentation is not as descriptive, but this is alleviated by the fact that the entire source code of Linux is available to driver developers. The Documentation directory in the source tree documents some of the Linux subsystems, and there are also [multiple books][4] on Linux device driver development and Linux kernel overviews, which go into much more detail.

Linux does not provide designated samples of device drivers, but the source code of existing production drivers is available and can be used as a reference for developing new device drivers.
#### 3.4. Debugging support

Both Linux and Windows have logging facilities that can be used to trace and debug driver code. On Windows one would use the `DbgPrint` function, while on Linux the function is called `printk`. However, not every problem can be resolved by using only logging and the source code. Sometimes breakpoints are more useful, as they allow examining the dynamic behavior of the driver code. Interactive debugging is also essential for studying the causes of crashes.

Windows supports interactive debugging via its kernel-level debugger, WinDbg. This requires connecting two machines via a serial port: one computer runs the debugged kernel, and the other runs the debugger and controls the operating system being debugged. The Windows Driver Kit includes debugging symbols for the Windows kernel, so Windows data structures will be partially visible in the debugger.

Linux also supports interactive debugging by means of KDB and KGDB. Debugging support can be built into the kernel and enabled at boot time. After that, the system can be debugged either directly via a physical keyboard, or from another computer connected via a serial port. KDB offers a simple command-line interface and is the only way to debug the kernel on the same machine. However, KDB lacks source-level debugging support. KGDB provides a more complex interface via a serial port. It enables debugging of the Linux kernel with a standard application debugger like GDB, just like any other user-space application.
### 4. Device Driver Distribution

#### 4.1. Installing device drivers

Drivers installed on Windows are described by text files called INF files, typically stored in the C:\Windows\INF directory. These files are provided by the driver vendor and define which devices are serviced by the driver, where to find the driver binaries, the driver version, and so on.

When a new device is plugged into the computer, Windows looks through the installed drivers and loads an appropriate one. The driver is automatically unloaded as soon as the device is removed.

On Linux, some drivers are built into the kernel and stay permanently loaded. Non-essential drivers are built as kernel modules, which are usually stored in the /lib/modules/kernel-version directory. This directory also contains various configuration files, like modules.dep, which describes the dependencies between kernel modules.

While the Linux kernel can load some of the modules itself at boot time, generally module loading is supervised by user-space applications. For example, the init process may load some modules during system initialization, and the udev daemon is responsible for tracking newly plugged-in devices and loading the appropriate modules for them.
#### 4.2. Updating device drivers

Windows provides a stable binary interface for device drivers, so in some cases there is no need to update the driver binaries together with the system. Any necessary updates are handled by the Windows Update service, which is responsible for locating, downloading, and installing drivers appropriate for the latest version of the system.

However, Linux does not provide a stable binary interface, so it is necessary to recompile and update all necessary device drivers with each kernel update. Obviously, device drivers built into the kernel are updated automatically, but out-of-tree modules pose a slight problem. The task of keeping module binaries up to date is usually solved with [DKMS][3]: a service that automatically rebuilds all registered kernel modules when a new kernel version is installed.
#### 4.3. Security considerations

All Windows device drivers must be digitally signed before Windows will load them. It is okay to use self-signed certificates during development, but driver packages distributed to end users must be signed with valid certificates trusted by Microsoft. Vendors can obtain a Software Publisher Certificate from any trusted certificate authority authorized by Microsoft. This certificate is then cross-signed by Microsoft, and the resulting cross-certificate is used to sign driver packages before release.

The Linux kernel can also be configured to verify the signatures of kernel modules being loaded and to disallow untrusted ones. The set of public keys trusted by the kernel is fixed at build time and is fully configurable. The strictness of the checks performed by the kernel is also configurable at build time, ranging from simply issuing warnings for untrusted modules to refusing to load anything of questionable validity.
### 5. Conclusion

As shown above, the Windows and Linux device driver infrastructures have some things in common, such as the event-driven approach to their APIs, but many more details are rather different. The most prominent differences stem from the fact that Windows is a closed-source operating system developed by a commercial corporation. This is what makes the well-documented, stable driver ABI and formal frameworks a requirement on Windows, while on Linux the source code itself serves as a helpful complement instead. Documentation support is also more developed in the Windows environment, as Microsoft has the resources necessary to maintain it.

On the other hand, Linux does not constrain device driver developers with frameworks, and the source code of the kernel and of production device drivers can be helpful whenever needed. The lack of interface stability also has its benefits, as it means that the latest device drivers always use the latest interfaces, and the kernel itself carries a smaller backward-compatibility burden, which results in cleaner code.

Knowing these differences, as well as the specifics of each system, is a crucial first step in providing effective driver development and support for your devices. We hope that this comparison of Windows and Linux device driver development has helped you understand them, and will serve as a great starting point in your study of the device driver development process.
--------------------------------------------------------------------------------

via: http://xmodulo.com/linux-vs-windows-device-driver-model.html

Author: [Dennis Turpitka][a]

Translator: [译者ID](https://github.com/FrankXinqi(&YangYang))

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)

[a]: http://xmodulo.com/author/dennis
[1]: http://xmodulo.com/linux-vs-windows-device-driver-model.html?format=pdf
[2]: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=PBHS9R4MB9RX4
[3]: http://xmodulo.com/build-kernel-module-dkms-linux.html
[4]: http://xmodulo.com/go/linux_device_driver_books
[5]: http://xmodulo.com/good-ide-for-c-cpp-linux.html