mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-03-24 02:20:09 +08:00
Merge branch 'master' of github.com:LCTT/TranslateProject
This commit is contained in: commit 7c01190709
75
Dict.md
@ -14,55 +14,55 @@
|
||||
### 2.
|
||||
#### B ####
|
||||
### 1. Backbone:骨干
|
||||
> 是网络中作为所有流量基本通道的那部分,需要非常高的带宽。骨干网服务提供商连接着许多企业子网和较小的服务提供商的网络;企业骨干网则连接着许多局域网和数据中心。
|
||||
|
||||
### 2. B channel(Bearer channel):承载信道
|
||||
> 承载信道(Bearer Channel),也叫做 B channel,是一个全双工的 64 kbps DS0 时隙,用于在综合业务数字网(ISDN)上承载模拟语音或数字数据。
|
||||
|
||||
### 3. Backchannel:反向通道
|
||||
> 是指在进行其他实时在线会话的同时,利用联网的计算机维持另一个实时在线会话的做法。
|
||||
|
||||
### 4. Back End:后台
|
||||
>在一个计算机系统中,是指为一个前台作业提供服务的一个节点或软件程序。前台直接影响用户,后台可能与其他系统相连接,如数据库和其它系统。
|
||||
> 在一个计算机系统中,是指为一个前台作业提供服务的一个节点或软件程序。前台直接影响用户,后台可能与其他系统相连接,如数据库和其它系统。
|
||||
|
||||
### 5. Back-haul:回程线路
|
||||
> 是一种通信信道,它把信息流传送到比最终目的地更远的地方,然后再送回来。之所以这样做,是因为经更远的中转地传输的代价往往比直接发送到目的地低得多。
|
||||
|
||||
### 6. Backoff:退避
|
||||
> 是指主机在带有 MAC 协议的网络中经历一次冲突之后、尝试重发之前所等待的时间。这个退避时间通常是随机的,以尽量减小同一批节点再次冲突的可能性。每次冲突后加大退避时间也有助于避免反复碰撞,尤其是在网络负载很重的时候。
|
||||
|
||||
### 7. Backplane:背板
|
||||
> 是接口处理器或接口卡与路由器或交换机机箱内部的数据总线和电源分配总线之间的物理连接。
|
||||
|
||||
### 8. Back Pressure:背压
|
||||
> 在计算机系统中,是指网络拥塞信息沿着与数据流相反的方向在互联网络中向上游传播。
|
||||
|
||||
### 9. Balun(balanced-unbalanced):不平衡变压器
|
||||
> 即“平衡-非平衡”(balanced-unbalanced)的缩写。不平衡变压器是一种用来在平衡电信号和非平衡电信号之间进行转换的设备。
|
||||
|
||||
### 10. Baseband:基带
|
||||
> 一种只使用单一载波频率的网络技术。在基带网络中,信息以数字形式通过传输介质,在单一的复用信号通道上传送。
|
||||
|
||||
### 11. Bastion Host:防御主机
|
||||
> 是位于内部网络和外部网络之间的网关,用来抵御针对内部网络的攻击。该系统位于非军事区(DMZ)面向公网的一侧,不受防火墙或过滤路由器保护,完全暴露在攻击之下。
|
||||
|
||||
### 12. Bc(Committed Burst):承诺突发量
|
||||
> 帧中继系统中的术语,指帧中继互联网络承诺在特定时间段内、经由某条数据链路连接(DLC)接收并传输的最大数据量(以比特表示)。
|
||||
|
||||
### 13. BCP(Best Current Practices):最优现行方法
|
||||
> 是 IETF RFC 的一个子系列,用于描述 Internet 上的最佳配置技术。
|
||||
|
||||
### 14. BCU(Balanced Configuration Unit):平衡配置单元
|
||||
> 是 IBM 的一种由软件和硬件组成的集成解决方案。BCU 经过集成和测试,作为数据仓库系统的预配置构建块使用。
|
||||
|
||||
### 15. BECN(Backward Explicit Congestion Notification):后向显式拥塞通知
|
||||
> 是帧中继报头中的一个 1 比特字段,用来通知接收到该帧的设备(交换机和数据终端设备):在该帧传输方向的相反方向上发生了拥塞。帧中继交换机和数据终端设备可以根据 BECN 位降低该方向上的数据发送速率。
|
||||
|
||||
### 16. BER(Bit Error Rate):误码率
|
||||
> 是接收到的比特中出现错误的比例。BER 通常用十的负幂来表示。
|
||||
|
||||
### 17. BIP(Bit Interleaved Parity):位交叉奇偶校验
|
||||
> ATM 中的术语,是一种常用的链路差错检测方法。在链路开销中嵌入一个覆盖前一个块或帧的校验位或校验字,这样有效载荷中的比特错误就能够被检测出来,并作为维护信息上报。
|
||||
|
||||
#### C ####
|
||||
|
||||
@ -90,69 +90,74 @@
|
||||
> 指 Linux 内核的 live patch 支持。
|
||||
|
||||
### 2. LTS(Long Term Support):长期支持
|
||||
>该缩写词多见于操作系统发行版或者软件发行版名称中,表明该版本属于长期支持版。
|
||||
> 该缩写词多见于操作系统发行版或者软件发行版名称中,表明该版本属于长期支持版。
|
||||
|
||||
#### M ####
|
||||
|
||||
#### N ####
|
||||
|
||||
#### O ####
|
||||
### 1. Orchestration:编排
|
||||
> 描述复杂计算机系统、中间件(middleware)和业务的自动化的安排、协调和管理(来自维基百科)。
|
||||
|
||||
#### P ####
|
||||
### 1.P-code(Pseudo-code):伪代码语言
|
||||
>一种解释型语言,执行方式介于编译型语言和解释型语言之间。和解释型语言一样,伪代码编程语言无需编译,在执行时自动转换成二进制形式。然而,和编译型语言不同的是,这种可执行的二进制文件是以伪代码的形式而不是机器语言的形式存储的。伪代码语言的例子有 Java、Python 和 REXX/Object REXX。
|
||||
### 1. P-code(Pseudo-code):伪代码语言
|
||||
> 一种解释型语言,执行方式介于编译型语言和解释型语言之间。和解释型语言一样,伪代码编程语言无需编译,在执行时自动转换成二进制形式。然而,和编译型语言不同的是,这种可执行的二进制文件是以伪代码的形式而不是机器语言的形式存储的。伪代码语言的例子有 Java、Python 和 REXX/Object REXX。
|
||||
|
||||
### 2. PAM(Pluggable Authentication Modules):可插拔认证模块
|
||||
>用于系统安全性的可替换的用户认证模块,它允许在不知道将使用何种认证方案的情况下进行编程。这允许将来用其它模块来替换某个模块,却无需重写软件。
|
||||
> 用于系统安全性的可替换的用户认证模块,它允许在不知道将使用何种认证方案的情况下进行编程。这允许将来用其它模块来替换某个模块,却无需重写软件。
|
||||
|
||||
### 3. Port/Ported/Porting:移植
|
||||
>一个过程,即获取为某个操作系统平台编写的程序,并对其进行修改使之能在另一 OS 上运行,并且具有类似的功能。
|
||||
> 一个过程,即获取为某个操作系统平台编写的程序,并对其进行修改使之能在另一 OS 上运行,并且具有类似的功能。
|
||||
|
||||
### 4. POSIX(Portable Operating System Interface for uniX):UNIX 可移植操作系统接口
|
||||
>一组编程接口标准,它们规定如何编写应用程序源代码以便应用程序可在操作系统之间移植。POSIX 基于 UNIX,它是 The Open Group 的 X/Open 规范的基础。
|
||||
> 一组编程接口标准,它们规定如何编写应用程序源代码以便应用程序可在操作系统之间移植。POSIX 基于 UNIX,它是 The Open Group 的 X/Open 规范的基础。
|
||||
|
||||
#### Q ####
|
||||
|
||||
#### R ####
|
||||
### 1. RCS(Revision Control System):修订控制系统
|
||||
>一组程序,它们控制组环境下文件的共享访问并跟踪文本文件的变化。常用于维护源代码模块的编码工作。
|
||||
> 一组程序,它们控制组环境下文件的共享访问并跟踪文本文件的变化。常用于维护源代码模块的编码工作。
|
||||
|
||||
### 2. RFS(Remote File Sharing):远程文件共享
|
||||
>一个程序,它让用户访问其它计算机上的文件,就好象文件在用户的系统上一样。
|
||||
> 一个程序,它让用户访问其它计算机上的文件,就好象文件在用户的系统上一样。
|
||||
|
||||
#### S ####
|
||||
### 1. shebang [ʃɪ'bæŋ]:释伴
|
||||
>Shebang(也称为Hashbang)是一个由井号和叹号构成的字符序列(#!),出现在文本文件的第一行的前两个字符,后跟解释器路径,如:#!/bin/sh,这通常是Linux中shell脚本的标准起始行。
|
||||
>长期以来,shebang都没有正式的中文名称。Linux中国翻译组将其翻译为:释伴,即解释伴随行的简称,同时又是shebang的音译。
|
||||
> Shebang(也称为Hashbang)是一个由井号和叹号构成的字符序列(#!),出现在文本文件的第一行的前两个字符,后跟解释器路径,如:#!/bin/sh,这通常是Linux中shell脚本的标准起始行。
|
||||
> 长期以来,shebang都没有正式的中文名称。Linux中国翻译组将其翻译为:释伴,即解释伴随行的简称,同时又是shebang的音译。
|
||||
|
||||
### 2. Spool(Simultaneous Peripheral Operation On-Line):假脱机
|
||||
>将数据发送给一个程序,该程序将该数据信息放入队列以备将来使用(例如,打印假脱机程序)
|
||||
> 将数据发送给一个程序,该程序将该数据信息放入队列以备将来使用(例如,打印假脱机程序)
|
||||
|
||||
### 3. Steganography:隐写术
|
||||
>将一段信息隐藏在另一段信息中的做法。一个示例是在数字化照片中放置不可见的数字水印。
|
||||
> 将一段信息隐藏在另一段信息中的做法。一个示例是在数字化照片中放置不可见的数字水印。
|
||||
|
||||
### 4. Swap:交换
|
||||
>暂时将数据(程序和/或数据文件)从随机存取存储器移到磁盘存储器(换出),或反方向移动(换入),以允许处理比物理内存所能容纳的更多的程序和数据。
|
||||
> 暂时将数据(程序和/或数据文件)从随机存取存储器移到磁盘存储器(换出),或反方向移动(换入),以允许处理比物理内存所能容纳的更多的程序和数据。
|
||||
|
||||
### 5. Scheduling:调度
|
||||
> 将任务分配至资源的过程,在计算机或生产处理中尤为重要(来自维基百科)。
|
||||
|
||||
#### T ####
|
||||
### 1. Time-sharing:分时
|
||||
>一种允许多个用户分享处理器的方法,它以时间为基础给每个用户分配一部分处理器资源,按照这些时间段轮流运行每个用户的进程。
|
||||
> 一种允许多个用户分享处理器的方法,它以时间为基础给每个用户分配一部分处理器资源,按照这些时间段轮流运行每个用户的进程。
|
||||
|
||||
### 2. TL;DR:长篇摘要
|
||||
>Too Long;Didn't Read的缩写词,即太长,未阅的意思。该词多见于互联网社区论坛中,用于指出该文太长,没有阅读,或者标示出一篇长文章的摘要。在论坛回复中,该缩写词也多作为灌水用。因此,Linux中国翻译组将其翻译为:长篇摘要。
|
||||
> Too Long;Didn't Read的缩写词,即太长,未阅的意思。该词多见于互联网社区论坛中,用于指出该文太长,没有阅读,或者标示出一篇长文章的摘要。在论坛回复中,该缩写词也多作为灌水用。因此,Linux中国翻译组将其翻译为:长篇摘要。
|
||||
|
||||
#### U ####
|
||||
|
||||
#### V ####
|
||||
### 1. VRML(Virtual Reality Modeling Language):虚拟现实建模语言
|
||||
>一种主要基于 Web 的语言,用于 3D 效果(如构建遍历)。
|
||||
> 一种主要基于 Web 的语言,用于 3D 效果(如构建遍历)。
|
||||
|
||||
#### W ####
|
||||
### 1. Wrapper:封装器
|
||||
>用于启动另一个程序的程序。
|
||||
> 用于启动另一个程序的程序。
|
||||
|
||||
#### X ####
|
||||
|
||||
#### Y ####
|
||||
|
||||
#### Z ####
|
||||
|
@ -0,0 +1,119 @@
|
||||
使用 Docker 和 Kubernetes 将 MongoDB 作为微服务运行
|
||||
===================
|
||||
|
||||
### 介绍
|
||||
|
||||

|
||||
|
||||
想在笔记本电脑上尝试 MongoDB?只需执行一个命令,你就会有一个轻量级的、独立的沙箱。完成后可以删除你所做的所有痕迹。
|
||||
|
||||
想在多个环境中使用相同的<ruby>程序栈<rt>application stack</rt></ruby>副本?构建你自己的容器镜像,让你的开发、测试、运维和支持团队使用相同的环境克隆。
|
||||
|
||||
容器正在彻底改变整个软件生命周期:从最早的技术性实验和概念证明,贯穿了开发、测试、部署和支持。
|
||||
|
||||
编排工具用来管理如何创建、升级多个容器,并使之高可用。编排还控制容器如何连接,以从多个微服务容器构建复杂的应用程序。
|
||||
|
||||
丰富的功能、简单的工具和强大的 API 使容器和编排功能成为 DevOps 团队的首选,将其集成到连续集成(CI) 和连续交付 (CD) 的工作流程中。
|
||||
|
||||
这篇文章探讨了在容器中运行和编排 MongoDB 时遇到的额外挑战,并说明了如何克服这些挑战。
|
||||
|
||||
### MongoDB 的注意事项
|
||||
|
||||
使用容器和编排运行 MongoDB 有一些额外的注意事项:
|
||||
|
||||
* MongoDB 数据库节点是有状态的。如果容器发生故障并被重新编排,数据则会丢失(能够从副本集的其他节点恢复,但这需要时间),这是不合需要的。为了解决这个问题,可以使用诸如 Kubernetes 中的<ruby>数据卷<rt>volume</rt></ruby> 抽象等功能来将容器中临时的 MongoDB 数据目录映射到持久位置,以便数据在容器故障和重新编排过程中存留。
|
||||
* 一个副本集中的 MongoDB 数据库节点必须能够相互通信 - 包括重新编排后。副本集中的所有节点必须知道其所有对等节点的地址,但是当重新编排容器时,可能会使用不同的 IP 地址重新启动。例如,Kubernetes Pod 中的所有容器共享一个 IP 地址,当重新编排 pod 时,IP 地址会发生变化。使用 Kubernetes,可以通过将 Kubernetes 服务与每个 MongoDB 节点相关联来处理,该节点使用 Kubernetes DNS 服务提供“主机名”,以保持服务在重新编排中保持不变。
|
||||
* 一旦每个单独的 MongoDB 节点运行起来(每个都在自己的容器中),就必须初始化副本集,并把每个节点添加进去。这可能需要在编排工具之外做一些额外的处理。具体来说,必须在目标副本集中的某一个 MongoDB 节点上执行 `rs.initiate` 和 `rs.add` 命令(见列表后的示例)。
|
||||
* 如果编排框架提供了容器的自动化重新编排(如 Kubernetes),那么这将增加 MongoDB 的弹性,因为这可以自动重新创建失败的副本集成员,从而在没有人为干预的情况下恢复完全的冗余级别。
|
||||
* 应该注意的是,虽然编排框架可能监控容器的状态,但是不太可能监视容器内运行的应用程序或备份其数据。这意味着使用 [MongoDB Enterprise Advanced][2] 和 [MongoDB Professional][3] 中包含的 [MongoDB Cloud Manager][1] 等强大的监控和备份解决方案非常重要。可以考虑创建自己的镜像,其中包含你首选的 MongoDB 版本和 [MongoDB Automation Agent][4]。
|
||||
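针对上面提到的初始化步骤,下面给出一个示意性的做法:假设三个副本集成员分别可以通过主机名 `mongo-node1`、`mongo-node2`、`mongo-node3` 的 27017 端口访问(这些主机名只是示例,并非白皮书中的原文),可以在其中一个节点上这样初始化副本集:

```
# 连接到第一个 MongoDB 节点并初始化副本集(示意;主机名均为假设)
mongo --host mongo-node1 --port 27017 <<'EOF'
rs.initiate()                 // 初始化副本集,当前节点成为主节点
rs.add("mongo-node2:27017")   // 把第二个成员加入副本集
rs.add("mongo-node3:27017")   // 把第三个成员加入副本集
rs.status()                   // 查看副本集状态,确认三个成员都已加入
EOF
```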
|
||||
### 使用 Docker 和 Kubernetes 实现 MongoDB 副本集
|
||||
|
||||
如上节所述,分布式数据库(如 MongoDB)在使用编排框架(如 Kubernetes)进行部署时,需要稍加注意。本节将详细介绍如何实现。
|
||||
|
||||
我们首先在单个 Kubernetes 集群中创建整个 MongoDB 副本集(通常在一个数据中心内,这显然不能提供地理冗余)。实际上,要改为跨多个集群运行只需很少的改动,这些步骤将在后面描述。
|
||||
|
||||
副本集的每个成员将作为自己的 pod 运行,并提供一个公开 IP 地址和端口的服务。这个“固定”的 IP 地址非常重要,因为外部应用程序和其他副本集成员都可以依赖于它在重新编排 pod 的情况下保持不变。
|
||||
|
||||
下图说明了其中一个 pod 以及相关的复制控制器和服务。
|
||||
|
||||

|
||||
|
||||
*图 1:MongoDB 副本集成员被配置为 Kubernetes Pod 并作为服务公开*
|
||||
|
||||
逐步介绍该配置中描述的资源:
|
||||
|
||||
* 从核心开始,有一个名为 `mongo-node1` 的容器。`mongo-node1` 使用名为 `mongo` 的镜像,这是在 [Docker Hub][5] 上托管的一个公开可用的 MongoDB 容器镜像。容器在集群中暴露端口 `27017`(列表之后给出一个对应的配置示意)。
|
||||
* Kubernetes 的数据卷功能用于将容器中的 `/data/db` 目录映射到名为 `mongo-persistent-storage1` 的永久存储上,后者又映射到在 Google Cloud 中创建的名为 `mongodb-disk1` 的磁盘。这是 MongoDB 存储其数据的地方,这样数据就可以在容器重新编排后保留下来。
|
||||
* 容器放在一个 pod 中,该 pod 带有 `name: mongo-node` 的标签,以及一个(任意取的)值为 `rod` 的 instance 标签。
|
||||
* 配置 `mongo-node1` 复制控制器以确保 `mongo-node1` pod 的单个实例始终运行。
|
||||
* 名为 `mongo-svc-a` 的 `负载均衡` 服务给外部开放了一个 IP 地址以及 `27017` 端口,它被映射到容器相同的端口号上。该服务使用选择器来匹配 pod 标签来确定正确的 pod。外部 IP 地址和端口将用于应用程序以及副本集成员之间的通信。每个容器也有本地 IP 地址,但是当容器移动或重新启动时,这些 IP 地址会变化,因此不会用于副本集。
|
||||
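与上面的描述大致对应的配置示意如下(并非白皮书原文,字段名和取值均为示例;假设名为 `mongodb-disk1` 的 GCE 磁盘已经创建好):先用 YAML 描述复制控制器和服务,再用 kubectl 创建。

```
# 为第一个副本集成员创建复制控制器和 LoadBalancer 服务(示意)
cat > mongo-node1.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc1
spec:
  replicas: 1
  selector:
    name: mongo-node1
  template:
    metadata:
      labels:
        name: mongo-node1
        instance: rod
    spec:
      containers:
      - name: mongo-node1
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage1
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage1
        gcePersistentDisk:
          pdName: mongodb-disk1     # 假设该磁盘已在 Google Cloud 中创建
          fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-a
spec:
  type: LoadBalancer
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo-node1
EOF
kubectl create -f mongo-node1.yaml
```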
|
||||
下一个图显示了副本集的第二个成员的配置。
|
||||
|
||||

|
||||
|
||||
*图 2:第二个 MongoDB 副本集成员配置为 Kubernetes Pod*
|
||||
|
||||
90% 的配置是一样的,只有这些变化:
|
||||
|
||||
* 磁盘和卷名必须是唯一的,因此使用的是 `mongodb-disk2` 和 `mongo-persistent-storage2`
|
||||
* Pod 被分配了一个 `instance: jane` 和 `name: mongo-node2` 的标签,以便新的服务可以使用选择器与图 1 所示的 `rod` Pod 相区分。
|
||||
* 复制控制器命名为 `mongo-rc2`
|
||||
* 该服务名为 `mongo-svc-b`,并获得了一个唯一的外部 IP 地址(在这种情况下,Kubernetes 分配了 `104.1.4.5`)
|
||||
|
||||
第三个副本成员的配置遵循相同的模式,下图展示了完整的副本集:
|
||||
|
||||

|
||||
|
||||
*图 3:配置为 Kubernetes 服务的完整副本集成员*
|
||||
|
||||
请注意,即使在三个或更多节点的 Kubernetes 群集上运行图 3 所示的配置,Kubernetes 可能(并且经常会)在同一主机上编排两个或多个 MongoDB 副本集成员。这是因为 Kubernetes 将三个 pod 视为属于三个独立的服务。
|
||||
|
||||
为了在区域内增加冗余,可以创建一个附加的 _headless_ 服务。新服务不向外界提供任何功能(甚至不会有 IP 地址),但是它可以让 Kubernetes 通知三个 MongoDB pod 形成一个服务,所以 Kubernetes 会尝试在不同的节点上编排它们。
|
||||
|
||||

|
||||
|
||||
*图 4:避免同一 MongoDB 副本集成员的 Headless 服务*
|
||||
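上面提到的 headless 服务,其配置大致如下(示意;选择器所用的标签是假设的):关键在于把 `clusterIP` 设为 `None`,这样 Kubernetes 不会为它分配 IP 地址,只是把三个 pod 归为同一个逻辑服务。

```
# 创建一个不分配 IP 的 headless 服务,覆盖全部三个 MongoDB pod(示意)
cat > mongo-headless.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mongo-internal
spec:
  clusterIP: None
  ports:
  - port: 27017
  selector:
    role: mongo-node     # 假设三个 pod 都带有 role: mongo-node 标签
EOF
kubectl create -f mongo-headless.yaml
```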
|
||||
配置和启动 MongoDB 副本集所需的实际配置文件和命令可以在白皮书《[启用微服务:阐述容器和编排][7]》中找到。特别的是,需要一些本文中描述的特殊步骤来将三个 MongoDB 实例组合成具备功能的、健壮的副本集。
|
||||
|
||||
#### 多个可用区域 MongoDB 副本集
|
||||
|
||||
上面创建的副本集存在风险,因为所有内容都在相同的 GCE 集群中运行,因此都在相同的<ruby>可用区<rt>availability zone</rt></ruby>中。如果有一个重大事件使可用区离线,那么 MongoDB 副本集将不可用。如果需要地理冗余,则三个 pod 应该在三个不同的可用区或地区中运行。
|
||||
|
||||
令人惊奇的是,为了创建在三个区域之间分割的类似的副本集(需要三个集群),几乎不需要改变。每个集群都需要自己的 Kubernetes YAML 文件,该文件仅为该副本集中的一个成员定义了 pod、复制控制器和服务。那么为每个区域创建一个集群,永久存储和 MongoDB 节点是一件很简单的事情。
|
||||
|
||||

|
||||
|
||||
*图 5:在多个可用区域上运行的副本集*
|
||||
|
||||
### 下一步
|
||||
|
||||
要了解有关容器和编排的更多信息(所涉及的技术和所提供的业务优势),请阅读白皮书《[启用微服务:阐述容器和编排][7]》。该文提供了完整的说明,教你如何搭建本文中描述的副本集,并让它在 Google Container Engine 的 Docker 和 Kubernetes 上运行起来。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Andrew 是 MongoDB 的产品营销总经理。他在去年夏天离开 Oracle 加入 MongoDB;在 Oracle,他在产品管理岗位上工作了 6 年多,专注于高可用性。可以通过 @andrewmorgan 或在他的博客(clusterdb.com)上留言与他联系。
|
||||
|
||||
-------
|
||||
|
||||
via: https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes
|
||||
|
||||
作者:[Andrew Morgan][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.clusterdb.com/
|
||||
[1]:https://www.mongodb.com/cloud/
|
||||
[2]:https://www.mongodb.com/products/mongodb-enterprise-advanced
|
||||
[3]:https://www.mongodb.com/products/mongodb-professional
|
||||
[4]:https://docs.cloud.mongodb.com/tutorial/nav/install-automation-agent/
|
||||
[5]:https://hub.docker.com/_/mongo/
|
||||
[6]:https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained?jmp=inline
|
||||
[7]:https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained
|
||||
[8]:https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained
|
192
published/20161005 Docker Engine swarm mode - Intro tutorial.md
Normal file
@ -0,0 +1,192 @@
|
||||
Docker 引擎的 Swarm 模式:入门教程
|
||||
============================
|
||||
|
||||
Swarm,听起来像是一个朋克摇滚乐队。但它确实是个新的编排机制,抑或者是,一个 [Docker][1] 现有编排体制的改进。简单来讲,如果你在用一个旧版本的 Docker,你必须手动配置 Swarm 来创建 Docker 集群。从 [1.12 版][2]开始,Docker 引擎集成了一个原生的实现(LCTT 译注:见下文)来支持无缝的集群设置。也就是为什么会有这篇文章。
|
||||
|
||||
在这篇教程中,我将带你体验一下编排后的 Docker 将能做的事情。这篇文章并不是包含所有细节(如 BnB 一般)或是让你对其全知全能,但它能带你踏上你的集群之路。在我的带领下开始吧。
|
||||
|
||||

|
||||
|
||||
### 技术概要
|
||||
|
||||
如果把 Docker 详细而又好用的文档照搬到这里那将太丢人了,所以我将简要概括下这个技术的概要。我们已经有了 Docker,对吧。现在,你想要更多的服务器作为 Docker 主机,但同时你希望它们属于同一个逻辑上的实体。也就是说,你想建立一个集群。
|
||||
|
||||
我们先从一个主机组成的集群开始。当你在一个主机上初始化一个 Swarm 集群,这台主机将成为这个集群的管理者(manager)。从技术角度来讲,它成为了共识组(consensus group)中的一个<ruby>节点<rt>node</rt></ruby>。其背后的数学逻辑建立在 [Raft][3] 算法之上。管理者(manager)负责调度任务。而具体的任务则会委任给各个加入了 Swarm 集群的工作者(worker)节点。这些操作将由 Node API 所管理。虽说我讨厌 API 这个词汇,但我必须在这里用到它。
|
||||
|
||||
Service API 是这个实现中的第二个组件。它允许管理者(manager)节点在所有的 Swarm 集群节点上创建一个分布式的服务。这个服务可以被复制(replicated),也就是说它们(LCTT 译注:指这些服务)会由平衡机制被分配到集群中(LCTT 译注:指 replicated 模式,多个容器实例将会自动调度任务到集群中的一些满足条件的节点),或者可以分配给全局(LCTT 译注:指 global 模式),也就是说每个节点都会运行一个容器实例。
|
||||
|
||||
此外还有更多的功课需要做,但这些信息已经足够你上路了。现在,我们开始整些实际的。我们的目标平台是 [CentOS 7.2][4],有趣的是在我写这篇教程的时候,它的软件仓库中只有 1.10 版的 Docker,也就是说我必须手动更新以使用 Swarm。我们将在另一篇教程中讨论这个问题。接下来我们还有一个跟进的指南,其中涵盖了如何将新的节点加入我们现有的集群(LCTT 译注:指刚刚建立的单节点集群),并且我们将使用 [Fedora][5] 进行一个非对称的配置。至此,请确保正确的配置已经就位,并有一个工作的集群启动并正在运行(LCTT 译注:指第一个节点的 Docker 已经安装并已进入 Swarm 模式,但到这里笔者并没有介绍如何初始化 Swarm 集群,不过别担心下章会讲)。
|
||||
|
||||
### 配置镜像和服务
|
||||
|
||||
我将尝试配置一个负载均衡的 [Apache][6] 服务,并使用多个容器实例通过唯一的 IP 地址提供页面内容。挺标准的吧(LCTT 译注:指这个负载均衡的网页服务器)。这个例子同时也突出了你想要使用集群的大多数原因:可用性、冗余、横向扩展以及性能。当然,你同时需要考虑[网络][7]和[储存][8]这两块,但它们超出了这篇指南所涉及的范围了。
|
||||
|
||||
这个 Dockerfile 模板其实可以在官方镜像仓库里的 httpd 下找到。你只需一个最简单的设置来起步。至于如何下载或创建自己的镜像,请参考我的入门指南,链接在这篇教程的顶部可以找到。
|
||||
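如果想跟着做,一个最小的 Dockerfile 大致如下(示意;`public-html/` 目录及其中的页面需要你自己准备):

```
# 在构建目录中创建一个基于官方 httpd 镜像的最简 Dockerfile
cat > Dockerfile <<'EOF'
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
EOF
```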
|
||||
```
|
||||
docker build -t my-apache2 .
|
||||
Sending build context to Docker daemon 2.048 kB
|
||||
Step 1 : FROM httpd:2.4
|
||||
Trying to pull repository docker.io/library/httpd ...
|
||||
2.4: Pulling from docker.io/library/httpd
|
||||
|
||||
8ad8b3f87b37: Pull complete
|
||||
c95e1f92326d: Pull complete
|
||||
96e8046a7a4e: Pull complete
|
||||
00a0d292c371: Pull complete
|
||||
3f7586acab34: Pull complete
|
||||
Digest: sha256:3ad4d7c4f1815bd1c16788a57f81b413...a915e50a0d3a4
|
||||
Status: Downloaded newer image for docker.io/httpd:2.4
|
||||
---> fe3336dd034d
|
||||
Step 2 : COPY ../public-html/ /usr/local/apache2/htdocs/
|
||||
...
|
||||
```
|
||||
|
||||

|
||||
|
||||
在你继续下面的步骤之前,你应该确保你能无错误的启动一个容器实例并能链接到这个网页服务器上(LCTT 译注:使用下面的命令)。一旦你确保你能连上,我们就可以开始着手创建一个分布式的服务。
|
||||
|
||||
```
|
||||
docker run -dit --name my-running-app my-apache2
|
||||
```
|
||||
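容器的 IP 地址可以这样查出来(示意):

```
# 查看容器在默认 bridge 网络中分配到的 IP 地址
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-running-app
# 假设返回的是 172.17.0.2,可以先用 curl 验证 Apache 是否在响应
curl http://172.17.0.2/
```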
|
||||
将这个 IP 地址输入浏览器,看看会出现什么。
|
||||
|
||||
### Swarm 初始化和配置
|
||||
|
||||
下一步就是启动 Swarm 集群了。你将需要这些最基础的命令来开始,它们与 Docker 博客中的例子非常相似:
|
||||
|
||||
```
|
||||
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
```
|
||||
|
||||
这里我们做了什么?我们创建了一个叫做 `frontend` 的服务,它有五个容器实例。同时我们还将主机的 80 端口和这些容器的 80 端口相绑定。我们将使用刚刚新创建的 Apache 镜像来做这个测试。然而,当你在自己的电脑上直接键入上面的指令时,你将看到下面的错误:
|
||||
|
||||
```
|
||||
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
|
||||
```
|
||||
|
||||
这意味着你没有将你的主机(节点)配置成一个 Swarm 管理者(manager)。你可以在这台主机上初始化 Swarm 集群或是让它加入一个现有的集群。由于我们目前还没有一个现成的集群,我们将初始化它(LCTT 译注:指初始化 Swarm 集群并使当前节点成为 manager):
|
||||
|
||||
```
|
||||
docker swarm init
|
||||
Swarm initialized: current node (dm58mmsczqemiikazbfyfwqpd) is now a manager.
|
||||
```
|
||||
|
||||
为了向这个 Swarm 集群添加一个工作者(worker),请执行下面的指令:
|
||||
|
||||
```
|
||||
docker swarm join \
|
||||
--token SWMTKN-1-4ofd46a2nfyvrqwu8w5oeetukrbylyznxla
|
||||
9srf9vxkxysj4p8-eu5d68pu5f1ci66s7w4wjps1u \
|
||||
10.0.2.15:2377
|
||||
```
|
||||
|
||||
为了向这个 Swarm 集群添加一个管理者(manager),请执行 `docker swarm join-token manager` 并按照指示操作。
|
||||
|
||||
操作后的输出不用解释已经很清楚明了。我们成功的创建了一个 Swarm 集群。新的节点们将需要正确的令牌(token)来加入这个 Swarm 集群。如果你需要配置防火墙,你还需找到它的 IP 地址和端口(LCTT 译注:指 Docker 的 Swarm 模式通讯所需的端口,默认 2377)。此外,你还可以向 Swarm 集群中添加管理者节点。现在,重新执行刚刚的服务创建指令:
|
||||
|
||||
```
|
||||
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
6lrx1vhxsar2i50is8arh4ud1
|
||||
```
|
||||
|
||||
### 测试连通性
|
||||
|
||||
现在,我们来验证下我们的服务是否真的工作了。从某些方面讲,这很像我们在 [Vagrant][9] 和 [coreOS][10] 中做的事情那样。毕竟它们的原理几乎相同。相同指导思想的不同实现罢了(LCTT 译注:笔者观点,无法苟同)。首先需要确保 `docker ps` 能够给出正确的输出。你应该能看到所创建服务的多个容器副本。
|
||||
|
||||
```
|
||||
docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
|
||||
NAMES
|
||||
cda532f67d55 my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.1.2sobjfchdyucschtu2xw6ms9a
|
||||
75fe6e0aa77b my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.4.ag77qtdeby9fyvif5v6c4zcpc
|
||||
3ce824d3151f my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.2.b6fqg6sf4hkeqs86ps4zjyq65
|
||||
eda01569181d my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.5.0rmei3zeeh8usagg7fn3olsp4
|
||||
497ef904e381 my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.3.7m83qsilli5dk8rncw3u10g5a
|
||||
```
|
||||
|
||||
我也测试了不同的、非常规的端口,它们都能正常工作。在如何连接到服务器以及接收请求方面,你会有很多可配置的余地。你可以使用 localhost 或者 Docker 网络接口(笔者注:应该是指 Docker 的默认网桥 docker0,其网关为 172.17.0.1)的 IP 地址加上正确的端口去访问。下面的例子使用了端口 1080:
|
||||
|
||||

|
||||
|
||||
至此,这是一个非常粗略、简单的开始。真正的挑战是创建一个优化过的、可扩展的服务,但是它们需要一个准确的技术用例。此外,你还会用到 `docker info` 和 `docker service`(还有 `inspect` 和 `ps`)命令来详细了解你的集群是如何工作的。
|
||||
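比如,下面几条命令可以让你快速了解服务当前的状态(示意,沿用前文创建的 frontend 服务):

```
docker service ls                         # 列出 Swarm 中的所有服务及其副本数
docker service ps frontend                # 查看 frontend 服务的各个任务跑在哪个节点上
docker service inspect --pretty frontend  # 以可读格式查看服务的详细配置
```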
|
||||
### 可能会遇到的问题
|
||||
|
||||
你可能会在把玩 Docker 和 Swarm 时遇到一些小的问题(也许没那么小)。比如 SELinux 也许会抱怨你正在执行一些非法的操作(LCTT 译注:指在强制访问控制策略中没有权限的操作)。然而,这些错误和警告应该不会对你造成太多阻碍。
|
||||
|
||||

|
||||
|
||||
- `docker service` 不是一条命令(`docker service is not a docker command`)
|
||||
|
||||
当你尝试执行必须的命令去创建一个复制模式(replicated)的服务时,你可能会遇到一条错误说 `docker: 'service' is not a docker command`(LCTT 译注:见下面的例子)。这表示你的 Docker 版本不对(使用 `-v` 选项来检查)。我们将在将来的教程讨论如何修复这个问题。
|
||||
|
||||
```
|
||||
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
docker: 'service' is not a docker command.
|
||||
```
|
||||
|
||||
- `docker tag` 无法识别(`docker tag not recognized`)
|
||||
|
||||
你也许会看到下面的错误:
|
||||
|
||||
```
|
||||
docker service create -name frontend -replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
Error response from daemon: rpc error: code = 3 desc = ContainerSpec: "-name" is not a valid repository/tag
|
||||
```
|
||||
|
||||
关于这个错误已经有多个相关的[讨论][11]和[帖子][12]了。其实这个错误也许相当无辜。你也许是从浏览器粘贴的命令,在浏览器中的横线也许没被正确解析(笔者注:应该用 `--name` 而不是 `-name`)。就是这么简单的原因所导致的。
|
||||
|
||||
### 扩展阅读
|
||||
|
||||
关于这个话题还有很多可谈的,包含 1.12 版之前的 Swarm 集群实现(笔者注:旧的 Swarm 集群实现,下文亦作`独立版本`,需要 Consul 等应用提供服务发现),以及当前的 Docker 版本提供的(笔者注:新的 Swarm 集群实现,亦被称为 Docker 引擎的 Swarm 模式)。也就是说,请别偷懒花些时间阅读以下内容:
|
||||
|
||||
- Docker Swarm [概述][13](独立版本的 Swarm 集群安装)
|
||||
- [构建][14]一个生产环境的 Swarm 集群(独立版本安装)
|
||||
- [安装并创建][15]一个 Docker Swarm 集群(独立版本安装)
|
||||
- Docker 引擎 Swarm [概述][16](对于 1.12 版)
|
||||
- [Swarm][17] 模式入门(对于 1.12 版)
|
||||
|
||||
### 总结
|
||||
|
||||
你总算看到这里了。到这里仍然无法保证你学到了什么,但我相信你还是会觉得这篇文章有些用的。它涵盖了一些基础的概念,以及一个 Swarm 集群模式是如何工作的以及它能做什么的概述,与此同时我们也成功的下载了并创建了我们的网页服务器的镜像,并且在之后基于它运行了多个集群式的容器实例。虽然我们目前只在单一节点做了以上实验,但是我们会在将来解释清楚(LCTT 译注:以便解释清楚多节点的 Swarm 集群操作)。并且我们解决了一些常见的问题。
|
||||
|
||||
我希望你能认为这篇指南足够有趣。结合着我过去所写的关于 Docker 的文章,这些文章应该能给你一个像样的解释,包括:怎么样操作镜像、网络栈、储存、以及现在的集群。就当热身吧。的确,请享受并期待在新的 Docker 教程中与你见面。我控几不住我记几啊。
|
||||
|
||||
祝你愉快。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.dedoimedo.com/computers/docker-swarm-intro.html
|
||||
|
||||
作者:[Dedoimedo][a]
|
||||
译者:[Viz](https://github.com/vizv)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.dedoimedo.com/computers/docker-swarm-intro.html
|
||||
[1]:http://www.dedoimedo.com/computers/docker-guide.html
|
||||
[2]:https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/
|
||||
[3]:https://en.wikipedia.org/wiki/Raft_%28computer_science%29
|
||||
[4]:http://www.dedoimedo.com/computers/lenovo-g50-centos-xfce.html
|
||||
[5]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
|
||||
[6]:https://hub.docker.com/_/httpd/
|
||||
[7]:http://www.dedoimedo.com/computers/docker-networking.html
|
||||
[8]:http://www.dedoimedo.com/computers/docker-data-volumes.html
|
||||
[9]:http://www.dedoimedo.com/computers/vagrant-intro.html
|
||||
[10]:http://www.dedoimedo.com/computers/vagrant-coreos.html
|
||||
[11]:https://github.com/docker/docker/issues/24192
|
||||
[12]:http://stackoverflow.com/questions/38618609/docker-swarm-1-12-name-option-not-recognized
|
||||
[13]:https://docs.docker.com/swarm/
|
||||
[14]:https://docs.docker.com/swarm/install-manual/
|
||||
[15]:https://docs.docker.com/swarm/install-w-machine/
|
||||
[16]:https://docs.docker.com/engine/swarm/
|
||||
[17]:https://docs.docker.com/engine/swarm/swarm-tutorial/
|
183
published/20161031 An introduction to Linux filesystems.md
Normal file
@ -0,0 +1,183 @@
|
||||
Linux 文件系统概览
|
||||
====
|
||||
|
||||

|
||||
|
||||
|
||||
本文旨在高屋建瓴地来讨论 Linux 文件系统概念,而不是对某种特定的文件系统,比如 EXT4 是如何工作的进行具体的描述。另外,本文也不是一个文件系统命令的教程。
|
||||
|
||||
每台通用计算机都需要将各种数据存储在硬盘驱动器(HDD)或其他类似设备上,比如 USB 存储器。这样做有两个原因。首先,当计算机关闭以后,内存(RAM)会失去存于它里面的内容。尽管存在非易失类型的 RAM,在计算机断电以后还能把数据存储下来(比如采用 USB 闪存和固态硬盘的闪存),但是,闪存和标准的、易失性的 RAM,比如 DDR3 以及其他相似类型的 RAM 相比,要贵很多。
|
||||
|
||||
数据需要存储在硬盘驱动器上的另一个原因是,即使是标准的 RAM 也要比普通硬盘贵得多。尽管 RAM 和硬盘的价格都在迅速下降,但按每字节计算,RAM 仍然贵得多。让我们以字节为单位做一个快速计算:比较 16 GB RAM 的价格和 2 TB 硬盘驱动器的价格,结果显示 RAM 每字节的价格大约是硬盘驱动器的 71 倍。今天,典型的 RAM 价格大约是每字节 0.000000004373750 美元。
|
||||
|
||||
直观的展示一下在很久以前 RAM 的价格,在计算机发展的非常早的时期,其中一种类型的 RAM 是基于在 CRT 屏幕上的点。这种 RAM 非常昂贵,大约 1 美元/每字节。
|
||||
|
||||
### 定义
|
||||
|
||||
你可能听过其他人以各种不同和令人迷惑的方式谈论过文件系统。文件系统这个单词本身有多重含义,你需要从一个讨论或文件的上下文中理解它的正确含义。
|
||||
|
||||
我将根据我所观察到的在不同情况下使用“文件系统”这个词来定义它的不同含义。注意,尽管我试图遵循标准的“官方”含义,但是我打算基于它的不同用法来定义这个术语(如下)。这就是说我将在本文的后续章节中进行更详细的探讨。
|
||||
|
||||
1. 始于顶层 root(/)目录的整个 Linux 目录结构。
|
||||
2. 特定类型的数据存储格式,比如 EXT3、EXT4、BTRFS 以及 XFS 等等。Linux 支持近百种类型的文件系统,包括一些非常老的以及一些最新的。每一种文件系统类型都使用它自己独特的元数据结构来定义数据是如何存储和访问的。
|
||||
3. 用特定类型的文件系统格式化后的分区或逻辑卷,可以挂载到 Linux 文件系统的指定挂载点上。
|
||||
|
||||
### 文件系统的基本功能
|
||||
|
||||
磁盘存储是文件系统必须的功能,它与之伴生的有一些有趣而且不可或缺的细节。很明显,文件系统是用来为非易失数据的存储提供空间,这是它的基本功能。然而,它还有许多从需求出发的重要功能。
|
||||
|
||||
所有文件系统都需要提供一个名字空间,这是一种命名和组织方法。它定义了文件应该如何命名、文件名的最大长度,以及全部可用字符中哪些字符子集可以用于文件名。它也定义了磁盘上数据的逻辑结构,比如使用目录来组织文件,而不是把所有文件聚集成一个单一的、巨大的文件混合体。
|
||||
|
||||
定义名字空间以后,元数据结构是为该名字空间提供逻辑基础所必须的。这包括所需数据结构要能够支持分层目录结构,同时能够通过结构来确定硬盘空间中的块是已用的或可用的,支持修改文件或目录的名字,提供关于文件大小、创建时间、最后访问或修改时间等信息,以及位置或数据所属的文件在磁盘空间中的位置。其他的元数据用来存储关于磁盘细分的高级信息,比如逻辑卷和分区。这种更高层次的元数据以及它所代表的结构包含描述文件系统存储在驱动器或分区中的信息,但与文件系统元数据无关,与之独立。
|
||||
|
||||
文件系统也需要提供应用程序接口(API),以便通过系统功能调用对文件系统对象(比如文件和目录)进行操作。API 提供了诸如创建、移动和删除文件的功能,还提供了用于决定诸如文件在文件系统中摆放位置之类事情的算法。这类算法可能会考虑速度、磁盘碎片最小化等目标。
|
||||
|
||||
现代文件系统还提供一个安全模型,这是一个定义文件和目录的访问权限的方案。Linux 文件系统安全模型确保用户只能访问自己的文件,而不能访问其他用户的文件或操作系统本身。
|
||||
|
||||
最后一块组成部分是实现这些所有功能所需要的软件。Linux 使用两层软件实现的方式来提高系统和程序员的效率。
|
||||
|
||||

|
||||
|
||||
*图片 1:Linux 两层文件系统软件实现。*
|
||||
|
||||
这两层中的第一层是 Linux 虚拟文件系统。虚拟文件系统提供了内核和开发者访问所有类型文件系统的的单一命令集。虚拟文件系统软件通过调用特殊设备驱动来和不同类型的文件系统进行交互。特定文件系统的设备驱动是第二层实现。设备驱动程序将文件系统命令的标准集解释为在分区或逻辑卷上的特定类型文件系统命令。
|
||||
|
||||
### 目录结构
|
||||
|
||||
作为一个通常来说非常有条理的处女座,我喜欢将东西存储在更小的、有组织的小容器中,而不是存于同一个大容器中。目录的使用使我能够存储文件并在我想要查看这些文件的时候也能够找到它们。目录也被称为文件夹,之所以被称为文件夹,是因为其中的文件被类比存放于物理桌面上。
|
||||
|
||||
在 Linux 和其他许多操作系统中,目录可以被组织成树状的分层结构。在 [Linux 文件系统层次标准][10]中定义了 Linux 的目录结构(LCTT 译注:可参阅[这篇][23])。当通过目录引用来访问目录时,更深层目录名字是通过正斜杠(/)来连接,从而形成一个序列,比如 `/var/log` 和 `/var/spool/mail` 。这些被称为路径。
|
||||
|
||||
下表提供了标准的、众所周知的、预定义的顶层 Linux 目录及其用途的简要清单。
|
||||
|
||||
| 目录 | 描述 |
|
||||
| ------------- | ---------------------------------------- |
|
||||
| **/ (root 文件系统)** | root 文件系统是文件系统的顶级目录。它必须包含在挂载其它文件系统前需要用来启动 Linux 系统的全部文件。它必须包含需要用来启动剩余文件系统的全部可执行文件和库。文件系统启动以后,所有其他文件系统作为 root 文件系统的子目录挂载到标准的、预定义好的挂载点上。 |
|
||||
| **/bin** | `/bin` 目录包含用户的可执行文件。 |
|
||||
| /boot | 包含启动 Linux 系统所需要的静态引导程序和内核可执行文件以及配置文件。 |
|
||||
| **/dev** | 该目录包含每一个连接到系统的硬件设备的设备文件。这些文件不是设备驱动,而是代表计算机上的每一个计算机能够访问的设备。 |
|
||||
| **/etc** | 包含主机计算机的本地系统配置文件。 |
|
||||
| /home | 主目录存储用户文件,每一个用户都有一个位于 `/home` 目录中的子目录(作为其主目录)。 |
|
||||
| **/lib** | 包含启动系统所需要的共享库文件。 |
|
||||
| /media | 一个挂载外部可移动设备的地方,比如主机可能连接了一个 USB 驱动器。 |
|
||||
| /mnt | 一个普通文件系统的临时挂载点(如不可移动的介质),当管理员对一个文件系统进行修复或在其上工作时可以使用。 |
|
||||
| /opt | 可选文件,比如供应商提供的应用程序应该安装在这儿。 |
|
||||
| **/root** | 这不是 root(`/`)文件系统。它是 root 用户的主目录。 |
|
||||
| **/sbin** | 系统二进制文件。这些是用于系统管理的可执行文件。 |
|
||||
| /tmp | 临时目录。被操作系统和许多程序用来存储临时文件。用户也可能临时在这儿存储文件。注意,存储在这儿的文件可能在任何时候在没有通知的情况下被删除。 |
|
||||
| /usr | 该目录里面包含可共享的、只读的文件,包括可执行二进制文件和库、man 文件以及其他类型的文档。 |
|
||||
| /var | 可变数据文件存储在这儿。这些文件包括日志文件、MySQL 和其他数据库的文件、Web 服务器的数据文件、邮件以及更多。 |
|
||||
|
||||
*表 1:Linux 文件系统层次结构的顶层*
|
||||
|
||||
这些目录以及它们的子目录如表 1 所示,在所有子目录中,粗体的目录组成了 root 文件系统的必需部分。也就是说,它们不能创建为一个分离的文件系统并且在开机时进行挂载。这是因为它们(特别是它们包含的内容)必须在系统启动的时候出现,从而系统才能正确启动。
|
||||
|
||||
`/media` 目录和 `/mnt` 目录是 root 文件系统的一部分,但是它们从来不包含任何数据,因为它们只是一个临时挂载点。
|
||||
|
||||
表 1 中剩下的非粗体的目录不需要在系统启动过程中出现,但会在之后挂载到 root 文件系统上,在开机阶段,它们为主机进行准备,从而执行有用的工作。
|
||||
|
||||
请参考官方 [Linux 文件系统层次标准][11](FHS)网页来了解这些每一个目录以及它们的子目录的更多细节。维基百科上也有关于 [FHS][12] 的一个很好的介绍。应该尽可能的遵循这些标准,从而确保操作和功能的一致性。无论在主机上使用什么类型的文件系统,该层次目录结构都是相同的。
|
||||
|
||||
### Linux 统一目录结构
|
||||
|
||||
在一些非 Linux 操作系统的个人电脑上,如果有多个物理硬盘驱动器或多个分区,每一个硬盘或分区都会分配一个驱动器号。知道文件或程序位于哪一个硬盘驱动器上是很有必要的,比如 `C:` 或 `D:` 。然后,你可以在命令中使用驱动器号,以 `D:` 为例,为了进入 `D:` 驱动器,你可以使用 `cd` 命令来更改工作目录为正确的目录,从而定位需要的文件。每一个硬盘驱动器都有自己单独的、完整的目录树。
|
||||
|
||||
Linux 文件系统将所有物理硬盘驱动器和分区统一为一个目录结构。它们均从顶层 root 目录(`/`)开始。所有其它目录以及它们的子目录均位于单一的 Linux 根目录下。这意味着只有一棵目录树来搜索文件和程序。
|
||||
|
||||
因为只有一个文件系统,所以 `/home`、`/tmp`、`/var`、`/opt` 或 `/usr` 能够创建在和 root(`/`)文件系统不同的物理硬盘驱动器、分区或逻辑分区上,然后挂载到一个挂载点(目录)上,从而作为 root 文件系统树的一部分。甚至可移动驱动器,比如 USB 驱动器或一个外接的 USB 或 ESATA 硬盘驱动器均可以挂载到 root 文件系统上,成为目录树不可或缺的部分。
|
||||
|
||||
当从 Linux 发行版的一个版本升级到另一个版本或从一个发行版更改到另一个发行版的时候,就会很清楚地看到这样创建到不同分区的好处。通常情况下,除了任何像 Fedora 中的 `dnf-upgrade` 之类的升级工具,会明智地在升级过程中偶尔重新格式化包含操作系统的硬盘驱动来删除那些长期积累的垃圾。如果 `/home` 目录是 root 文件系统的一部分(位于同一个硬盘驱动器),那么它也会被格式化,然后需要通过之前的备份恢复。如果 /home 目录作为一个分离的文件系统,那么安装程序将会识别到,并跳过它的格式化。对于存储数据库、邮箱、网页和其它可变的用户以及系统数据的 `/var` 目录也是这样的。
|
||||
|
||||
将 Linux 系统目录树的某些部分作为一个分离的文件系统还有一些其他原因。比如,在很久以前,我还不知道将所有需要的 Linux 目录均作为 root(`/`)文件系统的一部分可能存在的问题,于是,一些非常大的文件填满了 `/home` 目录。因为 `/home` 目录和 `/tmp` 目录均不是分离的文件系统,而是 root 文件系统的简单子目录,整个 root 文件系统就被填满了。于是就不再有剩余空间可以让操作系统用来存储临时文件或扩展已存在数据文件。首先,应用程序开始抱怨没有空间来保存文件,然后,操作系统也开始异常行动。启动到单用户模式,并清除了 `/home` 目录中的多余文件之后,终于又能够重新工作了。然后,我使用非常标准的多重文件系统设置来重新安装 Linux 系统,从而避免了系统崩溃的再次发生。
|
||||
|
||||
我曾经遇到一个情况,Linux 主机还在运行,但是却不允许用户通过 GUI 桌面登录。我可以通过使用[虚拟控制台][13]之一,通过命令行界面(CLI)本地登录,然后远程使用 SSH 。问题的原因是因为 `/tmp` 文件系统满了,因此 GUI 桌面登录时所需要的一些临时文件不能被创建。因为命令行界面登录不需要在 `/tmp` 目录中创建文件,所以无可用空间并不会阻止我使用命令行界面来登录。在这种情况下,`/tmp` 目录是一个分离的文件系统,在 `/tmp` 所位于的逻辑卷上还有大量的可用空间。我简单地[扩展了 /tmp 逻辑卷][14]的容量到能够容纳主机所需要的临时文件,于是问题便解决了。注意,这个解决方法不需要重启,当 `/tmp` 文件系统扩大以后,用户就可以登录到桌面了。
|
||||
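扩展逻辑卷及其上的文件系统通常只需要一条命令,大致如下(示意;卷组和逻辑卷的名字是假设的):

```
# 给 /tmp 所在的逻辑卷增加 2 GiB,-r 选项会同时扩展其中的文件系统
sudo lvextend -L +2G -r /dev/vg01/lv_tmp
```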
|
||||
当我在一家很大的科技公司当实验室管理员的时候,遇到过另外一个故障。开发者将一个应用程序安装到了一个错误的位置(`/var`)。结果该应用程序崩溃了,因为 `/var` 文件系统满了,由于缺乏空间,存储于 `/var/log` 中的日志文件无法附加新的日志消息。然而,系统仍然在运行,因为 root 文件系统和 `/tmp` 文件系统还没有被填满。删除了该应用程序并重新安装在 `/opt` 文件系统后,问题便解决了。
|
||||
|
||||
### 文件系统类型
|
||||
|
||||
Linux 系统支持大约 100 种分区类型的读取,但是只能对很少的一些进行创建和写操作。但是,可以挂载不同类型的文件系统在同一个 root 文件系统上,并且是很常见的。在这样的背景下,我们所说的文件系统一词是指在硬盘驱动器或逻辑卷上的一个分区中存储和管理用户数据所需要的结构和元数据。能够被 Linux 系统的 `fdisk` 命令识别的文件系统类型的完整列表[在此][24],你可以感受一下 Linux 系统对许多类型的系统的高度兼容性。
|
||||
|
||||
Linux 支持读取这么多类型的分区系统的主要目的是为了提高兼容性,从而至少能够与一些其他计算机系统的文件系统进行交互。下面列出了在 Fedora 中创建一个新的文件系统时的所有可选类型:
|
||||
|
||||
* btrfs
|
||||
* **cramfs**
|
||||
* **ext2**
|
||||
* **ext3**
|
||||
* **ext4**
|
||||
* fat
|
||||
* gfs2
|
||||
* hfsplus
|
||||
* minix
|
||||
* **msdos**
|
||||
* ntfs
|
||||
* reiserfs
|
||||
* **vfat**
|
||||
* xfs
|
||||
|
||||
其他发行版支持创建的文件系统类型不同。比如,CentOS 6 只支持创建上表中标为黑体的文件系统类型。
|
||||
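在分区上创建上面列出的某种文件系统,通常使用 mkfs 命令,大致如下(示意;/dev/sdb1 是一个假设的空白分区,此操作会清除其上的数据):

```
# 在分区上创建 ext4 文件系统,然后查看结果
sudo mkfs -t ext4 /dev/sdb1
lsblk -f /dev/sdb
```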
|
||||
### 挂载
|
||||
|
||||
在 Linux 系统上,“<ruby>挂载<rt>mount</rt></ruby>”文件系统这一说法源自计算机发展的早期,那时磁带或可移动磁盘组需要物理地安装到合适的驱动器设备上。当磁盘以物理方式放到驱动器上以后,操作系统会在逻辑上挂载位于磁盘上的文件系统,从而让操作系统、应用程序和用户能够访问文件系统中的内容。
|
||||
|
||||
一个挂载点简单的来说就是一个目录,就像任何其它目录一样,是作为 root 文件系统的一部分创建的。所以,比如,home 文件系统是挂载在目录 `/home` 下。文件系统可以被挂载到其他非 root 文件系统的挂载点上,但是这并不常见。
|
||||
|
||||
在 Linux 系统启动阶段的最初阶段,root 文件系统就会被挂载到 root 目录下(`/`)。其它文件系统在之后通过 SystemV 下的 `rc` 或更新一些的 Linux 发行版中的 `systemd` 等 Linux 启动程序挂载。在启动进程中文件系统的挂载是由 `/etc/fstab` 配置文件管理的。一个简单的记忆方法是,fstab 代表“<ruby>文件系统表<rt>file system table</rt></ruby>”,它包含了需要挂载的文件系统的列表,这些文件系统均指定了挂载点,以及针对特定文件系统可能需要的选项。
|
||||
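/etc/fstab 中的每一行描述一个要挂载的文件系统,字段依次为:设备、挂载点、文件系统类型、挂载选项、dump 标志和 fsck 顺序。下面是一个查看方法和一条示意性的记录(其中的 UUID 和挂载点均为假设):

```
# 查看本机的 /etc/fstab
cat /etc/fstab
# 其中一条典型的记录形如(示意):
# UUID=9cfe30b6-3c4b-4cd5-a3a4-7a573f01754a  /home  ext4  defaults  1  2
```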
|
||||
使用 `mount` 命令可以把文件系统挂载到一个已有的目录/挂载点上。通常情况下,任何作为挂载点的目录都应该是空的且不包含任何其他文件。Linux 系统不会阻止用户挂载一个已被挂载了文件系统的目录或将文件系统挂载到一个包含文件的目录上。如果你将文件系统挂载到一个已有的目录或文件系统上,那么其原始内容将会被隐藏,只有新挂载的文件系统的内容是可见的。
|
||||
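手动挂载的基本用法大致如下(示意;/dev/sdb1 为假设的分区):

```
sudo mount /dev/sdb1 /mnt    # 把分区挂载到已有的挂载点 /mnt
df -h /mnt                   # 确认挂载成功并查看容量
sudo umount /mnt             # 用完后卸载
```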
|
||||
### 结论
|
||||
|
||||
我希望通过这篇文章,阐明了围绕文件系统这个术语的一些可能的模糊之处。我花费了很长的时间,以及在一个良师的帮助下才真正理解和欣赏到 Linux 文件系统的复杂性、优雅性和功能以及它的全部含义。
|
||||
|
||||
如果你有任何问题,请写到下面的评论中,我会尽力来回答它们。
|
||||
|
||||
### 下个月
|
||||
|
||||
Linux 的另一个重要概念是:[万物皆为文件][15]。这个概念对用户和系统管理员来说有一些有趣和重要的实际应用。当我说完这个理由之后,你可能会想阅读我的文章:[万物皆为文件][15],这篇文章会在我下个月计划写的关于 `/dev` 目录的文章之前写完。(LCTT 译注,也可参阅[这篇][25])
|
||||
|
||||
(题图 : 原始图片来自 Rikki Endsley. [CC BY-SA 4.0][9])
|
||||
|
||||
-----------------
|
||||
|
||||
作者简介:
|
||||
|
||||
David Both 居住在美国北卡罗纳州的首府罗利,是一个 Linux 开源贡献者。他已经从事 IT 行业 40 余年,在 IBM 教授 OS/2 20 余年。1981 年,他在 IBM 开发了第一个关于最初的 IBM 个人电脑的培训课程。他也曾在 Red Hat 教授 RHCE 课程,也曾供职于 MCI worldcom,Cico 以及北卡罗纳州等。他已经为 Linux 开源社区工作近 20 年。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/16/10/introduction-linux-filesystems
|
||||
|
||||
作者:[David Both][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/dboth
|
||||
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_source=intcallout&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_campaign=linuxcontent
|
||||
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_source=intcallout&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_campaign=linuxcontent
|
||||
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_source=intcallout&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_campaign=linuxcontent
|
||||
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_source=intcallout&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_campaign=linuxcontent
|
||||
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_source=intcallout&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;utm_campaign=linuxcontent
|
||||
[6]:https://opensource.com/life/16/10/introduction-linux-filesystems?rate=Qyf2jgkdgrj5_zfDwadBT8KsHZ2Gp5Be2_tF7R-s02Y
|
||||
[7]:https://opensource.com/users/dboth
|
||||
[8]:https://opensource.com/user/14106/feed
|
||||
[9]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[10]:http://www.pathname.com/fhs/
|
||||
[11]:http://www.pathname.com/fhs/
|
||||
[12]:https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
|
||||
[13]:https://en.wikipedia.org/wiki/Virtual_console
|
||||
[14]:https://opensource.com/business/16/9/linux-users-guide-lvm
|
||||
[15]:https://opensource.com/life/15/9/everything-is-a-file
|
||||
[16]:https://opensource.com/users/dboth
|
||||
[17]:https://opensource.com/users/dboth
|
||||
[18]:https://opensource.com/users/dboth
|
||||
[19]:https://opensource.com/life/16/10/introduction-linux-filesystems#comments
|
||||
[20]:https://opensource.com/tags/linux
|
||||
[21]:https://opensource.com/tags/sysadmin
|
||||
[22]:https://opensource.com/participate
|
||||
[23]:https://linux.cn/article-6132-1.html
|
||||
[24]:https://www.win.tue.nl/~aeb/partitions/partition_types-1.html
|
||||
[25]:https://linux.cn/article-7669-1.html
|
120
published/20170102 What is Kubernetes.md
Normal file
@ -0,0 +1,120 @@
|
||||
Kubernetes 是什么?
|
||||
=================
|
||||
|
||||

|
||||
|
||||
Kubernetes,简称 k8s(k,8 个字符,s——明白了?)或者 “kube”,是一个开源的 [Linux 容器][3]自动化运维平台,它消除了容器化应用程序在部署、伸缩时涉及到的许多手动操作。换句话说,你可以将多台主机组合成集群来运行 Linux 容器,而 Kubernetes 可以帮助你简单高效地管理那些集群。构成这些集群的主机还可以跨越[公有云][4]、[私有云][5]以及混合云。
|
||||
|
||||
Kubernetes 最开始是由 Google 的工程师设计开发的。Google 作为 [Linux 容器技术的早期贡献者][6]之一,曾公开演讲介绍 [Google 如何将一切都运行于容器之中][7](这是 Google 的云服务背后的技术)。Google 一周内的容器部署超过 20 亿次,全部的工作都由内部平台 [Borg][8] 支撑。Borg 是 Kubernetes 的前身,几年来开发 Borg 的经验教训也成了影响 Kubernetes 中许多技术的主要因素。
|
||||
|
||||
_趣闻: Kubernetes logo 中的七个辐条来源于项目原先的名称, “[Seven of Nine 项目][1]”(LCTT 译注:Borg 是「星际迷航」中的一个宇宙种族,Seven of Nine 是该种族的一名女性角色)。_
|
||||
|
||||

|
||||
|
||||
红帽作为最早与 Google 合作开发 Kubernetes 的公司之一(甚至早于 Kubernetes 的发行),已经是 Kubernetes 上游项目的[第二大贡献者][9]。Google 在 2015 年把 Kubernetes 项目捐献给了新成立的 <ruby>[云原生计算基金会][11]<rt>Cloud Native Computing Foundation</rt></ruby>(CNCF)。
|
||||
|
||||
|
||||
### 为什么你需要 Kubernetes ?
|
||||
|
||||
真实的生产环境应用会包含多个容器,而这些容器还很可能会跨越多个服务器主机部署。Kubernetes 提供了为那些工作负载大规模部署容器的编排与管理能力。Kubernetes 编排让你能够构建多容器的应用服务,在集群上调度或伸缩这些容器,以及管理它们随时间变化的健康状态。
|
||||
|
||||
Kubernetes 也需要与网络、存储、安全、监控等其它服务集成才能提供综合性的容器基础设施。
|
||||
|
||||

|
||||
|
||||
当然,这取决于你如何在你的环境中使用容器。一个初步的 Linux 容器应用程序把容器视作高效、快速的虚拟机。一旦把它部署到生产环境或者扩展为多个应用,很显然你需要许多组托管在相同位置的容器合作提供某个单一的服务。随着这些容器的累积,你的运行环境中容器的数量会急剧增加,复杂度也随之增长。
|
||||
|
||||
Kubernetes 通过将容器分类组成 “pod” 来解决了容器增殖带来的许多常见问题。pod 为容器分组提供了一层抽象,以此协助你调度工作负载以及为这些容器提供类似网络与存储这类必要的服务。Kubernetes 的其它组件帮助你对 pod 进行负载均衡,以保证有合适数量的容器支撑你的工作负载。
|
||||
|
||||
正确实施的 Kubernetes,结合类似 [Atomic Registry][12]、[Open vSwitch][13]、[heapster][14]、[OAuth][15] 和 [SELinux][16] 的开源项目,让你可以管理你自己的整个容器基础设施。
|
||||
|
||||
### Kubernetes 能做些什么?
|
||||
|
||||
在生产环境中使用 Kubernetes 的主要优势在于它提供了在物理机或虚拟机集群上调度和运行容器的平台。更宽泛地说,它能帮你在生产环境中实现可以依赖的基于容器的基础设施。而且,由于 Kubernetes 本质上就是运维任务的自动化平台,你可以执行一些其它应用程序平台或管理系统支持的操作,只不过操作对象变成了容器。
|
||||
|
||||
有了 Kubernetes,你可以:
|
||||
|
||||
* 跨主机编排容器。
|
||||
* 更充分地利用硬件资源来最大化地满足企业应用的需求。
|
||||
* 控制与自动化应用的部署与升级。
|
||||
* 为有状态的应用程序挂载和添加存储器。
|
||||
* 线上扩展或裁剪容器化应用程序与它们的资源。
|
||||
* 声明式的容器管理,保证所部署的应用按照我们部署的方式运作。
|
||||
* 通过自动布局、自动重启、自动复制、自动伸缩实现应用的状态检查与自我修复。
|
||||
|
||||
然而 Kubernetes 依赖其它项目来提供完整的编排服务。结合其它开源项目作为其组件,你才能充分感受到 Kubernetes 的能力。这些必要组件包括:
|
||||
|
||||
* 仓库:Atomic Registry、Docker Registry 等。
|
||||
* 网络:OpenvSwitch 和智能边缘路由等。
|
||||
* 监控:heapster、kibana、hawkular 和 elastic。
|
||||
* 安全:LDAP、SELinux、 RBAC 与 支持多租户的 OAUTH。
|
||||
* 自动化:通过 Ansible 的 playbook 进行集群的安装和生命周期管理。
|
||||
* 服务:大量事先创建好的常用应用模板。
|
||||
|
||||
[红帽 OpenShift 为容器部署预先集成了上面这些组件。][17]
|
||||
|
||||
### Kubernetes 入门
|
||||
|
||||
和其它技术一样,大量的专有名词有可能成为入门的障碍。下面解释一些通用的术语,希望帮助你理解 Kubernetes。
|
||||
|
||||
- **Master(主节点):** 控制 Kubernetes 节点的机器,也是创建作业任务的地方。
|
||||
- **Node(节点):** 这些机器在 Kubernetes 主节点的控制下执行被分配的任务。
|
||||
- **Pod:** 由一个或多个容器构成的集合,作为一个整体被部署到一个单一节点。同一个 pod 中的容器共享 IP 地址、进程间通讯(IPC)、主机名以及其它资源。Pod 将底层容器的网络和存储抽象出来,使得集群内的容器迁移更为便捷。
|
||||
- **Replication controller(复制控制器):** 控制一个 pod 在集群上运行的实例数量。
|
||||
- **Service(服务):** 将服务内容与具体的 pod 分离。Kubernetes 服务代理负责自动将服务请求分发到正确的 pod 处,不管 pod 移动到集群中的什么位置,甚至可以被替换掉。
|
||||
- **Kubelet:** 这个守护进程运行在各个工作节点上,负责获取容器列表,保证被声明的容器已经启动并且正常运行。
|
||||
- **kubectl:** 这是 Kubernetes 的命令行配置工具。
|
||||
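作为参考,下面几条 kubectl 命令正好对应上面的一些概念(示意,假设你已经可以访问一个集群;`my-rc` 是一个假设的复制控制器名字):

```
kubectl get nodes                     # 列出集群中的节点
kubectl get pods                      # 列出当前的 pod
kubectl get services                  # 列出服务
kubectl scale rc my-rc --replicas=3   # 把某个复制控制器的副本数调整为 3
```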
|
||||
[上面这些知识就足够了吗?不,这仅仅是一小部分,更多内容请查看 Kubernetes 术语表。][18]
|
||||
|
||||
### 生产环境中使用 Kubernetes
|
||||
|
||||
Kubernetes 是开源的,所以没有正式的技术支持机构为你的商业业务提供支持。如果在生产环境使用 Kubernetes 时遇到问题,你恐怕不会太愉快,当然你的客户也不会太高兴。
|
||||
|
||||
这就是[红帽 OpenShift][2] 要解决的问题。OpenShift 是为企业提供的 Kubernetes ——并且集成了更多的组件。OpenShift 包含了强化 Kubernetes 功能、使其更适用于企业场景的额外部件,包括仓库、网络、监控、安全、自动化和服务在内。OpenShift 使得开发者能够在具有伸缩性、控制和编排能力的云端开发、托管和部署容器化的应用,快速便捷地把想法转变为业务。
|
||||
|
||||
而且,OpenShift 还是由头号开源领导公司红帽支持和开发的。
|
||||
|
||||
### Kubernetes 如何适用于你的基础设施
|
||||
|
||||

|
||||
|
||||
Kubernetes 运行在操作系统(例如 [Red Hat Enterprise Linux Atomic Host][19])之上,操作着该节点上运行的容器。Kubernetes 主节点(master)从管理员(或者 DevOps 团队)处接受命令,再把指令转交给附属的节点。这种带有大量服务的切换工作自动决定最适合该任务的节点,然后在该节点上分配资源并指派 pod 来完成任务请求。
|
||||
|
||||
所以从基础设施的角度,管理容器的方式发生了一点小小的变化。对容器的控制在更高的层次进行,提供了更佳的控制方式,而无需用户微观管理每个单独的容器或者节点。必要的工作则主要集中在如何指派 Kubernetes 主节点、定义节点和 pod 等问题上。
|
||||
|
||||
#### docker 在 Kubernetes 中的角色
|
||||
|
||||
[Docker][20] 技术依然执行它原本的任务。当 kubernetes 把 pod 调度到节点上,节点上的 kubelet 会指示 docker 启动特定的容器。接着,kubelet 会通过 docker 持续地收集容器的信息,然后提交到主节点上。Docker 如往常一样拉取容器镜像、启动或停止容器。不同点仅仅在于这是由自动化系统控制而非管理员在每个节点上手动操作的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.redhat.com/en/containers/what-is-kubernetes
|
||||
|
||||
作者:[www.redhat.com][a]
|
||||
译者:[haoqixu](https://github.com/haoqixu)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.redhat.com/
|
||||
[1]:https://cloudplatform.googleblog.com/2016/07/from-Google-to-the-world-the-Kubernetes-origin-story.html
|
||||
[2]:https://www.redhat.com/en/technologies/cloud-computing/openshift
|
||||
[3]:https://www.redhat.com/en/containers/whats-a-linux-container
|
||||
[4]:https://www.redhat.com/en/topics/cloud-computing/what-is-public-cloud
|
||||
[5]:https://www.redhat.com/en/topics/cloud-computing/what-is-private-cloud
|
||||
[6]:https://en.wikipedia.org/wiki/Cgroups
|
||||
[7]:https://speakerdeck.com/jbeda/containers-at-scale
|
||||
[8]:http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html
|
||||
[9]:http://stackalytics.com/?project_type=kubernetes-group&metric=commits
|
||||
[10]:https://techcrunch.com/2015/07/21/as-kubernetes-hits-1-0-google-donates-technology-to-newly-formed-cloud-native-computing-foundation-with-ibm-intel-twitter-and-others/
|
||||
[11]:https://www.cncf.io/
|
||||
[12]:http://www.projectatomic.io/registry/
|
||||
[13]:http://openvswitch.org/
|
||||
[14]:https://github.com/kubernetes/heapster
|
||||
[15]:https://oauth.net/
|
||||
[16]:https://selinuxproject.org/page/Main_Page
|
||||
[17]:https://www.redhat.com/en/technologies/cloud-computing/openshift
|
||||
[18]:https://kubernetes.io/docs/reference/
|
||||
[19]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux/options
|
||||
[20]:https://www.redhat.com/en/containers/what-is-docker
|
@ -1,18 +1,17 @@
|
||||
我对 Go 的错误处理有哪些不满,以及我是如何处理的
|
||||
======================
|
||||
|
||||
写 Go 的人往往对它的错误处理模式有一定的看法。按不同的语言经验,人们可能有不同的习惯处理方法。这就是为什么我决定要写这篇文章,尽管有点固执己见,但我认为吸收我的经验是有用的。我想要讲的主要问题是,很难去强制良好的错误处理实践,经常错误没有堆栈追踪,并且错误处理本身太冗长。不过,我已经看到了一些潜在的解决方案,或许能帮助解决一些问题。
|
||||
写 Go 的人往往对它的错误处理模式有一定的看法。按不同的语言经验,人们可能有不同的习惯处理方法。这就是为什么我决定要写这篇文章,尽管有点固执己见,但我认为听取我的经验是有用的。我想要讲的主要问题是,很难去强制执行良好的错误处理实践,错误经常没有堆栈追踪,并且错误处理本身太冗长。不过,我已经看到了一些潜在的解决方案,或许能帮助解决一些问题。
|
||||
|
||||
### 与其他语言的快速比较
|
||||
|
||||
|
||||
[在 Go 中,所有的错误都是值][1]。因为这点,相当多的函数最后会返回一个 `error`, 看起来像这样:
|
||||
|
||||
```
|
||||
func (s *SomeStruct) Function() (string, error)
|
||||
```
|
||||
|
||||
由于这点,调用代码常规上会使用 `if` 语句来检查它们:
|
||||
因此这导致调用代码通常会使用 `if` 语句来检查它们:
|
||||
|
||||
```
|
||||
bytes, err := someStruct.Function()
|
||||
@ -27,7 +26,7 @@ if err != nil {
|
||||
public String function() throws Exception
|
||||
```
|
||||
|
||||
`try-catch` 而不是 `if err != nil`:
|
||||
它使用的是 `try-catch` 而不是 `if err != nil`:
|
||||
|
||||
```
|
||||
try {
|
||||
@ -43,7 +42,7 @@ catch (Exception e) {
|
||||
|
||||
### 实现集中式错误处理
|
||||
|
||||
退一步,让我们看看为什么以及如何在一个集中的地方处理错误。
|
||||
退一步,让我们看看为什么要在一个集中的地方处理错误,以及如何做到。
|
||||
|
||||
大多数人或许会熟悉的一个例子是 web 服务 - 如果出现了一些未预料的的服务端错误,我们会生成一个 5xx 错误。在 Go 中,你或许会这么实现:
|
||||
|
||||
@ -68,7 +67,7 @@ func viewCompanies(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
```
|
||||
|
||||
这并不是一个好的解决方案,因为我们不得不重复在所有的处理函数中处理错误。为了能更好地维护,最好能在一处地方处理错误。幸运的是,[在 Go 的博客中,Andrew Gerrand 提供了一个替代方法][2],可以完美地实现。我们可以创建一个处理错误的 Type:
|
||||
这并不是一个好的解决方案,因为我们不得不重复地在所有的处理函数中处理错误。为了能更好地维护,最好能在一处地方处理错误。幸运的是,[在 Go 语言的官方博客中,Andrew Gerrand 提供了一个替代方法][2],可以完美地实现。我们可以创建一个处理错误的 Type:
|
||||
|
||||
```
|
||||
type appHandler func(http.ResponseWriter, *http.Request) error
|
||||
@ -89,11 +88,11 @@ func init() {
|
||||
}
|
||||
```
|
||||
|
||||
接着我们需要做的是修改处理函数的签名来使它们返回 `errors`。这个方法很好,因为我们做到了 [dry][3] 原则,并且没有重复使用不必要的代码 - 现在我们可以在一处返回默认错误了。
|
||||
接着我们需要做的是修改处理函数的签名来使它们返回 `errors`。这个方法很好,因为我们做到了 [DRY][3] 原则,并且没有重复使用不必要的代码 - 现在我们可以在单独一个地方返回默认错误了。
|
||||
|
||||
### 错误上下文
|
||||
|
||||
在先前的例子中,我们可能会收到许多潜在的错误,它们的任何一个都可能在调用堆栈的许多部分生成。这时候事情就变得棘手了。
|
||||
在先前的例子中,我们可能会收到许多潜在的错误,它们中的任何一个都可能在调用堆栈的许多环节中生成。这时候事情就变得棘手了。
|
||||
|
||||
为了演示这点,我们可以扩展我们的处理函数。它可能看上去像这样,因为模板执行并不是唯一一处会发生错误的地方:
|
||||
|
||||
@ -109,9 +108,9 @@ func viewUsers(w http.ResponseWriter, r *http.Request) error {
|
||||
|
||||
调用链可能会相当深,在整个过程中,各种错误可能在不同的地方实例化。[Russ Cox][4]的这篇文章解释了如何避免遇到太多这类问题的最佳实践:
|
||||
|
||||
“在 Go 中错误报告的部分约定是函数包含相关的上下文,包括正在尝试的操作(比如函数名和它的参数)。”
|
||||
> “在 Go 中错误报告的部分约定是函数包含相关的上下文,包括正在尝试的操作(比如函数名和它的参数)。”
|
||||
|
||||
给出的例子是对 OS 包的一个调用:
|
||||
这个给出的例子是对 OS 包的一个调用:
|
||||
|
||||
```
|
||||
err := os.Remove("/tmp/nonexist")
|
||||
@ -140,11 +139,11 @@ if err != nil {
|
||||
}
|
||||
```
|
||||
|
||||
这意味着错误何时发生并没有传递出来。
|
||||
这意味着错误何时发生并没有被传递出来。
|
||||
|
||||
应该注意的是,所有这些错误都可以在 `Exception` 驱动的模型中发生 - 糟糕的错误信息、隐藏异常等。那么为什么我认为该模型更有用?
|
||||
|
||||
如果我们在处理一个糟糕的异常消息,_我们仍然能够了解它发生在调用堆栈中什么地方_。因为堆栈跟踪,这引发了一些我对 Go 不了解的部分 - 你知道 Go 的 `panic` 包含了堆栈追踪,但是 `error` 没有。我推测可能是 `panic` 会使你的程序崩溃,因此需要一个堆栈追踪,而处理错误并不会,因为它会假定你在它发生的地方做一些事。
|
||||
即便我们在处理一个糟糕的异常消息,_我们仍然能够了解它发生在调用堆栈中什么地方_。因为堆栈跟踪,这引发了一些我对 Go 不了解的部分 - 你知道 Go 的 `panic` 包含了堆栈追踪,但是 `error` 没有。我推测可能是 `panic` 会使你的程序崩溃,因此需要一个堆栈追踪,而处理错误并不会,因为它会假定你在它发生的地方做一些事。
|
||||
|
||||
所以让我们回到之前的例子 - 一个有糟糕错误信息的第三方库,它只是输出了调用链。你认为调试会更容易吗?
|
||||
|
||||
@ -174,13 +173,13 @@ LOGGER.error(ex.getMessage()) // 不记录堆栈追踪
|
||||
LOGGER.error(ex.getMessage(), ex) // 记录堆栈追踪
|
||||
```
|
||||
|
||||
但是 Go 似乎设计中就没有这个信息。
|
||||
但是 Go 似乎在设计中就没有这个信息。
|
||||
|
||||
在获取上下文信息方面 - Russ 还提到了社区正在讨论一些潜在的接口用于剥离上下文错误。关于这点,了解更多或许会很有趣。
|
||||
|
||||
### 堆栈追踪问题解决方案
|
||||
|
||||
幸运的是,在做了一些查找后,我发现了这个出色的[ Go 错误][5]库来帮助解决这个问题,来给错误添加堆栈跟踪:
|
||||
幸运的是,在做了一些查找后,我发现了这个出色的 [Go 错误][5]库来帮助解决这个问题,来给错误添加堆栈跟踪:
|
||||
|
||||
```
|
||||
if errors.Is(err, crashy.Crashed) {
|
||||
@ -188,7 +187,7 @@ if errors.Is(err, crashy.Crashed) {
|
||||
}
|
||||
```
|
||||
|
||||
不过,我认为这个功能如果能成为语言的<ruby>第一类公民<rt>first class citizenship </rt></ruby>将是一个改进,这样你就不必对类型做一些修改了。此外,如果我们像先前的例子那样使用第三方库,它可能没有使用 `crashy` - 我们仍有相同的问题。
|
||||
不过,我认为这个功能如果能成为语言的<ruby>第一类公民<rt>first class citizenship</rt></ruby>将是一个改进,这样你就不必做一些类型修改了。此外,如果我们像先前的例子那样使用第三方库,它可能没有使用 `crashy` - 我们仍有相同的问题。
|
||||
|
||||
### 我们对错误应该做什么?
|
||||
|
||||
@ -280,7 +279,7 @@ if ew.err != nil {
|
||||
}
|
||||
```
|
||||
|
||||
这也是一个很好的方案,但是我感觉缺少了点什么 - 因为我们不能重复使用这个模式。如果我们想要一个含有字符串参数的方法,我们就不得不改变函数签名。或者如果我们不想执行写会怎样?我们可以尝试使它更通用:
|
||||
这也是一个很好的方案,但是我感觉缺少了点什么 - 因为我们不能重复使用这个模式。如果我们想要一个含有字符串参数的方法,我们就不得不改变函数签名。或者如果我们不想执行写操作会怎样?我们可以尝试使它更通用:
|
||||
|
||||
```
|
||||
type errWrapper struct {
|
@ -0,0 +1,73 @@
|
||||
为什么我们比以往更需要开放的领导人
|
||||
============================================================
|
||||
|
||||
> 不断变化的社会和文化条件正促使着开放的领导。
|
||||
|
||||

|
||||
|
||||
|
||||
领导力就是力量。更具体地说,领导力是影响他人行动的力量。 关于领导力的神话不仅可以让人联想到人类浪漫的一面而且还有人类境况险恶的一面。 我们最终决定如何领导才能决定其真正的本质。
|
||||
|
||||
现代许多对领导力的理解都是在战争中诞生的,在那里,领导力意味着熟练地执行命令和控制思想。 在现代商业的大部分时间里,我们都是以一个到达权力顶峰的伟大的男人或女人作为领导,并通过地位来发挥力量的。 这种传统的通过等级和报告关系的领导方式严重依赖于正式的权威。 这些结构中的权威通过垂直层次结构向下流动,并沿命令链的形式存在。
|
||||
|
||||
然而,在 20 世纪后期,一些东西开始改变。 新技术打开了全球化的大门,从而使团队更加分散。 我们投入人力资本的方式开始转变,永远地改变了人们之间的沟通方式。组织内部的人开始感觉得到了责任感,他们要求对自己的成功(和失败)拥有归属感。 领导者不再是权力的唯一拥有者。 21世纪的领导者带领 21 世纪的组织开始了解授权、协作、责任和清晰的沟通是一种新型权力的本质。 这些新领导人开始分享权力——他们无保留地信任他们的追随者。
|
||||
|
||||
随着组织继续变得更加开放,即使是没有“领导力”头衔的人也会感到有责任推动变革。 这些组织消除了等级制度的枷锁,让工人们以他们认为合适的方式去工作。 历史暴露了 20 世纪领导人倾向通过单边决策和单向信息流来扼杀敏捷性。 但是,新世纪的领导者却是确定一个组织,让由它授权的若干个体来完成一些事情。 重点是权力赋予若干个体——坦率地说,一个领导者不能在任何时候出现在所有的地方,做出所有的决定。
|
||||
|
||||
因此,领导人也开始变得开放。
|
||||
|
||||
### 控制
|
||||
|
||||
当旧式领导人专注于指挥和控制的地位权力时,一个开放的领导者通过新形式的组织管理方式、新技术和其他减少摩擦的方式,将组织控制权放在了其它人身上,这样可以更有效的方式实现集体行动的方式。 这些领导者了解信任的力量,相信追随者总是会表现出主动性、参与性和独立性。 而这种新的领导方式需要在战术上有所转变——从告诉人们如何去做,到向他们展示如何去做,并在路上指导他们。开放的领导人很快就发现,领导力不是影响我们发挥进步的力量,而是我们在组织成员中分配的力量和信心。 21 世纪的领导者专注于社区和对他人的教化。最后,开放的领导者并不是专注于自我,而是无私的。
|
||||
|
||||
### 交流
|
||||
|
||||
20 世纪的领导者人组织并控制整个组织的信息的流动。 然而,开放的领导者试图通过与团队成员共享信息和背景(以及权力)来组织一个组织。 这些领导人摧毁了领地,谦逊前行,分享着前所未有的力量。 集体赋权和参与的协作创造了灵活性,分担责任,所有权,尤其是幸福。 当一个组织的成员被授权做他们的工作时,他们比等级层次的同事更快乐(因而更有生产力)。
|
||||
|
||||
### 信任
|
||||
|
||||
开放的领导者接受不确定性,相信他们的追随者会在正确的时间做正确的事情。与传统的命令式领导者相比,他们能够更高效地吸引和利用人力资本。再说一次:他们不会像命令和控制式的微观管理者那样运作。他们提高透明度而不是暗箱操作,尽可能把决策和行动放在公开场合进行,解释决策的依据,并相信员工对组织内的情况有很高的把握。开放领导者行事的前提是:即使没有他们的持续干预,组织中的人力资本也完全有能力取得成功。
|
||||
|
||||
### 自治权
|
||||
|
||||
在 20 世纪具有强大指挥和控制力的领导者专注于某些权力的时候,一个开放的领导者更多地关注组织内个人的实际活动。 当领导者专注于个人时,他们就能够更好地训练和指导团队成员。 从这个角度来看,一个开放的领导者关注的是与组织的愿景和使命一致的行为和行动。最后,一个开放的领导者被看作是团队中的一员,而不是团队的领导者。 这并不意味着领导人放弃了权力的地位,而是低估了这一点,以分享权力,并通过自主创造成果赋予个人权力。
|
||||
|
||||
### 赋权
|
||||
|
||||
开放的领导人把重点放在授予组织成员的权力上。 在这个过程中承认领导者在组织人力资本中的技能、能力和信任,从而为整个团队带来了积极的动力和意愿。 最终,赋权就是帮助追随者相信他们自己的能力。 那些相信自己拥有个人权力的追随者更有可能采取主动行动、制定和实现更高的目标,并在困难的环境下坚持下去。 最终,开放组织的概念是关于包容性,每个人都是属于自己的,个性和不同的观点对于成功是至关重要的。 一个开放的组织及其开放的领导者提供了一种社区的感觉,而成员则受到组织的使命或目的的驱动。 这会产生一种比个人更大的归属感。 个性创造了成员之间的幸福和工作满意度。 反过来,又实现了更高的效率和成功。
|
||||
|
||||
我们都应该为 21 世纪领导人所要求的开放性而努力。 这需要自我反省,好奇心,尤其是它正在进行的改变。 通过新的态度和习惯,我们逐渐发现了一个真正的开放领导者,并且希望我们在适应 21 世纪的领导风格的同时,也开始采纳这些理念。
|
||||
|
||||
是的,领导力就是力量。我们如何利用这种权力决定了我们组织的成败。那些滥用权力的人不会持久,但那些分享权力、为他人喝彩的人会走得更远。通过阅读[这本书][7],你可以在关于开放组织及其领导方式的持续对话中开始发挥重要作用。在[本卷][8]的结论中,你将找到与开放组织社区联系的更多资源和机会,以便你也可以与我们交流、思考和成长。欢迎加入这场对话,欢迎光临!
|
||||
|
||||
_这篇文章最初是作为《开放组织领导手册》的引言出现的,它现在可以[从 Opensource.com 中可获得][5]。_
|
||||
|
||||
( Image by : opensource.com)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Philip A Foster - Dr. Philip A. Foster 是一名领导/商业教练兼顾问兼兼职教授。 他是企业运营、组织发展、展望和战略领导层的著名思想领袖。 Dr. Foster 通过设计和实施战略、战略预见和规划来促进变革。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/17/2/need-open-leaders-more-ever
|
||||
|
||||
作者:[Philip A Foster][a]
|
||||
译者:[TimeBear](https://github.com/TimeBear)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/maximumchange
|
||||
[1]:https://opensource.com/open-organization/resources/leaders-manual?src=too_resource_menu
|
||||
[2]:https://opensource.com/open-organization/resources/field-guide?src=too_resource_menu
|
||||
[3]:https://opensource.com/open-organization/resources/open-org-definition?src=too_resource_menu
|
||||
[4]:https://opensource.com/open-organization/resources/open-decision-framework?src=too_resource_menu
|
||||
[5]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[6]:https://opensource.com/open-organization/17/2/need-open-leaders-more-ever?rate=c_9hT0EKbdXcTGRl-YW0QgW60NsRwO2a4RaplUKfvXs
|
||||
[7]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[8]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[9]:https://opensource.com/user/15497/feed
|
||||
[10]:https://opensource.com/users/maximumchange
|
@ -0,0 +1,151 @@
|
||||
Docker 引擎的 Swarm 模式:添加工作者节点教程
|
||||
================
|
||||
|
||||
让我们继续几周前在 CentOS 7.2 中开始的工作。 在本[指南][1]中,我们学习了如何初始化以及启动 Docker 1.12 中内置的原生的集群以及编排功能。但是我们只有管理者(manager)节点还没有其它工作者(worker)节点。今天我们会展开讲述这个。
|
||||
|
||||
我将向你展示如何将不对称节点添加到 Sawrm 中,比如一个与 CentOS 相邻的 [Fedora 24][2],它们都将加入到集群中,还有相关很棒的负载均衡等等。当然这并不是轻而易举的,我们会遇到一些障碍,所以它应该是非常有趣的。
|
||||
|
||||

|
||||
|
||||
### 先决条件
|
||||
|
||||
在将其它节点成功加入 Swarm 之前,我们需要做几件事情。理想情况下,所有节点都应该运行相同版本的 Docker,为了支持原生的编排功能,它的版本至少应该为 1.12。像 CentOS 一样,Fedora 内置的仓库没有最新的构建版本,所以你需要手动构建,或者使用 Docker 仓库手动[添加和安装][3]正确的版本,并修复一些依赖冲突。我已经向你展示了如何在 CentOS 中操作,过程是相同的。
|
||||
|
||||
此外,所有节点都需要能够相互通信。这就需要有正确的路由和防火墙规则,这样管理者(manager)和工作者(worker)节点才能互相通信。否则,你无法将节点加入 Swarm 中。最简单的解决方法是临时清除防火墙规则 (`iptables -F`),但这可能会损害你的安全。请确保你完全了解你正在做什么,并为你的节点和端口创建正确的规则。
|
||||
|
||||
> Error response from daemon: Timeout was reached before node was joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
|
||||
|
||||
> 守护进程的错误响应:节点加入之前已超时。尝试加入 Swarm 的请求将在后台继续进行。使用 “docker info” 命令查看节点的当前 Swarm 状态。
|
||||
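如果遇到上面的超时错误,多半就是 Swarm 所需的端口被防火墙拦住了。在 CentOS/Fedora 上,与其用 `iptables -F` 清空全部规则,不如只放行 Swarm 模式需要的几个端口,大致如下(示意):

```
# 集群管理通信、节点间通信以及 overlay 网络所用的端口
sudo firewall-cmd --permanent --add-port=2377/tcp
sudo firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload
```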
|
||||
你需要在主机上提供相同的 Docker 镜像。在上一个教程中我们创建了一个 Apache 映像,你需要在你的工作者(worker)节点上执行相同操作,或者分发已创建的镜像。如果你不这样做,你会遇到错误。如果你在设置 Docker 上需要帮助,请阅读我的[介绍指南][4]和[网络教程][5]。
|
||||
|
||||
```
|
||||
7vwdxioopmmfp3amlm0ulimcu \_ websky.11 my-apache2:latest
|
||||
localhost.localdomain Shutdown Rejected 7 minutes ago
|
||||
"No such image: my-apache2:lat&"
|
||||
```
|
||||
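为了避免上面这种“No such image”的错误,把镜像分发到工作者节点的一种简单做法是使用 docker save/load(示意;`worker-node` 是假设的主机名,也可以改用自己的私有镜像仓库):

```
# 在管理节点上导出镜像,复制到工作者节点后再导入
docker save my-apache2:latest -o my-apache2.tar
scp my-apache2.tar user@worker-node:/tmp/
ssh user@worker-node 'docker load -i /tmp/my-apache2.tar'
```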
|
||||
### 现在开始
|
||||
|
||||
现在我们有一台启动了 CentOS 机器,并成功地创建了容器。你可以使用主机端口连接到该服务,这一切都看起来很好。目前,你的 Swarm 只有管理者(manager)。
|
||||
|
||||

|
||||
|
||||
### 加入工作者(worker)
|
||||
|
||||
要添加新的节点,你需要使用 `join` 命令。但是你首先必须提供令牌、IP 地址和端口,以便工作者(woker)节点能正确地对 Swarm 管理器进行身份验证。接着(在 Fedora 上)执行:
|
||||
|
||||
```
|
||||
[root@localhost ~]# docker swarm join-token worker
|
||||
To add a worker to this swarm, run the following command:
|
||||
|
||||
docker swarm join \
|
||||
--token SWMTKN-1-0xvojvlza90nrbihu6gfu3qm34ari7lwnza ... \
|
||||
192.168.2.100:2377
|
||||
```
|
||||
|
||||
如果你不修复防火墙和路由规则,你会得到超时错误。如果你已经加入了 Swarm,重复 `join` 命令会收到错误:
|
||||
|
||||
```
|
||||
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
|
||||
```
|
||||
|
||||
如果有疑问,你可以离开 Swarm,然后重试:
|
||||
|
||||
```
|
||||
[root@localhost ~]# docker swarm leave
|
||||
Node left the swarm.
|
||||
|
||||
docker swarm join --token
|
||||
SWMTKN-1-0xvojvlza90nrbihu6gfu3qnza4 ... 192.168.2.100:2377
|
||||
This node joined a swarm as a worker.
|
||||
```
|
||||
|
||||
在工作者(worker)节点中,你可以使用 `docker info` 来检查状态:
|
||||
|
||||
```
|
||||
Swarm: active
|
||||
NodeID: 2i27v3ce9qs2aq33nofaon20k
|
||||
Is Manager: false
|
||||
Node Address: 192.168.2.103
|
||||
|
||||
Likewise, on the manager:
|
||||
|
||||
Swarm: active
|
||||
NodeID: cneayene32jsb0t2inwfg5t5q
|
||||
Is Manager: true
|
||||
ClusterID: 8degfhtsi7xxucvi6dxvlx1n4
|
||||
Managers: 1
|
||||
Nodes: 3
|
||||
Orchestration:
|
||||
Task History Retention Limit: 5
|
||||
Raft:
|
||||
Snapshot Interval: 10000
|
||||
Heartbeat Tick: 1
|
||||
Election Tick: 3
|
||||
Dispatcher:
|
||||
Heartbeat Period: 5 seconds
|
||||
CA Configuration:
|
||||
Expiry Duration: 3 months
|
||||
Node Address: 192.168.2.100
|
||||
```
|
||||
|
||||
### 创建或缩放服务
|
||||
|
||||
现在,我们需要看下 Docker 是否以及如何在节点间分发容器。我的测试展示了一个在非常轻的负载下相当简单的平衡算法。试了一两次之后,即使在我尝试缩放并更新之后,Docker 也没有将运行的服务重新分配给新的 worker。同样,有一次,它在工作者(worker)节点上创建了一个新的服务。也许这是最好的选择。
|
||||
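文中提到的“缩放并更新”大致对应这样的命令(示意,沿用前文的 frontend 服务):

```
docker service scale frontend=8                            # 把副本数从 5 调整到 8
docker service ps frontend                                 # 查看各副本被调度到了哪些节点
docker service update --image my-apache2:latest frontend   # 更新服务所用的镜像
```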
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
*在新的工作者(worker)节点上完整创建新的服务。*
|
||||
|
||||
过了一段时间,两个容器之间的现有服务有一些重新分配,但这需要一些时间。新服务工作正常。这只是一个前期观察,所以我现在不能说更多。现在是开始探索和调整的新起点。
|
||||
|
||||

|
||||
|
||||
*负载均衡过了一会工作了。*
|
||||
|
||||
### 总结
|
||||
|
||||
Docker 是一只灵巧的小野兽,它仍在继续长大,变得更复杂、更强大,当然也更优雅。它被一个大企业吃掉只是一个时间问题。当它带来了原生的编排功能时,Swarm 模式运行得很好,但是它不只是几个容器而已,而是充分利用了其算法和可扩展性。
|
||||
|
||||
我的教程展示了如何将 Fedora 节点添加到由 CentOS 运行的群集中,并且两者能并行工作。关于负载平衡还有一些问题,但这是我将在以后的文章中探讨的。总而言之,我希望这是一个值得记住的一课。我们已经解决了在尝试设置 Swarm 时可能遇到的一些先决条件和常见问题,同时我们启动了一堆容器,我们甚至简要介绍了如何缩放和分发服务。要记住,这只是一个开始。
|
||||
|
||||
干杯。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
我是 Igor Ljubuncic。现在大约 38 岁,已婚但还没有孩子。我现在在一个大胆创新的云科技公司做首席工程师。直到大约 2015 年初时,我还在一个全世界最大的 IT 公司之一中做系统架构工程师,和一个工程计算团队开发新的基于 Linux 的解决方案,优化内核以及攻克 Linux 的问题。在那之前,我是一个为高性能计算环境设计创新解决方案的团队的技术领导。还有一些其他花哨的头衔,包括系统专家、系统程序员等等。所有这些都曾是我的爱好,但从 2008 年开始成为了我的付费工作。还有什么比这更令人满意的呢?
|
||||
|
||||
从 2004 年到 2008 年间,我曾通过作为医学影像行业的物理学家来糊口。我的工作专长集中在解决问题和算法开发。为此,我广泛地使用了 Matlab,主要用于信号和图像处理。另外,我得到了几个主要的工程方法学的认证,包括 MEDIC 六西格玛绿带、试验设计以及统计工程学。
|
||||
|
||||
我也开始写书,包括奇幻类和 Linux 上的技术性工作。彼此交融。
|
||||
|
||||
要查看我开源项目、出版物和专利的完整列表,请滚动到下面。
|
||||
|
||||
有关我的奖项,提名和 IT 相关认证的完整列表,请稍等一下。
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: http://www.dedoimedo.com/computers/docker-swarm-adding-worker-nodes.html
|
||||
|
||||
作者:[Igor Ljubuncic][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.dedoimedo.com/faq.html
|
||||
[1]:https://linux.cn/article-8888-1.html
|
||||
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
|
||||
[3]:http://www.dedoimedo.com/computers/docker-centos-upgrade-latest.html
|
||||
[4]:http://www.dedoimedo.com/computers/docker-guide.html
|
||||
[5]:http://www.dedoimedo.com/computers/docker-networking.html
|
71
published/20170310 Developer-defined application delivery.md
Normal file
@ -0,0 +1,71 @@
|
||||
开发者定义的应用交付
|
||||
============================================================
|
||||
|
||||
> 负载均衡器如何帮助你解决分布式系统的复杂性。
|
||||
|
||||
|
||||

|
||||
|
||||
原生云应用旨在利用分布式系统的性能、可扩展性和可靠性优势。不幸的是,分布式系统往往以额外的复杂性为代价。由于你程序的各个组件跨网络分布,并且这些网络有通信障碍或者性能降级,因此你的分布式程序组件需要能够继续独立运行。
|
||||
|
||||
为了避免程序状态的不一致,分布式系统在设计上应该有一个共识,即组件会失效,而这一点在网络中表现得最为突出。因此,分布式系统的核心在很大程度上依赖于负载均衡:把请求分布到两个或多个系统上,以便在网络中断时保持弹性,并在系统负载波动时进行水平扩展。
|
||||
|
||||
随着分布式系统在原生云程序的设计和交付中越来越普及,负载平衡器在现代应用程序体系结构的各个层次都影响了基础设施设计。在最常见的配置中,负载平衡器部署在应用程序前端,处理来自外部世界的请求。然而,微服务的出现意味着负载平衡器也可以在幕后发挥关键作用:即管理_服务_之间的流量。
|
||||
|
||||
因此,当你使用原生云程序和分布式系统时,负载均衡器将承担其他角色:
|
||||
|
||||
* 作为提供缓存和增加安全性的**反向代理**,因为它成为外部客户端的中间人。
|
||||
* 作为通过提供协议转换(例如 REST 到 AMQP)的 **API 网关**。
|
||||
* 它可以处理**安全性**(即运行 Web 应用程序防火墙)。
|
||||
* 它可能承担应用程序管理任务,如速率限制和 HTTP/2 支持。
|
||||
|
||||
鉴于它们的扩展能力远大于平衡流量,<ruby>负载平衡器<rt>load balancer</rt></ruby>可以更广泛地称为<ruby>应用交付控制器<rt>Application Delivery Controller</rt></ruby>(ADC)。
|
||||
|
||||
### 开发人员定义基础设施
|
||||
|
||||
从历史上看,ADC 是由 IT 专业人员购买、部署和管理的,最常见运行企业级架构的应用程序。对于物理负载平衡器设备(如 F5、Citrix、Brocade等),这种情况在很大程度上仍然存在。具有分布式系统设计和临时基础设施的云原生应用要求负载平衡器与它们运行时的基础设施 (如容器) 一样具有动态特性。这些通常是软件负载均衡器(例如来自公共云提供商的 NGINX 和负载平衡器)。云原生应用通常是开发人员主导的计划,这意味着开发人员正在创建应用程序(例如微服务器)和基础设施(Kubernetes 和 NGINX)。开发人员越来越多地对负载平衡 (和其他) 基础设施的决策做出或产生重大影响。
|
||||
|
||||
作为决策者,云原生应用的开发人员通常不会意识到企业基础设施需求或现有部署的影响,同时要考虑到这些部署通常是新的,并且经常在公共或私有云环境中进行部署。云技术将基础设施抽象为可编程 API,开发人员正在定义应用程序在该基础设施的每一层的构建方式。在有负载平衡器的情况下,开发人员会选择要使用的类型、部署方式以及启用哪些功能。它们以编程的方式对负载平衡器的行为进行编码 —— 随着程序在部署的生存期内增长、收缩和功能上进化时,它如何动态响应应用程序的需要。开发人员将基础设施定义为代码 —— 包括基础设施配置及其运维。
|
||||
|
||||
### 开发者为什么定义基础设施?
|
||||
|
||||
编写如何构建和部署应用程序的代码实践已经发生了根本性的转变,它体现在很多方面。简而言之,这种根本性的转变是由两个因素推动的:将新的应用功能推向市场所需的时间(_上市时间_)以及应用用户从产品中获得价值所需的时间(_获益时间_)。因此,新的程序写出来就被持续地交付(作为服务),无需下载和安装。
|
||||
|
||||
上市时间和获益时间的压力并不是新的,但由于其他因素的加剧,这些因素正在加强开发者的决策权力:
|
||||
|
||||
* 云:通过 API 将基础设施定义为代码的能力。
|
||||
* 伸缩:需要在大型环境中高效运维。
|
||||
* 速度:马上需要交付应用功能,为企业争取竞争力。
|
||||
* 微服务:抽象框架和工具选择,进一步赋予开发人员基础设施决策权力。
|
||||
|
||||
除了上述因素外,值得注意的是开源的影响。随着开源软件的普及和发展,开发人员手中掌握了许多应用程序基础设施:语言、运行时环境、框架、数据库、负载均衡器、托管服务等。微服务的兴起使应用程序基础设施的选择民主化,允许开发人员选择最佳的工具。在负载均衡器的选择上,那些能够与云原生应用的动态特质紧密集成并做出响应的产品将会脱颖而出。
|
||||
|
||||
### 总结
|
||||
|
||||
当你在仔细考虑你的云原生应用设计时,请与我一起讨论“[在云中使用 NGINX 和 Kubernetes 进行负载平衡][8]”。我们将检测不同公共云和容器平台的负载平衡功能,并通过一个宏应用的案例研究。我们将看看它是如何被变成较小的、独立的服务,以及 NGINX 和 Kubernetes 的能力是如何拯救它的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Lee Calcote 是一位创新思想领袖,对开发者平台和云、容器、基础设施和应用的管理软件充满热情。先进的和新兴的技术一直是 Calcote 在 SolarWinds、Seagate、Cisco 和 Pelco 时的关注重点。他是技术会议和聚会的组织者、写作者、作家、演讲者,经常活跃在技术社区。
|
||||
|
||||
----------------------------
|
||||
|
||||
via: https://www.oreilly.com/learning/developer-defined-application-delivery
|
||||
|
||||
作者:[Lee Calcote][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.oreilly.com/people/7f693-lee-calcote
|
||||
[1]:https://pixabay.com/en/ship-containers-products-shipping-84139/
|
||||
[2]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
|
||||
[3]:https://www.oreilly.com/people/7f693-lee-calcote
|
||||
[4]:http://www.oreilly.com/pub/e/3864?intcmp=il-webops-webcast-reg-webcast_new_site_developer_defined_application_delivery_text_cta
|
||||
[5]:https://www.oreilly.com/learning/developer-defined-application-delivery?imm_mid=0ee8c5&cmp=em-webops-na-na-newsltr_20170310
|
||||
[6]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
|
||||
[7]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
|
||||
[8]:http://www.oreilly.com/pub/e/3864?intcmp=il-webops-webcast-reg-webcast_new_site_developer_defined_application_delivery_body_text_cta
|
@ -0,0 +1,89 @@
|
||||
如何使用拉取请求(PR)来改善你的代码审查
|
||||
============================================================
|
||||
|
||||
> 通过使用 GitHub 的<ruby>拉取请求<rt>Pull Request</rt></ruby>正确地进行代码审核,把时间更多的花在构建上,而在修复上少用点时间。
|
||||
|
||||

|
||||
|
||||
如果你不是每天编写代码,你可能不知道软件开发人员日常面临的一些问题。
|
||||
|
||||
* 代码中的安全漏洞
|
||||
* 导致应用程序崩溃的代码
|
||||
* 被称作 “技术债务” 和之后需要重写的代码
|
||||
* 在某处你所不知道地方的代码已经被重写
|
||||
|
||||
<ruby>代码审查<rt>Code review</rt></ruby>可以允许其他的人或工具来检查代码,帮助我们改善所编写的软件。这种审查(也称为<ruby>同行评审<rt>peer review</rt></ruby>)能够通过自动化代码分析或者测试覆盖工具来进行,是软件开发过程中两个重要的部分,它能够节省数小时的手工工作。同行评审是开发人员审查彼此工作的一个过程。在软件开发的过程中,速度和紧迫性是两个经常提及的问题。如果你没有尽快的发布,你的竞争对手可能会率先发布新功能。如果你不能够经常发布新的版本,你的用户可能会怀疑您是否仍然关心改进你的应用程序。
|
||||
|
||||
### 权衡时间:代码审查与缺陷修复
|
||||
|
||||
如果有人能够以最小争议的方式汇集多种类型的代码审查,那么随着时间的推移,该软件的质量将会得到改善。如果认为引入新的工具或流程最先导致的不是延迟,那未免太天真了。但是代价更高昂的是:修复生产环境中的错误花费的时间,或者在放到生产环境之前改进软件所花费的时间。即使新工具延迟了新功能的发布和得到客户欣赏的时间,但随着软件开发人员提高自己的技能,该延迟会缩短,软件开发周期将会回升到以前的水平,而同时缺陷将会减少。
|
||||
|
||||
通过代码审查实现提升代码质量目标的关键之一就是使用一个足够灵活的平台,允许软件开发人员快速编写代码,置入他们熟悉的工具,并对彼此进行同行评审。 GitHub 就是这样的平台的一个很好的例子。然而,只是把你的代码放在 [GitHub][9] 上并不会魔术般地使代码审查发生;你必须使用<ruby>拉取请求<rt>Pull Request</rt></ruby>来开始这个美妙的旅程。
|
||||
|
||||
### 拉取请求:关于代码的现场讨论
|
||||
|
||||
<ruby>[拉取请求][10]<rt>Pull Request</rt></ruby>是 GitHub 上的一个工具,允许软件开发人员讨论并提出对项目主要代码库的更改,这些更改稍后可以部署给所有用户看到。这个功能创建于 2008 年 2 月,其目的是在某人的工作被接受(合并)并部署到生产环境、供最终用户看到之前,能够对其提出建议性的更改。
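下面是一个极简的 Python 示意,展示如何通过 GitHub 的 REST API 以编程方式创建一个拉取请求;其中的仓库名、分支名和令牌都是假设的示例值,并非原文内容:

```
# 通过 GitHub REST API 创建拉取请求的最小示意(仓库、分支、令牌均为假设值)
import requests

TOKEN = "ghp_xxx"                      # 假设:你的个人访问令牌
REPO = "octocat/hello-world"           # 假设:目标仓库

resp = requests.post(
    f"https://api.github.com/repos/{REPO}/pulls",
    headers={"Authorization": f"token {TOKEN}",
             "Accept": "application/vnd.github+json"},
    json={
        "title": "改进代码审查流程",
        "head": "feature-branch",      # 假设:包含更改的分支
        "base": "master",              # 合并的目标分支
        "body": "请审查这些更改。",
    },
)
resp.raise_for_status()
print("新拉取请求地址:", resp.json()["html_url"])
```

创建之后,这个拉取请求就成为下文所说的“关于代码的现场讨论”的载体。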
|
||||
|
||||
拉取请求开始是以一种松散的方式让你为某人的项目提供更改,但是它们已经演变成:
|
||||
|
||||
* 关于你想要合并的代码的现场讨论
|
||||
* 提升了所更改内容的可见性
|
||||
* 整合了你最喜爱的工具
|
||||
* 作为受保护的分支工作流程的一部分可能需要显式的拉取请求评审
|
||||
|
||||
### 对于代码:URL 是永久的
|
||||
|
||||
看看上述的前两个点,拉取请求促成了一个正在进行的代码讨论,使代码变更可以更醒目,并且使您很容易在审查的过程中找到所需的代码。无论是对于新人还是有经验的开发人员,能够回顾以前的讨论,了解一个功能为什么以这种方式开发出来,或者与另一个相关功能的讨论相联系起来是无价的。当跨多个项目协调,并使每个人尽可能接近代码时,前后讨论的内容也非常重要。如果这些功能仍在开发中,重要的是能够看到上次审查以来更改了哪些内容。毕竟,[审查小的更改要比大的容易得多][11],但不可能全都是小功能。因此,重要的是能够找到你上次审查,并只看到从那时以来的变化。
|
||||
|
||||
### 集成工具:软件开发人员的偏执
|
||||
|
||||
再看下上述第三点,GitHub 上的拉取请求有很多功能,但开发人员总是偏好第三方工具。代码质量是个完整的代码审查领域,它涉及到其它组件的代码评审,而这些评审不一定是由人完成的。检测“低效”或缓慢的代码、具有潜在安全漏洞或不符合公司标准的代码是留给自动化工具的任务。类似 [SonarQube][12] 和 [Code Climatecan][13] 这样工具可以分析你的代码,而像 [Codecov][14] 和 [Coveralls][15] 的这样工具可以告诉你刚刚写的新代码还没有得到很好的测试。这些工具最令人称奇的是,它们可以集成到 GitHub 中,并把它们的发现汇报到拉取请求当中!这意味着该过程中不仅是人们在审查代码,而且工具也在会在那里报告情况。这样每个人都可以完全了解一个功能是如何开发的。
|
||||
|
||||
最后,根据您的团队的偏好,您可以利用[受保护的分支工作流][16]的<ruby>必需状态<rt>required status</rt></ruby>功能来要求进行工具审查和同行评审。
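作为示意,下面用 Python 通过 GitHub REST API 为受保护分支配置必需状态检查和必需评审;仓库名、分支名、检查名和令牌均为假设值,仅用于说明这一工作流程:

```
# 为受保护分支要求状态检查与同行评审的最小示意(参数均为假设值)
import requests

TOKEN = "ghp_xxx"                       # 假设:具有管理员权限的访问令牌
REPO = "octocat/hello-world"            # 假设:目标仓库
BRANCH = "master"                       # 要保护的分支

resp = requests.put(
    f"https://api.github.com/repos/{REPO}/branches/{BRANCH}/protection",
    headers={"Authorization": f"token {TOKEN}",
             "Accept": "application/vnd.github+json"},
    json={
        # 要求这些状态检查(例如 CI、覆盖率工具)通过后才能合并
        "required_status_checks": {"strict": True,
                                   "contexts": ["ci/tests", "codecov"]},
        "enforce_admins": True,
        # 至少需要一名评审者批准
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,
    },
)
resp.raise_for_status()
print("分支保护已更新")
```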
|
||||
|
||||
虽然您可能只是刚刚开始您的软件开发之旅,或者是一位希望知道项目正在做什么的业务利益相关者,抑或是一位想要确保项目的及时性和质量的项目经理,你都可以通过设置批准流程来参与到拉取请求中,并考虑集成更多的工具以确保质量,这在任何级别的软件开发中都很重要。
|
||||
|
||||
无论是为您的个人网站,贵公司的在线商店,还是想用最新的组合以获得最大的收获,编写好的软件都需要进行良好的代码审查。良好的代码审查涉及到正确的工具和平台。要了解有关 GitHub 和软件开发过程的更多信息,请参阅 O'Reilly 的 《[GitHub 简介][17]》 一书, 您可以在其中了解创建项目、启动拉取请求以及概要了解团队的软件开发流程。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
**Brent Beer**
|
||||
|
||||
Brent Beer 通过大学的课程、对开源项目的贡献,以及担任专业网站开发人员使用 Git 和 GitHub 已经超过五年了。在担任 GitHub 上的培训师时,他也成为 O’Reilly 的 《GitHub 简介》的出版作者。他在阿姆斯特丹担任 GitHub 的解决方案工程师,帮助 Git 和 GitHub 向世界各地的开发人员提供服务。
|
||||
|
||||
**Peter Bell**
|
||||
|
||||
Peter Bell 是 Ronin 实验室的创始人以及 CTO。培训是存在问题的,我们通过技术提升培训来改进它!他是一位有经验的企业家、技术专家、敏捷教练和 CTO,专门从事 EdTech 项目。他为 O'Reilly 撰写了 《GitHub 简介》,为代码学校创建了“精通 GitHub ”课程,为 Pearson 创建了“ Git 和 GitHub 现场课”课程。他经常在国内和国际会议上发表 ruby、nodejs、NoSQL(尤其是 MongoDB 和 neo4j)、云计算、软件工艺、java、groovy、j 等的演讲。
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews
|
||||
|
||||
作者:[Brent Beer][a], [Peter Bell][b]
|
||||
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
|
||||
[b]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
|
||||
[1]:https://pixabay.com/en/measure-measures-rule-metro-106354/
|
||||
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[3]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
|
||||
[4]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
|
||||
[5]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews
|
||||
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[8]:https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311
|
||||
[9]:https://github.com/about
|
||||
[10]:https://help.github.com/articles/about-pull-requests/
|
||||
[11]:https://blog.skyliner.io/ship-small-diffs-741308bec0d1
|
||||
[12]:https://github.com/integrations/sonarqube
|
||||
[13]:https://github.com/integrations/code-climate
|
||||
[14]:https://github.com/integrations/codecov
|
||||
[15]:https://github.com/integrations/coveralls
|
||||
[16]:https://help.github.com/articles/about-protected-branches/
|
||||
[17]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews-lower
|
242
published/20170403 Introduction to functional programming.md
Normal file
@ -0,0 +1,242 @@
|
||||
函数式编程简介
|
||||
============================================================
|
||||
|
||||
> 我们来解释一下什么是函数式编程,它的优点有哪些,并且给出一些函数式编程的学习资源。
|
||||
|
||||

|
||||
|
||||
这要看您问的是谁, <ruby>函数式编程<rt>functional programming</rt></ruby>(FP)要么是一种理念先进的、应该广泛传播的程序设计方法;要么是一种偏学术性的、实际用途不多的编程方式。在这篇文章中我将讲解函数式编程,探究其优点,并推荐学习函数式编程的资源。
|
||||
|
||||
### 语法入门
|
||||
|
||||
本文的代码示例使用的是 [Haskell][40] 编程语言。在这篇文章中,你只需要了解下面这些基本的函数语法:
|
||||
|
||||
```
|
||||
even :: Int -> Bool
|
||||
even = ... -- 具体的实现放在这里
|
||||
```
|
||||
|
||||
上述示例定义了含有一个参数的函数 `even` ,第一行是 _类型声明_,具体来说就是 `even` 函数接受一个 Int 类型的参数,返回一个 Bool 类型的值,其实现跟在后面,由一个或多个等式组成。在这里我们将忽略具体实现方法(名称和类型已经足够了):
|
||||
|
||||
```
|
||||
map :: (a -> b) -> [a] -> [b]
|
||||
map = ...
|
||||
```
|
||||
|
||||
这个示例,`map` 是一个有两个参数的函数:
|
||||
|
||||
1. `(a -> b)` :将 `a` 转换成 `b` 的函数
|
||||
2. `[a]`:一个 `a` 的列表,并返回一个 `b` 的列表。(LCTT 译注: 将函数作用到 `[a]` (List 序列对应于其它语言的数组)的每一个元素上,将每次所得结果放到另一个 `[b]` ,最后返回这个结果 `[b]`。)
|
||||
|
||||
同样,我们不去关心它是如何实现的,我们只对它的类型声明感兴趣。`a` 和 `b` 是可以指代任何类型的<ruby>类型变量<rt>type variable</rt></ruby>。就像上一个示例中,`a` 是 `Int` 类型,`b` 是 `Bool` 类型:
|
||||
|
||||
```
|
||||
map even [1,2,3]
|
||||
```
|
||||
|
||||
其结果是一个 Bool 类型的序列:
|
||||
|
||||
```
|
||||
[False,True,False]
|
||||
```
|
||||
|
||||
如果你看到你不理解的其他语法,不要惊慌;对语法的充分理解不是必要的。
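如果你更熟悉 Python,下面是与上述 `map even [1,2,3]` 大致等价的一个小示例(仅为示意,并非原文内容):

```
# 与 Haskell 的 `map even [1,2,3]` 大致等价的 Python 写法
def is_even(n: int) -> bool:
    return n % 2 == 0

print(list(map(is_even, [1, 2, 3])))   # [False, True, False]
```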
|
||||
|
||||
### 函数式编程的误区
|
||||
|
||||
我们先来解释一下常见的误区:
|
||||
|
||||
* 函数式编程不是命令式编程或者面向对象编程的竞争对手或对立面,这并不是非此即彼的。
|
||||
* 函数式编程不仅仅用在学术领域。的确,在函数式编程的历史中,像 Haskell 和 OCaml 这样的语言是最流行的研究语言。但是今天许多公司使用函数式编程来构建大型的系统、小型专业程序,以及种种不同场合的软件。甚至还有一个[面向函数式编程的商业用户][33]的年度会议;以前的那些议程让我们了解了函数式编程在工业中的用途,以及谁在使用它。
|
||||
* 函数式编程与 [monad][34] 无关,也与任何其他特定的抽象无关。就本文而言,monad 只是众多抽象中的一种,有些抽象是 monad,有些不是。
|
||||
* 函数式编程不是特别难学的。某些语言可能与您已经知道的语法或求值语义不同,但这些差异是浅显的。函数式编程中有大量的概念,但其他语言也是如此。
|
||||
|
||||
### 什么是函数式编程?
|
||||
|
||||
函数式编程的核心是只使用_纯粹_的数学函数来编程:函数的结果仅取决于其参数,并且没有 I/O 或者状态变更之类的副作用。程序是通过<ruby>函数组合<rt>function composition</rt></ruby>的方法构建的:
|
||||
|
||||
```
|
||||
(.) :: (b -> c) -> (a -> b) -> (a -> c)
|
||||
(g . f) x = g (f x)
|
||||
```
|
||||
|
||||
这个<ruby>中缀<rt>infix</rt></ruby>函数 `(.)` 将两个函数组合成一个:先应用 `f`,再把 `g` 作用到 `f` 的结果上。我们将在下一个示例中看到它的使用。作为比较,我们看看在 Python 中同样的函数:
|
||||
|
||||
```
|
||||
def compose(g, f):
|
||||
return lambda x: g(f(x))
|
||||
```
|
||||
|
||||
函数式编程的优点在于:由于函数是确定的、没有副作用的,所以总是可以用函数调用的结果来替换函数调用本身,这种替换能力带来了<ruby>等式推理<rt>equational reasoning</rt></ruby>。每个程序员都需要推理自己和别人的代码,而等式推理正是做这种推理的好工具。来看一个示例。假设你遇到了这段代码:
|
||||
|
||||
```
|
||||
map even . map (+1)
|
||||
```
|
||||
|
||||
这段代码是做什么的?可以简化吗?通过等式推理,可以通过一系列替换来分析代码:
|
||||
|
||||
```
|
||||
map even . map (+1)
|
||||
map (even . (+1)) -- 来自 'map' 的定义
|
||||
map (\x -> even (x + 1)) -- lambda 抽象
|
||||
map odd -- 来自 'even' 的定义
|
||||
```
|
||||
|
||||
我们可以使用等式推理来理解程序并优化可读性。Haskell 编译器使用等式推理进行多种程序优化。没有纯函数,等式推理是不可能的,或者需要程序员付出更多的努力。
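下面用 Python 粗略地演示这种推理所依赖的等价关系(假设函数没有副作用),即分两次遍历、组合成一次遍历和化简后的版本结果相同:

```
# 等式推理的示意:map even . map (+1) 与 map (even . (+1)) 再到 map odd 结果相同
def inc(n):
    return n + 1

def is_even(n):
    return n % 2 == 0

def is_odd(n):
    return n % 2 == 1

xs = [1, 2, 3, 4]

two_passes = list(map(is_even, map(inc, xs)))          # 先 +1,再判断偶数
one_pass = list(map(lambda x: is_even(inc(x)), xs))    # 组合成单次遍历
simplified = list(map(is_odd, xs))                     # 化简后:直接判断奇数

assert two_passes == one_pass == simplified            # 三者等价
print(two_passes)   # [True, False, True, False]
```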
|
||||
|
||||
### 函数式编程语言
|
||||
|
||||
你需要一种编程语言来做函数式编程吗?
|
||||
|
||||
在没有<ruby>高阶函数<rt>higher-order function</rt></ruby>(传递函数作为参数和返回函数的能力)、lambdas (匿名函数)和<ruby>泛型<rt>generics</rt></ruby>的语言中进行有意义的函数式编程是困难的。 大多数现代语言都有这些,但在不同语言中支持函数式编程方面存在差异。 具有最佳支持的语言称为<ruby>函数式编程语言<rt>functional programming language</rt></ruby>。 这些包括静态类型的 _Haskell_、_OCaml_、_F#_ 和 _Scala_ ,以及动态类型的 _Erlang_ 和 _Clojure_。
|
||||
|
||||
即使是在函数式语言里,可以在多大程度上利用函数编程有很大差异。有一个<ruby>类型系统<rt>type system</rt></ruby>会有很大的帮助,特别是它支持 <ruby>类型推断<rt>type inference</rt></ruby> 的话(这样你就不用总是必须键入类型)。这篇文章中没有详细介绍这部分,但足以说明,并非所有的类型系统都是平等的。
|
||||
|
||||
与所有语言一样,不同的函数式语言强调不同的概念、技术或用例。选择语言时,重要的是考虑它对函数式编程的支持程度以及是否适合您的用例。如果您使用的是某种非 FP 语言,在该语言所支持的范围内使用函数式编程仍然会让你受益。
|
||||
|
||||
### 不要打开陷阱之门
|
||||
|
||||
回想一下,函数的结果只取决于它的输入。但是,几乎所有的编程语言都有破坏这一原则的“功能”。空值、<ruby>实例类型<rt>type case</rt></ruby>(`instanceof`)、类型转换、异常、<ruby>副作用<rt>side-effect</rt></ruby>,以及无限循环的可能性都是陷阱,它们会打破等式推理,并削弱程序员对程序行为正确性的推理能力。(完全没有这些陷阱的语言包括 Agda、Idris 和 Coq。)
|
||||
|
||||
幸运的是,作为程序员,我们可以选择避免这些陷阱,如果我们受到严格的规范,我们可以假装陷阱不存在。 这个方法叫做<ruby>轻率推理<rt>fast and loose reasoning</rt></ruby> 。它不需要任何条件,几乎任何程序都可以在不使用陷阱的情况下进行编写,并且通过避免这些可以而获得等式推理、可组合性和可重用性。
|
||||
|
||||
让我们以异常为例详细讨论一下。这个陷阱破坏了等式推理,因为异常终止的可能性没有反映在类型中。(如果文档中提到了可能抛出的异常,那已经算你幸运了。)但是我们没有理由不能使用一个涵盖所有故障模式的返回类型。
|
||||
|
||||
如何避开这些陷阱,是各个语言的特性差异很大的一个领域。为了避免异常,可以用<ruby>代数数据类型<rt>algebraic data type</rt></ruby>对错误条件建模,就像这样:
|
||||
|
||||
```
|
||||
-- new data type for results of computations that can fail
|
||||
--
|
||||
data Result e a = Error e | Success a
|
||||
|
||||
-- new data type for three kinds of arithmetic errors
|
||||
--
|
||||
data ArithError = DivByZero | Overflow | Underflow
|
||||
|
||||
-- integer division, accounting for divide-by-zero
|
||||
--
|
||||
safeDiv :: Int -> Int -> Result ArithError Int
|
||||
safeDiv x y =
|
||||
if y == 0
|
||||
then Error DivByZero
|
||||
else Success (div x y)
|
||||
```
|
||||
|
||||
这个例子中的权衡是:你现在必须使用 `Result ArithError Int` 类型,而不是以前的 `Int` 类型,但这也是解决这个问题的一种方式。你不再需要处理异常,而能够继续使用轻率推理,总体来说这是一个胜利。
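作为对照,下面是用 Python 对同一思路的一个粗略模拟(仅为示意):用普通的数据类型把“可能失败”编码进返回值,而不是抛出异常。

```
# 用返回值(而非异常)对除零错误建模的 Python 示意
from dataclasses import dataclass
from typing import Union

@dataclass
class Error:
    reason: str            # 例如 "DivByZero"

@dataclass
class Success:
    value: int

Result = Union[Error, Success]

def safe_div(x: int, y: int) -> Result:
    if y == 0:
        return Error("DivByZero")
    return Success(x // y)

res = safe_div(10, 0)
if isinstance(res, Error):
    print("失败:", res.reason)    # 调用者被迫显式处理失败的情况
else:
    print("结果:", res.value)
```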
|
||||
|
||||
### 自由定理
|
||||
|
||||
大多数现代静态类型语言具有<ruby>泛型<rt>generics</rt></ruby>(也称为<ruby>参数多态性<rt>parametric polymorphism</rt></ruby>),其中函数是通过一个或多个抽象类型定义的。 例如,看看这个 List(列表)函数:
|
||||
|
||||
```
|
||||
f :: [a] -> [a]
|
||||
f = ...
|
||||
```
|
||||
|
||||
Java 中的相同函数如下所示:
|
||||
|
||||
```
|
||||
static <A> List<A> f(List<A> xs) { ... }
|
||||
```
|
||||
|
||||
这个程序能够编译通过,就证明了该函数适用于类型 `a` 的*任意*选择。考虑到这一点,采用轻率推理的方法,你能够弄清楚该函数的作用吗?知道类型对此有什么帮助?
|
||||
|
||||
在这种情况下,该类型并不能告诉我们函数的功能(它可以逆转序列、删除第一个元素,或许多其它的操作),但它确实告诉了我们很多信息。只是从该类型,我们可以推演出该函数的定理:
|
||||
|
||||
* 定理 1 :输出中的每个元素也出现于输入中;它不可能向列表中添加 `a` 类型的值,因为你不知道 `a` 是什么,也不知道怎么构造一个。
|
||||
* 定理 2 :如果你先将某个函数映射到列表上,然后再对结果应用 `f`,这等同于先对列表应用 `f`,再做映射。
|
||||
|
||||
定理 1 帮助我们了解代码的作用,定理 2 对于程序优化提供了帮助。我们从类型中学到了这一切!其结果,即从类型中获取有用的定理的能力,称之为<ruby>参数化<rt>parametricity</rt></ruby>。因此,类型是函数行为的部分(有时是完整的)规范,也是一种机器检查机制。
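定理 2 可以用一个具体的 `f`(这里假设为“反转列表”)在 Python 里粗略地验证一下,仅作示意:

```
# 自由定理 2 的示意:map g 与 f 的先后顺序可以交换(以 f = 反转列表为例)
def f(xs):
    return list(reversed(xs))   # 某个类型相当于 [a] -> [a] 的函数

def g(x):
    return x * 10               # 任意的元素变换

xs = [1, 2, 3]
assert list(map(g, f(xs))) == f(list(map(g, xs)))   # 两种顺序结果相同
print(list(map(g, f(xs))))    # [30, 20, 10]
```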
|
||||
|
||||
现在你可以利用参数化了。你可以从 `map` 和 `(.)` 的类型或者下面的这些函数中发现什么呢?
|
||||
|
||||
* `foo :: a -> (a, a)`
|
||||
* `bar :: a -> a -> a`
|
||||
* `baz :: b -> a -> a`
|
||||
|
||||
### 学习函数式编程的资源
|
||||
|
||||
也许你已经相信函数式编程是编写软件的好方式,想知道该如何开始?学习函数式编程的途径有好几种;这里有一些我推荐的(我承认,我偏爱 Haskell):
|
||||
|
||||
* UPenn 的 [CIS 194: 介绍 Haskell][35] 是学习函数式编程概念和 Haskell 实际开发的不错选择。有课程材料,但是没有讲座(您可以观看几年前 Brisbane 函数式编程小组的 [CIS 194 系列讲座][36])。
|
||||
* 不错的入门书籍有 《[Scala 的函数式编程][30]》 、 《[Haskell 函数式编程思想][31]》 , 和 《[Haskell 编程原理][32]》。
|
||||
* [Data61 FP 课程][37](即 _NICTA_ 课程)通过<ruby>类型驱动开发<rt>type-driven development</rt></ruby>来教授基础的抽象概念和数据结构。它十分困难,但收获也很丰厚。它起源于培训课程,所以如果你身边有一名愿意引导你学习函数式编程的程序员,你可以尝试一下。
|
||||
* 在你的工作学习中使用函数式编程书写代码,写一些纯函数(避免不确定性和异常的出现),使用高阶函数和递归而不是循环,利用参数化来提高可读性和重用性。许多人从体验和实验各种语言的美妙之处,开始走上了函数式编程之旅。
|
||||
* 加入到你的地区中的一些函数式编程小组或者学习小组中,或者创建一个,也可以是参加一些函数式编程的会议(新的会议总是不断的出现)。
|
||||
|
||||
### 总结
|
||||
|
||||
在本文中,我讨论了函数式编程是什么以及不是什么,并介绍了函数式编程的优势,包括等式推理和参数化。我们了解到在大多数编程语言中都可以进行某种程度的函数式编程,但是语言的选择会影响受益的程度,而 Haskell 是函数式编程语言中最受欢迎的语言之一。我也推荐了一些学习函数式编程的资源。
|
||||
|
||||
函数式编程是一个丰富的领域,还有许多更深入(更神秘)的主题正在等待探索。我没有提到那些具有实际意义的事情,比如:
|
||||
|
||||
* lenses 和 prisms (是一流的设置和获取值的方式;非常适合使用嵌套数据);
|
||||
* 定理证明(当你可以证明你的代码正确时,为什么还要测试你的代码?);
|
||||
* 惰性求值(让您可以处理潜在无限的数据结构);
|
||||
* 范畴论(函数式编程中许多美丽实用的抽象的起源);
|
||||
|
||||
我希望你喜欢这个函数式编程的介绍,并且启发你走上这个有趣和实用的软件开发之路。
|
||||
|
||||
_本文根据 [CC BY 4.0][38] 许可证发布。_
|
||||
|
||||
(题图: opensource.com)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
红帽软件工程师。对函数式编程、范畴论、数学感兴趣。痴迷于墨西哥辣椒(jalapeño)。
|
||||
|
||||
----------------------
|
||||
|
||||
via: https://opensource.com/article/17/4/introduction-functional-programming
|
||||
|
||||
作者:[Fraser Tweedale][a]
|
||||
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/frasertweedale
|
||||
[1]:https://opensource.com/tags/javascript?src=programming_resource_menu
|
||||
[2]:https://opensource.com/tags/perl?src=programming_resource_menu
|
||||
[3]:https://opensource.com/tags/python?src=programming_resource_menu
|
||||
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
|
||||
[5]:https://developers.redhat.com/products/#developer_tools?intcmp=7016000000127cYAAQ&src=programming_resource_menu
|
||||
[6]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
|
||||
[7]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
|
||||
[8]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
|
||||
[9]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:div
|
||||
[10]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[11]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
|
||||
[12]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Bool
|
||||
[13]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[14]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[15]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[16]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[17]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[18]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[19]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[20]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[21]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[22]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[23]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[24]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[25]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[26]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[27]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[28]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[29]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:odd
|
||||
[30]:https://www.manning.com/books/functional-programming-in-scala
|
||||
[31]:http://www.cambridge.org/gb/academic/subjects/computer-science/programming-languages-and-applied-logic/thinking-functionally-haskell
|
||||
[32]:http://haskellbook.com/
|
||||
[33]:http://cufp.org/
|
||||
[34]:https://www.haskell.org/tutorial/monads.html
|
||||
[35]:https://www.cis.upenn.edu/~cis194/fall16/
|
||||
[36]:https://github.com/bfpg/cis194-yorgey-lectures
|
||||
[37]:https://github.com/data61/fp-course
|
||||
[38]:https://creativecommons.org/licenses/by/4.0/
|
||||
[39]:https://opensource.com/article/17/4/introduction-functional-programming?rate=_tO5hNzT4hRKNMJtWwQM-K3Jmxm10iPeqoy3bbS12MQ
|
||||
[40]:https://wiki.haskell.org/Introduction
|
||||
[41]:https://opensource.com/user/123116/feed
|
||||
[42]:https://opensource.com/users/frasertweedale
|
178
published/20170422 FEWER MALLOCS IN CURL.md
Normal file
@ -0,0 +1,178 @@
|
||||
减少 curl 中内存分配操作(malloc)
|
||||
===========================================================
|
||||
|
||||

|
||||
|
||||
今天我在 libcurl 内部又做了[一个小改动][4],使其做更少的 malloc。这一次,泛型链表函数被改写为不再使用 malloc(链表函数本来就应该这样,真的)。
|
||||
|
||||
### 研究 malloc
|
||||
|
||||
几周前我开始研究内存分配。这很容易,因为多年前我们 curl 中就已经有内存调试和日志记录系统了。使用 curl 的调试版本,并在我的构建目录中运行此脚本:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
export CURL_MEMDEBUG=$HOME/tmp/curlmem.log
|
||||
./src/curl http://localhost
|
||||
./tests/memanalyze.pl -v $HOME/tmp/curlmem.log
|
||||
```
|
||||
|
||||
对于 curl 7.53.1,这大约有 115 次内存分配。这算多还是少?
|
||||
|
||||
内存日志非常基础。为了让你有所了解,这是一个示例片段:
|
||||
|
||||
```
|
||||
MEM getinfo.c:70 free((nil))
|
||||
MEM getinfo.c:73 free((nil))
|
||||
MEM url.c:294 free((nil))
|
||||
MEM url.c:297 strdup(0x559e7150d616) (24) = 0x559e73760f98
|
||||
MEM url.c:294 free((nil))
|
||||
MEM url.c:297 strdup(0x559e7150d62e) (22) = 0x559e73760fc8
|
||||
MEM multi.c:302 calloc(1,480) = 0x559e73760ff8
|
||||
MEM hash.c:75 malloc(224) = 0x559e737611f8
|
||||
MEM hash.c:75 malloc(29152) = 0x559e737a2bc8
|
||||
MEM hash.c:75 malloc(3104) = 0x559e737a9dc8
|
||||
```
|
||||
|
||||
### 检查日志
|
||||
|
||||
然后,我对日志进行了更深入的研究,我意识到在相同的代码行做了许多小内存分配。我们显然有一些相当愚蠢的代码模式,我们分配一个结构体,然后将该结构添加到链表或哈希,然后该代码随后再添加另一个小结构体,如此这般,而且经常在循环中执行。(我在这里说的是_我们_,不是为了责怪某个人,当然大部分的责任是我自己……)
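为了说明这种“在相同代码行反复做小分配”的分析方式,下面是一个按调用点汇总内存日志的 Python 小脚本示意;日志格式取自上文示例,脚本本身是假设性的,并非 curl 项目自带的工具:

```
# 按“文件:行号”汇总 curl 内存调试日志中的分配次数(示意脚本)
import re
from collections import Counter

# 匹配形如 "MEM url.c:297 strdup(0x...) (24) = 0x..." 的行
pattern = re.compile(r"^MEM (\S+) (malloc|calloc|strdup|realloc)\(")

counts = Counter()
with open("curlmem.log") as log:          # 假设:日志文件路径
    for line in log:
        m = pattern.match(line)
        if m:
            counts[m.group(1)] += 1       # 以 文件:行号 为键计数

# 打印分配次数最多的 10 个调用点
for site, n in counts.most_common(10):
    print(f"{site:20} {n}")
```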
|
||||
|
||||
这两种分配操作将总是成对地出现,并被同时释放。我决定解决这些问题。做非常小的(小于 32 字节)的分配也是浪费的,因为非常多的数据将被用于(在 malloc 系统内)跟踪那个微小的内存区域。更不用说堆碎片了。
|
||||
|
||||
因此,将该哈希和链表代码修复为不使用 malloc 是快速且简单的方法,对于最简单的 “curl http://localhost” 传输,它可以消除 20% 以上的 malloc。
|
||||
|
||||
此时,我根据大小对所有的内存分配操作进行排序,并检查所有最小的分配操作。一个突出的部分是在 `curl_multi_wait()` 中,它是一个典型的在 curl 传输主循环中被反复调用的函数。对于大多数典型情况,我将其转换为[使用堆栈][5]。在大量重复的调用函数中避免 malloc 是一件好事。
|
||||
|
||||
### 重新计数
|
||||
|
||||
现在,如上面的脚本所示,同样的 `curl localhost` 命令从 curl 7.53.1 的 115 次分配操作下降到 80 个分配操作,而没有牺牲任何东西。轻松地有 26% 的改善。一点也不差!
|
||||
|
||||
由于我修改了 `curl_multi_wait()`,我也想看看它实际上是如何改进一些稍微更高级一些的传输。我使用了 [multi-double.c][6] 示例代码,添加了初始化内存记录的调用,让它使用 `curl_multi_wait()`,并且并行下载了这两个 URL:
|
||||
|
||||
```
|
||||
http://www.example.com/
|
||||
http://localhost/512M
|
||||
```
|
||||
|
||||
第二个文件是 512 兆字节的零,第一个文件是一个 600 字节的公共 html 页面。这是 [count-malloc.c 代码][7]。
|
||||
|
||||
首先,我使用 7.53.1 来测试上面的例子,并使用 `memanalyze` 脚本检查:
|
||||
|
||||
```
|
||||
Mallocs: 33901
|
||||
Reallocs: 5
|
||||
Callocs: 24
|
||||
Strdups: 31
|
||||
Wcsdups: 0
|
||||
Frees: 33956
|
||||
Allocations: 33961
|
||||
Maximum allocated: 160385
|
||||
```
|
||||
|
||||
好了,所以它总共使用了 160KB 的内存,分配操作次数超过 33900 次。而它下载超过 512 兆字节的数据,所以它每 15KB 数据有一次 malloc。是好是坏?
|
||||
|
||||
回到 git master,现在是 7.54.1-DEV 的版本 - 因为我们不太确定当我们发布下一个版本时会变成哪个版本号。它可能是 7.54.1 或 7.55.0,它还尚未确定。我离题了,我再次运行相同修改的 multi-double.c 示例,再次对内存日志运行 memanalyze,报告来了:
|
||||
|
||||
```
|
||||
Mallocs: 69
|
||||
Reallocs: 5
|
||||
Callocs: 24
|
||||
Strdups: 31
|
||||
Wcsdups: 0
|
||||
Frees: 124
|
||||
Allocations: 129
|
||||
Maximum allocated: 153247
|
||||
```
|
||||
|
||||
我不敢置信地反复看了两遍。发生什么了吗?为了仔细检查,我最好再运行一次。无论我运行多少次,结果还是一样的。
|
||||
|
||||
### 33961 vs 129
|
||||
|
||||
在典型的传输中 `curl_multi_wait()` 被调用了很多次,并且在传输过程中至少要正常进行一次内存分配操作,因此删除那个单一的微小分配操作对计数器有非常大的影响。正常的传输也会做一些将数据移入或移出链表和散列操作,但是它们现在也大都是无 malloc 的。简单地说:剩余的分配操作不会在传输循环中执行,所以它们的重要性不大。
|
||||
|
||||
以前的 curl 是当前示例分配操作数量的 263 倍。换句话说:新的是旧的分配操作数量的 0.37% 。
|
||||
|
||||
另外还有一点好处,新的内存分配量更少,总共减少了 7KB(4.3%)。
|
||||
|
||||
### malloc 重要吗?
|
||||
|
||||
在几个 G 内存的时代里,在传输中有几个 malloc 真的对于普通人有显著的区别吗?对 512MB 数据进行的 33832 个额外的 malloc 有什么影响?
|
||||
|
||||
为了衡量这些变化的影响,我决定比较 localhost 的 HTTP 传输,看看是否可以看到任何速度差异。localhost 对于这个测试是很好的,因为没有网络速度限制,更快的 curl 下载也越快。服务器端也会相同的快/慢,因为我将使用相同的测试集进行这两个测试。
|
||||
|
||||
我相同方式构建了 curl 7.53.1 和 curl 7.54.1-DEV,并运行这个命令:
|
||||
|
||||
```
|
||||
curl http://localhost/80GB -o /dev/null
|
||||
```
|
||||
|
||||
下载的 80GB 的数据会尽可能快地写到空设备中。
|
||||
|
||||
我获得的确切数字可能不是很有用,因为它将取决于机器中的 CPU、使用的 HTTP 服务器、构建 curl 时的优化级别等,但是相对数字仍然应该是高度相关的。新代码对决旧代码!
|
||||
|
||||
7.54.1-DEV 在反复测试中都要快 30%!从旧版本的 2200MB/秒 提升到了当前版本的超过 2900MB/秒。
|
||||
|
||||
这里的要点当然不是说它很容易在我的机器上使用单一内核以超过 20GB/秒的速度来进行 HTTP 传输,因为实际上很少有用户可以通过 curl 做到这样快速的传输。关键在于 curl 现在每个字节的传输使用更少的 CPU,这将使更多的 CPU 转移到系统的其余部分来执行任何需要做的事情。或者如果设备是便携式设备,那么可以省电。
|
||||
|
||||
关于 malloc 的成本:512MB 测试中,我使用旧代码发生了 33832 次或更多的分配。旧代码以大约 2200MB/秒的速率进行 HTTP 传输。这等于每秒 145827 次 malloc - 现在它们被消除了!600 MB/秒的改进意味着每秒钟 curl 中每个减少的 malloc 操作能额外换来多传输 4300 字节。
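这笔账可以用几行 Python 粗略复算一下(数字取自上文,仅为数量级上的示意):

```
# 用上文的数字粗算每秒 malloc 次数与每次 malloc 对应的传输量(示意)
size_mb = 512               # 传输的数据量(MB)
old_rate = 2200             # 旧代码速率(MB/秒)
extra_mallocs = 33832       # 旧代码多出的分配次数

seconds = size_mb / old_rate                 # 约 0.23 秒完成传输
mallocs_per_sec = extra_mallocs / seconds    # 约 14.5 万次/秒
print(f"每秒约 {mallocs_per_sec:,.0f} 次 malloc")

speedup_mb = 600            # 文中提到的速率提升(MB/秒)
bytes_per_malloc = speedup_mb * 1024 * 1024 / mallocs_per_sec
print(f"每省掉一次 malloc 约多传 {bytes_per_malloc:,.0f} 字节")
```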
|
||||
|
||||
### 去掉这些 malloc 难吗?
|
||||
|
||||
一点也不难,非常简单。然而,有趣的是,在这个旧项目中,仍然有这样的改进空间。我有这个想法已经好几年了,我很高兴我终于花点时间来实现。感谢我们的测试套件,我可以有相当大的信心做这个“激烈的”内部变化,而不会引入太可怕的回归问题。由于我们的 API 很好地隐藏了内部,所以这种变化可以完全不改变任何旧的或新的应用程序……
|
||||
|
||||
(是的,我还没在版本中发布该变更,所以这还有风险,我有点后悔我的“这很容易”的声明……)
|
||||
|
||||
### 注意数字
|
||||
|
||||
curl 的 git 仓库从 7.53.1 到今天已经有 213 个提交。即使我一时想不出具体是哪些,但除了单纯的内存分配更改之外,也可能有一个或多个其它提交影响了性能。
|
||||
|
||||
### 还有吗?
|
||||
|
||||
还有其他类似的情况么?
|
||||
|
||||
也许。我们不会做很多性能测量或比较,所以谁知道呢,我们也许会做更多的愚蠢事情,我们可以收手并做得更好。有一个事情是我一直想做,但是从来没有做,就是添加所使用的内存/malloc 和 curl 执行速度的每日“监视” ,以便更好地跟踪我们在这些方面不知不觉的回归问题。
|
||||
|
||||
### 补遗,4/23
|
||||
|
||||
(关于我在 hacker news、Reddit 和其它地方读到的关于这篇文章的评论)
|
||||
|
||||
有些人让我再次运行那个 80GB 的下载,给出时间。我运行了三次新代码和旧代码,其运行“中值”如下:
|
||||
|
||||
旧代码:
|
||||
|
||||
```
|
||||
real 0m36.705s
|
||||
user 0m20.176s
|
||||
sys 0m16.072s
|
||||
```
|
||||
|
||||
新代码:
|
||||
|
||||
```
|
||||
real 0m29.032s
|
||||
user 0m12.196s
|
||||
sys 0m12.820s
|
||||
```
|
||||
|
||||
承载这个 80GB 文件的服务器是标准的 Apache 2.4.25,文件存储在 SSD 上,我的机器的 CPU 是 i7 3770K 3.50GHz 。
|
||||
|
||||
有些人也提到 `alloca()` 作为该补丁之一也是个解决方案,但是 `alloca()` 移植性不够,只能作为一个孤立的解决方案,这意味着如果我们要使用它的话,需要写一堆丑陋的 `#ifdef`。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/
|
||||
|
||||
作者:[DANIEL STENBERG][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://daniel.haxx.se/blog/author/daniel/
|
||||
[1]:https://daniel.haxx.se/blog/author/daniel/
|
||||
[2]:https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/
|
||||
[3]:https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/#comments
|
||||
[4]:https://github.com/curl/curl/commit/cbae73e1dd95946597ea74ccb580c30f78e3fa73
|
||||
[5]:https://github.com/curl/curl/commit/5f1163517e1597339d
|
||||
[6]:https://github.com/curl/curl/commit/5f1163517e1597339d
|
||||
[7]:https://gist.github.com/bagder/dc4a42cb561e791e470362da7ef731d3
|
144
published/20170427 Enabling DNS split authority with OctoDNS.md
Normal file
@ -0,0 +1,144 @@
|
||||
使用 OctoDNS 启用 DNS 分割权威
|
||||
============================================================
|
||||
|
||||
构建一个健壮的系统需要为故障而设计。作为 GitHub 的网站可靠性工程师(SRE),我们一直在寻求通过冗余来帮助缓解问题,今天将讨论最近我们所做的工作,以便支持你通过 DNS 来查找我们的服务器。
|
||||
|
||||
大型 [DNS][4] 提供商在其服务中构建了多级冗余,出现导致中断的问题时,可以采取措施来减轻其影响。最佳选择之一是把你的<ruby>区域<rt>zone</rt></ruby>的权威服务分割到多个服务提供商中。启用<ruby>分割权威<rt>split authority</rt></ruby>很简单,你只需在域名注册商配置两套或多套你区域的[名称服务器][5],然后 DNS 请求将分割到整个列表中。但是,你必须在多个提供商之间对这些区域的记录保持同步,并且,根据具体情况这可能要么设置复杂,要么是完全手动的过程。
|
||||
|
||||
```
|
||||
$ dig NS github.com. @a.gtld-servers.net.
|
||||
|
||||
...
|
||||
|
||||
;; QUESTION SECTION:
|
||||
;github.com. IN NS
|
||||
|
||||
;; AUTHORITY SECTION:
|
||||
github.com. 172800 IN NS ns4.p16.dynect.net.
|
||||
github.com. 172800 IN NS ns-520.awsdns-01.net.
|
||||
github.com. 172800 IN NS ns1.p16.dynect.net.
|
||||
github.com. 172800 IN NS ns3.p16.dynect.net.
|
||||
github.com. 172800 IN NS ns-421.awsdns-52.com.
|
||||
github.com. 172800 IN NS ns-1283.awsdns-32.org.
|
||||
github.com. 172800 IN NS ns2.p16.dynect.net.
|
||||
github.com. 172800 IN NS ns-1707.awsdns-21.co.uk.
|
||||
|
||||
...
|
||||
|
||||
```
|
||||
|
||||
上面的查询是向 [TLD 名称服务器][6] 询问 `github.com.` 的 `NS` 记录。它返回了在我们在域名注册商中配置的值,在本例中,一共有两个 DNS 服务提供商,每个四条记录。如果其中一个提供商发生中断,那么其它的仍有希望可以服务请求。我们在各个地方同步记录,并且可以安全地修改它们,而不必担心数据陈旧或状态不正确。
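要确认多个提供商上的记录确实保持同步,可以用类似下面的脚本做个粗略检查;这里假设安装了 dnspython 库,名称服务器也只是从上面的输出中挑选的示例:

```
# 比较不同提供商的权威服务器对同一记录的应答是否一致(示意,依赖 dnspython)
import dns.resolver

# 假设:分属两家提供商的权威名称服务器
PROVIDERS = {
    "dyn": "ns1.p16.dynect.net.",
    "route53": "ns-520.awsdns-01.net.",
}

answers = {}
for name, server in PROVIDERS.items():
    # 先解析出该名称服务器的 IP,再把查询直接发给它
    ip = next(str(r) for r in dns.resolver.resolve(server, "A"))
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    records = resolver.resolve("github.com", "NS")
    answers[name] = sorted(str(r) for r in records)

if len(set(map(tuple, answers.values()))) == 1:
    print("两家提供商的 NS 记录一致")
else:
    print("记录不一致:", answers)
```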
|
||||
|
||||
完整地配置分割权威的最后一部分是在两个 DNS 服务提供商中将所有名称服务器作为顶层 `NS` 记录添加到区域的根中。
|
||||
|
||||
```
|
||||
$ dig NS github.com. @ns1.p16.dynect.net.
|
||||
|
||||
...
|
||||
|
||||
;; QUESTION SECTION:
|
||||
;github.com. IN NS
|
||||
|
||||
;; ANSWER SECTION:
|
||||
github.com. 551 IN NS ns1.p16.dynect.net.
|
||||
github.com. 551 IN NS ns2.p16.dynect.net.
|
||||
github.com. 551 IN NS ns-520.awsdns-01.net.
|
||||
github.com. 551 IN NS ns3.p16.dynect.net.
|
||||
github.com. 551 IN NS ns-421.awsdns-52.com.
|
||||
github.com. 551 IN NS ns4.p16.dynect.net.
|
||||
github.com. 551 IN NS ns-1283.awsdns-32.org.
|
||||
github.com. 551 IN NS ns-1707.awsdns-21.co.uk.
|
||||
|
||||
```
|
||||
|
||||
在 GitHub,我们有几十个区域和数千条记录,而大多数这些区域并没有关键到需要冗余,因此我们只需要处理一部分。我们希望有能够在多个 DNS 服务提供商中保持这些记录同步的方案,并且更一般地管理内部和外部的所有 DNS 记录。所以今天我们宣布了 [OctoDNS][7]。
|
||||
|
||||

|
||||
|
||||
### 配置
|
||||
|
||||
OctoDNS 能够让我们重新打造我们的 DNS 工作流程。我们的区域和记录存储在 Git 仓库的配置文件中。对它们的变更使用 [GitHub 流][8],并[像个站点一样用分支部署][9]。我们甚至可以做个 “空” 部署来预览哪些记录将在变更中修改。配置文件是 yaml 字典,每个区域一个,它的顶层的键名是记录名称,键值是 ttl、类型和类型特定的数据。例如,当包含在区域文件 `github.com.yaml` 中时,以下配置将创建 `octodns.github.com.` 的 `A` 记录。
|
||||
|
||||
```
|
||||
octodns:
|
||||
type: A
|
||||
values:
|
||||
- 1.2.3.4
|
||||
- 1.2.3.5
|
||||
|
||||
```
|
||||
|
||||
配置的第二部分将记录数据的源映射到 DNS 服务提供商。下面的代码片段告诉 OctoDNS 从 `config` 提供程序加载区域 `github.com`,并将其结果同步到 `dyn` 和 `route53`。
|
||||
|
||||
```
|
||||
zones:
|
||||
github.com.:
|
||||
sources:
|
||||
- config
|
||||
targets:
|
||||
- dyn
|
||||
- route53
|
||||
|
||||
```
|
||||
|
||||
### 同步
|
||||
|
||||
一旦我们的配置完成,OctoDNS 就可以评估当前的状态,并建立一个计划,其中列出为使目标状态与源相匹配所需的一组更改。在下面的例子中,`octodns.github.com` 是一个新的记录,所以所需的操作是在两个提供商中都创建该记录。
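“建立计划”这一步的思路可以用一个极简的 Python 片段来说明(这只是概念示意,并不是 OctoDNS 的实际实现):比较期望状态与提供商当前状态,得出需要创建、更新和删除的记录。

```
# 计算“使目标与源一致”所需变更集合的概念示意(非 OctoDNS 实际代码)
desired = {   # 来自配置文件的期望状态:名称 -> (类型, 值)
    "octodns.github.com.": ("A", ["1.2.3.4", "1.2.3.5"]),
}
current = {}  # 提供商当前还没有这条记录

creates = {k: v for k, v in desired.items() if k not in current}
updates = {k: v for k, v in desired.items()
           if k in current and current[k] != v}
deletes = [k for k in current if k not in desired]

print(f"Creates={len(creates)}, Updates={len(updates)}, Deletes={len(deletes)}")
# 输出:Creates=1, Updates=0, Deletes=0,与下面的计划输出相对应
```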
|
||||
|
||||
```
|
||||
$ octodns-sync --config-file=./config/production.yaml
|
||||
...
|
||||
********************************************************************************
|
||||
* github.com.
|
||||
********************************************************************************
|
||||
* route53 (Route53Provider)
|
||||
* Create <ARecord A 60, octodns.github.com., [u'1.2.3.4', '1.2.3.5']>
|
||||
* Summary: Creates=1, Updates=0, Deletes=0, Existing Records=0
|
||||
* dyn (DynProvider)
|
||||
* Create <ARecord A 60, octodns.github.com., [u'1.2.3.4', '1.2.3.5']>
|
||||
* Summary: Creates=1, Updates=0, Deletes=0, Existing Records=0
|
||||
********************************************************************************
|
||||
...
|
||||
|
||||
```
|
||||
|
||||
默认情况下 `octodns-sync` 处于模拟运行模式,因此不会采取任何行动。一旦我们审阅了变更,并对它们感到满意,我们可以添加 `--doit` 标志并再次运行命令。OctoDNS 将继续它的处理流程,这一次将在 Route53 和 Dynect 中进行必要的更改,以便创建新的记录。
|
||||
|
||||
```
|
||||
$ octodns-sync --config-file=./config/production.yaml --doit
|
||||
...
|
||||
|
||||
```
|
||||
|
||||
此刻,两个 DNS 服务提供商里都有了相同的数据记录,我们可以放心地把 DNS 请求分摊给它们,并确信它们会返回准确的结果。虽然我们可以像上面那样直接运行 OctoDNS 命令,但我们的内部工作流程依赖于部署脚本和 chatops。你可以在 [README 的工作流程部分][10]中找到更多信息。
|
||||
|
||||
### 总结
|
||||
|
||||
我们认为大多数网站都可以从分割权威中受益,并且希望有了 [OctoDNS][11] 之后,实现它的最大障碍已被扫除。即使对分割权威不感兴趣,OctoDNS 仍然值得一看,因为它将[基础设施即代码][12]的好处带给了 DNS。
|
||||
|
||||
想帮助 GitHub SRE 团队解决有趣的问题吗?我们很乐意你加入我们。[在这里申请][13]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://githubengineering.com/enabling-split-authority-dns-with-octodns/
|
||||
|
||||
作者:[Ross McFarland][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/ross
|
||||
[1]:https://githubengineering.com/enabling-split-authority-dns-with-octodns/
|
||||
[2]:https://github.com/ross
|
||||
[3]:https://github.com/ross
|
||||
[4]:https://en.wikipedia.org/wiki/Domain_Name_System
|
||||
[5]:https://en.wikipedia.org/wiki/Name_server
|
||||
[6]:https://en.wikipedia.org/wiki/Top-level_domain
|
||||
[7]:https://github.com/github/octodns/
|
||||
[8]:https://guides.github.com/introduction/flow/
|
||||
[9]:https://githubengineering.com/deploying-branches-to-github-com/
|
||||
[10]:https://github.com/github/octodns#workflow
|
||||
[11]:https://github.com/github/octodns/
|
||||
[12]:https://en.wikipedia.org/wiki/Infrastructure_as_Code
|
||||
[13]:https://boards.greenhouse.io/github/jobs/669805#.WPVqJlPyvUI
|
106
published/20170517 Security Debt is an Engineers Problem.md
Normal file
@ -0,0 +1,106 @@
|
||||
安全债务是工程师的问题
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||

|
||||
*AirBnB 安全团队的 Keziah Plattner*
|
||||
|
||||
在上个月旧金山 Twitter 总部举办的 [WomenWhoCode Connect][5] 活动中参会者了解到,就像组织会形成技术债务一样,如果他们不相应地计划,也会形成一个名为“安全债务”的东西。
|
||||
|
||||
甲骨文首席安全官 [Mary Ann Davidson][6] 与 [WomenWhoCode][8] 的 [Zassmin Montes de Oca][7] 在一个面向开发人员的安全主题谈话中强调,安全性已经成为软件开发过程中每一步的重要组成部分。
|
||||
|
||||
在过去,除银行外,安全性几乎被所有人忽视。但安全性比以往任何时候都更重要,因为现在有这么多接入点。我们已经进入[物联网][9]的时代,窃贼可以通过劫持你的冰箱而了解到你不在家的情况。
|
||||
|
||||
Davidson 负责 Oracle 的保障,“我们确保为建立的一切构建安全性,无论是内部部署产品、云服务,甚至是设备,我们在客户的网站建立有支持小组并报告数据给我们,帮助我们做诊断 - 每件事情都必须对其进行安全保护。”
|
||||
|
||||

|
||||
|
||||
*Plattner 与 #WWCConnect 人群交谈*
|
||||
|
||||
AirBnB 的 [Keziah Plattner][10] 在分组会议中回应了这个看法。她说:“大多数开发者并不认为安全是他们的工作,但这必须改变。”
|
||||
|
||||
她分享了工程师的四项基本安全原则。首先,安全债务是昂贵的。现在有很多人在谈论[技术债务][11],她认为这些谈话应该也包括安全债务。
|
||||
|
||||
Plattner 说:“历史上这个看法是‘我们会稍后考虑安全’”。当公司抓住软件效率和增长的唾手可得的成果时,他们忽视了安全性,但最初的不安全设计可能在未来几年会引发问题。
|
||||
|
||||
她说,很难为现有的脆弱系统增加安全性。即使你知道安全漏洞在哪里,并且有进行更改的时间和资源的预算,重新设计一个安全系统也是耗时和困难的。
|
||||
|
||||
她说,所以这就是关键,从一开始就建立安全性。将安全性视为技术债务的一部分以避免这个问题,并涵盖所有可能性。
|
||||
|
||||
根据 Plattner 说的,最重要的是难以让人们改变行为。没有人会自愿改变,她说,即使你指出新的行为更安全。他们也只不过是点点头而已。
|
||||
|
||||
Davidson 说,工程师们需要开始考虑他们的代码如何被攻击,并从这个角度进行设计。她说她只有两个规则。第一个从不信任任何未验证的数据;规则二参见规则一。
|
||||
|
||||
她笑着说:“人们一直这样做。他们说:‘我的客户端给我发送数据,所以没有问题’。千万不要……”。
|
||||
|
||||
Plattner 说,安全的第二个关键是“永远不信任用户”。
|
||||
|
||||
Davidson 以另外一种说法表示:“我的工作是做专业的偏执狂。”她一直担心有人或许无意中会破坏她的系统。这不是学术性的考虑,最近已经有通过 IoT 设备的拒绝服务攻击。
|
||||
|
||||
### Little Bobby Tables
|
||||
|
||||
Plattner 说:“如果你安全计划的一部分是信任用户做正确的事情,那么无论你有什么其他安全措施,你系统本质上是不安全的。”
|
||||
|
||||
她解释说,重要的是要净化所有的用户输入,如 [XKCD 漫画][12]中的那样,一位妈妈干掉整个学校的数据库——因为她的儿子的中间名是 “DropTable Students”(LCTT 译注:看不懂的[点这里][17])。
|
||||
|
||||

|
||||
|
||||
所以,要净化所有的用户输入,并且务必逐一检查。
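以 Python 的 sqlite3 为例,下面的小示例演示了为什么要用参数化查询而不是字符串拼接来处理用户输入(示例中的表名和输入都是假设的):

```
# 字符串拼接 vs 参数化查询的对比示意(sqlite3,内存数据库)
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

user_input = "Robert'); DROP TABLE students;--"   # 恶意的“中间名”

# 危险:直接拼接用户输入,恶意内容会被当成 SQL 执行
# conn.executescript(f"INSERT INTO students (name) VALUES ('{user_input}')")

# 安全:参数化查询,用户输入只会被当作数据
conn.execute("INSERT INTO students (name) VALUES (?)", (user_input,))
print(conn.execute("SELECT name FROM students").fetchall())
```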
|
||||
|
||||
她展示了一个 JavaScript 开发者在开源软件中使用 eval 的例子。她警告说:“一个好的基本规则是‘从不使用 eval()’”。[eval()][13] 函数会执行 JavaScript 代码。“如果你这样做,你就是在向任意用户开放你的系统。”
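这条规则同样适用于其它语言。下面用 Python 做个类比示意:对不可信输入永远不要用 `eval()`;如果只是要解析字面量,可以改用标准库中更受限的 `ast.literal_eval()`:

```
# eval() 会执行任意代码;ast.literal_eval() 只接受字面量(Python 类比示意)
import ast

untrusted = "[1, 2, 3]"                  # 来自用户的字符串
print(ast.literal_eval(untrusted))       # 安全:只解析字面量,得到 [1, 2, 3]

malicious = "__import__('os').listdir('.')"
try:
    ast.literal_eval(malicious)          # 恶意表达式会被拒绝
except ValueError as err:
    print("已拒绝可疑输入:", err)
# 而 eval(malicious) 则会真的执行这段代码,不要这样做
```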
|
||||
|
||||
Davidson 警告说,她甚至偏执到将文档中的示例代码的安全测试也包括在内。她笑着说:“我们都知道没有人会去复制示例代码”。她强调指出,任何代码都应进行安全检查。
|
||||
|
||||

|
||||
|
||||
*让它容易*
|
||||
|
||||
Plattner 的第三个建议:要使安全容易实施。她建议采取阻力最小的道路。
|
||||
|
||||
对外,让安全措施成为用户需要主动关闭的<ruby>默认选项<rt>opt-out</rt></ruby>,而不是需要主动开启的<ruby>可选项<rt>opt-in</rt></ruby>,或者更好的是使其成为强制性的措施。她说,改变人们的行为是科技领域中最难的问题。一旦用户习惯了以不安全的方式使用你的产品,再让他们改变会变得非常困难。
|
||||
|
||||
在公司内部,她建议制定安全标准,因此这不是个别开发人员需要考虑的内容。例如,将数据加密作为服务,这样工程师可以只需要调用服务就可以加密或解密数据。
|
||||
|
||||
她说,要确保公司营造重视安全的环境,让整个公司都转向良好的安全习惯。
|
||||
|
||||
你的最薄弱的环节决定了你的安全水准,所以重要的是每个人都有良好的个人安全习惯,并具有良好的企业安全环境。
|
||||
|
||||
在 Oracle,他们已经全面覆盖安全的各个环节。Davidson 表示,她厌倦了向没有安全培训的大学毕业的工程师解释安全性,所以她写了 Oracle 的第一个编码标准,现在已经有数百个页面之多以及很多贡献者,还有一些课程是强制性的。它们具有符合安全要求的度量标准。这些课程不仅适用于工程师,也适用于文档作者。她说:“这是一种文化。”
|
||||
|
||||
不谈密码的安全性讨论怎么能算完整?Plattner 说:“每个人都应该使用一个好的密码管理器,在工作中这应该是强制性的,还有双重身份验证。”
|
||||
|
||||
她说,基本的密码原则应该是每个工程师日常生活的一部分。密码中最重要的是它们的长度和熵(使按键的集合尽可能地随机)。强健的密码熵检查器对此非常有用。她建议使用 Dropbox 开源的熵检查器 [zxcvbn][14]。
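下面是一个使用 zxcvbn 的 Python 移植版检查密码强度的小示意(假设已通过 `pip install zxcvbn` 安装,接口以该移植版为准):

```
# 用 zxcvbn 的 Python 移植版估算密码强度(示意)
from zxcvbn import zxcvbn

result = zxcvbn("correct horse battery staple")
print("评分(0-4):", result["score"])
print("改进建议:", result["feedback"]["suggestions"])
```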
|
||||
|
||||
Plattner 说,另一个诀窍是在校验密码时使用一些故意减慢速度的算法,如 [bcrypt][15]。这点慢速并不会困扰大多数合法用户,但会让那些试图暴力尝试密码的黑客难受。
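下面是一个使用 Python bcrypt 库存储和校验密码散列的小示意(假设已安装 `bcrypt` 包):

```
# 用 bcrypt 存储和校验密码散列(示意)
import bcrypt

password = "hunter2".encode("utf-8")

# cost 因子越高越慢,合法用户几乎无感,但会显著拖慢暴力破解
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

print(bcrypt.checkpw(password, hashed))          # True:密码正确
print(bcrypt.checkpw(b"wrong-guess", hashed))    # False:密码错误
```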
|
||||
|
||||
Davidson 说:“所有这些都为那些想要进入技术安全领域的人提供了工作安全保障,我们在各种地方放了各种代码,这就产生了系统性风险。只要我们继续在技术领域做有趣的事情,我不认为任何人不想要在安全中工作。”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://thenewstack.io/security-engineers-problem/
|
||||
|
||||
作者:[TC Currie][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://thenewstack.io/author/tc/
|
||||
[1]:http://twitter.com/share?url=https://thenewstack.io/security-engineers-problem/&text=Security+Debt+is+an+Engineer%E2%80%99s+Problem+
|
||||
[2]:http://www.facebook.com/sharer.php?u=https://thenewstack.io/security-engineers-problem/
|
||||
[3]:http://www.linkedin.com/shareArticle?mini=true&url=https://thenewstack.io/security-engineers-problem/
|
||||
[4]:https://thenewstack.io/security-engineers-problem/#disqus_thread
|
||||
[5]:http://connect2017.womenwhocode.com/
|
||||
[6]:https://www.linkedin.com/in/mary-ann-davidson-235ba/
|
||||
[7]:https://www.linkedin.com/in/zassmin/
|
||||
[8]:https://www.womenwhocode.com/
|
||||
[9]:https://www.thenewstack.io/tag/Internet-of-Things
|
||||
[10]:https://twitter.com/ittskeziah
|
||||
[11]:https://martinfowler.com/bliki/TechnicalDebt.html
|
||||
[12]:https://xkcd.com/327/
|
||||
[13]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval
|
||||
[14]:https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/
|
||||
[15]:https://en.wikipedia.org/wiki/Bcrypt
|
||||
[16]:https://thenewstack.io/author/tc/
|
||||
[17]:https://www.explainxkcd.com/wiki/index.php/Little_Bobby_Tables
|
79
published/20170531 DNS Infrastructure at GitHub.md
Normal file
@ -0,0 +1,79 @@
|
||||
GitHub 的 DNS 基础设施
|
||||
============================================================
|
||||
|
||||
在 GitHub,我们最近从头改进了 DNS。这包括了我们[如何与外部 DNS 提供商交互][4]以及我们如何在内部向我们的主机提供记录。为此,我们必须设计和构建一个新的 DNS 基础设施,它可以随着 GitHub 的增长扩展并跨越多个数据中心。
|
||||
|
||||
以前,GitHub 的 DNS 基础设施相当简单直接。它包括每台服务器上本地的、只具备转发功能的 DNS 缓存服务器,以及一对被所有这些主机使用的缓存服务器和权威服务器主机。这些主机在内部网络以及公共互联网上都可用。我们在缓存守护程序中配置了<ruby>区域<rt>zone</rt></ruby><ruby>存根<rt>stub</rt></ruby>,以在本地进行查询,而不是在互联网上进行递归。我们还在我们的 DNS 提供商处设置了 NS 记录,它们将特定的内部<ruby>域<rt>domain</rt></ruby>指向这对主机的公共 IP,以便我们网络外部的查询。
|
||||
|
||||
这个配置使用了很多年,但它并非没有缺点。许多程序对于解析 DNS 查询非常敏感,我们遇到的任何性能或可用性问题在最好的情况下也会导致服务排队和性能降级,而最坏情况下客户会遭遇服务中断。配置和代码的更改可能会导致查询率发生大幅度的意外变化。因此超出这两台主机的扩展成为了一个问题。由于这些主机的网络配置,如果我们只是继续添加 IP 和主机的话存在一些本身的问题。在试图解决和补救这些问题的同时,由于缺乏测量指标和可见性,老旧的系统难以识别问题的原因。在许多情况下,我们使用 `tcpdump` 来识别有问题的流量和查询。另一个问题是在公共 DNS 服务器上运行,我们处于泄露内部网络信息的风险之下。因此,我们决定建立更好的东西,并开始确定我们对新系统的要求。
|
||||
|
||||
我们着手设计一个新的 DNS 基础设施,以改善上述包括扩展和可见性在内的运维问题,并引入了一些额外的需求。我们希望通过外部 DNS 提供商继续运行我们的公共 DNS 域,因此我们构建的系统需要与供应商无关。此外,我们希望该系统能够服务于我们的内部和外部域,这意味着内部域仅在我们的内部网络上可用,除非另有特别配置,而外部域也不用离开我们的内部网络就可解析。我们希望新的 DNS 架构不但可以[基于部署的工作流进行更改][5],并可以通过我们的仓库和配置系统使用 API 自动更改 DNS 记录。新系统不能有任何外部依赖,太依赖于 DNS 功能将会陷入级联故障,这包括连接到其他数据中心和其中可能有的 DNS 服务。我们的旧系统将缓存服务器和权威服务器在同一台主机上混合使用。我们想转到具有独立角色的分层设计。最后,我们希望系统能够支持多数据中心环境,无论是 EC2 还是裸机。
|
||||
|
||||
### 实现
|
||||
|
||||

|
||||
|
||||
为了构建这个系统,我们确定了三类主机:<ruby>缓存主机<rt>cache</rt></ruby>、<ruby>边缘主机<rt>edge</rt></ruby>和<ruby>权威主机<rt>authority</rt></ruby>。缓存主机作为<ruby>递归解析器<rt>recursive resolver</rt></ruby>和 DNS “路由器” 缓存来自边缘层的响应。边缘层运行 DNS 权威守护程序,用于响应缓存层对 DNS <ruby>区域<rt>zone</rt></ruby>的请求,其被配置为来自权威层的<ruby>区域传输<rt>zone transfer</rt></ruby>。权威层作为隐藏的 DNS <ruby>主服务器<rt>master</rt></ruby>,作为 DNS 数据的规范来源,为来自边缘主机的<ruby>区域传输<rt>zone transfer</rt></ruby>提供服务,并提供用于创建、修改或删除记录的 HTTP API。
|
||||
|
||||
在我们的新配置中,缓存主机存在于每个数据中心中,这意味着应用主机不需要穿过数据中心边界来检索记录。缓存主机被配置为将<ruby>区域<rt>zone</rt></ruby>映射到其<ruby>地域<rt>region</rt></ruby>内的边缘主机,以便将我们的内部<ruby>区域<rt>zone</rt></ruby>路由到我们自己的主机。未明确配置的任何<ruby>区域<rt>zone</rt></ruby>将通过互联网递归解析。
|
||||
|
||||
边缘主机是地域性的主机,存在我们的网络边缘 PoP(<ruby>存在点<rt>Point of Presence</rt></ruby>)内。我们的 PoP 有一个或多个依赖于它们进行外部连接的数据中心,没有 PoP 数据中心将无法访问互联网,互联网也无法访问它们。边缘主机对所有的权威主机执行<ruby>区域传输<rt>zone transfer</rt></ruby>,无论它们存在什么<ruby>地域<rt>region</rt></ruby>或<ruby>位置<rt>location</rt></ruby>,并将这些区域存在本地的磁盘上。
|
||||
|
||||
我们的权威主机也是地域性的主机,只包含适用于其所在<ruby>地域<rt>region</rt></ruby>的<ruby>区域<rt>zone</rt></ruby>。我们的仓库和配置系统决定一个<ruby>区域<rt>zone</rt></ruby>存放在哪个<ruby>地域性权威主机<rt>regional authority</rt></ruby>,并通过 HTTP API 服务来创建和删除记录。 OctoDNS 将区域映射到地域性权威主机,并使用相同的 API 创建静态记录,以及确保动态源处于同步状态。对于外部域 (如 github.com),我们有另外一个单独的权威主机,以允许我们可以在连接中断期间查询我们的外部域。所有记录都存储在 MySQL 中。
|
||||
|
||||
### 可运维性
|
||||
|
||||

|
||||
|
||||
迁移到更现代的 DNS 基础设施的巨大好处是可观察性。我们的旧 DNS 系统几乎没有指标,只有有限的日志。决定使用哪些 DNS 服务器的一个重要因素是它们所产生的指标的广度和深度。我们最终用 [Unbound][6] 作为缓存主机,[NSD][7] 作为边缘主机,[PowerDNS][8] 作为权威主机,所有这些都已在比 GitHub 大得多的 DNS 基础架构中得到了证实。
|
||||
|
||||
当在我们的裸机数据中心运行时,缓存通过私有的<ruby>[任播][9]<rt>anycast</rt></ruby> IP 访问,从而使之可以到达最近的可用缓存主机。缓存主机已经以机架感知的方式部署,在它们之间提供了一定程度的平衡负载,并且与一些电源和网络故障模式相隔离。当缓存主机出现故障时,通常将用其进行 DNS 查询的服务器现在将自动路由到下一个最接近的缓存主机,以保持低延迟并提供对某些故障模式的容错。任播允许我们扩展单个 IP 地址后面的缓存数量,这与先前的配置不同,使得我们能够按 DNS 需求量运行尽可能多的缓存主机。
|
||||
|
||||
无论地域或位置如何,边缘主机使用权威层进行区域传输。我们的<ruby>区域<rt>zone</rt></ruby>并没有大到在每个<ruby>地域<rt>region</rt></ruby>保留所有<ruby>区域<rt>zone</rt></ruby>的副本成为问题。(LCTT 译注:此处原文“Our zones are not large enough that keeping a copy of all of them in every region is a problem.”,根据上下文理解而翻译。)这意味着对于每个区域,即使某个地域处于脱机状态,或者上游服务提供商存在连接问题,所有缓存服务器都可以访问具备所有区域的本地副本的本地边缘服务器。这种变化在面对连接问题方面已被证明是相当有弹性的,并且在不久前本来会导致客户面临停止服务的故障期间帮助保持 GitHub 可用。
|
||||
|
||||
那些区域传输包括了内部和外部域从它们相应的权威服务器进行的传输。正如你可能会猜想像 github.com 这样的区域是外部的,像 github.net 这样的区域通常是内部的。它们之间的区别仅在于我们使用的类型和存储在其中的数据。了解哪些区域是内部和外部的,为我们在配置中提供了一些灵活性。
|
||||
|
||||
```
|
||||
$ dig +short github.com
|
||||
192.30.253.112
|
||||
192.30.253.113
|
||||
```
|
||||
|
||||
公共<ruby>区域<rt>zone</rt></ruby>被[同步][10]到外部 DNS 提供商,并且是 GitHub 用户每天使用的 DNS 记录。另外,公共区域在我们的网络中是完全可解析的,而不需要与我们的外部提供商进行通信。这意味着需要查询 `api.github.com` 的任何服务都可以这样做,而无需依赖外部网络连接。我们还使用了 Unbound 的 `stub-first` 配置选项,它给了我们第二次查询的机会,如果我们的内部 DNS 服务由于某些原因在外部查询失败,则可以进行第二次查找。
|
||||
|
||||
```
|
||||
$ dig +short time.github.net
|
||||
10.127.6.10
|
||||
```
|
||||
|
||||
大部分的 `github.net` 区域是完全私有的,无法从互联网访问,它只包含 [RFC 1918][11] 中规定的 IP 地址。每个地域和站点都划分了私有区域。每个地域和/或站点都具有适用于该位置的一组子区域,子区域用于管理网络、服务发现、特定的服务记录,并且还包括在我们仓库中的配置主机。私有区域还包括 PTR 反向查找区域。
|
||||
|
||||
### 总结
|
||||
|
||||
用一个新系统替换可以为数百万客户提供服务的旧系统并不容易。使用实用的、基于需求的方法来设计和实施我们的新 DNS 系统,才能打造出一个能够迅速有效地运行、并有望与 GitHub 一起成长的 DNS 基础设施。
|
||||
|
||||
想帮助 GitHub SRE 团队解决有趣的问题吗?我们很乐意你加入我们。[在这申请][12]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://githubengineering.com/dns-infrastructure-at-github/
|
||||
|
||||
作者:[Joe Williams][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/joewilliams
|
||||
[1]:https://githubengineering.com/dns-infrastructure-at-github/
|
||||
[2]:https://github.com/joewilliams
|
||||
[3]:https://github.com/joewilliams
|
||||
[4]:https://githubengineering.com/enabling-split-authority-dns-with-octodns/
|
||||
[5]:https://githubengineering.com/enabling-split-authority-dns-with-octodns/
|
||||
[6]:https://unbound.net/
|
||||
[7]:https://www.nlnetlabs.nl/projects/nsd/
|
||||
[8]:https://powerdns.com/
|
||||
[9]:https://en.wikipedia.org/wiki/Anycast
|
||||
[10]:https://githubengineering.com/enabling-split-authority-dns-with-octodns/
|
||||
[11]:http://www.faqs.org/rfcs/rfc1918.html
|
||||
[12]:https://boards.greenhouse.io/github/jobs/669805#.WPVqJlPyvUI
|
216
published/20170617 Top 8 IDEs for Raspberry Pi.md
Normal file
@ -0,0 +1,216 @@
|
||||
8 款适合树莓派使用的 IDE
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
树莓派是一种微型的单板电脑(SBC),已经在学校的计算机科学教学中掀起了一场革命,但同样,它也给软件开发者带来了福音。目前,树莓派获得的知名度远远超出了它原本的目标市场,而且正在应用于机器人项目中。
|
||||
|
||||
树莓派是一个可以运行 Linux 操作系统的微型开发板计算机,由英国树莓派基金会开发,用来在英国和发展中国家促进学校的基础计算机科学教育。树莓派拥有 USB 接口,能够支持多种即插即用外围设备,比如键盘、鼠标、打印机等。它包含了一个 HDMI(高清多媒体接口)端口,可以为用户提供视频输出。信用卡大小的尺寸使得树莓派非常便携且价格便宜。仅需一个 5V 的 micro-USB 电源供电,类似于给手机用的充电器一样。
|
||||
|
||||
多年来,树莓派基金会已经推出了几个不同版本的树莓派产品。 第一个版本是树莓派 1B 型,随后是一个相对简单便宜的 A 型。在 2014 年,基金会推出了一个增强版本 —— 树莓派 1B+。在 2015 年,基金会推出了全新设计的版本,售价为 5 美元,命名为树莓派 Zero。
|
||||
|
||||
在 2016 年 2 月,树莓派 3B 型发布,这也是现在可用的主要型号。在 2017 年,基金会发布了树莓派 Zero 的新型号树莓派 Zero W (W = wireless 无线)。
|
||||
|
||||
在不久的将来,一个提高了技术规格的型号将会到来,为嵌入式系统发烧友、研究员、爱好者和工程师们用其开发多种功能的实时应用提供一个稳健的平台。
|
||||
|
||||
![][3]
|
||||
|
||||
*图 1 :树莓派*
|
||||
|
||||
### 树莓派是一个高效的编程设备
|
||||
|
||||
在给树莓派供电后,启动运行 LXDE 窗口管理器,用户会获得一个完整的基于 Debian 的 Linux 操作系统,即 Raspbian。Raspbian 操作系统为用户提供了众多自由开源的程序,涵盖了程序设计、游戏、应用以及教育方面。
|
||||
|
||||
树莓派的官方编程语言是 Python,并已预装在了 Raspbian 操作系统上。结合树莓派和 Python 的集成开发环境 IDLE3,程序员能够开发各种基于 Python 的程序。
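例如,在 IDLE3 中就可以直接编写并运行类似下面这样的 Python 小程序来让 LED 闪烁(假设 LED 接在 BCM 编号为 18 的引脚上,并且系统中装有 RPi.GPIO 库):

```
# 在树莓派上用 Python 让 LED 闪烁的小例子(引脚编号为假设值)
import time
import RPi.GPIO as GPIO

LED_PIN = 18                      # 假设:LED 接在 BCM 18 号引脚

GPIO.setmode(GPIO.BCM)            # 使用 BCM 引脚编号方式
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    for _ in range(5):            # 闪烁 5 次
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                # 释放 GPIO 资源
```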
|
||||
|
||||
除了 Python,树莓派还支持多种其它语言,并且可以使用一些自由开源的 IDE(集成开发环境),让程序员、开发者和应用工程师能在树莓派上开发程序和应用。
|
||||
|
||||
### 树莓派上的最佳 IDE
|
||||
|
||||
作为一名程序员和开发者,你首先需要的就是一个 IDE,它是一个集成了开发者和程序员编写、编译和测试软件所需的基本工具的综合软件套件。IDE 包含了代码编辑器、编译或解释程序和调试器,并允许开发者通过一个图形用户界面(GUI)来访问。IDE 的主要目的之一是提供一个整合的单元来统一功能设置,减少组合多个开发工具所需的配置工作。
|
||||
|
||||
IDE 的用户界面与文字处理程序相似,在工具栏提供颜色编码、源代码格式化、错误诊断、报告以及智能代码补全工具。IDE 被设计用来整合第三方版本控制库如 GitHub 或 Apache Subversion 。一些 IDE 专注于特定的编程语言,支持一个匹配该编程语言的功能集,当然也有一些是支持多种语言的。
|
||||
|
||||
树莓派上拥有丰富的 IDE ,为程序员提供友好界面来开发源代码、应用程序以及系统程序。
|
||||
|
||||
就让我们来探索最适合树莓派的 IDE 吧。
|
||||
|
||||
#### BlueJ
|
||||
|
||||
![][4]
|
||||
|
||||
*图 2 :BlueJ 的 GUI 界面*
|
||||
|
||||
BlueJ 是一款致力于 Java 编程语言的 IDE ,主要是为教育目的而开发的。它也支持小型的软件开发项目。BlueJ 由澳大利亚的莫纳什大学的 Michael Kolling 和 John Rosenburg 在 2000 年作为 Blue 系统的继任者而开发的,后来在 2009 年 3 月成为自由开源软件。
|
||||
|
||||
BlueJ 提供了一种学习面向对象编程概念的高效方式,其图形用户界面以类似 UML 图的方式展示应用程序的类结构。类、对象和函数调用这些基于 OOP 的概念,都可以通过交互式的设计来表示。
|
||||
|
||||
**特性:**
|
||||
|
||||
* _简单的交互界面:_ 与 NetBeans 或 Eclipse 这样的专业 IDE 相比,BlueJ 的用户界面更加简单易学,使开发者可以专注于编程而不是环境本身。
|
||||
* _便携:_ BlueJ 支持多种平台如 Windows、Linux 以及 Mac OS X , 可以免安装直接运行。
|
||||
* _新的创新:_ BlueJ IDE 在对象工作台、代码块和范围着色方面有着大量的创新,使新手体验到开发的乐趣。
|
||||
* _强大的技术支持:_ BlueJ 拥有一个核心功能团队来解答疑问,并且在 24 小时内为开发者的各种问题提供解决方案。
|
||||
|
||||
**最新版本:** 4.0.1
|
||||
|
||||
#### Geany IDE
|
||||
|
||||
![][5]
|
||||
|
||||
*图 3 : Geany IDE 的 GUI 界面*
|
||||
|
||||
Geany IDE 使用了 Scintilla 和 GTK+ 的集成开发环境支持,被认为是一个非常轻量级的基于 GUI 的文本编辑器。 Geany 的独特之处在于它被设计为独立于特定的桌面环境,并且仅需要较少数量的依赖包。只需要 GTK2 运行库就可以运行。Geany IDE 支持多种编程语言如 C、C++、C#、Java、HTML、PHP、Python、Perl、Ruby、Erlang 和 LaTeX 。
|
||||
|
||||
**特性:**
|
||||
|
||||
* 代码自动补全和简单的代码导航。
|
||||
* 高效的语法高亮和代码折叠。
|
||||
* 支持嵌入式终端仿真器,拥有高度可扩展性,可以免费下载大量功能丰富的插件。
|
||||
* 简单的项目管理并支持多种文件类型,包括 C、Java、PHP、HTML、Python、Perl 等。
|
||||
* 高度定制的界面,可以添加或删除设置、栏及窗口。
|
||||
|
||||
**最新版本:** 1.30.1
|
||||
|
||||
#### Adafruit WebIDE
|
||||
|
||||
![][6]
|
||||
|
||||
*图 4 :Adafruit WebIDE 的 GUI 界面*
|
||||
|
||||
Adafruit WebIDE 为树莓派用户提供一个基于 Web 的界面来执行编程功能,并且允许开发者编译多种语言的源代码如 Python、Ruby、JavaScript 等。
|
||||
|
||||
Adafruit IDE 允许开发者把代码放在 GIT 仓库,这样就可以通过 GitHub 在任何地方进行访问。
|
||||
|
||||
**特性:**
|
||||
|
||||
* 可以通过 Web 浏览器的 8080 端口或 80 端口进行访问。
|
||||
* 支持源代码的简单编译和运行。
|
||||
* 配备一个调试器和可视器来进行正确追踪,代码导航以及测试源代码。
|
||||
|
||||
#### AlgoIDE
|
||||
|
||||
![][7]
|
||||
|
||||
*图 5 :AlgoIDE 的 GUI 界面*
|
||||
|
||||
AlgoIDE 结合了一种脚本语言和一个 IDE 环境,两者被设计为配合使用。AlgoIDE 包含了一个强大的调试器和实时作用域管理器,并能一步一步地执行代码。它面向各个年龄段的人群而设计,用来编写程序以及对算法进行深入研究。
|
||||
|
||||
AlgoIDE 支持多种类型的语言如 C、C++、Python、Java、Smalltalk、Objective C、ActionScript 等。
|
||||
|
||||
**特性:**
|
||||
|
||||
* 代码自动缩进和补全。
|
||||
* 高效的语法高亮和错误管理。
|
||||
* 包含了一个调试器、范围管理器和动态帮助系统。
|
||||
* 支持 GUI 和传统的 Logo 程序语言 Turtle 来进行源代码开发。
|
||||
|
||||
**最新版本:** 2016-12-08 (上次更新时间)
|
||||
|
||||
#### Ninja IDE
|
||||
|
||||
![][8]
|
||||
|
||||
*图 6 :Ninja IDE 的 GUI 界面*
|
||||
|
||||
Ninja IDE(“Ninja-IDE Is Not Just Another IDE”的缩写),由 Diego Sarmentero、Horacio Duran、Gabriel Acosta、Pedro Mourelle 和 Jose Rostango 设计,使用纯 Python 编写,并且支持在 Linux、Mac OS X 和 Windows 等多种平台上运行。Ninja IDE 被认为是一个跨平台的 IDE 软件,尤其适合用来开发基于 Python 的应用程序。
|
||||
|
||||
Ninja IDE 是非常轻量级的,并能执行多种功能如文件处理、代码定位、跳转行、标签、代码自动缩进和编辑器缩放。除了 Python ,这款 IDE 也支持几种其他语言。
|
||||
|
||||
**特性:**
|
||||
|
||||
* _高效的代码编辑器:_ Ninja-IDE 被认为是最有效的代码编辑器,因为它能执行多种功能如代码补全和缩进,以及助手功能。
|
||||
* _错误和 PEP8 查找器:_ 高亮显示文件中的静态和 PEP8 错误。
|
||||
* _代码定位器:_ 使用此功能,可以快速、直接地访问文件。用户可以使用快捷键 “CTRL+K” 进行输入,IDE 会找到特定的文本。
|
||||
* 独特的项目管理功能以及大量的插件使得 Ninja-IDE 具有高度的可扩展性。
|
||||
|
||||
**最新版本:** 2.3
|
||||
|
||||
#### Lazarus IDE
|
||||
|
||||
![][9]
|
||||
|
||||
*图 7 :Lazarus IDE 的 GUI 界面*
|
||||
|
||||
Lazarus IDE 是由 Cliff Baeseman、Shane Miller 和 Michael A. Hess 于 1999 年 2 月 开发。它被视为是一款用于应用程序快速开发的基于 GUI 的跨平台 IDE ,使用的是 Free Pascal 编译器。Lazarus IDE 继承了 Free Pascal 的三个主要特性 —— 编译速度、执行速度和交叉编译。可以在多种操作系统上对应用程序进行交叉编译,如 Windows 、Linux 、Mac OS X 等。
|
||||
|
||||
这款 IDE 使用了 Lazarus 组件库。这些组件库通过一个单一、统一的接口为开发者提供了多种配套设施,而其底层则有针对各个平台的特定实现。它遵循“一次编写,随处编译”的原则。
|
||||
|
||||
**特性:**
|
||||
|
||||
* 强大而快速的处理各种类型的源代码,同时支持性能测试。
|
||||
* 易用的 GUI ,支持组件拖拽功能。可以通过 Lazarus 包文件为 IDE 添加附加组件。
|
||||
* 使用新功能加强的 Free Pascal ,可以用来开发 Android 应用。
|
||||
* 高可扩展性、开放源代码并支持多种框架来编译其他语言。
|
||||
|
||||
**最新版本:** 1.6.4
|
||||
|
||||
#### Codeblock IDE
|
||||
|
||||
![][10]
|
||||
|
||||
*图 8 : Codeblock IDE 界面*
|
||||
|
||||
Codeblock IDE 是用 C++ 编写的,使用了 wxWidgets 作为 GUI 库,发布于 2005 年。它是一款自由开源、跨平台的 IDE ,支持多种类型的编译器如 GCC 、Clang 和 Visual C++ 。
|
||||
|
||||
Codeblock IDE 高度智能并且可以支持多种功能,如语法高亮、代码折叠、代码补全和缩进,同时也拥有一些扩展插件来进行定制。它可以在 Windows 、Mac OS X 和 Linux 操作系统上运行。
|
||||
|
||||
**特性:**
|
||||
|
||||
* 支持多种类型的编译器如 GCC 、Visual C++ 、Borland C++ 、Watcom 、Intel C++ 等。主要针对 C++ 而设计,不过现在也支持其他的一些语言。
|
||||
* 智能的调试器,支持本地函数符号和参数显示、用户自定义监视、调用堆栈、自定义内存转储、线程切换,以及通过 GNU 调试器接口调试程序。
|
||||
* 支持多种功能用来从 Dev-C++ 、Visual C++ 等平台迁移代码。
|
||||
* 使用自定义系统和 XML 扩展文件来存储信息。
|
||||
|
||||
**最新版本:** 16.01
|
||||
|
||||
#### Greenfoot IDE
|
||||
|
||||
![][11]
|
||||
|
||||
*图 9 : Greenfoot IDE 界面*
|
||||
|
||||
Greenfoot IDE 是由肯特大学的 Michael Kolling 设计。它是一款基于 Java 的跨平台 IDE ,针对中学和大学教育目的而设计。Greenfoot IDE 的功能有项目管理、代码自动补全、语法高亮并提供一个简易的 GUI 界面。
|
||||
|
||||
Greenfoot IDE 编程包括两个主类的子类 —— World 和 Actor 。 World 表示主要执行发生的类,Actors 是已经存在且活动于 World 中的对象。
|
||||
|
||||
**特性:**
|
||||
|
||||
* 简单易用的 GUI ,比 BlueJ 和其他的 IDE 交互性更强。
|
||||
* 易于新手和初学者上手。
|
||||
* 在执行 Java 代码方面非常强大。
|
||||
* 支持 GNOME/KDE/X11 图形环境。
|
||||
* 其他功能包括项目管理、自动补全、语法高亮以及错误自动校正。
|
||||
|
||||
**最新版本:** 3.1.0
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Anand Nayyar
|
||||
|
||||
作者是位于印度旁遮普邦的贾朗达尔学院计算机应用与 IT 系的教授助理。他热爱开源技术、嵌入式系统、云计算、无线传感器网络以及模拟器。可以在 anand_nayyar@yahoo.co.in 联系他。
|
||||
|
||||
--------------------
|
||||
|
||||
via: http://opensourceforu.com/2017/06/top-ides-raspberry-pi/
|
||||
|
||||
作者:[Anand Nayyar][a]
|
||||
译者:[softpaopao](https://github.com/softpaopao)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://opensourceforu.com/author/anand-nayyar/
|
||||
[1]:http://opensourceforu.com/2017/06/top-ides-raspberry-pi/#disqus_thread
|
||||
[2]:http://opensourceforu.com/author/anand-nayyar/
|
||||
[3]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-1-8.jpg
|
||||
[4]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-2-6.jpg
|
||||
[5]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-3-3.jpg
|
||||
[6]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-4-3.jpg
|
||||
[7]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-5-2.jpg
|
||||
[8]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-6-1.jpg
|
||||
[9]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-7.jpg
|
||||
[10]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-8.jpg
|
||||
[11]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-9.jpg
|
@ -1,45 +1,32 @@
|
||||
开发一个 Linux 调试器(七):源码层断点
|
||||
开发一个 Linux 调试器(七):源码级断点
|
||||
============================================================
|
||||
|
||||
在内存地址上设置断点是可以的,但它没有提供最方便用户的工具。我们希望能够在源代码行和函数入口地址上设置断点,以便我们可以在与代码相同的抽象级中别进行调试。
|
||||
在内存地址上设置断点虽然不错,但它并没有提供最方便用户的工具。我们希望能够在源代码行和函数入口地址上设置断点,以便我们可以在与代码相同的抽象级别中进行调试。
|
||||
|
||||
这篇文章将会添加源码层断点到我们的调试器中。通过所有我们已经支持的,这比起最初听起来容易得多。我们还将添加一个命令来获取符号的类型和地址,这对于定位代码或数据以及理解链接概念非常有用。
|
||||
|
||||
* * *
|
||||
这篇文章将会添加源码级断点到我们的调试器中。通过所有我们已经支持的功能,这要比起最初听起来容易得多。我们还将添加一个命令来获取符号的类型和地址,这对于定位代码或数据以及理解链接概念非常有用。
|
||||
|
||||
### 系列索引
|
||||
|
||||
随着后面文章的发布,这些链接会逐渐生效。
|
||||
|
||||
1. [准备环境][1]
|
||||
|
||||
2. [断点][2]
|
||||
|
||||
3. [寄存器和内存][3]
|
||||
|
||||
4. [Elves 和 dwarves][4]
|
||||
|
||||
5. [源码和信号][5]
|
||||
|
||||
6. [源码层逐步执行][6]
|
||||
|
||||
7. [源码层断点][7]
|
||||
|
||||
6. [源码级逐步执行][6]
|
||||
7. [源码级断点][7]
|
||||
8. [调用栈][8]
|
||||
|
||||
9. 读取变量
|
||||
|
||||
10. 之后步骤
|
||||
|
||||
* * *
|
||||
|
||||
### 断点
|
||||
|
||||
### DWARF
|
||||
#### DWARF
|
||||
|
||||
[Elves 和 dwarves][9] 这篇文章,描述了 DWARF 调试信息是如何工作的,以及如何用它来将机器码映射到高层源码中。回想一下,DWARF 包含函数的地址范围和一个允许你在抽象层之间转换代码位置的行表。我们将使用这些功能来实现我们的断点。
|
||||
[Elves 和 dwarves][4] 这篇文章,描述了 DWARF 调试信息是如何工作的,以及如何用它来将机器码映射到高层源码中。回想一下,DWARF 包含了函数的地址范围和一个允许你在抽象层之间转换代码位置的行表。我们将使用这些功能来实现我们的断点。
|
||||
|
||||
### 函数入口
|
||||
#### 函数入口
|
||||
|
||||
如果你考虑重载、成员函数等等,那么在函数名上设置断点可能有点复杂,但是我们将遍历所有的编译单元,并搜索与我们正在寻找的名称匹配的函数。DWARF 信息如下所示:
|
||||
|
||||
@ -85,13 +72,13 @@ void debugger::set_breakpoint_at_function(const std::string& name) {
|
||||
}
|
||||
```
|
||||
|
||||
这代码看起来有点奇怪的唯一一点是 `++entry`。 问题是函数的 `DW_AT_low_pc` 不指向该函数的用户代码的起始地址,它指向 prologue 的开始。编译器通常会输出一个函数的 prologue 和 epilogue,它们用于执行保存和恢复堆栈、操作堆栈指针等。这对我们来说不是很有用,所以我们将入口行加一来获取用户代码的第一行而不是 prologue。DWARF 行表实际上具有一些功能,用于将入口标记为函数 prologue 之后的第一行,但并不是所有编译器都输出该函数,因此我采用了原始的方法。
|
||||
这代码看起来有点奇怪的唯一一点是 `++entry`。 问题是函数的 `DW_AT_low_pc` 不指向该函数的用户代码的起始地址,它指向 prologue 的开始。编译器通常会输出一个函数的 prologue 和 epilogue,它们用于执行保存和恢复堆栈、操作堆栈指针等。这对我们来说不是很有用,所以我们将入口行加一来获取用户代码的第一行而不是 prologue。DWARF 行表实际上具有一些功能,用于将入口标记为函数 prologue 之后的第一行,但并不是所有编译器都输出它,因此我采用了原始的方法。
|
||||
|
||||
### 源码行
|
||||
#### 源码行
|
||||
|
||||
要在高层源码行上设置一个断点,我们要将这个行号转换成 DWARF 中的一个地址。我们将遍历编译单元,寻找一个名称与给定文件匹配的编译单元,然后查找与给定行对应的入口。
|
||||
|
||||
DWARF 看山去有点像这样:
|
||||
DWARF 看上去有点像这样:
|
||||
|
||||
```
|
||||
.debug_line: line number info for a single cu
|
||||
@ -119,7 +106,7 @@ IS=val ISA number, DI=val discriminator value
|
||||
|
||||
```
|
||||
|
||||
所以如果我们想要在 `ab.cpp` 的第五行设置一个断点,我们查找与行 (`0x004004e3`) 相关的入口并设置一个断点。
|
||||
所以如果我们想要在 `ab.cpp` 的第五行设置一个断点,我们将查找与行 (`0x004004e3`) 相关的入口并设置一个断点。
|
||||
|
||||
```
|
||||
void debugger::set_breakpoint_at_source_line(const std::string& file, unsigned line) {
|
||||
@ -138,13 +125,11 @@ void debugger::set_breakpoint_at_source_line(const std::string& file, unsigned l
|
||||
}
|
||||
```
|
||||
|
||||
我这里的 `is_suffix` hack,这样你可以为 `a/b/c.cpp` 输入 `c.cpp`。当然你应该使用大小写敏感路径处理库或者其他东西。我很懒。`entry.is_stmt` 是检查行表入口是否被标记为一个语句的开头,这是由编译器根据它认为是断点的最佳目标的地址设置的。
|
||||
|
||||
* * *
|
||||
我这里做了 `is_suffix` hack,这样你可以输入 `c.cpp` 代表 `a/b/c.cpp` 。当然你实际上应该使用大小写敏感路径处理库或者其它东西,但是我比较懒。`entry.is_stmt` 是检查行表入口是否被标记为一个语句的开头,这是由编译器根据它认为是断点的最佳目标的地址设置的。
|
||||
|
||||
### 符号查找
|
||||
|
||||
当我们在对象文件层时,符号是王者。函数用符号命名,全局变量用符号命名,得到一个符号,我们得到一个符号,每个人都得到一个符号。 在给定的对象文件中,一些符号可能引用其他对象文件或共享库,链接器将从符号引用创建一个可执行程序。
|
||||
当我们在对象文件层时,符号是王者。函数用符号命名,全局变量用符号命名,你得到一个符号,我们得到一个符号,每个人都得到一个符号。 在给定的对象文件中,一些符号可能引用其他对象文件或共享库,链接器将从符号引用创建一个可执行程序。
|
||||
|
||||
可以在正确命名的符号表中查找符号,它存储在二进制文件的 ELF 部分中。幸运的是,`libelfin` 有一个不错的接口来做这件事,所以我们不需要自己处理所有的 ELF 的事情。为了让你知道我们在处理什么,下面是一个二进制文件的 `.symtab` 部分的转储,它由 `readelf` 生成:
|
||||
|
||||
@ -222,7 +207,7 @@ Num: Value Size Type Bind Vis Ndx Name
|
||||
|
||||
你可以在对象文件中看到用于设置环境的很多符号,最后还可以看到 `main` 符号。
|
||||
|
||||
我们对符号的类型、名称和值(地址)感兴趣。我们有一个 `symbol_type` 类型的枚举,并使用一个 `std::string` 作为名称,`std::uintptr_t` 作为地址:
|
||||
我们对符号的类型、名称和值(地址)感兴趣。我们有一个该类型的 `symbol_type` 枚举,并使用一个 `std::string` 作为名称,`std::uintptr_t` 作为地址:
|
||||
|
||||
```
|
||||
enum class symbol_type {
|
||||
@ -265,7 +250,7 @@ symbol_type to_symbol_type(elf::stt sym) {
|
||||
};
|
||||
```
|
||||
|
||||
最后我们要查找符号。为了说明的目的,我循环查找符号表的 ELF 部分,然后收集我在其中找到的任意符号到 `std::vector` 中。更智能的实现将建立从名称到符号的映射,这样你只需要查看一次数据就行了。
|
||||
最后我们要查找符号。为了说明的目的,我循环查找符号表的 ELF 部分,然后收集我在其中找到的任意符号到 `std::vector` 中。更智能的实现可以建立从名称到符号的映射,这样你只需要查看一次数据就行了。
|
||||
|
||||
```
|
||||
std::vector<symbol> debugger::lookup_symbol(const std::string& name) {
|
||||
@ -287,16 +272,12 @@ std::vector<symbol> debugger::lookup_symbol(const std::string& name) {
|
||||
}
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
### 添加命令
|
||||
|
||||
一如往常,我们需要添加一些更多的命令来向用户暴露功能。对于断点,我使用 GDB 风格的接口,其中断点类型是通过你传递的参数推断的,而不用要求显式切换:
|
||||
|
||||
* `0x<hexadecimal>` -> 断点地址
|
||||
|
||||
* `<line>:<filename>` -> 断点行号
|
||||
|
||||
* `<anything else>` -> 断点函数名
|
||||
|
||||
```
|
||||
@ -326,11 +307,9 @@ else if(is_prefix(command, "symbol")) {
|
||||
}
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
### 测试一下
|
||||
|
||||
在一个简单的二进制文件上启动调试器,并设置源代码级别的断点。在一些 `foo` 上设置一个断点,看到我的调试器停在它上面是我这个项目最有价值的时刻之一。
|
||||
在一个简单的二进制文件上启动调试器,并设置源代码级别的断点。在一些 `foo` 函数上设置一个断点,看到我的调试器停在它上面是我这个项目最有价值的时刻之一。
|
||||
|
||||
符号查找可以通过在程序中添加一些函数或全局变量并查找它们的名称来进行测试。请注意,如果你正在编译 C++ 代码,你还需要考虑[名称重整][10]。
|
||||
|
||||
@ -342,19 +321,19 @@ else if(is_prefix(command, "symbol")) {
|
||||
|
||||
via: https://blog.tartanllama.xyz/c++/2017/06/19/writing-a-linux-debugger-source-break/
|
||||
|
||||
作者:[Simon Brand ][a]
|
||||
作者:[Simon Brand][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/c++/2017/03/21/writing-a-linux-debugger-setup/
|
||||
[2]:https://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
|
||||
[3]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/
|
||||
[4]:https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
|
||||
[5]:https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/
|
||||
[6]:https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/
|
||||
[1]:https://linux.cn/article-8626-1.html
|
||||
[2]:https://linux.cn/article-8645-1.html
|
||||
[3]:https://linux.cn/article-8663-1.html
|
||||
[4]:https://linux.cn/article-8719-1.html
|
||||
[5]:https://linux.cn/article-8812-1.html
|
||||
[6]:https://linux.cn/article-8813-1.html
|
||||
[7]:https://blog.tartanllama.xyz/c++/2017/06/19/writing-a-linux-debugger-source-break/
|
||||
[8]:https://blog.tartanllama.xyz/c++/2017/06/24/writing-a-linux-debugger-unwinding/
|
||||
[9]:https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
|
@ -0,0 +1,384 @@
|
||||
Samba 系列(十五):用 SSSD 和 Realm 集成 Ubuntu 到 Samba4 AD DC
|
||||
============================================================
|
||||
|
||||
本教程将告诉你如何使用 SSSD 和 Realm 服务将 Ubuntu 桌面版机器加入到 Samba4 活动目录域中,以便通过活动目录来认证用户。
|
||||
|
||||
### 要求:
|
||||
|
||||
1. [在 Ubuntu 上用 Samba4 创建一个活动目录架构][1]
|
||||
|
||||
### 第 1 步:初始配置
|
||||
|
||||
1、 在把 Ubuntu 加入活动目录前确保主机名被正确设置了。使用 `hostnamectl` 命令设置机器名字或者手动编辑 `/etc/hostname` 文件。
|
||||
|
||||
```
|
||||
$ sudo hostnamectl set-hostname your_machine_short_hostname
|
||||
$ cat /etc/hostname
|
||||
$ hostnamectl
|
||||
```
|
||||
|
||||
2、 接下来,编辑机器网络接口设置并且添加合适的 IP 设置,并将正确的 DNS IP 服务器地址指向 Samba 活动目录域控制器,如下图所示。
|
||||
|
||||
如果你已经配置了 DHCP 服务来为局域网机器自动分配包括合适的 AD DNS IP 地址的 IP 设置,那么你可以跳过这一步。
|
||||
|
||||
[][2]
|
||||
|
||||
*设置网络接口*
|
||||
|
||||
上图中,`192.168.1.254` 和 `192.168.1.253` 代表 Samba4 域控制器的 IP 地址。
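如果你的机器使用经典的 ifupdown 网络配置,下面是一个假设性的 `/etc/network/interfaces` 片段,示意如何把 DNS 指向上面两台域控制器。其中的接口名 `ens33`、本机 IP 和 `tecmint.lan` 域名都只是示例(原文只给出了截图),`dns-*` 选项需要安装 resolvconf 才会生效,请按你的环境调整。

```
auto ens33
iface ens33 inet static
        address 192.168.1.20
        netmask 255.255.255.0
        gateway 192.168.1.1
        # 将 DNS 指向 Samba4 AD 域控制器(示例地址)
        dns-nameservers 192.168.1.254 192.168.1.253
        dns-search tecmint.lan
```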
|
||||
|
||||
3、 用 GUI(图形用户界面)或命令行重启网络服务来应用修改,并且对你的域名发起一系列 ping 请求来测试 DNS 解析如预期工作。 也用 `host` 命令来测试 DNS 解析。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart networking.service
|
||||
$ host your_domain.tld
|
||||
$ ping -c2 your_domain_name
|
||||
$ ping -c2 adc1
|
||||
$ ping -c2 adc2
|
||||
```
|
||||
|
||||
4、 最后, 确保机器时间和 Samba4 AD 同步。安装 `ntpdate` 包并用下列指令和 AD 同步时间。
|
||||
|
||||
```
|
||||
$ sudo apt-get install ntpdate
|
||||
$ sudo ntpdate your_domain_name
|
||||
```
|
||||
|
||||
### 第 2 步:安装需要的包
|
||||
|
||||
5、 这一步将安装将 Ubuntu 加入 Samba4 活动目录域控制器所必须的软件和依赖:Realmd 和 SSSD 服务。
|
||||
|
||||
```
|
||||
$ sudo apt install adcli realmd krb5-user samba-common-bin samba-libs samba-dsdb-modules sssd sssd-tools libnss-sss libpam-sss packagekit policykit-1
|
||||
```
|
||||
|
||||
6、 输入大写的默认 realm 名称,然后按下回车继续安装。
|
||||
|
||||
[][3]
|
||||
|
||||
*输入 Realm 名称*
|
||||
|
||||
7、 接着,创建包含以下内容的 SSSD 配置文件。
|
||||
|
||||
```
|
||||
$ sudo nano /etc/sssd/sssd.conf
|
||||
```
|
||||
|
||||
加入下面的内容到 `sssd.conf` 文件。
|
||||
|
||||
```
|
||||
[nss]
|
||||
filter_groups = root
|
||||
filter_users = root
|
||||
reconnection_retries = 3
|
||||
[pam]
|
||||
reconnection_retries = 3
|
||||
[sssd]
|
||||
domains = tecmint.lan
|
||||
config_file_version = 2
|
||||
services = nss, pam
|
||||
default_domain_suffix = TECMINT.LAN
|
||||
[domain/tecmint.lan]
|
||||
ad_domain = tecmint.lan
|
||||
krb5_realm = TECMINT.LAN
|
||||
realmd_tags = manages-system joined-with-samba
|
||||
cache_credentials = True
|
||||
id_provider = ad
|
||||
krb5_store_password_if_offline = True
|
||||
default_shell = /bin/bash
|
||||
ldap_id_mapping = True
|
||||
use_fully_qualified_names = True
|
||||
fallback_homedir = /home/%d/%u
|
||||
access_provider = ad
|
||||
auth_provider = ad
|
||||
chpass_provider = ad
|
||||
access_provider = ad
|
||||
ldap_schema = ad
|
||||
dyndns_update = true
|
||||
dyndns_refresh_interval = 43200
|
||||
dyndns_update_ptr = true
|
||||
dyndns_ttl = 3600
|
||||
```
|
||||
|
||||
确保你对应地替换了下列参数的域名:
|
||||
|
||||
```
|
||||
domains = tecmint.lan
|
||||
default_domain_suffix = TECMINT.LAN
|
||||
[domain/tecmint.lan]
|
||||
ad_domain = tecmint.lan
|
||||
krb5_realm = TECMINT.LAN
|
||||
```
|
||||
|
||||
8、 接着,用下列命令给 SSSD 配置文件适当的权限:
|
||||
|
||||
```
|
||||
$ sudo chmod 700 /etc/sssd/sssd.conf
|
||||
```
|
||||
|
||||
9、 现在,打开并编辑 Realmd 配置文件,输入下面这行:
|
||||
|
||||
```
|
||||
$ sudo nano /etc/realmd.conf
|
||||
```
|
||||
|
||||
`realmd.conf` 文件摘录:
|
||||
|
||||
```
|
||||
[active-directory]
|
||||
os-name = Linux Ubuntu
|
||||
os-version = 17.04
|
||||
[service]
|
||||
automatic-install = yes
|
||||
[users]
|
||||
default-home = /home/%d/%u
|
||||
default-shell = /bin/bash
|
||||
[tecmint.lan]
|
||||
user-principal = yes
|
||||
fully-qualified-names = no
|
||||
```
|
||||
|
||||
10、 最后需要修改的文件属于 Samba 守护进程。打开 `/etc/samba/smb.conf` 文件进行编辑,在文件开头的 `[global]` 部分之后加入下面这块代码,如下图所示。
|
||||
|
||||
```
|
||||
workgroup = TECMINT
|
||||
client signing = yes
|
||||
client use spnego = yes
|
||||
kerberos method = secrets and keytab
|
||||
realm = TECMINT.LAN
|
||||
security = ads
|
||||
```
|
||||
|
||||
[][4]
|
||||
|
||||
*配置 Samba 服务器*
|
||||
|
||||
确保你替换了域名值,特别是对应域名的 realm 值,并运行 `testparm` 命令检验设置文件是否包含错误。
|
||||
|
||||
```
|
||||
$ sudo testparm
|
||||
```
|
||||
|
||||
[][5]
|
||||
|
||||
*测试 Samba 配置*
|
||||
|
||||
11、 在做完所有必需的修改之后,用 AD 管理员帐号验证 Kerberos 认证并用下面的命令列出票据。
|
||||
|
||||
```
|
||||
$ sudo kinit ad_admin_user@DOMAIN.TLD
|
||||
$ sudo klist
|
||||
```
|
||||
|
||||
[][6]
|
||||
|
||||
*检验 Kerberos 认证*
|
||||
|
||||
### 第 3 步: 加入 Ubuntu 到 Samba4 Realm
|
||||
|
||||
12、 键入下列命令将 Ubuntu 机器加入到 Samba4 活动目录。用有管理员权限的 AD DC 账户名字,以便绑定 realm 可以如预期般工作,并替换对应的域名值。
|
||||
|
||||
```
|
||||
$ sudo realm discover -v DOMAIN.TLD
|
||||
$ sudo realm list
|
||||
$ sudo realm join TECMINT.LAN -U ad_admin_user -v
|
||||
$ sudo net ads join -k
|
||||
```
|
||||
|
||||
[][7]
|
||||
|
||||
*加入 Ubuntu 到 Samba4 Realm*
|
||||
|
||||
[][8]
|
||||
|
||||
*列出 Realm Domain 信息*
|
||||
|
||||
[][9]
|
||||
|
||||
*添加用户到 Realm Domain*
|
||||
|
||||
[][10]
|
||||
|
||||
*添加 Domain 到 Realm*
|
||||
|
||||
13、 区域绑定好了之后,运行下面的命令确保所有域账户允许在这台机器上认证。
|
||||
|
||||
```
|
||||
$ sudo realm permit -all
|
||||
```
|
||||
|
||||
然后你可以使用下面举例的 `realm` 命令允许或者禁止域用户帐号或群组访问。
|
||||
|
||||
```
|
||||
$ sudo realm deny -a
|
||||
$ realm permit --groups 'domain.tld\Linux Admins'
|
||||
$ realm permit user@domain.lan
|
||||
$ realm permit DOMAIN\\User2
|
||||
```
|
||||
|
||||
14、 从一个 [安装了 RSAT 工具的][11] Windows 机器上你可以打开 AD UC 并浏览“<ruby>电脑<rt>computers</rt></ruby>”容器,并检验是否有一个使用你机器名的对象帐号已经创建。
|
||||
|
||||
[][12]
|
||||
|
||||
*确保域被加入 AD DC*
|
||||
|
||||
### 第 4 步:配置 AD 账户认证
|
||||
|
||||
15、 为了在 Ubuntu 机器上用域账户认证,你需要用 root 权限运行 `pam-auth-update` 命令并允许所有 PAM 配置文件,包括为每个域账户在第一次注册的时候自动创建家目录的选项。
|
||||
|
||||
按 [空格] 键勾选所有配置项,并点击 OK 来应用配置。
|
||||
|
||||
```
|
||||
$ sudo pam-auth-update
|
||||
```
|
||||
|
||||
[][13]
|
||||
|
||||
*PAM 配置*
|
||||
|
||||
16、 在系统上手动编辑 `/etc/pam.d/common-account` 文件,添加下面这一行,以便为认证过的域用户自动创建家目录。
|
||||
|
||||
```
|
||||
session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
|
||||
```
|
||||
|
||||
17、 如果活动目录用户不能用 linux 命令行修改他们的密码,打开 `/etc/pam.d/common-password` 文件并在 `password` 行移除 `use_authtok` 语句,最后如下:
|
||||
|
||||
```
|
||||
password [success=1 default=ignore] pam_winbind.so try_first_pass
|
||||
```
|
||||
|
||||
18、 最后,用下面的命令重启并启用以应用 Realmd 和 SSSD 服务的修改:
|
||||
|
||||
```
|
||||
$ sudo systemctl restart realmd sssd
|
||||
$ sudo systemctl enable realmd sssd
|
||||
```
|
||||
|
||||
19、 为了测试 Ubuntu 机器是否成功集成到 realm ,安装 winbind 包并运行 `wbinfo` 命令列出域账户和群组,如下所示。
|
||||
|
||||
```
|
||||
$ sudo apt-get install winbind
|
||||
$ wbinfo -u
|
||||
$ wbinfo -g
|
||||
```
|
||||
|
||||
[][14]
|
||||
|
||||
*列出域账户*
|
||||
|
||||
20、 同样,也可以针对特定的域用户或群组使用 `getent` 命令检验 Winbind nsswitch 模块。
|
||||
|
||||
```
|
||||
$ sudo getent passwd your_domain_user
|
||||
$ sudo getent group 'domain admins'
|
||||
```
|
||||
|
||||
[][15]
|
||||
|
||||
*检验 Winbind Nsswitch*
|
||||
|
||||
21、 你也可以用 Linux `id` 命令获取 AD 账户的信息,命令如下:
|
||||
|
||||
```
|
||||
$ id tecmint_user
|
||||
```
|
||||
|
||||
[][16]
|
||||
|
||||
*检验 AD 用户信息*
|
||||
|
||||
22、 在 Ubuntu 主机上,用 `su -` 后跟域用户名来以 Samba4 AD 账户进行认证。运行 `id` 命令可以获取该 AD 账户的更多信息。
|
||||
|
||||
```
|
||||
$ su - your_ad_user
|
||||
```
|
||||
|
||||
[][17]
|
||||
|
||||
*AD 用户认证*
|
||||
|
||||
用 `pwd` 命令查看你的域用户当前工作目录,和用 `passwd` 命令修改密码。
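下面是一个简单的交互示意(其中的用户名、域名和家目录路径都只是假设的示例,并非原文内容):

```
$ su - your_ad_user@tecmint.lan
$ pwd                 # 查看当前工作目录,例如 /home/tecmint.lan/your_ad_user
$ passwd              # 修改该域账户的密码
$ exit
```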
|
||||
|
||||
23、 要让某个域账户在 Ubuntu 上拥有 root 权限,你需要用下面的命令把该 AD 用户名添加到 sudo 系统群组:
|
||||
|
||||
```
|
||||
$ sudo usermod -aG sudo your_domain_user@domain.tld
|
||||
```
|
||||
|
||||
用域账户登录 Ubuntu 并运行 `apt update` 命令来更新你的系统以检验 root 权限。
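一个假设性的验证流程大致如下(用户名请替换为你自己的域账户,仅作示意):

```
$ su - your_domain_user@domain.tld
$ sudo apt update     # 如果已成功加入 sudo 组,这里会要求输入该域用户自己的密码并正常执行
```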
|
||||
|
||||
24、 要给一个域群组 root 权限,用 `visudo` 命令打开并编辑 `/etc/sudoers` 文件,加入如下行:
|
||||
|
||||
```
|
||||
%domain\ admins@tecmint.lan ALL=(ALL:ALL) ALL
|
||||
```
|
||||
|
||||
25、 要在 Ubuntu 桌面使用域账户认证,通过编辑 `/usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf` 文件来修改 LightDM 显示管理器,增加以下两行并重启 lightdm 服务或重启机器应用修改。
|
||||
|
||||
```
|
||||
greeter-show-manual-login=true
|
||||
greeter-hide-users=true
|
||||
```
|
||||
|
||||
域账户用“你的域用户”或“你的域用户@你的域” 格式来登录 Ubuntu 桌面。
|
||||
|
||||
26、 要使用 Samba AD 账户的简称格式,编辑 `/etc/sssd/sssd.conf` 文件,在 `[sssd]` 块中加入下面这一行配置。
|
||||
|
||||
```
|
||||
full_name_format = %1$s
|
||||
```
|
||||
|
||||
并重启 SSSD 守护进程应用改变。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart sssd
|
||||
```
|
||||
|
||||
你会注意到 bash 提示符会变成了没有附加域名部分的 AD 用户名。
|
||||
|
||||
27、 万一你因为 `sssd.conf` 里的 `enumerate=true` 参数设定而不能登录,你得用下面的命令清空 sssd 缓存数据:
|
||||
|
||||
```
|
||||
$ rm /var/lib/sss/db/cache_tecmint.lan.ldb
|
||||
```
|
||||
|
||||
这就是全部了!虽然这个教程主要集中于集成 Samba4 活动目录,同样的步骤也能被用于把使用 Realm 和 SSSD 服务的 Ubuntu 整合到微软 Windows 服务器活动目录。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Matei Cezar - 我是一个电脑迷,开源软件和基于 Linux 的系统软件的粉丝,在 Linux 桌面发行版、服务器和 bash 脚本方面有大约 4 年的经验。
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/integrate-ubuntu-to-samba4-ad-dc-with-sssd-and-realm/
|
||||
|
||||
作者:[Matei Cezar][a]
|
||||
译者:[XYenChi](https://github.com/XYenChi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/cezarmatei/
|
||||
[1]:https://linux.cn/article-8065-1.html
|
||||
[2]:https://www.tecmint.com/wp-content/uploads/2017/07/Configure-Network-Interface.jpg
|
||||
[3]:https://www.tecmint.com/wp-content/uploads/2017/07/Set-realm-name.png
|
||||
[4]:https://www.tecmint.com/wp-content/uploads/2017/07/Configure-Samba-Server.jpg
|
||||
[5]:https://www.tecmint.com/wp-content/uploads/2017/07/Test-Samba-Configuration.jpg
|
||||
[6]:https://www.tecmint.com/wp-content/uploads/2017/07/Check-Kerberos-Authentication.jpg
|
||||
[7]:https://www.tecmint.com/wp-content/uploads/2017/07/Join-Ubuntu-to-Samba4-Realm.jpg
|
||||
[8]:https://www.tecmint.com/wp-content/uploads/2017/07/List-Realm-Domain-Info.jpg
|
||||
[9]:https://www.tecmint.com/wp-content/uploads/2017/07/Add-User-to-Realm-Domain.jpg
|
||||
[10]:https://www.tecmint.com/wp-content/uploads/2017/07/Add-Domain-to-Realm.jpg
|
||||
[11]:https://linux.cn/article-8097-1.html
|
||||
[12]:https://www.tecmint.com/wp-content/uploads/2017/07/Confirm-Domain-Added.jpg
|
||||
[13]:https://www.tecmint.com/wp-content/uploads/2017/07/PAM-Configuration.jpg
|
||||
[14]:https://www.tecmint.com/wp-content/uploads/2017/07/List-Domain-Accounts.jpg
|
||||
[15]:https://www.tecmint.com/wp-content/uploads/2017/07/check-Winbind-nsswitch.jpg
|
||||
[16]:https://www.tecmint.com/wp-content/uploads/2017/07/Check-AD-User-Info.jpg
|
||||
[17]:https://www.tecmint.com/wp-content/uploads/2017/07/AD-User-Authentication.jpg
|
||||
[18]:https://www.tecmint.com/author/cezarmatei/
|
||||
[19]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[20]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,58 @@
|
||||
IoT 边缘计算框架的新进展
|
||||
---
|
||||
|
||||

|
||||
|
||||
> 开源项目 EdgeX Foundry 旨在开发一个标准化的互操作物联网边缘计算框架。
|
||||
|
||||
4 月份时,Linux 基金会[启动](http://linuxgizmos.com/open-source-group-focuses-on-industrial-iot-gateway-middleware/)了一个开源项目 [EdgeX Foundry](https://www.edgexfoundry.org/) ,用于为物联网边缘计算开发一个标准化互操作框架。 就在最近, EdgeX Foundry 又[宣布](https://www.edgexfoundry.org/announcement/2017/07/17/edgex-foundry-builds-momentum-for-a-iot-interoperability-and-a-unified-marketplace-with-eight-new-members/)新增了 8 个成员,其总成员达到 58 位。
|
||||
|
||||
这些新成员是 Absolute、IoT Impact LABS、inwinStack、Parallel Machines、Queen's University Belfast、RIOT、Toshiba Digital Solutions Corporation 和 Tulip Interfaces。 其原有成员包括 AMD、Analog Devices、Canonical/Ubuntu、Cloud Foundry、Dell、Linaro、Mocana、NetFoundry、 Opto 22、RFMicron 和 VMWare 等其他公司或组织。
|
||||
|
||||
EdgeX Foundry 项目构建于戴尔早期的基于 Apache2.0 协议的 [FUSE](https://medium.com/@gigastacey/dell-plans-an-open-source-iot-stack-3dde43f24feb) 物联网中间件框架之上,其中包括十几个微服务和超过 12.5 万行代码。在 FUSE 合并了类同项目 AllJoyn-compliant IoTX 之后,Linux 基金会协同 Dell 创立了 EdgeX Foundry ,后者是由 EdgeX Foundry 现有成员 Two Bulls 和 Beechwood 发起的项目。
|
||||
|
||||
EdgeX Foundry 将创造一个互操作性的、即插即用组件的物联网边缘计算的生态系统。开源的 EdgeX 栈将协调各种传感器网络协议与多种云平台及分析平台。该框架旨在充分挖掘横跨边缘计算、安全、系统管理和服务等模块间的互操作性代码。
|
||||
|
||||
对于项目成员及其客户来说,其关键的好处是在于能将各种预先认证的软件集成到许多 IoT 网关和智能边缘设备上。 在 Linux.com 的一次采访中,[IoT Impact LABS](https://iotimpactlabs.com/) 的首席工程师 Dan Mahoney 说:“现实中,EdgeX Foundry 降低了我们在部署多供应商解决方案时所面对的挑战。”
|
||||
|
||||
在 Linux 基金会仍然将其 AllSeen Alliance 项目下的 AllJoyn 规范合并到 [IoTivity](https://www.linux.com/news/how-iotivity-and-alljoyn-could-combine) 标准的情况下,为什么会发起了另外一个物联网标准化项目(EdgeX Foundry) 呢? 原因之一,EdgeX Foundry 不同于 IoTivity,IoTivity 主要解决工业物联网问题,而 EdgeX Foundry 旨在解决消费级和工业级物联网全部的问题。 更具体来说, EdgeX Foundry 旨在成为网关和智能终端的通用中间件。 EdgeX Foundry 与 IoTivity 的另一个不同在于,前者希望借助预认证的终端塑造一种新产品,后者更多解决现存产品之间的互操作性。
|
||||
|
||||
Linux 基金会 IoT 高级总监 Philip DesAutels 说:“IoTivity 提供实现设备之间无缝连接的协议, 而 EdgeX Foundry 提供了一个边缘计算框架。EdgeX Foundry 能够兼容如 IoTivity、 BacNet、 EtherCat 等任何协议设备,从而实现集成多协议通信系统的通用边缘计算框架,该项目的目标是为构建互操作组件的生态系统的过程中,降低不确定性,缩短市场化时间,更好地产生规模效应。”
|
||||
|
||||
上个月,由 [Open Connectivity Foundation](https://openconnectivity.org/developer/specifications/international-standards) (OCF)和 Linux 基金会共同发起的 IoTivity 项目发布了 [IoTivity 1.3](https://wiki.iotivity.org/release_note_1.3.0),该版本增加了与其曾经的对手 AllJoyn spec 的纽带,也增加了对于 OCF 的 UPnP 设备发现标准的接口。 预计在 [IoTivity 2.0](https://www.linux.com/news/iotivity-20-whats-store) 中, IoTivity 和 AllJoyn 将会更进一步深入集成。
|
||||
|
||||
DesAutels 告诉 linux.com,IoTivity 和 EdgeX 是“高度互补的”,其“原因是 EdgeX Foundry 项目的几个成员也是 IoTivity 或 OCF 的成员,如此更强化了 IoTivity 和 EdgeX 的合作关系。”
|
||||
|
||||
尽管 IoTivity 和 EdgeX 都宣称是跨平台的,包括在 CPU 架构和 OS 方面,但是二者还是存在一定区别。 IoTivity 最初是基于 Linux 平台设计,兼容 Ubuntu、Tizen 和 Android 等 Linux 系列 OS,后来逐步扩展到 Windows 和 iOS 操作系统。与之对应的 EdgeX 设计之初就是基于跨平台的理念,其完美兼容于各种 CPU 架构,支持 Linux、Windows 和 Mac OS 等操作系统,未来还将兼容于实时操作系统(RTOS)。
|
||||
|
||||
EdgeX 的新成员 [RIOT](https://riot-os.org/) 提供了一个开源的面向物联网的项目 RIOT RTOS。RIOT 的主要维护者 Thomas Eichinger 在一份表态声明中说:“由于 RIOT 初衷就是致力于解决 Linux 不太适应的问题,故对于 RIOT 社区来说,参加和支持类似 EdgeX Foundry 这样的边缘计算开源组织,是自然而然的事情。”
|
||||
|
||||
### 传感器集成的简化
|
||||
|
||||
IoT Impact LABS (即 Impact LABS 或直接称为 LABS)是另一个 EdgeX 新成员。 该公司推出了一个独特的业务模式,旨在帮助中小企业度过物联网解决方案的试用阶段。该公司的大部分客户,其中包括几个 EdgeX Foundry 的项目成员,是致力于建设智慧城市、基础设施再利用、提高食品安全,以及解决社会面临的自然资源缺乏的挑战。
|
||||
|
||||
Dan Mahoney 说:“在 LABS 我们花费了很多时间来调和试点客户的解决方案之间的差异性。 EdgeX Foundry 可以最小化部署边缘软件系统的工作量,从而使我们能够更快更好地部署高质量的解决方案。”
|
||||
|
||||
该框架在涉及多个供应商、多种类型传感器的场景尤其凸显优势。“Edgex Foundry 将为我们提供快速构建可以控制所有部署的传感器的网关的能力。” Mahoney 补充说到。传感器制造商将借助 EdgeX SDK 烧写应用层协议驱动到边缘设备,该协议能够兼容多供应商和解决方案。
|
||||
|
||||
### 边缘分析能力的构建
|
||||
|
||||
当我们问到 Mahoney 的公司希望见到 EdgeX Foundry 怎样的发展时,他说:“我们喜闻乐见的一个目标是,有更多有效的工业协议成为设备服务,这是一条更清晰的边缘计算实现之路。”
|
||||
|
||||
在工业物联网和消费级物联网中边缘计算都呈现增长趋势。 在后者,我们已经看到如 Alexa 的智能声控以及录像分析等几个智能家居系统[集成了边缘计算分析](https://www.linux.com/news/smart-linux-home-hubs-mix-iot-ai)技术。 这减轻了云服务平台的计算负荷,但同时也带来了安全、隐私,以及由于供应商中断或延迟问题引起的服务中断问题。
|
||||
|
||||
对于工业物联网网关,延迟问题成为首要的问题。因此,在物联网网关方面出现了一些类似于云服务功能的扩展。 其中一个解决方案是,为了安全将一些云服务上的安全保障应用借助容器如 [RIOS 与 Ubuntu 内核快照机制](https://www.linux.com/news/future-iot-containers-aim-solve-security-crisis)等方式集成到嵌入式设备。 另一种方案是,开发 IoT 生态系统,迁移云功能到边缘计算上。上个月,Amazon 为基于 linux 的网关发布了实现 [AWS Greengrass](http://linuxgizmos.com/amazon-releases-aws-greengrass-for-local-iot-processing-on-linux-devices/) 物联网协议栈的 AWS lambda。 该软件能够使 AWS 计算、消息路由、数据缓存和同步能力在诸如物联网网关等联网设备上完成。
|
||||
|
||||
分析能力是 EdgeX Foundry 发展路线上的一个关键功能要点。 发起成员之一 Cloud Foundry 其旨在集成其主要的工业应用平台到边缘设备。 另一个新成员 [Parallel Machines](https://www.parallelmachines.com/) 则计划利用 EdgeX 将 AI 带到边缘设备。
|
||||
|
||||
EdgeX Foundry 仍然在项目早期, 软件仍然在 α 阶段,其成员在上个月(六月份)才刚刚进行了第一次全体成员大会。同时该项目已经为新开发者准备了一些初始训练课程,另外从[这里](https://wiki.edgexfoundry.org/)也能获取更多的信息。
|
||||
|
||||
----
|
||||
|
||||
via: [https://www.linux.com/blog/2017/7/iot-framework-edge-computing-gains-ground](https://www.linux.com/blog/2017/7/iot-framework-edge-computing-gains-ground)
|
||||
|
||||
作者: [ERIC BROWN](https://www.linux.com/users/ericstephenbrown)
|
||||
译者:[penghuster](https://github.com/penghuster)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
216
published/20170813 An Intro to Compilers.md
Normal file
216
published/20170813 An Intro to Compilers.md
Normal file
@ -0,0 +1,216 @@
|
||||
编译器简介: 在 Siri 前时代如何与计算机对话
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
简单说来,一个<ruby>编译器<rt>compiler</rt></ruby>不过是一个可以翻译其他程序的程序。传统的编译器可以把源代码翻译成你的计算机能够理解的可执行机器代码。(一些编译器将源代码翻译成别的程序语言,这样的编译器称为源到源翻译器或<ruby>转化器<rt>transpilers</rt></ruby>。)[LLVM][7] 是一个广泛使用的编译器项目,包含许多模块化的编译工具。
|
||||
|
||||
传统的编译器设计包含三个部分:
|
||||
|
||||

|
||||
|
||||
* <ruby>前端<rt>Frontend</rt></ruby>将源代码翻译为<ruby>中间表示<rt>intermediate representation </rt></ruby> (IR)* 。[clang][1] 是 LLVM 中用于 C 家族语言的前端工具。
|
||||
* <ruby>优化器<rt>Optimizer</rt></ruby>分析 IR 然后将其转化为更高效的形式。[opt][2] 是 LLVM 的优化工具。
|
||||
* <ruby>后端<rt>Backend</rt></ruby>通过将 IR 映射到目标硬件指令集从而生成机器代码。[llc][3] 是 LLVM 的后端工具。
|
||||
|
||||
注:LLVM 的 IR 是一种和汇编类似的低级语言。然而,它抽离了特定硬件信息。
|
||||
|
||||
### Hello, Compiler
|
||||
|
||||
下面是一个打印 “Hello, Compiler!” 到标准输出的简单 C 程序。C 语法是人类可读的,但是计算机却不能理解,不知道该程序要干什么。我将通过三个编译阶段使该程序变成机器可执行的程序。
|
||||
|
||||
```
|
||||
// compile_me.c
|
||||
// Wave to the compiler. The world can wait.
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
int main() {
|
||||
printf("Hello, Compiler!\n");
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
#### 前端
|
||||
|
||||
正如我在上面所提到的,`clang` 是 LLVM 中用于 C 家族语言的前端工具。Clang 包含 <ruby>C 预处理器<rt>C preprocessor</rt></ruby>、<ruby>词法分析器<rt>lexer</rt></ruby>、<ruby>语法解析器<rt>parser</rt></ruby>、<ruby>语义分析器<rt>semantic analyzer</rt></ruby>和 <ruby>IR 生成器<rt>IR generator</rt></ruby>。
|
||||
|
||||
**C 预处理器**在将源程序翻译成 IR 前修改源程序。预处理器处理外部包含文件,比如上面的 `#include <stdio.h>`。 它将会把这一行替换为 `stdio.h` C 标准库文件的完整内容,其中包含 `printf` 函数的声明。
|
||||
|
||||
通过运行下面的命令来查看预处理步骤的输出:
|
||||
|
||||
```
|
||||
clang -E compile_me.c -o preprocessed.i
|
||||
```
|
||||
|
||||
**词法分析器**(或<ruby>扫描器<rt>scanner</rt></ruby>或<ruby>分词器<rt>tokenizer</rt></ruby>)将一串字符转化为一串单词。每一个单词或<ruby>记号<rt>token</rt></ruby>,被归并到五种语法类别之一:标点符号、关键字、标识符、文字或注释。
|
||||
|
||||
compile_me.c 的分词过程:
|
||||
|
||||

|
||||
|
||||
**语法分析器**确定源程序中的单词流是否组成了合法的句子。在分析记号流的语法后,它会输出一个<ruby>抽象语法树<rt>abstract syntax tree</rt></ruby>(AST)。Clang 的 AST 中的节点表示声明、语句和类型。
|
||||
|
||||
compile_me.c 的语法树:
|
||||
|
||||

|
||||
|
||||
**语义分析器**会遍历抽象语法树,从而确定代码语句是否有正确意义。这个阶段会检查类型错误。如果 `compile_me.c` 的 main 函数返回 `"zero"`而不是 `0`, 那么语义分析器将会抛出一个错误,因为 `"zero"` 不是 `int` 类型。
|
||||
|
||||
**IR 生成器**将抽象语法树翻译为 IR。
|
||||
|
||||
对 compile_me.c 运行 clang 来生成 LLVM IR:
|
||||
|
||||
```
|
||||
clang -S -emit-llvm -o llvm_ir.ll compile_me.c
|
||||
```
|
||||
|
||||
在 `llvm_ir.ll` 中的 main 函数:
|
||||
|
||||
```
|
||||
; llvm_ir.ll
|
||||
@.str = private unnamed_addr constant [18 x i8] c"Hello, Compiler!\0A\00", align 1
|
||||
|
||||
define i32 @main() {
|
||||
%1 = alloca i32, align 4 ; <- memory allocated on the stack
|
||||
store i32 0, i32* %1, align 4
|
||||
%2 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([18 x i8], [18 x i8]* @.str, i32 0, i32 0))
|
||||
ret i32 0
|
||||
}
|
||||
|
||||
declare i32 @printf(i8*, ...)
|
||||
```
|
||||
|
||||
#### 优化程序
|
||||
|
||||
优化程序的工作是基于其对程序的运行时行为的理解来提高代码效率。优化程序将 IR 作为输入,然后生成改进后的 IR 作为输出。LLVM 的优化工具 `opt` 将会通过标记 `-O2`(大写字母 `o`,数字 2)来优化处理器速度,通过标记 `-Os`(大写字母 `o`,小写字母 `s`)来减少指令数目。
|
||||
|
||||
看一看上面的前端工具生成的 LLVM IR 代码和运行下面的命令生成的结果之间的区别:
|
||||
|
||||
```
|
||||
opt -O2 -S llvm_ir.ll -o optimized.ll
|
||||
```
|
||||
|
||||
在 `optimized.ll` 中的 main 函数:
|
||||
|
||||
```
|
||||
optimized.ll
|
||||
|
||||
@str = private unnamed_addr constant [17 x i8] c"Hello, Compiler!\00"
|
||||
|
||||
define i32 @main() {
|
||||
%puts = tail call i32 @puts(i8* getelementptr inbounds ([17 x i8], [17 x i8]* @str, i64 0, i64 0))
|
||||
ret i32 0
|
||||
}
|
||||
|
||||
declare i32 @puts(i8* nocapture readonly)
|
||||
```
|
||||
|
||||
优化后的版本中, main 函数没有在栈中分配内存,因为它不使用任何内存。优化后的代码中调用 `puts` 函数而不是 `printf` 函数,因为程序中并没有使用 `printf` 函数的格式化功能。
|
||||
|
||||
当然,优化程序不仅仅知道何时可以把 `printf` 函数用 `puts` 函数代替。优化程序也能展开循环并内联简单计算的结果。考虑下面的程序,它将两个整数相加并打印出结果。
|
||||
|
||||
```
|
||||
// add.c
|
||||
#include <stdio.h>
|
||||
|
||||
int main() {
|
||||
int a = 5, b = 10, c = a + b;
|
||||
printf("%i + %i = %i\n", a, b, c);
|
||||
}
|
||||
```
|
||||
|
||||
下面是未优化的 LLVM IR:
|
||||
|
||||
```
|
||||
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1
|
||||
|
||||
define i32 @main() {
|
||||
%1 = alloca i32, align 4 ; <- allocate stack space for var a
|
||||
%2 = alloca i32, align 4 ; <- allocate stack space for var b
|
||||
%3 = alloca i32, align 4 ; <- allocate stack space for var c
|
||||
store i32 5, i32* %1, align 4 ; <- store 5 at memory location %1
|
||||
store i32 10, i32* %2, align 4 ; <- store 10 at memory location %2
|
||||
%4 = load i32, i32* %1, align 4 ; <- load the value at memory address %1 into register %4
|
||||
%5 = load i32, i32* %2, align 4 ; <- load the value at memory address %2 into register %5
|
||||
%6 = add nsw i32 %4, %5 ; <- add the values in registers %4 and %5\. put the result in register %6
|
||||
store i32 %6, i32* %3, align 4 ; <- put the value of register %6 into memory address %3
|
||||
%7 = load i32, i32* %1, align 4 ; <- load the value at memory address %1 into register %7
|
||||
%8 = load i32, i32* %2, align 4 ; <- load the value at memory address %2 into register %8
|
||||
%9 = load i32, i32* %3, align 4 ; <- load the value at memory address %3 into register %9
|
||||
%10 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i32 0, i32 0), i32 %7, i32 %8, i32 %9)
|
||||
ret i32 0
|
||||
}
|
||||
|
||||
declare i32 @printf(i8*, ...)
|
||||
```
|
||||
|
||||
下面是优化后的 LLVM IR:
|
||||
|
||||
```
|
||||
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1
|
||||
|
||||
define i32 @main() {
|
||||
%1 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i64 0, i64 0), i32 5, i32 10, i32 15)
|
||||
ret i32 0
|
||||
}
|
||||
|
||||
declare i32 @printf(i8* nocapture readonly, ...)
|
||||
```
|
||||
|
||||
优化后的 main 函数本质上是未优化版本的第 17 行和 18 行,伴有变量值内联。`opt` 计算加法,因为所有的变量都是常数。很酷吧,对不对?
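如果你想自己复现这个结果,可以按前面介绍的方式先为 `add.c` 生成 IR,再交给 `opt` 优化。下面的命令只是示意,输出文件名是我假设的:

```
clang -S -emit-llvm -o add_ir.ll add.c
opt -O2 -S add_ir.ll -o optimized_add.ll
```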
|
||||
|
||||
#### 后端
|
||||
|
||||
LLVM 的后端工具是 `llc`。它分三个阶段将 LLVM IR 作为输入生成机器代码。
|
||||
|
||||
* **指令选择**是将 IR 指令映射到目标机器的指令集。这个步骤使用虚拟寄存器的无限名字空间。
|
||||
* **寄存器分配**是将虚拟寄存器映射到目标体系结构的实际寄存器。我的 CPU 是 x86 结构,它只有 16 个寄存器。然而,编译器将会尽可能少的使用寄存器。
|
||||
* **指令安排**是重排操作,从而反映出目标机器的性能约束。
|
||||
|
||||
运行下面这个命令将会产生一些机器代码:
|
||||
|
||||
```
|
||||
llc -o compiled-assembly.s optimized.ll
|
||||
```
|
||||
|
||||
```
|
||||
_main:
|
||||
pushq %rbp
|
||||
movq %rsp, %rbp
|
||||
leaq L_str(%rip), %rdi
|
||||
callq _puts
|
||||
xorl %eax, %eax
|
||||
popq %rbp
|
||||
retq
|
||||
L_str:
|
||||
.asciz "Hello, Compiler!"
|
||||
```
|
||||
|
||||
这个程序是 x86 汇编语言,它是计算机所说的语言,并具有人类可读的语法。最后,也许终于有人能够理解我了。
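作为补充(原文到生成汇编为止),你还可以让 clang 充当汇编器和链接器,把这段汇编变成可执行文件并运行它。命令和输出仅为示意:

```
clang compiled-assembly.s -o hello
./hello
# 输出:Hello, Compiler!
```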
|
||||
|
||||
* * *
|
||||
|
||||
相关资源:
|
||||
|
||||
1. [设计一个编译器][4]
|
||||
2. [开始探索 LLVM 核心库][5]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://nicoleorchard.com/blog/compilers
|
||||
|
||||
作者:[Nicole Orchard][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://nicoleorchard.com/
|
||||
[1]:http://clang.llvm.org/
|
||||
[2]:http://llvm.org/docs/CommandGuide/opt.html
|
||||
[3]:http://llvm.org/docs/CommandGuide/llc.html
|
||||
[4]:https://www.amazon.com/Engineering-Compiler-Second-Keith-Cooper/dp/012088478X
|
||||
[5]:https://www.amazon.com/Getting-Started-LLVM-Core-Libraries/dp/1782166920
|
||||
[6]:https://twitter.com/norchard/status/864246049266958336
|
||||
[7]:http://llvm.org/
|
@ -4,17 +4,17 @@ Headless Chrome 入门
|
||||
|
||||
### 摘要
|
||||
|
||||
[Headless Chrome][9] 在 Chrome 59 中开始搭载。这是一种在 headless 环境下运行 Chrome 浏览器的方式。从本质上来说,就是不用 chrome 来运行 Chrome!它将 Chromium 和 Blink 渲染引擎提供的所有现代 Web 平台的功能都带入了命令行。
|
||||
在 Chrome 59 中开始搭载 [Headless Chrome][9]。这是一种在<ruby>无需显示<rt>headless</rt></ruby>的环境下运行 Chrome 浏览器的方式。从本质上来说,就是不用 chrome 浏览器来运行 Chrome 的功能!它将 Chromium 和 Blink 渲染引擎提供的所有现代 Web 平台的功能都带入了命令行。
|
||||
|
||||
它为什么有用?
|
||||
它有什么用?
|
||||
|
||||
Headless 浏览器对于自动化测试和不需要可视化 UI 界面的服务器环境是一个很好的工具。例如,你可能需要对真实的网页运行一些测试,创建一个 PDF,或者只是检查浏览器如何呈现 URL。
|
||||
<ruby>无需显示<rt>headless</rt></ruby>的浏览器对于自动化测试和不需要可视化 UI 界面的服务器环境是一个很好的工具。例如,你可能需要对真实的网页运行一些测试,创建一个 PDF,或者只是检查浏览器如何呈现 URL。
|
||||
|
||||
<aside class="caution" style="box-sizing: inherit; font-size: 14px; margin-top: 16px; margin-bottom: 16px; padding: 12px 24px 12px 60px; background: rgb(255, 243, 224); color: rgb(221, 44, 0);">**注意:** Mac 和 Linux 上的 Chrome 59 都可以运行 Headless 模式。[对 Windows 的支持][2]将在 Chrome 60 中提供。检查你使用的 Chrome 版本,请打开 `chrome://version`。</aside>
|
||||
> **注意:** Mac 和 Linux 上的 Chrome 59 都可以运行无需显示模式。[对 Windows 的支持][2]将在 Chrome 60 中提供。要检查你使用的 Chrome 版本,请在浏览器中打开 `chrome://version`。
|
||||
|
||||
### 开启 Headless 模式(命令行界面)
|
||||
### 开启<ruby>无需显示<rt>headless</rt></ruby>模式(命令行界面)
|
||||
|
||||
开启 headless 模式最简单的方法是从命令行打开 Chrome 二进制文件。如果你已经安装了 Chrome 59 以上的版本,请使用 `--headless` 标志启动 Chrome:
|
||||
开启<ruby>无需显示<rt>headless</rt></ruby>模式最简单的方法是从命令行打开 Chrome 二进制文件。如果你已经安装了 Chrome 59 以上的版本,请使用 `--headless` 标志启动 Chrome:
|
||||
|
||||
```
|
||||
chrome \
|
||||
@ -24,11 +24,11 @@ chrome \
|
||||
https://www.chromestatus.com # URL to open. Defaults to about:blank.
|
||||
```
|
||||
|
||||
<aside class="note" style="box-sizing: inherit; font-size: 14px; margin-top: 16px; margin-bottom: 16px; padding: 12px 24px 12px 60px; background: rgb(225, 245, 254); color: rgb(2, 136, 209);">**注意:**目前你仍然需要使用 `--disable-gpu` 标志。但它最终会消失的。</aside>
|
||||
> **注意:**目前你仍然需要使用 `--disable-gpu` 标志。但它最终会不需要的。
|
||||
|
||||
`chrome` 应该指向你安装 Chrome 的位置。确切的位置会因平台差异而不同。当前我在 Mac 上操作,所以我为安装的每个版本的 Chrome 都创建了方便使用的别名。
|
||||
`chrome` 二进制文件应该指向你安装 Chrome 的位置。确切的位置会因平台差异而不同。当前我在 Mac 上操作,所以我为安装的每个版本的 Chrome 都创建了方便使用的别名。
|
||||
|
||||
如果您使用 Chrome 的稳定版,并且无法获得测试版,我建议您使用 `chrome-canary`:
|
||||
如果您使用 Chrome 的稳定版,并且无法获得测试版,我建议您使用 `chrome-canary` 版本:
|
||||
|
||||
```
|
||||
alias chrome="/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome"
|
||||
@ -40,9 +40,9 @@ alias chromium="/Applications/Chromium.app/Contents/MacOS/Chromium"
|
||||
|
||||
### 命令行的功能
|
||||
|
||||
在某些情况下,你可能不需要以编程方式在 Headless Chrome 中执行[脚本][11]。可以使用一些[有用的命令行标志][12]来执行常见的任务。
|
||||
在某些情况下,你可能不需要[以脚本编程的方式][11]操作 Headless Chrome。可以使用一些[有用的命令行标志][12]来执行常见的任务。
|
||||
|
||||
### 打印 DOM
|
||||
#### 打印 DOM
|
||||
|
||||
`--dump-dom` 标志将打印 `document.body.innerHTML` 到标准输出:
|
||||
|
||||
@ -50,7 +50,7 @@ alias chromium="/Applications/Chromium.app/Contents/MacOS/Chromium"
|
||||
chrome --headless --disable-gpu --dump-dom https://www.chromestatus.com/
|
||||
```
|
||||
|
||||
### 创建一个 PDF
|
||||
#### 创建一个 PDF
|
||||
|
||||
`--print-to-pdf` 标志将页面转出为 PDF 文件:
|
||||
|
||||
@ -58,7 +58,7 @@ chrome --headless --disable-gpu --dump-dom https://www.chromestatus.com/
|
||||
chrome --headless --disable-gpu --print-to-pdf https://www.chromestatus.com/
|
||||
```
|
||||
|
||||
### 截图
|
||||
#### 截图
|
||||
|
||||
要捕获页面的屏幕截图,请使用 `--screenshot` 标志:
|
||||
|
||||
@ -72,11 +72,11 @@ chrome --headless --disable-gpu --screenshot --window-size=1280,1696 https://www
|
||||
chrome --headless --disable-gpu --screenshot --window-size=412,732 https://www.chromestatus.com/
|
||||
```
|
||||
|
||||
使用 `--screenshot` 标志运行将在当前工作目录中生成一个名为 `screenshot.png` 的文件。如果你正在寻求整个页面的截图,那么会涉及到很多事情。来自 David Schnurr 的一篇博文已经介绍了这一内容。请查看 [Using headless Chrome as an automated screenshot tool ][13]。
|
||||
使用 `--screenshot` 标志运行 Headless Chrome 将在当前工作目录中生成一个名为 `screenshot.png` 的文件。如果你正在寻求整个页面的截图,那么会涉及到很多事情。来自 David Schnurr 的一篇很棒的博文已经介绍了这一内容。请查看 [使用 headless Chrome 作为自动截屏工具][13]。
|
||||
|
||||
### REPL 模式 (read-eval-print loop)
|
||||
#### REPL 模式 (read-eval-print loop)
|
||||
|
||||
`--repl` 标志可以使 Headless 运行在一个你可以使用浏览器评估 JS 表达式的模式下。执行下面的命令:
|
||||
`--repl` 标志可以使 Headless Chrome 运行在一个你可以使用浏览器评估 JS 表达式的模式下。执行下面的命令:
|
||||
|
||||
```
|
||||
$ chrome --headless --disable-gpu --repl https://www.chromestatus.com/
|
||||
@ -84,27 +84,27 @@ $ chrome --headless --disable-gpu --repl https://www.chromestatus.com/
|
||||
>>> location.href
|
||||
{"result":{"type":"string","value":"https://www.chromestatus.com/features"}}
|
||||
>>> quit
|
||||
$
|
||||
```
|
||||
|
||||
### 在没有浏览器界面的情况下调试 Chrome
|
||||
|
||||
当你使用 `--remote-debugging-port=9222` 运行 Chrome 时,它会启动一个开启 [DevTools 协议][14]的实例。该协议用于与 Chrome 进行通信,并且驱动 headless 浏览器实例。它也是一个类似 Sublime、VS Code 和 Node 的工具,可用于应用程序的远程调试。#协同效应
|
||||
当你使用 `--remote-debugging-port=9222` 运行 Chrome 时,它会启动一个支持 [DevTools 协议][14]的实例。该协议用于与 Chrome 进行通信,并且驱动 Headless Chrome 浏览器实例。它也是一个类似 Sublime、VS Code 和 Node 的工具,可用于应用程序的远程调试。#协同效应
|
||||
|
||||
由于你没有使用浏览器用户界面来查看网页,请在另一个浏览器中输入 `http://localhost:9222`,以检查一切是否正常。你将会看到一个可检查页面的列表,可以点击它们来查看 Headless 正在呈现的内容:
|
||||
由于你没有浏览器用户界面可用来查看网页,请在另一个浏览器中输入 `http://localhost:9222`,以检查一切是否正常。你将会看到一个<ruby>可检查的<rt>inspectable</rt></ruby>页面的列表,可以点击它们来查看 Headless Chrome 正在呈现的内容:
|
||||
|
||||

|
||||
DevTools 远程调试界面
|
||||
|
||||
从这里,你就可以像往常一样使用熟悉的 DevTools 来检查、调试和调整页面了。如果你以编程方式使用 Headless,这个页面也是一个功能强大的调试工具,用于查看所有穿过电线,与浏览器交互的原始 DevTools 协议命令。
|
||||
*DevTools 远程调试界面*
|
||||
|
||||
从这里,你就可以像往常一样使用熟悉的 DevTools 来检查、调试和调整页面了。如果你以编程方式使用 Headless Chrome,这个页面也是一个功能强大的调试工具,用于查看所有通过网络与浏览器交互的原始 DevTools 协议命令。
|
||||
|
||||
### 使用编程模式 (Node)
|
||||
|
||||
### Puppeteer 库 API
|
||||
#### Puppeteer 库 API
|
||||
|
||||
[Puppeteer][15] 是一个由 Chrome 团队开发的 Node 库。它提供了一个高层次的 API 来控制 headless(或 full) Chrome。它与其他自动化测试库,如 Phantom 和 NightmareJS 相类似,但是只适用于最新版本的 Chrome。
|
||||
[Puppeteer][15] 是一个由 Chrome 团队开发的 Node 库。它提供了一个高层次的 API 来控制无需显示版(或 完全版)的 Chrome。它与其他自动化测试库,如 Phantom 和 NightmareJS 相类似,但是只适用于最新版本的 Chrome。
|
||||
|
||||
除此之外,Puppeteer 还可用于轻松截取屏幕截图,创建 PDF,导航页面以及获取有关这些页面的信息。如果你想快速地自动化测试浏览器,我建议使用该库。它隐藏了 DevTools 协议的复杂性,并可以处理诸如启动 Chrome 调试实例等冗余的任务。
|
||||
除此之外,Puppeteer 还可用于轻松截取屏幕截图,创建 PDF,页面间导航以及获取有关这些页面的信息。如果你想快速地自动化进行浏览器测试,我建议使用该库。它隐藏了 DevTools 协议的复杂性,并可以处理诸如启动 Chrome 调试实例等繁冗的任务。
|
||||
|
||||
安装:
|
||||
|
||||
@ -112,7 +112,7 @@ DevTools 远程调试界面
|
||||
yarn add puppeteer
|
||||
```
|
||||
|
||||
**例子** - 打印用户代理
|
||||
**例子** - 打印用户代理:
|
||||
|
||||
```
|
||||
const puppeteer = require('puppeteer');
|
||||
@ -124,7 +124,7 @@ const puppeteer = require('puppeteer');
|
||||
})();
|
||||
```
|
||||
|
||||
**例子** - 获取页面的屏幕截图
|
||||
**例子** - 获取页面的屏幕截图:
|
||||
|
||||
```
|
||||
const puppeteer = require('puppeteer');
|
||||
@ -142,15 +142,15 @@ browser.close();
|
||||
|
||||
查看 [Puppeteer 的文档][16],了解完整 API 的更多信息。
|
||||
|
||||
### CRI 库
|
||||
#### CRI 库
|
||||
|
||||
[chrome-remote-interface][17] 是一个比 Puppeteer API 更低层次的库。如果你想要更接近原始信息和更直接地使用 [DevTools 协议][18]。
|
||||
[chrome-remote-interface][17] 是一个比 Puppeteer API 更低层次的库。如果你想要更接近原始信息和更直接地使用 [DevTools 协议][18]的话,我推荐使用它。
|
||||
|
||||
#### 启动 Chrome
|
||||
**启动 Chrome**
|
||||
|
||||
chrome-remote-interface 不会为你启动 Chrome,所以你要自己启动它。
|
||||
|
||||
在 CLI 部分,我们使用 `--headless --remote-debugging-port=9222` [手动启动 Chrome][19]。但是,要想做到完全自动化测试,你可能希望从应用程序中跳转到 Chrome。
|
||||
在前面的 CLI 章节中,我们使用 `--headless --remote-debugging-port=9222` [手动启动了 Chrome][19]。但是,要想做到完全自动化测试,你可能希望从你的应用程序中启动 Chrome。
|
||||
|
||||
其中一种方法是使用 `child_process`:
|
||||
|
||||
@ -170,17 +170,17 @@ launchHeadlessChrome('https://www.chromestatus.com', (err, stdout, stderr) => {
|
||||
|
||||
但是如果你想要在多个平台上运行可移植的解决方案,事情会变得很棘手。请注意 Chrome 的硬编码路径:
|
||||
|
||||
##### 使用 ChromeLauncher
|
||||
**使用 ChromeLauncher**
|
||||
|
||||
[Lighthouse][20] 是一个奇妙的网络应用质量的测试工具。Lighthouse 内部开发了一个强大的用于启动 Chrome 的模块,现在已经被提取出来,可以单独使用。[`chrome-launcher` NPM 模块][21] 可以找到 Chrome 的安装位置,设置调试实例,启动浏览器和在程序运行完之后将其杀死。它最好的一点是可以跨平台工作,感谢 Node!
|
||||
[Lighthouse][20] 是一个令人称奇的网络应用的质量测试工具。Lighthouse 内部开发了一个强大的用于启动 Chrome 的模块,现在已经被提取出来单独使用。[chrome-launcher NPM 模块][21] 可以找到 Chrome 的安装位置,设置调试实例,启动浏览器和在程序运行完之后将其杀死。它最好的一点是可以跨平台工作,感谢 Node!
|
||||
|
||||
默认情况下,**`chrome-launcher` 会尝试启动 Chrome Canary**(如果已经安装),但是你也可以更改它,手动选择使用的 Chrome 版本。要想使用它,首先从 npm 安装:
|
||||
默认情况下,**chrome-launcher 会尝试启动 Chrome Canary**(如果已经安装),但是你也可以更改它,手动选择使用的 Chrome 版本。要想使用它,首先从 npm 安装:
|
||||
|
||||
```
|
||||
yarn add chrome-launcher
|
||||
```
|
||||
|
||||
**例子** - 使用 `chrome-launcher` 启动 Headless
|
||||
**例子** - 使用 `chrome-launcher` 启动 Headless Chrome:
|
||||
|
||||
```
|
||||
const chromeLauncher = require('chrome-launcher');
|
||||
@ -214,13 +214,13 @@ launchChrome().then(chrome => {
|
||||
});
|
||||
```
|
||||
|
||||
运行这个脚本没有做太多的事情,但你应该能在任务管理器中看到一个 Chrome 的实例,它加载了页面 `about:blank`。记住,它不会有任何的浏览器界面,我们是 headless 的。
|
||||
运行这个脚本没有做太多的事情,但你应该能在任务管理器中看到启动了一个 Chrome 的实例,它加载了页面 `about:blank`。记住,它不会有任何的浏览器界面,我们是无需显示的。
|
||||
|
||||
为了控制浏览器,我们需要 DevTools 协议!
|
||||
|
||||
#### 检索有关页面的信息
|
||||
|
||||
<aside class="warning" style="box-sizing: inherit; font-size: 14px; margin-top: 16px; margin-bottom: 16px; padding: 12px 24px 12px 60px; background: rgb(251, 233, 231); color: rgb(213, 0, 0);">**警告:** DevTools 协议可以做一些有趣的事情,但是起初可能有点令人生畏。我建议先花点时间浏览 [DevTools 协议查看器][3]。然后,转到 `chrome-remote-interface` 的 API 文档,看看它是如何包装原始协议的。</aside>
|
||||
> **警告:** DevTools 协议可以做一些有趣的事情,但是起初可能有点令人生畏。我建议先花点时间浏览 [DevTools 协议查看器][3]。然后,转到 `chrome-remote-interface` 的 API 文档,看看它是如何包装原始协议的。
|
||||
|
||||
我们来安装该库:
|
||||
|
||||
@ -228,9 +228,7 @@ launchChrome().then(chrome => {
|
||||
yarn add chrome-remote-interface
|
||||
```
|
||||
|
||||
##### 示例
|
||||
|
||||
**例子** - 打印用户代理
|
||||
**例子** - 打印用户代理:
|
||||
|
||||
```
|
||||
const CDP = require('chrome-remote-interface');
|
||||
@ -243,9 +241,9 @@ launchChrome().then(async chrome => {
|
||||
});
|
||||
```
|
||||
|
||||
结果是类似这样的东西:`HeadlessChrome/60.0.3082.0`
|
||||
结果是类似这样的东西:`HeadlessChrome/60.0.3082.0`。
|
||||
|
||||
**例子** - 检查网站是否有 [Web 应用程序清单][22]
|
||||
**例子** - 检查网站是否有 [Web 应用程序清单][22]:
|
||||
|
||||
```
|
||||
const CDP = require('chrome-remote-interface');
|
||||
@ -282,7 +280,7 @@ Page.loadEventFired(async () => {
|
||||
})();
|
||||
```
|
||||
|
||||
**例子** - 使用 DOM API 提取页面 `<title>`
|
||||
**例子** - 使用 DOM API 提取页面的 `<title>`:
|
||||
|
||||
```
|
||||
const CDP = require('chrome-remote-interface');
|
||||
@ -318,11 +316,11 @@ Page.loadEventFired(async () => {
|
||||
|
||||
### 使用 Selenium、WebDriver 和 ChromeDriver
|
||||
|
||||
现在,Selenium 开启了 Chrome 的完整实例。换句话说,这是一个自动化的解决方案,但不是完全 headless 的。但是,Selenium 只需要进行小小的配置即可运行 headless Chrome。如果你想要关于如何自己设置的完整说明,我建议你[使用 Headless Chrome 来运行 Selenium][23],你可以从下面的一些示例开始。
|
||||
现在,Selenium 开启了 Chrome 的完整实例。换句话说,这是一个自动化的解决方案,但不是完全无需显示的。但是,Selenium 只需要进行小小的配置即可运行 Headless Chrome。如果你想要关于如何自己设置的完整说明,我建议你阅读“[使用 Headless Chrome 来运行 Selenium][23]”,不过你可以从下面的一些示例开始。
|
||||
|
||||
#### 使用 ChromeDriver
|
||||
|
||||
[ChromeDriver][24] 2.3.0 支持 Chrome 59 及更新版本,可与 headless Chrome 配合使用。在某些情况下,你可能需要 Chrome 60 以解决 bug。例如,Chrome 59 中屏幕截图已知存在问题。
|
||||
[ChromeDriver][24] 2.3.0 支持 Chrome 59 及更新版本,可与 Headless Chrome 配合使用。在某些情况下,你可能需要等到 Chrome 60 以解决 bug。例如,Chrome 59 中屏幕截图已知存在问题。
|
||||
|
||||
安装:
|
||||
|
||||
@ -370,7 +368,7 @@ driver.quit();
|
||||
|
||||
#### 使用 WebDriverIO
|
||||
|
||||
[WebDriverIO][25] 是一个在 Selenium WebDrive 上构建的更高层次的 API.
|
||||
[WebDriverIO][25] 是一个在 Selenium WebDriver 上构建的更高层次的 API。
|
||||
|
||||
安装:
|
||||
|
||||
@ -378,7 +376,7 @@ driver.quit();
|
||||
yarn add webdriverio chromedriver
|
||||
```
|
||||
|
||||
例子:过滤 chromestatus.com 上的 CSS 功能
|
||||
例子:过滤 chromestatus.com 上的 CSS 功能:
|
||||
|
||||
```
|
||||
const webdriverio = require('webdriverio');
|
||||
@ -446,9 +444,7 @@ browser.end();
|
||||
工具
|
||||
|
||||
* [chrome-remote-interface][5] - 基于 DevTools 协议的 node 模块
|
||||
|
||||
* [Lighthouse][6] - 测试 Web 应用程序质量的自动化工具;大量使用了协议
|
||||
|
||||
* [chrome-launcher][7] - 用于启动 Chrome 的 node 模块,可以自动化
|
||||
|
||||
样例
|
||||
@ -465,7 +461,7 @@ browser.end();
|
||||
|
||||
不。Headless Chrome 不使用窗口,所以不需要像 Xvfb 这样的显示服务器。没有它你也可以愉快地运行你的自动化测试。
|
||||
|
||||
什么是 Xvfb?Xvfb 是一个用于类 Unix 系统的内存显示服务器,可以让你运行图形应用程序(如 Chrome),而无需附加的物理显示。许多人使用 Xvfb 运行早期版本的 Chrome 进行 “headless” 测试。
|
||||
什么是 Xvfb?Xvfb 是一个用于类 Unix 系统的运行于内存之内的显示服务器,可以让你运行图形应用程序(如 Chrome),而无需附加的物理显示器。许多人使用 Xvfb 运行早期版本的 Chrome 进行 “headless” 测试。
|
||||
|
||||
**如何创建一个运行 Headless Chrome 的 Docker 容器?**
|
||||
|
||||
@ -477,7 +473,7 @@ browser.end();
|
||||
|
||||
**它和 PhantomJS 有什么关系?**
|
||||
|
||||
Headless Chrome 和 [PhantomJS][31] 是类似的工具。它们都可以用来在 headless 环境中进行自动化测试。两者的主要不同在于 Phantom 使用了一个较老版本的 WebKit 作为它的渲染引擎,而 Headless Chrome 使用了最新版本的 Blink。
|
||||
Headless Chrome 和 [PhantomJS][31] 是类似的工具。它们都可以用来在无需显示的环境中进行自动化测试。两者的主要不同在于 Phantom 使用了一个较老版本的 WebKit 作为它的渲染引擎,而 Headless Chrome 使用了最新版本的 Blink。
|
||||
|
||||
目前,Phantom 提供了比 [DevTools protocol][32] 更高层次的 API。
|
||||
|
||||
@ -485,7 +481,7 @@ Headless Chrome 和 [PhantomJS][31] 是类似的工具。它们都可以用来
|
||||
|
||||
对于 Headless Chrome 的 bug,请提交到 [crbug.com][33]。
|
||||
|
||||
对于 DevTools 洗衣的 bug,请提交到 [github.com/ChromeDevTools/devtools-protocol][34]。
|
||||
对于 DevTools 协议的 bug,请提交到 [github.com/ChromeDevTools/devtools-protocol][34]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -497,9 +493,9 @@ Headless Chrome 和 [PhantomJS][31] 是类似的工具。它们都可以用来
|
||||
|
||||
via: https://developers.google.com/web/updates/2017/04/headless-chrome
|
||||
|
||||
作者:[Eric Bidelman ][a]
|
||||
作者:[Eric Bidelman][a]
|
||||
译者:[firmianay](https://github.com/firmianay)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,22 +1,21 @@
|
||||
使用 Headless Chrome 进行自动化测试
|
||||
============================================================
|
||||
|
||||
|
||||
如果你想使用 Headless Chrome 进行自动化测试,就看下去吧!这篇文章将让你完全使用 Karma 作为 runner,并且使用 Mocha+Chai 进行 authoring 测试。
|
||||
如果你想使用 Headless Chrome 进行自动化测试,那么就往下看吧!这篇文章将帮助你搭建起完全使用 Karma 作为<ruby>运行器<rt>runner</rt></ruby>、并使用 Mocha+Chai 来编撰测试的环境。
|
||||
|
||||
**这些东西是什么?**
|
||||
|
||||
Karma, Mocha, Chai, Headless Chrome, oh my!
|
||||
Karma、Mocha、Chai、Headless Chrome,哦,我的天哪!
|
||||
|
||||
[Karma][2] 是一个测试工具,可以和所有最流行的测试框架([Jasmine][3], [Mocha][4], [QUnit][5])配合使用。
|
||||
[Karma][2] 是一个测试工具,可以和所有最流行的测试框架([Jasmine][3]、[Mocha][4]、 [QUnit][5])配合使用。
|
||||
|
||||
[Chai][6] 是一个断言库,可以与 Node 和浏览器一起使用。这里我们需要后者。
|
||||
|
||||
[Headless Chrome][7] 是一种在没有浏览器用户界面的 headless 环境中运行 Chrome 浏览器的方法。使用 Headless Chrome(而不是直接在 Node 中测试) 的一个好处是 JavaScript 测试将在与你的网站用户相同的环境中执行。Headless Chrome 为你提供了真正的浏览器环境,却没有运行完整版本的 Chrome 一样的内存开销。
|
||||
[Headless Chrome][7] 是一种在没有浏览器用户界面的无需显示环境中运行 Chrome 浏览器的方法。使用 Headless Chrome(而不是直接在 Node 中测试) 的一个好处是 JavaScript 测试将在与你的网站用户相同的环境中执行。Headless Chrome 为你提供了真正的浏览器环境,却没有运行完整版本的 Chrome 一样的内存开销。
|
||||
|
||||
### Setup
|
||||
### 设置
|
||||
|
||||
### 安装
|
||||
#### 安装
|
||||
|
||||
使用 `yarn` 安装 Karma、相关插件和测试用例:
|
||||
|
||||
@ -34,11 +33,11 @@ npm i --save-dev mocha chai
|
||||
|
||||
在这篇文章中我使用 [Mocha][8] 和 [Chai][9],但是你也可以选择自己最喜欢的在浏览器中工作的断言库。
|
||||
|
||||
### 配置 Karma
|
||||
#### 配置 Karma
|
||||
|
||||
创建一个使用 `ChromeHeadless` 启动器的 `karma.config.js` 文件。
|
||||
|
||||
**karma.conf.js**
|
||||
**karma.conf.js**:
|
||||
|
||||
```
|
||||
module.exports = function(config) {
|
||||
@ -57,13 +56,13 @@ module.exports = function(config) {
|
||||
}
|
||||
```
|
||||
|
||||
<aside class="note" style="box-sizing: inherit; font-size: 14px; margin-top: 16px; margin-bottom: 16px; padding: 12px 24px 12px 60px; background: rgb(225, 245, 254); color: rgb(2, 136, 209);">**注意:** 运行 `./node_modules/karma/bin/ init karma.conf.js` 生成 Karma 的配置文件。</aside>
|
||||
> **注意:** 运行 `./node_modules/karma/bin/karma init karma.conf.js` 生成 Karma 的配置文件。
|
||||
|
||||
### 写一个测试
|
||||
|
||||
在 `/test/test.js` 中写一个测试:
|
||||
|
||||
**/test/test.js**
|
||||
**/test/test.js**:
|
||||
|
||||
```
|
||||
describe('Array', () => {
|
||||
@ -79,7 +78,7 @@ describe('Array', () => {
|
||||
|
||||
在我们设置好用于运行 Karma 的 `package.json` 中添加一个测试脚本。
|
||||
|
||||
**package.json**
|
||||
**package.json**:
|
||||
|
||||
```
|
||||
"scripts": {
|
||||
@ -97,7 +96,7 @@ describe('Array', () => {
|
||||
|
||||
但是,有时你可能希望将自定义的标志传递给 Chrome 或更改启动器使用的远程调试端口。要做到这一点,可以通过创建一个 `customLaunchers` 字段来扩展基础的 `ChromeHeadless` 启动器:
|
||||
|
||||
**karma.conf.js**
|
||||
**karma.conf.js**:
|
||||
|
||||
```
|
||||
module.exports = function(config) {
|
||||
@ -116,13 +115,13 @@ module.exports = function(config) {
|
||||
};
|
||||
```
|
||||
|
||||
### 在 Travis CI 上运行它
|
||||
### 完全在 Travis CI 上运行它
|
||||
|
||||
在 Headless Chrome 中配置 Karma 运行测试是很困难的。而在 Travis 中持续整合就只有几种!
|
||||
在 Headless Chrome 中配置 Karma 运行测试是比较困难的部分。而要在 Travis 的持续集成中运行它,只需要再加几行配置!
|
||||
|
||||
要在 Travis 中运行测试,请使用 `dist: trusty` 并安装稳定版 Chrome 插件:
|
||||
|
||||
**.travis.yml**
|
||||
**.travis.yml**:
|
||||
|
||||
```
|
||||
language: node_js
|
||||
@ -152,9 +151,9 @@ script:
|
||||
|
||||
via: https://developers.google.com/web/updates/2017/06/headless-karma-mocha-chai
|
||||
|
||||
作者:[ Eric Bidelman][a]
|
||||
作者:[Eric Bidelman][a]
|
||||
译者:[firmianay](https://github.com/firmianay)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,86 @@
|
||||
创建更好的灾难恢复计划
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
> Tanya Reilly 的五个问题:相互依赖的服务如何使恢复更加困难,为什么有意并预先管理依赖是个好主意。
|
||||
|
||||
我最近请 Google 的网站可靠性工程师 Tanya Reilly 分享了她关于如何制定更好的灾难恢复计划的想法。Tanya 将在 10 月 1 日到 4 日在纽约举行的 O'Reilly Velocity Conference 上发表一个题为《[你有没有试着把它关闭之后再打开?][9]》的演讲。
|
||||
|
||||
### 1、 在计划备份系统策略时,人们最常犯的错误是什么?
|
||||
|
||||
经典的一条是“**你不需要备份策略,你需要一个恢复策略**”。如果你有备份,但你尚未测试恢复它们,那么你没有真正的备份。测试不仅仅意味着知道你可以获得数据,还意味着知道如何把它放回数据库,如何处理增量更改,甚至如果你需要的话,如何重新安装整个系统。这意味着确保你的恢复路径不依赖于与数据同时丢失的某些系统。
|
||||
|
||||
但测试恢复是枯燥的。这是人们在忙碌时会偷工减料的那类事情。这值得花时间使其尽可能简单、无痛、自动化,永远不要靠任何人的意志力!同时,你必须确保有关人员知道该怎么做,所以定期进行大规模的灾难测试是很好的。恢复演练是个好方法,可以找出该过程的文档是否缺失或过期,或者你是否没有足够的资源(磁盘、网络等)来传输和重新插入数据。
|
||||
|
||||
### 2、 创建<ruby>灾难恢复<rt>disaster recovery</rt></ruby> (DR) 计划最常见的挑战是什么?
|
||||
|
||||
我认为很多 DR 是一种事后的想法:“我们有这个很棒的系统,我们的业务依赖它……我猜我们应该为它做 DR?”而且到那时,系统会非常复杂,充满相互依赖关系,很难复制。
|
||||
|
||||
第一次安装的东西,它通常是由人手动调整才正常工作的,有时那是个具体特定的版本。当你构建_第二_个时,很难确定它是完全一样的。即使在具有严格的配置管理的站点中,你也可能丢了某些东西,或者过期了。
|
||||
|
||||
例如,如果你已经失去对解密密钥的访问权限,那么加密备份就没有太多用处。而且,任何只在灾难中才会用到的部分,都可能从你上次检查它们之后就已经坏掉了。确保你已经涵盖了所有东西的唯一方法,是认真地做一次故障切换。当你准备好了,就计划一下你的灾难(演练)吧!
|
||||
|
||||
如果你可以设计系统,以使灾难恢复模式成为正常运行的一部分,那么情况会更好。如果你的服务从一开始就被设计为可复制的,添加更多的副本就是一个常规的操作并可能是自动化的。没有新的方法,这只是一个容量问题。但是,系统中仍然存在一些只能在一个或两个地方运行的组件。偶然计划中的假灾难能够很好地将它们暴露出来。
|
||||
|
||||
顺便说一句,那些被遗忘的组件可能包括仅在一个人的大脑中的信息,所以如果你自己发现说:“我们不能在 X 休假回来前进行 DR 故障切换测试”,那么那个人是一个危险的单点失败。
|
||||
|
||||
仅在灾难中使用的部分系统需要最多的测试,否则在需要时会失败。这个部分越少越安全,且辛苦的测试工作也越少。
|
||||
|
||||
### 3、 为什么服务相互依赖使得灾难恢复更加困难?
|
||||
|
||||
如果你只有一个二进制文件,那么恢复它是比较容易的:你做个二进制备份就行。但是我们越来越多地将通用功能分解成单独的服务。微服务意味着我们有更多的灵活性,也更少地重新发明轮子:如果我们需要一个后端做某些事情,并且已经有一个现成的,那么很好,我们就可以直接使用它。但同时也需要对依赖关系多加留意,因为它们很快就会变得纠缠不清。
|
||||
|
||||
你可能知道你直接使用的后端,但是你可能不会注意到有新的后端添加到你使用的库中。你可能依赖于某个东西,它也间接依赖于你。在依赖中断之后,你可能会遇到一个死锁:两个系统都不能启动,直到另一个运行并提供一些功能。这是一个困难的恢复情况!
|
||||
|
||||
你甚至可能最终遇到间接依赖于自身的东西,例如你需要配置某台设备来启动网络,但在网络关闭时又无法访问该设备。人们通常会提前考虑这些循环依赖,并准备某种后备计划,但是这些后备本质上是很少被走到的路径:它们只在极端情况下使用,并且会以不同的方式运用你的系统、进程或代码。这意味着,它们很可能隐藏着一个直到你真正、真正需要它们工作时才会暴露出来的问题。
|
||||
|
||||
### 4、 你建议人们在感觉需要之前就开始有意管理其依赖关系,以防止潜在的灾难性系统故障。为什么这很重要,你有什么建议有效地做到这一点?
|
||||
|
||||
管理你的依赖关系对于确保你可以从灾难中恢复至关重要。它使操作系统更容易。如果你的依赖不可靠,那么你就不可靠,所以你需要知道它们是什么。
|
||||
|
||||
虽然在它们变得混乱后也可以开始管理依赖关系,但是如果你早点开始,它会变得更容易一些。你可以设置使用各种服务策略——例如,你必须在堆栈中的这一层依赖于这组系统。你可以通过使其成为设计文件审查的常规部分,引入考虑依赖关系的习惯。但请记住,依赖关系列表将很快变得陈旧。如果你有程序化的发现依赖关系的方式,甚至强制实施依赖,这是最好的。 [我的 Velocity 谈话][10]涵盖了我们如何做到这一点。
|
||||
|
||||
早期开始的另一个优点是,你可以将服务拆分为垂直“层”,每个层中的功能必须能够在下一个层启动之前完全在线。所以,例如,你可以说网络必须能够完全启动而不借助任何其他服务。然后说,你的存储系统应该仅仅依赖于网络,程序后端应该仅仅依赖于网络和存储,等等。不同的层次对于不同的架构是有意义的。
|
||||
|
||||
如果你提前计划,新服务更容易选择依赖关系。每个服务应该只依赖堆栈中较低的服务。你仍然可能出现循环,即同一层次的服务彼此依赖,但是这些循环可以被更紧密地限制在局部,也更容易逐个处理。
|
||||
|
||||
### 5、 你对 Velocity NY 的其他部分感兴趣么?
|
||||
|
||||
我星期二和星期三的日程都已经排满了!正如你可能已经猜到的那样,我非常关心大型相互依赖的系统的可管理性,所以我期待听到 [Carin Meier 关于管理系统复杂性的想法][11]、[Sarah Wells 的微服务][12]和 [Baron 的可观察性][13] 的谈话。我也非常想听 [Jon Moore 关于 Comcast 如何从年度发布到每天发布的故事][14]。作为一个前系统管理员,我很期待听到 [Bryan Liles 对这个职位走向的看法][15]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Nikki McDonald 是 O'Reilly Media,Inc. 的内容总监。她住在密歇根州的安娜堡市。
|
||||
|
||||
Tanya Reilly 自 2005 年以来一直是 Google 的系统管理员和站点可靠性工程师,致力于分布式锁、负载均衡和引导等底层基础架构。在加入 Google 之前,她是爱尔兰最大的 ISP eircom.net 的系统管理员,在这之前她担当了一个小型软件公司的整个 IT 部门。
|
||||
|
||||
----------------------------
|
||||
|
||||
via: https://www.oreilly.com/ideas/creating-better-disaster-recovery-plans
|
||||
|
||||
作者:[Nikki McDonald][a], [Tanya Reilly][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.oreilly.com/people/nikki-mcdonald
|
||||
[b]:https://www.oreilly.com/people/5c97a-tanya-reilly
|
||||
[1]:https://pixabay.com/en/crane-baukran-load-crane-crane-arm-2436704/
|
||||
[2]:https://conferences.oreilly.com/velocity/vl-ny?intcmp=il-webops-confreg-reg-vlny17_new_site_right_rail_cta
|
||||
[3]:https://www.oreilly.com/people/nikki-mcdonald
|
||||
[4]:https://www.oreilly.com/people/5c97a-tanya-reilly
|
||||
[5]:https://conferences.oreilly.com/velocity/vl-ny?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_text_cta
|
||||
[6]:https://www.oreilly.com/ideas/creating-better-disaster-recovery-plans
|
||||
[7]:https://conferences.oreilly.com/velocity/vl-ny?intcmp=il-webops-confreg-reg-vlny17_new_site_right_rail_cta
|
||||
[8]:https://conferences.oreilly.com/velocity/vl-ny?intcmp=il-webops-confreg-reg-vlny17_new_site_right_rail_cta
|
||||
[9]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61400?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[10]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61400?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[11]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62779?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[12]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61597?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[13]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61630?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[14]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62733?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[15]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62893?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
184
published/20170821 Getting started with ImageMagick.md
Normal file
184
published/20170821 Getting started with ImageMagick.md
Normal file
@ -0,0 +1,184 @@
|
||||
ImageMagick 入门:使用命令行来编辑图片
|
||||
============================================================
|
||||
|
||||
> 了解使用此轻量级图像编辑器查看和修改图像的常见方法。
|
||||
|
||||

|
||||
|
||||
|
||||
在最近一篇关于[轻量级图像查看器][8]的文章中,作者 Scott Nesbitt 提到了 `display`,它是 [ImageMagick][9] 中的一个组件。ImageMagick 不仅仅是一个图像查看器,它还提供了大量的图像编辑工具和选项。本教程将详细介绍如何在 ImageMagick 中使用 `display` 命令和其他命令行工具。
|
||||
|
||||
现在有许多优秀的图像编辑器可用,你可能会想知道为什么有人会选择一个非 GUI 的、基于命令行的程序,如 ImageMagick。一方面,它非常可靠。更大的好处是,它可以让你建立起以特定方式批量编辑大量图像的流程。
|
||||
|
||||
这篇对于常见的 ImageMagick 命令的介绍应该让你入门。
|
||||
|
||||
### display 命令
|
||||
|
||||
让我们从 Scott 提到的命令开始:`display`。假设你有一个目录,其中有很多想要查看的图像。使用以下命令开始 `display`:
|
||||
|
||||
```
|
||||
cd Pictures
|
||||
display *.JPG
|
||||
```
|
||||
|
||||
这将按照字母数字顺序顺序加载你的 JPG 文件,每张放在一个简单的窗口中。左键单击图像可以打开一个简单的独立菜单(ImageMagick 中唯一的 GUI 功能)。
|
||||
|
||||

|
||||
|
||||
你可以在 **display** 菜单中找到以下内容:
|
||||
|
||||
* **File** 包含选项 Open、Next、Former、Select、Save、Print、Delete、New、Visual Directory 和 Quit。 _Select_ 来选择要显示的特定文件,_Visual Directory_ 显示当前工作目录中的所有文件(而不仅仅是图像)。如果要滚动显示所有选定的图像,你可以使用 _Next_ 和 _Former_,但使用键盘快捷键(下一张图像用空格键,上一张图像用退格)更容易。
|
||||
* **Edit** 提供 Undo、Redo、Cut、Copy 和 Paste,它们只是辅助命令进行更具体的编辑过程。 当你进行不同的编辑功能看看它们做什么时 _Undo_ 特别有用。
|
||||
* **View** 有 Half Size、Original Size、Double Size、Resize、Apply、Refresh 和 Restore。这些大多是不用说明的,除非你在应用其中之一后保存图像,否则图像文件不会更改。_Resize_ 会打开一个对话框,以像素为单位,带有或者不带尺寸限制,或者是百分比指定图片大小。我不知道 _Apply_ 会做什么。
|
||||
* **Transform** 显示 Crop、Chop、Flop、Flip、Rotate Right、Rotate Left、Rotate、Shear、Roll 和 Trim Edges。_Chop_ 使用点击拖动操作剪切图像的垂直或水平部分,将边缘粘贴在一起。了解这些功能如何工作的最佳方法是操作它们,而不是看看。
|
||||
* **Enhance** 提供 Hue、Saturation、Brightness、Gamma、Spiff、Dull、Contrast Stretch、Sigmoidal Contrast、Normalize、Equalize、Negate、Grayscale、Map 和 Quantize。这些是用于颜色和调整亮度和对比度的操作。
|
||||
* **Effects** 有 Despeckle、Emboss、Reduce Noise、Add Noise、Sharpen、Blur、Threshold、Edge Detect、Spread、Shade、Raise 和 Segment。这些是相当标准的图像编辑效果。
|
||||
* **F/X** 选项有 Solarize、Sepia Tone、Swirl、Implode、Vignette、Wave、Oil Paint 和 Charcoal Draw,在图像编辑器中也是非常常见的效果。
|
||||
* **Image Edit** 包含 Annotate、Draw、Color、Matte、Composite、Add Border、Add Frame、Comment、Launch 和 Region of Interest。_Launch_ 将在 GIMP 中打开当前图像(至少在我的 Fedora 中是这样)。 _Region of Interest_ 允许你选择一个区域来应用编辑。按下 Esc 取消选择该区域。
|
||||
* **Miscellany** 提供 Image Info、Zoom Image、Show Preview、Show Histogram、Show Matte、Background、Slide Show 和 Preferences。 _Show Preview_ 似乎很有趣,但我一直没能让它正常工作。
|
||||
* **Help** 有 Overview、Browse Documentation 和 About Display。 _Overview_ 提供了大量关于 display 的基本信息,并且包含大量内置的键盘快捷键,用于各种命令和操作。在我的 Fedora 中,_Browse Documentation_ 没有作用。
|
||||
|
||||
虽然 `display` 的 GUI 界面提供了一个称职的图像编辑器,但 ImageMagick 还提供了 89 个命令行选项,其中许多与上述菜单项相对应。例如,如果我显示的数码相片目录中的图像大于我的屏幕尺寸,我不用在显示后单独调整大小,我可以指定:
|
||||
|
||||
```
|
||||
display -resize 50% *.JPG
|
||||
```
|
||||
|
||||
上面菜单中的许多操作都可以通过在命令行中添加一个选项来完成。但是还有其他的选项在菜单中没有,包括 `-monochrome`,将图像转换为黑白(不是灰度),还有 `-colors`,你可以指定在图像中使用多少种颜色。例如,尝试这些:
|
||||
|
||||
```
|
||||
display -resize 50% -monochrome *.JPG
|
||||
```
|
||||
|
||||
```
|
||||
display -resize 50% -colors 8 *.JPG
|
||||
```
|
||||
|
||||
这些操作会创建有趣的图像。试试增强颜色或进行其他编辑后减少颜色。记住,除非你保存并覆盖它们,否则原始文件保持不变。
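例如,下面是一个假设性的组合:先用 `-modulate` 提高饱和度(第二个参数控制饱和度百分比),再把颜色数减少到 8 种。输入文件沿用了前文的示例图片,输出文件名是我随意取的:

```
convert DSC_0032.JPG -modulate 100,150 -colors 8 saturated8_demo.jpg
```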
|
||||
|
||||
### convert 命令
|
||||
|
||||
`convert` 命令有 237 个选项 - 是的, 237 个! - 它提供了你可以做的各种各样的事情(其中一些 `display` 也可以做)。我只会覆盖其中的几个,主要是图像操作。你可以用 `convert` 做的两件简单的事情是:
|
||||
|
||||
```
|
||||
convert DSC_0001.JPG dsc0001.png
|
||||
```
|
||||
|
||||
```
|
||||
convert *.bmp *.png
|
||||
```
|
||||
|
||||
第一个命令将单个文件(DSC_0001)从 JPG 转换为 PNG 格式,而不更改原始文件。第二个将对目录中的所有 BMP 图像执行此操作。
|
||||
|
||||
如果要查看 ImageMagick 可以使用的格式,请输入:
|
||||
|
||||
```
|
||||
identify -list format
|
||||
```
|
||||
|
||||
我们来看几个用 `convert` 命令来处理图像的有趣方法。以下是此命令的一般格式:
|
||||
|
||||
```
|
||||
convert inputfilename [options] outputfilename
|
||||
```
|
||||
|
||||
你有多个选项,它们按照从左到右排列的顺序完成。
|
||||
|
||||
以下是几个简单的选项:
|
||||
|
||||
```
|
||||
convert monochrome_source.jpg -monochrome monochrome_example.jpg
|
||||
```
|
||||
|
||||
|
||||

|
||||
|
||||
```
|
||||
convert DSC_0008.jpg -charcoal 1.2 charcoal_example.jpg
|
||||
```
|
||||
|
||||

|
||||
|
||||
`-monochrome` 选项没有关联的设置,但 `-charcoal` 变量需要一个相关因子。根据我的经验,它需要一个小的数字(甚至小于 1)来实现类似于炭笔绘画的东西,否则你会得到很大的黑色斑点。即使如此,图像中的尖锐边缘也是非常明显的,与炭笔绘画不同。
|
||||
|
||||
现在来看看这些:
|
||||
|
||||
```
|
||||
convert DSC_0032.JPG -edge 3 edge_demo.jpg
|
||||
```
|
||||
|
||||
```
|
||||
convert DSC_0032.JPG -colors 4 reduced4_demo.jpg
|
||||
```
|
||||
|
||||
```
|
||||
convert DSC_0032.JPG -colors 4 -edge 3 reduced+edge_demo.jpg
|
||||
```
|
||||
|
||||

|
||||
|
||||
原始图像位于左上方。在第一个命令中,我使用了一个 `-edge` 选项,设置为 3(见右上角的图像) - 对于我的喜好而言小于它的数字都太精细了。在第二个命令(左下角的图像)中,我们将颜色的数量减少到了 4 个,与原来没有什么不同。但是看看当我们在第三个命令中组合这两个时,会发生什么(右下角的图像)!也许这有点大胆,但谁能预期到从原始图像或任何一个选项变成这个结果?
|
||||
|
||||
`-canny` 选项提供了另外一个惊喜。这是另一种边缘检测器,称为“多阶算法”。单独使用 `-canny` 可以产生基本黑色的图像和一些白线。我后面跟着一个 `-negate` 选项:
|
||||
|
||||
```
|
||||
convert DSC_0049.jpg -canny 0x1 -negate canny_egret.jpg
|
||||
convert DSC_0023.jpg -canny 0x1 -negate canny_ship.jpg
|
||||
```
|
||||
|
||||

|
||||
|
||||
这有点极简主义,但我认为它类似于一种笔墨绘画,与原始照片有相当显著的差异。它并不能用于所有图片。一般来说,它对有锐利线条的图像效果最好。不是焦点的元素可能会消失。注意白鹭图片中的背景沙滩没有显示,因为它是模糊的。同样注意下船舶图片,虽然大多数边缘显示得非常好,因为没有颜色,我们失去了图片的整体形象,所以也许这可以作为一些数字着色,甚至在印后着色的基础。
|
||||
|
||||
### montage 命令
|
||||
|
||||
最后,我想谈一下 `montage` (蒙太奇)命令。我已经在上面展示了这个例子,我将单个图像组合成复合图片。
|
||||
|
||||
这是我如何生成炭笔的例子(请注意,它们都在一行):
|
||||
|
||||
```
|
||||
montage -label %f DSC_0008.jpg charcoal_example.jpg -geometry +10+10
|
||||
-resize 25% -shadow -title 'charcoal demo' charcoal_demo.jpg
|
||||
```
|
||||
|
||||
`-label` 选项会在每个图像下方标记它的文件名(`%f`)。不用 `-geometry` 选项,所有的图像将是缩略图大小(120 像素宽),`+10+10` 负责边框大小。接下来,我调整了整个最终组合的大小(`-resize 25%`),并添加了一个阴影(没有设置,因此是默认值),最后为这次 montage 操作创建了一个标题(`-title`)。
|
||||
|
||||
你可以将所有图像名称放在最后,最后一个图像的名称将是 `montage` 操作所保存的文件名。这可用于为命令及其所有选项创建别名,然后我可以简单地键入该别名、输入适当的文件名即可。我偶尔会这么做来减少 `montage` 操作需要输入的命令长度。
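比如,可以在 `~/.bashrc` 中定义一个简单的 shell 函数来省去重复输入。下面只是一种示意性的写法,并非原文作者实际使用的别名:

```
# 用法:charcoal_montage 原图 炭笔图 输出文件
charcoal_montage() {
    montage -label %f "$1" "$2" -geometry +10+10 \
        -resize 25% -shadow -title 'charcoal demo' "$3"
}
```

之后只需运行 `charcoal_montage DSC_0008.jpg charcoal_example.jpg charcoal_demo.jpg` 即可得到同样的结果。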
|
||||
|
||||
在 `-canny` 的例子中,我对 4 张图像进行了蒙太奇操作。我添加了 `-tile` 选项,确切地说是 `-tile 2x`,它创建了有两列的蒙太奇。我可以指定一个 `matrix`、`-tile 2x2` 或 `-tile x2` 来产生相同的结果。
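下面是一个假设性的完整命令,示意那次 2x2 蒙太奇大致是如何拼出来的。文件名取自前面 `-canny` 例子中出现过的图片,具体选项组合是我补充的,并非原文命令:

```
montage -label %f DSC_0049.jpg canny_egret.jpg DSC_0023.jpg canny_ship.jpg \
    -tile 2x2 -geometry +10+10 -resize 25% -shadow canny_montage_demo.jpg
```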
|
||||
|
||||
ImageMagick 还有更多可以了解,所以我打算写更多关于它的文章,甚至可能使用 [Perl][10] 脚本运行 ImageMagick 命令。ImageMagick 具有丰富的[文档][11],尽管该网站在示例或者显示结果上还不足,我认为最好的学习方式是通过实验和更改各种设置和选项来学习。
|
||||
|
||||
(题图: opensource.com)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Greg Pittman - Greg 是肯塔基州路易斯维尔的一名退休的神经科医生,对计算机和程序设计有着长期的兴趣,从 1960 年代的 Fortran IV 开始。当 Linux 和开源软件相继出现时,他开始学习更多,并最终做出贡献。他是 Scribus 团队的成员。
|
||||
|
||||
---------------------
|
||||
|
||||
via: https://opensource.com/article/17/8/imagemagick
|
||||
|
||||
作者:[Greg Pittman][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/greg-p
|
||||
[1]:https://opensource.com/file/367401
|
||||
[2]:https://opensource.com/file/367391
|
||||
[3]:https://opensource.com/file/367396
|
||||
[4]:https://opensource.com/file/367381
|
||||
[5]:https://opensource.com/file/367406
|
||||
[6]:https://opensource.com/article/17/8/imagemagick?rate=W2W3j4nu4L14gOClu1RhT7GOMDS31pUdyw-dsgFNqYI
|
||||
[7]:https://opensource.com/user/30666/feed
|
||||
[8]:https://opensource.com/article/17/7/4-lightweight-image-viewers-linux-desktop
|
||||
[9]:https://www.imagemagick.org/script/index.php
|
||||
[10]:https://opensource.com/sitewide-search?search_api_views_fulltext=perl
|
||||
[11]:https://imagemagick.org/script/index.php
|
||||
[12]:https://opensource.com/users/greg-p
|
||||
[13]:https://opensource.com/users/greg-p
|
||||
[14]:https://opensource.com/article/17/8/imagemagick#comments
|
173
published/20170822 Running WordPress in a Kubernetes Cluster.md
Normal file
173
published/20170822 Running WordPress in a Kubernetes Cluster.md
Normal file
@ -0,0 +1,173 @@
|
||||
在 Kubernetes 集群中运行 WordPress
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
作为一名开发者,我会尝试留意那些我可能不会每天使用的技术的进步。了解这些技术至关重要,因为它们可能会间接影响到我的工作。比如[由 Docker 推动][8]的、近期正在兴起的容器化技术,可用于大规模地托管 Web 应用。从技术层面来讲,我并不是一个 DevOps,但当我每天构建 Web 应用时,多去留意这些技术如何发展,会对我有所裨益。
|
||||
|
||||
这种进步的一个绝佳的例子,是近一段时间高速发展的容器编排平台。它允许你轻松地部署、管理容器化应用,并对它们的规模进行调整。目前看来,容器编排的流行工具有 [Kubernetes (来自 Google)][9],[Docker Swarm][10] 和 [Apache Mesos][11]。如果你想较好的了解上面那些技术以及它们的区别,我推荐你看一下[这篇文章][12]。
|
||||
|
||||
在这篇文章中,我们将会从一些简单的操作开始,了解一下 Kubernetes 平台,看看如何将一个 WordPress 网站部署在本地机器上的一个单节点集群中。
|
||||
|
||||
### 安装 Kubernetes
|
||||
|
||||
在 [Kubernetes 文档][13]中有一个很好的互动教程,涵盖了很多东西。但出于本文的目的,我只会介绍在 MacOS 中 Kubernetes 的安装和使用。
|
||||
|
||||
我们要做的第一件事是在你的本地主机中安装 Kubernetes。我们将使用一个叫做 [MiniKube][14] 的工具,它专门用于在你的机器上方便地设置一个用于测试的 Kubernetes 集群。
|
||||
|
||||
根据 Minikube 文档,在我们开始之前,有一些先决条件。首先要保证你已经安装了一个 Hypervisor (我将会使用 Virtualbox)。接下来,我们需要[安装 Kubernetes 命令行工具][15](也就是 `kubectl`)。如果你在用 Homebrew,这一步非常简单,只需要运行命令:
|
||||
|
||||
```
|
||||
$ brew install kubectl
|
||||
```
|
||||
|
||||
现在我们可以真正 [安装 Minikube][16] 了:
|
||||
|
||||
```
|
||||
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
|
||||
```
|
||||
|
||||
最后,我们要[启动 Minikube][17] 创建一个虚拟机,来作为我们的单节点 Kubernetes 集群。现在我要说一点:尽管我们在本文中只在本地运行它,但是在[真正的服务器][18]上运行 Kubernetes 集群时,后面提到的大多数概念都会适用。在多节点集群上,“主节点”将负责管理其它工作节点(虚拟机或物理服务器),并且 Kubernetes 将会在集群中自动进行容器的分发和调度。
|
||||
|
||||
```
|
||||
$ minikube start --vm-driver=virtualbox
|
||||
```
|
||||
|
||||
### 安装 Helm
|
||||
|
||||
现在,本机中应该有一个正在运行的(单节点)Kubernetes 集群了。我们现在可以用任何方式来与 Kubernetes 交互。如果你想现在就体验一下,我觉得 [kubernetesbyexample.com][19] 可以很好地向你介绍 Kubernetes 的概念和术语。
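作为参考,下面是几个最基本的 `kubectl` 命令,可以用来确认集群确实在正常运行(这是补充示例,输出会因环境而异):

```
$ kubectl cluster-info              # 查看集群 API Server 的地址
$ kubectl get nodes                 # 应该能看到 minikube 这一个节点
$ kubectl get pods --all-namespaces # 列出所有命名空间中的 Pod
```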
|
||||
|
||||
虽然我们可以手动配置这些东西,但实际上我们将会使用另外的工具,来将我们的 WordPress 应用部署到 Kubernetes 集群中。[Helm][20] 被称为“Kubernetes 的包管理工具”,它可以让你轻松地在你的集群中部署预构建的软件包,也就是“<ruby>图表<rt>chart</rt></ruby>”。你可以把图表看做一组专为特定应用(如 WordPress)而设计的容器定义和配置。首先我们在本地主机上安装 Helm:
|
||||
|
||||
```
|
||||
$ brew install kubernetes-helm
|
||||
```
|
||||
|
||||
然后我们需要在集群中安装 Helm。 幸运的是,只需要运行下面的命令就好:
|
||||
|
||||
```
|
||||
$ helm init
|
||||
```
|
||||
|
||||
### 安装 WordPress
|
||||
|
||||
现在 Helm 已经在我们的集群中运行了,我们可以安装 [WordPress 图表][21]。运行:
|
||||
|
||||
```
|
||||
$ helm install --namespace wordpress --name wordpress --set serviceType=NodePort stable/wordpress
|
||||
```
|
||||
|
||||
这条命令将会在容器中安装并运行 WordPress,并在容器中运行 MariaDB 作为数据库。它在 Kubernetes 中被称为“Pod”。一个 [Pod][22] 基本上可视为一个或多个应用程序容器和这些容器的一些共享资源(例如存储卷,网络等)的组合的抽象。
|
||||
|
||||
我们需要给这个部署一个名字和一个命名空间,以将它们组织起来并便于查找。我们同样会将 `serviceType` 设置为 `NodePort` 。这一步非常重要,因为在默认设置中,服务类型会被设置为 `LoadBalancer`。由于我们的集群现在没有负载均衡器,所以我们将无法在集群外访问我们的 WordPress 站点。
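部署完成后,你也可以直接查看刚创建的 Service,确认它的类型确实是 `NodePort`。下面的输出只是示意,端口号在你的环境中会有所不同:

```
$ kubectl get svc --namespace wordpress
# NAME                  TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
# wordpress-wordpress   NodePort   10.0.0.123   <none>        80:31234/TCP,443:32101/TCP   1m
```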
|
||||
|
||||
在输出数据的最后一部分,你会注意到一些关于访问你的 WordPress 站点的有用的命令。运行那些命令,你可以获取到我们的 WordPress 站点的外部 IP 地址和端口:
|
||||
|
||||
```
|
||||
$ export NODE_PORT=$(kubectl get --namespace wordpress -o jsonpath="{.spec.ports[0].nodePort}" services wordpress-wordpress)
|
||||
$ export NODE_IP=$(kubectl get nodes --namespace wordpress -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
$ echo http://$NODE_IP:$NODE_PORT/admin
|
||||
```
|
||||
|
||||
你现在访问刚刚生成的 URL(忽略 `/admin` 部分),就可以看到 WordPress 已经在你的 Kubernetes 集群中运行了!
|
||||
|
||||
### 扩展 WordPress
|
||||
|
||||
Kubernetes 等服务编排平台的一个伟大之处,在于它将应用的扩展和管理变得易如反掌。我们看一下应用的部署状态:
|
||||
|
||||
```
|
||||
$ kubectl get deployments --namespace=wordpress
|
||||
```
|
||||
|
||||
[][23]
|
||||
|
||||
可以看到,我们有两个部署,一个是 Mariadb 数据库,一个是 WordPress 本身。现在,我们假设你的 WordPress 开始承载大量的流量,所以我们想将这些负载分摊在多个实例上。我们可以通过一个简单的命令来扩展 `wordpress-wordpress` 部署:
|
||||
|
||||
```
|
||||
$ kubectl scale --replicas 2 deployments wordpress-wordpress --namespace=wordpress
|
||||
```
|
||||
|
||||
再次运行 `kubectl get deployments`,我们现在应该会看到下面的场景:
|
||||
|
||||
[][24]
|
||||
|
||||
你刚刚扩大了你的 WordPress 站点规模!超级简单,对不对?现在我们有了多个 WordPress 容器,可以在它们之中对流量进行负载均衡。想了解 Kubernetes 扩展的更多信息,参见[这篇指南][25]。
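作为参考,下面的命令可以列出扩展后的各个 WordPress Pod 及其所在节点(仅为示意):

```
# -o wide 会额外显示每个 Pod 的 IP 和所在节点
$ kubectl get pods --namespace=wordpress -o wide
```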
|
||||
|
||||
### 高可用
|
||||
|
||||
Kubernetes 等平台的另一大特色在于,它不单单能进行方便的扩展,还可以通过自愈组件来提供高可用性。假设我们的一个 WordPress 部署因为某些原因失效了,那 Kubernetes 会立刻自动替换掉这个部署。我们可以通过删除我们 WordPress 部署的一个 pod 来模拟这个过程。
|
||||
|
||||
首先运行命令,获取 pod 列表:
|
||||
|
||||
```
|
||||
$ kubectl get pods --namespace=wordpress
|
||||
```
|
||||
|
||||
[][26]
|
||||
|
||||
然后删除其中一个 pod:
|
||||
|
||||
```
|
||||
$ kubectl delete pod wordpress-wordpress-876183909-jqc8s --namespace=wordpress
|
||||
```
|
||||
|
||||
如果你再次运行 `kubectl get pods` 命令,应该会看到 Kubernetes 立刻换上了新的 pod (`3l167`)。
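如果想实时观察这个自愈过程,可以在删除 pod 之前,先在另一个终端里运行下面的命令(仅为示意):

```
# --watch 会持续输出 Pod 状态的变化,可以看到旧 Pod 被终止、新 Pod 被创建
$ kubectl get pods --namespace=wordpress --watch
```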
|
||||
|
||||
[][27]
|
||||
|
||||
### 更进一步
|
||||
|
||||
我们对 Kubernetes 所能做的事情还只是浅尝辄止。如果你想深入研究,我建议你了解以下功能:
|
||||
|
||||
* [水平自动扩展][2]
|
||||
* [自愈][3]
|
||||
* [自动更新及回滚][4]
|
||||
* [密钥管理][5]
|
||||
|
||||
你在容器平台上运行过 WordPress 吗?有没有使用过 Kubernetes(或其它容器编排平台),有没有什么好的技巧?你通常会怎么扩展你的 WordPress 站点?请在评论中告诉我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Gilbert 喜欢构建软件。从 jQuery 脚本到 WordPress 插件,再到完整的 SaaS 应用程序,Gilbert 一直在创造优雅的软件。他创作的最有名的产品,应该是 Nivo Slider。
|
||||
|
||||
--------
|
||||
|
||||
|
||||
via: https://deliciousbrains.com/running-wordpress-kubernetes-cluster/
|
||||
|
||||
作者:[Gilbert Pellegrom][a]
|
||||
译者:[StdioA](https://github.com/StdioA)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://deliciousbrains.com/author/gilbert-pellegrom/
|
||||
[1]:https://deliciousbrains.com/author/gilbert-pellegrom/
|
||||
[2]:https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
|
||||
[3]:https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller
|
||||
[4]:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#what-is-a-deployment
|
||||
[5]:https://kubernetes.io/docs/concepts/configuration/secret/
|
||||
[6]:https://deliciousbrains.com/running-wordpress-kubernetes-cluster/
|
||||
[7]:https://deliciousbrains.com/running-wordpress-kubernetes-cluster/
|
||||
[8]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
|
||||
[9]:https://kubernetes.io/
|
||||
[10]:https://docs.docker.com/engine/swarm/
|
||||
[11]:http://mesos.apache.org/
|
||||
[12]:https://mesosphere.com/blog/docker-vs-kubernetes-vs-apache-mesos/
|
||||
[13]:https://kubernetes.io/docs/tutorials/kubernetes-basics/
|
||||
[14]:https://kubernetes.io/docs/getting-started-guides/minikube/
|
||||
[15]:https://kubernetes.io/docs/tasks/tools/install-kubectl/
|
||||
[16]:https://github.com/kubernetes/minikube/releases
|
||||
[17]:https://kubernetes.io/docs/getting-started-guides/minikube/#quickstart
|
||||
[18]:https://kubernetes.io/docs/tutorials/kubernetes-basics/
|
||||
[19]:http://kubernetesbyexample.com/
|
||||
[20]:https://docs.helm.sh/
|
||||
[21]:https://kubeapps.com/charts/stable/wordpress
|
||||
[22]:https://kubernetes.io/docs/tutorials/kubernetes-basics/explore-intro/
|
||||
[23]:https://cdn.deliciousbrains.com/content/uploads/2017/08/07120711/image4.png
|
||||
[24]:https://cdn.deliciousbrains.com/content/uploads/2017/08/07120710/image2.png
|
||||
[25]:https://kubernetes.io/docs/tutorials/kubernetes-basics/scale-intro/
|
||||
[26]:https://cdn.deliciousbrains.com/content/uploads/2017/08/07120711/image3.png
|
||||
[27]:https://cdn.deliciousbrains.com/content/uploads/2017/08/07120709/image1.png
|
@ -1,32 +1,21 @@
|
||||
使用 Ansible 部署无服务应用
|
||||
使用 Ansible 部署无服务(serverless)应用
|
||||
============================================================
|
||||
|
||||
### 无服务是托管服务方向迈出的另一步,并且与 Ansible 的无代理体系结构相得益彰。
|
||||
> <ruby>无服务<rt>serverless</rt></ruby>是<ruby>托管服务<rt>managed service</rt></ruby>发展方向的又一步,并且与 Ansible 的无代理体系结构相得益彰。
|
||||
|
||||

|
||||
图片提供: opensource.com
|
||||
|
||||
[Ansible][8] 被设计为实际工作中的最简单的部署工具。这意味着它不是一个完整的编程语言。你编写定义任务的 YAML 模板,并列出任何需要自动完成的任务。
|
||||
[Ansible][8] 被设计为实际工作中的最简化的部署工具。这意味着它不是一个完整的编程语言。你需要编写定义任务的 YAML 模板,并列出任何需要自动完成的任务。
|
||||
|
||||
大多数人认为 Ansible 是更强大的“ for 循环中的 SSH”,在简单的情况下这是真的。但真正的 Ansible 是_任务_,而不是 SSH。对于很多情况,我们通过 SSH 进行连接,但也支持 Windows 机器上的 Windows Remote Management(WinRM),以及作为云服务的通用语言的 HTTPS API 之类的东西。。
|
||||
大多数人认为 Ansible 是一种更强大的“处于 for 循环中的 SSH”,在简单的使用场景下这是真的。但其实 Ansible 是_任务_,而非 SSH。在很多情况下,我们通过 SSH 进行连接,但它也支持 Windows 机器上的 Windows 远程管理(WinRM),以及作为云服务的通用语言的 HTTPS API 之类的东西。
|
||||
|
||||
更多关于 Ansible
|
||||
在云中,Ansible 可以在两个独立的层面上操作:<ruby>控制面<rt>control plane</rt></ruby>和<ruby>实例资源<rt>on-instance resource</rt></ruby>。控制面由所有_没有_运行在操作系统上的东西组成。包括设置网络、新建实例、供给更高级别的服务,如亚马逊的 S3 或 DynamoDB,以及保持云基础设施安全和服务客户所需的一切。
|
||||
|
||||
* [Ansible 如何工作][1]
|
||||
实例上的工作是你已经知道 Ansible 可以做的:启动和停止服务、配置文件<ruby>模版化<rt>templating</rt></ruby>、安装软件包以及通过 SSH 执行的所有与操作系统相关的操作。
|
||||
|
||||
* [免费的 Ansible 电子书][2]
|
||||
现在,什么是<ruby>[无服务][9]<rt>serverless</rt></ruby>呢?这要看你问谁,无服务要么是对公有云的无限延伸,要么是一个前所未有的全新范例,其中所有的东西都是 API 调用。
|
||||
|
||||
* [Ansible 快速入门视频][3]
|
||||
|
||||
* [下载并安装 Ansible][4]
|
||||
|
||||
在云中,Ansible 可以在两个独立的层上操作:控制面和实例资源。控制面由所有_没有_运行在操作系统上的东西组成。包括设置网络、新建实例、配置更高级别的服务,如亚马逊的 S3 或 DynamoDB,以及保持云基础设施安全和服务客户所需的一切。
|
||||
|
||||
实例上的工作是你已经知道 Ansible 可以做的:启动和停止服务、模板配置文件、安装软件包以及通过 SSH 执行的所有与操作系统相关的操作。
|
||||
|
||||
现在,关于[无服务][9]怎么样呢?根据你的要求,无服务要么是对公有云的无限延伸,或者是一个全新的范例,其中所有的东西都是 API 调用,以前从来没有这样做过。
|
||||
|
||||
Ansible 采取第一种观点。在 “无服务” 是艺术术语之前,用户不得不管理和配置 EC2 实例、虚拟私有云 (VPC) 网络以及其他所有内容。无服务是托管服务方向迈出的另一步,并且与 Ansible 的无代理体系结构相得益彰。
|
||||
Ansible 采取第一种观点。在 “无服务” 是专门术语之前,用户不得不管理和配置 EC2 实例、虚拟私有云 (VPC) 网络以及其他所有内容。无服务是托管服务方向迈出的另一步,并且与 Ansible 的无代理体系结构相得益彰。
|
||||
|
||||
在我们开始 [Lambda][10] 示例之前,让我们来看一个简单的配置 CloudFormation 栈任务:
|
||||
|
||||
@ -38,9 +27,9 @@ Ansible 采取第一种观点。在 “无服务” 是艺术术语之前,用
|
||||
template: base_vpc.yml
|
||||
```
|
||||
|
||||
编写这样的任务只需要几分钟,但它是构建基础架构所涉及的最后半手动步骤 - 点击 “Create Stack” - 这将 playbook 与其他放在一起。现在你的 VPC 只是在建立新区域时可以调用的另一项任务了。
|
||||
编写这样的任务只需要几分钟,但它是构建基础架构所涉及的最后的半手动步骤 - 点击 “Create Stack” - 这让这个 playbook 可以与其它 playbook 放在一起。现在你的 VPC 只是在建立新区域时可以调用的另一项任务了。
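作为补充,下面是运行这类 playbook 的一个示意命令(其中的清单文件和 playbook 文件名都是假设的,请替换为你自己的文件):

```
# 运行包含上述 cloudformation 任务的 playbook(文件名仅为示例)
$ ansible-playbook -i inventory provision_vpc.yml
```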
|
||||
|
||||
由于云提供商是你帐户中发生些什么的真相来源,因此 Ansible 有许多方法来取回并使用 ID、名称和其他参数来过滤和查询运行的实例或网络。以 **cloudformation_facts** 模块为例,我们可以从我们刚刚创建的模板中得到子网 ID、网络范围和其他数据。
|
||||
由于云提供商是你帐户中发生些什么的真相来源,因此 Ansible 有许多方法来取回并使用 ID、名称和其他参数来过滤和查询运行的实例或网络。以 `cloudformation_facts` 模块为例,我们可以从我们刚刚创建的模板中得到子网 ID、网络范围和其他数据。
|
||||
|
||||
```
|
||||
- name: Pull all new resources back in as a variable
|
||||
@ -49,7 +38,7 @@ Ansible 采取第一种观点。在 “无服务” 是艺术术语之前,用
|
||||
register: network_stack
|
||||
```
|
||||
|
||||
对于无服务应用,除了 DynamoDB 表,S3 bucket 和其他任何其他功能之外,你肯定还需要一个 Lambda 函数的补充。幸运的是,通过使用 **lambda** 模块, Lambda 函数可以作为堆栈以上次相同的方式创建:
|
||||
对于无服务应用,除了 DynamoDB 表、S3 bucket 和其他各种功能之外,你肯定还需要一些 Lambda 函数作为补充。幸运的是,通过使用 `lambda` 模块,Lambda 函数可以用与上一个任务中创建堆栈相同的方式来创建:
|
||||
|
||||
```
|
||||
- lambda:
|
||||
@ -75,7 +64,7 @@ Ansible 采取第一种观点。在 “无服务” 是艺术术语之前,用
|
||||
register: sls_facts
|
||||
```
|
||||
|
||||
这不是你需要的一切,因为无服务项目也必须存在,你将在那里做大量的定义你的函数和事件源。对于此例,我们将制作一个响应 HTTP 请求的函数。无服务框架使用 YAML 作为其配置语言(和 Ansible 一样),所以这应该看起来很熟悉。
|
||||
这不是你需要的全部,因为无服务项目本身也必须存在,你将在其中定义大量的函数和事件源。对于此例,我们将制作一个响应 HTTP 请求的函数。无服务框架使用 YAML 作为其配置语言(和 Ansible 一样),所以这应该看起来很熟悉。
|
||||
|
||||
```
|
||||
# serverless.yml
|
||||
@ -96,15 +85,17 @@ functions:
|
||||
|
||||
在 [AnsibleFest][12] 中,我将介绍这个例子和其他深入的部署策略,以最大限度地利用你已经拥有的 playbook 和基础设施,还有新的无服务实践。无论你能否到场,我都希望这些例子可以让你开始使用 Ansible,无论你是否有服务器要管理。
|
||||
|
||||
_AnsibleFest 是一个_ _一日_ _会议,汇集了数百名 Ansible 用户、开发人员和行业合作伙伴。为了产品更新、鼓舞人心的交谈、技术深度潜水,动手演示和一天的网络加入我们吧。9 月 7 日在旧金山获得你的 AnsibleFest 票。在[**注册**][6]页使用优惠代码 **OPENSOURCE** 节省 25%。_
|
||||
_AnsibleFest 是一个单日会议,汇集了数百名 Ansible 用户、开发人员和行业合作伙伴。加入我们吧,这里有产品更新、鼓舞人心的演讲、深入的技术探讨、动手演示和全天的交流机会。_
|
||||
|
||||
(题图: opensource.com)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/8/ansible-serverless-applications
|
||||
|
||||
作者:[Ryan Scott Brown ][a]
|
||||
作者:[Ryan Scott Brown][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,19 +1,17 @@
|
||||
使用 Kubernetes 进行本地开发 - Minikube
|
||||
Minikube:使用 Kubernetes 进行本地开发
|
||||
============================================================
|
||||
|
||||
如果你的运维团队在使用 Docker 和 Kubernetes,那么建议开发上采用相同或相似的技术。这将减少不兼容性和可移植性问题的数量,并使每个人都认为应用程序容器是开发和运维团队的共同责任。
|
||||
|
||||
如果你的运维团队在使用 Docker 和 Kubernetes,那么建议开发上采用相同或相似的技术。这将减少不兼容性和可移植性问题的数量,并使每个人都会认识到应用程序容器是开发和运维团队的共同责任。
|
||||
|
||||

|
||||
|
||||
这篇博客文章介绍了 Kubernetes 在开发模式中的用法,它受到[没有痛苦的 Docker 教程][10]中的截屏的启发。
|
||||
这篇博客文章介绍了 Kubernetes 在开发模式中的用法,它的灵感来自于一个视频教程,你可以在“[无痛 Docker 教程][10]”中找到它。
|
||||
|
||||
][1]
|
||||

|
||||
|
||||
Minikube 允许开发人员在本地使用和运行 Kubernetes 集群,从而使开发人员的生活变得轻松的一种工具。
|
||||
Minikube 是一个允许开发人员在本地使用和运行 Kubernetes 集群的工具,从而使开发人员的生活变得轻松。
|
||||
|
||||
|
||||
在这篇博客中,对于我测试的例子,我使用的是 Linux Mint 18,但它在安装部分没有区别
|
||||
在这篇博客中,对于我测试的例子,我使用的是 Linux Mint 18,但其它 Linux 发行版在安装部分没有区别。
|
||||
|
||||
```
|
||||
cat /etc/lsb-release
|
||||
@ -26,62 +24,47 @@ DISTRIB_CODENAME=serena
|
||||
DISTRIB_DESCRIPTION=”Linux Mint 18.1 Serena”
|
||||
```
|
||||
|
||||
|
||||

|
||||
|
||||
#### 先决条件
|
||||
### 先决条件
|
||||
|
||||
为了使用 Minikube,我们需要安装 Kubectl、Minikube 和一些虚拟化驱动程序。
|
||||
|
||||
* 对于 OS X,安装 [xhyve 驱动][2]、[VirtualBox][3] 或者 [VMware Fusion][4],接着是 Kubectl 和 Minkube。
|
||||
* 对于 OS X,安装 [xhyve 驱动][2]、[VirtualBox][3] 或者 [VMware Fusion][4],然后再安装 Kubectl 和 Minikube。
|
||||
|
||||
```
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
|
||||
```
|
||||
```
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
|
||||
|
||||
```
|
||||
chmod +x ./kubectl
|
||||
```
|
||||
chmod +x ./kubectl
|
||||
|
||||
```
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
|
||||
```
|
||||
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
|
||||
```
|
||||
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
|
||||
```
|
||||
|
||||
* 对于 Windows,安装 [VirtualBox][6] 或者 [Hyper-V][7],接着是 Kubectl 和 Minkube。
|
||||
* 对于 Windows,安装 [VirtualBox][6] 或者 [Hyper-V][7],然后再安装 Kubectl 和 Minikube。
|
||||
|
||||
```
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl.exe
|
||||
```
|
||||
```
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl.exe
|
||||
```
|
||||
|
||||
将二进制文件添加到你的 PATH 中(这篇[文章][11]解释了如何修改 PATH)
|
||||
将二进制文件添加到你的 PATH 中(这篇[文章][11]解释了如何修改 PATH)
|
||||
|
||||
下载 `minikube-windows-amd64.exe`,将其重命名为 `minikube.exe`,并将其添加到你的 PATH 中。
|
||||
下载 `minikube-windows-amd64.exe`,将其重命名为 `minikube.exe`,并将其添加到你的 PATH 中。[在这][12]可以找到最新版本。
|
||||
|
||||
[在这][12]找到最后一个版本。
|
||||
* 对于 Linux,安装 [VirtualBox][8] 或者 [KVM][9],然后再安装 Kubectl 和 Minikube。
|
||||
|
||||
* 对于 Linux,安装 [VirtualBox][8] 或者 [KVM][9],接着是 Kubectl 和 Minkube。
|
||||
```
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
|
||||
|
||||
chmod +x ./kubectl
|
||||
|
||||
```
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
|
||||
```
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
|
||||
```
|
||||
chmod +x ./kubectl
|
||||
```
|
||||
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
|
||||
```
|
||||
|
||||
```
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
```
|
||||
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
|
||||
```
|
||||
|
||||
#### 使用 Minikube
|
||||
### 使用 Minikube
|
||||
|
||||
我们先从这个 Dockerfile 创建一个镜像:
|
||||
|
||||
@ -106,19 +89,16 @@ docker build -t eon01/hello-world-web-server .
|
||||
docker run -d --name webserver -p 8000:8000 eon01/hello-world-web-server
|
||||
```
|
||||
|
||||
这是 docker ps 的输出:
|
||||
这是 `docker ps` 的输出:
|
||||
|
||||
```
|
||||
docker ps
|
||||
```
|
||||
|
||||
```
|
||||
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
2ad8d688d812 eon01/hello-world-web-server "/bin/sh -c 'httpd..." 3 seconds ago Up 2 seconds 0.0.0.0:8000->8000/tcp webserver
|
||||
```
|
||||
|
||||
让我们提交镜像并将其上传到公共 Docker Hub 中。你可以使用自己的私有仓库:
|
||||
让我们提交镜像并将其上传到公共 Docker Hub 中。你也可以使用自己的私有仓库:
|
||||
|
||||
```
|
||||
docker commit webserver
|
||||
@ -143,7 +123,7 @@ minkube start
|
||||
minikube status
|
||||
```
|
||||
|
||||
我们运行一个节点:
|
||||
我们运行一个单一节点:
|
||||
|
||||
```
|
||||
kubectl get node
|
||||
@ -215,7 +195,7 @@ kubetctl get pods
|
||||
kubectl exec webserver-2022867364-0v1p9 -it -- /bin/sh
|
||||
```
|
||||
|
||||
要完成,请删除所有部署:
|
||||
最后,请删除所有部署来完成清理:
|
||||
|
||||
```
|
||||
kubectl delete deployments --all
|
||||
@ -237,7 +217,7 @@ minikube stop
|
||||
|
||||
### 更加深入
|
||||
|
||||
如果你对本文感到共鸣,您可以在[没有痛苦的 Docker 教程][13]中找到更多有趣的内容。
|
||||
如果你对本文感到共鸣,您可以在[无痛 Docker 教程][13]中找到更多有趣的内容。
|
||||
|
||||
我们 [Eralabs][14] 很乐意为你的 Docker 和云计算项目提供帮助。[联系我们][15],我们期待了解你的项目。
|
||||
|
||||
@ -245,9 +225,9 @@ minikube stop
|
||||
|
||||
你可能也有兴趣订阅我们的新闻邮件 [Shipped][17],它专注于容器、编排和无服务技术。
|
||||
|
||||
你可以在[Twitter][18]、[Clarity][19] 或我的[网站][20]上找到我,你也可以看看我的书:[SaltStack For DevOps][21]。
|
||||
你可以在 [Twitter][18]、[Clarity][19] 或我的[网站][20]上找到我,你也可以看看我的书:[SaltStack For DevOps][21]。
|
||||
|
||||
不要忘记加入我的最后一个项目[DevOps 的职位][22]!
|
||||
不要忘记加入我的最后一个项目 [DevOps 的职位][22]!
|
||||
|
||||
如果你喜欢本文,请推荐它,并与你的关注者分享。
|
||||
|
||||
@ -255,17 +235,16 @@ minikube stop
|
||||
|
||||
作者简介:
|
||||
|
||||
Aymen El Amri -
|
||||
云和软件架构师、企业家、作者、www.eralabs.io 的 CEO、www.devopslinks.com 的创始人,个人页面:www.aymenelamri.com
|
||||
Aymen El Amri - 云和软件架构师、企业家、作者、www.eralabs.io 的 CEO、www.devopslinks.com 的创始人,个人页面:www.aymenelamri.com
|
||||
|
||||
-------------------
|
||||
|
||||
|
||||
via: https://medium.com/devopslinks/using-kubernetes-minikube-for-local-development-c37c6e56e3db
|
||||
|
||||
作者:[Aymen El Amri ][a]
|
||||
作者:[Aymen El Amri][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,61 @@
|
||||
一个开源软件许可证合规的经济高效模式
|
||||
============================================================
|
||||
|
||||
> 使用开源的方式有利于你的盈亏底线以及开源生态系统。
|
||||
|
||||
|
||||

|
||||
|
||||
|
||||
“<ruby>合规性工业联合体<rt>The Compliance Industrial Complex</rt></ruby>” 是一个术语,它让人联想到一幅反乌托邦的景象:组织为了遵守开源许可条款而参与精心设计且花费昂贵的流程。由于“生活经常模仿艺术”,许多组织确实采用了这种做法,可惜这让它们失去了开源模式的许多好处。本文介绍了一种经济高效的开源软件许可证合规性方法。
|
||||
|
||||
开源许可证通常对从第三方授权的代码分发者有三个要求:
|
||||
|
||||
1. 提供开源许可证的副本
|
||||
2. 包括版权声明
|
||||
3. 对于 copyleft 许可证(如 GPL),将相应的源代码提供给接受者。
|
||||
|
||||
_(与任何一般性声明一样,可能会有例外情况,因此始终建议审查许可条款,如有需要,请咨询律师的意见。)_
|
||||
|
||||
因为源代码(以及任何相关的文件,例如:许可证、README)通常都包含所有这些信息,所以最简单的遵循方法就是随着二进制/可执行程序一起提供源代码。
|
||||
|
||||
替代方案更加困难并且昂贵,因为在大多数情况下,你仍然需要提供开源许可证的副本并保留版权声明。提取这些信息来结合你的二进制/可执行版本并不简单。你需要流程、系统和人员来从源代码和相关文件中复制此信息,并将其插入到单独的文本文件或文档中。
|
||||
|
||||
不要低估创建此文件的时间和费用。虽然有工具也许可以自动化部分流程,但这些工具通常需要人力资源(例如工程师、质量经理、发布经理)来准备代码来扫描并对结果进行评估(没有完美的工具,几乎总是需要审查)。你的组织资源有限,将其转移到此活动会增加机会成本。考虑到这笔费用,每个后续版本(主要或次要)的成本将需要进行新的分析和修订。
|
||||
|
||||
选择不发布源码还会带来其他不易被注意到的成本。其根源在于没有把源代码发布给开源项目的原始作者和/或维护者,这一活动称为上游化。仅仅上游化一般并不能满足大多数开源许可证的要求,这也是这篇文章主张将源代码与你的二进制/可执行文件一起发布的原因。不过,上游化和随二进制/可执行文件一起提供源代码都能带来额外的经济效益。这是因为你的组织不再需要维护一个私有分支,并在每次发布时把开源代码的修改合并进来 - 随着你的内部代码库与社区项目的差异越来越大,这会是一项越来越繁重和混乱的工作。上游化还增强了开源生态系统,鼓励社区创新,你的组织或许也会从中受益。
|
||||
|
||||
那么为什么大量的组织不会为其产品发布源代码来简化其合规性工作?在许多情况下,这是因为他们认为这可能会暴露他们竞争优势的信息。考虑到这些专有产品中的大量代码可能是开源代码的直接副本,以支持诸如 WiFi 或云服务这些当代产品的基础功能,这种信念可能是错误的。
|
||||
|
||||
即使对这些开源作品进行了修改来适配其专有产品,这些更改也往往是微不足道的,并包含了很少的新的版权部分或可用来专利的内容。因此,任何组织都应该通过这种方式来查看其代码,因为它可能会发现其代码库中绝大部分是开源的,只有一小部分是真正专有的、与竞争对手区分开来的部分。那么为什么不分发和向上游提交这些没有差别的代码呢?
|
||||
|
||||
考虑一下拒绝遵从工业联合体的思维方式, 以降低成本并大大简化合规性。使用开源的方式,并体验发布你的源代码的乐趣,以造福于你的盈亏底线和开源生态系统,从中你将继续收获更多的利益。
|
||||
|
||||
------------------------
|
||||
|
||||
作者简介
|
||||
|
||||
Jeffrey Robert Kaufman - Jeffrey R. Kaufman 是全球领先的开源软件解决方案提供商红帽公司的开源知识产权律师。Jeffrey 还担任着 Thomas Jefferson 法学院的兼职教授。 在加入红帽前,Jeffrey 在高通担任专利法律顾问,为首席科学家办公室提供开源顾问。 Jeffrey 在 RFID、条形码、图像处理和打印技术方面拥有多项专利。[更多关于我][2]
|
||||
|
||||
(题图: opensource.com)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/9/economically-efficient-model
|
||||
|
||||
作者:[Jeffrey Robert Kaufman][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jkaufman
|
||||
[1]:https://opensource.com/article/17/9/economically-efficient-model?rate=0SO3DeFAxtgLdmZxE2ZZQyTRTTbu2OOlksFZSUXmjJk
|
||||
[2]:https://opensource.com/users/jkaufman
|
||||
[3]:https://opensource.com/user/74461/feed
|
||||
[4]:https://opensource.com/users/jkaufman
|
||||
[5]:https://opensource.com/users/jkaufman
|
||||
[6]:https://opensource.com/users/jkaufman
|
||||
[7]:https://opensource.com/tags/law
|
||||
[8]:https://opensource.com/tags/licensing
|
||||
[9]:https://opensource.com/participate
|
175
published/20170905 Maneuvering around run levels on Linux.md
Normal file
175
published/20170905 Maneuvering around run levels on Linux.md
Normal file
@ -0,0 +1,175 @@
|
||||
漫谈传统的 Linux 初始化系统的运行级别
|
||||
=====
|
||||
|
||||
> 了解运行级别是如何配置的,如何改变系统运行级别以及修改对应状态下运行的服务。
|
||||
|
||||

|
||||
|
||||
在 Linux 系统中,<ruby>运行级别<rt>run level</rt></ruby>是指运维的级别,用于描述一种表明什么服务是可用的系统运行状态。
|
||||
|
||||
运行级别 1 是严格限制的,仅仅用于系统维护;该级别下,网络连接将不可操作,但是管理员可以通过控制台连接登录系统。
|
||||
|
||||
其他运行级别的系统允许任何人登录和使用,但是不同级别中可使用的服务不同。本文将探索如何配置运行级别,如何交互式改变系统运行级别以及修改该状态下可用的服务。
|
||||
|
||||
Linux 系统的默认运行状态是一个在系统开机时使用的运行级别(除非有其他的指示),它通常是在 `/etc/inittab` 文件中进行配置的,该文件内容通常如下:
|
||||
|
||||
```
|
||||
id:3:initdefault
|
||||
```
|
||||
|
||||
包括 Debian 系统在内的一些系统,默认运行级别为 2,而不是上述文件中的 3,甚至都没有 `/etc/inittab` 文件。
|
||||
|
||||
运行级别在默认情况下如何配置,取决于你所运行的 Linux 操作系统的具体发行版。例如,在某些系统中,运行级别 2 是多用户模式,运行级别 3 是多用户模式并支持 NFS(网络文件系统)。在另外一些系统中,运行级别 2 - 5 基本相同,运行级别 1 是单用户模式。例如,Debian 系统的所有运行级别如下:
|
||||
|
||||
```
|
||||
0 = 停机
|
||||
1 = 单用户(维护模式)
|
||||
2 = 多用户模式
|
||||
3-5 = 同 2 一样
|
||||
6 = 重启
|
||||
```
|
||||
|
||||
在 Linux 系统上,运行级别 3 用于将文件系统共享给其它系统,因此只需改变系统的运行级别,就可以方便地启动和停止文件系统共享。系统从运行级别 2 改变到 3 后,将允许文件系统共享;反之,从运行级别 3 改回 2,系统则不再支持文件系统共享。
|
||||
|
||||
在某个运行级别中,系统运行哪些进程依赖于目录 `/etc/rc?.d` 目录的内容,其中 `?` 可以是 2、 3、 4 或 5 (对应于相应的运行级别)。
|
||||
|
||||
在以下示例中(Ubuntu 系统),由于这些目录的配置是相同的,我们将看见上述 4 个级别对应的目录中的内容是一致的。
|
||||
|
||||
```
|
||||
/etc/rc2.d$ ls
|
||||
README S20smartmontools S50saned S99grub-common
|
||||
S20kerneloops S20speech-dispatcher S70dns-clean S99ondemand
|
||||
S20rsync S20sysstat S70pppd-dns S99rc.local
|
||||
/etc/rc2.d$ cd ../rc3.d
|
||||
/etc/rc3.d$ ls
|
||||
README S20smartmontools S50saned S99grub-common
|
||||
S20kerneloops S20speech-dispatcher S70dns-clean S99ondemand
|
||||
S20rsync S20sysstat S70pppd-dns S99rc.local
|
||||
/etc/rc3.d$ cd ../rc4.d
|
||||
/etc/rc4.d$ ls
|
||||
README S20smartmontools S50saned S99grub-common
|
||||
S20kerneloops S20speech-dispatcher S70dns-clean S99ondemand
|
||||
S20rsync S20sysstat S70pppd-dns S99rc.local
|
||||
/etc/rc4.d$ cd ../rc5.d
|
||||
/etc/rc5.d$ ls
|
||||
README S20smartmontools S50saned S99grub-common
|
||||
S20kerneloops S20speech-dispatcher S70dns-clean S99ondemand
|
||||
S20rsync S20sysstat S70pppd-dns S99rc.local
|
||||
```
|
||||
|
||||
这些都是什么文件?它们都是指向 `/etc/init.d` 目录下用于启动服务的脚本符号连接。 这些文件的文件名是至关重要的, 因为它们决定了这些脚本文件的执行顺序,例如, S20 脚本是在 S50 脚本前面运行的。
|
||||
|
||||
```
|
||||
$ ls -l
|
||||
total 4
|
||||
-rw-r--r-- 1 root root 677 Feb 16 2016 README
|
||||
lrwxrwxrwx 1 root root 20 Aug 30 14:40 S20kerneloops -> ../init.d/kerneloops
|
||||
lrwxrwxrwx 1 root root 15 Aug 30 14:40 S20rsync -> ../init.d/rsync
|
||||
lrwxrwxrwx 1 root root 23 Aug 30 16:10 S20smartmontools -> ../init.d/smartmontools
|
||||
lrwxrwxrwx 1 root root 27 Aug 30 14:40 S20speech-dispatcher -> ../init.d/speech-dispatcher
|
||||
lrwxrwxrwx 1 root root 17 Aug 31 14:12 S20sysstat -> ../init.d/sysstat
|
||||
lrwxrwxrwx 1 root root 15 Aug 30 14:40 S50saned -> ../init.d/saned
|
||||
lrwxrwxrwx 1 root root 19 Aug 30 14:40 S70dns-clean -> ../init.d/dns-clean
|
||||
lrwxrwxrwx 1 root root 18 Aug 30 14:40 S70pppd-dns -> ../init.d/pppd-dns
|
||||
lrwxrwxrwx 1 root root 21 Aug 30 14:40 S99grub-common -> ../init.d/grub-common
|
||||
lrwxrwxrwx 1 root root 18 Aug 30 14:40 S99ondemand -> ../init.d/ondemand
|
||||
lrwxrwxrwx 1 root root 18 Aug 30 14:40 S99rc.local -> ../init.d/rc.local
|
||||
```
|
||||
|
||||
如你所想,目录 `/etc/rc1.d` 因运行级别 1 的特殊性而有所不同,它包含的符号链接指向的是非常不同的一套脚本。同样要注意,其中一些脚本以 `K` 开头命名,而另一些则与其它运行级别的脚本一样以 `S` 开头命名。这是因为当系统进入单用户模式时,一些服务需要**停止**。不过,这些以 K 开头的符号链接与其它运行级别中以 S 开头的符号链接指向的是同一个文件;K(kill)表示这个脚本将以指示其停止的参数执行,而不是以启动的参数执行。
|
||||
|
||||
```
|
||||
/etc/rc1.d$ ls -l
|
||||
total 4
|
||||
lrwxrwxrwx 1 root root 20 Aug 30 14:40 K20kerneloops -> ../init.d/kerneloops
|
||||
lrwxrwxrwx 1 root root 15 Aug 30 14:40 K20rsync -> ../init.d/rsync
|
||||
lrwxrwxrwx 1 root root 15 Aug 30 14:40 K20saned -> ../init.d/saned
|
||||
lrwxrwxrwx 1 root root 23 Aug 30 16:10 K20smartmontools -> ../init.d/smartmontools
|
||||
lrwxrwxrwx 1 root root 27 Aug 30 14:40 K20speech-dispatcher -> ../init.d/speech-dispatcher
|
||||
-rw-r--r-- 1 root root 369 Mar 12 2014 README
|
||||
lrwxrwxrwx 1 root root 19 Aug 30 14:40 S30killprocs -> ../init.d/killprocs
|
||||
lrwxrwxrwx 1 root root 19 Aug 30 14:40 S70dns-clean -> ../init.d/dns-clean
|
||||
lrwxrwxrwx 1 root root 18 Aug 30 14:40 S70pppd-dns -> ../init.d/pppd-dns
|
||||
lrwxrwxrwx 1 root root 16 Aug 30 14:40 S90single -> ../init.d/single
|
||||
```
|
||||
|
||||
你可以改变系统的默认运行级别,尽管这很少被用到。例如,通过修改前文中提到的 `/etc/inittab` 文件,你能够配置 Debian 系统的默认运行级别为 3 (而不是 2),以下是该文件示例:
|
||||
|
||||
```
|
||||
id:3:initdefault:
|
||||
```
|
||||
|
||||
一旦你修改完成并重启系统, `runlevel` 命令将显示如下:
|
||||
|
||||
```
|
||||
$ runlevel
|
||||
N 3
|
||||
```
|
||||
|
||||
另外一种可选方式,使用 `init 3` 命令,你也能改变系统运行级别(且无需重启立即生效), `runlevel` 命令的输出为:
|
||||
|
||||
```
|
||||
$ runlevel
|
||||
2 3
|
||||
```
|
||||
|
||||
当然,除非你修改了系统默认级别的 `/etc/rc?.d` 目录下的符号链接,使得系统默认运行在一个修改的运行级别之下,否则很少需要通过创建或修改 `/etc/inittab` 文件改变系统的运行级别。
|
||||
|
||||
### 在 Linux 系统中如何使用运行级别?
|
||||
|
||||
为了扼要重述运行级别在系统中的用法,下面是几个关于运行级别的快速问答:
|
||||
|
||||
**如何查询系统当前的运行级别?**
|
||||
|
||||
使用 `runlevel` 命令。
|
||||
|
||||
**如何查看特定运行级别所关联的服务进程?**
|
||||
|
||||
查看与该运行级别关联的运行级别开始目录(例如, `/etc/rc2.d` 对应于运行级别 2)。
|
||||
|
||||
**如何查看系统的默认运行级别?**
|
||||
|
||||
首先,查看 `/etc/inittab` 文件是否存在。如果不存在,就执行 `runlevel` 命令查询,你一般就已经处在该运行级别。
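下面是一个把这两步合并起来的小技巧(仅为示意):

```
# 如果 /etc/inittab 存在则显示其中的 initdefault 行,否则显示当前运行级别
$ grep initdefault /etc/inittab 2>/dev/null || runlevel
```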
|
||||
|
||||
**如何改变系统运行级别?**
|
||||
|
||||
用 `init` 命令(例如 `init 3`)临时改变运行级别,通过修改或创建 `/etc/inittab` 文件永久改变其运行级别。
|
||||
|
||||
**能改变特定运行级别下运行的服务么?**
|
||||
|
||||
当然,通过改变对应的 `/etc/rc?.d` 目录下的符号连接即可。
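例如,在使用 SysV init 的 Debian/Ubuntu 系统上,可以用 `update-rc.d` 来管理这些符号链接,而不必手工重命名(这里的服务名 `rsync` 只是示例):

```
# 在运行级别 2 下禁用 rsync 服务(会把对应的 S 链接改为 K 链接)
$ sudo update-rc.d rsync disable 2
# 重新启用该服务
$ sudo update-rc.d rsync enable 2
```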
|
||||
|
||||
**还有一些其他的什么需要考虑?**
|
||||
|
||||
当改变系统运行级别时,你应该特别小心,确保不影响到系统上正在运行的服务或者正在使用的用户。
|
||||
|
||||
(题图:[Vincent Desjardins][15] [(CC BY 2.0)][16])
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3222070/linux/maneuvering-around-run-levels-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
译者:[penghuster](https://github.com/penghuster)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:https://www.networkworld.com/article/3218728/linux/how-log-rotation-works-with-logrotate.html
|
||||
[2]:https://www.networkworld.com/article/3219684/linux/half-a-dozen-clever-linux-command-line-tricks.html
|
||||
[3]:https://www.networkworld.com/article/3219736/linux/how-to-use-the-motd-file-to-get-linux-users-to-pay-attention.html
|
||||
[4]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
|
||||
[5]:https://www.networkworld.com/article/3222828/home-tech/52-off-299-piece-all-purpose-first-aid-kit-deal-alert.html
|
||||
[6]:https://www.networkworld.com/article/3222847/mobile/save-a-whopping-100-on-amazon-echo-right-now-by-going-refurbished-deal-alert.html
|
||||
[7]:https://www.networkworld.com/article/3221348/mobile/35-off-etekcity-smart-plug-2-pack-energy-monitoring-and-alexa-compatible-deal-alert.html
|
||||
[8]:https://www.networkworld.com/article/3218728/linux/how-log-rotation-works-with-logrotate.html
|
||||
[9]:https://www.networkworld.com/article/3219684/linux/half-a-dozen-clever-linux-command-line-tricks.html
|
||||
[10]:https://www.networkworld.com/article/3219736/linux/how-to-use-the-motd-file-to-get-linux-users-to-pay-attention.html
|
||||
[11]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
|
||||
[12]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
|
||||
[13]:https://www.networkworld.com/article/3222847/mobile/save-a-whopping-100-on-amazon-echo-right-now-by-going-refurbished-deal-alert.html
|
||||
[14]:https://www.networkworld.com/article/3221348/mobile/35-off-etekcity-smart-plug-2-pack-energy-monitoring-and-alexa-compatible-deal-alert.html
|
||||
[15]:https://www.flickr.com/photos/endymion120/4824696883/in/photolist-8mkQi2-8vtyRx-8vvYZS-i31xQj-4TXTS2-S7VRNC-azimYK-dW8cYu-Sb5b7S-S7VRES-fpSVvo-61Zpn8-WxFwGi-UKKq3x-q6NSnC-8vsBLr-S3CPxn-qJUrLr-nDnpNu-8d7a6Q-T7mGpN-RE26wj-SeEXRa-5mZ7LG-Vp7t83-fEG5HS-Vp7sU7-6JpNBi-RCuR8P-qLzCL5-6WsfZx-5nU1tF-6ieGFi-3P5xwh-8mnxpo-hBXwSj-i3iCur-9dmrST-6bXk8d-8vtDb4-i2KLwU-5jhfU6-8vwbrN-ShAtNm-XgzXmb-8rad18-VfXm4L-8tQTrh-Vp7tcb-UceVDB
|
||||
[16]:https://creativecommons.org/licenses/by/2.0/legalcode
|
98
published/20170906 The Incredible Growth of Python.md
Normal file
98
published/20170906 The Incredible Growth of Python.md
Normal file
@ -0,0 +1,98 @@
|
||||
Stack Overflow 报告:Python 正在令人难以置信地增长!
|
||||
============================================================
|
||||
|
||||
我们[最近探讨][3]了那些被世界银行定义为[高收入][4]的富裕国家是如何倾向于使用与世界上其它地区不同的技术的。这其中我们看到的最大的差异在于 Python 编程语言。就高收入国家而言,Python 的增长程度甚至比 [Stack Overflow Trends][5] 等工具或其他针对全球软件开发的排名所展现的还要高。
|
||||
|
||||
在本文中,我们将探讨 Python 编程语言在过去五年中的非凡增长,正如高收入国家的 Stack Overflow 流量所显示的那样。“增长最快”一词[很难准确定义][6],但是我们认为 Python 确实可以称得上增长最快的主流编程语言。
|
||||
|
||||
这篇文章中讨论的所有数字都是针对高收入国家的。它们一般指的是美国、英国、德国、加拿大等国家的趋势,他们加起来占了 Stack Overflow 大约 64% 的流量。许多其他国家,如印度、巴西、俄罗斯和中国,也为全球软件开发生态系统做出了巨大贡献,尽管我们也将看到 Python 在这方面有所增长,但本文对这些经济体的描述较少。
|
||||
|
||||
值得强调的是,一种语言的用户数量并不能衡量语言的品质:我们是在_描述_开发人员使用的语言,但没有规定任何东西。(完全披露:我[曾经][7]主要使用 Python 编程,尽管我已经完全切换到 R 了)。
|
||||
|
||||
### Python 在高收入国家的增长
|
||||
|
||||
你可以在 [Stack Overflow Trends][8] 中看到,Python 在过去几年中一直在快速增长。但是对于本文,我们将重点关注高收入国家,考虑的是问题的浏览量而不是提出的问题数量(这基本上结果是类似的,但是每个月都有所波动,特别是对于较小的标签分类)。
|
||||
|
||||
我们有关于 Stack Overflow 问题的查看数据可以追溯到 2011 年底,在这段时间内,我们可以研究下 Python 相对于其他五种主要编程语言的增长。(请注意,这比 Stack Overflow Trends 的时间范围更短,它可追溯到 2008 年)。这些目前是高收入国家里十大访问最高的 Stack Overflow 标签中的六个。我们没有包括的四个是 CSS、HTML、Android 和 JQuery。
|
||||
|
||||

|
||||
|
||||
2017 年 6 月是 Python 第一次成为高收入国家里 Stack Overflow 访问量最高的标签的月份。它也是美国和英国最受欢迎的标签,并在几乎所有其他高收入国家中位列前两名(排在它前面的只有 Java 或 JavaScript)。这特别令人印象深刻,因为在 2012 年,它的访问量还比其他五种语言都少,如今相比当时已经增长了 2.5 倍。
|
||||
|
||||
部分原因在于 Java 流量的季节性。由于它[在本科课程中被大量教授][9],Java 流量在秋季和春季会上升,夏季则下降。到年底,它会再次赶上 Python 吗?我们可以尝试用一个叫做 [“STL” 的模型][10]来预测未来两年的增长,它将增长与季节性趋势结合起来,来预测将来的变化。
|
||||
|
||||

|
||||
|
||||
根据这个模型,Python 可能会在秋季保持领先地位或被 Java 取代(大致在模型预测的变化范围之内),但是 Python 显然会在 2018 年成为浏览最多的标签。STL 还表明,与过去两年一样,JavaScript 和 Java 在高收入国家中的流量水平将保持相似水平。
|
||||
|
||||
### 什么标签整体上增长最快?
|
||||
|
||||
上面只看了六个最受欢迎的编程语言。在其他重大技术中,哪些是目前在高收入国家中增长最快的技术?
|
||||
|
||||
我们以 2017 年至 2016 年流量的比例来定义增长率。在此分析中,我们决定仅考虑编程语言(如 Java 和 Python)和平台(如 iOS、Android、Windows 和 Linux),而不考虑像 [Angular][11] 或 [TensorFlow][12] 这样的框架(虽然其中许多有显著的增长,可能在未来的文章中分析)。
|
||||
|
||||

|
||||
|
||||
受上面[这个漫画][13]中对“最快增长”定义的讨论所启发,我们将各标签的增长与[平均差异图][14]中的整体平均值进行比较。
|
||||
|
||||

|
||||
|
||||
Python 以 27% 的年增长率成为了规模大、增长快的标签。下一个类似增长的最大标签是 R。我们看到,大多数其他大型标签的流量在高收入国家中保持稳定,浏览 Android、iOS 和 PHP 则略有下降。我们以前在 [Flash 之死这篇文章][15]中审查过一些正在衰减的标签,如 Objective-C、Perl 和 Ruby。我们还注意到,在函数式编程语言中,Scala 是最大的并且不断增长的,而 F# 和 Clojure 较小并且正在衰减,Haskell 则保持稳定。
|
||||
|
||||
上面的图表中有一个重要的遗漏:去年,有关 TypeScript 的问题流量增长了惊人的 142%,这使得我们需要去除它以避免压扁比例尺。你还可以看到,其他一些较小的语言的增长速度与 Python 类似或更快(例如 R、Go 和 Rust),而且还有许多标签,如 Swift 和 Scala,这些标签也显示出惊人的增长。它们随着时间的流量相比 Python 如何?
|
||||
|
||||

|
||||
|
||||
像 R 和 Swift 这样的语言的发展确实令人印象深刻,而 TypeScript 在更短的时间内显示出特别快速的扩张。这些较小的语言中,有许多从很少的流量成为软件生态系统中引人注目的存在。但是如图所示,当标签开始相对较小时,显示出快速增长更容易。
|
||||
|
||||
请注意,我们并不是说这些语言与 Python “竞争”。相反,这只是解释了为什么我们要把它们的增长分成一个单独的类别,这些是始于较低流量的标签。Python 是一个不寻常的案例,**既是 Stack Overflow 中最受欢迎的标签之一,也是增长最快的其中之一**。(顺便说一下,它也在加速!自 2013 年以来,每年的增长速度都会更快)。
|
||||
|
||||
### 世界其他地区
|
||||
|
||||
在这篇文章中,我们一直在分析高收入国家的趋势。Python 在世界其他地区,如印度、巴西、俄罗斯和中国等国家的增长情况是否类似?
|
||||
|
||||
确实如此。
|
||||
|
||||

|
||||
|
||||
在高收入国家之外,Python _仍旧_是增长最快的主要编程语言。它从较低的水平开始,两年后才开始增长(2014 年而不是 2012 年)。事实上,非高收入国家的 Python 同比增长率高于高收入国家。我们不会在这里研究它,但是 R ([其它语言的使用与 GDP 正相关][16]) 在这些国家也在增长。
|
||||
|
||||
在这篇文章中,许多关于高收入国家标签(相对于绝对排名)的增长和下降的结论,对世界其他地区也同样成立。两部分的增长率之间的 Spearman 相关系数为 0.979。在某些情况下,你可以看到类似于 Python 上发生的 “滞后” 现象,即一项技术在高收入国家被广泛采用后,要过一两年才会在世界其他地区普及。(这是一个有趣的现象,可能会成为未来文章的主题!)
|
||||
|
||||
### 下一次
|
||||
|
||||
我们不打算为任何“语言战争”提供弹药。一种语言的用户数量并不意味着它的质量,而且肯定不会让你知道哪种语言[更适合某种特定情况][17]。不过,考虑到这点,我们认为值得了解什么语言构成了开发者生态系统,以及生态系统会如何变化。
|
||||
|
||||
本文表明 Python 在过去五年中,特别是在高收入国家,显示出惊人的增长。在我们的下一篇文章中,我们将开始研究_“为什么”_。我们将按国家和行业划分增长情况,并研究有哪些其他技术与 Python 一起使用(例如,估计多少增长是由于 Python 用于 Web 开发而不是数据科学)。
|
||||
|
||||
在此期间,如果你使用 Python 工作,并希望你的职业生涯中进入下一阶段,那么[在 Stack Overflow Jobs 上有些公司正在招聘 Python 开发][18]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://stackoverflow.blog/2017/09/06/incredible-growth-python/?cb=1
|
||||
|
||||
作者:[David Robinson][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://stackoverflow.blog/authors/drobinson/
|
||||
[1]:https://stackoverflow.blog/authors/drobinson/
|
||||
[2]:https://stackoverflow.blog/authors/drobinson/
|
||||
[3]:https://stackoverflow.blog/2017/08/29/tale-two-industries-programming-languages-differ-wealthy-developing-countries/?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python
|
||||
[4]:https://en.wikipedia.org/wiki/World_Bank_high-income_economy
|
||||
[5]:https://insights.stackoverflow.com/trends?tags=python%2Cjavascript%2Cjava%2Cc%23%2Cphp%2Cc%2B%2B&utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python
|
||||
[6]:https://xkcd.com/1102/
|
||||
[7]:https://stackoverflow.com/search?tab=newest&q=user%3a712603%20%5bpython%5d
|
||||
[8]:https://insights.stackoverflow.com/trends?tags=python%2Cjavascript%2Cjava%2Cc%23%2Cphp%2Cc%2B%2B&utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python
|
||||
[9]:https://stackoverflow.blog/2017/02/15/how-do-students-use-stack-overflow/
|
||||
[10]:http://otexts.org/fpp2/sec-6-stl.html
|
||||
[11]:https://stackoverflow.com/questions/tagged/angular
|
||||
[12]:https://stackoverflow.com/questions/tagged/tensorflow
|
||||
[13]:https://xkcd.com/1102/
|
||||
[14]:https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot
|
||||
[15]:https://stackoverflow.blog/2017/08/01/flash-dead-technologies-might-next/?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python
|
||||
[16]:https://stackoverflow.blog/2017/08/29/tale-two-industries-programming-languages-differ-wealthy-developing-countries/?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python
|
||||
[17]:https://stackoverflow.blog/2011/08/16/gorilla-vs-shark/?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python
|
||||
[18]:https://stackoverflow.com/jobs/developer-jobs-using-python?utm_source=so-owned&utm_medium=blog&utm_campaign=gen-blog&utm_content=blog-link&utm_term=incredible-growth-python
|
419
published/GDB-common-commands.md
Normal file
419
published/GDB-common-commands.md
Normal file
@ -0,0 +1,419 @@
|
||||
常用 GDB 命令中文速览
|
||||
============
|
||||
|
||||
### 目录
|
||||
|
||||
- [break](#break) -- 在指定的行或函数处设置断点,缩写为 `b`
|
||||
- [info breakpoints](#info-breakpoints) -- 打印未删除的所有断点,观察点和捕获点的列表,缩写为 `i b`
|
||||
- [disable](#disable) -- 禁用断点,缩写为 `dis`
|
||||
- [enable](#enable) -- 启用断点
|
||||
- [clear](#clear) -- 清除指定行或函数处的断点
|
||||
- [delete](#delete) -- 删除断点,缩写为 `d`
|
||||
- [tbreak](#tbreak) -- 设置临时断点,参数同 `break`,但在程序第一次停住后会被自动删除
|
||||
- [watch](#watch) -- 为表达式(或变量)设置观察点,当表达式(或变量)的值有变化时,暂停程序执行
|
||||
- [step](#step) -- 单步跟踪,如果有函数调用,会进入该函数,缩写为 `s`
|
||||
- [reverse-step](#reverse-step) -- 反向单步跟踪,如果有函数调用,会进入该函数
|
||||
- [next](#next) -- 单步跟踪,如果有函数调用,不会进入该函数,缩写为 `n`
|
||||
- [reverse-next](#reverse-next) -- 反向单步跟踪,如果有函数调用,不会进入该函数
|
||||
- [return](#return) -- 使选定的栈帧返回到其调用者
|
||||
- [finish](#finish) -- 执行直到选择的栈帧返回,缩写为 `fin`
|
||||
- [until](#until) -- 执行直到达到当前栈帧中当前行后的某一行(用于跳过循环、递归函数调用),缩写为 `u`
|
||||
- [continue](#continue) -- 恢复程序执行,缩写为 `c`
|
||||
- [print](#print) -- 打印表达式 EXP 的值,缩写为 `p`
|
||||
- [x](#x) -- 查看内存
|
||||
- [display](#display) -- 每次程序停止时打印表达式 EXP 的值(自动显示)
|
||||
- [info display](#info-display) -- 打印早先设置为自动显示的表达式列表
|
||||
- [disable display](#disable-display) -- 禁用自动显示
|
||||
- [enable display](#enable-display) -- 启用自动显示
|
||||
- [undisplay](#undisplay) -- 删除自动显示项
|
||||
- [help](#help) -- 打印命令列表(带参数时查找命令的帮助),缩写为 `h`
|
||||
- [attach](#attach) -- 挂接到已在运行的进程来调试
|
||||
- [run](#run) -- 启动被调试的程序,缩写为 `r`
|
||||
- [backtrace](#backtrace) -- 查看程序调用栈的信息,缩写为 `bt`
|
||||
- [ptype](#ptype) -- 打印类型 TYPE 的定义
|
||||
|
||||
------
|
||||
|
||||
### break
|
||||
|
||||
使用 `break` 命令(缩写 `b`)来设置断点。
|
||||
|
||||
用法:
|
||||
|
||||
- `break` 当不带参数时,在所选栈帧中执行的下一条指令处设置断点。
|
||||
- `break <function-name>` 在函数体入口处打断点,在 C++ 中可以使用 `class::function` 或 `function(type, ...)` 格式来指定函数名。
|
||||
- `break <line-number>` 在当前源码文件指定行的开始处打断点。
|
||||
- `break -N` `break +N` 在当前源码行前面或后面的 `N` 行开始处打断点,`N` 为正整数。
|
||||
- `break <filename:linenum>` 在源码文件 `filename` 的 `linenum` 行处打断点。
|
||||
- `break <filename:function>` 在源码文件 `filename` 的 `function` 函数入口处打断点。
|
||||
- `break <address>` 在程序指令的地址处打断点。
|
||||
- `break ... if <cond>` 设置条件断点,`...` 代表上述参数之一(或无参数),`cond` 为条件表达式,仅在 `cond` 值非零时暂停程序执行。
|
||||
|
||||
详见[官方文档][1]。
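下面是一个在 shell 中以批处理方式演示断点用法的小例子(可执行文件 `app`、源文件 `main.c`、行号和变量 `count` 均为假设,仅作示意):

```
# 以批处理方式启动 gdb:设置条件断点、运行程序并列出断点信息
$ gdb -q --batch \
    -ex 'break main.c:42 if count > 10' \
    -ex run \
    -ex 'info breakpoints' \
    ./app
```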
|
||||
|
||||
### info breakpoints
|
||||
|
||||
查看断点,观察点和捕获点的列表。
|
||||
|
||||
用法:
|
||||
|
||||
- `info breakpoints [list...]`
|
||||
- `info break [list...]`
|
||||
- `list...` 用来指定若干个断点的编号(可省略),可以是 `2`, `1-3`, `2 5` 等。
|
||||
|
||||
### disable
|
||||
|
||||
禁用一些断点。参数是用空格分隔的断点编号。要禁用所有断点,不加参数。
|
||||
|
||||
禁用的断点不会被忘记,但直到重新启用才有效。
|
||||
|
||||
用法:
|
||||
|
||||
- `disable [breakpoints] [list...]`
|
||||
- `breakpoints` 是 `disable` 的子命令(可省略),`list...` 同 `info breakpoints` 中的描述。
|
||||
|
||||
详见[官方文档][2]。
|
||||
|
||||
### enable
|
||||
|
||||
启用一些断点。给出断点编号(以空格分隔)作为参数。没有参数时,所有断点被启用。
|
||||
|
||||
用法:
|
||||
|
||||
- `enable [breakpoints] [list...]` 启用指定的断点(或所有定义的断点)。
|
||||
- `enable [breakpoints] once list...` 临时启用指定的断点。GDB 在停止您的程序后立即禁用这些断点。
|
||||
- `enable [breakpoints] delete list...` 使指定的断点启用一次,然后删除。一旦您的程序停止,GDB 就会删除这些断点。等效于用 `tbreak` 设置的断点。
|
||||
|
||||
`breakpoints` 同 `disable` 中的描述。
|
||||
|
||||
详见[官方文档][2]。
|
||||
|
||||
### clear
|
||||
|
||||
在指定行或函数处清除断点。参数可以是行号,函数名称或 `*` 跟一个地址。
|
||||
|
||||
用法:
|
||||
|
||||
- `clear` 当不带参数时,清除所选栈帧在执行的源码行中的所有断点。
|
||||
- `clear <function>`, `clear <filename:function>` 删除在命名函数的入口处设置的任何断点。
|
||||
- `clear <linenum>`, `clear <filename:linenum>` 删除在指定的文件指定的行号的代码中设置的任何断点。
|
||||
- `clear <address>` 清除指定程序指令的地址处的断点。
|
||||
|
||||
详见[官方文档][3]。
|
||||
|
||||
### delete
|
||||
|
||||
删除一些断点或自动显示表达式。参数是用空格分隔的断点编号。要删除所有断点,不加参数。
|
||||
|
||||
用法: `delete [breakpoints] [list...]`
|
||||
|
||||
详见[官方文档][3]。
|
||||
|
||||
### tbreak
|
||||
|
||||
设置临时断点。参数形式同 `break` 一样。
|
||||
|
||||
除了断点是临时的之外,其他同 `break` 一样,所以在命中时会被删除。
|
||||
|
||||
详见[官方文档][1]。
|
||||
|
||||
### watch
|
||||
|
||||
为表达式设置观察点。
|
||||
|
||||
用法: `watch [-l|-location] <expr>` 每当一个表达式的值改变时,观察点就会暂停程序执行。
|
||||
|
||||
如果给出了 `-l` 或者 `-location`,则它会对 `expr` 求值并观察它所指向的内存。例如,`watch *(int *)0x12345678` 将在指定的地址处观察一个 4 字节的区域(假设 int 占用 4 个字节)。
|
||||
|
||||
详见[官方文档][4]。
|
||||
|
||||
### step
|
||||
|
||||
单步执行程序,直到到达不同的源码行。
|
||||
|
||||
用法: `step [N]` 参数 `N` 表示执行 N 次(或由于另一个原因直到程序停止)。
|
||||
|
||||
警告:如果在未带调试信息编译的函数中使用 `step` 命令,执行将继续进行,直到进入具有调试信息的函数。同样,它也不会进入未带调试信息编译的函数。
|
||||
|
||||
要执行没有调试信息的函数,请使用 `stepi` 命令,详见后文。
|
||||
|
||||
详见[官方文档][5]。
|
||||
|
||||
### reverse-step
|
||||
|
||||
反向单步执行程序,直到到达另一个源码行的开头。
|
||||
|
||||
用法: `reverse-step [N]` 参数 `N` 表示执行 N 次(或由于另一个原因直到程序停止)。
|
||||
|
||||
详见[官方文档][6]。
|
||||
|
||||
### next
|
||||
|
||||
单步执行程序,执行完子程序调用。
|
||||
|
||||
用法: `next [N]`
|
||||
|
||||
与 `step` 不同,如果当前的源代码行调用子程序,则此命令不会进入子程序,而是将其视为单个源代码行,继续执行。
|
||||
|
||||
详见[官方文档][5]。
|
||||
|
||||
### reverse-next
|
||||
|
||||
反向步进程序,执行完子程序调用。
|
||||
|
||||
用法: `reverse-next [N]`
|
||||
|
||||
如果要执行的源代码行调用子程序,则此命令不会进入子程序,调用被视为一个指令。
|
||||
|
||||
参数 `N` 表示执行 N 次(或由于另一个原因直到程序停止)。
|
||||
|
||||
详见[官方文档][6]。
|
||||
|
||||
### return
|
||||
|
||||
您可以使用 `return` 命令取消函数调用的执行。如果你给出一个表达式参数,它的值被用作函数的返回值。
|
||||
|
||||
用法: `return <expression>` 将 `expression` 的值作为函数的返回值并使函数直接返回。
|
||||
|
||||
详见[官方文档][7]。
|
||||
|
||||
### finish
|
||||
|
||||
执行直到选定的栈帧返回。
|
||||
|
||||
用法: `finish` 返回后,返回的值将被打印并放入到值历史记录中。
|
||||
|
||||
详见[官方文档][5]。
|
||||
|
||||
### until
|
||||
|
||||
执行直到程序到达当前栈帧中当前行之后(与 [break](#break) 命令相同的参数)的源码行。此命令常用于跳过会执行多次的循环,以避免逐次单步执行。
|
||||
|
||||
用法:`until <location>` 或 `u <location>` 继续运行程序,直到达到指定的位置,或者当前栈帧返回。
|
||||
|
||||
详见[官方文档][5]。
|
||||
|
||||
### continue
|
||||
|
||||
在信号或断点之后,继续运行被调试的程序。
|
||||
|
||||
用法: `continue [N]` 如果从断点开始,可以使用数字 `N` 作为参数,这意味着将该断点的忽略计数设置为 `N - 1`(以便断点在第 N 次到达之前不会中断)。如果启用了非停止模式(使用 `show non-stop` 查看),则仅继续当前线程,否则程序中的所有线程都将继续。
|
||||
|
||||
详见[官方文档][5]。
|
||||
|
||||
### print
|
||||
|
||||
求值并打印表达式 EXP 的值。可访问的变量是所选栈帧的词法环境,以及范围为全局或整个文件的所有变量。
|
||||
|
||||
用法:
|
||||
|
||||
- `print [expr]` 或 `print /f [expr]` `expr` 是一个(在源代码语言中的)表达式。
|
||||
|
||||
默认情况下,`expr` 的值以适合其数据类型的格式打印;您可以通过指定 `/f` 来选择不同的格式,其中 `f` 是一个指定格式的字母;详见[输出格式][9]。
|
||||
|
||||
如果省略 `expr`,GDB 再次显示最后一个值。
|
||||
|
||||
要以每行一个成员带缩进的格式打印结构体变量请使用命令 `set print pretty on`,取消则使用命令 `set print pretty off`。
|
||||
|
||||
可使用命令 `show print` 查看所有打印的设置。
|
||||
|
||||
详见[官方文档][8]。
|
||||
|
||||
### x
|
||||
|
||||
检查内存。
|
||||
|
||||
用法: `x/nfu <addr>` 或 `x <addr>` `n`、`f` 和 `u` 都是可选参数,用于指定要显示的内存以及如何格式化。`addr` 是要开始显示内存的地址的表达式。
|
||||
|
||||
`n` 重复次数(默认值是 1),指定要显示多少个单位(由 `u` 指定)的内存值。
|
||||
|
||||
`f` 显示格式(初始默认值是 `x`),可以是 `print` 命令所使用的格式之一(`x`、`d`、`u`、`o`、`t`、`a`、`c`、`f`、`s`),再加上 `i`(机器指令)。
|
||||
|
||||
`u` 单位大小,`b` 表示单字节,`h` 表示双字节,`w` 表示四字节,`g` 表示八字节。
|
||||
|
||||
例如:
|
||||
|
||||
`x/3uh 0x54320` 表示从地址 0x54320 开始以无符号十进制整数的格式,双字节为单位来显示 3 个内存值。
|
||||
|
||||
`x/16xb 0x7f95b7d18870` 表示从地址 0x7f95b7d18870 开始以十六进制整数的格式,单字节为单位显示 16 个内存值。
|
||||
|
||||
详见[官方文档][10]。
|
||||
|
||||
### display
|
||||
|
||||
每次程序暂停时,打印表达式 EXP 的值。
|
||||
|
||||
用法: `display <expr>`, `display/fmt <expr>` 或 `display/fmt <addr>` `fmt` 用于指定显示格式。像 [print](#print) 命令里的 `/f` 一样。
|
||||
|
||||
对于格式 `i` 或 `s`,或者包括单位大小或单位数量,将表达式 `addr` 添加为每次程序停止时要检查的内存地址。
|
||||
|
||||
详见[官方文档][11]。
|
||||
|
||||
### info display
|
||||
|
||||
打印自动显示的表达式列表,每个表达式都带有项目编号,但不显示其值。
|
||||
|
||||
包括被禁用的表达式和不能立即显示的表达式(当前不可用的自动变量)。
|
||||
|
||||
### undisplay
|
||||
|
||||
取消某些表达式在程序暂停时的自动显示。参数是表达式的编号(使用 `info display` 查询编号)。不带参数表示取消所有自动显示表达式。
|
||||
|
||||
`delete display` 具有与此命令相同的效果。
|
||||
|
||||
### disable display
|
||||
|
||||
禁用某些表达式在程序暂停时的自动显示。禁用的显示项目不会被自动打印,但不会被忘记。 它可能稍后再次被启用。
|
||||
|
||||
参数是表达式的编号(使用 `info display` 查询编号)。不带参数表示禁用所有自动显示表达式。
|
||||
|
||||
### enable display
|
||||
|
||||
启用某些表达式在程序暂停时的自动显示。
|
||||
|
||||
参数是重新显示的表达式的编号(使用 `info display` 查询编号)。不带参数表示启用所有自动显示表达式。
|
||||
|
||||
### help
|
||||
|
||||
打印命令列表。
|
||||
|
||||
您可以使用不带参数的 `help`(缩写为 `h`)来显示命令的类别名的简短列表。
|
||||
|
||||
使用 `help <class>` 您可以获取该类中的各个命令的列表。使用 `help <command>` 显示如何使用该命令。
|
||||
|
||||
详见[官方文档][12]。
|
||||
|
||||
### attach
|
||||
|
||||
挂接到 GDB 之外的进程或文件。该命令可以将进程 ID 或设备文件作为参数。
|
||||
|
||||
对于进程 ID,您必须具有向该进程发送信号的权限,并且该进程必须具有与调试器相同的有效 uid。
|
||||
|
||||
用法: `attach <process-id>` GDB 在安排调试指定的进程之后做的第一件事是暂停该进程。
|
||||
|
||||
无论是通过 `attach` 命令挂接的进程,还是通过 `run` 命令启动的进程,您都可以使用同样的 GDB 命令来检查和修改它。
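下面是一个从 shell 中挂接到正在运行的进程的示意(进程名 `myserver` 是假设的,也可以直接给出进程 ID):

```
# -p 等价于在 gdb 内执行 attach,挂接后目标进程会被暂停
$ gdb -q -p $(pidof myserver)
```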
|
||||
|
||||
详见[官方文档][13]。
|
||||
|
||||
### run
|
||||
|
||||
启动被调试的程序。
|
||||
|
||||
可以直接指定参数,也可以用 [set args][15] 设置(启动所需的)参数。
|
||||
|
||||
例如: `run arg1 arg2 ...` 等效于
|
||||
|
||||
```
|
||||
set args arg1 arg2 ...
|
||||
run
|
||||
```
|
||||
|
||||
还允许使用 `>`、 `<` 或 `>>` 进行输入和输出重定向。
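下面是一个从 shell 中以批处理方式启动程序并重定向输出的示意(可执行文件 `app` 为假设):

```
# run 命令中的重定向只作用于被调试的程序
$ gdb -q --batch -ex 'run arg1 arg2 > out.log' ./app
```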
|
||||
|
||||
详见[官方文档][14]。
|
||||
|
||||
### backtrace
|
||||
|
||||
打印整体栈帧信息。
|
||||
|
||||
- `bt` 打印整体栈帧信息,每个栈帧一行。
|
||||
- `bt n` 类似于上,但只打印最内层的 n 个栈帧。
|
||||
- `bt -n` 类似于上,但只打印最外层的 n 个栈帧。
|
||||
- `bt full n` 类似于 `bt n`,还打印局部变量的值。
|
||||
|
||||
`where` 和 `info stack`(缩写 `info s`) 是 `backtrace` 的别名。调用栈信息类似如下:
|
||||
|
||||
```
|
||||
(gdb) where
|
||||
#0 vconn_stream_run (vconn=0x99e5e38) at lib/vconn-stream.c:232
|
||||
#1 0x080ed68a in vconn_run (vconn=0x99e5e38) at lib/vconn.c:276
|
||||
#2 0x080dc6c8 in rconn_run (rc=0x99dbbe0) at lib/rconn.c:513
|
||||
#3 0x08077b83 in ofconn_run (ofconn=0x99e8070, handle_openflow=0x805e274 <handle_openflow>) at ofproto/connmgr.c:1234
|
||||
#4 0x08075f92 in connmgr_run (mgr=0x99dc878, handle_openflow=0x805e274 <handle_openflow>) at ofproto/connmgr.c:286
|
||||
#5 0x08057d58 in ofproto_run (p=0x99d9ba0) at ofproto/ofproto.c:1159
|
||||
#6 0x0804f96b in bridge_run () at vswitchd/bridge.c:2248
|
||||
#7 0x08054168 in main (argc=4, argv=0xbf8333e4) at vswitchd/ovs-vswitchd.c:125
|
||||
```
|
||||
|
||||
详见[官方文档][16]。
|
||||
|
||||
### ptype
|
||||
|
||||
打印类型 TYPE 的定义。
|
||||
|
||||
用法: `ptype[/FLAGS] TYPE-NAME | EXPRESSION`
|
||||
|
||||
参数可以是由 `typedef` 定义的类型名, 或者 `struct STRUCT-TAG` 或者 `class CLASS-NAME` 或者 `union UNION-TAG` 或者 `enum ENUM-TAG`。
|
||||
|
||||
根据所选的栈帧的词法上下文来查找该名字。
|
||||
|
||||
类似的命令是 `whatis`,区别在于 `whatis` 不展开由 `typedef` 定义的数据类型,而 `ptype` 会展开,举例如下:
|
||||
|
||||
```
|
||||
/* 类型声明与变量定义 */
|
||||
typedef double real_t;
|
||||
struct complex {
|
||||
real_t real;
|
||||
double imag;
|
||||
};
|
||||
typedef struct complex complex_t;
|
||||
|
||||
complex_t var;
|
||||
real_t *real_pointer_var;
|
||||
```
|
||||
|
||||
这两个命令给出了如下输出:
|
||||
|
||||
```
|
||||
(gdb) whatis var
|
||||
type = complex_t
|
||||
(gdb) ptype var
|
||||
type = struct complex {
|
||||
real_t real;
|
||||
double imag;
|
||||
}
|
||||
(gdb) whatis complex_t
|
||||
type = struct complex
|
||||
(gdb) whatis struct complex
|
||||
type = struct complex
|
||||
(gdb) ptype struct complex
|
||||
type = struct complex {
|
||||
real_t real;
|
||||
double imag;
|
||||
}
|
||||
(gdb) whatis real_pointer_var
|
||||
type = real_t *
|
||||
(gdb) ptype real_pointer_var
|
||||
type = double *
|
||||
```
|
||||
|
||||
详见[官方文档][17]。
|
||||
|
||||
------
|
||||
|
||||
### 参考资料
|
||||
|
||||
- [Debugging with GDB](https://sourceware.org/gdb/current/onlinedocs/gdb/)
|
||||
|
||||
------
|
||||
|
||||
译者:[robot527](https://github.com/robot527)
|
||||
校对:[mudongliang](https://github.com/mudongliang), [wxy](https://github.com/wxy)
|
||||
|
||||
[1]: https://sourceware.org/gdb/current/onlinedocs/gdb/Set-Breaks.html
|
||||
[2]: https://sourceware.org/gdb/current/onlinedocs/gdb/Disabling.html
|
||||
[3]: https://sourceware.org/gdb/current/onlinedocs/gdb/Delete-Breaks.html
|
||||
[4]: https://sourceware.org/gdb/current/onlinedocs/gdb/Set-Watchpoints.html
|
||||
[5]: https://sourceware.org/gdb/current/onlinedocs/gdb/Continuing-and-Stepping.html
|
||||
[6]: https://sourceware.org/gdb/current/onlinedocs/gdb/Reverse-Execution.html
|
||||
[7]: https://sourceware.org/gdb/current/onlinedocs/gdb/Returning.html
|
||||
[8]: https://sourceware.org/gdb/current/onlinedocs/gdb/Data.html
|
||||
[9]: https://sourceware.org/gdb/current/onlinedocs/gdb/Output-Formats.html
|
||||
[10]: https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html
|
||||
[11]: https://sourceware.org/gdb/current/onlinedocs/gdb/Auto-Display.html
|
||||
[12]: https://sourceware.org/gdb/current/onlinedocs/gdb/Help.html
|
||||
[13]: https://sourceware.org/gdb/current/onlinedocs/gdb/Attach.html
|
||||
[14]: https://sourceware.org/gdb/current/onlinedocs/gdb/Starting.html
|
||||
[15]: https://sourceware.org/gdb/current/onlinedocs/gdb/Arguments.html
|
||||
[16]: https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html
|
||||
[17]: https://sourceware.org/gdb/current/onlinedocs/gdb/Symbols.html
|
@ -1,4 +1,4 @@
|
||||
translating by XYenChi
|
||||
XYenChi is translating
|
||||
A 5-step plan to encourage your team to make changes on your project
|
||||
============================================================
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by gitlilys
|
||||
|
||||
GOOGLE CHROME–ONE YEAR IN
|
||||
========================================
|
||||
|
||||
|
@ -1,83 +0,0 @@
|
||||
Translated by DPueng
|
||||
|
||||
Why we need open leaders more than ever
|
||||
============================================================
|
||||
|
||||
### Changing social and cultural conditions are giving rise to open leadership.
|
||||
|
||||
Posted 02 Feb 2017[Philip A Foster][10][Feed][9]13[up][6]
|
||||

|
||||
>Image by : opensource.com
|
||||
|
||||
Leadership is power. More specifically, leadership is the power to influence the actions of others. The mythology of leadership can certainly conjure images of not only the romantic but also the sinister side of the human condition. How we ultimately decide to engage in leadership determines its true nature.
|
||||
|
||||
Many modern understandings of leadership are born out of warfare, where leadership is the skillful execution of command-and-control thinking. For most of the modern era of business, then, we engaged leadership as some great man or woman arriving at the pinnacle of power and exerting this power through position. Such traditional leadership relies heavily on formal lines of authority through hierarchies and reporting relationships. Authority in these structures flows down through the vertical hierarchy and exists along formal lines in the chain of command.
|
||||
|
||||
>Open leaders quickly discover that leadership is not about the power we exert to influence progress, but the power and confidence we distribute among the members of the organization.
|
||||
|
||||
However, in the late 20th century, something began to change. New technologies opened doors to globalism and thus more dispersed teams. The way we engaged human capital began to shift, forever changing the way people communicate with each other. People inside organizations began to feel empowered, and they demanded a sense of ownership of their successes (and failures). Leaders were no longer the sole owners of power. The 21st century leader leading the 21st century organization began to understand empowerment, collaboration, accountability, and clear communication were the essence of a new kind of power. These new leaders began _sharing_ that power—and they implicitly trusted their followers.
|
||||
|
||||
As organizations continue becoming more open, even individuals without "leadership" titles feel empowered to drive change. These organizations remove the chains of hierarchy and untether workers to do their jobs in the ways they best see fit. History has exposed 20th century leaders' tendencies to strangle agility through unilateral decision-making and unidirectional information flows. But the new century's leader best defines an organization by the number of individuals it empowers to get something done. There's power in numbers—and, frankly, one leader cannot be in all places at all times, making all the decisions.
|
||||
|
||||
So leaders are becoming open, too.
|
||||
|
||||
### Control
|
||||
|
||||
Where the leaders of old are focused on command-and-control positional power, an open leader cedes organizational control to others via new forms of organizational governance, new technologies, and other means of reducing friction, thereby enabling collective action in a more efficient manner. These leaders understand the power of trust, and believe followers will always show initiative, engagement, and independence. And this new brand of leadership requires a shift in tactics—from _telling people what to do_ to _showing them what to do_ and _coaching them along the way_. Open leaders quickly discover that leadership is not about the power we exert to influence progress, but the power and confidence we _distribute_ among the members of the organization. The 21stcentury leader is focused on community and the edification of others. In the end, the open leader is not focused on self but is selfless.
|
||||
|
||||
### Communication
|
||||
|
||||
The 20th century leader hordes and controls the flow of information throughout the organization. The open leader, however, seeks to engage an organization by sharing information and context (as well as authority) with members of a team. These leaders destroy fiefdoms, walk humbly, and share power like never before. The collective empowerment and engaged collaboration they inspire create agility, shared responsibility, ownership—and, above all, happiness. When members of an organization are empowered to do their jobs, they're happier (and thus more productive) than their hierarchical counterparts.
|
||||
|
||||
### Trust
|
||||
|
||||
Open leaders embrace uncertainty and trust their followers to do the right thing at the right time. They possess an ability to engage human capital at a higher level of efficiency than their traditional counterparts. Again: They don't operate as command-and-control micromanagers. Elevating transparency, they don't operate in hiding, and they do their best to keep decisions and actions out in the open, explaining the basis on which decisions get made and assuming employees have a high level grasp of situations within the organization. Open leaders operate from the premise that the organization's human capital is more than capable of achieving success without their constant intervention.
|
||||
|
||||
### Autonomy
|
||||
|
||||
Where the powerful command-and-control 20th century leader is focused on some _position_ of power, an open leader is more interested in the actual _role_ an individual plays within the organization. When a leader is focused on an _individual_, they're better able to coach and mentor members of a team. From this perspective, an open leader is focused on modeling behaviors and actions that are congruent with the organization's vision and mission. In the end, an open leader is very much seen as a member of the team rather than the _head_ of the team. This does not mean the leader abdicates a position of authority, but rather understates it in an effort to share power and empower individuals through autonomy to create results.
|
||||
|
||||
### Empowerment
|
||||
|
||||
Open leaders are focused on granting authority to members of an organization. This process acknowledges the skills, abilities, and trust the leader has in the organization's human capital, and thereby creates positive motivation and willingness for the entire team to take risks. Empowerment, in the end, is about helping followers believe in their own abilities. Followers who believe that they have personal power are more likely to undertake initiatives, set and achieve higher goals, and persist in the face of difficult circumstances. Ultimately the concept of an open organization is about inclusivity, where everyone belongs and individuality and differing opinions are essential to success. An open organization and its open leaders offer a sense of community, and members are motivated by the organization's mission or purpose. This creates a sense of belonging to something bigger than the individual. Individuality creates happiness and job satisfaction among its members. In turn, higher degrees of efficiency and success are achieved.
|
||||
|
||||
>More Open Organization Resources
|
||||
|
||||
* [Download the Open Organization Leaders Manual][1]
|
||||
* [Download the Open Organization Field Guide][2]
|
||||
* [What is an Open Organization?][3]
|
||||
* [What is an Open Decision?][4]
|
||||
|
||||
We should all strive for the openness the 21st century leader requires. This requires self-examination, curiosity—and, above all, it's ongoing process of change. Through new attitudes and habits, we move toward the discovery of what an open leader really _is _and _does,_ and hopefully we begin to take on those ideals as we adapt our leadership styles to the 21st century.
|
||||
|
||||
Yes, leadership is power. How we use that power determines the success or failure of our organizations. Those who abuse power don't last, but those who share power and celebrate others do. By reading [this book][7], you are beginning to play an important role in the ongoing conversation of the open organization and its leadership. And at the conclusion of [this volume][8], you'll find additional resources and opportunities to connect with the open organization community, so that you too can chat, think, and grow with us. Welcome to the conversation—welcome to the journey!
|
||||
|
||||
_This article originally appeared as the introduction to _The Open Organization Leaders Manual_, now [available from Opensource.com][5]._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
译者简介:
|
||||
|
||||
Philip A Foster - Dr. Philip A. Foster is a leadership/business coach and consultant and Adjunct Professor. He is a noted Thought Leader in Business Operations, Organizational Development, Foresight and Strategic Leadership. Dr. Foster facilitates change through the design and implementation of strategies, strategic foresight, and planning.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/17/2/need-open-leaders-more-ever
|
||||
|
||||
作者:[Philip A Foster][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/maximumchange
|
||||
[1]:https://opensource.com/open-organization/resources/leaders-manual?src=too_resource_menu
|
||||
[2]:https://opensource.com/open-organization/resources/field-guide?src=too_resource_menu
|
||||
[3]:https://opensource.com/open-organization/resources/open-org-definition?src=too_resource_menu
|
||||
[4]:https://opensource.com/open-organization/resources/open-decision-framework?src=too_resource_menu
|
||||
[5]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[6]:https://opensource.com/open-organization/17/2/need-open-leaders-more-ever?rate=c_9hT0EKbdXcTGRl-YW0QgW60NsRwO2a4RaplUKfvXs
|
||||
[7]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[8]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[9]:https://opensource.com/user/15497/feed
|
||||
[10]:https://opensource.com/users/maximumchange
|
@ -1,5 +1,3 @@
|
||||
Translating by windcode
|
||||
|
||||
# Why DevOps is the end of security as we know it
|
||||
|
||||

|
||||
|
@ -1,7 +1,7 @@
|
||||
Education of a Programmer
|
||||
============================================================
|
||||
|
||||
_When I left Microsoft in October 2016 after almost 21 years there and almost 35 years in the industry, I took some time to reflect on what I had learned over all those years. This is a lightly edited version of that post. Pardon the length!_
|
||||
|
||||
|
||||
There are an amazing number of things you need to know to be a proficient programmer — details of languages, APIs, algorithms, data structures, systems and tools. These things change all the time — new languages and programming environments spring up and there always seems to be some hot new tool or language that “everyone” is using. It is important to stay current and proficient. A carpenter needs to know how to pick the right hammer and nail for the job and needs to be competent at driving the nail straight and true.
|
||||
|
||||
|
@ -1,7 +1,3 @@
|
||||
申请翻译 by WangYueScream
|
||||
==================================
|
||||
|
||||
|
||||
A Window Into the Linux Desktop
|
||||
============================================================
|
||||
|
||||
|
@ -1,103 +0,0 @@
|
||||
Security Debt is an Engineer’s Problem
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||

|
||||
>Keziah Plattner of AirBnBSecurity.
|
||||
|
||||
Just as organizations can build up technical debt, they can also build up something called “security debt” if they don’t plan accordingly, attendees learned at the [WomenWhoCode Connect][5] event at Twitter headquarters in San Francisco last month.
|
||||
|
||||
Security has got to be integral to every step of the software development process, stressed [Mary Ann Davidson][6], Oracle’s Chief Security Officer, in a keynote conversation about security for developers with [Zassmin Montes de Oca][7] of [WomenWhoCode][8].
|
||||
|
||||
In the past, security was ignored by pretty much everyone except banks. But security is more critical than it has ever been because there are so many access points. We’ve entered the era of the [Internet of Things][9], where thieves can just hack your fridge to see that you’re not home.
|
||||
|
||||
Davidson is in charge of assurance at Oracle, “making sure we build security into everything we build, whether it’s an on-premise product, whether it’s a cloud service, even devices we have that support group builds at customer sites and reports data back to us, helping us do diagnostics — every single one of those things has to have security engineered into it.”
|
||||
|
||||

|
||||
|
||||
Plattner talking to a capacity crowd at #WWCConnect
|
||||
|
||||
AirBnB’s [Keziah Plattner][10] echoed that sentiment in her breakout session. “Most developers don’t see security as their job,” she said, “but this has to change.”
|
||||
|
||||
She shared four basic security principles for engineers. First, security debt is expensive. There’s a lot of talk about [technical debt][11], and she thinks security debt should be included in those conversations.
|
||||
|
||||
“This historical attitude is ‘We’ll think about security later,’” Plattner said. As companies grab the low-hanging fruit of software efficiency and growth, they ignore security, but an initial insecure design can cause problems for years to come.
|
||||
|
||||
It’s very hard to add security to an existing vulnerable system, she said. Even when you know where the security holes are and have budgeted the time and resources to make the changes, it’s time-consuming and difficult to re-engineer a secure system.
|
||||
|
||||
So it’s key, she said, to build security into your design from the start. Think of security as part of the technical debt to avoid. And cover all possibilities.
|
||||
|
||||
Most important, according to Plattner, is the difficulty of getting people to change their behavior. No one will change voluntarily, she said, even when you point out that the new behavior is more secure. We all nodded.
|
||||
|
||||
Davidson said engineers need to start thinking about how their code could be attacked, and design from that perspective. She said she has only two rules. The first is to never trust any unvalidated data, and rule two is to see rule one.
|
||||
|
||||
“People do this all the time. They say ‘My client sent me the data so it will be fine.’ Nooooooooo,” she said, to laughs.
|
||||
|
||||
The second key to security, Plattner said, is “never trust users.”
|
||||
|
||||
Davidson put it another way: “My job is to be a professional paranoid.” She worries all the time about how someone might breach her systems, even inadvertently. This is not academic; there have been recent denial-of-service attacks through IoT devices.
|
||||
|
||||
### Little Bobby Tables
|
||||
|
||||
If part of your security plan is trusting users to do the right thing, your system is inherently insecure regardless of whatever other security measures you have in place, said Plattner.
|
||||
|
||||
It’s important to properly sanitize all user input, she explained, showing the [XKCD cartoon][12] where a mom wiped out an entire school database because her son’s middle name was “DropTable Students.”
|
||||
|
||||
So sanitize all user input. Check.
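As a minimal sketch of what that looks like in practice (the talk didn’t prescribe a stack, so Python and the standard library’s sqlite3 module are assumptions here), a parameterized query binds untrusted input as data rather than splicing it into the SQL text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# Hostile input in the spirit of the cartoon (hypothetical value).
middle_name = "Robert'); DROP TABLE students;--"

# Unsafe: string formatting splices untrusted text directly into the SQL.
# conn.execute("INSERT INTO students (name) VALUES ('%s')" % middle_name)

# Safe: the placeholder passes the value as data, never as SQL syntax.
conn.execute("INSERT INTO students (name) VALUES (?)", (middle_name,))
conn.commit()

print(conn.execute("SELECT name FROM students").fetchall())
```

Parameterization is only one piece of the picture; validating that input actually matches what you expect (length, format, character set) still belongs on top of it.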
|
||||
|
||||
She showed an example of JavaScript developers using eval() in open source code. “A good ground rule is ‘Never use eval(),’” she cautioned. The [eval()][13] function evaluates JavaScript code. “You’re opening your system to random users if you do.”
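The session’s example was JavaScript’s eval(), but the same footgun exists elsewhere. As a hedged illustration of the rule, here is a small Python analogue (the variable and its contents are hypothetical) that parses untrusted input with a real parser instead of evaluating it:

```python
import json

# Untrusted text that arrived over the network (hypothetical value).
user_supplied = '{"theme": "dark", "font_size": 14}'

# Dangerous: eval() would execute any code embedded in the string.
# settings = eval(user_supplied)

# Safer: json.loads() only ever produces data; it cannot run code.
settings = json.loads(user_supplied)
print(settings["theme"])
```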
|
||||
|
||||
Davidson cautioned that her paranoia extends to security-testing the example code in documentation. “Because we all know no one ever copies sample code,” she said to laughter. She underscored the point that any code should be subject to security checks.
|
||||
|
||||

|
||||
|
||||
### Make it easy
|
||||
|
||||
Plattner’s suggestion three: Make security easy. Take the path of least resistance, she suggested.
|
||||
|
||||
Externally, make users opt out of security instead of opting in, or, better yet, make it mandatory. Changing people’s behavior is the hardest problem in tech, she said. Once users get used to using your product in a non-secure way, getting them to change in the future is extremely difficult.
|
||||
|
||||
Internally, she suggested making tools that standardize security so it’s not something individual developers need to think about. For example, offer encryption of data as a service so engineers can simply call the service to encrypt or decrypt data.
|
||||
|
||||
Make sure that your company is focused on good security hygiene, she said. Switch to good security habits across the company.
|
||||
|
||||
You’re only as secure as your weakest link, so it’s important that each individual has good personal security hygiene as well as good corporate security hygiene.
|
||||
|
||||
At Oracle, they’ve got this covered. Davidson said she got tired of explaining security to engineers who graduated college with absolutely no security training, so she wrote the first coding standards at Oracle. There are now hundreds of pages with lots of contributors, and there are mandatory classes. They have metrics for compliance with security requirements, and they measure them. The classes are not just for engineers, but for doc writers as well. “It’s a cultural thing,” she said.
|
||||
|
||||
And what discussion about security would be secure without a mention of passwords? Everyone should be using a good password manager, Plattner said, and they should be mandatory for work, along with two-factor authentication.
|
||||
|
||||
Basic password principles should be a part of every engineer’s waking life, she said. What matters most in passwords is their length and entropy — making the collection of keystrokes as random as possible. A robust password entropy checker is invaluable for this. She recommends [zxcvbn][14], the Dropbox open-source entropy checker.
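zxcvbn itself is a JavaScript library, but ports exist for other languages. Assuming the Python port published on PyPI as `zxcvbn`, a quick strength check might look like this sketch:

```python
from zxcvbn import zxcvbn  # pip install zxcvbn (Python port of Dropbox's checker)

candidate = "correct horse battery staple"

# The result includes a 0-4 score plus an estimated number of guesses
# an attacker would need; both grow with length and randomness.
result = zxcvbn(candidate)
print(result["score"], result["guesses"])
```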
|
||||
|
||||
Another trick is to use something intentionally slow like [bcrypt][15] when authenticating user input, said Plattner. The slowness doesn’t bother most legit users but irritates hackers who try to brute-force password attempts.
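As a rough sketch of that trick, assuming the Python `bcrypt` package, hashing and verifying a password looks like the following; the cost factor is what makes each attempt intentionally slow:

```python
import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# gensalt()'s rounds parameter is the cost factor: each increment roughly
# doubles how long a single hash (and therefore a single guess) takes.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification re-hashes the candidate using the salt embedded in `hashed`.
if bcrypt.checkpw(password, hashed):
    print("password accepted")
```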
|
||||
|
||||
All of this adds up to job security for anyone wanting to get into the security side of technology, said Davidson. We’re putting more code in more places, she said, and that creates systemic risk. “I don’t think anybody is not going to have a job in security as long as we keep doing interesting things in technology.”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://thenewstack.io/security-engineers-problem/
|
||||
|
||||
作者:[TC Currie][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://thenewstack.io/author/tc/
|
||||
[1]:http://twitter.com/share?url=https://thenewstack.io/security-engineers-problem/&text=Security+Debt+is+an+Engineer%E2%80%99s+Problem+
|
||||
[2]:http://www.facebook.com/sharer.php?u=https://thenewstack.io/security-engineers-problem/
|
||||
[3]:http://www.linkedin.com/shareArticle?mini=true&url=https://thenewstack.io/security-engineers-problem/
|
||||
[4]:https://thenewstack.io/security-engineers-problem/#disqus_thread
|
||||
[5]:http://connect2017.womenwhocode.com/
|
||||
[6]:https://www.linkedin.com/in/mary-ann-davidson-235ba/
|
||||
[7]:https://www.linkedin.com/in/zassmin/
|
||||
[8]:https://www.womenwhocode.com/
|
||||
[9]:https://www.thenewstack.io/tag/Internet-of-Things
|
||||
[10]:https://twitter.com/ittskeziah
|
||||
[11]:https://martinfowler.com/bliki/TechnicalDebt.html
|
||||
[12]:https://xkcd.com/327/
|
||||
[13]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval
|
||||
[14]:https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/
|
||||
[15]:https://en.wikipedia.org/wiki/Bcrypt
|
||||
[16]:https://thenewstack.io/author/tc/
|
@ -1,122 +0,0 @@
|
||||
translating by @explosic4
|
||||
|
||||
Why working openly is hard when you just want to get stuff done
|
||||
============================================================
|
||||
|
||||
### Learn how to create a book using the Open Decision Framework.
|
||||
|
||||
|
||||

|
||||
>Image by : opensource.com
|
||||
|
||||
Three letters guide the way I work: GSD—get stuff done. Over the years, I've managed to blend concepts like feedback loops (from lean methodologies) and iterative improvement (from Agile) into my everyday work habits so I can better GSD (if I can use that as a verb). This means being extremely efficient with my time: outlining clear, discrete goals; checking completed items off a master list; and advancing projects forward iteratively and constantly. But can someone still GSD while defaulting to open? Or is this when getting stuff done comes to a grinding halt? Most would assume the worst, but I found that's not necessarily the case.
|
||||
|
||||
Working in the open and using guidance from the [Open Decision Framework][6] can get projects off to a slower start. But during a recent project, we made the decision—right at the beginning—to work openly and collaborate with our community.
|
||||
|
||||
Open Organization resources
|
||||
|
||||
* [Download the Open Organization Guide to IT Culture Change][1]
|
||||
|
||||
* [Download the Open Organization Leaders Manual][2]
|
||||
|
||||
* [What is an Open Organization?][3]
|
||||
|
||||
* [What is an Open Decision?][4]
|
||||
|
||||
It was the best decision we could have made.
|
||||
|
||||
Let's take a look at a few unexpected consequences from this experience and see how we can incorporate a GSD mentality into the Open Decision Framework.
|
||||
|
||||
### Building community
|
||||
|
||||
In November 2014, I undertook a new project: build a community around the concepts in _The Open Organization_ , a forthcoming (at the time) book by Red Hat CEO Jim Whitehurst. I thought, "Cool, that sounds like a challenge—I'm in!" Then [impostor syndrome][7] set in. I started thinking: "What in the world are we going to do, and what would success look like?"
|
||||
|
||||
_Spoiler alert_ . At the end of the book, Jim recommends that readers visit Opensource.com to continue the conversation about openness and management in the 21st century. So, in May 2015, my team launched a new section of the site dedicated to those ideas. We planned to engage in some storytelling, just like we always do at Opensource.com—this time around the ideas and concepts in the book. Since then, we've published new articles every week, hosted an online book club complete with Twitter chats, and turned _The Open Organization_ into [a book series][8].
|
||||
|
||||
We produced the first three installments of our book series in-house, releasing one every six months. When we finished one, we'd announce it to the community. Then we'd get to work on the next one, and the cycle would continue.
|
||||
|
||||
Working this way, we saw great success. Nearly 3,000 people have registered to receive the [latest book in the series][9], _The Open Organization Leaders Manual_ . And we've maintained our six-month cadence, which means the next book would coincide with the second anniversary of the book.
|
||||
|
||||
Behind the scenes, our process for creating the books is fairly straightforward: We collect our best-of-best stories about particular aspects of working openly, organize them into a compelling narrative, recruit writers to fill some gaps, typeset everything with open tools, collaborate with designers on a cover, and release. Working like this allows us to stay on our own timeline—GSD, full steam ahead. By the [third book][10], we seemed to have perfected the process.
|
||||
|
||||
That all changed when we began planning the latest volume in the _Open Organization_ series, one focused on the intersection of open organizations and IT culture. I proposed using the Open Decision Framework because I wanted this book to be proof that working openly produced better results, even though I knew it would completely change our approach to the work. In spite of a fairly aggressive timeline (about two-and-a-half months), we decided to try it.
|
||||
|
||||
### Creating a book with the Open Decision Framework
|
||||
|
||||
The Open Decision Framework lists four phases that constitute the open decision-making process. Here's what we did during each (and how it worked out).
|
||||
|
||||
### 1\. Ideation
|
||||
|
||||
First, we drafted a document outlining a tentative vision for the project. We needed something we could begin sharing with potential "customers" (in our case, potential stakeholders and authors). Then we scheduled interviews with subject matter experts we thought would be interested in the project—people who would give us raw, honest feedback about it. Enthusiasm and guidance from those experts validated our idea and gave us the feedback we needed to move forward. If we hadn't gotten that validation, we'd have gone back to our proposal and made a decision on where to pivot and start over.
|
||||
|
||||
### 2\. Planning and research
|
||||
|
||||
With validation after several interviews, we prepared to [announce the project publicly on Opensource.com][11]. At the same time, we [launched the project on GitHub][12], offering a description, prospective timeline, and set of constraints. The project announcement was so well-received that all remaining holes in our proposed table of contents were filled within 72 hours. In addition (and more importantly), readers proposed ideas for chapters that _weren't_ already in the table of contents—things they thought might enhance the vision we'd initially sketched.
|
||||
|
||||
We experienced Linus' Law firsthand: "With more eyes, all _typos_ are shallow."
|
||||
|
||||
Looking back, I get the sense that working openly on Phases 1 and 2 really didn't negatively impact our ability to GSD. In fact, working this way had a huge upside: identifying and filling content gaps. We didn't just fill them; we filled them _rapidly_ and with chapter ideas we never would have considered on our own. This didn't necessarily involve more work—just work of a different type. We found ourselves asking people in our limited network to write chapters, then managing incoming requests, setting context, and pointing people in the right direction.
|
||||
|
||||
### 3\. Design, development, and testing
|
||||
|
||||
This point in the project was all about project management, cat herding, and maintaining expectations. We were on a deadline, and we communicated that early and often. We also used a tactic of creating a list of contributors and stakeholders and keeping them updated and informed along the entire journey, particularly at milestones we'd identified on GitHub.
|
||||
|
||||
Eventually, our book needed a title. We gathered lots of feedback on what the title should be, and more importantly what it _shouldn't_ be. We [opened an issue][13] as one way of gathering feedback, then openly shared that my team would be making the final decision. When we were ready to announce the final title, my colleague Bryan Behrenshausen did a great job [sharing the context for the decision][14]. People seemed to be happy with it—even if they didn't agree with where we landed with the final title.
|
||||
|
||||
Book "testing" involved extensive [proofreading][15]. The community really stepped up to answer this "help wanted" request. We received approximately 80 comments on the GitHub issue outlining the proofing process (not to mention numerous additional interactions from others via email and other feedback channels).
|
||||
|
||||
With respect to getting things done: In this phase, we experienced [Linus' Law][16] firsthand: "With more eyes, all _typos_ are shallow." Had we used the internalized method we'd used for our three previous book projects, the entire burden of proofing would have fallen on our shoulders (as it did for those books!). Instead, community members graciously helped us carry the burden of proofing, and our work shifted from proofing itself (though we still did plenty of that) to managing all the change requests coming in. This was a much-welcomed change for our team and a chance for the community to participate. We certainly would have finished the proofing faster if we'd done it ourselves, but working on this in the open undeniably allowed us to catch a greater number of errors in advance of our deadline.
|
||||
|
||||
### 4\. Launch
|
||||
|
||||
And here we are, on the cusp of launching the final (or is it just the first?) version of the book.
|
||||
|
||||
Following the Open Decision Framework was key to the success of the Guide to IT Culture Change.
|
||||
|
||||
Our approach to launch consists of two phases. First, in keeping with our public project timeline, we quietly soft-launched the book days ago so our community of contributors could help us test the [download form][17]. The second phase begins right now, with the final, formal announcement of the book's [general availability][18]. Of course, we'll continue accepting additional feedback post-launch, as is the open source way.
|
||||
|
||||
### Achievement unlocked
|
||||
|
||||
Following the Open Decision Framework was key to the success of the _Guide to IT Culture Change_ . By working with our customers and stakeholders, sharing our constraints, and being transparent with the work, we exceeded even our own expectations for the book project.
|
||||
|
||||
I was definitely pleased with the collaboration, feedback, and activity we experienced throughout the entire project. And although the feeling of anxiety about not getting stuff done as quickly as I'd have liked loomed over me for a time, I soon realized that opening up the process actually allowed us to get _more_ done than we would have otherwise. That should be evident based on some of the outcomes I outlined above.
|
||||
|
||||
So perhaps I should reconsider my GSD mentality and expand it to GMD: Get **more** done—and, in this case, with better results.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Jason Hibbets - Jason Hibbets is a senior community evangelist in Corporate Marketing at Red Hat where he is a community manager for Opensource.com. He has been with Red Hat since 2003 and is the author of The foundation for an open source city. Prior roles include senior marketing specialist, project manager, Red Hat Knowledgebase maintainer, and support engineer. Follow him on Twitter:
|
||||
|
||||
-----------
|
||||
|
||||
via: https://opensource.com/open-organization/17/6/working-open-and-gsd
|
||||
|
||||
作者:[Jason Hibbets ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jhibbets
|
||||
[1]:https://opensource.com/open-organization/resources/culture-change?src=too_resource_menu
|
||||
[2]:https://opensource.com/open-organization/resources/leaders-manual?src=too_resource_menu
|
||||
[3]:https://opensource.com/open-organization/resources/open-org-definition?src=too_resource_menu
|
||||
[4]:https://opensource.com/open-organization/resources/open-decision-framework?src=too_resource_menu
|
||||
[5]:https://opensource.com/open-organization/17/6/working-open-and-gsd?rate=ZgpGc0D07SjGkTOf708lnNqbF_HvkhXTXeSzRKMhvVM
|
||||
[6]:https://opensource.com/open-organization/resources/open-decision-framework
|
||||
[7]:https://opensource.com/open-organization/17/5/team-impostor-syndrome
|
||||
[8]:https://opensource.com/open-organization/resources
|
||||
[9]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[10]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[11]:https://opensource.com/open-organization/17/3/announcing-it-culture-book
|
||||
[12]:https://github.com/open-organization-ambassadors/open-org-it-culture
|
||||
[13]:https://github.com/open-organization-ambassadors/open-org-it-culture/issues/20
|
||||
[14]:https://github.com/open-organization-ambassadors/open-org-it-culture/issues/20#issuecomment-297970303
|
||||
[15]:https://github.com/open-organization-ambassadors/open-org-it-culture/issues/29
|
||||
[16]:https://en.wikipedia.org/wiki/Linus%27s_Law
|
||||
[17]:https://opensource.com/open-organization/resources/culture-change
|
||||
[18]:https://opensource.com/open-organization/resources/culture-change
|
||||
[19]:https://opensource.com/user/10530/feed
|
||||
[20]:https://opensource.com/users/jhibbets
|
@ -1,5 +1,3 @@
|
||||
Translating by sanfusu
|
||||
|
||||
Network automation with Ansible
|
||||
================
|
||||
|
||||
|
@ -1,140 +0,0 @@
|
||||
Running MongoDB as a Microservice with Docker and Kubernetes
|
||||
===================
|
||||
|
||||
### Introduction
|
||||
|
||||
Want to try out MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all traces when you're done.
|
||||
|
||||
Need an identical copy of your application stack in multiple environments? Build your own container image and let your development, test, operations, and support teams launch an identical clone of your environment.
|
||||
|
||||
Containers are revolutionizing the entire software lifecycle: from the earliest technical experiments and proofs of concept through development, test, deployment, and support.
|
||||
|
||||
#### [Read the Enabling Microservices: Containers & Orchestration Explained white paper][6].
|
||||
|
||||
Orchestration tools manage how multiple containers are created, upgraded and made highly available. Orchestration also controls how containers are connected to build sophisticated applications from multiple, microservice containers.
|
||||
|
||||
The rich functionality, simple tools, and powerful APIs make container and orchestration functionality a favorite for DevOps teams who integrate them into Continuous Integration (CI) and Continuous Delivery (CD) workflows.
|
||||
|
||||
This post delves into the extra challenges you face when attempting to run and orchestrate MongoDB in containers and illustrates how these challenges can be overcome.
|
||||
|
||||
### Considerations for MongoDB
|
||||
|
||||
Running MongoDB with containers and orchestration introduces some additional considerations:
|
||||
|
||||
* MongoDB database nodes are stateful. In the event that a container fails, and is rescheduled, it's undesirable for the data to be lost (it could be recovered from other nodes in the replica set, but that takes time). To solve this, features such as the _Volume_ abstraction in Kubernetes can be used to map what would otherwise be an ephemeral MongoDB data directory in the container to a persistent location where the data survives container failure and rescheduling.
|
||||
|
||||
* MongoDB database nodes within a replica set must communicate with each other – including after rescheduling. All of the nodes within a replica set must know the addresses of all of their peers, but when a container is rescheduled, it is likely to be restarted with a different IP address. For example, all containers within a Kubernetes Pod share a single IP address, which changes when the pod is rescheduled. With Kubernetes, this can be handled by associating a Kubernetes Service with each MongoDB node, which uses the Kubernetes DNS service to provide a `hostname` for the service that remains constant through rescheduling.
|
||||
|
||||
* Once each of the individual MongoDB nodes is running (each within its own container), the replica set must be initialized and each node added. This is likely to require some additional logic beyond that offered by off-the-shelf orchestration tools. Specifically, one MongoDB node within the intended replica set must be used to execute the `rs.initiate` and `rs.add` commands (see the sketch after this list).
|
||||
|
||||
* If the orchestration framework provides automated rescheduling of containers (as Kubernetes does) then this can increase MongoDB's resiliency since a failed replica set member can be automatically recreated, thus restoring full redundancy levels without human intervention.
|
||||
|
||||
* It should be noted that while the orchestration framework might monitor the state of the containers, it is unlikely to monitor the applications running within the containers, or back up their data. That means it's important to use a strong monitoring and backup solution such as [MongoDB Cloud Manager][1], included with [MongoDB Enterprise Advanced][2] and [MongoDB Professional][3]. Consider creating your own image that contains both your preferred version of MongoDB and the [MongoDB Automation Agent][4].
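The white paper linked above walks through the exact steps; purely as a hypothetical sketch of that extra initialization logic, the following Python snippet uses pymongo to issue the `replSetInitiate` admin command (the command that the `rs.initiate` shell helper wraps), listing every member up front so that separate `rs.add` calls aren't needed. The replica set name `rs0` and the third service address are illustrative assumptions:

```python
from pymongo import MongoClient

# Fixed addresses exposed by the per-member Kubernetes services
# (mongo-svc-c is assumed, following the naming pattern of the first two).
members = ["mongo-svc-a:27017", "mongo-svc-b:27017", "mongo-svc-c:27017"]

# Connect to the node that will drive the initiation.
client = MongoClient("mongodb://" + members[0])

config = {
    "_id": "rs0",
    "members": [{"_id": i, "host": host} for i, host in enumerate(members)],
}
client.admin.command("replSetInitiate", config)

# myState 1 means this node has been elected primary (this may take a few seconds).
print(client.admin.command("replSetGetStatus")["myState"])
```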
|
||||
|
||||
### Implementing a MongoDB Replica Set using Docker and Kubernetes
|
||||
|
||||
As described in the previous section, distributed databases such as MongoDB require a little extra attention when being deployed with orchestration frameworks such as Kubernetes. This section goes to the next level of detail, showing how this can actually be implemented.
|
||||
|
||||
We start by creating the entire MongoDB replica set in a single Kubernetes cluster (which would normally be within a single data center – that clearly doesn't provide geographic redundancy). In reality, little has to be changed to run across multiple clusters and those steps are described later.
|
||||
|
||||
Each member of the replica set will be run as its own pod with a service exposing an external IP address and port. This 'fixed' IP address is important as both external applications and other replica set members can rely on it remaining constant in the event that a pod is rescheduled.
|
||||
|
||||
The following diagram illustrates one of these pods and the associated Replication Controller and service.
|
||||
|
||||
|
||||
|
||||
Figure 1: MongoDB Replica Set member configured as a Kubernetes Pod and exposed as a service
|
||||
|
||||
Stepping through the resources described in that configuration we have:
|
||||
|
||||
* Starting at the core there is a single container named `mongo-node1`. `mongo-node1` includes an image called `mongo`, which is a publicly available MongoDB container image hosted on [Docker Hub][5]. The container exposes port `27017` within the cluster.
|
||||
|
||||
* The Kubernetes _volumes_ feature is used to map the `/data/db` directory within the container to the persistent storage element named `mongo-persistent-storage1`, which in turn is mapped to a disk named `mongodb-disk1` created in the Google Cloud. This is where MongoDB stores its data so that it is persisted over container rescheduling.
|
||||
|
||||
* The container is held within a pod which has labels naming the pod `mongo-node` and providing an (arbitrary) instance name of `rod`.
|
||||
|
||||
* A Replication Controller named `mongo-rc1` is configured to ensure that a single instance of the `mongo-node1` pod is always running.
|
||||
|
||||
* The `LoadBalancer` service named `mongo-svc-a` exposes an IP address to the outside world together with the port of `27017` which is mapped to the same port number in the container. The service identifies the correct pod using a selector that matches the pod's labels. That external IP address and port will be used by both an application and for communication between the replica set members. There are also local IP addresses for each container, but those change when containers are moved or restarted, and so aren't of use for the replica set.
|
||||
|
||||
The next diagram shows the configuration for a second member of the replica set.
|
||||
|
||||
|
||||
|
||||
Figure 2: Second MongoDB Replica Set member configured as a Kubernetes Pod
|
||||
|
||||
90% of the configuration is the same, with just these changes:
|
||||
|
||||
* The disk and volume names must be unique and so `mongodb-disk2` and `mongo-persistent-storage2` are used
|
||||
|
||||
* The Pod is assigned a label of `instance: jane` and `name: mongo-node2` so that the new service can distinguish it (using a selector) from the `rod` Pod used in Figure 1.
|
||||
|
||||
* The Replication Controller is named `mongo-rc2`
|
||||
|
||||
* The Service is named `mongo-svc-b` and gets a unique, external IP address (in this instance, Kubernetes has assigned `104.1.4.5`)
|
||||
|
||||
The configuration of the third replica set member follows the same pattern and the following figure shows the complete replica set:
|
||||
|
||||
|
||||
|
||||
Figure 3: Full Replica Set member configured as a Kubernetes Service
|
||||
|
||||
Note that even if running the configuration shown in Figure 3 on a Kubernetes cluster of three or more nodes, Kubernetes may (and often will) schedule two or more MongoDB replica set members on the same host. This is because Kubernetes views the three pods as belonging to three independent services.
|
||||
|
||||
To increase redundancy (within the zone), an additional _headless_ service can be created. The new service provides no capabilities to the outside world (and will not even have an IP address) but it serves to inform Kubernetes that the three MongoDB pods form a service and so Kubernetes will attempt to schedule them on different nodes.
|
||||
|
||||
|
||||
|
||||
Figure 4: Headless service to avoid co-locating of MongoDB replica set members
|
||||
|
||||
The actual configuration files and the commands needed to orchestrate and start the MongoDB replica set can be found in the [Enabling Microservices: Containers & Orchestration Explained white paper][7]. In particular, there are some special steps required to combine the three MongoDB instances into a functioning, robust replica set which are described in the paper.
|
||||
|
||||
#### Multiple Availability Zone MongoDB Replica Set
|
||||
|
||||
There is risk associated with the replica set created above in that everything is running in the same GCE cluster, and hence in the same availability zone. If there were a major incident that took the availability zone offline, then the MongoDB replica set would be unavailable. If geographic redundancy is required, then the three pods should be run in three different availability zones or regions.
|
||||
|
||||
Surprisingly little needs to change in order to create a similar replica set that is split between three zones – which requires three clusters. Each cluster requires its own Kubernetes YAML file that defines just the pod, Replication Controller and service for one member of the replica set. It is then a simple matter to create a cluster, persistent storage, and MongoDB node for each zone.
|
||||
|
||||
|
||||
|
||||
Figure 5: Replica set running over multiple availability zones
|
||||
|
||||
### Next Steps
|
||||
|
||||
To learn more about containers and orchestration – both the technologies involved and the business benefits they deliver – read the [Enabling Microservices: Containers & Orchestration Explained white paper][8]. The same paper provides the complete instructions to get the replica set described in this post up and running on Docker and Kubernetes in the Google Container Engine.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Andrew is a Principal Product Marketing Manager working for MongoDB. He joined at the start of last summer from Oracle, where he spent 6+ years in product management, focused on High Availability. He can be contacted @andrewmorgan or through comments on his blog (clusterdb.com).
|
||||
|
||||
-------
|
||||
|
||||
via: https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes
|
||||
|
||||
作者:[Andrew Morgan ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.clusterdb.com/
|
||||
[1]:https://www.mongodb.com/cloud/
|
||||
[2]:https://www.mongodb.com/products/mongodb-enterprise-advanced
|
||||
[3]:https://www.mongodb.com/products/mongodb-professional
|
||||
[4]:https://docs.cloud.mongodb.com/tutorial/nav/install-automation-agent/
|
||||
[5]:https://hub.docker.com/_/mongo/
|
||||
[6]:https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained?jmp=inline
|
||||
[7]:https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained
|
||||
[8]:https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained
|
@ -1,147 +0,0 @@
|
||||
申请翻译 by WangYueScream
|
||||
=========================
|
||||
|
||||
The Children's Illustrated Guide to Kubernetes
|
||||
============================================================
|
||||
|
||||
|
||||
Introducing Phippy, an intrepid little PHP app, and her journey to Kubernetes.
|
||||
|
||||
What is this? Well, I wrote a book that explains Kubernetes. We posted [a video version][1] to the Kubernetes community blog. If you find us at a conference, you stand a chance to pick up a physical copy. But for now, here's a blog post version!
|
||||
|
||||
And after you've finished reading, tweet something at [@opendeis][2] for a chance to win a squishy little Phippy toy of your own. Not sure what to tweet? Why don't you tell us about yourself and how you use Kubernetes!
|
||||
|
||||
### The Other Day...
|
||||
|
||||

|
||||
|
||||
The other day, my daughter sidled into my office, and asked me, "Dearest Father, whose knowledge is incomparable, what is Kubernetes?"
|
||||
|
||||
Alright, that's a little bit of a paraphrase, but you get the idea.
|
||||
|
||||

|
||||
|
||||
And I responded, "Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users' declared intentions. Using the concepts of "labels" and "pods", it groups the containers which make up an application into logical units for easy management and discovery."
|
||||
|
||||
And my daughter said to me, "Huh?"
|
||||
|
||||
And so I give you...
|
||||
|
||||
### The Children's Illustrated Guide to Kubernetes
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Once upon a time there was an app named Phippy. And she was a simple app. She was written in PHP and had just one page. She lived on a hosting provider and she shared her environment with scary other apps that she didn't know and didn't care to associate with. She wished she had her own environment: just her and a webserver she could call home.
|
||||
|
||||

|
||||
|
||||
An app has an environment that it relies upon to run. For a PHP app, that environment might include a webserver, a readable file system, and the PHP engine itself.
|
||||
|
||||

|
||||
|
||||
One day, a kindly whale came along. He suggested that little Phippy might be happier living in a container. And so the app moved. And the container was nice, but… It was a little bit like having a fancy living room floating in the middle of the ocean.
|
||||
|
||||

|
||||
|
||||
A container provides an isolated environment in which an app, together with its environment, can run. But those isolated containers often need to be managed and connected to the external world. Shared file systems, networking, scheduling, load balancing, and distribution are all challenges.
|
||||
|
||||

|
||||
|
||||
The whale shrugged his shoulders. "Sorry, kid," he said, and disappeared beneath the ocean's surface. But before Phippy could even begin to despair, a captain appeared on the horizon, piloting a gigantic ship. The ship was made of dozens of rafts all lashed together, but from the outside, it looked like one giant ship.
|
||||
|
||||
"Hello there, friend PHP app. My name is Captain Kube" said the wise old captain.
|
||||
|
||||

|
||||
|
||||
"Kubernetes" is the Greek word for a ship's captain. We get the words _Cybernetic_ and _Gubernatorial_ from it. Led by Google, the Kubernetes project focuses on building a robust platform for running thousands of containers in production.
|
||||
|
||||

|
||||
|
||||
"I'm Phippy," said the little app.
|
||||
|
||||
"Nice to make your acquaintance," said the Captain as he slapped a name tag on her.
|
||||
|
||||

|
||||
|
||||
Kubernetes uses labels as "nametags" to identify things. And it can query based on these labels. Labels are open-ended: You can use them to indicate roles, stability, or other important attributes.
|
||||
|
||||

|
||||
|
||||
Captain Kube suggested that the app might like to move her container to a pod on board the ship. Phippy happily moved her container inside of the pod aboard Kube's giant ship. It felt like home.
|
||||
|
||||

|
||||
|
||||
In Kubernetes, a Pod represents a runnable unit of work. Usually, you will run a single container inside of a Pod. But for cases where a few containers are tightly coupled, you may opt to run more than one container inside of the same Pod. Kubernetes takes on the work of connecting your pod to the network and the rest of the Kubernetes environment.
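The story keeps things at the picture level, but for the curious, here is a hedged sketch of those two ideas (name-tag labels and a single-container pod) using the official Kubernetes Python client; the pod name and the `php:apache` image are hypothetical stand-ins for Phippy:

```python
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig, e.g. from minikube
api = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="phippy", labels={"app": "phippy"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="phippy",
                image="php:apache",  # a stock PHP + Apache image
                ports=[client.V1ContainerPort(container_port=80)],
            )
        ]
    ),
)

api.create_namespaced_pod(namespace="default", body=pod)
```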
|
||||
|
||||

|
||||
|
||||
Phippy had some unusual interests. She was really into genetics and sheep. And so she asked the captain, "What if I want to clone myself… On demand… Any number of times?"
|
||||
|
||||
"That's easy," said the captain. And he introduced her to the replication controllers.
|
||||
|
||||

|
||||
|
||||
Replication controllers provide a method for managing an arbitrary number of pods. A replication controller contains a pod template, which can be replicated any number of times. Through the replication controller, Kubernetes will manage your pods' lifecycle, including scaling up and down, rolling deployments, and monitoring.
|
||||
|
||||

|
||||
|
||||
For many days and nights the little app was happy with her pod and happy with her replicas. But only having yourself for company is not all it's cracked up to be…. even if it is N copies of yourself.
|
||||
|
||||
Captain Kube smiled benevolently, "I have just the thing."
|
||||
|
||||
No sooner had he spoken than a tunnel opened between Phippy's replication controller and the rest of the ship. With a hearty laugh, Captain Kube said, "Even when your clones come and go, this tunnel will stay here so you can discover other pods, and they can discover you!"
|
||||
|
||||

|
||||
|
||||
A service tells the rest of the Kubernetes environment (including other pods and replication controllers) what _services_ your application provides. While pods come and go, the service IP addresses and ports remain the same. And other applications can find your service through Kubernetes service discovery.
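Continuing the hedged Python sketch from above, a service is little more than a stable name, a port, and a label selector that matches the pods behind it:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="phippy"),
    spec=client.V1ServiceSpec(
        selector={"app": "phippy"},  # matches the pod labels from the earlier sketch
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

api.create_namespaced_service(namespace="default", body=service)
```

Pods that come and go are picked up (or dropped) by the selector, while the service's own address stays put, which is exactly what lets Phippy's friends keep finding her.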
|
||||
|
||||

|
||||
|
||||
Thanks to the services, Phippy began to explore the rest of the ship. It wasn't long before Phippy met Goldie. And they became the best of friends. One day, Goldie did something extraordinary. She gave Phippy a present. Phippy took one look and the saddest of sad tears escaped her eye.
|
||||
|
||||
"Why are you so sad?" asked Goldie.
|
||||
|
||||
"I love the present, but I have nowhere to put it!" sniffled Phippy.
|
||||
|
||||
But Goldie knew what to do. "Why not put it in a volume?"
|
||||
|
||||

|
||||
|
||||
A volume represents a location where containers can access and store information. For the application, the volume appears as part of the local filesystem. But volumes may be backed by local storage, Ceph, Gluster, Elastic Block Storage, and a number of other storage backends.
|
||||
|
||||

|
||||
|
||||
Phippy loved life aboard Captain Kube's ship and she enjoyed the company of her new friends (every replicated pod of Goldie was equally delightful). But as she thought back to her days on the scary hosting provider, she began to wonder if perhaps she could also have a little privacy.
|
||||
|
||||
"It sounds like what you need," said Captain Kube, "is a namespace."
|
||||
|
||||

|
||||
|
||||
A namespace functions as a grouping mechanism inside of Kubernetes. Services, pods, replication controllers, and volumes can easily cooperate within a namespace, but the namespace provides a degree of isolation from the other parts of the cluster.
|
||||
|
||||

|
||||
|
||||
Together with her new friends, Phippy sailed the seas on Captain Kube's great boat. She had many grand adventures, but most importantly, Phippy had found her home.
|
||||
|
||||
And so Phippy lived happily ever after.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
作者简介:
|
||||
|
||||
Platform Architect at Deis. Lover of wisdom, coffee, and finely crafted code.
|
||||
|
||||
via: https://deis.com/blog/2016/kubernetes-illustrated-guide/
|
||||
|
||||
作者:[Matt Butcher ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://twitter.com/@technosophos
|
||||
[1]:http://blog.kubernetes.io/2016/06/illustrated-childrens-guide-to-kubernetes.html
|
||||
[2]:https://twitter.com/opendeis
|
@ -1,4 +1,4 @@
|
||||
Best Linux Adobe Alternatives You Need to Know ###translating by ninaiwohe109
|
||||
Best Linux Adobe Alternatives You Need to Know
|
||||
============================================================
|
||||

|
||||
|
||||
|
@ -1,177 +0,0 @@
|
||||
yzca Translating
|
||||
Docker Engine swarm mode - Intro tutorial
|
||||
============================
|
||||
|
||||
Sounds like a punk rock band. But it is the brand new orchestration mechanism, or rather, an improvement of the orchestration available in [Docker][1]. To keep it short and sweet, if you are using an older version of Docker, you will manually need to setup Swarm to create Docker clusters. Starting with [version 1.12][2], the Docker engine comes with a native implementation allowing a seamless clustering setup. The reason why we are here.
|
||||
|
||||
In this tutorial, I will try to give you a taste of what Docker can do when it comes to orchestration. This article is by no means all inclusive (bed & breakfast) or all-knowing, but it has what it takes to embark you on your clustering journey. After me.
|
||||
|
||||

|
||||
|
||||
### Technology overview
|
||||
|
||||
It would be a shame for me to rehash the very detailed and highly useful Docker documentation article, so I will just outline a brief overview of the technology. So we have Docker, right. Now, you want to use more than a single server as a Docker host, but you want them to belong to the same logical entity. Hence, clustering.
|
||||
|
||||
Let's start by a cluster of one. When you initiate swarm on a host, it becomes a manager of the cluster. Technically speaking, it becomes a consensus group of one node. The mathematical logic behind is based on the [Raft][3] algorithm. The manager is responsible for scheduling tasks. The tasks will be delegated to worker nodes, once and if they join the swarm. This is governed by the Node API. I hate the word API, but I must use it here.
|
||||
|
||||
The Service API is the second component of this implementation. It allows manager nodes to create distributed services on all of the nodes in the swarm. The services can be replicated, meaning they are spread across the cluster using balancing mechanisms, or they can be global, meaning an instance of the service will be running on each node.
|
||||
|
||||
There's much more at work here, but this is good enough to get you primed and pumped. Now, let's do some actual hands-on stuff. Our target platform is [CentOS 7.2][4], which is quite interesting, because at the time I wrote this tutorial, it only had Docker 1.10 in the repos, and I had to manually upgrade the framework to use swarm. We will discuss this in a separate tutorial. Then, we will also have a follow-up guide, where we will join new nodes into our cluster, and we will try an asymmetric setup with [Fedora][5]. At this point, please assume the correct setup is in place, and let's get a cluster service up and running.
|
||||
|
||||
### Setup image & service
|
||||
|
||||
I will try to setup a load-balanced [Apache][6] service, with multiple instances serving content via a single IP address. Pretty standard. It also highlights the typical reasons why you would go with a cluster configuration - availability, redundancy, horizontal scaling, and performance. Of course, you also need to take into consideration the [networking][7] piece, as well as [storage][8], but that's something that goes beyond the immediate scope of this guide.
|
||||
|
||||
The actual Dockerfile template is available in the official repository under httpd. You will need a minimal setup to get underway. The details on how to download images, how to create your own and such are available in my intro guide, linked at the beginning of this tutorial.
|
||||
|
||||
docker build -t my-apache2 .
|
||||
Sending build context to Docker daemon 2.048 kB
|
||||
Step 1 : FROM httpd:2.4
|
||||
Trying to pull repository docker.io/library/httpd ...
|
||||
2.4: Pulling from docker.io/library/httpd
|
||||
|
||||
8ad8b3f87b37: Pull complete
|
||||
c95e1f92326d: Pull complete
|
||||
96e8046a7a4e: Pull complete
|
||||
00a0d292c371: Pull complete
|
||||
3f7586acab34: Pull complete
|
||||
Digest: sha256:3ad4d7c4f1815bd1c16788a57f81b413...a915e50a0d3a4
|
||||
Status: Downloaded newer image for docker.io/httpd:2.4
|
||||
---> fe3336dd034d
|
||||
Step 2 : COPY ../public-html/ /usr/local/apache2/htdocs/
|
||||
...
|
||||
|
||||

|
||||
|
||||
Before you go any further, you should start a single instance and see that your container is created without any errors and that you can connect to the Web server. Once we establish that, we will create a distributed service.
|
||||
|
||||
docker run -dit --name my-running-app my-apache2
|
||||
|
||||
Check the IP address, punch into a browser, see what gives.
|
||||
|
||||
### Swarm initiation and setup
|
||||
|
||||
The next step is to get swarm going. Here's the most basic of commands that will get you underway, and it is very similar to the example used in the Docker blog:
|
||||
|
||||
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
|
||||
What do we have here? We are creating a service called frontend, with five container instances. We are also binding our hostPort 80 with the containerPort 80. And we are using my freshly created Apache image for this. However, when you do this, you will get the following error:
|
||||
|
||||
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
|
||||
|
||||
This means you have not setup the current host (node) to be a swarm manager. You either need to init the swarm or join an existing one. Since we do not have one yet, we will now initialize it:
|
||||
|
||||
docker swarm init
|
||||
Swarm initialized: current node (dm58mmsczqemiikazbfyfwqpd) is now a manager.
|
||||
|
||||
To add a worker to this swarm, run the following command:
|
||||
|
||||
docker swarm join \
|
||||
--token SWMTKN-1-4ofd46a2nfyvrqwu8w5oeetukrbylyznxla
|
||||
9srf9vxkxysj4p8-eu5d68pu5f1ci66s7w4wjps1u \
|
||||
10.0.2.15:2377
|
||||
|
||||
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
|
||||
|
||||
The output is fairly self-explanatory. We have created a swarm. New nodes will need to use the correct token to join the swarm. You also have the IP address and port identified, if you require firewall rules. Moreover, you can add managers to the swarm, too. Now, rerun the service create command:
|
||||
|
||||
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
6lrx1vhxsar2i50is8arh4ud1
|
||||
|
||||
### Test connectivity
|
||||
|
||||
Now, let's check that our service actually works. In a way, this is similar to what we did with [Vagrant][9] and [coreOS][10]. After all, the concepts are almost identical. It's just different implementation of the same idea. First, docker ps should show the right output. You should have multiple replicas for the created service.
|
||||
|
||||
docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
|
||||
NAMES
|
||||
cda532f67d55 my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.1.2sobjfchdyucschtu2xw6ms9a
|
||||
75fe6e0aa77b my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.4.ag77qtdeby9fyvif5v6c4zcpc
|
||||
3ce824d3151f my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.2.b6fqg6sf4hkeqs86ps4zjyq65
|
||||
eda01569181d my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.5.0rmei3zeeh8usagg7fn3olsp4
|
||||
497ef904e381 my-apache2:latest "httpd-foreground"
|
||||
2 minutes ago Up 2 minutes 80/tcp frontend.3.7m83qsilli5dk8rncw3u10g5a
|
||||
|
||||
I also tested with different, non-default ports, and it works well. You have a lot of leeway in how you can connect to the server and get the response. You can use localhost or the docker interface IP address with the correct port. The example below shows port 1080:
|
||||
|
||||

|
||||
|
||||
Now, this is a very rough, very simple beginning. The real challenge is in creating optimized, scalable services, but they do require a proper technical use case. Moreover, you should also use the docker info and docker service (inspect|ps) commands to learn more about how your cluster is behaving.
|
||||
|
||||
### Possible problems
|
||||
|
||||
You may encounter some small (or not so small) issues while playing with Docker and swarm. For example, SELinux may complain that you are trying to do something illegal. However, the errors and warnings should not impede you too much.
|
||||
|
||||

|
||||
|
||||
### Docker service is not a docker command
|
||||
|
||||
When you try to run the necessary command to start a replicated service, you get an error that says docker: 'service' is not a docker command. This means that you do not have the right version of Docker (check with -v). We will fix this in a follow-up tutorial.
|
||||
|
||||
docker service create --name frontend --replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
docker: 'service' is not a docker command.
|
||||
|
||||
### Docker tag not recognized
|
||||
|
||||
You may also see the following error:
|
||||
|
||||
docker service create -name frontend -replicas 5 -p 80:80/tcp my-apache2:latest
|
||||
Error response from daemon: rpc error: code = 3 desc = ContainerSpec: "-name" is not a valid repository/tag
|
||||
|
||||
There are several [discussions][11] [threads][12] around this. The error may actually be quite innocent. You may have copied the command from a browser, and the hyphens may not be parsed correctly. As simple as that.
|
||||
|
||||
### More reading
|
||||
|
||||
There's a lot more to be said on this topic, including the Swarm implementation prior to Docker 1.12, as well as the current version of the Docker engine. To wit, please do not be lazy and spend some time reading:
|
||||
|
||||
Docker Swarm [overview][13] (for standalone Swarm installations)
|
||||
|
||||
[Build][14] a Swarm cluster for production (standalone setups)
|
||||
|
||||
[Install and create][15] a Docker Swarm (standalone setups)
|
||||
|
||||
Docker engine swarm [overview][16] (for version 1.12)
|
||||
|
||||
Getting started with [swarm][17] mode (for version 1.12)
|
||||
|
||||
### Conclusion
|
||||
|
||||
There you go. Nothing too grand at this point, but I believe you will find the article useful. It covers several key concepts, gives an overview of how swarm mode works and what it does, and shows how we successfully managed to download and create our own Web server image and then run several clustered instances of it. We did this on a single node for now, but we will expand in the future. Also, we tackled some common problems.
|
||||
|
||||
I hope you find this guide interesting. Combined with my previous work on Docker, this should give you a decent understanding of how to work with images, the networking stack, storage, and now clusters. Warming up. Indeed, enjoy and see you soon with fresh new tutorials on Docker. I just can't contain [sic] myself.
|
||||
|
||||
Cheers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.dedoimedo.com/computers/docker-swarm-intro.html
|
||||
|
||||
作者:[Dedoimedo ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.dedoimedo.com/computers/docker-swarm-intro.html
|
||||
[1]:http://www.dedoimedo.com/computers/docker-guide.html
|
||||
[2]:https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/
|
||||
[3]:https://en.wikipedia.org/wiki/Raft_%28computer_science%29
|
||||
[4]:http://www.dedoimedo.com/computers/lenovo-g50-centos-xfce.html
|
||||
[5]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
|
||||
[6]:https://hub.docker.com/_/httpd/
|
||||
[7]:http://www.dedoimedo.com/computers/docker-networking.html
|
||||
[8]:http://www.dedoimedo.com/computers/docker-data-volumes.html
|
||||
[9]:http://www.dedoimedo.com/computers/vagrant-intro.html
|
||||
[10]:http://www.dedoimedo.com/computers/vagrant-coreos.html
|
||||
[11]:https://github.com/docker/docker/issues/24192
|
||||
[12]:http://stackoverflow.com/questions/38618609/docker-swarm-1-12-name-option-not-recognized
|
||||
[13]:https://docs.docker.com/swarm/
|
||||
[14]:https://docs.docker.com/swarm/install-manual/
|
||||
[15]:https://docs.docker.com/swarm/install-w-machine/
|
||||
[16]:https://docs.docker.com/engine/swarm/
|
||||
[17]:https://docs.docker.com/engine/swarm/swarm-tutorial/
|
|
||||
ucasFL translating
|
||||
|
||||
An introduction to Linux filesystems
|
||||
============================================================
|
||||
|
||||

|
||||
Image credits : Original photo by Rikki Endsley. [CC BY-SA 4.0][9]
|
||||
|
||||
This article is intended to be a very high-level discussion of Linux filesystem concepts. It is not intended to be a low-level description of how a particular filesystem type, such as EXT4, works, nor is it intended to be a tutorial of filesystem commands.
|
||||
|
||||
More Linux resources
|
||||
|
||||
* [What is Linux?][1]
|
||||
|
||||
* [What are Linux containers?][2]
|
||||
|
||||
* [Download Now: Linux commands cheat sheet][3]
|
||||
|
||||
* [Advanced Linux commands cheat sheet][4]
|
||||
|
||||
* [Our latest Linux articles][5]
|
||||
|
||||
Every general-purpose computer needs to store data of various types on a hard disk drive (HDD) or some equivalent, such as a USB memory stick. There are a couple reasons for this. First, RAM loses its contents when the computer is switched off. There are non-volatile types of RAM that can maintain the data stored there after power is removed (such as flash RAM that is used in USB memory sticks and solid state drives), but flash RAM is much more expensive than standard, volatile RAM like DDR3 and other, similar types.
|
||||
|
||||
The second reason that data needs to be stored on hard drives is that even standard RAM is still more expensive than disk space. Both RAM and disk costs have been dropping rapidly, but RAM still leads the way in terms of cost per byte. A quick calculation of the cost per byte, based on costs for 16GB of RAM vs. a 2TB hard drive, shows that the RAM is about 71 times more expensive per unit than the hard drive. A typical cost for RAM is around $0.0000000043743750 per byte today.
|
||||
|
||||
For a quick historical note to put present RAM costs in perspective, in the very early days of computing, one type of memory was based on dots on a CRT screen. This was very expensive at about $1.00 _per bit_ !
|
||||
|
||||
### Definitions
|
||||
|
||||
You may hear people talk about filesystems in a number of different and confusing ways. The word itself can have multiple meanings, and you may have to discern the correct meaning from the context of a discussion or document.
|
||||
|
||||
I will attempt to define the various meanings of the word "filesystem" based on how I have observed it being used in different circumstances. Note that while attempting to conform to standard "official" meanings, my intent is to define the term based on its various usages. These meanings will be explored in greater detail in the following sections of this article.
|
||||
|
||||
1. The entire Linux directory structure starting at the top (/) root directory.
|
||||
|
||||
2. A specific type of data storage format, such as EXT3, EXT4, BTRFS, XFS, and so on. Linux supports almost 100 types of filesystems, including some very old ones as well as some of the newest. Each of these filesystem types uses its own metadata structures to define how the data is stored and accessed.
|
||||
|
||||
3. A partition or logical volume formatted with a specific type of filesystem that can be mounted on a specified mount point on a Linux filesystem.
|
||||
|
||||
### Basic filesystem functions
|
||||
|
||||
Disk storage is a necessity that brings with it some interesting and inescapable details. Obviously, a filesystem is designed to provide space for non-volatile storage of data; that is its ultimate function. However, there are many other important functions that flow from that requirement.
|
||||
|
||||
All filesystems need to provide a namespace—that is, a naming and organizational methodology. This defines how a file can be named, specifically the length of a filename and the subset of characters that can be used for filenames out of the total set of characters available. It also defines the logical structure of the data on a disk, such as the use of directories for organizing files instead of just lumping them all together in a single, huge conglomeration of files.
|
||||
|
||||
Once the namespace has been defined, a metadata structure is necessary to provide the logical foundation for that namespace. This includes the data structures required to support a hierarchical directory structure; structures to determine which blocks of space on the disk are used and which are available; structures that allow for maintaining the names of the files and directories; information about the files such as their size and times they were created, modified or last accessed; and the location or locations of the data belonging to the file on the disk. Other metadata is used to store high-level information about the subdivisions of the disk, such as logical volumes and partitions. This higher-level metadata and the structures it represents contain the information describing the filesystem stored on the drive or partition, but is separate from and independent of the filesystem metadata.
|
||||
|
||||
Filesystems also require an Application Programming Interface (API) that provides access to the system function calls which manipulate filesystem objects like files and directories. The API provides for tasks such as creating, moving, and deleting files, and it also provides algorithms that determine things like where a file is placed on a filesystem. Such algorithms may account for objectives such as speed or minimizing disk fragmentation.
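As a rough illustration of this call interface, you can watch the filesystem-related system calls that a simple command makes (an illustrative run; it assumes the strace utility is installed):

```
# Trace the open/read/write/close system calls made while reading one small file.
strace -e trace=openat,read,write,close cat /etc/hostname
```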
|
||||
|
||||
Modern filesystems also provide a security model, which is a scheme for defining access rights to files and directories. The Linux filesystem security model helps to ensure that users only have access to their own files and not those of others or the operating system itself.
|
||||
|
||||
The final building block is the software required to implement all of these functions. Linux uses a two-part software implementation as a way to improve both system and programmer efficiency.
|
||||
|
||||
<center>
|
||||

|
||||
|
||||
Figure 1: The Linux two-part filesystem software implementation.</center>
|
||||
|
||||
The first part of this two-part implementation is the Linux virtual filesystem. This virtual filesystem provides a single set of commands for the kernel, and developers, to access all types of filesystems. The virtual filesystem software calls the specific device driver required to interface to the various types of filesystems. The filesystem-specific device drivers are the second part of the implementation. The device driver interprets the standard set of filesystem commands to ones specific to the type of filesystem on the partition or logical volume.
|
||||
|
||||
### Directory structure
|
||||
|
||||
As a usually very organized Virgo, I like things stored in smaller, organized groups rather than in one big bucket. The use of directories helps me to be able to store and then locate the files I want when I am looking for them. Directories are also known as folders because they can be thought of as folders in which files are kept in a sort of physical desktop analogy.
|
||||
|
||||
In Linux and many other operating systems, directories can be structured in a tree-like hierarchy. The Linux directory structure is well defined and documented in the [Linux Filesystem Hierarchy Standard][10] (FHS). Referencing those directories when accessing them is accomplished by using the sequentially deeper directory names connected by forward slashes (/) such as /var/log and /var/spool/mail. These are called paths.
|
||||
|
||||
The following table provides a very brief list of the standard, well-known, and defined top-level Linux directories and their purposes.
|
||||
|
||||
| Directory | Description |
| --- | --- |
| / (root filesystem) | The root filesystem is the top-level directory of the filesystem. It must contain all of the files required to boot the Linux system before other filesystems are mounted. It must include all of the required executables and libraries required to boot the remaining filesystems. After the system is booted, all other filesystems are mounted on standard, well-defined mount points as subdirectories of the root filesystem. |
| /bin | The /bin directory contains user executable files. |
| /boot | Contains the static bootloader and kernel executable and configuration files required to boot a Linux computer. |
| /dev | This directory contains the device files for every hardware device attached to the system. These are not device drivers, rather they are files that represent each device on the computer and facilitate access to those devices. |
| /etc | Contains the local system configuration files for the host computer. |
| /home | Home directory storage for user files. Each user has a subdirectory in /home. |
| /lib | Contains shared library files that are required to boot the system. |
| /media | A place to mount external removable media devices such as USB thumb drives that may be connected to the host. |
| /mnt | A temporary mountpoint for regular filesystems (as in not removable media) that can be used while the administrator is repairing or working on a filesystem. |
| /opt | Optional files such as vendor supplied application programs should be located here. |
| /root | This is not the root (/) filesystem. It is the home directory for the root user. |
| /sbin | System binary files. These are executables used for system administration. |
| /tmp | Temporary directory. Used by the operating system and many programs to store temporary files. Users may also store files here temporarily. Note that files stored here may be deleted at any time without prior notice. |
| /usr | These are shareable, read-only files, including executable binaries and libraries, man files, and other types of documentation. |
| /var | Variable data files are stored here. This can include things like log files, MySQL, and other database files, web server data files, email inboxes, and much more. |
|
||||
|
||||
<center>Table 1: The top level of the Linux filesystem hierarchy.</center>
|
||||
|
||||
The directories shown in Table 1 that have a teal background, along with their subdirectories, are considered an integral part of the root filesystem. That is, they cannot be created as a separate filesystem and mounted at startup time. This is because they (specifically, their contents) must be present at boot time in order for the system to boot properly.
|
||||
|
||||
The /media and /mnt directories are part of the root filesystem, but they should never contain any data. Rather, they are simply temporary mount points.
|
||||
|
||||
The remaining directories, those that have no background color in Table 1, do not need to be present during the boot sequence, but will be mounted later, during the startup sequence that prepares the host to perform useful work.
|
||||
|
||||
Be sure to refer to the official [Linux Filesystem Hierarchy Standard][11] (FHS) web page for details about each of these directories and their many subdirectories. Wikipedia also has a good description of the [FHS][12]. This standard should be followed as closely as possible to ensure operational and functional consistency. Regardless of the filesystem types used on a host, this hierarchical directory structure is the same.
|
||||
|
||||
### Linux unified directory structure
|
||||
|
||||
In some non-Linux PC operating systems, if there are multiple physical hard drives or multiple partitions, each disk or partition is assigned a drive letter. It is necessary to know on which hard drive a file or program is located, such as C: or D:. Then you issue the drive letter as a command, **D:**, for example, to change to the D: drive, and then you use the **cd** command to change to the correct directory to locate the desired file. Each hard drive has its own separate and complete directory tree.
|
||||
|
||||
The Linux filesystem unifies all physical hard drives and partitions into a single directory structure. It all starts at the top–the root (/) directory. All other directories and their subdirectories are located under the single Linux root directory. This means that there is only one single directory tree in which to search for files and programs.
|
||||
|
||||
This can work only because a filesystem, such as /home, /tmp, /var, /opt, or /usr can be created on separate physical hard drives, a different partition, or a different logical volume from the / (root) filesystem and then be mounted on a mountpoint (directory) as part of the root filesystem tree. Even removable drives such as a USB thumb drive or an external USB or ESATA hard drive will be mounted onto the root filesystem and become an integral part of that directory tree.
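To see how these separately mounted filesystems all attach to the single directory tree on a running system, two quick commands are helpful (illustrative; the output naturally depends on the host):

```
# Show mounted filesystems with their types, sizes, and mount points.
df -hT
# Show block devices and where each one is mounted, in tree form.
lsblk
```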
|
||||
|
||||
One good reason to do this is apparent during an upgrade from one version of a Linux distribution to another, or changing from one distribution to another. In general, and aside from any upgrade utilities like dnf-upgrade in Fedora, it is wise to occasionally reformat the hard drive(s) containing the operating system during an upgrade to positively remove any cruft that has accumulated over time. If /home is part of the root filesystem it will be reformatted as well and would then have to be restored from a backup. By having /home as a separate filesystem, it will be known to the installation program as a separate filesystem and formatting of it can be skipped. This can also apply to /var where database, email inboxes, website, and other variable user and system data are stored.
|
||||
|
||||
There are other reasons for maintaining certain parts of the Linux directory tree as separate filesystems. For example, a long time ago, when I was not yet aware of the potential issues surrounding having all of the required Linux directories as part of the / (root) filesystem, I managed to fill up my home directory with a large number of very big files. Since neither the /home directory nor the /tmp directory were separate filesystems but simply subdirectories of the root filesystem, the entire root filesystem filled up. There was no room left for the operating system to create temporary files or to expand existing data files. At first, the application programs started complaining that there was no room to save files, and then the OS itself started to act very strangely. Booting to single-user mode and clearing out the offending files in my home directory allowed me to get going again. I then reinstalled Linux using a pretty standard multi-filesystem setup and was able to prevent complete system crashes from occurring again.
|
||||
|
||||
I once had a situation where a Linux host continued to run, but prevented the user from logging in using the GUI desktop. I was able to log in using the command line interface (CLI) locally using one of the [virtual consoles][13], and remotely using SSH. The problem was that the /tmp filesystem had filled up and some temporary files required by the GUI desktop could not be created at login time. Because the CLI login did not require files to be created in /tmp, the lack of space there did not prevent me from logging in using the CLI. In this case, the /tmp directory was a separate filesystem and there was plenty of space available in the volume group the /tmp logical volume was a part of. I simply [expanded the /tmp logical volume][14] to a size that accommodated my fresh understanding of the amount of temporary file space needed on that host and the problem was solved. Note that this solution did not require a reboot, and as soon as the /tmp filesystem was enlarged the user was able to login to the desktop.
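As a rough sketch of that kind of online expansion (the volume group and logical volume names here are hypothetical; adjust them to your own layout):

```
# Grow the /tmp logical volume by 2 GiB, then grow the filesystem to match.
lvextend -L +2G /dev/vg_main/tmp
resize2fs /dev/vg_main/tmp    # for EXT filesystems; use xfs_growfs /tmp for XFS
```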
|
||||
|
||||
Another situation occurred while I was working as a lab administrator at one large technology company. One of our developers had installed an application in the wrong location (/var). The application was crashing because the /var filesystem was full and the log files, which are stored in /var/log on that filesystem, could not be appended with new messages due to the lack of space. However, the system remained up and running because the critical / (root) and /tmp filesystems did not fill up. Removing the offending application and reinstalling it in the /opt filesystem resolved that problem.
|
||||
|
||||
### Filesystem types
|
||||
|
||||
Linux supports reading around 100 partition types; it can create and write to only a few of these. But it is possible—and very common—to mount filesystems of different types on the same root filesystem. In this context we are talking about filesystems in terms of the structures and metadata required to store and manage the user data on a partition of a hard drive or a logical volume. The complete list of filesystem partition types recognized by the Linux **fdisk** command is provided here, so that you can get a feel for the high degree of compatibility that Linux has with very many types of systems.
|
||||
|
||||
```
|
||||
 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden or  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi ea  Rufus alignment
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         eb  BeOS fs
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ee  GPT
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fb  VMware VMFS
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fc  VMware VMKCORE
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fd  Linux raid auto
1c  Hidden W95 FAT3 75  PC/IX           bc  Acronis FAT32 L fe  LANstep
1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot    ff  BBT
|
||||
```
|
||||
|
||||
The main purpose in supporting the ability to read so many partition types is to allow for compatibility and at least some interoperability with other computer systems' filesystems. The choices available when creating a new filesystem with Fedora are shown in the following list.
|
||||
|
||||
* btrfs
|
||||
|
||||
* **cramfs**
|
||||
|
||||
* **ext2**
|
||||
|
||||
* **ext3**
|
||||
|
||||
* **ext4**
|
||||
|
||||
* fat
|
||||
|
||||
* gfs2
|
||||
|
||||
* hfsplus
|
||||
|
||||
* minix
|
||||
|
||||
* **msdos**
|
||||
|
||||
* ntfs
|
||||
|
||||
* reiserfs
|
||||
|
||||
* **vfat**
|
||||
|
||||
* xfs
|
||||
|
||||
Other distributions support creating different filesystem types. For example, CentOS 6 supports creating only those filesystems highlighted in bold in the above list.
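Creating one of these filesystems on an empty partition is a one-line operation. The following is only an illustration; replace the device name with the partition you actually intend to format, because this destroys its current contents:

```
# Create an ext4 filesystem on the (hypothetical) first partition of the second drive.
mkfs -t ext4 /dev/sdb1
```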
|
||||
|
||||
### Mounting
|
||||
|
||||
The term "to mount" a filesystem in Linux refers back to the early days of computing when a tape or removable disk pack would need to be physically mounted on an appropriate drive device. After being physically placed on the drive, the filesystem on the disk pack would be logically mounted by the operating system to make the contents available for access by the OS, application programs and users.
|
||||
|
||||
A mount point is simply a directory, like any other, that is created as part of the root filesystem. So, for example, the home filesystem is mounted on the directory /home. Filesystems can be mounted at mount points on other non-root filesystems but this is less common.
|
||||
|
||||
The Linux root filesystem is mounted on the root directory (/) very early in the boot sequence. Other filesystems are mounted later, by the Linux startup programs, either **rc** under SystemV or by **systemd** in newer Linux releases. Mounting of filesystems during the startup process is managed by the /etc/fstab configuration file. An easy way to remember that is that fstab stands for "file system table," and it is a list of filesystems that are to be mounted, their designated mount points, and any options that might be needed for specific filesystems.
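A minimal /etc/fstab might look something like this (a hypothetical excerpt; the UUID and logical volume names are placeholders):

```
# <file system>                            <mount point>  <type>  <options>  <dump>  <pass>
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /              ext4    defaults   1       1
/dev/mapper/vg_main-home                   /home          ext4    defaults   1       2
/dev/mapper/vg_main-tmp                    /tmp           ext4    defaults   1       2
```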
|
||||
|
||||
Filesystems are mounted on an existing directory/mount point using the **mount** command. In general, any directory that is used as a mount point should be empty and not have any other files contained in it. Linux will not prevent users from mounting one filesystem over one that is already there or on a directory that contains files. If you mount a filesystem on an existing directory or filesystem, the original contents will be hidden and only the content of the newly mounted filesystem will be visible.
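For example (an illustrative device name; adjust it to your own partition or logical volume):

```
# Mount the filesystem on /dev/sdb1 at the empty /mnt directory, then unmount it when done.
mount /dev/sdb1 /mnt
umount /mnt
```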
|
||||
|
||||
### Conclusion
|
||||
|
||||
I hope that some of the possible confusion surrounding the term filesystem has been cleared up by this article. It took a long time and a very helpful mentor for me to truly understand and appreciate the complexity, elegance, and functionality of the Linux filesystem in all of its meanings.
|
||||
|
||||
If you have questions, please add them to the comments below and I will try to answer them.
|
||||
|
||||
### Next month
|
||||
|
||||
Another important concept is that for Linux, everything is a file. This concept has some interesting and important practical applications for users and system admins. The reason I mention this is that you might want to read my "[Everything is a file][15]" article before the article I am planning for next month on the /dev directory.
|
||||
|
||||
-----------------
|
||||
|
||||
作者简介:
|
||||
|
||||
David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM, where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for a number of publications. [More about me][7]
|
||||
|
||||
* [Learn how you can contribute][22]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/16/10/introduction-linux-filesystems
|
||||
|
||||
作者:[David Both][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/dboth
|
||||
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[6]:https://opensource.com/life/16/10/introduction-linux-filesystems?rate=Qyf2jgkdgrj5_zfDwadBT8KsHZ2Gp5Be2_tF7R-s02Y
|
||||
[7]:https://opensource.com/users/dboth
|
||||
[8]:https://opensource.com/user/14106/feed
|
||||
[9]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[10]:http://www.pathname.com/fhs/
|
||||
[11]:http://www.pathname.com/fhs/
|
||||
[12]:https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
|
||||
[13]:https://en.wikipedia.org/wiki/Virtual_console
|
||||
[14]:https://opensource.com/business/16/9/linux-users-guide-lvm
|
||||
[15]:https://opensource.com/life/15/9/everything-is-a-file
|
||||
[16]:https://opensource.com/users/dboth
|
||||
[17]:https://opensource.com/users/dboth
|
||||
[18]:https://opensource.com/users/dboth
|
||||
[19]:https://opensource.com/life/16/10/introduction-linux-filesystems#comments
|
||||
[20]:https://opensource.com/tags/linux
|
||||
[21]:https://opensource.com/tags/sysadmin
|
||||
[22]:https://opensource.com/participate
|
|
||||
How to create an internal innersource community
|
||||
============================================================
|
||||
|
||||
|
||||

|
||||
Image by : opensource.com
|
||||
|
||||
In recent years, we have seen more and more interest in a variant of open source known as _innersource_. Put simply, innersource is taking the principles of open source and bringing them inside the walls of an organization. As such, you build collaboration and community that may look and taste like open source, but in which all code and community activity remain private within the walls of the organization.
|
||||
|
||||
As a [community strategy and leadership consultant][5], I work with many companies to help build their innersource communities. As such, I thought it could be fun to share some of the most important principles that map to most of my clients and beyond. This could be a helpful primer if you are considering exploring innersource inside your organization.
|
||||
|
||||
### Culture then code
|
||||
|
||||
For most companies, the innersource journey starts out as a pilot project. Typically the project will focus on a few specific teams who may be more receptive to the experiment. As such, innersource is almost always a new workflow and methodology being brought into an existing culture. Making this cultural adjustment is both the key to success and the biggest challenge.
|
||||
|
||||
There is an understandable misconception that the key to doing innersource well is to focus on building the open source software development lifecycle in your company. That is, open code repositories and communication channels, encouraging forking, code review, and continuous integration/deployment. This is definitely an essential part of the innersource approach, but think of these pieces as lego bricks. The real trick is building an environment where people have the permission and incentive to build incredible things with those bricks.
|
||||
|
||||
As such, doing innersource well is more about building a culture, environment, and set of incentives that encourage and reward the behavior we often associate with open source. Building that culture from scratch is much easier; adapting an existing culture, particularly in traditional organizations, is where most of the work lies.
|
||||
|
||||
Cultural change can't be dictated; it is the amalgamation of individual actions that builds the social conventions that influence the wider group. To do this well you need to look at the problem from the vantage point of your staff. How can you make their work easier, more efficient, more rewarding, and more collaborative?
|
||||
|
||||
When you understand the existing cultural pain points and your staff associate the new innersource culture as a way to relieve those issues, the adaptation goes much more smoothly.
|
||||
|
||||
### Methodology
|
||||
|
||||
Given that innersource is largely a cultural adjustment that incorporates some proven open source workflow methodologies, it raises an interesting question: How do you manage the rollout of an innersource program?
|
||||
|
||||
I have seen good and bad ways in which organizations do this. Some take a top-down approach and announce to their staff that things are going to be different and teams and staff need to fall in line given a specific innersource schedule. Other companies take a more bottom-up approach where a dedicated innersource team informally tries to get people on board with the innersource program.
|
||||
|
||||
I wouldn't recommend either approach explicitly, but instead a combination. For any cultural change you are going to need a top-down approach in emphasizing and encouraging a new way of working from your executive team and company leaders. Rather than dictating rules, these leaders should instead be supporting an environment where your staff can help shape the workings and implementation of your innersource program in a more bottom-up way.
|
||||
|
||||
Let's be honest, everyone hates these kind of cultural adjustments. We have all lived through company executives bringing in new ways of working—agile, kanban, pair-programming, or whatever else. Often these new ways of working are pushed onto staff and it sets the dynamic off on the wrong foot: Instead of encouraging your staff to shape the culture you are instead mandating it to them.
|
||||
|
||||
The approach I recommend here is to put in place a regular cadence that iterates your innersource strategy.
|
||||
|
||||
For example, every six months a new cycle begins. Prior to the end of the previous cycle the leadership team will survey staff to get their perspectives on how the innersource program is working, review their feedback, and set out core goals for the new cycle. Staff will then have structured ways in which they can play a role in helping to shape how these goals can be accomplished. For example, this could be individual teams/groups that focus on communication, peer review, QA, community growth and engagement, and more. Throughout the entire process a core innersource team will facilitate these conversations, mentor and guide the leadership team, support individual staff in making worthwhile contributions, and keep the train moving forward.
|
||||
|
||||
This is how it works in open source. If you hack on a project you don't just get to submit code, but you can also play a role in the operational dynamics of the project too. It is important you bring this sense of influence into your innersource program and your staff—the most rewarding companies are the ones where the staff feel they can influence the culture in a positive way.
|
||||
|
||||
### Asynchronous and remote working
|
||||
|
||||
One of the most interesting challenges with innersource is that it depends on asynchronous collaboration extensively. That is, you can collaborate digitally without requiring participants to be in the same timezone or location.
|
||||
|
||||
As an example, some companies require all staff members to work from the same office and much of the business that gets done takes place in in-person meetings in conference rooms, on conference calls, and via email. This can make it difficult for the company to grow and hire remote workers or for staff to be productive when they are on the road at conferences or at home.
|
||||
|
||||
A core component of what makes open source work well is that participants can work asynchronously. All communication, development (code submission, issue management, and code review), QA, and release management can often be performed entirely online and from any timezone. Combining asynchronous work with semi-regular in-person sprints and meetings is a hugely productive and efficient methodology.
|
||||
|
||||
This can be a difficult transition for companies. For example, some of my clients will have traditionally had in-person meetings to plan projects, perform code review over a board-room table, and don't have the infrastructure and communication channels to operate asynchronously. To do innersource well, it is important to put in place a plan to work as asynchronously as possible, blending the benefits of in-person communication at the office with support for digital collaboration.
|
||||
|
||||
The side benefit of doing this is that you build an environment that can support remote working. As anyone who works in technology will likely agree, hiring good people is _hard_ , and a blocker can often be a relocation to your city. Thus, your investment in innersource will also make it easier to not just hire people remotely (anyone can do that), but importantly to have an environment where remote workers can be successful.
|
||||
|
||||
### Peer review and workflow
|
||||
|
||||
For companies making the adjustment to innersource, one of the most interesting "sticking points" is the peer review process.
|
||||
|
||||
For those of you familiar with open source, there are two important principles that take place in everyday collaboration. Firstly, all contributions are reviewed by other developers (both new features and bug fixes), and secondly, this peer review takes place out in the open for other people to see. This open peer review element can be a tough pill to swallow for people new to open source. If your engineers have either not conducted code review or it took place privately, it can be socially awkward and unsettling to move over to a more open review process.
|
||||
|
||||
This adjustment is something that needs to be carefully managed. A large part of this is building a culture in which critique is something that should be favored not avoided, that we celebrate failure, and we cherish our peers helping us to be better at what we do. Again, this is about framing these adjustments to the benefit of your staff so that while it may be awkward at first, they will feel the ultimate benefits soon after.
|
||||
|
||||
As you can probably see, a core chunk of building communities (whether public communities or innersource communities in companies) is understanding the psychology of your community members and converting those psychological patterns into workflow that helps you encourage the behavior you want to see.
|
||||
|
||||
As an example, there are two behavioral economics principles that play a key role with peer review and workflow. The first is the _Ikea Effect_. This is where, if you and I were to put together the exact same Ikea table (or build something else), we would each think our respective table is somehow better or more valuable. Thus, we put more value into the things we make, often overstated value. Secondly, there is the principle of _autonomy_, which is essentially that _choice_ is critically important to people. If we don't feel we have control of our destiny, that we can make choices, we feel boxed in and restricted.
|
||||
|
||||
Just these two principles have an important impact on workflow and peer review. In terms of the Ikea effect we should expect that most people's pull requests and fixes will likely be seen as very valuable to them. As such, we need to use the peer review process to objectively define the value of the contribution in an independent and unemotional way (e.g. reviewing specific pieces of the diff, requiring at least two reviewers, encouraging specific implementation feedback, etc). With the autonomy principle, we should ensure staff can refine, customize, and hack on their toolchain and workflow as much as possible, and regularly give them an opportunity to provide feedback and input on making it better.
|
||||
|
||||
### Reputation, incentives, and engagement
|
||||
|
||||
Another key element of building an innersource culture in a company is to carefully carve out how you will track great work and incentivize and encourage that kind of behavior.
|
||||
|
||||
This piece has three core components.
|
||||
|
||||
Firstly, we need to have a way of getting a quantitative representation of the quality of that person's work. This can be as involved as building a complex system for tracking individual actions and weighting their value, or as simple as observationally watching how people work. What is important here is that people should be judged on their merit, and not factors such as how much they schmooze with the bosses, or how many donuts they bring to the office.
|
||||
|
||||
Secondly, based on this representation of their work we need to provide different incentives and rewards to encourage the behavior we want to see.
|
||||
|
||||
There are two core types of rewards. _Extrinsic_ rewards are material in nature, such as T-shirts, hoodies, gift cards, money, and more. _Intrinsic_ rewards are of the more touchy-feely kind such as respect, recognition, and admiration. Both are important and it is important to get the right balance of these rewards. Based on the behavior you want to encourage, I recommend putting together incentives that inspire action and rewards that validate those actions.
|
||||
|
||||
Finally, it can be helpful to sub-divide your staff into different groups based on their work and engage them in different ways. For example, people who are new will benefit from mentoring, support, and wider guidance. On the other hand, the most active and accomplished staff members can be a tremendous source of insight and guidance, and they often enjoy playing a role in helping to shape the company culture further.
|
||||
|
||||
So, there are a few starting points for broader brushstrokes that typically need to be made when painting an innersource picture inside your organization. As usual, let me know your thoughts in the comments and feel free to reach out to me with questions!
|
||||
|
||||
-----------------
|
||||
|
||||
作者简介
|
||||
|
||||
Jono Bacon - Jono Bacon is a leading community manager, speaker, author, and podcaster. He is the founder of [Jono Bacon Consulting][2] which provides community strategy/execution, developer workflow, and other services. He also previously served as director of community at GitHub, Canonical, XPRIZE, OpenAdvantage, and consulted and advised a range of organizations.[More about me][3]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/16/11/create-internal-innersource-community
|
||||
|
||||
作者:[Jono Bacon ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jonobacon
|
||||
[1]:https://opensource.com/life/16/11/create-internal-innersource-community?rate=QnpszlpgMXpNG5m2OLbZPYAg_RV_DA1i48tI00CPyTc
|
||||
[2]:http://www.jonobacon.com/consulting
|
||||
[3]:https://opensource.com/users/jonobacon
|
||||
[4]:https://opensource.com/user/26312/feed
|
||||
[5]:http://www.jonobacon.org/consulting
|
||||
[6]:https://opensource.com/users/jonobacon
|
||||
[7]:https://opensource.com/users/jonobacon
|
||||
[8]:https://opensource.com/users/jonobacon
|
||||
[9]:https://opensource.com/tags/community-management
|
||||
[10]:https://opensource.com/tags/six-degrees-column
|
||||
[11]:https://opensource.com/participate
|
|
||||
User Editorial: Steam Machines & SteamOS after a year in the wild
|
||||
====
|
||||
|
||||
|
||||
On this day, last year, [Valve released Steam Machines onto the world][2], after the typical Valve delays. While the state of the Linux desktop regarding gaming has improved, Steam Machines have not taken off as a platform, and SteamOS remains stagnant. What happened with these projects from Valve? Why were they created, why did they fail, and what could have been done to make them succeed?
|
||||
|
||||
**Context**
|
||||
|
||||
In 2012, when Windows 8 released, it included an app store, much like iOS and Android. With the new touch-friendly user interface Microsoft debuted, there was a new set of APIs available called “WinRT,” for creating these immersive touch-friendly applications in the UI language called “Metro.” Applications created with this new API, however, could only be distributed via the Windows Store, with Microsoft taking out a 30% cut, just like the other stores. To Gabe Newell, CEO of Valve, this was unacceptable, and he saw the risks of Microsoft using its position to push the Windows Store and Metro applications to crush Valve, like what they had done to Netscape using Internet Explorer.
|
||||
|
||||
To Valve, the strength of the PC running Windows was that it was an open platform, where anyone could run whatever they want without control by the operating system or hardware vendor. The alternative to these proprietary platforms closing in on third-party application stores like Steam was to push a truly open platform that grants everyone the freedom to change it – Linux. Linux is just a kernel, but you can easily create an operating system with it and other software like the GNU core utilities and GNOME, such as Ubuntu. While pushing Ubuntu and other Linux distributions would give Valve a sanctuary platform in case Microsoft or Apple turned hostile, Linux also gave them the possibility of creating a new platform.
|
||||
|
||||
**Conception**
|
||||
|
||||
Valve seemed to have found an opportunity in the console space, if we can call Steam Machines consoles. To achieve the user interface expectations of a console, being used on a large screen television from afar, Big Picture Mode was created. A core principle of the machines was openness; the software was able to be swapped out for Windows, as an example, and the CAD designs for the controller are available for people’s projects.
|
||||
|
||||
Originally, Valve had planned to create their own box as a “flagship” machine. However, these only shipped as prototypes to testers in 2013. They would also let other OEMs like Dell create their own Steam Machines as well, and allow a variety of pricing and specification options. A company called “Xi3” showed their small box, small enough to fit into a palm, as a possible candidate to become a premier Steam Machine, which created more hype around Steam Machines. Ultimately, Valve decided to only go with OEM partners to make and advertise Steam Machines, rather than doing it themselves.
|
||||
|
||||
More “out there” ideas were considered. Biometrics, gaze tracking, and motion controllers were considered for the controller. Of them, the released Steam Controller had a gyroscope, and the HTC Vive controllers had various tracking and motion features that may have been originally intended for the original controller concepts. The controller was also originally intended to be more radical in its approach, with a touchscreen in the middle that had customizable, context-sensitive actions. Ultimately, the launch controller was more conservative, but still had features like the dual trackpads and advanced software that gave it flexibility. Valve had also considered making a version of Steam Machines and SteamOS for smaller hardware like laptops. This ultimately never bore any fruit, though the “Smach Z” handheld could be compared to this.
|
||||
|
||||
In [September 2013][3], Valve announced Steam Machines and SteamOS, with an expected release in the middle of 2014. The aforementioned 300 prototype machines were released to testers in December, and in January, 2000 more machines were provided to developers. SteamOS was released for testers experienced with Linux to try out. With the feedback given, Valve decided to delay the release until November 2015.
|
||||
|
||||
The late launch caused problems with partners; Dell’s Steam Machine was launched a year early running Windows as the Alienware Alpha, with extra software to improve usability with a controller.
|
||||
|
||||
**Launch**
|
||||
|
||||
With the launch, Valve and their OEM partners released their machines, and Valve also released the Steam Controller and the Steam Link. A retail presence was established with GameStop and other brick and mortar stores providing space. Before release, some OEMs pulled out of the launch; Origin PC and Falcon Northwest, two high-end boutique builders. They had claimed performance issues and limitations had made them decide not to ship SteamOS.
|
||||
|
||||
The machines had launched to mixed reviews. The Steam Link was praised and many had considered buying one for their existing PC instead of buying a Steam Machine for the living room. The Steam Controller reception was muddled, due to its rich feature set but high learning curve. The Steam Machines themselves ultimately launched to the muddiest reception, however. Reviewers like LinusTechTips noticed glaring defects with the SteamOS software, including performance issues. Many of the machines were criticized for their high price point and poor value, especially when compared to the option of building a PC from the perspective of a PC gamer, or the price in comparison to other consoles. The use of SteamOS was criticized over compatibility, bugs, and lower performance than Windows. Of the available options, the Alienware Steam Machine was considered to be the most interesting option due to its value relative to other machines and small form factor.
|
||||
|
||||
By using Debian Linux as the base, Valve had many “launch titles” for the platform, as they had a library of pre-existing Linux titles. The initial availability of games was seen as favourable compared to other consoles. However, many titles originally announced for the platform never came out, or came out much later. Rocket League and Mad Max came out only recently, after initial announcements a year ago, and titles like The Witcher 3 and Batman: Arkham Knight never came to the platform, despite initial promises from Valve or publishers. In the case of The Witcher 3, the developer, CD Projekt Red, denied they ever announced a port, despite their game appearing in a list of titles on sale that had or were announced to have Linux and SteamOS support. In addition, many “AAA” titles have not been ported, though this situation continues to improve over time.
|
||||
|
||||
**Neglect**
|
||||
|
||||
With the Steam Machines launched, developers at Valve had moved on to other projects. Of the projects being worked on, virtual reality was seen as the most important, with about a third of employees working on it as of June. Valve had seen virtual reality as something to develop, and Steam as the prime ecosystem for delivering VR. Using HTC to manufacture, they had designed their own virtual reality headset and controllers, and would continue to develop new revisions. However, Linux and Steam Machines fell by the wayside with this focus. SteamVR, until recently, did not support Linux (it's still not public yet, but it was shown off at SteamDevDays on Linux), which put into question Valve’s commitment to Steam Machines and to Linux as an open platform with a future.
|
||||
|
||||
There has been little development of SteamOS itself. The last major update, SteamOS 2.0, mostly synchronized with upstream Debian and required a reinstallation, and subsequent patches simply keep synchronizing with upstream sources. While Valve has made improvements to projects like Mesa, which have improved performance for many users, it has done little with Steam Machines as a product.
|
||||
|
||||
Many features continue to go undeveloped. Steam’s built-in functionality like chat and broadcasting continues to be weak, but this affects all platforms that Steam runs on. More pressingly, services like Netflix, Twitch, and Spotify are not integrated into the interface as they are on most major consoles; accessing them requires using the browser, which can be slow and clunky, if it even achieves what is wanted, or bringing in software from third-party sources, which requires using the terminal, and that software might not be very usable with a controller – this is a poor UX for what’s considered to be an appliance.
|
||||
|
||||
Valve put little effort into marketing the platform, preferring to leave this to OEMs. However, most OEMs were either boutique builders or makers of barebones systems. Of the OEMs, only Dell was a major player in the PC market, and the only one who pushed Steam Machines with advertisements.
|
||||
|
||||
Sales were not strong: 500,000 controllers had been sold seven months on (stated in June 2016), including those bundled with a Steam Machine. This puts retail Steam Machines, not counting machines people have installed SteamOS on, in the low hundreds of thousands. Compared to the existing PC and console install bases, this is low.
|
||||
|
||||
**Post-mortem thoughts**
|
||||
|
||||
So, with the story of what happened, can we identify why Steam Machines failed, and ways they could succeed in the future?
|
||||
|
||||
_Vision and purpose_
|
||||
|
||||
Steam Machines did not make clear what they were in the market, nor did any advantages particularly stand out. On the PC flank, building PCs had become popular and is a cheaper option with better upgrade and flexibility options. On the console flank, they were outflanked by consoles with low initial investment, despite a possibly higher TCO with game prices, and a far simpler user experience.
|
||||
|
||||
With PCs, flexibility is seen as a core asset, with users being able to use their machines beyond gaming, doing work and other tasks. While Steam Machines were just PCs running SteamOS with no restrictions, the SteamOS software and marketing had solidified their view as consoles to PC gamers, compounded by the price and lower flexibility in hardware with some options. In the living room where these machines could have made sense to PC gamers, the Steam Link offered a way to access content on a PC in another room, and small form factor hardware like NUCs and Mini-ITX motherboards allowed for custom built PCs that are more socially acceptable in living rooms. The SteamOS software was also available to “convert” these PCs into Steam Machines, but people seeking flexibility and compatibility often opted for a Linux or Windows desktop. Recent strides in Windows and desktop Linux have simplified maintenance tasks associated with desktop-experience computers, automating most of it.
|
||||
|
||||
With consoles, simplicity is a virtue. Even as they have expanded in their roles, with media features often a priority, they are still a plug and play experience where compatibility and experience are guaranteed, with a low barrier of entry. Consoles also have long life cycles, ranging from four to seven years, and the fixed hardware during this life cycle allow developers to target and optimize especially for their specifications and features. New mid-life upgrades like “Scorpio” and PlayStation 4 Pro may change the unified experience previously shared by users, but manufactures are requiring games to work on the original model consoles to avoid the most problematic aspects. To keep users attached to the systems, social networks and exclusive games are used. Games also come on discs that can be freely reused and resold, which is a positive for retailers and users. Steam Machines have none of these guarantees; they carry PC complexity and higher initial prices despite a living room friendly exterior.
|
||||
|
||||
_Reconciliation_
|
||||
|
||||
With this, Steam Machines could be seen as a “worst of both worlds” product, carrying the burdens of both kinds of product, without showing clearly as one or the other, or some kind of new product category. There also exist many deficiencies that neither party experiences, like lack of AAA titles that appear on consoles and Windows PCs, and lack of clients for services like Netflix. Despite this, Valve has shown little effort into improving the product or even trying to resolve the seemingly contradictory goals like the mutual distrust of PC and console gaming.
|
||||
|
||||
Some things may make it impossible to reconcile the two concepts into one category or the other, though. Things like graphics settings and mods may make it hard to create a foolproof experience, and the complexity of the underlying system appears from time to time.
|
||||
|
||||
One of the most complex parts is the concept of having a lineup – users need to evaluate not only the costs and specifications of a system, but its value and value relative to other systems. You need some way for the user to know that their hardware can run any given game, either by some automated benchmark system with comparison, or a grading system, though these need to be simple and need to support (almost) every game in the library. In addition, you also need to worry about how these systems and grades will age – what does a “2016 Grade A” machine mean three years from now?
|
||||
|
||||
_Valve, effort, and organization_
|
||||
|
||||
Valve’s organizational structure may be detrimental to creating platforms like Steam Machines, let alone maintaining services like Steam. Their mostly leaderless structure, with people supposedly moving their desks to ad-hoc units working on projects that they alone decide to work on, can be great for creative endeavours, as well as research and development. It’s said Valve only hires what they consider to be the “cream of the crop,” with very strict standards, tying them to what they deem more “worthy” work. This view may be inaccurate, though: cliques often exist, the word of Gabe Newell is more important than the “leaked” employee handbook lets on, and people are hired and then fired as needed, like contractors working on certain aspects.
|
||||
|
||||
However, this leaves projects that aren’t glamorous or interesting, but need persistent and often mundane work, to wither on the vine. Customer support for Valve has been a constant headache, with enraged users feeling ignored, and Valve sometimes only acting when legally required to do so, like with the automated refund system that was forced into action by Australian and European legislation, or the Counter-Strike: Global Offensive item gambling site controversy involving the gambling commission of Washington State that’s still ongoing.
|
||||
|
||||
This has affected Steam Machines as a result. With the launch delayed by a year, some partners’ hands were forced: Dell launched the Alienware Steam Machine a year early as the Alienware Alpha, leaving the hardware outdated by launch. These delays may also have affected game availability. The opinions of developers and hardware partners following the delayed and non-monumental launch are not clear. Valve’s virtual reality platform simply wasn’t available on Linux, and as such on SteamOS, until recently, even as SteamVR was receiving significant developer effort.
|
||||
|
||||
_The “long game”_
|
||||
|
||||
Valve is seen as playing a “long game” with Steam Machines and SteamOS, though it appears as if there is no roadmap. An example of Valve aiming for the long term is Steam itself, from its humble and initially reviled beginnings as a patching platform for their games to the popular distribution platform and social network it is today. It also helped that Steam was required to play Valve’s games like Half-Life 2 and Counter-Strike 1.6. However, it doesn’t seem as if Valve is putting the same effort into Steam Machines as they did into Steam before. There is also entrenched competition that Steam in the early days never really dealt with. That competition arguably includes Valve itself, with Steam on Windows.
|
||||
|
||||
_Gambit_
|
||||
|
||||
With the lack of developments in Steam Machines, one wonders if the platform was a bargaining chip of sorts. Valve’s Linux efforts, and Steam Machines with them, had originally started because of concerns that Microsoft and Apple would push Valve out of the market with native app stores; Steam Machines grew so that Valve would have a safe haven in case this happened, and a bargaining chip so Valve could remind the developers of its host platforms of its possible independence. When those threats turned out to be empty, Valve slowed down development. I don’t see it this way, however; Valve has expended a lot of goodwill with hardware partners and developers trying to push this, only to halt it. You could say both Microsoft and Valve called each other’s bluffs – Microsoft with a locked-down Windows 8, and Valve with its capability as an independent player.
|
||||
|
||||
Even then, who is to say developers wouldn’t follow Microsoft onto a locked-in platform if it could offer superior deals to publishers, or better customer relationships? In addition, Microsoft is now pushing Xbox-on-Windows integration with cross-buy, Xbox Live integration, and Xbox-exclusive games on Windows, all while preserving Windows as an open platform – arguably more of a threat to Steam.
|
||||
|
||||
Another point you could argue is that all of this was simply to push Linux adoption in PC gaming, with Steam Machines merely making it more palatable to publishers and developers by implying a large push and continued support. However, that would make for an awfully expensive gambit – developers supported Linux before and after Steam Machines anyway – and it could have backfired, with developers pulling out of Linux because the Promised Land of SteamOS never arrived.
|
||||
|
||||
**My opinions on what could have been done**
|
||||
|
||||
I think Steam Machines are an interesting product with a real market, but a lack of interest and effort, as well as possible confusion over what they should have been, has damaged them. I see Steam Machines as a way to cut out the complexity of PC gaming – worrying about parts, life cycles, and maintenance – while keeping advantages like cheap games, mods, and an open platform that can be fiddled with if the user desires. However, Valve needs to get core aspects like pricing, marketing, lineup, and software right.
|
||||
|
||||
I think Steam Machines can compromise on things like upgradability (though it’s possible to preserve it, with attention to user experience) and choice in order to reduce friction; PCs would still exist for those who want those options. The paralysis of choice is a real dilemma, and the sheer number of poorly valued options available as Steam Machines didn’t help. Valve needs a flagship machine to lead the lineup. Arguably, the Alienware model came close, but it was never made official. There is good industrial design talent at Valve, and if it focused on a machine of its own and put in the effort, it could be worth it, with a company like Dell or HTC manufacturing for Valve and bringing in its experience. Defining life cycles, with only one or two specifications updated periodically, would also help, especially if Valve worked with developers to establish these as a baseline that should be supported. I’m not sure about OEMs; if Valve puts its effort behind one machine, they might be made redundant and ultimately only hinder development of the platform.
|
||||
|
||||
Addressing the software issues is essential. The lack of integration with services like Netflix and Twitch – which exist fluidly on consoles and are easily put into place on PC, despite living-room user interface issues – is holding Steam Machines back. Although Valve has slowly been acquiring movie licenses for distribution on Steam, people will use existing and trusted streaming sources, and this needs to be addressed, especially as people use their consoles as part of their home theatre. Fixing issues with the Steam client and platform is essential, and feature parity with other platforms is a good idea. Performance issues with Linux and its graphics stack are also a problem, though this is slowly improving. Getting ports of games will be another issue. Porting shops like Feral Interactive and Aspyr Media help the library, but they need to be contracted by publishers and developers, and they often use wrappers that add overhead. Valve has helped studios directly with porting, such as with Rocket League, but this has rarely happened, and when it did, it moved at the typical slow Valve pace. The monolith of AAA games can’t be ignored either – the situation has improved dramatically, but studios like Bethesda are still reluctant to port, especially given a small user base, the lack of support from Valve for Steam Machines even though Linux is doing relatively well, and the lack of extra DRM like Denuvo.
|
||||
|
||||
Valve also needs to put effort into everything beyond the hardware and software. With a single machine, it has an interest in, and could effectively subsidize, the hardware. That would put it at parity with consoles, and possibly make it cheaper than custom-built PCs. Marketing the product to whichever segments would be interested in the machines is essential. (I myself would be interested: I don’t like the hassle of PC building or the premium on prebuilt machines, but consoles often lack the games I want to play, and I have an existing library of cheaply acquired Steam games.) Retail partners may not be effective, due to their interest in selling and reselling physical copies of games.
|
||||
|
||||
Even with my suggestions for the platform and product, I’m not sure how effective they would be in helping Steam Machines achieve their full potential and do well in the marketplace. Ultimately, Valve needs to learn not just from its own mistakes, but from the mistakes of previous entrants like the 3DO and Pippin – which relied on an open platform or descended from desktop computing, and are relevant to Valve’s current situation – and to watch the future of Nintendo’s Switch, which steps into a similar realm of possible confusion over what it is.
|
||||
|
||||
_Note: Clearing up done by liamdawe, all thoughts are from the submitter._
|
||||
|
||||
This article was submitted by a guest, we encourage anyone to [submit their own articles][1].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.gamingonlinux.com/articles/user-editorial-steam-machines-steamos-after-a-year-in-the-wild.8474
|
||||
|
||||
作者:[calvin][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.gamingonlinux.com/profiles/5163
|
||||
[1]:https://www.gamingonlinux.com/submit-article/
|
||||
[2]:https://www.gamingonlinux.com/articles/steam-machines-steam-link-steam-controller-officially-released-steamos-sale.6201
|
||||
[3]:https://www.gamingonlinux.com/articles/valve-announces-steam-machines-you-can-win-one-too.2469
|
@ -1,4 +1,3 @@
|
||||
【toutoudnf 翻译中】
|
||||
A public cloud migration in 22 days
|
||||
============================================================
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by trnhoe
|
||||
From Node to Go: A High-Level Comparison
|
||||
============================================================
|
||||
|
||||
|
@ -1,152 +0,0 @@
|
||||
FEWER MALLOCS IN CURL
|
||||
===========================================================
|
||||
|
||||

|
||||
|
||||
Today I landed yet [another small change][4] to libcurl internals that further reduces the number of small mallocs we do. This time the generic linked list functions got converted to become malloc-less (the way linked list functions should behave, really).
|
||||
|
||||
### Instrument mallocs
|
||||
|
||||
I started out my quest a few weeks ago by instrumenting our memory allocations. This is easy since we have had our own memory debug and logging system in curl for many years. Using a debug build of curl, I ran this script in my build dir:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
export CURL_MEMDEBUG=$HOME/tmp/curlmem.log
|
||||
./src/curl http://localhost
|
||||
./tests/memanalyze.pl -v $HOME/tmp/curlmem.log
|
||||
```
|
||||
|
||||
For curl 7.53.1, this counted about 115 memory allocations. Is that many or a few?
|
||||
|
||||
The memory log is very basic. To give you an idea what it looks like, here’s an example snippet:
|
||||
|
||||
```
|
||||
MEM getinfo.c:70 free((nil))
|
||||
MEM getinfo.c:73 free((nil))
|
||||
MEM url.c:294 free((nil))
|
||||
MEM url.c:297 strdup(0x559e7150d616) (24) = 0x559e73760f98
|
||||
MEM url.c:294 free((nil))
|
||||
MEM url.c:297 strdup(0x559e7150d62e) (22) = 0x559e73760fc8
|
||||
MEM multi.c:302 calloc(1,480) = 0x559e73760ff8
|
||||
MEM hash.c:75 malloc(224) = 0x559e737611f8
|
||||
MEM hash.c:75 malloc(29152) = 0x559e737a2bc8
|
||||
MEM hash.c:75 malloc(3104) = 0x559e737a9dc8
|
||||
```
|
||||
|
||||
### Check the log
|
||||
|
||||
I then studied the log closer and I realized that there were many small memory allocations done from the same code lines. We clearly had some rather silly code patterns where we would allocate a struct and then add that struct to a linked list or a hash and that code would then subsequently add yet another small struct and similar – and then often do that in a loop. (I say _we_ here to avoid blaming anyone, but of course I myself am to blame for most of this…)
|
||||
|
||||
Those two allocations would always happen in pairs and they would be freed at the same time. I decided to address those. Doing very small (less than say 32 bytes) allocations is also wasteful just due to the very large amount of data in proportion that will be used just to keep track of that tiny little memory area (within the malloc system). Not to mention fragmentation of the heap.
|
||||
|
||||
So, fixing the hash code and the linked list code to not use mallocs was an immediate and easy way to remove over 20% of the mallocs for a plain and simple ‘curl http://localhost’ transfer.
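To sketch the general idea of a malloc-less linked list: make the list “intrusive”, so the link node lives inside the element itself and appending costs no extra allocation. The names below are invented for illustration and do not match curl’s internal llist API.

```
/* Minimal sketch of an "intrusive" linked list: the node is embedded in
   the element, so adding it to a list allocates nothing extra.
   Illustrative only - these names are not curl's actual API. */
struct node {
  struct node *prev;
  struct node *next;
};

struct list {
  struct node *head;
  struct node *tail;
};

struct transfer {       /* an element embeds its own link node, so one  */
  int id;               /* malloc covers both the data and its place in */
  struct node entry;    /* the list                                     */
};

static void list_append(struct list *l, struct node *n)
{
  n->next = NULL;
  n->prev = l->tail;
  if(l->tail)
    l->tail->next = n;
  else
    l->head = n;
  l->tail = n;
}
```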
|
||||
|
||||
At this point I sorted all allocations based on size and checked all the smallest ones. One that stood out was one we made in _curl_multi_wait()_, a function that is called over and over in a typical curl transfer main loop. I converted it over to [use the stack][5] for most typical use cases. Avoiding mallocs in very frequently called functions is a good thing.
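The pattern used there is a common one: keep a small array on the stack for the typical case and only fall back to malloc for unusually large inputs. A rough sketch of the idea, with hypothetical names rather than curl’s actual code:

```
#include <stdlib.h>
#include <string.h>

#define SMALL_MAX 16

int process_fds(const int *fds, size_t count)
{
  int small[SMALL_MAX];
  int *work = small;          /* common case: no allocation at all */

  if(count > SMALL_MAX) {
    work = malloc(count * sizeof(*work));  /* rare, oversized case only */
    if(!work)
      return -1;
  }

  memcpy(work, fds, count * sizeof(*work));
  /* ... operate on 'work' ... */

  if(work != small)
    free(work);
  return 0;
}
```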
|
||||
|
||||
### Recount
|
||||
|
||||
Today, the script from above shows that the same “curl localhost” command is down to 80 allocations from the 115 that curl 7.53.1 used, without sacrificing anything really. An easy 30% improvement. Not bad at all!
|
||||
|
||||
But okay, since I modified curl_multi_wait() I wanted to also see how it actually improves things for a slightly more advanced transfer. I took the [multi-double.c][6] example code, added the call to initiate the memory logging, made it use curl_multi_wait() and had it download these two URLs in parallel:
|
||||
|
||||
```
|
||||
http://www.example.com/
|
||||
http://localhost/512M
|
||||
```
|
||||
|
||||
The second one is just 512 megabytes of zeroes and the first is a public HTML page of around 600 bytes. Here’s the [count-malloc.c code][7].
|
||||
|
||||
First, I brought out 7.53.1 and built the example against that and had the memanalyze script check it:
|
||||
|
||||
```
|
||||
Mallocs: 33901
|
||||
Reallocs: 5
|
||||
Callocs: 24
|
||||
Strdups: 31
|
||||
Wcsdups: 0
|
||||
Frees: 33956
|
||||
Allocations: 33961
|
||||
Maximum allocated: 160385
|
||||
```
|
||||
|
||||
Okay, so it used 160KB of memory in total and it did over 33,900 allocations. But then, it downloaded over 512 megabytes of data, so that makes one malloc per 15KB of data. Good or bad?
|
||||
|
||||
Back to git master, the version we call 7.54.1-DEV right now – since we’re not quite sure which version number it’ll become when we release the next release. It can become 7.54.1 or 7.55.0, it has not been determined yet. But I digress, I ran the same modified multi-double.c example again, ran memanalyze on the memory log again and it now reported…
|
||||
|
||||
```
|
||||
Mallocs: 69
|
||||
Reallocs: 5
|
||||
Callocs: 24
|
||||
Strdups: 31
|
||||
Wcsdups: 0
|
||||
Frees: 124
|
||||
Allocations: 129
|
||||
Maximum allocated: 153247
|
||||
```
|
||||
|
||||
I had to look twice. Did I do something wrong? I better run it again just to double-check. The results are the same no matter how many times I run it…
|
||||
|
||||
### 33,961 vs 129
|
||||
|
||||
curl_multi_wait() is called a lot of times in a typical transfer, and it had at least one of the memory allocations we normally did during a transfer so removing that single tiny allocation had a pretty dramatic impact on the counter. A normal transfer also moves things in and out of linked lists and hashes a bit, but they too are mostly malloc-less now. Simply put: the remaining allocations are not done in the transfer loop so they’re way less important.
|
||||
|
||||
The old curl did 263 times the number of allocations the current does for this example. Or the other way around: the new one does 0.37% the number of allocations the old one did…
|
||||
|
||||
As an added bonus, the new one also allocates less memory in total as it decreased that amount by 7KB (4.3%).
|
||||
|
||||
### Are mallocs important?
|
||||
|
||||
In this day and age, with many gigabytes of RAM and all, do a few mallocs in a transfer really make a notable difference for mere mortals? What is the impact of 33,832 extra mallocs done for 512MB of data?
|
||||
|
||||
To measure what impact these changes have, I decided to compare HTTP transfers from localhost and see if we can see any speed difference. localhost is fine for this test since there’s no network speed limit, but the faster curl is the faster the download will be. The server side will be equally fast/slow since I’ll use the same set for both tests.
|
||||
|
||||
I built curl 7.53.1 and curl 7.54.1-DEV identically and ran this command line:
|
||||
|
||||
```
|
||||
curl http://localhost/80GB -o /dev/null
|
||||
```
|
||||
|
||||
80 gigabytes downloaded as fast as possible written into the void.
|
||||
|
||||
The exact numbers I got for this may not be terribly interesting, as they will depend on the CPU in the machine, which HTTP server serves the file, the optimization level when I build curl, and so on. But the relative numbers should still be highly relevant: the old code vs. the new.
|
||||
|
||||
7.54.1-DEV repeatedly performed 30% faster! The 2200MB/sec in my build of the earlier release increased to over 2900 MB/sec with the current version.
|
||||
|
||||
The point here is of course not that it easily can transfer HTTP over 20GB/sec using a single core on my machine – since there are very few users who actually do such speedy transfers with curl. The point is rather that curl now uses less CPU per byte transferred, which leaves more CPU over to the rest of the system to perform whatever it needs to do. Or to save battery if the device is a portable one.
|
||||
|
||||
On the cost of malloc: The 512MB test I did resulted in 33832 more allocations using the old code. The old code transferred HTTP at a rate of about 2200MB/sec. That equals 145,827 mallocs/second – that are now removed! A 600 MB/sec improvement means that curl managed to transfer 4300 bytes extra for each malloc it didn’t do, each second.
|
||||
|
||||
### Was removing these mallocs hard?
|
||||
|
||||
Not at all, it was all straightforward. It is however interesting that there’s still room for changes like this in a project this old. I’ve had this idea for some years and I’m glad I finally took the time to make it happen. Thanks to our test suite I could do this level of “drastic” internal change with a fairly high degree of confidence that I don’t introduce any terrible regressions. Thanks to our APIs being good at hiding internals, this change could be done completely without changing anything for old or new applications.
|
||||
|
||||
(Yeah I haven’t shipped the entire change in a release yet so there’s of course a risk that I’ll have to regret my “this was easy” statement…)
|
||||
|
||||
### Caveats on the numbers
|
||||
|
||||
There have been 213 commits in the curl git repo from 7.53.1 till today. There’s a chance that one or more commits other than the pure alloc changes have made a performance impact, even if I can’t think of any.
|
||||
|
||||
### More?
|
||||
|
||||
Are there more “low hanging fruits” to pick here in the similar vein?
|
||||
|
||||
Perhaps. We don’t do a lot of performance measurements or comparisons so who knows, we might do more silly things that we could stop doing and do even better. One thing I’ve always wanted to do, but never got around to, was to add daily “monitoring” of memory/mallocs used and how fast curl performs in order to better track when we unknowingly regress in these areas.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/
|
||||
|
||||
作者:[DANIEL STENBERG ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://daniel.haxx.se/blog/author/daniel/
|
||||
[1]:https://daniel.haxx.se/blog/author/daniel/
|
||||
[2]:https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/
|
||||
[3]:https://daniel.haxx.se/blog/2017/04/22/fewer-mallocs-in-curl/#comments
|
||||
[4]:https://github.com/curl/curl/commit/cbae73e1dd95946597ea74ccb580c30f78e3fa73
|
||||
[5]:https://github.com/curl/curl/commit/5f1163517e1597339d
|
||||
[6]:https://github.com/curl/curl/commit/5f1163517e1597339d
|
||||
[7]:https://gist.github.com/bagder/dc4a42cb561e791e470362da7ef731d3
|
@ -1,5 +1,3 @@
|
||||
GHLandy Translating
|
||||
|
||||
LFCS sed Command
|
||||
=====================
|
||||
|
||||
|
@ -1,82 +0,0 @@
|
||||
[DNS Infrastructure at GitHub][1]
|
||||
============================================================
|
||||
|
||||
|
||||
At GitHub we recently revamped how we do DNS from the ground up. This included both how we [interact with external DNS providers][4] and how we serve records internally to our hosts. To do this, we had to design and build a new DNS infrastructure that could scale with GitHub’s growth and across many data centers.
|
||||
|
||||
Previously GitHub’s DNS infrastructure was fairly simple and straightforward. It included a local, forwarding-only DNS cache on every server and a pair of hosts that acted as both caches and authorities used by all these hosts. These hosts were available both on the internal network and the public internet. We configured zone stubs in the caching daemon to direct queries locally rather than recurse on the internet. We also had NS records set up at our DNS providers that pointed specific internal zones to the public IPs of this pair of hosts, for queries external to our network.
|
||||
|
||||
This configuration worked for many years but was not without its downsides. Many applications are highly sensitive to resolving DNS queries, and any performance or availability issues we ran into would cause queuing and degraded performance at best and customer-impacting outages at worst. Configuration and code changes can cause large, unexpected changes in query rates, so scaling beyond these two hosts became an issue. Due to the network configuration of these hosts we would just need to keep adding IPs and hosts, which has its own problems. While attempting to firefight and remediate these issues, the old system made it difficult to identify causes due to a lack of metrics and visibility. In many cases we resorted to `tcpdump` to identify the traffic and queries in question. Another issue was that by running on public DNS servers we ran the risk of leaking internal network information. As a result we decided to build something better and began to identify our requirements for the new system.
|
||||
|
||||
We set out to design a new DNS infrastructure that would improve the aforementioned operational issues including scaling and visibility, as well as introducing some additional requirements. We wanted to continue to run our public DNS zones via external DNS providers so whatever system we build needed to be vendor agnostic. Additionally, we wanted this system to be capable of serving both our internal and external zones, meaning internal zones were only available on our internal network unless specifically configured otherwise and external zones are resolvable without leaving our internal network. We wanted the new DNS architecture to allow both a [deploy-based workflow for making changes][5] as well as API access to our records for automated changes via our inventory and provisioning systems. The new system could not have any external dependencies; too much relies on DNS functioning for it to get caught in a cascading failure. This includes connectivity to other data centers and DNS services that may reside there. Our old system mixed the use of caches and authorities on the same host; we wanted to move to a tiered design with isolated roles. Lastly, we wanted a system that could support many data center environments whether it be EC2 or bare metal.
|
||||
|
||||
### Implementation
|
||||
|
||||

|
||||
|
||||
To build this system we identified three classes of hosts: caches, edges, and authorities. Caches serve as recursive resolvers and DNS “routers”, caching responses from the edge tier. The edge tier, running a DNS authority daemon, responds to queries from the caching tier for zones it is configured to zone-transfer from the authority tier. The authority tier serves as a set of hidden DNS masters, acting as our canonical source for DNS data, servicing zone transfers from the edge hosts as well as providing an HTTP API for creating, modifying or deleting records.
|
||||
|
||||
In our new configuration, caches live in each data center meaning application hosts don’t need to traverse a data center boundary to retrieve a record. The caches are configured to map zones to the edge hosts within their region in order to route our internal zones to our own hosts. Any zone that is not explicitly configured will recurse on the internet to resolve an answer.
|
||||
|
||||
The edge hosts are regional hosts, living in our network edge PoPs (Points of Presence). Our PoPs have one or more data centers that rely on them for external connectivity; without the PoP, the data center can’t get to the internet and the internet can’t get to it. The edges perform zone transfers with all authorities regardless of what region or location they exist in and store those zones locally on their disk.
|
||||
|
||||
Our authorities are also regional hosts, each containing only the zones applicable to its region. Our inventory and provisioning systems determine which regional authority a zone lives in and will create and delete records via an HTTP API as servers come and go. OctoDNS maps zones to regional authorities and uses the same API to create static records and to ensure dynamic sources are in sync. We have an additional, separate authority for external domains, such as github.com, to allow us to query our external domains during a disruption to connectivity. All records are stored in MySQL.
|
||||
|
||||
### Operability
|
||||
|
||||

|
||||
|
||||
One huge benefit of moving to a more modern DNS infrastructure is observability. Our old DNS system had little to no metrics and limited logging. A large factor in deciding which DNS servers to use was the breadth and depth of metrics they produce. We finalized on [Unbound][6] for the caches, [NSD][7] for the edge hosts and [PowerDNS][8] for the authorities, all of which have been proven in DNS infrastructures much larger than at GitHub.
|
||||
|
||||
When running in our bare metal data centers, caches are accessed via a private [anycast][9] IP resulting in it reaching the nearest available cache host. The caches have been deployed in a rack aware manner that provides some level of balanced load between them and isolation against some power and network failure modes. When a cache host fails, servers that would normally use it for lookups will now automatically be routed to the next closest cache, keeping latency low as well as providing tolerance to some failure modes. Anycast allows us to scale the number of caches behind a single IP address unlike our previous configuration, giving us the ability to run as many caching hosts as DNS demand requires.
|
||||
|
||||
Edge hosts perform zone transfers with the authority tier, regardless of region or location. Our zones are not large enough that keeping a copy of all of them in every region is a problem. This means for every zone, all caches will have access to a local edge server with a local copy of all zones even when a region is offline or upstream providers are having connectivity issues. This change alone has proven to be quite resilient in the face of connectivity issues and has helped keep GitHub available during failures that not long ago would have caused customer facing outages.
|
||||
|
||||
These zone transfers include both our internal and external zones from their respective authorities. As you might guess zones like github.com are external and zones like github.net are generally internal. The difference between them is only the types of use and data stored in them. Knowing which zones are internal and external gives us some flexibility in our configuration.
|
||||
|
||||
```
|
||||
$ dig +short github.com
|
||||
192.30.253.112
|
||||
192.30.253.113
|
||||
|
||||
```
|
||||
|
||||
Public zones are [sync’d][10] to external DNS providers and are the records GitHub users use every day. Additionally, public zones are completely resolvable within our network without needing to communicate with our external providers. This means any service that needs to look up `api.github.com` can do so without needing to rely on external network connectivity. We also use the stub-first configuration option of Unbound, which gives a lookup a second chance if our internal DNS service is down for some reason, by looking it up externally when it fails.
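For illustration only, a cache configuration along these lines might look roughly like this in Unbound; the addresses are placeholders and this is a sketch of the options involved, not GitHub’s actual configuration:

```
# Hypothetical Unbound cache config sketch (addresses are placeholders)
stub-zone:
    name: "github.net"     # internal zone, answered by local edge hosts
    stub-addr: 10.0.0.10
    stub-addr: 10.0.0.11

stub-zone:
    name: "github.com"     # public zone, resolvable internally...
    stub-addr: 10.0.0.10
    stub-first: yes        # ...with normal recursion as a second chance
```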
|
||||
|
||||
```
|
||||
$ dig +short time.github.net
|
||||
10.127.6.10
|
||||
|
||||
```
|
||||
|
||||
Most of the `github.net` zone is completely private, inaccessible from the internet and only contains [RFC 1918][11] IP addresses. Private zones are split up per region and site. Each region and/or site has a set of sub-zones applicable to that location, sub-zones for management network, service discovery, specific service records and yet to be provisioned hosts that are in our inventory. Private zones also include reverse lookup zones for PTRs.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Replacing an old system with a new one that is ready to serve millions of customers is never easy. Using a pragmatic, requirements based approach to designing and implementing our new DNS system resulted in a DNS infrastructure that was able to hit the ground running and will hopefully grow with GitHub into the future.
|
||||
|
||||
Want to help the GitHub SRE team solve interesting problems like this? We’d love for you to join us. [Apply Here][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://githubengineering.com/dns-infrastructure-at-github/
|
||||
|
||||
作者:[Joe Williams ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/joewilliams
|
||||
[1]:https://githubengineering.com/dns-infrastructure-at-github/
|
||||
[2]:https://github.com/joewilliams
|
||||
[3]:https://github.com/joewilliams
|
||||
[4]:https://githubengineering.com/enabling-split-authority-dns-with-octodns/
|
||||
[5]:https://githubengineering.com/enabling-split-authority-dns-with-octodns/
|
||||
[6]:https://unbound.net/
|
||||
[7]:https://www.nlnetlabs.nl/projects/nsd/
|
||||
[8]:https://powerdns.com/
|
||||
[9]:https://en.wikipedia.org/wiki/Anycast
|
||||
[10]:https://githubengineering.com/enabling-split-authority-dns-with-octodns/
|
||||
[11]:http://www.faqs.org/rfcs/rfc1918.html
|
||||
[12]:https://boards.greenhouse.io/github/jobs/669805#.WPVqJlPyvUI
|
@ -1,241 +0,0 @@
|
||||
Top 8 IDEs for Raspberry Pi
|
||||
============================================================
|
||||
|
||||
|
||||
__
|
||||
|
||||
_The Raspberry Pi, a tiny single-board computer, has revolutionised the way in which computer science is being taught in schools. It has also turned out to be a boon for software developers. Currently, it has gained popularity much beyond its target market and is being used in robotics projects._
|
||||
|
||||
Raspberry Pi, a small development board minicomputer that runs the Linux operating system, was developed in the United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools in the UK and in developing countries. Raspberry Pi has USB sockets, which support various peripheral plug-and-play devices like the keyboard, the mouse, the printer, etc. It contains ports like HDMI (High Definition Multimedia Interface) to provide users with video output. Its credit-card-like size makes it extremely portable and affordable. It requires just a 5V micro-USB power supply, similar to the one used to charge a mobile phone.
|
||||
|
||||
Over the years, the Raspberry Pi Foundation has released a few different versions of the Pi board. The first version was the Raspberry Pi 1 Model B, which was followed by the simple and cheap Model A. In 2014, the Foundation released a significant and improved version of the board, the Raspberry Pi 1 Model B+. In 2015, the Foundation revolutionised the design of the board by releasing a small form factor edition costing US$ 5 (about ₹323), called the Raspberry Pi Zero.
|
||||
|
||||
In February 2016, Raspberry Pi 3 Model B was launched, which is currently the main product available. In 2017, the Foundation released the updated model of Raspberry Pi Zero named Raspberry Pi Zero W (W = wireless).
|
||||
|
||||
In the near future, a model that has improved technical specifications will arrive, offering a robust platform for embedded systems enthusiasts, researchers, hobbyists and engineers to use it in multi-functional ways to develop real-time applications.
|
||||
|
||||
[][3]
|
||||
|
||||
Figure 1: Raspberry Pi
|
||||
|
||||
**Raspberry Pi as an efficient programming device**
|
||||
|
||||
After getting the Pi powered up and the LXDE WM up and running, the user gets a full-fledged Linux box running a Debian based operating system, i.e., Raspbian. The Raspbian operating system comes with tons of free and open source utilities for users, covering programming, gaming, applications and even education.
|
||||
|
||||
The official programming language of Raspberry Pi is Python, which comes preloaded with the Raspbian operating system. The combination of Raspberry Pi and IDLE3, a Python integrated development environment, enables programmers to develop all sorts of Python based programs.
|
||||
|
||||
In addition to Python, various other languages are supported by Raspberry Pi. A number of IDEs (integrated development environments) that are free and open source are also available. These allow programmers, developers and application engineers to develop programs and applications on Pi.
|
||||
|
||||
[][4]
|
||||
|
||||
Figure 2: The BlueJ GUI interface
|
||||
|
||||
[][5]
|
||||
|
||||
Figure 3: The Geany IDE GUI interface
|
||||
|
||||
**Best IDEs for Raspberry Pi**
|
||||
|
||||
As a programmer and developer, the first thing you require is an IDE, which is regarded as a comprehensive software suite that integrates the basic tools that developers and programmers require to write, compile and test their software. An IDE contains a code editor, a compiler or interpreter and a debugger, which the developer can access via a graphical user interface (GUI). One of the main aims of an IDE is to reduce the configuration necessary to piece together multiple development utilities, and provide the same set of capabilities as a cohesive unit.
|
||||
|
||||
An IDE’s user interface is similar to that of a word processor, for which tools in the toolbar support colour-coding, formatting of source code, error diagnostics, reporting and intelligent code completion. IDEs are designed to integrate with third-party version control libraries like GitHub or Apache Subversion. Some IDEs are dedicated to a particular programming language, allowing a feature set that matches the programming language, while some support multiple languages.
|
||||
|
||||
Raspberry Pi has a wide range of IDEs that provide programmers with good interfaces to develop source code, applications and system programs.
|
||||
|
||||
Let’s explore the top IDEs for Raspberry Pi.
|
||||
|
||||
[][6]
|
||||
|
||||
Figure 4: The Adafruit WebIDE GUI interface
|
||||
|
||||
**BlueJ**
|
||||
|
||||
BlueJ is an IDE that is dedicated to the Java Programming Language and was mainly developed for educational purposes. It also supports short software development projects. Michael Kolling and John Rosenburg at Monash University, Australia, started BlueJ development in 2000 as a powerful successor to the Blue system, and BlueJ became free and open source in March 2009.
|
||||
|
||||
BlueJ provides an efficient way to learn object-oriented programming concepts, and its GUI presents an application’s class structure as a UML-style diagram. Every OOP concept – classes, objects, function calls – can be represented via its interaction-based design.
|
||||
|
||||
_**Features**_
|
||||
|
||||
* _Simple and interactive interface:_ The user interface is simple and easy to learn as compared to other professional interfaces like NetBeans or Eclipse. Developers can focus mainly on programming rather than the environment.
|
||||
|
||||
* _Portable:_ BlueJ supports multiple platforms like Windows, Linux and Mac OS X, and can even run without any installation.
|
||||
|
||||
* _New innovations:_ BlueJ IDE is filled with innovations in terms of the object bench, code pad and scope colouring, which makes development fun even for newbies.
|
||||
|
||||
* _Strong technical support:_ BlueJ has a hard-core functioning team that responds to queries and offers solutions to all sorts of developer problems within 24 hours.
|
||||
|
||||
**Latest version:** 4.0.1
|
||||
|
||||
**Geany IDE**
|
||||
|
||||
Geany IDE is regarded as a very lightweight GUI based text editor that uses Scintilla and GTK+ with IDE environment support. The unique thing about Geany is that it is designed to be independent of a special desktop environment and requires only a few dependencies on other packages. It only requires GTK2 runtime libraries for execution. Geany IDE supports tons of programming languages like C, C++, C#, Java, HTML, PHP, Python, Perl, Ruby, Erlang and even LaTeX.
|
||||
|
||||
_**Features**_
|
||||
|
||||
* Auto-completion of code and simple code navigation.
|
||||
|
||||
* Efficient syntax highlighting and code folding.
|
||||
|
||||
* Supports embedded terminal emulator, and is highly extensible and feature-rich since lots of plugins are available for free download.
|
||||
|
||||
* Simple project management and supports multiple file types, which include C, Java, PHP, HTML, Python, Perl, and many more.
|
||||
|
||||
* Highly customised interface for adding or removing options, bars and windows.
|
||||
|
||||
**Latest version:** 1.30.1
|
||||
|
||||
[][7]
|
||||
|
||||
Figure 5: The AlgoIDE GUI interface
|
||||
|
||||
[][8]
|
||||
|
||||
Figure 6: The Ninja IDE GUI interface
|
||||
|
||||
**Adafruit WebIDE**
|
||||
|
||||
Adafruit WebIDE provides a Web based interface for Raspberry Pi users to perform programming functions, and allows developers to compile the source code of various languages like Python, Ruby, JavaScript and many others.
|
||||
|
||||
Adafruit IDE allows developers to put the code in a GIT repository, which can be accessed anywhere via GitHub.
|
||||
|
||||
_**Features**_
|
||||
|
||||
* Can be accessed via Web browser on ports 8080 or 80.
|
||||
|
||||
* Supports the easy compilation and running of source code.
|
||||
|
||||
* Bundled with a debugger and visualiser for proper tracking, the navigation of code and to test source code.
|
||||
|
||||
**AlgoIDE**
|
||||
|
||||
AlgoIDE is a combination of a scripting language and an IDE environment, designed to function together to take programming to the next paradigm. It incorporates a powerful debugger, real-time scope explorer and executes the code, step by step. It is basically designed for all age groups to design programs and do extensive research on algorithms.
|
||||
It supports various types of languages like C, C++, Python, Java, Smalltalk, Objective C, ActionScript, and many more.
|
||||
|
||||
_**Features**_
|
||||
|
||||
* Automatic indentation and completion of source code.
|
||||
|
||||
* Effective syntax highlighting and error management.
|
||||
|
||||
* Contains a debugger, scope explorer and dynamic help system.
|
||||
|
||||
* Supports GUI and traditional Logo programming language Turtle for the development of source code.
|
||||
|
||||
**Latest version:** 2016-12-08 (when it was last updated)
|
||||
|
||||
**Ninja IDE**
|
||||
|
||||
Ninja IDE (Not Just Another IDE), which was designed by Diego Sarmentero, Horacio Duran, Gabriel Acosta, Pedro Mourelle and Jose Rostango, is written purely in Python and supports multiple platforms like Linux, Mac OS X and Windows. It is regarded as a cross-platform IDE, especially designed to build Python based applications.
|
||||
|
||||
Ninja IDE is very lightweight and performs various functions like file handling, code locating, going to lines, tabs, automatic indentation of code and editor zoom. Apart from Python, several other languages are supported by this IDE.
|
||||
|
||||
_**Features**_
|
||||
|
||||
* _An efficient code editor:_ Ninja-IDE is regarded as the most efficient code editor as it performs various functions like code completion and code indentation, and functions as an assistant.
|
||||
|
||||
* _Errors and PEP8 finder:_ It highlights static and PEP8 errors in the file.
|
||||
|
||||
* _Code locator:_ With this feature, quick and direct access to a file can be made. The user can just make use of the ‘CTRL+K’ shortcut to type anything, and the IDE will locate the specific text.
|
||||
|
||||
* Its unique project management features and tons of plugins make Ninja-IDE highly extensible.
|
||||
|
||||
**Latest version:** 2.3
|
||||
|
||||
[][9]
|
||||
|
||||
Figure 7: The Lazarus IDE GUI interface
|
||||
|
||||
**Lazarus IDE**
|
||||
|
||||
Lazarus IDE was developed by Cliff Baeseman, Shane Miller and Michael A. Hess in February 1999. It is regarded as a cross-platform GUI based IDE for rapid application development, and it uses the Free Pascal Compiler. It inherits three primary features—compilation speed, execution speed and cross-compilation. Applications can be cross-compiled from Windows to other operating systems like Linux, Mac OS X, etc.
|
||||
|
||||
This IDE consists of the Lazarus component library, which provides varied facilities to developers in the form of a single and unified interface with different platform-specific implementations. It supports the principle of ‘Write once and compile anywhere’.
|
||||
|
||||
_**Features**_
|
||||
|
||||
* Powerful and fast enough to handle any sort of source code, and supports performance testing.
|
||||
|
||||
* Easy to use GUI, which supports drag-and-drop components. Additional components can be added to the IDE through Lazarus package files.
|
||||
|
||||
* Makes use of Free Pascal, which is highly enhanced with new features and is even used in Android app development.
|
||||
|
||||
* Highly extensible, open source and supports various frameworks to compile additional languages.
|
||||
|
||||
**Latest version:** 1.6.4
|
||||
|
||||
**Codeblock IDE**
|
||||
|
||||
Codeblock IDE was written in C++ using wxWidgets as a GUI toolkit and was released in 2005. It is a free, open source and cross-platform IDE supporting multiple compilers like GCC, Clang and Visual C++.
|
||||
|
||||
Codeblock IDE is highly intelligent and performs various functions like Syntax highlighting, code folding, code completion and indentation, and has a number of external plugins for varied customisations. It can run on Windows, Mac OS X and Linux operating systems.
|
||||
|
||||
_**Features**_
|
||||
|
||||
* Supports multiple compilers like GCC, Visual C++, Borland C++, Watcom, Intel C++ and many more. Basically designed for C++, but today supports many languages.
|
||||
|
||||
* Intelligent debugger, which allows users to debug programs via access to the local function symbol and argument display, user defined watches, call stack, custom memory dump, thread switching and GNU debugger interface.
|
||||
|
||||
* Supports varied features for migrating code from Dev-C++, Visual C++ and others.
|
||||
|
||||
* Makes use of custom-built systems and stores information in XML extension files.
|
||||
|
||||
**Latest version:** 16.01
|
||||
|
||||
[][10]
|
||||
|
||||
Figure 8: Codeblock IDE interface
|
||||
|
||||
[][11]
|
||||
|
||||
Figure 9: Greenfoot IDE interface
|
||||
|
||||
**Greenfoot IDE**
|
||||
|
||||
Greenfoot IDE was designed by Michael Kolling at the University of Kent. It is a cross-platform Java based IDE basically designed for educational purposes for high schools and undergraduate students. The Greenfoot IDE features project management, automatic code completion and syntax highlighting, and has an easy GUI interface.
|
||||
|
||||
Greenfoot IDE programming consists of sub-classing of two main classes — World and Actor. World represents the class where the main execution occurs. Actors are objects that exist and act in World.
|
||||
|
||||
_**Features**_
|
||||
|
||||
* Simple and easy-to-use GUI, which is more interactive than BlueJ and other IDEs.
|
||||
|
||||
* Easy to use even for newbies and beginners.
|
||||
|
||||
* Highly powerful in executing Java code.
|
||||
|
||||
* Supports GNOME/KDE/X11 graphical environments.
|
||||
|
||||
* Other features include project management, auto-completion, syntax highlighting and auto-correction of errors.
|
||||
|
||||
**Latest version:** 3.1.0
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Anand Nayyar
|
||||
|
||||
The author is assistant professor in Department of Computer Applications & IT at KCL Institute of Management and Technology, Jalandhar, Punjab. He loves to work on Open Source technologies, embedded systems, cloud computing, wireless sensor networks and simulators. He can be reached at anand_nayyar@yahoo.co.in.
|
||||
|
||||
--------------------
|
||||
|
||||
via: http://opensourceforu.com/2017/06/top-ides-raspberry-pi/
|
||||
|
||||
作者:[Anand Nayyar ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://opensourceforu.com/author/anand-nayyar/
|
||||
[1]:http://opensourceforu.com/2017/06/top-ides-raspberry-pi/#disqus_thread
|
||||
[2]:http://opensourceforu.com/author/anand-nayyar/
|
||||
[3]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-1-8.jpg
|
||||
[4]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-2-6.jpg
|
||||
[5]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-3-3.jpg
|
||||
[6]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-4-3.jpg
|
||||
[7]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-5-2.jpg
|
||||
[8]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-6-1.jpg
|
||||
[9]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-7.jpg
|
||||
[10]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-8.jpg
|
||||
[11]:http://opensourceforu.com/wp-content/uploads/2017/05/Figure-9.jpg
|
@ -1,4 +1,3 @@
|
||||
translated by zhousiyu325
|
||||
What all you need to know about HTML5
|
||||
============================================================
|
||||
|
||||
|
@ -1,111 +0,0 @@
|
||||
翻译中 by zky001
|
||||
# [Open source social robot kit runs on Raspberry Pi and Arduino][22]
|
||||
|
||||
|
||||
|
||||

|
||||
Thecorpora’s Scratch-ready “Q.bo One” robot is based on the RPi 3 and Arduino, and offers stereo cams, mics, a speaker, and visual and language recognition.
|
||||
|
||||
In 2010, robotics developer Francisco Paz and his Barcelona-based Thecorpora startup introduced the first [Qbo][6] “Cue-be-oh” robot as an open source proof-of-concept and research project for exploring AI capabilities in multi-sensory, interactive robots. Now, after a preview in February at Mobile World Congress, Thecorpora has gone to Indiegogo to launch the first mass produced version of the social robot in partnership with Arrow.
|
||||
|
||||
|
||||
[][7] [][8]
|
||||
**Q.bo One from angle (left) and top**
|
||||
|
||||
|
||||
Like the original, the new Q.bo One has a spherical head with eyes (dual stereoscopic cameras), ears (3x mics), and mouth (speakers), and is controlled by WiFi and Bluetooth. The Q.bo One also similarly features open source Linux software and open spec hardware. Instead of using an Intel Atom-based Mini-ITX board, however, it runs Raspbian on a Raspberry Pi 3 linked to an Arduino compatible mainboard.
|
||||
|
||||
|
||||
[][9]
|
||||
**Q.bo One side views**
|
||||
|
||||
|
||||
The Q.bo One is available on Indiegogo through mid-July starting at $369 (early bird) or $399 in kit form including its built-in Raspberry Pi 3 and Arduino-based “Qboard” controller board. It also sells for $499 fully assembled. The Indiegogo campaign is currently about 15 percent toward its flexible $100,000 goal, and shipments are due in December.
|
||||
|
||||
More proficient roboticists and embedded developers may instead want the $99 package with just the RPi and Qboard PCBs and software, or the $249 version, which gives you the robot kit without the boards. With this package, you could replace the Qboard with your own Arduino controller, and swap out the RPi 3 for another Linux SBC. Thecorpora lists the Banana Pi, BeagleBone, Tinker Board, and [soon to be retired Intel Edison][10], as examples of compatible alternatives.
|
||||
|
||||
<center>
|
||||
[][11]
|
||||
**Q.bo One kit**
|
||||
(click image to enlarge)
|
||||
</center>
|
||||
|
||||
Unlike the 2010 Qbo, the Q.bo One is not mobile aside from its spherical head, which swivels in its base with the help of dual servos in order to track voices and motion. The Robotis Dynamixel servos, which are also found in the open source, Raspberry Pi based [TurtleBot 3][23] robot kit, can move up and down in addition to left and right.
|
||||
|
||||
<center>
|
||||
[][12] [][13]
|
||||
**Q.bo One detail view (left) and Qboard detail**
|
||||
(click images to enlarge)
|
||||
</center>
|
||||
|
||||
The Q.bo One can also be compared with the similarly stationary, Linux-based [Jibo][24] “social robot,” which launched on Indiegogo in 2014 to the tune of $3.6 million. The Jibo has yet to ship, however, with the [latest delays][25] pushing it toward a release sometime this year.
|
||||
|
||||
|
|
||||

|
||||
|
||||
**Q.bo One**
|
||||
|
||||
We’ll go out on a limb and predict the Q.bo One will ship closer to its Dec. 2017 target. The core technology and AI software has been proven, and so are the Raspberry Pi and Arduino technologies. The Qboard mainboard has already been built and certified for manufacturing by Arrow.
|
||||
|
||||
The open source design suggests that even a mobile version wouldn’t be out of the question. That would make it more like the rolling, humanoid [Pepper][14], a similarly AI-infused conversational robot from SoftBank and Aldebaran.
|
||||
|
||||
The Q.bo One has added a few tricks since the original, such as a “mouth” formed by 20 LEDs that light up in different, programmable patterns to mimic lips moving during speech. There are also three touch sensors around the head if you want to tap the bot to get its attention. But all you really need to do is speak, and the Q.bo One will swivel and gaze adoringly at you like a cocker spaniel.
|
||||
|
||||
Interfaces include everything you have on the Raspberry Pi 3, which just demolished the competition in our [2017 hacker board survey][15]. An antenna mount is provided for the RPi 3’s WiFi and Bluetooth radios.
|
||||
|
||||
<center>
|
||||
[][16] [][17]
|
||||
**Q.bo One software architecture (left) and Q.bo One with Scratch screen**
|
||||
(click images to enlarge)
|
||||
</center>
|
||||
|
||||
The Qboard (also referred to as the Q.board) runs Arduino code on an Atmel ATSAMD21 MCU, and houses the three microphones, speaker, touch sensors, Dynamixel controller, and the LED matrix for the mouth. Other features include GPIO, an I2C interface, and a micro-USB port that can connect to a desktop computer.
|
||||
|
||||
The Q.bo One can recognize faces and track movements, and the bot can even recognize itself in a mirror. With the help of a cloud connection, the robot can recognize and converse with other Q.bo One bots. The robot can respond to questions with the help of natural language processing, and read aloud with text-to-speech.
|
||||
|
||||
Scratch programming is available, enabling the robot’s main function, which is to teach kids about robots and programming. The robot is also designed for educators and makers, and can act as a companion to the elderly.
|
||||
|
||||
The Raspbian based software uses OpenCV for vision processing, and can be programmed with a wide variety of languages including C++. The software also offers hooks to IBM Bluemix, NodeRED, and ROS. Presumably, you could also integrate an [Alexa][18] or [Google Assistant][19] voice agent, although Thecorpora makes no mention of this.
|
||||
|
||||
|
||||
|
||||
**Further information**
|
||||
|
||||
The Q.bo One is available on Indiegogo through mid-July starting at $369 for the full kit and $499 fully assembled. Shipments are expected in Dec. 2017. More information may be found on the [Q.bo One Indiegogo page][20] and [Thecorpora website][21].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
|
||||
作者:[ Eric Brown][a]
|
||||
译者:[zky001](https://github.com/zky001)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[1]:http://twitter.com/share?url=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/&text=Open+source+social+robot+kit+runs+on+Raspberry+Pi+and+Arduino+
|
||||
[2]:https://plus.google.com/share?url=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[3]:http://www.facebook.com/sharer.php?u=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[4]:http://www.linkedin.com/shareArticle?mini=true&url=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[5]:http://reddit.com/submit?url=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/&title=Open%20source%20social%20robot%20kit%20runs%20on%20Raspberry%20Pi%20and%20Arduino
|
||||
[6]:http://linuxdevices.linuxgizmos.com/open-source-robot-is-all-eyes/
|
||||
[7]:http://linuxgizmos.com/files/thecorpora_qboone.jpg
|
||||
[8]:http://linuxgizmos.com/files/thecorpora_qboone2.jpg
|
||||
[9]:http://linuxgizmos.com/files/thecorpora_qboone_side.jpg
|
||||
[10]:http://linuxgizmos.com/intel-pulls-the-plug-on-its-joule-edison-and-galileo-boards/
|
||||
[11]:http://linuxgizmos.com/files/thecorpora_qboone_kit.jpg
|
||||
[12]:http://linuxgizmos.com/files/thecorpora_qboone_detail.jpg
|
||||
[13]:http://linuxgizmos.com/files/thecorpora_qboone_qboard.jpg
|
||||
[14]:http://linuxgizmos.com/worlds-first-emotional-robot-runs-linux/
|
||||
[15]:http://linuxgizmos.com/2017-hacker-board-survey-raspberry-pi-still-rules-but-x86-sbcs-make-gains/
|
||||
[16]:http://linuxgizmos.com/files/thecorpora_qboone_arch.jpg
|
||||
[17]:http://linuxgizmos.com/files/thecorpora_qboone_scratch.jpg
|
||||
[18]:http://linuxgizmos.com/how-to-add-alexa-to-your-raspberry-pi-3-gizmo/
|
||||
[19]:http://linuxgizmos.com/free-raspberry-pi-voice-kit-taps-google-assistant-sdk/
|
||||
[20]:https://www.indiegogo.com/projects/q-bo-one-an-open-source-robot-for-everyone#/
|
||||
[21]:http://thecorpora.com/
|
||||
[22]:http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[23]:http://linuxgizmos.com/ubuntu-driven-turtlebot-gets-a-major-rev-with-a-pi-or-joule-in-the-drivers-seat/
|
||||
[24]:http://linuxgizmos.com/cheery-social-robot-owes-it-all-to-its-inner-linux/
|
||||
[25]:https://www.slashgear.com/jibo-delayed-to-2017-as-social-robot-hits-more-hurdles-20464725/
|
@ -1,5 +1,4 @@
|
||||
translating by cycoe
|
||||
|
||||
Translating by TimeBear
|
||||
18 open source translation tools to localize your project
|
||||
============================================================
|
||||
|
||||
|
@ -1,4 +1,3 @@
|
||||
translating by chenxinlong
|
||||
An introduction to functional programming in JavaScript
|
||||
============================================================
|
||||
|
||||
|
@ -1,71 +0,0 @@
|
||||
Translating by Zhipeng-li
|
||||
Kubernetes: Why does it matter?
|
||||
============================================================
|
||||
|
||||
### The Kubernetes platform for running containerized workloads takes on some of the heavy lifting when developing and deploying cloud-native applications.
|
||||
|
||||
|
||||

|
||||
>Image by : opensource.com
|
||||
|
||||
Developing and deploying cloud-native applications has become very popular—for very good reasons. There are clear advantages to a process that allows rapid deployment and continuous delivery of bug fixes and new features, but there's a chicken-and-egg problem no one talks about: How do you get there from here? Building the infrastructure and developing processes to develop and maintain cloud-native applications—all from scratch—are non-trivial, time-intensive tasks.
|
||||
|
||||
[Kubernetes][3], a relatively new platform for running containerized workloads, addresses these problems. Originally an internal project within Google, Kubernetes was donated to the [Cloud Native Computing Foundation][4] in 2015 and has attracted developers from the open source community around the world. Kubernetes' design is based on 15 years of experience in running both production and development workloads. Since it is open source, anyone can download and use it and realize its benefits.
|
||||
|
||||
So why is such a big fuss being made over Kubernetes? I believe that it hits a sweet spot between an Infrastructure as a Service (IaaS) solution, like OpenStack, and a full Platform as a Service (PaaS) resource where the lower-level runtime implementation is completely controlled by a vendor. Kubernetes provides the benefits of both worlds: abstractions to manage infrastructure, as well as tools and features to drill down to bare metal for troubleshooting.
|
||||
|
||||
### IaaS vs. PaaS
|
||||
|
||||
OpenStack is classified by most people as an IaaS solution, where pools of physical resources, such as processors, networking, and storage, are allocated and shared among different users. Isolation between users is implemented using traditional, hardware-based virtualization.
|
||||
|
||||
OpenStack's REST API allows infrastructure to be created automatically using code, but therein lies the problem. The output of the IaaS product is yet more infrastructure. There's not much in the way of services to support and manage the extra infrastructure once it has been created. After a certain point, it becomes a lot of work to manage the low-level infrastructure, such as servers and IP addresses, produced by OpenStack. One well-known outcome is virtual machine (VM) sprawl, but the same concept applies to networks, cryptographic keys, and storage volumes. This leaves less time for developers to work on building and maintaining an application.
|
||||
|
||||
Like other cluster-based solutions, Kubernetes operates at the individual server level to implement horizontal scaling. New servers can be added easily and workloads scheduled on the hardware immediately. Similarly, servers can be removed from the cluster when they're not being utilized effectively or when maintenance is needed. Orchestration activities, such as job scheduling, health monitoring, and maintaining high availability, are other tasks automatically handled by Kubernetes.
|
||||
|
||||
Networking is another area that can be difficult to reliably orchestrate in an IaaS environment. Communication of IP addresses between services to link microservices can be particularly tricky. Kubernetes implements IP address management, load balancing, service discovery, and DNS name registration to provide a headache-free, transparent networking environment within the cluster.
|
||||
|
||||
### Designed for deployment
|
||||
|
||||
Once you have created the environment to run your application, there is the small matter of deploying it. Reliably deploying an application is one of those tasks that's easily said, but not easily done—not in the slightest. The huge advantage that Kubernetes has over other environments is that deployment is a first-class citizen.
|
||||
|
||||
There is a single command, using the Kubernetes command-line interface (CLI), that takes a description of the application and installs it on the cluster. Kubernetes supports the entire application lifecycle, from initial deployment through rolling out new releases, as well as rolling them back—a critical feature when things go wrong. In-progress deployments can also be paused and resumed. The advantage of having existing, built-in tools and support for application deployment, rather than building a deployment system yourself, cannot be overstated. Kubernetes users do not have to reinvent the application deployment wheel nor discover what a difficult task it is.
|
||||
|
||||
Kubernetes also has the facility to monitor the status of an in-progress deployment. While you can write this in an IaaS environment, like the deployment process itself, it's a surprisingly difficult task where corner cases abound.
|
||||
|
||||
### Designed for DevOps
|
||||
|
||||
As you gain more experience in developing and deploying applications for Kubernetes, you will be traveling the same path that Google and others have before you. You'll discover there are several Kubernetes features that are essential to effectively developing and troubleshooting a multi-service application.
|
||||
|
||||
First, the ability to easily examine the logs of a running service, or to SSH (secure shell) into it, is vitally important. With a single command line invocation, an administrator can examine the logs of a service running under Kubernetes. This may sound like a simple task, but in an IaaS environment it's not easy unless you have already put some work into it. Large applications often have hardware and personnel dedicated just to log collection and analysis. Logging in Kubernetes may not replace a full-featured logging and metrics solution, but it provides enough to enable basic troubleshooting.
|
||||
|
||||
Second, Kubernetes offers built-in secret management. Another hitch known by teams who have developed their own deployment systems from scratch is that deploying sensitive data, such as passwords and API tokens, securely to VMs is hard. By making secrets first-class citizens, Kubernetes stops your team from inventing its own insecure, buggy secret-distribution system or just hardcoding credentials in deployment scripts.
|
||||
|
||||
Finally, there is a slew of features in Kubernetes for automatically scaling, load-balancing, and restarting your application. Again, these features are tempting targets for developers to write when using IaaS or bare metal. Scaling and health checks for your Kubernetes application are declared in the service definition, and Kubernetes ensures that the correct number of instances is running and healthy.
|
||||
|
||||
### Conclusion
|
||||
|
||||
The differences between IaaS and PaaS systems are enormous; chief among them is that a PaaS can save a vast amount of development and debugging time. As a PaaS, Kubernetes implements a potent and effective set of features to help you develop, deploy, and debug cloud-native applications. Its architecture and design represent decades of hard-won experience that your team can take advantage of—for free.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Tim Potter - Tim is a senior software engineer working at Hewlett Packard Enterprise. He has been a contributor to free and open source software for nearly two decades working on a variety of projects including Samba, Wireshark, OpenPegasus, and Docker. Tim blogs at https://elegantinfrastructure.com/ about Docker, Kubernetes and other infrastructure-related topics.
|
||||
|
||||
-----
|
||||
|
||||
|
||||
via: https://opensource.com/article/17/6/introducing-kubernetes
|
||||
|
||||
作者:[ Tim Potter][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tpot
|
||||
[1]:https://opensource.com/article/17/6/introducing-kubernetes?rate=RPoUoHXYQXbTb7DHQCDsHgR1ZcfLSoquZ8xVZzfMtxM
|
||||
[2]:https://opensource.com/user/63281/feed
|
||||
[3]:https://kubernetes.io/
|
||||
[4]:https://www.cncf.io/
|
||||
[5]:https://opensource.com/users/tpot
|
|
||||
MonkeyDEcho translated
|
||||
|
||||
[MySQL infrastructure testing automation at GitHub][31]
|
||||
============================================================
|
||||
|
||||
Our MySQL infrastructure is a critical component to GitHub. MySQL serves GitHub.com, GitHub’s API, authentication and more. Every `git` request touches MySQL in some way. We are tasked with keeping the data available, and maintaining its integrity. Even while our MySQL clusters serve traffic, we need to be able to perform tasks such as heavy duty cleanups, ad-hoc updates, online schema migrations, cluster topology refactoring, pooling and load balancing and more. We have the infrastructure to automate away such operations; in this post we share a few examples of how we build trust in our infrastructure through continuous testing. It is essentially how we sleep well at night.
|
||||
|
||||
### Backups[][36]
|
||||
|
||||
It is incredibly important to take backups of your data. If you are not taking backups of your database, it is likely only a matter of time before this becomes an issue. Percona [Xtrabackup][37] is the tool we have been using for issuing full backups for our MySQL databases. If there is data that we need to be certain is saved, we have a server that is backing up the data.
|
||||
|
||||
In addition to the full binary backups, we run logical backups several times a day. These backups allow our engineers to get a copy of recent data. There are times that they would like a complete set of data from a table so they can test an index change on a production sized table or see data from a certain point of time. Hubot allows us to restore a backed up table and will ping us when the table is ready to use.
|
||||
|
||||

|
||||
**tomkrouper**.mysql backup-list locations
|
||||

|
||||
**Hubot**
|
||||
```
|
||||
+-----------+------------+---------------+---------------------+---------------------+----------------------------------------------+
|
||||
| Backup ID | Table Name | Donor Host | Backup Start | Backup End | File Name |
|
||||
+-----------+------------+---------------+---------------------+---------------------+----------------------------------------------+
|
||||
| 1699494 | locations | db-mysql-0903 | 2017-07-01 22:09:17 | 2017-07-01 22:09:17 | backup-mycluster-locations-1498593122.sql.gz |
|
||||
| 1699133 | locations | db-mysql-0903 | 2017-07-01 16:11:37 | 2017-07-01 16:11:39 | backup-mycluster-locations-1498571521.sql.gz |
|
||||
| 1698772 | locations | db-mysql-0903 | 2017-07-01 10:09:21 | 2017-07-01 10:09:22 | backup-mycluster-locations-1498549921.sql.gz |
|
||||
| 1698411 | locations | db-mysql-0903 | 2017-07-01 04:12:32 | 2017-07-01 04:12:32 | backup-mycluster-locations-1498528321.sql.gz |
|
||||
| 1698050 | locations | db-mysql-0903 | 2017-06-30 22:18:23 | 2017-06-30 22:18:23 | backup-mycluster-locations-1498506721.sql.gz |
|
||||
| ...
|
||||
| 1262253 | locations | db-mysql-0088 | 2016-08-01 01:58:51 | 2016-08-01 01:58:54 | backup-mycluster-locations-1470034801.sql.gz |
|
||||
| 1064984 | locations | db-mysql-0088 | 2016-04-04 13:07:40 | 2016-04-04 13:07:43 | backup-mycluster-locations-1459494001.sql.gz |
|
||||
+-----------+------------+---------------+---------------------+---------------------+----------------------------------------------+
|
||||
|
||||
```
|
||||
|
||||

|
||||
**tomkrouper**.mysql restore 1699133
|
||||

|
||||
**Hubot**A restore job has been created for the backup job 1699133. You will be notified in #database-ops when the restore is complete.
|
||||

|
||||
**Hubot**[@tomkrouper][1]: the locations table has been restored as locations_2017_07_01_16_11 in the restores database on db-mysql-0482
|
||||
|
||||
The data is loaded onto a non-production database which is accessible to the engineer requesting the restore.
|
||||
|
||||
The last way we keep a “backup” of data around is by using [delayed replicas][38]. This is less of a backup and more of a safeguard. For each production cluster we have a host that has replication delayed by 4 hours. If a query is run that shouldn’t have been, we can run `mysql panic` in chatops. This will cause all of our delayed replicas to stop replication immediately. This will also page the on-call DBA. From there we can use the delayed replica to verify there is an issue, and then fast forward the binary logs to the point right before the error. We can then restore this data to the master, thus recovering data to that point.
|
||||
|
||||
Backups are great; however, they are worthless if some unknown or uncaught error corrupts the backup. A benefit of having a script to restore backups is that it allows us to automate the verification of backups via cron. We have set up a dedicated host for each cluster that runs a restore of the latest backup. This ensures that the backup ran correctly and that we are able to retrieve the data from the backup.
|
||||
|
||||
Depending on dataset size, we run several restores per day. Restored servers are expected to join the replication stream and to be able to catch up with replication. This tests not only that we took a restorable backup, but also that we correctly identified the point in time at which it was taken and can further apply changes from that point in time. We are alerted if anything goes wrong in the restore process.
|
||||
|
||||
We furthermore track the time the restore takes, so we have a good idea of how long it will take to build a new replica or restore in cases of emergency.
|
||||
|
||||
The following is an output from an automated restore process, written by Hubot in our robots chat room.
|
||||
|
||||

|
||||
**Hubot**gh-mysql-backup-restore: db-mysql-0752: restore_log.id = 4447
|
||||
gh-mysql-backup-restore: db-mysql-0752: Determining backup to restore for cluster 'prodcluster'.
|
||||
gh-mysql-backup-restore: db-mysql-0752: Enabling maintenance mode
|
||||
gh-mysql-backup-restore: db-mysql-0752: Setting orchestrator downtime
|
||||
gh-mysql-backup-restore: db-mysql-0752: Disabling Puppet
|
||||
gh-mysql-backup-restore: db-mysql-0752: Stopping MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: Removing MySQL files
|
||||
gh-mysql-backup-restore: db-mysql-0752: Running gh-xtrabackup-restore
|
||||
gh-mysql-backup-restore: db-mysql-0752: Restore file: xtrabackup-notify-2017-07-02_0000.xbstream
|
||||
gh-mysql-backup-restore: db-mysql-0752: Running gh-xtrabackup-prepare
|
||||
gh-mysql-backup-restore: db-mysql-0752: Starting MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: Update file ownership
|
||||
gh-mysql-backup-restore: db-mysql-0752: Upgrade MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: Stopping MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: Starting MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: Backup Host: db-mysql-0034
|
||||
gh-mysql-backup-restore: db-mysql-0752: Setting up replication
|
||||
gh-mysql-backup-restore: db-mysql-0752: Starting replication
|
||||
gh-mysql-backup-restore: db-mysql-0752: Replication catch-up
|
||||
gh-mysql-backup-restore: db-mysql-0752: Restore complete (replication running)
|
||||
gh-mysql-backup-restore: db-mysql-0752: Enabling Puppet
|
||||
gh-mysql-backup-restore: db-mysql-0752: Disabling maintenance mode
|
||||
gh-mysql-backup-restore: db-mysql-0752: Setting orchestrator downtime
|
||||
gh-mysql-backup-restore: db-mysql-0752: Restore process complete.
|
||||
|
||||
One thing we use backups for is adding a new replica to an existing set of MySQL servers. We will initiate the build of a new server, and once we are notified it is ready, we can start a restore of the latest backup for that particular cluster. We have a script in place that runs all of the restore commands that we would otherwise have to do by hand. Our automated restore system essentially uses the same script. This simplifies the system build process and allows us to have a host up and running with a handful of chat commands as opposed to dozens of manual steps. Shown below is a restore kicked off manually in chat:
|
||||
|
||||

|
||||
**jessbreckenridge**.mysql backup-restore -H db-mysql-0007 -o -r magic_word=daily_rotating_word
|
||||

|
||||
**Hubot**[@jessbreckenridge][2] gh-mysql-backup-restore: db-mysql-0007: Determining backup to restore for cluster 'mycluster'.
|
||||
[@jessbreckenridge][3] gh-mysql-backup-restore: db-mysql-0007: restore_log.id = 4449
|
||||
[@jessbreckenridge][4] gh-mysql-backup-restore: db-mysql-0007: Enabling maintenance mode
|
||||
[@jessbreckenridge][5] gh-mysql-backup-restore: db-mysql-0007: Setting orchestrator downtime
|
||||
[@jessbreckenridge][6] gh-mysql-backup-restore: db-mysql-0007: Disabling Puppet
|
||||
[@jessbreckenridge][7] gh-mysql-backup-restore: db-mysql-0007: Stopping MySQL
|
||||
[@jessbreckenridge][8] gh-mysql-backup-restore: db-mysql-0007: Removing MySQL files
|
||||
[@jessbreckenridge][9] gh-mysql-backup-restore: db-mysql-0007: Running gh-xtrabackup-restore
|
||||
[@jessbreckenridge][10] gh-mysql-backup-restore: db-mysql-0007: Restore file: xtrabackup-mycluster-2017-07-02_0015.xbstream
|
||||
[@jessbreckenridge][11] gh-mysql-backup-restore: db-mysql-0007: Running gh-xtrabackup-prepare
|
||||
[@jessbreckenridge][12] gh-mysql-backup-restore: db-mysql-0007: Update file ownership
|
||||
[@jessbreckenridge][13] gh-mysql-backup-restore: db-mysql-0007: Starting MySQL
|
||||
[@jessbreckenridge][14] gh-mysql-backup-restore: db-mysql-0007: Upgrade MySQL
|
||||
[@jessbreckenridge][15] gh-mysql-backup-restore: db-mysql-0007: Stopping MySQL
|
||||
[@jessbreckenridge][16] gh-mysql-backup-restore: db-mysql-0007: Starting MySQL
|
||||
[@jessbreckenridge][17] gh-mysql-backup-restore: db-mysql-0007: Setting up replication
|
||||
[@jessbreckenridge][18] gh-mysql-backup-restore: db-mysql-0007: Starting replication
|
||||
[@jessbreckenridge][19] gh-mysql-backup-restore: db-mysql-0007: Backup Host: db-mysql-0201
|
||||
[@jessbreckenridge][20] gh-mysql-backup-restore: db-mysql-0007: Replication catch-up
|
||||
[@jessbreckenridge][21] gh-mysql-backup-restore: db-mysql-0007: Replication behind by 4589 seconds, waiting 1800 seconds before next check.
|
||||
[@jessbreckenridge][22] gh-mysql-backup-restore: db-mysql-0007: Restore complete (replication running)
|
||||
[@jessbreckenridge][23] gh-mysql-backup-restore: db-mysql-0007: Enabling puppet
|
||||
[@jessbreckenridge][24] gh-mysql-backup-restore: db-mysql-0007: Disabling maintenance mode
|
||||
|
||||
### Failovers[][39]
|
||||
|
||||
[We use orchestrator][40] to perform automated failovers for masters and intermediate masters. We expect `orchestrator` to correctly detect master failure, designate a replica for promotion, heal the topology under said designated replica, and make the promotion. We expect VIPs to change, pools to change, clients to reconnect, `puppet` to run essential components on the promoted master, and more. A failover is a complex task that touches many aspects of our infrastructure.
|
||||
|
||||
To build trust in our failovers we set up a _production-like_ , test cluster, and we continuously crash it to observe failovers.
|
||||
|
||||
The _production-like_ cluster is a replication setup that is identical in all aspects to our production clusters: types of hardware, operating systems, MySQL versions, network environments, VIP, `puppet` configurations, [haproxy setup][41], etc. The only difference is that this cluster doesn’t send or receive production traffic.
|
||||
|
||||
We emulate a write load on the test cluster, while avoiding replication lag. The write load is not too heavy, but it includes queries that intentionally contend to write on the same datasets. This isn’t too interesting in normal times, but proves to be useful upon failovers, as we will shortly describe.
|
||||
|
||||
Our test cluster has representative servers from three data centers. We would _like_ the failover to promote a replacement replica from within the same data center. We would _like_ to be able to salvage as many replicas as possible under such constraint. We _require_ that both apply whenever possible. `orchestrator` has no prior assumption on the topology; it must react on whatever the state was at time of the crash.
|
||||
|
||||
We, however, are interested in creating complex and varying scenarios for failovers. Our failover testing script prepares the grounds for the failover:
|
||||
|
||||
* It identifies existing master
|
||||
|
||||
* It refactors the topology to have representatives of all three data centers under the master. Different DCs have different network latencies and are expected to react in different timing to master’s crash.
|
||||
|
||||
* It chooses a crash method. We choose from shooting the master (`kill -9`) or network partitioning it: `iptables -j REJECT` (nice-ish) or `iptables -j DROP` (unresponsive).
|
||||
|
||||
The script proceeds to crash the master by chosen method, and waits for `orchestrator` to reliably detect the crash and to perform failover. While we expect detection and promotion to both complete within `30` seconds, the script relaxes this expectation a bit, and sleeps for a designated time before looking into failover results. It will then:
|
||||
|
||||
* Check that a new (different) master is in place
|
||||
|
||||
* There is a good number of replicas in the cluster
|
||||
|
||||
* The master is writable
|
||||
|
||||
* Writes to the master are visible on the replicas
|
||||
|
||||
* Internal service discovery entries are updated (identity of new master is as expected; old master removed)
|
||||
|
||||
* Other internal checks
|
||||
|
||||
These tests confirm that the failover was successful, not only MySQL-wise but also on our larger infrastructure scope. A VIP has been assumed; specific services have been started; information got to where it was supposed to go.
|
||||
|
||||
The script further proceeds to restore the failed server:
|
||||
|
||||
* Restoring it from backup, thereby implicitly testing our backup/restore procedure
|
||||
|
||||
* Verifying server configuration is as expected (the server no longer believes it’s the master)
|
||||
|
||||
* Returning it to the replication cluster, expecting to find data written on the master
|
||||
|
||||
Consider the following visualization of a scheduled failover test: from having a well-running cluster, to seeing problems on some replicas, to diagnosing the master (`7136`) is dead, to choosing a server to promote (`a79d`), refactoring the topology below that server, to promoting it (failover successful), to restoring the dead master and placing it back into the cluster.
|
||||
|
||||

|
||||
|
||||
#### What would a test failure look like?
|
||||
|
||||
Our testing script uses a stop-the-world approach. A single failure in any of the failover components fails the entire test, disabling any future automated tests until a human resolves the matter. We get alerted and proceed to check the status and logs.
|
||||
|
||||
The script would fail on an unacceptable detection or failover time; on backup/restore issues; on losing too many servers; on unexpected configuration following the failover; etc.
|
||||
|
||||
We need to be certain `orchestrator` connects the servers correctly. This is where the contending write load comes in useful: if the topology is set up incorrectly, replication breaks easily, and we would get `DUPLICATE KEY` or other errors suggesting something went wrong.
|
||||
|
||||
This is particularly important as we make improvements and introduce new behavior to `orchestrator`, and allows us to test such changes in a safe environment.
|
||||
|
||||
#### Coming up: chaos testing
|
||||
|
||||
The testing procedure illustrated above will catch (and has caught) problems on many parts of our infrastructure. Is it enough?
|
||||
|
||||
In a production environment there’s always something else. Something about the particular test method that won’t apply to our production clusters. They don’t share the same traffic and traffic manipulation, nor the exact same set of servers. The types of failure can vary.
|
||||
|
||||
We are designing chaos testing for our production clusters. Chaos testing would literally destroy pieces of our production, but on an expected schedule and in a sufficiently controlled manner. Chaos testing introduces a higher level of trust in the recovery mechanism and affects (and thus tests) larger parts of our infrastructure and application.
|
||||
|
||||
This is delicate work: while we acknowledge the need for chaos testing, we also wish to avoid unnecessary impact to our service. Different tests will differ in risk level and impact, and we will work to ensure availability of our service.
|
||||
|
||||
### Schema migrations[][42]
|
||||
|
||||
[We use gh-ost][43] to run live schema migrations. `gh-ost` is stable, but also under active development, with major new features being added or planned.
|
||||
|
||||
`gh-ost` migrates tables by copying data onto a _ghost_ table, applying ongoing changes intercepted by the binary logs onto the _ghost_ table, even as the original table is being written to. It then swaps the _ghost_ table in place of the original table. At migration completion GitHub proceeds to work with a table generated and populated by `gh-ost`.
|
||||
|
||||
At this time almost all of GitHub’s MySQL data has been recreated by `gh-ost`, and most of it multiple times. We must have high trust in `gh-ost` to let it tamper with our data over and over again, even in face of active development. Here’s how we gain this trust.
|
||||
|
||||
`gh-ost` provides a testing-in-production capability. It supports running a migration on a replica, in much the same way as it would run on the master: `gh-ost` would connect to the replica and treat it as if it were the master. It would parse its binary logs the same way it would for a real master migration. However it would copy rows and apply binlog events to the replica, and avoid making writes onto the master.
|
||||
|
||||
We run `gh-ost`-dedicated replicas in production. These replicas do not serve production traffic. Each such replica retrieves the current list of production tables and iterates over them in random order. One by one it picks a table and performs a replica-migration on that table. The migration doesn’t actually modify the table structure; instead it runs a trivial `ENGINE=InnoDB` alter, which rebuilds the table without changing it. The test runs the migration even as the table is being used in production, thus copying real production data and applying true production traffic off the binary logs.
|
||||
|
||||
These migrations can be audited. Here’s how we can inspect status of running tests from chat:
|
||||
|
||||

|
||||
**ggunson**.migration test-status
|
||||

|
||||
**Hubot**
|
||||
```
|
||||
# Migrating `prod`.`pull_requests`; Ghost table is `prod`.`_pull_requests_gho`
|
||||
# Migrating ghost-db-mysql-0007:3306; inspecting ghost-db-mysql-0007:3306; executing on ghost-db-mysql-0007
|
||||
# Migration started at Mon Jan 30 02:13:39 -0800 2017
|
||||
# chunk-size: 2500; max-lag-millis: 1500ms; max-load: Threads_running=30; critical-load: Threads_running=1000; nice-ratio: 0.000000
|
||||
# throttle-additional-flag-file: /tmp/gh-ost.throttle
|
||||
# panic-flag-file: /tmp/ghost-test-panic.flag
|
||||
# Serving on unix socket: /tmp/gh-ost.test.sock
|
||||
Copy: 57992500/86684838 66.9%; Applied: 57708; Backlog: 1/100; Time: 3h28m38s(total), 3h28m36s(copy); streamer: mysql-bin.000576:142993938; State: migrating; ETA: 1h43m12s
|
||||
```
|
||||
|
||||
When a test migration completes copying of table data it stops replication and performs the cut-over, replacing the original table with the _ghost_ table, and then swaps back. We’re not interested in actually replacing the data. Instead we are left with both the original table and the _ghost_ table, which should both be identical. We verify that by checksumming the entire table data for both tables.
|
||||
|
||||
A test can complete with:
|
||||
|
||||
* _success_ : All went well and checksum is identical. We expect to see this.
|
||||
|
||||
* _failure_ : Execution problem. This can occasionally happen due to the migration process being killed, a replication issue etc., and is typically unrelated to `gh-ost` itself.
|
||||
|
||||
* _checksum failure_ : table data inconsistency. For a tested branch, this calls for fixes. For an ongoing `master` branch test, this would imply immediate blocking of production migrations. We don’t get the latter.
|
||||
|
||||
Test results are audited, sent to robot chatrooms, sent as events to our metrics systems. Each vertical line in the following graph represents a successful migration test:
|
||||
|
||||

|
||||
|
||||
These tests run continuously. We are notified by alerts in case of failures. And of course we can always visit the robots chatroom to know what’s going on.
|
||||
|
||||
#### Testing new versions
|
||||
|
||||
We continuously improve `gh-ost`. Our development flow is based on `git` branches, which we then offer to merge via [pull requests][44].
|
||||
|
||||
A submitted `gh-ost` pull request goes through Continuous Integration (CI) which runs basic compilation and unit tests. Once past this, the PR is technically eligible for merging, but even more interestingly it is [eligible for deployment via Heaven][45]. Being the sensitive component in our infrastructure that it is, we take care to deploy `gh-ost` branches for intensive testing before merging into `master`.
|
||||
|
||||

|
||||
**shlomi-noach**.deploy gh-ost/fix-reappearing-throttled-reasons to prod/ghost-db-mysql-0007
|
||||

|
||||
**Hubot**[@shlomi-noach][25] is deploying gh-ost/fix-reappearing-throttled-reasons (baee4f6) to production (ghost-db-mysql-0007).
|
||||
[@shlomi-noach][26]'s production deployment of gh-ost/fix-reappearing-throttled-reasons (baee4f6) is done! (2s)
|
||||
[@shlomi-noach][27], make sure you watch for exceptions in haystack
|
||||

|
||||
**jonahberquist**.deploy gh-ost/interactive-command-question to prod/ghost-db-mysql-0012
|
||||

|
||||
**Hubot**[@jonahberquist][28] is deploying gh-ost/interactive-command-question (be1ab17) to production (ghost-db-mysql-0012).
|
||||
[@jonahberquist][29]'s production deployment of gh-ost/interactive-command-question (be1ab17) is done! (2s)
|
||||
[@jonahberquist][30], make sure you watch for exceptions in haystack
|
||||

|
||||
**shlomi-noach**.wcid gh-ost
|
||||

|
||||
**Hubot**shlomi-noach testing fix-reappearing-throttled-reasons 41 seconds ago: ghost-db-mysql-0007
|
||||
jonahberquist testing interactive-command-question 7 seconds ago: ghost-db-mysql-0012
|
||||
|
||||
Nobody is in the queue.
|
||||
|
||||
Some PRs are small and do not affect the data itself. Changes to status messages, interactive commands, etc. have less impact on the `gh-ost` app. Others pose significant changes to the migration logic and operation. We test these rigorously, running them through our production tables fleet until we are satisfied these changes do not pose a data corruption threat.
|
||||
|
||||
### Summary[][46]
|
||||
|
||||
Through testing we build trust in our systems. By automating these tests, in production, we get repetitive confirmation that everything is working as expected. As we continue to develop our infrastructure we also follow up by adapting tests to cover the newest changes.
|
||||
|
||||
Production always surprises with scenarios not covered by tests. The more we test on production environment, the more input we get on our app’s expectations and our infrastructure’s capabilities.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://githubengineering.com/mysql-testing-automation-at-github/
|
||||
|
||||
作者:[tomkrouper ][a], [Shlomi Noach][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/tomkrouper
|
||||
[b]:https://github.com/shlomi-noach
|
||||
[1]:https://github.com/tomkrouper
|
||||
[2]:https://github.com/jessbreckenridge
|
||||
[3]:https://github.com/jessbreckenridge
|
||||
[4]:https://github.com/jessbreckenridge
|
||||
[5]:https://github.com/jessbreckenridge
|
||||
[6]:https://github.com/jessbreckenridge
|
||||
[7]:https://github.com/jessbreckenridge
|
||||
[8]:https://github.com/jessbreckenridge
|
||||
[9]:https://github.com/jessbreckenridge
|
||||
[10]:https://github.com/jessbreckenridge
|
||||
[11]:https://github.com/jessbreckenridge
|
||||
[12]:https://github.com/jessbreckenridge
|
||||
[13]:https://github.com/jessbreckenridge
|
||||
[14]:https://github.com/jessbreckenridge
|
||||
[15]:https://github.com/jessbreckenridge
|
||||
[16]:https://github.com/jessbreckenridge
|
||||
[17]:https://github.com/jessbreckenridge
|
||||
[18]:https://github.com/jessbreckenridge
|
||||
[19]:https://github.com/jessbreckenridge
|
||||
[20]:https://github.com/jessbreckenridge
|
||||
[21]:https://github.com/jessbreckenridge
|
||||
[22]:https://github.com/jessbreckenridge
|
||||
[23]:https://github.com/jessbreckenridge
|
||||
[24]:https://github.com/jessbreckenridge
|
||||
[25]:https://github.com/shlomi-noach
|
||||
[26]:https://github.com/shlomi-noach
|
||||
[27]:https://github.com/shlomi-noach
|
||||
[28]:https://github.com/jonahberquist
|
||||
[29]:https://github.com/jonahberquist
|
||||
[30]:https://github.com/jonahberquist
|
||||
[31]:https://githubengineering.com/mysql-testing-automation-at-github/
|
||||
[32]:https://github.com/tomkrouper
|
||||
[33]:https://github.com/tomkrouper
|
||||
[34]:https://github.com/shlomi-noach
|
||||
[35]:https://github.com/shlomi-noach
|
||||
[36]:https://githubengineering.com/mysql-testing-automation-at-github/#backups
|
||||
[37]:https://www.percona.com/software/mysql-database/percona-xtrabackup
|
||||
[38]:https://dev.mysql.com/doc/refman/5.6/en/replication-delayed.html
|
||||
[39]:https://githubengineering.com/mysql-testing-automation-at-github/#failovers
|
||||
[40]:http://githubengineering.com/orchestrator-github/
|
||||
[41]:https://githubengineering.com/context-aware-mysql-pools-via-haproxy/
|
||||
[42]:https://githubengineering.com/mysql-testing-automation-at-github/#schema-migrations
|
||||
[43]:http://githubengineering.com/gh-ost-github-s-online-migration-tool-for-mysql/
|
||||
[44]:https://github.com/github/gh-ost/pulls
|
||||
[45]:https://githubengineering.com/deploying-branches-to-github-com/
|
||||
[46]:https://githubengineering.com/mysql-testing-automation-at-github/#summary
|
|
||||
Writing a Linux Debugger Part 9: Handling variables
|
||||
============================================================
|
||||
|
||||
Variables are sneaky. At one moment they’ll be happily sitting in registers, but as soon as you turn your head they’re spilled to the stack. Maybe the compiler completely throws them out of the window for the sake of optimization. Regardless of how often variables move around in memory, we need some way to track and manipulate them in our debugger. This post will teach you more about handling variables in your debugger and demonstrate a simple implementation using `libelfin`.
|
||||
|
||||
* * *
|
||||
|
||||
### Series index
|
||||
|
||||
1. [Setup][1]
|
||||
|
||||
2. [Breakpoints][2]
|
||||
|
||||
3. [Registers and memory][3]
|
||||
|
||||
4. [Elves and dwarves][4]
|
||||
|
||||
5. [Source and signals][5]
|
||||
|
||||
6. [Source-level stepping][6]
|
||||
|
||||
7. [Source-level breakpoints][7]
|
||||
|
||||
8. [Stack unwinding][8]
|
||||
|
||||
9. [Handling variables][9]
|
||||
|
||||
10. [Advanced topics][10]
|
||||
|
||||
* * *
|
||||
|
||||
Before you get started, make sure that the version of `libelfin` you are using is the [`fbreg` branch of my fork][11]. This contains some hacks to support getting the base of the current stack frame and evaluating location lists, neither of which are supported by vanilla `libelfin`. You might need to pass `-gdwarf-2` to GCC to get it to generate compatible DWARF information. But before we get into the implementation, I’ll give a more detailed description of how locations are encoded in DWARF 5, which is the most recent specification. If you want more information than what I write here, then you can grab the standard from [here][12].
|
||||
|
||||
### DWARF locations
|
||||
|
||||
The location of a variable in memory at a given moment is encoded in the DWARF information using the `DW_AT_location` attribute. Location descriptions can be simple location descriptions, composite location descriptions, or location lists.
|
||||
|
||||
* Simple location descriptions describe the location of one contiguous piece (usually all) of an object. A simple location description may describe a location in addressable memory, or in a register, or the lack of a location (with or without a known value).
|
||||
* Example:
|
||||
* `DW_OP_fbreg -32`
|
||||
|
||||
* A variable which is entirely stored -32 bytes from the stack frame base
|
||||
|
||||
* Composite location descriptions describe an object in terms of pieces, each of which may be contained in part of a register or stored in a memory location unrelated to other pieces.
|
||||
* Example:
|
||||
* `DW_OP_reg3 DW_OP_piece 4 DW_OP_reg10 DW_OP_piece 2`
|
||||
|
||||
* A variable whose first four bytes reside in register 3 and whose next two bytes reside in register 10.
|
||||
|
||||
* Location lists describe objects which have a limited lifetime or change location during their lifetime.
|
||||
* Example:
|
||||
* `<loclist with 3 entries follows>`
|
||||
* `[ 0]<lowpc=0x2e00><highpc=0x2e19>DW_OP_reg0`
|
||||
|
||||
* `[ 1]<lowpc=0x2e19><highpc=0x2e3f>DW_OP_reg3`
|
||||
|
||||
* `[ 2]<lowpc=0x2ec4><highpc=0x2ec7>DW_OP_reg2`
|
||||
|
||||
* A variable whose location moves between registers depending on the current value of the program counter
|
||||
|
||||
The `DW_AT_location` is encoded in one of three different ways, depending on the kind of location description. `exprloc`s encode simple and composite location descriptions. They consist of a byte length followed by a DWARF expression or location description. `loclist`s and `loclistptr`s encode location lists. They give indexes or offsets into the `.debug_loclists` section, which describes the actual location lists.
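To give a concrete sense of where a simple `exprloc` such as `DW_OP_fbreg -32` comes from, here is a small, hypothetical C++ translation unit (an illustration, not code from this series). Compiled without optimization and with debug info, a local variable like `x` below typically ends up with a `DW_AT_location` of the form `DW_OP_fbreg <small negative offset>`; the exact offset is compiler- and ABI-dependent, so treat the comments as an assumption rather than a guarantee.

```
// location_demo.cpp -- build with: g++ -g -O0 -gdwarf-2 location_demo.cpp
// Dumping the debug info (e.g. readelf --debug-dump=info a.out) will usually
// show a DW_TAG_variable DIE for 'x' whose DW_AT_location is an exprloc such
// as DW_OP_fbreg <negative offset>, i.e. an offset from the frame base.
int sum_to(int n) {
    int x = 0;                          // stack-allocated local at -O0
    for (int i = 1; i <= n; ++i) {
        x += i;                         // keeps 'x' live across the loop
    }
    return x;
}

int main() { return sum_to(10) == 55 ? 0 : 1; }
```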
|
||||
|
||||
### DWARF Expressions
|
||||
|
||||
The actual location of the variables is computed using DWARF expressions. These consist of a series of operations which operate on a stack of values. There are an impressive number of DWARF operations available, so I won’t explain them all in detail. Instead I’ll give a few examples from each class of expression to give you a taste of what is available, and a toy sketch of the evaluation model follows the list below. Also, don’t get scared off by these; `libelfin` will take care of all of this complexity for us.
|
||||
|
||||
* Literal encodings
|
||||
* `DW_OP_lit0`, `DW_OP_lit1`, …, `DW_OP_lit31`
|
||||
* Push the literal value on to the stack
|
||||
|
||||
* `DW_OP_addr <addr>`
|
||||
* Pushes the address operand on to the stack
|
||||
|
||||
* `DW_OP_constu <unsigned>`
|
||||
* Pushes the unsigned value on to the stack
|
||||
|
||||
* Register values
|
||||
* `DW_OP_fbreg <offset>`
|
||||
* Pushes the value found at the base of the stack frame, offset by the given value
|
||||
|
||||
* `DW_OP_breg0`, `DW_OP_breg1`, …, `DW_OP_breg31 <offset>`
|
||||
* Pushes the contents of the given register plus the given offset to the stack
|
||||
|
||||
* Stack operations
|
||||
* `DW_OP_dup`
|
||||
* Duplicate the value at the top of the stack
|
||||
|
||||
* `DW_OP_deref`
|
||||
* Treats the top of the stack as a memory address, and replaces it with the contents of that address
|
||||
|
||||
* Arithmetic and logical operations
|
||||
* `DW_OP_and`
|
||||
* Pops the top two values from the stack and pushes back the logical `AND` of them
|
||||
|
||||
* `DW_OP_plus`
|
||||
* Same as `DW_OP_and`, but adds the values
|
||||
|
||||
* Control flow operations
|
||||
* `DW_OP_le`, `DW_OP_eq`, `DW_OP_gt`, etc.
|
||||
* Pops the top two values, compares them, and pushes `1` if the condition is true and `0` otherwise
|
||||
|
||||
* `DW_OP_bra <offset>`
|
||||
* Conditional branch: if the top of the stack is not `0`, skips back or forward in the expression by `offset`
|
||||
|
||||
* Type conversions
|
||||
* `DW_OP_convert <DIE offset>`
|
||||
* Converts value on the top of the stack to a different type, which is described by the DWARF information entry at the given offset
|
||||
|
||||
* Special operations
|
||||
* `DW_OP_nop`
|
||||
* Do nothing!
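As promised, here is a toy model of the stack machine described above. It is a minimal sketch for illustration only; it is not libelfin's API, and it covers just two operations (`DW_OP_constu` and `DW_OP_plus`), but it shows how an expression is reduced to a single value left on top of the stack.

```
#include <cstdint>
#include <iostream>
#include <stack>
#include <vector>

// A toy model of the DWARF expression stack machine. Real consumers such as
// libelfin handle the full operation set; this covers only DW_OP_constu and
// DW_OP_plus to illustrate the evaluation model.
enum class op { constu, plus };
struct insn { op code; std::uint64_t operand; };

std::uint64_t evaluate(const std::vector<insn>& expr) {
    std::stack<std::uint64_t> s;
    for (const auto& i : expr) {
        switch (i.code) {
        case op::constu:                 // push the unsigned literal operand
            s.push(i.operand);
            break;
        case op::plus: {                 // pop two values, push their sum
            auto a = s.top(); s.pop();
            auto b = s.top(); s.pop();
            s.push(a + b);
            break;
        }
        }
    }
    return s.top();                      // the result is the top of the stack
}

int main() {
    // Equivalent of the expression: DW_OP_constu 8; DW_OP_constu 34; DW_OP_plus
    std::cout << evaluate({{op::constu, 8}, {op::constu, 34}, {op::plus, 0}}) << '\n';  // prints 42
}
```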
|
||||
|
||||
### DWARF types
|
||||
|
||||
DWARF’s representation of types needs to be strong enough to give debugger users useful variable representations. Users most often want to be able to debug at the level of their application rather than at the level of their machine, and they need a good idea of what their variables are doing to achieve that.
|
||||
|
||||
DWARF types are encoded in DIEs along with the majority of the other debug information. They can have attributes to indicate their name, encoding, size, endianness, etc. A myriad of type tags are available to express pointers, arrays, structures, typedefs, anything else you could see in a C or C++ program.
|
||||
|
||||
Take this simple structure as an example:
|
||||
|
||||
```
|
||||
struct test{
|
||||
int i;
|
||||
float j;
|
||||
int k[42];
|
||||
test* next;
|
||||
};
|
||||
```
|
||||
|
||||
The parent DIE for this struct is this:
|
||||
|
||||
```
|
||||
< 1><0x0000002a> DW_TAG_structure_type
|
||||
DW_AT_name "test"
|
||||
DW_AT_byte_size 0x000000b8
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000001
|
||||
|
||||
```
|
||||
|
||||
The above says that we have a structure called `test` of size `0xb8`, declared at line `1` of `test.cpp`. After this come many child DIEs which describe the members.
|
||||
|
||||
```
|
||||
< 2><0x00000032> DW_TAG_member
|
||||
DW_AT_name "i"
|
||||
DW_AT_type <0x00000063>
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000002
|
||||
DW_AT_data_member_location 0
|
||||
< 2><0x0000003e> DW_TAG_member
|
||||
DW_AT_name "j"
|
||||
DW_AT_type <0x0000006a>
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000003
|
||||
DW_AT_data_member_location 4
|
||||
< 2><0x0000004a> DW_TAG_member
|
||||
DW_AT_name "k"
|
||||
DW_AT_type <0x00000071>
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000004
|
||||
DW_AT_data_member_location 8
|
||||
< 2><0x00000056> DW_TAG_member
|
||||
DW_AT_name "next"
|
||||
DW_AT_type <0x00000084>
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000005
|
||||
DW_AT_data_member_location 176(as signed = -80)
|
||||
|
||||
```
|
||||
|
||||
Each member has a name, a type (which is a DIE offset), a declaration file and line, and a byte offset into the structure where the member is located. The types which are pointed to come next.
|
||||
|
||||
```
|
||||
< 1><0x00000063> DW_TAG_base_type
|
||||
DW_AT_name "int"
|
||||
DW_AT_encoding DW_ATE_signed
|
||||
DW_AT_byte_size 0x00000004
|
||||
< 1><0x0000006a> DW_TAG_base_type
|
||||
DW_AT_name "float"
|
||||
DW_AT_encoding DW_ATE_float
|
||||
DW_AT_byte_size 0x00000004
|
||||
< 1><0x00000071> DW_TAG_array_type
|
||||
DW_AT_type <0x00000063>
|
||||
< 2><0x00000076> DW_TAG_subrange_type
|
||||
DW_AT_type <0x0000007d>
|
||||
DW_AT_count 0x0000002a
|
||||
< 1><0x0000007d> DW_TAG_base_type
|
||||
DW_AT_name "sizetype"
|
||||
DW_AT_byte_size 0x00000008
|
||||
DW_AT_encoding DW_ATE_unsigned
|
||||
< 1><0x00000084> DW_TAG_pointer_type
|
||||
DW_AT_type <0x0000002a>
|
||||
|
||||
```
|
||||
|
||||
As you can see, `int` on my laptop is a 4-byte signed integer type, and `float` is a 4-byte float. The integer array type is defined by pointing to the `int` type as its element type, a `sizetype` (think `size_t`) as the index type, with `0x2a` (42) elements. The `test*` type is a `DW_TAG_pointer_type` which references the `test` DIE. As a quick sanity check, the sizes add up: 4 (`i`) + 4 (`j`) + 42 × 4 (`k`) + 8 (`next`, a 64-bit pointer) = 184 = `0xb8`, matching the structure’s `DW_AT_byte_size` and the `next` member’s offset of 176.
|
||||
|
||||
* * *
|
||||
|
||||
### Implementing a simple variable reader
|
||||
|
||||
As mentioned, `libelfin` will deal with most of the complexity for us. However, it doesn’t implement all of the different methods for representing variable locations, and handling a lot of them in our code would get pretty complex. As such, I’ve chosen to only support `exprloc`s for now. Feel free to add support for more types of expression. If you’re really feeling brave, submit some patches to `libelfin` to help complete the necessary support!
|
||||
|
||||
Handling variables is mostly down to locating the different parts in memory or registers, then reading or writing is the same as you’ve seen before. I’ll only show you how to implement reading for the sake of simplicity.
|
||||
|
||||
First we need to tell `libelfin` how to read registers from our process. We do this by creating a class which inherits from `expr_context` and uses `ptrace` to handle everything:
|
||||
|
||||
```
|
||||
class ptrace_expr_context : public dwarf::expr_context {
|
||||
public:
|
||||
ptrace_expr_context (pid_t pid) : m_pid{pid} {}
|
||||
|
||||
dwarf::taddr reg (unsigned regnum) override {
|
||||
return get_register_value_from_dwarf_register(m_pid, regnum);
|
||||
}
|
||||
|
||||
dwarf::taddr pc() override {
|
||||
struct user_regs_struct regs;
|
||||
ptrace(PTRACE_GETREGS, m_pid, nullptr, &regs);
|
||||
return regs.rip;
|
||||
}
|
||||
|
||||
dwarf::taddr deref_size (dwarf::taddr address, unsigned size) override {
|
||||
//TODO take into account size
|
||||
return ptrace(PTRACE_PEEKDATA, m_pid, address, nullptr);
|
||||
}
|
||||
|
||||
private:
|
||||
pid_t m_pid;
|
||||
};
|
||||
```
|
||||
|
||||
The reading will be handled by a `read_variables` function in our `debugger` class:
|
||||
|
||||
```
|
||||
void debugger::read_variables() {
|
||||
using namespace dwarf;
|
||||
|
||||
auto func = get_function_from_pc(get_pc());
|
||||
|
||||
//...
|
||||
}
|
||||
```
|
||||
|
||||
The first thing we do above is find the function which we’re currently in. Then we need to loop through the entries in that function, looking for variables:
|
||||
|
||||
```
|
||||
for (const auto& die : func) {
|
||||
if (die.tag == DW_TAG::variable) {
|
||||
//...
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
We get the location information by looking up the `DW_AT_location` entry in the DIE:
|
||||
|
||||
```
|
||||
auto loc_val = die[DW_AT::location];
|
||||
```
|
||||
|
||||
Then we ensure that it’s an `exprloc` and ask `libelfin` to evaluate the expression for us:
|
||||
|
||||
```
|
||||
if (loc_val.get_type() == value::type::exprloc) {
|
||||
ptrace_expr_context context {m_pid};
|
||||
auto result = loc_val.as_exprloc().evaluate(&context);
|
||||
```
|
||||
|
||||
Now that we’ve evaluated the expression, we need to read the contents of the variable. It could be in memory or a register, so we’ll handle both cases:
|
||||
|
||||
```
|
||||
switch (result.location_type) {
|
||||
case expr_result::type::address:
|
||||
{
|
||||
auto value = read_memory(result.value);
|
||||
std::cout << at_name(die) << " (0x" << std::hex << result.value << ") = "
|
||||
<< value << std::endl;
|
||||
break;
|
||||
}
|
||||
|
||||
case expr_result::type::reg:
|
||||
{
|
||||
auto value = get_register_value_from_dwarf_register(m_pid, result.value);
|
||||
std::cout << at_name(die) << " (reg " << result.value << ") = "
|
||||
<< value << std::endl;
|
||||
break;
|
||||
}
|
||||
|
||||
default:
|
||||
throw std::runtime_error{"Unhandled variable location"};
|
||||
}
|
||||
```
|
||||
|
||||
As you can see I’ve simply printed out the value without interpreting it based on the type of the variable. Hopefully from this code you can see how you could support writing variables, or searching for variables with a given name.
|
||||
|
||||
Finally we can add this to our command parser:
|
||||
|
||||
```
|
||||
else if(is_prefix(command, "variables")) {
|
||||
read_variables();
|
||||
}
|
||||
```
|
||||
|
||||
### Testing it out
|
||||
|
||||
Write a few small functions which have some variables, compile it without optimizations and with debug info, then see if you can read the values of your variables. Try writing to the memory address where a variable is stored and see the behaviour of the program change.
|
||||
|
||||
* * *
|
||||
|
||||
Nine posts down, one to go! Next time I’ll be talking about some more advanced concepts which might interest you. For now you can find the code for this post [here][13]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.tartanllama.xyz/writing-a-linux-debugger-variables/
|
||||
|
||||
作者:[ Simon Brand][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/writing-a-linux-debugger-setup/
|
||||
[2]:https://blog.tartanllama.xyz/writing-a-linux-debugger-breakpoints/
|
||||
[3]:https://blog.tartanllama.xyz/writing-a-linux-debugger-registers/
|
||||
[4]:https://blog.tartanllama.xyz/writing-a-linux-debugger-elf-dwarf/
|
||||
[5]:https://blog.tartanllama.xyz/writing-a-linux-debugger-source-signal/
|
||||
[6]:https://blog.tartanllama.xyz/writing-a-linux-debugger-dwarf-step/
|
||||
[7]:https://blog.tartanllama.xyz/writing-a-linux-debugger-source-break/
|
||||
[8]:https://blog.tartanllama.xyz/writing-a-linux-debugger-unwinding/
|
||||
[9]:https://blog.tartanllama.xyz/writing-a-linux-debugger-variables/
|
||||
[10]:https://blog.tartanllama.xyz/writing-a-linux-debugger-advanced-topics/
|
||||
[11]:https://github.com/TartanLlama/libelfin/tree/fbreg
|
||||
[12]:http://dwarfstd.org/
|
||||
[13]:https://github.com/TartanLlama/minidbg/tree/tut_variable
|
|
||||
Writing a Linux Debugger Part 10: Advanced topics
|
||||
============================================================
|
||||
|
||||
We’re finally here at the last post of the series! This time I’ll be giving a high-level overview of some more advanced concepts in debugging: remote debugging, shared library support, expression evaluation, and multi-threaded support. These ideas are more complex to implement, so I won’t walk through how to do so in detail, but I’m happy to answer questions about these concepts if you have any.
|
||||
|
||||
* * *
|
||||
|
||||
### Series index
|
||||
|
||||
1. [Setup][1]
|
||||
|
||||
2. [Breakpoints][2]
|
||||
|
||||
3. [Registers and memory][3]
|
||||
|
||||
4. [Elves and dwarves][4]
|
||||
|
||||
5. [Source and signals][5]
|
||||
|
||||
6. [Source-level stepping][6]
|
||||
|
||||
7. [Source-level breakpoints][7]
|
||||
|
||||
8. [Stack unwinding][8]
|
||||
|
||||
9. [Handling variables][9]
|
||||
|
||||
10. [Advanced topics][10]
|
||||
|
||||
* * *
|
||||
|
||||
### Remote debugging
|
||||
|
||||
Remote debugging is very useful for embedded systems or debugging the effects of environment differences. It also sets a nice divide between the high-level debugger operations and the interaction with the operating system and hardware. In fact, debuggers like GDB and LLDB operate as remote debuggers even when debugging local programs. The general architecture is this:
|
||||
|
||||

|
||||
|
||||
The debugger is the component which we interact with through the command line. Maybe if you’re using an IDE there’ll be another layer on top which communicates with the debugger through the _machine interface_ . On the target machine (which may be the same as the host) there will be a _debug stub_ , which in theory is a very small wrapper around the OS debug library which carries out all of your low-level debugging tasks like setting breakpoints on addresses. I say “in theory” because stubs are getting larger and larger these days. The LLDB debug stub on my machine is 7.6MB, for example. The debug stub communicates with the debugee process using some OS-specific features (in our case, `ptrace`), and with the debugger though some remote protocol.
|
||||
|
||||
The most common remote protocol for debugging is the GDB remote protocol. This is a text-based packet format for communicating commands and information between the debugger and debug stub. I won’t go into detail about it, but you can read all you could want to know about it [here][11]. If you launch LLDB and execute the command `log enable gdb-remote packets` then you’ll get a trace of all packets sent through the remote protocol. On GDB you can write `set remotelogfile <file>` to do the same.
|
||||
|
||||
As a simple example, here’s the packet to set a breakpoint:
|
||||
|
||||
```
|
||||
$Z0,400570,1#43
|
||||
|
||||
```
|
||||
|
||||
`$` marks the start of the packet. `Z0` is the command to insert a memory breakpoint. `400570` and `1` are the arguments, where the former is the address to set a breakpoint on and the latter is a target-specific breakpoint kind specifier. Finally, the `#43` is a checksum to ensure that there was no data corruption.
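The checksum is easy to compute yourself: it is just the sum of the payload bytes modulo 256, written as two hex digits. Here is a minimal sketch that reproduces the `#43` for the packet above:

```
#include <cstdio>
#include <string>

// GDB remote protocol checksum: the sum of the payload bytes modulo 256,
// appended to the packet after '#' as two lowercase hex digits.
unsigned packet_checksum(const std::string& payload) {
    unsigned sum = 0;
    for (unsigned char c : payload) sum += c;
    return sum % 256;
}

int main() {
    const std::string payload = "Z0,400570,1";
    std::printf("$%s#%02x\n", payload.c_str(), packet_checksum(payload));  // prints: $Z0,400570,1#43
}
```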
|
||||
|
||||
The GDB remote protocol is very easy to extend for custom packets, which is very useful for implementing platform- or language-specific functionality.
|
||||
|
||||
* * *
|
||||
|
||||
### Shared library and dynamic loading support
|
||||
|
||||
The debugger needs to know what shared libraries have been loaded by the debuggee so that it can set breakpoints, get source-level information and symbols, etc. As well as finding libraries which have been dynamically linked against, the debugger must track libraries which are loaded at runtime through `dlopen`. To facilitate this, the dynamic linker maintains a _rendezvous structure_ . This structure maintains a linked list of shared library descriptors, along with a pointer to a function which is called whenever the linked list is updated. This structure is stored where the `.dynamic` section of the ELF file is loaded, and is initialized before program execution.
|
||||
|
||||
A simple tracing algorithm is this:
|
||||
|
||||
* The tracer looks up the entry point of the program in the ELF header (or it could use the auxiliary vector stored in `/proc/<pid>/auxv`)
|
||||
|
||||
* The tracer places a breakpoint on the entry point of the program and begins execution.
|
||||
|
||||
* When the breakpoint is hit, the address of the rendezvous structure is found by looking up the load address of `.dynamic` in the ELF file.
|
||||
|
||||
* The rendezvous structure is examined to get the list of currently loaded libraries.
|
||||
|
||||
* A breakpoint is set on the linker update function.
|
||||
|
||||
* Whenever the breakpoint is hit, the list is updated.
|
||||
|
||||
* The tracer infinitely loops, continuing the program and waiting for a signal until the tracee signals that it has exited.
|
||||
|
||||
I’ve written a small demonstration of these concepts, which you can find [here][12]. I can do a more detailed write up of this in the future if anyone is interested.
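For a flavour of what that demonstration deals with, here is a minimal sketch that walks the rendezvous structure for the current process and prints every loaded shared object. It assumes a glibc system, where `<link.h>` declares `struct r_debug`, `struct link_map`, and the `_r_debug` symbol; a real debugger would read the same structure out of the tracee's address space with `ptrace(PTRACE_PEEKDATA, ...)` instead of dereferencing pointers in-process.

```
#include <iostream>
#include <link.h>  // struct r_debug, struct link_map, _r_debug (glibc)

// Walk the dynamic linker's rendezvous structure for the *current* process.
// A debugger performs the same walk, but reads _r_debug and each link_map
// node out of the tracee's memory via ptrace rather than directly.
int main() {
    for (link_map* lm = _r_debug.r_map; lm != nullptr; lm = lm->l_next) {
        const char* name = (lm->l_name && lm->l_name[0]) ? lm->l_name : "<main executable>";
        std::cout << std::hex << "0x" << lm->l_addr << "  " << name << '\n';
    }
}
```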
|
||||
|
||||
* * *
|
||||
|
||||
### Expression evaluation
|
||||
|
||||
Expression evaluation is a feature which lets users evaluate expressions in the original source language while debugging their application. For example, in LLDB or GDB you could execute `print foo()` to call the `foo` function and print the result.
|
||||
|
||||
Depending on how complex the expression is, there are a few different ways of evaluating it. If the expression is a simple identifier, then the debugger can look at the debug information, locate the variable and print out the value, just like we did in the last part of this series. If the expression is a bit more complex, then it may be possible to compile the code to an intermediate representation (IR) and interpret that to get the result. For example, for some expressions LLDB will use Clang to compile the expression to LLVM IR and interpret that. If the expression is even more complex, or requires calling some function, then the code might need to be JITted to the target and executed in the address space of the debuggee. This involves calling `mmap` to allocate some executable memory, then the compiled code is copied to this block and is executed. LLDB does this by using LLVM’s JIT functionality.
|
||||
|
||||
If you want to know more about JIT compilation, I’d highly recommend [Eli Bendersky’s posts on the subject][13].
|
||||
|
||||
* * *
|
||||
|
||||
### Multi-threaded debugging support
|
||||
|
||||
The debugger shown in this series only supports single threaded applications, but to debug most real-world applications, multi-threaded support is highly desirable. The simplest way to support this is to trace thread creation and parse the procfs to get the information you want.
|
||||
|
||||
The Linux threading library is called `pthreads`. When `pthread_create` is called, the library creates a new thread using the `clone` syscall, and we can trace this syscall with `ptrace` (assuming your kernel is 2.5.46 or newer). To do this, you’ll need to set some `ptrace` options after attaching to the debuggee:
|
||||
|
||||
```
|
||||
ptrace(PTRACE_SETOPTIONS, m_pid, nullptr, PTRACE_O_TRACECLONE);
|
||||
```
|
||||
|
||||
Now when `clone` is called, the process will be signaled with our old friend `SIGTRAP`. For the debugger in this series, you can add a case to `handle_sigtrap` which can handle the creation of the new thread:
|
||||
|
||||
```
|
||||
case (SIGTRAP | (PTRACE_EVENT_CLONE << 8)):
|
||||
//get the new thread ID
|
||||
unsigned long event_message = 0;
|
||||
ptrace(PTRACE_GETEVENTMSG, pid, nullptr, &event_message);
|
||||
|
||||
//handle creation
|
||||
//...
|
||||
```
|
||||
|
||||
Once you’ve got that, you can look in `/proc/<pid>/task/` and read the memory maps and suchlike to get all the information you need.
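A minimal sketch of that lookup (an assumption of how you might do it, not code from this series) is just a directory walk; each entry under `/proc/<pid>/task` is a thread ID whose own `stat`, `status`, and register state you can then query:

```
#include <filesystem>
#include <iostream>
#include <string>
#include <unistd.h>

// List the thread IDs of a process by reading /proc/<pid>/task.
// Shown for the current process; a debugger would substitute the tracee's pid
// and could then attach to or inspect each thread individually.
// Build with -std=c++17 or later for <filesystem>.
int main() {
    namespace fs = std::filesystem;
    const fs::path task_dir = "/proc/" + std::to_string(getpid()) + "/task";
    for (const auto& entry : fs::directory_iterator(task_dir)) {
        std::cout << "thread id: " << entry.path().filename().string() << '\n';
    }
}
```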
|
||||
|
||||
GDB uses `libthread_db`, which provides a bunch of helper functions so that you don’t need to do all the parsing and processing yourself. Setting up this library is pretty weird and I won’t show how it works here, but you can go and read [this tutorial][14] if you’d like to use it.
|
||||
|
||||
The most complex part of multithreaded support is modelling the thread state in the debugger, particularly if you want to support [non-stop mode][15] or some kind of heterogeneous debugging where you have more than just a CPU involved in your computation.
|
||||
|
||||
* * *
|
||||
|
||||
### The end!
|
||||
|
||||
Whew! This series took a long time to write, but I learned a lot in the process and I hope it was helpful. Get in touch on Twitter [@TartanLlama][16] or in the comments section if you want to chat about debugging or have any questions about the series. If there are any other debugging topics you’d like to see covered then let me know and I might do a bonus post.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.tartanllama.xyz/writing-a-linux-debugger-advanced-topics/
|
||||
|
||||
作者:[Simon Brand ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/writing-a-linux-debugger-setup/
|
||||
[2]:https://blog.tartanllama.xyz/writing-a-linux-debugger-breakpoints/
|
||||
[3]:https://blog.tartanllama.xyz/writing-a-linux-debugger-registers/
|
||||
[4]:https://blog.tartanllama.xyz/writing-a-linux-debugger-elf-dwarf/
|
||||
[5]:https://blog.tartanllama.xyz/writing-a-linux-debugger-source-signal/
|
||||
[6]:https://blog.tartanllama.xyz/writing-a-linux-debugger-dwarf-step/
|
||||
[7]:https://blog.tartanllama.xyz/writing-a-linux-debugger-source-break/
|
||||
[8]:https://blog.tartanllama.xyz/writing-a-linux-debugger-unwinding/
|
||||
[9]:https://blog.tartanllama.xyz/writing-a-linux-debugger-variables/
|
||||
[10]:https://blog.tartanllama.xyz/writing-a-linux-debugger-advanced-topics/
|
||||
[11]:https://sourceware.org/gdb/onlinedocs/gdb/Remote-Protocol.html
|
||||
[12]:https://github.com/TartanLlama/dltrace
|
||||
[13]:http://eli.thegreenplace.net/tag/code-generation
|
||||
[14]:http://timetobleed.com/notes-about-an-odd-esoteric-yet-incredibly-useful-library-libthread_db/
|
||||
[15]:https://sourceware.org/gdb/onlinedocs/gdb/Non_002dStop-Mode.html
|
||||
[16]:https://twitter.com/TartanLlama
|
@ -1,93 +0,0 @@
|
||||
Creating better disaster recovery plans
|
||||
============================================================
|
||||
|
||||
Five questions for Tanya Reilly: How service interdependencies make recovery harder and why it’s a good idea to deliberately and preemptively manage dependencies.
|
||||
|
||||
[Register for the O'Reilly Velocity Conference][5] to join Tanya Reilly and other industry experts. Use code ORM20 to save 20% on your conference pass (Gold, Silver, and Bronze passes).
|
||||
|
||||
I recently asked Tanya Reilly, Site Reliability Engineer at Google, to share her thoughts on how to make better disaster recovery plans. Tanya is presenting a session titled [_Have you tried turning it off and turning it on again?_][9] at the O’Reilly Velocity Conference, taking place Oct. 1-4 in New York.
|
||||
|
||||
### 1\. What are the most common mistakes people make when planning their backup systems strategy?
|
||||
|
||||
The classic line is "you don't need a backup strategy, you need a restore strategy." If you have backups, but you haven't tested restoring them, you don't really have backups. Testing doesn't just mean knowing you can get the data back; it means knowing how to put it back into the database, how to handle incremental changes, how to reinstall the whole thing if you need to. It means being sure that your recovery path doesn't rely on some system that could be lost at the same time as the data.
|
||||
|
||||
But testing restores is tedious. It's the sort of thing that people will cut corners on if they're busy. It's worth taking the time to make it as simple and painless and automated as possible; never rely on human willpower for anything! At the same time, you have to be sure that the people involved know what to do, so it's good to plan regular wide-scale disaster tests. Recovery exercises are a great way to find out that the documentation for the process is missing or out of date, or that you don't have enough resources (disk, network, etc.) to transfer and reinsert the data.
|
||||
|
||||
### 2\. What are the most common challenges in creating a disaster recovery (DR) plan?
|
||||
|
||||
I think a lot of DR is an afterthought: "We have this great system, and our business relies on it ... I guess we should do DR for it?" And by that point, the system is extremely complex, full of interdependencies and hard to duplicate.
|
||||
|
||||
The first time something is installed, it's often hand-crafted by a human who is tweaking things and getting it right, and sometimes that's the version that sticks around. When you build the _second_ one, it's hard to be sure it's exactly the same. Even in sites with serious config management, you can leave something out, or let it get out of date.
|
||||
|
||||
Encrypted backups aren't much use if you've lost access to the decryption key, for example. And any parts that are only used in a disaster may have bit-rotted since you last checked in on them. The only way to be sure you've covered everything is to fail over in earnest. Plan your disaster for a time when you're ready for it!
|
||||
|
||||
It's better if you can design the system so that the disaster recovery modes are part of normal operation. If your service is designed from the start to be replicated, adding more replicas is a regular operation and probably automated. There are no new pathways; it's just a capacity problem. But there can still be some forgotten components of the system that only run in one or two places. An occasional scheduled fake disaster is good for shaking those out.
|
||||
|
||||
By the way, those forgotten components could include information that's only in one person's brain, so if you find yourself saying, "We can't do our DR failover test until X is back from vacation," then that person is a dangerous single point of failure.
|
||||
|
||||
Parts of the system that are only used in disasters need the most testing, or they'll fail you when you need them. The fewer of those you have, the safer you are and the less toilsome testing you have to do.
|
||||
|
||||
### 3\. Why do service interdependencies make recovery harder after a disaster?
|
||||
|
||||
If you've got just one binary, then recovering it is relatively easy: you start that binary back up. But we increasingly break out common functionality into separate services. Microservices mean we have more flexibility and less reinvention of wheels: if we need a backend to do something and one already exists, great, we can just use that. But someone needs to keep a big picture of what depends on what, because it can get very tangled very fast.
|
||||
|
||||
#### MANAGE, GROW, AND EVOLVE YOUR SYSTEMS
|
||||
|
||||
|
||||
You may know what backends you use directly, but you might not notice when new ones are added into libraries you use. You might depend on something that also indirectly depends on you. After an outage, you can end up with a deadlock: two systems that each can't start until the other is running and providing some functionality. It's a hard situation to recover from!
|
||||
|
||||
You can even end up with things that indirectly depend on themselves—for example, a device that you need to configure to bring up the network, but you can't get to it while the network is down. Often people have thought about these circular dependencies in advance and have some sort of fallback plan, but those are inherently the road less traveled: they're only intended to be used in extreme cases, and they follow a different path through your systems or processes or code. This means they're more likely to have a bug that won't be uncovered until you really, really need them to work.
|
||||
|
||||
### 4\. You advise people to start deliberately managing their dependencies long before they think they need to in order to ward off potentially catastrophic system failure. Why is this important and what’s your advice for doing it effectively?
|
||||
|
||||
Managing your dependencies is essential for being sure you can recover from a disaster. It makes operating the systems easier too. If your dependencies aren't reliable, you can't be reliable, so you need to know what they are.
|
||||
|
||||
It's possible to start managing dependencies after they've become chaotic, but it's much, much easier if you start early. You can set policies on the use of various services—for example, you must be this high in the stack to depend on this set of systems. You can introduce a culture of thinking about dependencies by making it a regular part of design document review. But bear in mind that lists of dependencies will quickly become stale; it's best if you have programmatic dependency discovery, and even dependency enforcement. [My Velocity talk][10] covers more about how we do that.
|
||||
|
||||
The other advantage of starting early is that you can split up your services into vertical "strata," where the functionality in each stratum must be able to come completely online before the next one begins. So, for example, you could say that the network has to be able to completely start up without using any other services. Then, say, your storage systems should depend on nothing but the network, the application backends should only depend on network and storage, and so on. Different strata will make sense for different architectures.
|
||||
|
||||
If you plan this in advance, it's much easier for new services to choose dependencies. Each one should only depend on services lower in the stack. You can still end up with cycles—things in the same stratum depending on each other—but they're more tightly contained and easier to deal with on a case-by-case basis.
|
||||
|
||||
### 5\. What other parts of the program for Velocity NY are of interest to you?
|
||||
|
||||
I've got my whole Tuesday and Wednesday schedule completely worked out! As you might have gathered, I care a lot about making huge interdependent systems manageable, so I'm looking forward to hearing [Carin Meier's thoughts on managing system complexity][11], [Sarah Wells on microservices][12] and [Baron Schwartz on observability][13]. I'm fascinated to hear [Jon Moore's story][14] on how Comcast went from yearly release cycles to releasing daily. And as an ex-sysadmin, I'm looking forward to hearing [where Bryan Liles sees that role going][15].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Nikki McDonald
|
||||
|
||||
Nikki McDonald is a content director at O'Reilly Media, Inc. She lives in Ann Arbor, Michigan.
|
||||
|
||||
Tanya Reilly
|
||||
|
||||
Tanya Reilly has been a Systems Administrator and Site Reliability Engineer at Google since 2005, working on low-level infrastructure like distributed locking, load balancing, and bootstrapping. Before Google, she was a Systems Administrator at eircom.net, Ireland’s largest ISP, and before that she was the entire IT Department for a small software house.
|
||||
|
||||
----------------------------
|
||||
|
||||
via: https://www.oreilly.com/ideas/creating-better-disaster-recovery-plans
|
||||
|
||||
作者:[ Nikki McDonald][a],[Tanya Reilly][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.oreilly.com/people/nikki-mcdonald
|
||||
[b]:https://www.oreilly.com/people/5c97a-tanya-reilly
|
||||
[1]:https://pixabay.com/en/crane-baukran-load-crane-crane-arm-2436704/
|
||||
[2]:https://conferences.oreilly.com/velocity/vl-ny?intcmp=il-webops-confreg-reg-vlny17_new_site_right_rail_cta
|
||||
[3]:https://www.oreilly.com/people/nikki-mcdonald
|
||||
[4]:https://www.oreilly.com/people/5c97a-tanya-reilly
|
||||
[5]:https://conferences.oreilly.com/velocity/vl-ny?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_text_cta
|
||||
[6]:https://www.oreilly.com/ideas/creating-better-disaster-recovery-plans
|
||||
[7]:https://conferences.oreilly.com/velocity/vl-ny?intcmp=il-webops-confreg-reg-vlny17_new_site_right_rail_cta
|
||||
[8]:https://conferences.oreilly.com/velocity/vl-ny?intcmp=il-webops-confreg-reg-vlny17_new_site_right_rail_cta
|
||||
[9]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61400?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[10]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61400?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[11]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62779?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[12]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61597?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[13]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/61630?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[14]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62733?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
||||
[15]:https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/62893?intcmp=il-webops-confreg-reg-vlny17_new_site_creating_better_disaster_recovery_plans_body_text_cta
|
@ -1,4 +1,4 @@
|
||||
Go vs .NET Core in terms of HTTP performance
|
||||
(翻译中by runningwater)Go vs .NET Core in terms of HTTP performance
|
||||
============================================================
|
||||
|
||||

|
||||
|
@ -0,0 +1,86 @@
|
||||
Orchestration tools fully exploit Linux container technology
|
||||
============================================================
|
||||
|
||||
### Once companies get past the “let’s see how these container things work” stage, they end up with a lot of containers running in a lot of different places
|
||||
|
||||

|
||||
>Thinkstock
|
||||
|
||||
Companies that need to deliver applications quickly and efficiently—and today, what company doesn’t need to do this?—are turning to Linux containers. What they are also finding is that once they get past the “let’s see how these container things work” stage, they are going to end up with a lot of containers running in a lot of different places.
|
||||
|
||||
Linux container technology is not new, but it has increased in popularity due to factors including the innovative packaging format (now [Open Container Initiative (OCI) format][3]) originally invented by Docker, as well as the competitive requirement for continual development and deployment of new applications. In a May 2016 Forrester study commissioned by Red Hat, 48 percent of respondents said they were already using containers in development, a figure projected to rise to 53 percent this year. Only one-fifth of respondents said that they wouldn’t leverage containers in development processes in 2017.
|
||||
|
||||
Like Lego blocks, container images enable easy reuse of code and services. Each container image is like a separate Lego block, designed to do one part of the job really well. This could be a database, a data store, or even a booking service or an analytics service. By packaging each part separately, they can be used in different applications. But, without some sort of application definition (the instruction booklet), it’s difficult to create copies of the full application in different environments. That’s where container orchestration comes in.
|
||||
|
||||

|
||||
Scott McCarty
|
||||
|
||||
Container orchestration provides an infrastructure like the Lego system – the developer can provide simple instructions for how to build the application. The orchestration engine will know how to run it. This makes it easy to create multiple copies of the same application, spanning developer laptops, CI/CD system, and even production data centers and cloud provider environments.
|
||||
|
||||
Linux container images allow companies to package and isolate the building blocks of applications with their entire runtime environment (operating system pieces). Building on this, container orchestration makes it easy to define and run all of the blocks together as a full application. Once the work has been invested to define the full application, it can be moved between different environments (dev, test, production, and so on) without breaking it, and without changing how it behaves.
|
||||
|
||||
### Kicking the tires on containers
|
||||
|
||||
It’s clear that containers make sense, and more and more companies are figuratively kicking the tires on containers. In the beginning, it might be one developer working with a single container, or a team of developers working with multiple containers. In the latter scenario, the developers are likely writing home-grown code to deal with the complexities that quickly arise once a container deployment grows beyond a single instance.
|
||||
|
||||
This is all well and good: They’re developers, after all – they’ve got this. But it’s going to get messy, even in the developer world, and the home-grown code model is just not going to fly once containers move to QA and [dun, dun, duuuuunnnnn] production.
|
||||
|
||||
Orchestration tools do essentially two things. First, they help developers define what their application looks like – the set of services it takes to build up an instance of their application – the databases, data stores, web servers, etc., for each application instance. Orchestrators help standardize what all the parts of an application look like, running together and communicating with each other, in what I would call a standardized application definition. Second, they manage the process of starting, stopping, upgrading and running these multiple containers in a cluster of compute resources, which is especially useful when running multiple copies of any given application, for things like continuous integration (CI) and continuous delivery (CD).
|
||||
|
||||
|
||||
Think about it like an apartment building. Everyone who lives there has the same street address, but each person has a number or letter or combination of both that specifically identifies him or her. This is necessary, for example, for the delivery of the right mail and packages to the right tenants.
|
||||
|
||||
Likewise with containers, as soon as you have two containers or two hosts that you want to run those containers on, you have to keep track of things like where developers go to test a database connection or where users go to connect to a service running in a container. Container orchestration tools essentially help manage the logistics of containers across multiple hosts. They extend life cycle management capabilities to full applications, made of multiple containers, deployed on a cluster of machines, allowing users to treat the entire cluster as a single deployment target.
|
||||
|
||||
It’s really that simple—and that complicated. Orchestration tools provide a number of capabilities, ranging from provisioning containers, to identifying and rescheduling failed containers, to exposing containers to systems and services outside the cluster, to adding and removing containers on demand.
|
||||
|
||||
While container technology has been around for a while, container orchestration tools have been available only for a few years. Orchestrators were developed from lessons learned with high-performance computing (HPC) and application management internally at Google. In essence, they were built to deal with the monstrosity that is running a bunch of stuff (batch jobs, services, etc.) on a bunch of servers. Since then, orchestrators have evolved to enable companies to strategically leverage containers.
|
||||
|
||||
Once your company determines that it needs container orchestration, the next step is figuring out which platform makes the most sense for the business. When evaluating container orchestrators, look closely at (among other things):
|
||||
|
||||
* Application definition language
|
||||
|
||||
* Existing capability set
|
||||
|
||||
* Rate at which new capabilities are being added
|
||||
|
||||
* Whether it is open source or proprietary
|
||||
|
||||
* Community health (how active/productive members are, the quality/quantity of member submissions, diversity of contributors – individuals and companies)
|
||||
|
||||
* Hardening efforts
|
||||
|
||||
* Reference architectures
|
||||
|
||||
* Certifications
|
||||
|
||||
* Process for productization
|
||||
|
||||
There are three major container orchestration platforms, which seem to be ahead of the others, each with its own history.
|
||||
|
||||
1. **Docker Swarm:** Swarm is an add-on to Docker – arguably, the container poster child. Swarm allows users to establish and manage a cluster of Docker nodes as a single virtual system. The challenge with Swarm is it seems on track to become a single-vendor project.
|
||||
|
||||
2. **Mesos**: Mesos grew up from Apache and high-performance computing, and thus serves as an excellent scheduler. Mesos is also very technically advanced, although it doesn’t seem to have the velocity or investment compared to others.
|
||||
|
||||
3. **Kubernetes:** Developed by Google, with lessons from an internal orchestrator named Borg, Kubernetes is widely used and has a robust community around it. In fact, it’s the No. 1 project on GitHub. Mesos may currently have a slight technical advantage over Kubernetes, but Kubernetes is a fast-moving project, which is also making architectural investments for long-term technical gains. It should catch up and surpass Mesos in terms of technical capabilities in the very near future.
|
||||
|
||||
### The future of orchestration
|
||||
|
||||
Looking ahead, companies can expect to see orchestration tools moving in an application- and service-focused direction. Because, in reality, rapid application development today is really about quickly leveraging a mix of services, code, and data. Whether those services are open source and deployed by your internal team or consumed from a cloud provider, the future looks like a mix of both. Since today’s orchestrators are also tackling the application definition challenge, expect to see them tackle the integration of external services more and more.
|
||||
|
||||
For the here and now, companies that want to take full advantage of containers must take advantage of container orchestration.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.infoworld.com/article/3205304/containers/orchestration-tools-enable-companies-to-fully-exploit-linux-container-technology.html
|
||||
|
||||
作者:[ Scott McCarty][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.infoworld.com/author/Scott-McCarty/
|
||||
[1]:https://www.infoworld.com/article/3204171/what-is-docker-linux-containers-explained.html#tk.ifw-infsb
|
||||
[2]:https://www.infoworld.com/resources/16373/application-virtualization/the-beginners-guide-to-docker.html#tk.ifw-infsb
|
||||
[3]:https://github.com/opencontainers/image-spec
|
@ -1,202 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Getting started with ImageMagick
|
||||
============================================================
|
||||
|
||||
### Learn common ways to view and modify images with this lightweight image editor.
|
||||
|
||||

|
||||
Image by : opensource.com
|
||||
|
||||
In a recent article about [lightweight image viewers][8], author Scott Nesbitt mentioned display, one of the components in [ImageMagick][9]. ImageMagick is not merely an image viewer—it offers a large number of utilities and options for image editing. This tutorial will explain more about using the **display** command and other command-line utilities in ImageMagick.
|
||||
|
||||
With a number of excellent image editors available, you may be wondering why someone would choose a mainly non-GUI, command-line based program like ImageMagick. For one thing, it is rock-solid dependable. But an even bigger benefit is that it allows you to set up methods to edit a large number of images in a particular way.
|
||||
|
||||
This introduction to common ImageMagick commands should get you started.
|
||||
|
||||
### The display command
|
||||
|
||||
Let's start with the command Scott mentioned: **display**. Say you have a directory with a lot of images you want to look at. Start **display** with the following command:
|
||||
|
||||
```
|
||||
cd Pictures
|
||||
display *.JPG
|
||||
```
|
||||
|
||||
This will load your JPG files sequentially in alphanumeric order, one at a time in a simple window. Left-clicking on an image brings up a simple, standalone menu (the only GUI feature you'll see in ImageMagick).
|
||||
|
||||
### [display_menu.png][1]
|
||||
|
||||

|
||||
|
||||
Here's what you'll find in the **display** menu:
|
||||
|
||||
* **File** contains the options _Open, Next, Former, Select, Save, Print, Delete, New, Visual Directory_, and _Quit_. _Select_ picks a specific image file to display, _Visual Directory_ shows all of the files (not just the images) in the current working directory. If you want to scroll through all the selected images, you can use _Next_ and _Former_, but it's easier to use their keyboard shortcuts (Spacebar for the next image and Backspace for the previous).
|
||||
|
||||
* **Edit** offers _Undo, Redo, Cut, Copy_, and _Paste_, which are just auxiliary commands for more specific editing processes. _Undo_ is especially useful when you're playing around with different edits to see what they do.
|
||||
|
||||
* **View** has _Half Size, Original Size, Double Size, Resize, Apply, Refresh_, and _Restore_. These are mostly self-explanatory and, unless you save the image after applying one of them, the image file isn't changed. _Resize_ brings up a dialog to name a specific size either in pixels, with or without constrained dimensions, or a percentage. I'm not sure what _Apply_ does.
|
||||
|
||||
* **Transform** shows _Crop, Chop, Flop, Flip, Rotate Right, Rotate Left, Rotate, Shear, Roll_, and _Trim Edges_. _Chop_ uses a click-drag operation to cut out a vertical or horizontal section of the image, pasting the edges together. The best way to learn how these features work is to play with them, rather than reading about them.
|
||||
|
||||
* **Enhance** provides _Hue, Saturation, Brightness, Gamma, Spiff, Dull, Contrast Stretch, Sigmoidal Contrast, Normalize, Equalize, Negate, Grayscale, Map_ , and _Quantize_ . These are operations for color manipulation and adjusting brightness and contrast.
|
||||
|
||||
* **Effects** has _Despeckle, Emboss, Reduce Noise, Add Noise, Sharpen, Blur, Threshold, Edge Detect, Spread, Shade, Raise_ , and _Segment_ . These are fairly standard image editing effects.
|
||||
|
||||
* **F/X** options are _Solarize, Sepia Tone, Swirl, Implode, Vignette, Wave, Oil Paint_ , and _Charcoal Draw_ , also very common effects in image editors.
|
||||
|
||||
* **Image Edit** contains _Annotate, Draw, Color, Matte, Composite, Add Border, Add Frame, Comment, Launch_, and _Region of Interest_. _Launch_ will open the current image in GIMP (in my Fedora at least). _Region of Interest_ allows you to select an area to apply editing; press Esc to deselect the region.
|
||||
|
||||
* **Miscellany** offers _Image Info, Zoom Image, Show Preview, Show Histogram, Show Matte, Background, Slide Show_ , and _Preferences_ . _Show Preview_ seems interesting, but I struggled to get it to work.
|
||||
|
||||
* **Help** shows _Overview, Browse Documentation_ , and _About Display_ . _Overview_ gives a lot of basic information about display and includes a large number of built-in keyboard equivalents for various commands and operations. In my Fedora, _Browse Documentation_ took me nowhere.
|
||||
|
||||
Although **display**'s GUI interface provides a reasonably competent image editor, ImageMagick also provides 89 command-line options, many of which correspond to the menu items above. For example, if I'm displaying a directory of digital images that are larger than my screen size, rather than resizing them individually after they appear on my screen, I can specify:
|
||||
|
||||
```
|
||||
display -resize 50% *.JPG
|
||||
```
|
||||
|
||||
Many of the operations in the menus above can also be done by adding an option in the command line. But there are others that aren't available from the menu, including **‑monochrome**, which converts the image to black and white (not grayscale), and **‑colors**, where you can specify how many colors to use in the image. For example, try these out:
|
||||
|
||||
```
|
||||
display -resize 50% -monochrome *.JPG
|
||||
```
|
||||
|
||||
```
|
||||
display -resize 50% -colors 8 *.JPG
|
||||
```
|
||||
|
||||
These operations create interesting images. Try enhancing colors or making other edits after reducing colors. Remember, unless you save and overwrite them, the original files remain unchanged.
|
||||
|
||||
### The convert command
|
||||
|
||||
The **convert** command has 237 options—yes 237—that provide a wide range of things you can do (some of which display can also do). I'll only cover a few of them, mostly sticking with image manipulation. Two simple things you can do with **convert** would be:
|
||||
|
||||
```
|
||||
convert DSC_0001.JPG dsc0001.png
|
||||
```
|
||||
|
||||
```
|
||||
mogrify -format png *.bmp
|
||||
```
|
||||
|
||||
The first command would convert a single file (DSC_0001) from JPG to PNG format without changing the original. The second uses ImageMagick's **mogrify** tool to do this operation on all the BMP images in a directory (a plain `convert *.bmp *.png` would not do what you might expect, because the shell expands both wildcards into input filenames).
|
||||
|
||||
If you want to see the formats ImageMagick can work with, type:
|
||||
|
||||
```
|
||||
identify -list format
|
||||
```
|
||||
|
||||
Let's pick through a few interesting ways we can use the **convert** command to manipulate images. Here is the general format for this command:
|
||||
|
||||
```
|
||||
convert inputfilename [options] outputfilename
|
||||
```
|
||||
|
||||
You can have multiple options, and they are done in the order they are arranged, from left to right.
|
||||
|
||||
Here are a couple of simple options:
|
||||
|
||||
```
|
||||
convert monochrome_source.jpg -monochrome monochrome_example.jpg
|
||||
```
|
||||
|
||||
### [monochrome_demo.jpg][2]
|
||||
|
||||

|
||||
|
||||
```
|
||||
convert DSC_0008.jpg -charcoal 1.2 charcoal_example.jpg
|
||||
```
|
||||
|
||||
### [charcoal_demo.jpg][3]
|
||||
|
||||

|
||||
|
||||
The **‑monochrome** option has no associated setting, but the **‑charcoal** option needs an associated factor. In my experience, it needs to be a small number (even less than 1) to achieve something that resembles a charcoal drawing; otherwise you get pretty heavy blobs of black. Even so, the sharp edges in an image are quite distinct, unlike in a charcoal drawing.
|
||||
|
||||
Now let's look at these:
|
||||
|
||||
```
|
||||
convert DSC_0032.JPG -edge 3 edge_demo.jpg
|
||||
```
|
||||
|
||||
```
|
||||
convert DSC_0032.JPG -colors 4 reduced4_demo.jpg
|
||||
```
|
||||
|
||||
```
|
||||
convert DSC_0032.JPG -colors 4 -edge 3 reduced+edge_demo.jpg
|
||||
```
|
||||
|
||||
### [reduced_demo.jpg][4]
|
||||
|
||||

|
||||
|
||||
The original image is in the upper left. In the first command, I applied an **‑edge** option with a setting of 3 (see the upper-right image)—anything less than that was too subtle for my liking. In the second command (the lower-left image), we have reduced the number of colors to four, which doesn't look much different from the original. But look what happens when we combine these two in the third command (lower-right image)! Perhaps it's a bit garish, but who would have expected this result from the original image or either option on its own?
|
||||
|
||||
The **‑canny** option provided another surprise. This is another kind of edge detector, called a "multi-stage algorithm." Using **‑canny** alone produces a mostly black image and some white lines. I followed that with a **‑negate** option:
|
||||
|
||||
```
|
||||
convert DSC_0049.jpg -canny 0x1 -negate canny_egret.jpg
|
||||
convert DSC_0023.jpg -canny 0x1 -negate canny_ship.jpg
|
||||
```
|
||||
|
||||
### [canny_demos.jpg][5]
|
||||
|
||||

|
||||
|
||||
It's a bit minimalist, but I think it resembles a pen-and-ink drawing, a rather remarkable difference from the original photos. It doesn't work well with all images; generally, it works best with images with sharp lines. Elements that are out of focus are likely to disappear; notice how the background sandbar in the egret picture doesn't show up because it is blurred. Also notice in the ship picture, while most edges show up very well, without colors we lose the gestalt of the picture, so perhaps this could be the basis for some digital coloration or even coloring after printing.
|
||||
|
||||
### The montage command
|
||||
|
||||
Finally, I want to talk about the **montage** command. I've already shown examples of it above, where I have combined single images into composites.
|
||||
|
||||
Here's how I generated the charcoal example (note that it would all be on one line):
|
||||
|
||||
```
|
||||
montage -label %f DSC_0008.jpg charcoal_example.jpg -geometry +10+10
|
||||
-resize 25% -shadow -title 'charcoal demo' charcoal_demo.jpg
|
||||
```
|
||||
|
||||
The **-label** option labels each image with its filename (**%f**) underneath. Without the **‑geometry** option, all the images would be thumbnail size (120 pixels wide), and **+10+10** manages the border size. Next, I resized the entire final composite (**‑resize 25%**) and added a shadow (with no settings, so it's the default), and finally created a **title** for the montage.
|
||||
|
||||
You can place all the image names at the end, with the last image name being the file where the montage is saved. It can be useful to create an alias for the command and all its options; then I can simply type the alias followed by the appropriate filenames. I've done this on occasion to reduce the typing needed to create a series of montages.
|
||||
|
||||
In the **‑canny** examples, I had four images in the montage. I added the **‑tile** option, specifically **‑tile 2x**, which created a montage of two columns. I could have specified a **matrix**, **‑tile 2x2**, or **‑tile x2** to produce the same result.
|
||||
|
||||
There is a lot more to learn about ImageMagick, so I plan to write more about it, maybe even about using [Perl][10] to script ImageMagick commands. ImageMagick has extensive [documentation][11], although the site is short on examples or showing results, and I think the best way to learn is by experimenting and changing various settings and options.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Greg Pittman - Greg is a retired neurologist in Louisville, Kentucky, with a long-standing interest in computers and programming, beginning with Fortran IV in the 1960s. When Linux and open source software came along, it kindled a commitment to learning more, and eventually contributing. He is a member of the Scribus Team.
|
||||
|
||||
---------------------
|
||||
|
||||
via: https://opensource.com/article/17/8/imagemagick
|
||||
|
||||
作者:[Greg Pittman ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/greg-p
|
||||
[1]:https://opensource.com/file/367401
|
||||
[2]:https://opensource.com/file/367391
|
||||
[3]:https://opensource.com/file/367396
|
||||
[4]:https://opensource.com/file/367381
|
||||
[5]:https://opensource.com/file/367406
|
||||
[6]:https://opensource.com/article/17/8/imagemagick?rate=W2W3j4nu4L14gOClu1RhT7GOMDS31pUdyw-dsgFNqYI
|
||||
[7]:https://opensource.com/user/30666/feed
|
||||
[8]:https://opensource.com/article/17/7/4-lightweight-image-viewers-linux-desktop
|
||||
[9]:https://www.imagemagick.org/script/index.php
|
||||
[10]:https://opensource.com/sitewide-search?search_api_views_fulltext=perl
|
||||
[11]:https://imagemagick.org/script/index.php
|
||||
[12]:https://opensource.com/users/greg-p
|
||||
[13]:https://opensource.com/users/greg-p
|
||||
[14]:https://opensource.com/article/17/8/imagemagick#comments
|
@ -1,188 +0,0 @@
|
||||
Running WordPress in a Kubernetes Cluster
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
As a developer I try to keep my eye on the progression of technologies that I might not use every day, but are important to understand as they might indirectly affect my work. For example the recent rise of containerization, [popularized by Docker][8], used for hosting web apps at scale. I’m not technically a devops person but as I build web apps on a daily basis it’s good for me to keep my eye on how these technologies are progressing.
|
||||
|
||||
A good example of this progression is the rapid development of container orchestration platforms that allow you to easily deploy, scale and manage containerized applications. The main players at the moment seem to be [Kubernetes (by Google)][9], [Docker Swarm][10] and [Apache Mesos][11]. If you want a good intro to each of these technologies and their differences I recommend giving [this article][12] a read.
|
||||
|
||||
In this article, we’re going to start simple and take a look at the Kubernetes platform and how you can set up a WordPress site on a single node cluster on your local machine.
|
||||
|
||||
### Installing Kubernetes
|
||||
|
||||
The [Kubernetes docs][13] have a great interactive tutorial that covers a lot of this stuff but for the purpose of this article I’m just going to cover installation and usage on macOS.
|
||||
|
||||
The first thing we need to do is install Kubernetes on your local machine. We’re going to use a tool called [Minikube][14] which is specifically designed to make it easy to set up a Kubernetes cluster on your local machine for testing.
|
||||
|
||||
As per the Minikube docs, there are a few prerequisites before we get going. Make sure you have a hypervisor installed (I’m going to use VirtualBox). Next we need to [install the Kubernetes command-line tool][15] (known as `kubectl`). If you use Homebrew this is as simple as running:
|
||||
|
||||
```
|
||||
$ brew install kubectl
|
||||
|
||||
```
|
||||
|
||||
Now we can actually [install Minikube][16]:
|
||||
|
||||
```
|
||||
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
|
||||
|
||||
```
|
||||
|
||||
Finally we want to [start Minikube][17] which will create a virtual machine which will act as our single-node Kubernetes cluster. At this point I should state that, although we’re running things locally in this article, most of the following concepts will apply when running a full Kubernetes cluster on [real servers][18]. On a multi-node cluster a “master” node would be responsible for managing the other worker nodes (VM’s or physical servers) and Kubernetes would automate the distribution and scheduling of application containers across the cluster.
|
||||
|
||||
```
|
||||
$ minikube start --vm-driver=virtualbox
|
||||
|
||||
```
|
||||
|
||||
### Installing Helm
|
||||
|
||||
At this point we should now have a (single node) Kubernetes cluster running on our local machine. We can now interact with Kubernetes in any way we want. I found [kubernetesbyexample.com][19] to be a good introduction to Kubernetes concepts and terms if you want to start playing around.
|
||||
|
||||
While we could set things up manually, we’re actually going to use a separate tool to install our WordPress application to our Kubernetes cluster. [Helm][20] is labelled as a “package manager for Kubernetes” and works by allowing you to easily deploy pre-built software packages to your cluster, known as “Charts”. You can think of a Chart as a group of container definitions and configs that are designed for a specific application (such as WordPress). First let’s install Helm on our local machine:
|
||||
|
||||
```
|
||||
$ brew install kubernetes-helm
|
||||
|
||||
```
|
||||
|
||||
Next we need to install Helm on our cluster. Thankfully this is as simple as running:
|
||||
|
||||
```
|
||||
$ helm init
|
||||
|
||||
```
|
||||
|
||||
### Installing WordPress
|
||||
|
||||
Now that Helm is running on our cluster we can install the [WordPress chart][21] by running:
|
||||
|
||||
```
|
||||
$ helm install --namespace wordpress --name wordpress --set serviceType=NodePort stable/wordpress
|
||||
|
||||
```
|
||||
|
||||
This will install and run WordPress in a container and MariaDB in a container for the database. This is known as a "Pod" in Kubernetes. A [Pod][22] is basically an abstraction that represents a group of one or more application containers and some shared resources for those containers (e.g. storage volumes, networking etc.).
|
||||
|
||||
We give the release a namespace and a name to keep things organized and make them easy to find. We also set the `serviceType` to `NodePort`. This is important because, by default, the service type will be set to `LoadBalancer` and, as we currently don’t have a load balancer for our cluster, we wouldn’t be able to access our WordPress site from outside the cluster.
|
||||
|
||||
In the last part of the output from this command you will notice some helpful instructions on how to access your WordPress site. Run these commands to get the external IP address and port for our WordPress site:
|
||||
|
||||
```
|
||||
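# read the NodePort assigned to the wordpress service and the node's IP address, then print the URL to visit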
$ export NODE_PORT=$(kubectl get --namespace wordpress -o jsonpath="{.spec.ports[0].nodePort}" services wordpress-wordpress)
|
||||
$ export NODE_IP=$(kubectl get nodes --namespace wordpress -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
$ echo http://$NODE_IP:$NODE_PORT/admin
|
||||
|
||||
```
|
||||
|
||||
You should now be able to visit the resulting URL (ignoring the `/admin` bit) and see WordPress running on your very own Kubernetes cluster!
|
||||
|
||||
### Scaling WordPress
|
||||
|
||||
One of the great things about container orchestration platforms such as Kubernetes is that it makes scaling and managing your application really simple. Let’s check the status of our deployments:
|
||||
|
||||
```
|
||||
$ kubectl get deployments --namespace=wordpress
|
||||
|
||||
```
|
||||
|
||||
[][23]
|
||||
|
||||
We should see that we have 2 deployments, one for the Mariadb database and one for WordPress itself. Now let’s say your WordPress site is starting to see a lot of traffic and we want to split the load over multiple instances. We can scale our `wordpress-wordpress` deployment by running a simple command:
|
||||
|
||||
```
|
||||
$ kubectl scale --replicas 2 deployments wordpress-wordpress --namespace=wordpress
|
||||
|
||||
```
|
||||
|
||||
If we run the `kubectl get deployments` command again we should now see something like this:
|
||||
|
||||
[][24]
|
||||
|
||||
You’ve just scaled up your WordPress site! Easy peasy, right? There are now multiple WordPress containers that traffic can be load-balanced across. For more info on Kubernetes scaling check out [this tutorial][25].
|
||||
|
||||
### High Availability
|
||||
|
||||
Another great feature of platforms such as Kubernetes is the ability to not only scale easily, but to provide high availability by implementing self-healing components. Say one of your WordPress deployments fails for some reason. Kubernetes will automatically replace the deployment instantly. We can simulate this by deleting one of the pods running in our WordPress deployment.
|
||||
|
||||
First get a list of pods by running:
|
||||
|
||||
```
|
||||
$ kubectl get pods --namespace=wordpress
|
||||
|
||||
```
|
||||
|
||||
[][26]
|
||||
|
||||
Then delete one of the pods:
|
||||
|
||||
```
|
||||
$ kubectl delete pod {POD-ID} --namespace=wordpress
|
||||
|
||||
```
|
||||
|
||||
If you run the `kubectl get pods` command again you should see Kubernetes spinning up the replacement pod straight away.
|
||||
|
||||
[][27]
|
||||
|
||||
### Going Further
|
||||
|
||||
We’ve only really scratched the surface of what Kubernetes can do. If you want to delve a bit deeper, I would recommend having a look at some of the following features:
|
||||
|
||||
* [Horizontal scaling][2]
|
||||
|
||||
* [Self healing][3]
|
||||
|
||||
* [Automated rollouts and rollbacks][4]
|
||||
|
||||
* [Secret management][5]
|
||||
|
||||
Have you ever run WordPress on a container platform? Have you ever used Kubernetes (or another container orchestration platform) and got any good tips? How do you normally scale your WordPress sites? Let us know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Gilbert loves to build software. From jQuery scripts to WordPress plugins to full blown SaaS apps, Gilbert has been creating elegant software his whole career. Probably most famous for creating the Nivo Slider.
|
||||
|
||||
|
||||
--------
|
||||
|
||||
|
||||
via: https://deliciousbrains.com/running-wordpress-kubernetes-cluster/
|
||||
|
||||
作者:[ Gilbert Pellegrom][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://deliciousbrains.com/author/gilbert-pellegrom/
|
||||
[1]:https://deliciousbrains.com/author/gilbert-pellegrom/
|
||||
[2]:https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
|
||||
[3]:https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller
|
||||
[4]:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#what-is-a-deployment
|
||||
[5]:https://kubernetes.io/docs/concepts/configuration/secret/
|
||||
[6]:https://deliciousbrains.com/running-wordpress-kubernetes-cluster/
|
||||
[7]:https://deliciousbrains.com/running-wordpress-kubernetes-cluster/
|
||||
[8]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
|
||||
[9]:https://kubernetes.io/
|
||||
[10]:https://docs.docker.com/engine/swarm/
|
||||
[11]:http://mesos.apache.org/
|
||||
[12]:https://mesosphere.com/blog/docker-vs-kubernetes-vs-apache-mesos/
|
||||
[13]:https://kubernetes.io/docs/tutorials/kubernetes-basics/
|
||||
[14]:https://kubernetes.io/docs/getting-started-guides/minikube/
|
||||
[15]:https://kubernetes.io/docs/tasks/tools/install-kubectl/
|
||||
[16]:https://github.com/kubernetes/minikube/releases
|
||||
[17]:https://kubernetes.io/docs/getting-started-guides/minikube/#quickstart
|
||||
[18]:https://kubernetes.io/docs/tutorials/kubernetes-basics/
|
||||
[19]:http://kubernetesbyexample.com/
|
||||
[20]:https://docs.helm.sh/
|
||||
[21]:https://kubeapps.com/charts/stable/wordpress
|
||||
[22]:https://kubernetes.io/docs/tutorials/kubernetes-basics/explore-intro/
|
||||
[23]:https://cdn.deliciousbrains.com/content/uploads/2017/08/07120711/image4.png
|
||||
[24]:https://cdn.deliciousbrains.com/content/uploads/2017/08/07120710/image2.png
|
||||
[25]:https://kubernetes.io/docs/tutorials/kubernetes-basics/scale-intro/
|
||||
[26]:https://cdn.deliciousbrains.com/content/uploads/2017/08/07120711/image3.png
|
||||
[27]:https://cdn.deliciousbrains.com/content/uploads/2017/08/07120709/image1.png
|
@ -1,62 +0,0 @@
|
||||
An economically efficient model for open source software license compliance
|
||||
============================================================
|
||||
|
||||
### Using open source the way it was intended benefits your bottom line and the open source ecosystem.
|
||||
|
||||
|
||||

|
||||
Image by : opensource.com
|
||||
|
||||
"The Compliance Industrial Complex" is a term that evokes dystopian imagery of organizations engaging in elaborate and highly expensive processes to comply with open source license terms. As life often imitates art, many organizations engage in this practice, sadly robbing them of the many benefits of the open source model. This article presents an economically efficient approach to open source software license compliance.
|
||||
|
||||
Open source licenses generally impose three requirements on a distributor of code licensed from a third party:
|
||||
|
||||
1. Provide a copy of the open source license(s)
|
||||
|
||||
2. Include copyright notices
|
||||
|
||||
3. For copyleft licenses (like GPL), make the corresponding source code available to the distributees
|
||||
|
||||
_(As with any general statement, there may be exceptions, so it is always advised to review license terms and, if necessary, seek the advice of an attorney.)_
|
||||
|
||||
Because the source code (and any associated files, e.g. license/README) generally contains all of this information, the easiest way to comply is to simply provide the source code along with your binary/executable application.
|
||||
|
||||
The alternative is more difficult and expensive, because, in most situations, you are still required to provide a copy of the open source licenses and retain copyright notices. Extracting this information to accompany your binary/executable release is not trivial. You need processes, systems, and people to copy this information out of the sources and associated files and insert them into a separate text file or document.
|
||||
|
||||
The amount of time and expense to create this file is not to be underestimated. Although there are software tools that may be used to partially automate the process, these tools often require resources (e.g., engineers, quality managers, release managers) to prepare code for scan and to review the results for accuracy (no tool is perfect and review is almost always required). Your organization has finite resources, and diverting them to this activity leads to opportunity costs. Compounding this expense, each subsequent release—major or minor—will require a new analysis and revision.
|
||||
|
||||
There are also other costs resulting from not choosing to release sources that are not well recognized. These stem from not releasing source code back to the original authors and/or maintainers of the open source project, an activity known as upstreaming. Upstreaming alone seldom meets the requirements of most open source licenses, which is why this article advocates releasing sources along with your binary/executable; however, both upstreaming and providing the source code along with your binary/executable affords additional economic benefits. This is because your organization will no longer be required to keep a private fork of your code changes that must be internally merged with the open source bits upon every release—an increasingly costly and messy endeavor as your internal code base diverges from the community project. Upstreaming also enhances the open source ecosystem, which encourages further innovations from the community from which your organization may benefit.
|
||||
|
||||
So why do a significant number of organizations not release source code for their products to simplify their compliance efforts? In many cases, this is because they are under the belief that it may reveal information that gives them a competitive edge. This belief may be misplaced in many situations, considering that substantial amounts of code in these proprietary products are likely direct copies of open source code to enable functions such as WiFi or cloud services, foundational features of most contemporary products.
|
||||
|
||||
Even if changes are made to these open source works to adapt them for proprietary offerings, such changes are often de minimis and contain little new copyright expression or patentable content. As such, any organization should look at its code through this lens, as it may discover that an overwhelming percentage of its code base is open source, with only a small percentage truly proprietary and enabling differentiation from its competitors. So why then not distribute and upstream the source to those non-differentiating bits?
|
||||
|
||||
Consider rejecting the Compliance Industrial Complex mindset to lower your cost and drastically simplify compliance. Use open source the way it was intended and experience the joy of releasing your source code to benefit your bottom line and the open source ecosystem from which you will continue to reap increasing benefits.
|
||||
|
||||
------------------------
|
||||
|
||||
作者简介
|
||||
|
||||
Jeffrey Robert Kaufman - Jeffrey R. Kaufman is an Open Source IP Attorney for Red Hat, Inc., the world’s leading provider of open source software solutions. Jeffrey also serves as an adjunct professor at the Thomas Jefferson School of Law. Previous to Red Hat, Jeffrey served as Patent Counsel at Qualcomm Incorporated providing open source counsel to the Office of the Chief Scientist. Jeffrey holds multiple patents in RFID, barcoding, image processing, and printing technologies.[More about me][2]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/9/economically-efficient-model
|
||||
|
||||
作者:[ Jeffrey Robert Kaufman ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jkaufman
|
||||
[1]:https://opensource.com/article/17/9/economically-efficient-model?rate=0SO3DeFAxtgLdmZxE2ZZQyTRTTbu2OOlksFZSUXmjJk
|
||||
[2]:https://opensource.com/users/jkaufman
|
||||
[3]:https://opensource.com/user/74461/feed
|
||||
[4]:https://opensource.com/users/jkaufman
|
||||
[5]:https://opensource.com/users/jkaufman
|
||||
[6]:https://opensource.com/users/jkaufman
|
||||
[7]:https://opensource.com/tags/law
|
||||
[8]:https://opensource.com/tags/licensing
|
||||
[9]:https://opensource.com/participate
|
@ -0,0 +1,88 @@
|
||||
Translating by Penney94
|
||||
[GIVE AWAY YOUR CODE, BUT NEVER YOUR TIME][23]
|
||||
============================================================
|
||||
|
||||
As software developers, I think we can agree that open-source code has [transformed the world][9]. Its public nature tears down the walls that prevent some pieces of software from becoming the best they can be. The problem is that too many valuable projects stagnate, with burned-out leaders:
|
||||
|
||||
> “I do not have the time or energy to invest in open source any more. I am not being paid at all to do any open source work, and so the work that I do there is time that I could be spending doing ‘life stuff’, or writing…It’s for this reason that I've decided to end all my engagements with open source effective today.”
|
||||
>
|
||||
> —[Ryan Bigg, former maintainer of several Ruby and Elixir projects][1]
|
||||
>
|
||||
> “It’s also been a massive opportunity cost because of all the things I haven’t learned or done in the meantime because FubuMVC takes up so much of my time and that’s the main reason that it has to stop now.”
|
||||
>
|
||||
> —[Jeremy Miller, former project lead of FubuMVC][2]
|
||||
>
|
||||
> “When we decide to start having kids, I will probably quit open source for good…I anticipate that ultimately this will be the solution to my problem: the nuclear option.”
|
||||
>
|
||||
> —[Nolan Lawson, one of the maintainers of PouchDB][3]
|
||||
|
||||
What we need is a new industry norm, that project leaders will _always_ be compensated for their time. We also need to bury the idea that any developer who submits an issue or pull request is automatically entitled to the attention of a maintainer.
|
||||
|
||||
Let’s first review how an open-source code base works in the market. It is a building block. It is [utility software][10], a cost that must be incurred by a business to make profit elsewhere. The community around the software grows if users can both understand the purpose of the code and see that it is a better value than the alternatives (closed-source off-the-shelf, custom in-house solution, etc.). It can be better, cheaper, or both.
|
||||
|
||||
If an organization needs to improve the code, they are free to hire any developer they want. It’s usually [in their interest][11] to contribute the improvement back to the community because, due to the complexity of merging, that’s the only way they can easily receive future improvements from other users. This “gravity” tends to hold communities together.
|
||||
|
||||
But it also burdens project maintainers since they must respond to these incoming improvements. And what do they get in return? At best, a community contribution may be something they can use in the future but not right now. At worst, it is nothing more than a selfish request wearing the mask of altruism.
|
||||
|
||||
One class of open-source projects has avoided this trap. What do Linux, MySQL, Android, Chromium, and .NET Core have in common, besides being famous? They are all _strategically important_ to one or more big-business interests because they complement those interests. [Smart companies commoditize their complements][12] and there’s no commodity cheaper than open-source software. Red Hat needs companies using Linux in order to sell Enterprise Linux, Oracle uses MySQL as a gateway drug that leads to MySQL Enterprise, Google wants everyone in the world to have a phone and web browser, and Microsoft is trying to hook developers on a platform and then pull them into the Azure cloud. These projects are all directly funded by the respective companies.
|
||||
|
||||
But what about the rest of the projects out there, that aren’t at the center of a big player’s strategy?
|
||||
|
||||
If you’re the leader of one of these projects, charge an annual fee for community membership. _Open source, closed community._ The message to users should be “do whatever you want with the code, but _pay us for our time_ if you want to influence the project’s future.” Lock non-paying users out of the forum and issue tracker, and ignore their emails. People who don’t pay should feel like they are missing out on the party.
|
||||
|
||||
Also charge contributors for the time it takes to merge nontrivial pull requests. If a particular submission will not immediately benefit you, charge full price for your time. Be disciplined and [remember YAGNI][13].
|
||||
|
||||
Will this lead to a drastically smaller community, and more forks? Absolutely. But if you persevere in building out your vision, and it delivers value to anyone else, they will pay as soon as they have a contribution to make. _Your willingness to merge contributions is [the scarce resource][4]._ Without it, users must repeatedly reconcile their changes with every new version you release.
|
||||
|
||||
Restricting the community is especially important if you want to maintain a high level of [conceptual integrity][14] in the code base. Headless projects with [liberal contribution policies][15] have less of a need to charge.
|
||||
|
||||
To implement larger pieces of your vision that do not justify their cost for your business alone, but may benefit others, [crowdfund][16]. There are many success stories:
|
||||
|
||||
> [Font Awesome 5][5]
|
||||
>
|
||||
> [Ruby enVironment Management (RVM)][6]
|
||||
>
|
||||
> [Django REST framework 3][7]
|
||||
|
||||
[Crowdfunding has limitations][17]. It [doesn’t work][18] for [huge projects][19]. But again, open-source code is utility software, which doesn’t need ambitious, risky game-changers. It has already [permeated every industry][20] with only incremental updates.
|
||||
|
||||
These ideas represent a sustainable path forward, and they could also fix the [diversity problem in open source][21], which may be rooted in its historically-unpaid nature. But above all, let’s remember that we only have [so many keystrokes left in our lives][22], and that we will someday regret the ones we waste.
|
||||
|
||||
_When I say “open source”, I mean code [licensed][8] in a way that it can be used to build proprietary things. This usually means a permissive license (MIT or Apache or BSD), but not always. Linux is the core of today’s tech industry, yet it is licensed under the GPL._
|
||||
|
||||
Thanks to Jason Haley, Don McNamara, Bryan Hogan, and Nadia Eghbal for reading drafts of this.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://wgross.net/essays/give-away-your-code-but-never-your-time
|
||||
|
||||
作者:[William Gross][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://wgross.net/#about-section
|
||||
[1]:http://ryanbigg.com/2015/11/open-source-work
|
||||
[2]:https://jeremydmiller.com/2014/04/03/im-throwing-in-the-towel-in-fubumvc/
|
||||
[3]:https://nolanlawson.com/2017/03/05/what-it-feels-like-to-be-an-open-source-maintainer/
|
||||
[4]:https://hbr.org/2010/11/column-to-win-create-whats-scarce
|
||||
[5]:https://www.kickstarter.com/projects/232193852/font-awesome-5
|
||||
[6]:https://www.bountysource.com/teams/rvm/fundraiser
|
||||
[7]:https://www.kickstarter.com/projects/tomchristie/django-rest-framework-3
|
||||
[8]:https://choosealicense.com/
|
||||
[9]:https://www.wired.com/insights/2013/07/in-a-world-without-open-source/
|
||||
[10]:https://martinfowler.com/bliki/UtilityVsStrategicDichotomy.html
|
||||
[11]:https://tessel.io/blog/67472869771/monetizing-open-source
|
||||
[12]:https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
|
||||
[13]:https://martinfowler.com/bliki/Yagni.html
|
||||
[14]:http://wiki.c2.com/?ConceptualIntegrity
|
||||
[15]:https://opensource.com/life/16/5/growing-contributor-base-modern-open-source
|
||||
[16]:https://poststatus.com/kickstarter-open-source-project/
|
||||
[17]:http://blog.felixbreuer.net/2013/04/24/crowdfunding-for-open-source.html
|
||||
[18]:https://www.indiegogo.com/projects/geary-a-beautiful-modern-open-source-email-client#/
|
||||
[19]:http://www.itworld.com/article/2708360/open-source-tools/canonical-misses-smartphone-crowdfunding-goal-by--19-million.html
|
||||
[20]:http://www.infoworld.com/article/2914643/open-source-software/rise-and-rise-of-open-source.html
|
||||
[21]:http://readwrite.com/2013/12/11/open-source-diversity/
|
||||
[22]:http://keysleft.com/
|
||||
[23]:http://wgross.net/essays/give-away-your-code-but-never-your-time
|
467
sources/tech/20170908 Betting on the Web.md
Normal file
467
sources/tech/20170908 Betting on the Web.md
Normal file
@ -0,0 +1,467 @@
|
||||
[Betting on the Web][27]
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
_Note: I just spoke at [Coldfront 2017][12] about why I’m such a big proponent of the Web. What follows is essentially that talk as a blog post (I’ll add a link to the video once it is published)._
|
||||
|
||||
_Also: the Starbucks PWA mentioned in the talk has shipped! 🎉_
|
||||
|
||||
I’m _not_ going to tell you what to do. Instead, I’m going to explain why I’ve chosen to bet my whole career on this crazy Web thing. "Betting" sounds a bit haphazard; it’s more calculated than that. It would probably be better described as "investing."
|
||||
|
||||
Investing what? Our time and attention.
|
||||
|
||||
Many of us only have maybe 6 or so _really_ productive hours per day when we’re capable of being super focused and doing our absolute best work. So how we choose to invest that very limited time is kind of a big deal. Even though I really enjoy programming, I rarely do it aimlessly just for the pure joy of it. Ultimately, I’m investing that productive time expecting to get _some kind of return_ even if it’s just mastering something or solving a difficult problem.
|
||||
|
||||
[### "So what, what’s your point?"][28]
|
||||
|
||||
> > More than most of us realize we are _constantly_ investing
|
||||
|
||||
Sure, someone may be paying for our time directly, but there’s more to it than just trading hours for money. In the long run, what we choose to invest our professional efforts into has other effects:
|
||||
|
||||
**1\. Building Expertise:** We learn as we work and gain valuable experience in the technologies and platforms we’re investing in. That expertise impacts our future earning potential and what types of products we’re capable of building.
|
||||
|
||||
**2\. Building Equity:** Hopefully we’re generating equity and adding value to whatever product we’re building.
|
||||
|
||||
**3\. Shaping tomorrow’s job market:** We’re building tomorrow’s legacy code today™. Today’s new hotness is tomorrow’s maintenance burden. In many cases the people that initially build a product or service are not the ones that ultimately maintain it. This means the technology choices we make when building a new product or service determine whether or not there will be jobs later that require expertise in that particular platform/technology. So, those tech choices _literally shape tomorrow’s job market!_
|
||||
|
||||
**4\. Body of knowledge:** As developers, we’re pretty good at sharing what we learn. We blog, we "Stack-Overflow", etc. These things all contribute to the corpus of knowledge available about that given platform which adds significant value by making it easier/faster for others to build things using these tools.
|
||||
|
||||
**5\. Open Source:** We solve problems and share our work. When lots of developers do this it adds _tremendous value_ to the technologies and platforms these tools are for. The sheer volume of work that we _don’t have to do_ because we can use someone else’s library that already does it is mind-boggling. Millions and millions of hours of development work are available to us for free with a simple `npm install`.
|
||||
|
||||
**6\. Building apps for users on that platform:** Last but not least, without apps there is no platform. By making more software available to end users, we’re contributing significant value to the platforms that run our apps.
|
||||
|
||||
Looking at that list, the last four items are not about _us_ at all. They represent other significant long-term impacts.
|
||||
|
||||
> > We often have a broader impact than we realize
|
||||
|
||||
We’re not just investing time into a job; we’re also shaping the platform, community, and technologies we use.
|
||||
|
||||
We’re going to come back to this, but hopefully, recognizing that greater impact can help us make better investments.
|
||||
|
||||
[### With all investing comes _risk_][29]
|
||||
|
||||
We can’t talk about investing without talking about risk. So what are some of the potential risks?
|
||||
|
||||
[### Are we building for the right platform?][30]
|
||||
|
||||
Platform stability is indeed A Thing™. Just ask a Flash developer, Windows Phone developer, or Blackberry developer. Platforms _can_ go away.
|
||||
|
||||
If we look at those three platforms, what do they have in common? They’re _closed_ platforms. What I mean is there’s a single controlling interest. When you build for them, you’re building for a specific operating system and coding against a particular implementation as opposed to coding against a set of _open standards_. You could argue that, at least to some degree, Flash died because of its "closed-ness". Regardless, one thing is clear from a risk-mitigation perspective: open is better than closed.
|
||||
|
||||
The Web is _incredibly_ open. It would be quite difficult for any one entity to kill it off.
|
||||
|
||||
Now, Windows Phone and Blackberry failed due to a lack of interested users... or was it a lack of interested developers?
|
||||
|
||||

|
||||
|
||||
Maybe if Ballmer ☝️ had just yelled "developers" _one more time_ we’d all have Windows Phones in our pockets right now 😜.
|
||||
|
||||
From a risk mitigation perspective, two things are clear with regard to platform stability:
|
||||
|
||||
1. Having _many users_ is better than having few users
|
||||
|
||||
2. Having _many developers_ building for the platform is better than having few developers
|
||||
|
||||
> > There is no bigger, more popular open platform than the Web
|
||||
|
||||
[### Are we building the right software?][31]
|
||||
|
||||
Many of us are building apps. Well, we used to build "applications" but that wasn’t nearly cool enough. So now we build "apps" instead 😎.
|
||||
|
||||
What does "app" mean to a user? This is important because I think it’s changed a bit over the years. To a user, I would suggest it basically means: "a thing I put on my phone."
|
||||
|
||||
But for our purposes I want to get a bit more specific. I’d propose that an app is really:
|
||||
|
||||
1. An "ad hoc" user interface
|
||||
|
||||
2. That is local(ish) to the device
|
||||
|
||||
The term "ad hoc" is Latin and translates to **"for this"**. This actually matches pretty closely with what Apple’s marketing campaigns have been teaching the masses:
|
||||
|
||||
> There’s an app **for that**
|
||||
>
|
||||
> – Apple
|
||||
|
||||
The point is it helps you _do_ something. The emphasis is on action. I happen to think this is largely the difference between a "site" and an "app". A news site, for example, has articles that are resources in and of themselves. A news app, by contrast, is software that runs on the device and helps you consume news articles.
|
||||
|
||||
Another way to put it would be that a site is more like a book, while an app is a tool.
|
||||
|
||||
[### Should we be building apps at all?!][32]
|
||||
|
||||
Remember when chatbots were supposed to take over the world? Or perhaps we’ll all be walking around with augmented reality glasses and that’s how we’ll interact with the world?
|
||||
|
||||
I’ve heard it said that "the future app is _no_ app" and virtual assistants will take over everything.
|
||||
|
||||

|
||||
|
||||
I’ve had one of these sitting in my living room for a couple of years, but I find it all but useless. It’s just a nice bluetooth speaker that I can yell at to play me music.
|
||||
|
||||
But I find it very interesting that:
|
||||
|
||||
> > Even Alexa has an app!
|
||||
|
||||
Why? Because there’s no screen! As it turns out these "ad hoc visual interfaces" are extremely efficient.
|
||||
|
||||
Sure, I can yell out "Alexa, what’s the weather going to be like today" and I’ll hear a reply with the high and low and whether it’s cloudy, rainy, or sunny. But in that same amount of time, I can pull my phone out, tap the weather app, and before Alexa can finish telling me those 3 pieces of data, I can visually scan the entire week’s worth of data, air quality, sunrise/sunset times, etc. It’s just _so much more_ efficient as a mechanism for consuming this type of data.
|
||||
|
||||
As a result of that natural efficiency, I believe that having a visual interface is going to continue to be useful for all sorts of things for a long time to come.
|
||||
|
||||
That’s _not_ to say virtual assistants aren’t useful! Google Assistant on my Pixel is quite useful in part because it can show me answers and can tolerate vagueness in a way that an app with a fixed set of buttons never could.
|
||||
|
||||
But, as is so often the case with new useful tech, rarely does it completely replace everything that came before it; instead, it augments what we already have.
|
||||
|
||||
[### If apps are so great why are we so "apped out"?][33]
|
||||
|
||||
How do we explain that supposed efficiency when there’s data like this?
|
||||
|
||||
* [65% of smartphone users download zero apps per month][13]
|
||||
|
||||
* [More than 75% of downloaded apps are opened once and never used again][14]
|
||||
|
||||
I think to answer that we have to really look at what isn’t working well.
|
||||
|
||||
[### What sucks about apps?][34]
|
||||
|
||||
1. **Downloading them certainly sucks.** No one wants to open an app store, search for the app they’re trying to find, then wait to download the huge file. These days a 50 MB app is pretty small. Facebook for iOS is 346 MB; Twitter for iOS is 212 MB.
|
||||
|
||||
2. **Updating them sucks.** Every night I plug in my phone, I download a whole slew of app updates that I, as a user, **could not possibly care less about**. In addition, many of these apps are things I installed _once_ and will **never open again, ever!** I’d love to know the global stats on how much bandwidth has been wasted on app updates for apps that were never opened again.
|
||||
|
||||
3. **Managing them sucks.** Sure, when I first got an iPhone ages ago and could first download apps, my home screen was impeccable. Then we got folders!! Wow... what an amazing development! Now I could finally put all those pesky un-removable Apple apps in a folder called "💩" and pretend they didn’t exist. But now, my home screen is a bit of a disaster. Sitting there dragging apps around is not my idea of a good time. So eventually things get all cluttered up again.
|
||||
|
||||
The thing I’ve come to realize is this:
|
||||
|
||||
> > We don’t care how they got there. We only care that they’re _there_ when we need them.
|
||||
|
||||
For example, I love to go mountain biking and I enjoy tracking my rides with an app called Strava. I get all geared up for my ride, get on my bike and then go, "Oh right, gotta start Strava." So I pull out my phone _with my gloves on_ and go: "Ok Google, open Strava".
|
||||
|
||||
I _could not care less_ about where that app was or where it came from when I said that.
|
||||
|
||||
I don’t care if it was already installed, I don’t care if it never existed on my home screen, or if it was generated out of thin air on the spot.
|
||||
|
||||
> > Context is _everything_ !
|
||||
|
||||
If I’m at a parking meter, I want the app _for that_ . If I’m visiting Portland, I want their public transit app.
|
||||
|
||||
But I certainly _do not_ want it as soon as I’ve left.
|
||||
|
||||
If I’m at a conference, I might want a conference app to see the schedule, post questions to speakers, or whatnot. But wow, talk about something that quickly becomes worthless as soon as that conference is over!
|
||||
|
||||
As it turns out the more "ad hoc" these things are, the better! The more _disposable_ and _re-inflatable_ the better!
|
||||
|
||||
Which also reminds me of something I feel like we often forget. We always assume people want our shiny apps, and we measure things like "engagement" and "time spent in the app" when really (there certainly are exceptions to this, such as apps that are essentially entertainment), more often than not...
|
||||
|
||||
> > People don’t want to use your app. They want _to be done_ using your app.
|
||||
|
||||
[### Enter PWAs][35]
|
||||
|
||||
I’ve been contracting with Starbucks for the past 18 months. They’ve taken on the ambitious project of essentially re-building a lot of their web stuff in Node.js and React. One of the things I’ve helped them with (and pushed hard for) was to build a PWA (Progressive Web App) that could provide functionality similar to their native apps. Coincidentally, it was launched today: [https://preview.starbucks.com][18]!
|
||||
|
||||
> My team at [@Starbucks][7] has been building a PWA, and it's now in beta! Check it out at [https://preview.starbucks.com][8] if you're an existing customer!
>
> — [David Brunelle (@davidbrunelle)][6], [7:13 AM - Sep 8, 2017][9]
|
||||
|
||||
This gives us a nice real-world example:
|
||||
|
||||
* Starbucks iOS: 146MB
|
||||
|
||||
* Starbucks PWA: ~600KB
|
||||
|
||||
The point is there’s a _tremendous_ size difference.
|
||||
|
||||
It’s 0.4% of the size. To put it differently, I could download the PWA **243 times** in the same amount of time it would take to download the iOS app. And of course, on iOS the app then still has to install and boot up!
|
||||
|
||||
Personally, I’d have loved it if the app ended up even smaller and there are plans to shrink it further. But even still, they’re _not even on the same planet_ in terms of file-size!
|
||||
|
||||
Market forces are _strongly_ aligned with PWAs here:
|
||||
|
||||
* Few app downloads
|
||||
|
||||
* User acquisition is _hard_
|
||||
|
||||
* User acquisition is _expensive_
|
||||
|
||||
If the goal is to get people to sign up for the rewards program, that type of size difference could very well determine whether or not someone is signed up and using the app experience (via the PWA) by the time they reach the front of the line at Starbucks.
|
||||
|
||||
User acquisition is hard enough already; the more time and barriers that can be removed from that process, the better.
|
||||
|
||||
[### Quick PWA primer][36]
|
||||
|
||||
As mentioned, PWA stands for "Progressive Web Apps" or, as I like to call them: "Web Apps" 😄
|
||||
|
||||
Personally, I’ve been trying to build what a user would define as an "app" with web technology for _years_. But until PWAs came along, as hard as we tried, you couldn’t quite build a _real app_ with just web tech. Honestly, I kinda hate myself for saying that, but in terms of something that a user would understand as an "app", I’m afraid that statement was probably true until very recently.
|
||||
|
||||
So what’s a PWA? As one of its primary contributors put it:
|
||||
|
||||
> It’s just a website that took all the right vitamins.
|
||||
>
|
||||
> – Alex Russell
|
||||
|
||||
It involves a few specific technologies, namely:
|
||||
|
||||
* Service Worker, which enables true reliability on the web. What I mean by that is I can build an app that, as long as you loaded it while you were online, will from then on _always_ open, even if you’re offline. This puts it on equal footing with other apps.
|
||||
|
||||
* HTTPS, which requires encrypted connections.
|
||||
|
||||
* Web App Manifest, a simple JSON file that describes your application: what icons to use if someone adds it to their home screen, what its name is, etc. (A minimal Service Worker sketch follows this list.)
|
||||
|
||||
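To make those pieces a bit more concrete, here is a minimal sketch of the Service Worker part. It is illustrative only: the file names (`/sw.js`, `/index.html`, `/app.js`, `/manifest.json`) and the cache name are assumptions made up for the example, not anything a specific product ships.

```
// sw.ts — an illustrative service worker (file names and cache name are assumptions).
// On install it pre-caches an "app shell"; on fetch it falls back to the cache when
// the network is unavailable, which is what lets a PWA open reliably while offline.
const CACHE = "app-shell-v1";
const SHELL = ["/", "/index.html", "/app.js", "/manifest.json"];

self.addEventListener("install", (event: any) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener("fetch", (event: any) => {
  // Try the network first; if that fails (offline), answer from the cached shell.
  event.respondWith(
    fetch(event.request).catch(() => caches.match(event.request))
  );
});

// In the page itself, registration is a guarded one-liner:
//   if ("serviceWorker" in navigator) navigator.serviceWorker.register("/sw.js");
```

Pair a worker like that with an HTTPS origin and a `manifest.json` describing the name and icons, and you have all three ingredients from the list above.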
There are plenty of other resources about PWAs on the web. The point for my purposes is:
|
||||
|
||||
> > It is now possible to build PWAs that are _indistinguishable_ from their native counterparts
|
||||
|
||||
They can be up and running in a fraction of the time, whether or not they were already "installed", and unlike "apps" they can be saved as an app on the device _at the user’s discretion!_
|
||||
|
||||
Essentially they’re really great for creating "ad hoc" experiences that can be "cold started" on a whim nearly as fast as if it were already installed.
|
||||
|
||||
I’ve said it before and I’ll say it again:
|
||||
|
||||
> PWAs are the biggest thing to happen to the mobile web since the iPhone.
|
||||
>
|
||||
> – Um... that was me
|
||||
|
||||
[### Let’s talk Internet of things][37]
|
||||
|
||||
I happen to think that PWAs + IoT = ✨ MAGIC ✨. As several smart folks have pointed out:
|
||||
|
||||
The one-app-per-device approach to smart devices probably isn’t particularly smart.
|
||||
|
||||
It doesn’t scale well and it completely fails in terms of "ad hoc"-ness. Sure, if I have a Nest thermostat and Philips Hue lightbulbs, it’s reasonable to have two apps installed. But even that sucks as soon as I want someone else to be able to control them. If _I just let you into my house_, trust me... I’m perfectly happy to let you flip a light switch; you’re in my house, after all. But for the vast majority of these things there’s no concept of "nearby apps", and it’s silly for my guest (or a house-sitter) to download an app they don’t actually want, just so I can let them control my lights.
|
||||
|
||||
The whole "nearby apps" thing has so many uses:
|
||||
|
||||
* thermostat
|
||||
|
||||
* lights
|
||||
|
||||
* locks
|
||||
|
||||
* garage doors
|
||||
|
||||
* parking meter
|
||||
|
||||
* setting refrigerator temp
|
||||
|
||||
* conference apps
|
||||
|
||||
Today there are lots of new capabilities being added to the web to enable web apps to interact with physical devices in the real world. Things like WebUSB, WebBluetooth, WebNFC, and efforts like [Physical Web][19]. Even for things like Augmented (and Virtual) reality, the idea of the items we want to interact with having URLs makes so much sense and I can’t imagine a better, more flexible use of those URLs than for them to point to a PWA that lets you interact with that device!
|
||||
|
||||
[### Forward looking statements...][38]
|
||||
|
||||
I’ve been talking about all this in terms of investing. If you’ve ever read any company statement that discusses the future, you always see this line explaining that the things about to be discussed contain "forward-looking statements" that may or may not ultimately happen.
|
||||
|
||||
So, here are _my_ forward looking statements.
|
||||
|
||||
[### 1\. PWA-only startups][39]
|
||||
|
||||
Given the cost (and challenge) of user-acquisition and the quality of app you can build with PWAs these days, I feel like this is inevitable. If you’re trying to get something off the ground, it just isn’t very efficient to spin up _three whole teams_ to build for iOS, Android, and the Web.
|
||||
|
||||
[### 2\. PWAs listed in App Stores][40]
|
||||
|
||||
So, there’s a problem with "web only", which is that for the better part of a decade we’ve been training users to look for apps in the app store for their given platform. So if you’re already a recognized brand, especially if you already have a native app that you’re trying to replace, it simply isn’t smart for you _not to exist_ in the app stores.
|
||||
|
||||
So, some of this isn’t all that "forward looking" as it turns out [Microsoft has already committed to listing PWAs in the Windows Store][20], more than once!
|
||||
|
||||
**They haven’t even finished implementing Service Worker in Edge yet!** But they’re already committing hard to PWAs. In addition to the post linked above, one of their lead Developer Relations folks, Aaron Gustafson, just [wrote an article for A List Apart][21] telling everyone to build PWAs.
|
||||
|
||||
But if you think about it from their perspective, of course they should do that! As I said earlier, they’ve struggled to attract developers to build for their mobile phones. In fact, they’ve at times _paid_ companies to write apps for them simply to make sure apps exist so that users will be able to have the apps they want when using a Windows Phone. Remember how I said developer time is a scarce resource and without apps, the platform is worthless? So _of course_ they should add first-class support for PWAs. If you build a PWA like a lot of folks are doing, then TADA!!! 🎉 You just made a Windows/Windows Phone app!
|
||||
|
||||
I’m of the opinion that the writing is on the wall for Google to do the same thing. It’s pure speculation, but it certainly seems like they are taking steps that suggest they may be planning on listing PWAs too. Namely, the Chrome folks recently shipped a feature referred to as "WebAPKs" for Chrome stable on Android (yep, everyone). In the past I’ve [explained in more detail][22] why I think this is a big deal. But a shorter version would be that before this change, sure, you could save a PWA to your home screen... _but_ in reality it was actually a glorified bookmark. That’s what changes with WebAPKs. Instead, when you add a PWA to your home screen it generates and "side loads" an actual `.apk` file on the fly. This allows that PWA to enjoy some privileges that were simply impossible until the operating system recognized it as "an app." For example:
|
||||
|
||||
* You can now mute push notifications for a specific PWA without muting it for all of Chrome.
|
||||
|
||||
* The PWA is listed in the "app tray" that shows all installed apps (previously it was just the home screen).
|
||||
|
||||
* You can see power usage, and permissions granted to the PWA just like any other app.
|
||||
|
||||
* The app developer can now update the icon for the app by publishing an update to the app manifest. Before, there was no way to update the icon once it had been added.
|
||||
|
||||
* And a slew of other similar benefits...
|
||||
|
||||
If you’ve ever installed an Android app from a source other than the Play Store (or carriers/OEMs store) you know that you have to flip a switch in settings to allow installs from "untrusted sources". So, how then, you might ask, can they generate and install an actual `.apk` file for a PWA without requiring that you change that setting? As it turns out the answer is quite simple: Use a trusted source!
|
||||
|
||||
> > As it turns out WebAPKs are managed through Google Play Services!
|
||||
|
||||
I’m no rocket scientist, but based on their natural business alignment with the web, their promotion of PWAs, and the lengths they’ve gone to in granting PWAs equal status with native apps on the operating system, it only seems natural that they’d eventually _list them in the store_.
|
||||
|
||||
Additionally, if Google did start listing PWAs in the Play Store, both they and Microsoft would be doing it, _leaving Apple sticking out like a sore thumb and looking like the laggard_. Essentially, app developers would be able to target a _massive_ number of users on a range of platforms with a single well-built PWA. But, just like developers grew to despise IE for not keeping up with the times and forcing them to jump through extra hoops to support it, the same thing would happen here. Apple does _not_ want to be the next IE, and I’ve already seen many prominent developers suggesting they already are.
|
||||
|
||||
Which brings us to another forward-looking statement:
|
||||
|
||||
[### 3\. PWAs on iOS][41]
|
||||
|
||||
Just a few weeks ago the Safari folks announced that Service Worker is now [officially under development][23].
|
||||
|
||||
[### 4\. PWAs everywhere][42]
|
||||
|
||||
I really think we’ll start seeing them everywhere:
|
||||
|
||||
* Inside VR/AR/MR experiences
|
||||
|
||||
* Inside chat bots (again, pulling up an ad-hoc interface is so much more efficient).
|
||||
|
||||
* Inside Xbox?!
|
||||
|
||||
As it turns out, if you look at Microsoft’s status page for Edge about Service Worker you see this:
|
||||
|
||||

|
||||
|
||||
I hinted at this already, but I also think PWAs pair very nicely with virtual assistants: being able to pull up a PWA on a whim, without requiring it to already be installed, would add tremendous power to the virtual assistant. Incidentally, this also becomes easier if there’s a known "registered" name for the PWA listed in an app store.
|
||||
|
||||
Some other fun use cases:
|
||||
|
||||
* Apparently the new digital menu displays in McDonald’s Restaurants (at least in the U.S.) are actually a web app built with Polymer ([source][15]). I don’t know if there’s a Service Worker or not, but it would make sense for there to be.
|
||||
|
||||
* Sports scoreboards!? I’m an [independent consultant][16], and someone approached me about potentially using a set of TVs and web apps to build a score-keeping system at an arena. Point is, there are so many cool examples!
|
||||
|
||||
The web really is the universal platform!
|
||||
|
||||
[### For those who think PWAs are just a Google thing][43]
|
||||
|
||||
First off, I’m pretty sure Microsoft, Opera, Firefox, and Samsung folks would want to punch you for that. It [simply isn’t true][24] and increasingly we’re seeing a lot more compatibility efforts between browser vendors.
|
||||
|
||||
For example: check out the [Web Platform Tests][25], which are essentially Continuous Integration for web features, run against new releases of major browsers. Some folks will recall that when Apple first claimed they implemented IndexedDB in Safari, the version they shipped was essentially unusable because it had major shortcomings and bugs.
|
||||
|
||||
Now, with the WPTs, you can drill into these features (to quite some detail) and see whether a given browser passes or fails. No more claiming "we shipped!" but not actually shipping.
|
||||
|
||||
[### What about feature "x" on platform "y" that we need?][44]
|
||||
|
||||
It could well be that you have a need that isn’t yet covered by the web platform. In reality, that list is getting shorter and shorter, also... HAVE YOU ASKED?! Despite what it may feel like, browser vendors eagerly want to know what you’re trying to do that you can’t. If there are missing features, be loud, be nice, but from my experience it’s worth making your desires known.
|
||||
|
||||
Also, it doesn’t take much to wrap a web view and add hooks into the native OS that your JavaScript can call to do things that aren’t _quite_ possible yet.
|
||||
|
||||
But that also brings me to another point, in terms of investing, as the world’s greatest hockey player said:
|
||||
|
||||
> Skate to where the puck is going, not where it has been.
|
||||
>
|
||||
> – Wayne Gretzky
|
||||
|
||||
Based on what I’ve outlined thus far, it could be more risky to build an entire application for a whole other platform that you ultimately may not need than to at least exhaust your options and see what you can do with the Web first.
|
||||
|
||||
So to line ’em up in terms of PWA support:
|
||||
|
||||
* Chrome: yup
|
||||
|
||||
* Firefox: yup
|
||||
|
||||
* Opera: yup
|
||||
|
||||
* Samsung Internet ([the 3rd largest browser surprise!][17]): yup
|
||||
|
||||
* Microsoft: huge public commitment
|
||||
|
||||
* Safari: at least implementing Service Worker
|
||||
|
||||
[### Ask them to add your feature!][45]
|
||||
|
||||
Sure, it may not happen, it may take a long time but _at least_ try. Remember, developers have a lot more influence over platforms than we typically realize. Make. your. voice. heard.
|
||||
|
||||
[### Side note about React-Native/Expo][46]
|
||||
|
||||
These projects are run by awesome people, and the tech is incredibly impressive. If you’re Facebook and you’re trying to consolidate your development efforts, they make sense for the same basic reasons it made sense for Facebook to create its own [VM for running PHP][26]. Facebook has realities to deal with at a scale that most of us will never have to deal with. Personally, I’m not Facebook.
|
||||
|
||||
As a side note, I find it interesting that building native apps and having as many people do that as possible, plays nicely into their advertising competition with Google.
|
||||
|
||||
It just so happens that Google is well positioned to capitalize on people using the Web. Conversely, I’m fairly certain Facebook wouldn’t mind that ad revenue _not_ going to Google. Facebook, it seems, would much rather _be_ your web than be part of the Web.
|
||||
|
||||
Anyway, all that aside, for me it’s also about investing well.
|
||||
|
||||
By building a native app you’re volunteering for a 30% app-store tax. Plus, as we covered earlier, odds are that no one wants to go download your app. Also, though it seems incredibly unlikely, I feel compelled to point out that in terms of "openness" Apple’s App Store is very clearly _anything_ but that. Apple could decide one day that they really don’t like how it’s possible to essentially circumvent their normal update/review process when you use Expo. One day they could just decide to reject all React Native apps. I really don’t think they would because of the uproar it would cause. I’m simply pointing out that it’s _their_ platform and they would have _every_ right to do so.
|
||||
|
||||
[### So is it all about investing for your own gain?][47]
|
||||
|
||||
So far, I’ve presented all this from kind of a cold, heartless investor perspective: getting the most for your time.
|
||||
|
||||
But, that’s not the whole story is it?
|
||||
|
||||
Life isn’t all about me. Life isn’t all about us.
|
||||
|
||||
I want to invest in platforms that increase opportunities **for others**. Personally, I really hope the next friggin’ Mark Zuckerberg isn’t an Ivy League dude. Wouldn’t it be amazing if instead the next huge success was, I don’t know, perhaps a young woman in Nairobi or something? The thing is, if owning an iPhone is a prerequisite for building apps, it _dramatically_ decreases the odds of something like that happening. I feel like the Web really is the closest thing we have to a level playing field.
|
||||
|
||||
**I want to invest in and improve _that_ platform!**
|
||||
|
||||
This quote really struck me and has stayed with me when thinking about these things:
|
||||
|
||||
> If you’re the kind of person who tends to succeed in what you start,
|
||||
>
|
||||
> changing what you start could be _the most extraordinary thing_ you could do.
|
||||
>
|
||||
> – Anand Giridharadas
|
||||
|
||||
Thanks for your valuable attention ❤️. I’ve presented the facts as I see them and I’ve done my best not to "should on you."
|
||||
|
||||
Ultimately though, no matter how prepared we are or how much research we’ve done; investing is always a bit of a gamble.
|
||||
|
||||
So I guess the only thing left to say is:
|
||||
|
||||
> > I’m all in.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://joreteg.com/blog/betting-on-the-web
|
||||
|
||||
作者:[Joreteg][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://joreteg.com/
|
||||
[1]:https://twitter.com/davidbrunelle
|
||||
[2]:https://twitter.com/intent/tweet?in_reply_to=905931990444244995
|
||||
[3]:https://twitter.com/intent/retweet?tweet_id=905931990444244995
|
||||
[4]:https://twitter.com/intent/like?tweet_id=905931990444244995
|
||||
[5]:https://twitter.com/davidbrunelle/status/905931990444244995/photo/1
|
||||
[6]:https://twitter.com/davidbrunelle
|
||||
[7]:https://twitter.com/Starbucks
|
||||
[8]:https://t.co/tEUXM8BLgP
|
||||
[9]:https://twitter.com/davidbrunelle/status/905931990444244995
|
||||
[10]:https://twitter.com/davidbrunelle/status/905931990444244995/photo/1
|
||||
[11]:https://support.twitter.com/articles/20175256
|
||||
[12]:https://2017.coldfront.co/
|
||||
[13]:https://qz.com/253618/most-smartphone-users-download-zero-apps-per-month/
|
||||
[14]:http://fortune.com/2016/05/19/app-economy/
|
||||
[15]:https://twitter.com/AJStacy06/status/857628546507968512
|
||||
[16]:http://consulting.joreteg.com/
|
||||
[17]:https://medium.com/samsung-internet-dev/think-you-know-the-top-web-browsers-458a0a070175
|
||||
[18]:https://preview.starbucks.com/
|
||||
[19]:https://google.github.io/physical-web/
|
||||
[20]:https://blogs.windows.com/msedgedev/2016/07/08/the-progress-of-web-apps/
|
||||
[21]:https://alistapart.com/article/yes-that-web-project-should-be-a-pwa
|
||||
[22]:https://joreteg.com/blog/installing-web-apps-for-real
|
||||
[23]:https://webkit.org/status/#specification-service-workers
|
||||
[24]:https://jakearchibald.github.io/isserviceworkerready/
|
||||
[25]:http://wpt.fyi/
|
||||
[26]:http://hhvm.com/
|
||||
[27]:https://joreteg.com/blog/betting-on-the-web
|
||||
[28]:https://joreteg.com/blog/betting-on-the-web#quotso-what-whats-your-pointquot
|
||||
[29]:https://joreteg.com/blog/betting-on-the-web#with-all-investing-comes
|
||||
[30]:https://joreteg.com/blog/betting-on-the-web#are-we-building-for-the-right-platform
|
||||
[31]:https://joreteg.com/blog/betting-on-the-web#are-we-building-the-right-software
|
||||
[32]:https://joreteg.com/blog/betting-on-the-web#should-we-be-building-apps-at-all
|
||||
[33]:https://joreteg.com/blog/betting-on-the-web#if-apps-are-so-great-why-are-we-so-quotapped-outquot
|
||||
[34]:https://joreteg.com/blog/betting-on-the-web#what-sucks-about-apps
|
||||
[35]:https://joreteg.com/blog/betting-on-the-web#enter-pwas
|
||||
[36]:https://joreteg.com/blog/betting-on-the-web#quick-pwa-primer
|
||||
[37]:https://joreteg.com/blog/betting-on-the-web#lets-talk-internet-of-things
|
||||
[38]:https://joreteg.com/blog/betting-on-the-web#forward-looking-statements
|
||||
[39]:https://joreteg.com/blog/betting-on-the-web#1-pwa-only-startups
|
||||
[40]:https://joreteg.com/blog/betting-on-the-web#2-pwas-listed-in-app-stores
|
||||
[41]:https://joreteg.com/blog/betting-on-the-web#3-pwas-on-ios
|
||||
[42]:https://joreteg.com/blog/betting-on-the-web#4-pwas-everywhere
|
||||
[43]:https://joreteg.com/blog/betting-on-the-web#for-those-who-think-pwas-are-just-a-google-thing
|
||||
[44]:https://joreteg.com/blog/betting-on-the-web#what-about-feature-quotxquot-on-platform-quotyquot-that-we-need
|
||||
[45]:https://joreteg.com/blog/betting-on-the-web#ask-them-add-your-feature
|
||||
[46]:https://joreteg.com/blog/betting-on-the-web#side-note-about-react-nativeexpo
|
||||
[47]:https://joreteg.com/blog/betting-on-the-web#so-is-it-all-about-investing-for-your-own-gain
|
@ -0,0 +1,513 @@
|
||||
What every software engineer should know about search
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
|
||||
### Want to build or improve a search experience? Start here.
|
||||
|
||||
Ask a software engineer: “[How would you add search functionality to your product?][78]” or “[How do I build a search engine?][79]” You’ll probably immediately hear back something like: “Oh, we’d just launch an ElasticSearch cluster. Search is easy these days.”
|
||||
|
||||
But is it? Numerous current products [still][80] [have][81] [suboptimal][82] [search][83] [experiences][84]. Any true search expert will tell you that few engineers have a very deep understanding of how search engines work, knowledge that’s often needed to improve search quality.
|
||||
|
||||
Even though many open source software packages exist, and the research is vast, the knowledge around building solid search experiences is limited to a select few. Ironically, [searching online][85] for search-related expertise doesn’t yield any recent, thoughtful overviews.
|
||||
|
||||
#### Emoji Legend
|
||||
|
||||
```
|
||||
❗ “Serious” gotcha: consequences of ignorance can be deadly
|
||||
🔷 Especially notable idea or piece of technology
|
||||
☁️ ️Cloud/SaaS
|
||||
🍺 Open source / free software
|
||||
🦏 JavaScript
|
||||
🐍 Python
|
||||
☕ Java
|
||||
🇨 C/C++
|
||||
```
|
||||
|
||||
### Why read this?
|
||||
|
||||
Think of this post as a collection of insights and resources that could help you to build search experiences. It can’t be a complete reference, of course, but hopefully we can improve it based on feedback (please comment or reach out!).
|
||||
|
||||
I’ll point at some of the most popular approaches, algorithms, techniques, and tools, based on my work on general purpose and niche search experiences of varying sizes at Google, Airbnb and several startups.
|
||||
|
||||
❗️Not appreciating or understanding the scope and complexity of search problems can lead to bad user experiences, wasted engineering effort, and product failure.
|
||||
|
||||
If you’re impatient or already know a lot of this, you might find it useful to jump ahead to the tools and services sections.
|
||||
|
||||
### Some philosophy
|
||||
|
||||
This is a long read. But most of what we cover has four underlying principles:
|
||||
|
||||
#### 🔷 Search is an inherently messy problem:
|
||||
|
||||
* Queries are highly variable, and the search problems themselves vary widely based on product needs. Think about how different the following are:
|
||||
|
||||
* Facebook search (searching a graph of people).
|
||||
|
||||
* YouTube search (searching individual videos).
|
||||
|
||||
* Or how different both of those are from Kayak ([air travel planning is a really hairy problem][2]).
|
||||
|
||||
* Google Maps (making sense of geo-spatial data).
|
||||
|
||||
* Pinterest (pictures of a brunch you might cook one day).
|
||||
|
||||
#### Quality, metrics, and processes matter a lot:
|
||||
|
||||
* There is no magic bullet (like PageRank) nor a magic ranking formula that makes for a good approach. A good approach is an always-evolving collection of techniques and processes that solve aspects of the problem and improve the overall experience, usually gradually and continuously.
|
||||
|
||||
* ❗️In other words, search is not just about building software that does ranking or retrieval (which we will discuss below) for a specific domain. Search systems are usually an evolving pipeline of components that are tuned and evolve over time and that build up to a cohesive experience.
|
||||
|
||||
* In particular, the key to success in search is building processes for evaluation and tuning into the product and development cycles. A search system architect should think about processes and metrics, not just technologies.
|
||||
|
||||
#### Use existing technologies first:
|
||||
|
||||
* As in most engineering problems, don’t reinvent the wheel yourself. When possible, use existing services or open source tools. If an existing SaaS (such as [Algolia][3] or managed Elasticsearch) fits your constraints and you can afford to pay for it, use it. This solution will likely be the best choice for your product at first, even if down the road you need to customize, enhance, or replace it.
|
||||
|
||||
#### ❗️Even if you buy, know the details:
|
||||
|
||||
* Even if you are using an existing open source or commercial solution, you should have some sense of the complexity of the search problem and where there are likely to be pitfalls.
|
||||
|
||||
### Theory: the search problem
|
||||
|
||||
Search is different for every product, and choices depend on many technical details of the requirements. It helps to identify the key parameters of your search problem:
|
||||
|
||||
1. Size: How big is the corpus (a complete set of documents that need to be searched)? Is it thousands or billions of documents?
|
||||
|
||||
2. Media: Are you searching through text, images, graphical relationships, or geospatial data?
|
||||
|
||||
3. 🔷 Corpus control and quality: Are the sources for the documents under your control, or coming from a (potentially adversarial) third party? Are all the documents ready to be indexed or need to be cleaned up and selected?
|
||||
|
||||
4. Indexing speed: Do you need real-time indexing, or is building indices in batch fine?
|
||||
|
||||
5. Query language: Are the queries structured, or do you need to support unstructured ones?
|
||||
|
||||
6. Query structure: Are your queries textual, images, sounds? Street addresses, record ids, people’s faces?
|
||||
|
||||
7. Context-dependence: Do the results depend on who the user is, what is their history with the product, their geographical location, time of the day etc?
|
||||
|
||||
8. Suggest support: Do you need to support incomplete queries?
|
||||
|
||||
9. Latency: What are the serving latency requirements? 100 milliseconds or 100 seconds?
|
||||
|
||||
10. Access control: Is it entirely public or should users only see a restricted subset of the documents?
|
||||
|
||||
11. Compliance: Are there compliance or organizational limitations?
|
||||
|
||||
12. Internationalization: Do you need to support documents with multilingual character sets or Unicode? (Hint: Always use UTF-8 unless you really know what you’re doing.) Do you need to support a multilingual corpus? Multilingual queries?
|
||||
|
||||
Thinking through these points up front can help you make significant choices designing and building individual search system components.
|
||||
|
||||
** (A Canvas element appears here; please handle it manually.) **
|
||||
|
||||

|
||||
A production indexing pipeline.
|
||||
|
||||
### Theory: the search pipeline
|
||||
|
||||
Now let’s go through a list of search sub-problems. These are usually solved by separate subsystems that form a pipeline. What that means is that a given subsystem consumes the output of previous subsystems, and produces input for the following subsystems.
|
||||
|
||||
This leads to an important property of the ecosystem: once you change how an upstream subsystem works, you need to evaluate the effect of the change and possibly change the behavior downstream.
|
||||
|
||||
Here are the most important problems you need to solve:
|
||||
|
||||
#### Index selection:
|
||||
|
||||
given a set of documents (e.g. the entirety of the Internet, all the Twitter posts, all the pictures on Instagram), select a potentially smaller subset of documents that may be worthy of consideration as search results and only include those in the index, discarding the rest. This is done to keep your indexes compact, and is almost orthogonal to selecting the documents to show to the user. Examples of particular classes of documents that don’t make the cut may include:
|
||||
|
||||
#### Spam:
|
||||
|
||||
oh, all the different shapes and sizes of search spam! A giant topic in itself, worthy of a separate guide. [A good web spam taxonomy overview][86].
|
||||
|
||||
#### Undesirable documents:
|
||||
|
||||
domain constraints might require filtering: [porn][87], illegal content, etc. The techniques are similar to spam filtering, probably with extra heuristics.
|
||||
|
||||
#### Duplicates:
|
||||
|
||||
Or near-duplicates and redundant documents. Can be done with [Locality-sensitive hashing][88], [similarity measures][89], clustering techniques or even [clickthrough data][90]. A [good overview][91] of techniques.
|
||||
|
||||
#### Low-utility documents:
|
||||
|
||||
The definition of utility depends highly on the problem domain, so it’s hard to recommend the approaches here. Some ideas are: it might be possible to build a utility function for your documents; heuristics might work, for example an image that contains only black pixels is not a useful document; utility might be learned from user behavior.
|
||||
|
||||
#### Index construction:
|
||||
|
||||
For most search systems, document retrieval is performed using an [inverted index][92] — often just called the index.
|
||||
|
||||
* The index is a mapping of search terms to documents. A search term could be a word, an image feature or any other document derivative useful for query-to-document matching. The list of documents for a given term is called a [posting list][1]. It can be sorted by some metric, like document quality. (A toy sketch of this term-to-posting-list mapping follows this list.)
|
||||
|
||||
* Figure out whether you need to index the data in real time.❗️Many companies with large corpora of documents use a batch-oriented indexing approach, but then find this is unsuited to a product where users expect results to be current.
|
||||
|
||||
* With text documents, term extraction usually involves using NLP techniques, such as stop lists, [stemming][4] and [entity extraction][5]; for images or videos computer vision methods are used etc.
|
||||
|
||||
* In addition, documents are mined for statistical and meta information, such as references to other documents (used in the famous [PageRank][6] ranking signal), [topics][7], counts of term occurrences, document size, entities mentioned, etc. That information can later be used in ranking signal construction or document clustering. Some larger systems might contain several indexes, e.g. for documents of different types.
|
||||
|
||||
* Index formats. The actual structure and layout of the index is a complex topic, since it can be optimized in many ways. For instance, there are [posting list compression methods][8]; one could target an [mmap()able data representation][9] or use an [LSM-tree][10] for a continuously updated index.
|
||||
|
||||
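To make the term-to-posting-list idea concrete, here is a toy, in-memory sketch in TypeScript. It is deliberately naive (no compression, ranking, persistence, or real term extraction) and is only meant to illustrate the mapping and retrieval by posting-list intersection:

```
// A toy inverted index: term -> posting "list" (here a Set of document ids).
type DocId = number;

class InvertedIndex {
  private postings = new Map<string, Set<DocId>>();

  // Extremely naive term extraction: lowercase and split on non-alphanumerics.
  private terms(text: string): string[] {
    return text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean);
  }

  add(id: DocId, text: string): void {
    for (const term of this.terms(text)) {
      if (!this.postings.has(term)) this.postings.set(term, new Set());
      this.postings.get(term)!.add(id);
    }
  }

  // Retrieval: documents that contain every query term (posting-list intersection).
  search(query: string): DocId[] {
    const lists = this.terms(query).map((t) => this.postings.get(t) ?? new Set<DocId>());
    if (lists.length === 0) return [];
    return [...lists[0]].filter((id) => lists.every((list) => list.has(id)));
  }
}

// Usage:
const index = new InvertedIndex();
index.add(1, "A production indexing pipeline");
index.add(2, "Search pipeline and ranking");
console.log(index.search("indexing pipeline")); // -> [1]
```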
#### Query analysis and document retrieval:
|
||||
|
||||
Most popular search systems allow non-structured queries. That means the system has to extract structure out of the query itself. In the case of an inverted index, you need to extract search terms using [NLP][93] techniques.
|
||||
|
||||
The extracted terms can be used to retrieve relevant documents. Unfortunately, most queries are not very well formulated, so it pays to do additional query expansion and rewriting, like the following (a tiny normalization-and-expansion sketch follows this list):
|
||||
|
||||
* [Term re-weighting][11].
|
||||
|
||||
* [Spell checking][12]. Historical query logs are very useful as a dictionary.
|
||||
|
||||
* [Synonym matching][13]. [Another survey][14].
|
||||
|
||||
* [Named entity recognition][15]. A good approach is to use [HMM-based language modeling][16].
|
||||
|
||||
* Query classification. Detect queries of particular type. For example, Google Search detects queries that contain a geographical entity, a porny query, or a query about something in the news. The retrieval algorithm can then make a decision about which corpora or indexes to look at.
|
||||
|
||||
* Expansion through [personalization][17] or [local context][18]. Useful for queries like “gas stations around me”.
|
||||
|
||||
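As a toy illustration of the simplest of these steps — normalization plus synonym expansion — here is a sketch with an invented synonym table (real systems typically learn such mappings from query logs rather than hard-coding them):

```
// Naive query rewriting: lowercase, split into terms, expand via a synonym table.
// The synonym table below is made up purely for illustration.
const SYNONYMS: Record<string, string[]> = {
  tv: ["television"],
  nyc: ["new york"],
};

function rewriteQuery(query: string): string[] {
  const terms = query.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean);
  const expanded = new Set<string>();
  for (const term of terms) {
    expanded.add(term);
    for (const synonym of SYNONYMS[term] ?? []) expanded.add(synonym);
  }
  return [...expanded];
}

console.log(rewriteQuery("Cheap TV in NYC"));
// -> ["cheap", "tv", "television", "in", "nyc", "new york"]
```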
#### Ranking:
|
||||
|
||||
Given a list of documents (retrieved in the previous step), their signals, and a processed query, create an optimal ordering (ranking) for those documents.
|
||||
|
||||
Originally, most ranking models in use were hand-tuned weighted combinations of all the document signals. Signal sets might include PageRank, clickthrough data, topicality information and [others][94].
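As a toy illustration of such a hand-tuned combination (the signal names and weights below are invented for the example, not taken from any real system):

```
// A hand-tuned linear combination of document signals; the weights are illustrative.
interface DocSignals {
  pageRank: number;          // link-based authority signal
  clickthroughRate: number;  // fraction of impressions that were clicked
  topicality: number;        // how well the document's topic matches the query
}

const WEIGHTS = { pageRank: 0.5, clickthroughRate: 0.3, topicality: 0.2 };

function score(s: DocSignals): number {
  return (
    WEIGHTS.pageRank * s.pageRank +
    WEIGHTS.clickthroughRate * s.clickthroughRate +
    WEIGHTS.topicality * s.topicality
  );
}

// Ranking is then just sorting by score, highest first.
const ranked = [
  { id: 1, signals: { pageRank: 0.9, clickthroughRate: 0.1, topicality: 0.4 } },
  { id: 2, signals: { pageRank: 0.2, clickthroughRate: 0.6, topicality: 0.8 } },
].sort((a, b) => score(b.signals) - score(a.signals));
console.log(ranked.map((d) => d.id)); // -> [1, 2]
```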
|
||||
|
||||
To further complicate things, many of those signals, such as PageRank or ones generated by [statistical language models][95], contain parameters that greatly affect the performance of a signal. Those have to be hand-tuned too.
|
||||
|
||||
Lately, 🔷 [learning to rank][96] — signal-based, discriminative, supervised approaches — has been becoming more and more popular. Some popular examples of LtR are [McRank][97] and [LambdaRank][98] from Microsoft, and [MatrixNet][99] from Yandex.
|
||||
|
||||
A new, [vector space based approach][100] for semantic retrieval and ranking is gaining popularity lately. The idea is to learn individual low-dimensional vector document representations, then build a model which maps queries into the same vector space.
|
||||
|
||||
Then, retrieval is just finding the several documents that are closest by some metric (e.g. Euclidean distance) to the query vector. Ranking is the distance itself. If the mapping of both the documents and queries is built well, the documents are chosen not by the presence of some simple pattern (like a word), but by how close the documents are to the query in _meaning_.
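A tiny sketch of that idea, with made-up three-dimensional vectors standing in for real learned embeddings:

```
// Illustrative vector-space retrieval: rank documents by the Euclidean distance
// of their embedding to the query embedding (the vectors below are made up).
type Vec = number[];

function euclidean(a: Vec, b: Vec): number {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
}

function rankByVector(queryVec: Vec, docs: { id: number; vec: Vec }[]) {
  return docs
    .map((d) => ({ id: d.id, distance: euclidean(queryVec, d.vec) }))
    .sort((a, b) => a.distance - b.distance); // smaller distance = better rank
}

console.log(
  rankByVector([0.1, 0.9, 0.2], [
    { id: 1, vec: [0.2, 0.8, 0.1] },
    { id: 2, vec: [0.9, 0.1, 0.7] },
  ])
); // doc 1 is closer to the query, so it ranks first
```

In practice you would likely use an approximate nearest-neighbor index rather than a linear scan, but the ranking-by-distance idea is the same.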
|
||||
|
||||
### Indexing pipeline operation
|
||||
|
||||
Usually, each of the above pieces of the pipeline must be operated on a regular basis to keep the search index and search experience current.
|
||||
|
||||
❗️Operating a search pipeline can be complex and involve a lot of moving pieces. Not only is the data moving through the pipeline, but the code for each module and the formats and assumptions embedded in the data will change over time.
|
||||
|
||||
A pipeline can be run in “batch” on a regular or occasional basis (if indexing speed does not need to be real time), in a streamed way (if real-time indexing is needed), or based on certain triggers.
|
||||
|
||||
Some complex search engines (like Google) have several layers of pipelines operating on different time scales — for example, a page that changes often (like [cnn.com][101]) is indexed with a higher frequency than a static page that hasn’t changed in years.
|
||||
|
||||
### Serving systems
|
||||
|
||||
Ultimately, the goal of a search system is to accept queries, and use the index to return appropriately ranked results. While this subject can be incredibly complex and technical, we mention a few of the key aspects to this part of the system.
|
||||
|
||||
* Performance: users notice when the system they interact with is laggy. ❗️Google has done [extensive research][19], and they have noticed that the number of searches falls by 0.6% when serving is slowed by 300 ms. They recommend serving results in under 200 ms for most of your queries. A good article [on the topic][20]. This is the hard part: the system needs to collect documents from, possibly, many computers, then merge them into a possibly very long list, and then sort that list in ranking order. To complicate things further, ranking might be query-dependent, so, while sorting, the system is not just comparing 2 numbers, but performing computation.
|
||||
|
||||
* 🔷 Caching results: is often necessary to achieve decent performance. ❗️ But caches are just one large gotcha. They might show stale results when indices are updated or some results are blacklisted. Purging caches is a can of worms in itself: a search system might not have the capacity to serve the entire query stream with an empty (cold) cache, so the [cache needs to be pre-warmed][21] before the queries start arriving. Overall, caches complicate a system’s performance profile. Choosing a cache size and a replacement algorithm is also a [challenge][22].
|
||||
|
||||
* Availability: is often defined by an uptime/(uptime + downtime) metric. When the index is distributed, in order to serve any search results, the system often needs to query all the shards for their share of results. ❗️That means that if one shard is unavailable, the entire search system is compromised. The more machines are involved in serving the index, the higher the probability of one of them becoming defunct and bringing the whole system down.
|
||||
|
||||
* Managing multiple indices: Indices for large systems may be separated into shards (pieces) or divided by media type or indexing cadence (fresh versus long-term indices). Results can then be merged.
|
||||
|
||||
* Merging results of different kinds: e.g. Google showing results from Maps, News etc.
|
||||
|
||||
** (A Canvas element appears here; please handle it manually.) **
|
||||
|
||||

|
||||
A human rater. Yeah, you should still have those.
|
||||
|
||||
### Quality, evaluation, and improvement
|
||||
|
||||
So you’ve launched your indexing pipeline and search servers, and it’s all running nicely. Unfortunately the road to a solid search experience only begins with running infrastructure.
|
||||
|
||||
Next, you’ll need to build a set of processes around continuous search quality evaluation and improvement. In fact, this is actually most of the work and the hardest problem you’ll have to solve.
|
||||
|
||||
🔷 What is quality? First, you’ll need to determine (and get your boss or the product lead to agree), what quality means in your case:
|
||||
|
||||
* Self-reported user satisfaction (includes UX)
|
||||
|
||||
* Perceived relevance of the returned results (not including UX)
|
||||
|
||||
* Satisfaction relative to competitors
|
||||
|
||||
* Satisfaction relative to the performance of the previous version of the search engine (e.g. last week)
|
||||
|
||||
* [User engagement][23]
|
||||
|
||||
Metrics: Some of these concepts can be quite hard to quantify. On the other hand, it’s incredibly useful to be able to express how well a search engine is performing in a single number, a quality metric.
|
||||
|
||||
By continuously computing such a metric for your (and your competitors’) systems, you can both track your progress and explain to your boss how well you are doing. Here are some classical ways to quantify quality that can help you construct your magic quality-metric formula:
|
||||
|
||||
* [Precision][24] and [recall][25] measure how well the retrieved set of documents corresponds to the set you expected to see.
|
||||
|
||||
* [F score][26] (specifically F1 score) is a single number that represents both precision and recall well.
|
||||
|
||||
* [Mean Average Precision][27] (MAP) lets you quantify the relevance of the top returned results.
|
||||
|
||||
* 🔷 [Normalized Discounted Cumulative Gain][28] (nDCG) is like MAP, but weights the relevance of each result by its position (a small computation sketch follows this list).
|
||||
|
||||
* [Long and short clicks][29] — allow you to quantify how useful the results are to real users.
|
||||
|
||||
* [A good detailed overview][30].
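As a rough illustration of how some of these numbers are computed, here is a small sketch using the standard textbook formulas. The document IDs and relevance grades are made up, and the nDCG "ideal" ranking is simplified to use only the graded results that were actually returned.

```python
import math

def precision_recall_f1(retrieved, relevant):
    """Set-based precision/recall/F1 for a single query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

def dcg(gains):
    """Discounted cumulative gain of a ranked list of graded relevance scores."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(ranked_gains, k=10):
    """nDCG@k: DCG of the system ranking divided by DCG of the ideal ranking."""
    actual = dcg(ranked_gains[:k])
    ideal = dcg(sorted(ranked_gains, reverse=True)[:k])
    return actual / ideal if ideal else 0.0

if __name__ == "__main__":
    print(precision_recall_f1(["d1", "d2", "d3"], ["d1", "d4"]))  # (0.33, 0.5, 0.4)
    print(ndcg([3, 2, 3, 0, 1, 2], k=6))                          # ~0.96
```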
|
||||
|
||||
🔷 Human evaluations: Quality metrics might seem like statistical calculations, but not all of them can be computed automatically. Ultimately, metrics need to represent subjective human evaluation, and this is where a “human in the loop” comes into play.
|
||||
|
||||
❗️Skipping human evaluation is probably the most widespread cause of sub-par search experiences.
|
||||
|
||||
Usually, at early stages the developers themselves evaluate the results manually. At a later point, [human raters][102] (or assessors) may get involved. Raters typically use custom tools to look at returned search results and provide feedback on the quality of the results.
|
||||
|
||||
Subsequently, you can use the feedback signals to guide development, help make launch decisions or even feed them back into the index selection, retrieval or ranking systems.
|
||||
|
||||
Here is a list of some other types of human-driven evaluation that can be done on a search system:
|
||||
|
||||
* Basic user evaluation: The user ranks their satisfaction with the whole experience
|
||||
|
||||
* Comparative evaluation: Compare with other search results (compare with search results from earlier versions of the system or competitors)
|
||||
|
||||
* Retrieval evaluation: The query analysis and retrieval quality is often evaluated using manually constructed query-document sets. A user is shown a query and the list of the retrieved documents. She can then mark all the documents that are relevant to the query, and the ones that are not. The resulting pairs of (query, [relevant docs]) are called a “golden set”. Golden sets are remarkably useful. For one, an engineer can set up automatic retrieval regression tests using those sets (see the sketch after this list). The selection signal from golden sets can also be fed back as ground truth to term re-weighting and other query re-writing models.
|
||||
|
||||
* Ranking evaluation: Raters are presented with a query and two documents side-by-side. The rater must choose the document that fits the query better. This creates a partial ordering on the documents for a given query. That ordering can later be compared to the output of the ranking system. The usual ranking quality measures used are MAP and nDCG.
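A sketch of what a golden-set retrieval regression test could look like. `search_fn`, the queries, the document IDs and the recall threshold are all illustrative assumptions; the point is only that a drop in recall against the golden set should fail loudly and block a launch.

```python
def recall_at_k(retrieved, relevant, k=10):
    """Fraction of the golden relevant docs that appear in the top-k results."""
    if not relevant:
        return 1.0
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def run_golden_set_regression(search_fn, golden_set, k=10, threshold=0.8):
    """Fail if average recall@k over the golden set drops below a floor.
    `search_fn(query)` is whatever entry point the retrieval stack exposes."""
    scores = [recall_at_k(search_fn(q), relevant, k) for q, relevant in golden_set]
    average = sum(scores) / len(scores)
    assert average >= threshold, f"retrieval regression: recall@{k}={average:.2f} < {threshold}"
    return average

if __name__ == "__main__":
    golden = [("cheap flights", ["doc-12", "doc-7"]), ("openstreetmap api", ["doc-3"])]
    fake_search = lambda q: {"cheap flights": ["doc-12", "doc-5"],
                             "openstreetmap api": ["doc-3"]}[q]
    print(run_golden_set_regression(fake_search, golden, k=2, threshold=0.7))  # 0.75
```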
|
||||
|
||||
#### Evaluation datasets:
|
||||
|
||||
One should start thinking about the datasets used for evaluation (like the “golden sets” mentioned above) early in the search experience design process. How will you collect and update them? How will you push them to the production eval pipeline? Is there a built-in bias?
|
||||
|
||||
Live experiments:
|
||||
|
||||
After your search engine catches on and gains enough users, you might want to start conducting [live search experiments][103] on a portion of your traffic. The basic idea is to turn some optimization on for a group of people, and then compare the outcome with that of a “control” group — a similar sample of your users that did not have the experiment feature on for them. How you would measure the outcome is, once again, very product specific: it could be clicks on results, clicks on ads etc.
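For the comparison itself, a common first cut is a two-proportion z-test on a binary outcome such as clicks. This is only a sketch with made-up numbers; a real experimentation system needs far more care (multiple metrics, sequential testing, segmenting users, and so on).

```python
import math

def two_proportion_z(clicks_ctrl, n_ctrl, clicks_exp, n_exp):
    """z-score for the difference in click rate between control and
    experiment groups, using a pooled-variance two-proportion z-test."""
    p_ctrl, p_exp = clicks_ctrl / n_ctrl, clicks_exp / n_exp
    pooled = (clicks_ctrl + clicks_exp) / (n_ctrl + n_exp)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_ctrl + 1 / n_exp))
    return (p_exp - p_ctrl) / se

if __name__ == "__main__":
    z = two_proportion_z(clicks_ctrl=4_120, n_ctrl=100_000,
                         clicks_exp=4_390, n_exp=100_000)
    print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the usual 5% level
```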
|
||||
|
||||
Evaluation cycle time: How fast you improve your search quality is directly related to how fast you can complete the above cycle of measurement and improvement. It is essential from the beginning to ask yourself, “how fast can we measure and improve our performance?”
|
||||
|
||||
Will it take days, hours, minutes or seconds to make changes and see if they improve quality? ❗️Running evaluation should also be as easy as possible for the engineers and should not take too much hands-on time.
|
||||
|
||||
### 🔷 So… How do I PRACTICALLY build it?
|
||||
|
||||
This blogpost is not meant as a tutorial, but here is a brief outline of how I’d approach building a search experience right now:
|
||||
|
||||
1. As was said above, if you can afford it — just buy an existing SaaS (some good ones are listed below). An existing service fits if:
|
||||
|
||||
* Your experience is a “connected” one (your service or app has internet connection).
|
||||
|
||||
* Does it support all the functionality you need out of the box? This post gives a pretty good idea of what functions you might want. To name a few, I’d at least consider: support for the media type you are searching; real-time indexing support; and query flexibility, including context-dependent queries.
|
||||
|
||||
* Given the size of the corpus and the expected [QpS][31], can you afford to pay for it for the next 12 months?
|
||||
|
||||
* Can the service support your expected traffic within the required latency limits? If you are querying the service from an app, make sure that the service is accessible quickly enough from wherever your users are.
|
||||
|
||||
2\. If a hosted solution does not fit your needs or resources, you probably want to use one of the open source libraries or tools. For connected apps or websites, I’d choose ElasticSearch right now. For embedded experiences, there are multiple tools below.
|
||||
|
||||
3\. You most likely want to do index selection and clean up your documents (say extract relevant text from HTML pages) before uploading them to the search index. This will decrease the index size and make getting to good results easier. If your corpus fits on a single machine, just write a script (or several) to do that. If not, I’d use [Spark][104].
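As an illustration of that clean-up step, here is a sketch that strips markup and scripts from HTML using only the Python standard library; the sample page is made up. For a corpus that does not fit on one machine, the same `clean_document` function could simply be mapped over documents in Spark.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Keeps visible text, drops tags, scripts and styles: a rough stand-in
    for document clean-up before indexing."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self._chunks, self._skip_depth = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self._chunks.append(data.strip())

    def text(self):
        return " ".join(self._chunks)

def clean_document(raw_html):
    parser = TextExtractor()
    parser.feed(raw_html)
    return parser.text()

if __name__ == "__main__":
    print(clean_document("<html><head><script>var x=1;</script></head>"
                         "<body><h1>Hello</h1><p>search &amp; indexing</p></body></html>"))
    # -> "Hello search & indexing"
```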
|
||||
|
||||
|
||||
|
||||

|
||||
You can never have too many tools.
|
||||
|
||||
### ☁️ SaaS
|
||||
|
||||
☁️ 🔷[Algolia][105] — a proprietary SaaS that indexes a client’s website and provides an API to search the website’s pages. They also have an API to submit your own documents, support context dependent searches and serve results really fast. If I were building a web search experience right now and could afford it, I’d probably use Algolia first — and buy myself time to build a comparable search experience.
|
||||
|
||||
* Various ElasticSearch providers: AWS (☁️ [ElasticSearch Cloud][32]), ☁️ [elastic.co][33] and ☁️ [Qbox][34].
|
||||
|
||||
* ☁️[ Azure Search][35] — a SaaS solution from Microsoft. Accessible through a REST API, it can scale to billions of documents. Has a Lucene query interface to simplify migrations from Lucene-based solutions.
|
||||
|
||||
* ☁️[ Swiftype][36] — an enterprise SaaS that indexes your company’s internal services, like Salesforce, G Suite, Dropbox and the intranet site.
|
||||
|
||||
### Tools and libraries
|
||||
|
||||
🍺☕🔷[ Lucene][106] is the most popular IR library. It implements query analysis, index retrieval and ranking. Any of the components can be replaced by an alternative implementation. There is also a C port — 🍺[Lucy][107].
|
||||
|
||||
* 🍺☕🔷[ Solr][37] is a complete search server, based on Lucene. It’s a part of the [Hadoop][38] ecosystem of tools.
|
||||
|
||||
* 🍺☕🔷[ Hadoop][39] is the most widely used open source MapReduce system, originally designed as an indexing pipeline framework for Solr. It has been gradually losing ground to 🍺[Spark][40] as the batch data processing framework used for indexing. ☁️[EMR][41] is a proprietary implementation of MapReduce on AWS.
|
||||
|
||||
* 🍺☕🔷 [ElasticSearch][42] is also based on Lucene ([feature comparison with Solr][43]). It has been getting more attention lately, so much so that a lot of people think of ES when they hear “search”, and for good reasons: it’s well supported, has an [extensive API][44], [integrates with Hadoop][45] and [scales well][46]. There are open source and [Enterprise][47] versions, and ES is also available as a hosted SaaS. It can scale to billions of documents, but scaling to that point can be very challenging, so a typical scenario involves a corpus that is orders of magnitude smaller. (A minimal indexing-and-querying sketch appears after this list.)
|
||||
|
||||
* 🍺🇨 [Xapian][48] — a C++-based IR library. Relatively compact, so good for embedding into desktop or mobile applications.
|
||||
|
||||
* 🍺🇨 [Sphinx][49] — a full-text search server. Has a SQL-like query language. Can also act as a [storage engine for MySQL][50] or be used as a library.
|
||||
|
||||
* 🍺☕ [Nutch][51] — a web crawler. Can be used in conjunction with Solr. It’s also the tool behind [🍺Common Crawl][52].
|
||||
|
||||
* 🍺🦏 [Lunr][53] — a compact embedded search library for web apps on the client-side.
|
||||
|
||||
* 🍺🦏 [searchkit][54] — a library of web UI components to use with ElasticSearch.
|
||||
|
||||
* 🍺🦏 [Norch][55] — a [LevelDB][56]-based search engine library for Node.js.
|
||||
|
||||
* 🍺🐍 [Whoosh][57] — a fast, full-featured search library implemented in pure Python.
|
||||
|
||||
* OpenStreetMaps has it’s own 🍺[deck of search software][58].
|
||||
|
||||
### Datasets
|
||||
|
||||
A few fun or useful data sets to try building a search engine or evaluating search engine quality:
|
||||
|
||||
* 🍺🔷 [Commoncrawl][59] — regularly updated open web crawl data. There is a [mirror on AWS][60], accessible for free from within the service.
|
||||
|
||||
* 🍺🔷 [Openstreetmap data dump][61] is a very rich source of data for someone building a geospatial search engine.
|
||||
|
||||
* 🍺 [Google Books N-grams][62] can be very useful for building language models.
|
||||
|
||||
* 🍺 [Wikipedia dumps][63] are a classic source to build, among other things, an entity graph out of. There is a [wide range of helper tools][64] available.
|
||||
|
||||
* [IMDb dumps][65] are a fun dataset to build a small toy search engine for.
|
||||
|
||||
### References
|
||||
|
||||
* [Modern Information Retrieval][66] by R. Baeza-Yates and B. Ribeiro-Neto is a good, deep academic treatment of the subject. This is a good overview for someone completely new to the topic.
|
||||
|
||||
* [Information Retrieval][67] by S. Büttcher, C. Clarke and G. Cormack is another academic textbook with wide coverage that is more up-to-date. It covers learning-to-rank and does a pretty good job of discussing the theory of search-system evaluation. It is also a good overview.
|
||||
|
||||
* [Learning to Rank][68] by T-Y Liu is the best theoretical treatment of LtR, though it is pretty thin on practical aspects. Someone considering building an LtR system should probably check this out.
|
||||
|
||||
* [Managing Gigabytes][69] — published in 1999, is still a definitive reference for anyone embarking on building an efficient index of a significant size.
|
||||
|
||||
* [Text Retrieval and Search Engines][70] — a MOOC from Coursera. A decent overview of basics.
|
||||
|
||||
* [Indexing the World Wide Web: The Journey So Far][71] ([PDF][72]), an overview of web search from 2012, by Ankit Jain and Abhishek Das of Google.
|
||||
|
||||
* [Why Writing Your Own Search Engine is Hard][73], a classic 2004 article by Anna Patterson.
|
||||
|
||||
* [https://github.com/harpribot/awesome-information-retrieval][74] — a curated list of search-related resources.
|
||||
|
||||
* A [great blog][75] on everything search by [Daniel Tunkelang][76].
|
||||
|
||||
* Some good slides on [search engine evaluation][77].
|
||||
|
||||
This concludes my humble attempt to make a somewhat-useful “map” for an aspiring search engine engineer. Did I miss something important? I’m pretty sure I did — you know, [the margin is too narrow][108] to contain this enormous topic. Let me know if you think that something should be here and is not — you can reach [me][109] at[ forwidur@gmail.com][110] or at [@forwidur][111].
|
||||
|
||||
> P.S. — This post is part of an open, collaborative effort to build an online reference, the Open Guide to Practical AI, which we’ll release in draft form soon. See [this popular guide][112] for an example of what’s coming. If you’d like to get updates on or help with this effort, sign up [here][113].
|
||||
|
||||
> Special thanks to [Joshua Levy][114], [Leo Polovets][115] and [Abhishek Das][116] for reading drafts of this and their invaluable feedback!
|
||||
|
||||
> Header image courtesy of [Mickaël Forrett][117]. The beautiful toolbox is called [The Studley Tool Chest][118].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Max Grigorev
|
||||
distributed systems, data, AI
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: https://medium.com/startup-grind/what-every-software-engineer-should-know-about-search-27d1df99f80d
|
||||
|
||||
作者:[Max Grigorev][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://medium.com/@forwidur?source=post_header_lockup
|
||||
[1]:https://en.wikipedia.org/wiki/Inverted_index
|
||||
[2]:http://www.demarcken.org/carl/papers/ITA-software-travel-complexity/ITA-software-travel-complexity.pdf
|
||||
[3]:https://www.algolia.com/
|
||||
[4]:https://en.wikipedia.org/wiki/Stemming
|
||||
[5]:https://en.wikipedia.org/wiki/Named-entity_recognition
|
||||
[6]:http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf
|
||||
[7]:https://gofishdigital.com/semantic-topic-modeling/
|
||||
[8]:https://nlp.stanford.edu/IR-book/html/htmledition/postings-file-compression-1.html
|
||||
[9]:https://deplinenoise.wordpress.com/2013/03/31/fast-mmapable-data-structures/
|
||||
[10]:https://en.wikipedia.org/wiki/Log-structured_merge-tree
|
||||
[11]:http://orion.lcg.ufrj.br/Dr.Dobbs/books/book5/chap11.htm
|
||||
[12]:http://norvig.com/spell-correct.html
|
||||
[13]:http://nlp.stanford.edu/IR-book/html/htmledition/query-expansion-1.html
|
||||
[14]:https://www.iro.umontreal.ca/~nie/IFT6255/carpineto-Survey-QE.pdf
|
||||
[15]:https://en.wikipedia.org/wiki/Named-entity_recognition
|
||||
[16]:http://www.aclweb.org/anthology/P02-1060
|
||||
[17]:https://en.wikipedia.org/wiki/Personalized_search
|
||||
[18]:http://searchengineland.com/future-search-engines-context-217550
|
||||
[19]:http://services.google.com/fh/files/blogs/google_delayexp.pdf
|
||||
[20]:http://highscalability.com/latency-everywhere-and-it-costs-you-sales-how-crush-it
|
||||
[21]:https://stackoverflow.com/questions/22756092/what-does-it-mean-by-cold-cache-and-warm-cache-concept
|
||||
[22]:https://en.wikipedia.org/wiki/Cache_performance_measurement_and_metric
|
||||
[23]:http://blog.popcornmetrics.com/5-user-engagement-metrics-for-growth/
|
||||
[24]:https://en.wikipedia.org/wiki/Information_retrieval#Precision
|
||||
[25]:https://en.wikipedia.org/wiki/Information_retrieval#Recall
|
||||
[26]:https://en.wikipedia.org/wiki/F1_score
|
||||
[27]:http://fastml.com/what-you-wanted-to-know-about-mean-average-precision/
|
||||
[28]:https://en.wikipedia.org/wiki/Discounted_cumulative_gain
|
||||
[29]:http://www.blindfiveyearold.com/short-clicks-versus-long-clicks
|
||||
[30]:https://arxiv.org/pdf/1302.2318.pdf
|
||||
[31]:https://en.wikipedia.org/wiki/Queries_per_second
|
||||
[32]:https://aws.amazon.com/elasticsearch-service/
|
||||
[33]:https://www.elastic.co/
|
||||
[34]:https://qbox.io/
|
||||
[35]:https://azure.microsoft.com/en-us/services/search/
|
||||
[36]:https://swiftype.com/
|
||||
[37]:http://lucene.apache.org/solr/
|
||||
[38]:http://hadoop.apache.org/
|
||||
[39]:http://hadoop.apache.org/
|
||||
[40]:http://spark.apache.org/
|
||||
[41]:https://aws.amazon.com/emr/
|
||||
[42]:https://www.elastic.co/products/elasticsearch
|
||||
[43]:http://solr-vs-elasticsearch.com/
|
||||
[44]:https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html
|
||||
[45]:https://github.com/elastic/elasticsearch-hadoop
|
||||
[46]:https://www.elastic.co/guide/en/elasticsearch/guide/current/distributed-cluster.html
|
||||
[47]:https://www.elastic.co/cloud/enterprise
|
||||
[48]:https://xapian.org/
|
||||
[49]:http://sphinxsearch.com/
|
||||
[50]:https://mariadb.com/kb/en/mariadb/sphinx-storage-engine/
|
||||
[51]:https://nutch.apache.org/
|
||||
[52]:http://commoncrawl.org/
|
||||
[53]:https://lunrjs.com/
|
||||
[54]:https://github.com/searchkit/searchkit
|
||||
[55]:https://github.com/fergiemcdowall/norch
|
||||
[56]:https://github.com/google/leveldb
|
||||
[57]:https://bitbucket.org/mchaput/whoosh/wiki/Home
|
||||
[58]:http://wiki.openstreetmap.org/wiki/Search_engines
|
||||
[59]:http://commoncrawl.org/
|
||||
[60]:https://aws.amazon.com/public-datasets/common-crawl/
|
||||
[61]:http://wiki.openstreetmap.org/wiki/Downloading_data
|
||||
[62]:http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html
|
||||
[63]:https://dumps.wikimedia.org/
|
||||
[64]:https://www.mediawiki.org/wiki/Alternative_parsers
|
||||
[65]:http://www.imdb.com/interfaces
|
||||
[66]:https://www.amazon.com/dp/0321416910
|
||||
[67]:https://www.amazon.com/dp/0262528878/
|
||||
[68]:https://www.amazon.com/dp/3642142664/
|
||||
[69]:https://www.amazon.com/dp/1558605703
|
||||
[70]:https://www.coursera.org/learn/text-retrieval
|
||||
[71]:https://research.google.com/pubs/pub37043.html
|
||||
[72]:https://pdfs.semanticscholar.org/28d8/288bff1b1fc693e6d80c238de9fe8b5e8160.pdf
|
||||
[73]:http://queue.acm.org/detail.cfm?id=988407
|
||||
[74]:https://github.com/harpribot/awesome-information-retrieval
|
||||
[75]:https://medium.com/@dtunkelang
|
||||
[76]:https://www.cs.cmu.edu/~quixote/
|
||||
[77]:https://web.stanford.edu/class/cs276/handouts/lecture8-evaluation_2014-one-per-page.pdf
|
||||
[78]:https://stackoverflow.com/questions/34314/how-do-i-implement-search-functionality-in-a-website
|
||||
[79]:https://www.quora.com/How-to-build-a-search-engine-from-scratch
|
||||
[80]:https://github.com/isaacs/github/issues/908
|
||||
[81]:https://www.reddit.com/r/Windows10/comments/4jbxgo/can_we_talk_about_how_bad_windows_10_search_sucks/d365mce/
|
||||
[82]:https://www.reddit.com/r/spotify/comments/2apwpd/the_search_function_sucks_let_me_explain/
|
||||
[83]:https://medium.com/@RohitPaulK/github-issues-suck-723a5b80a1a3#.yp8ui3g9i
|
||||
[84]:https://thenextweb.com/opinion/2016/01/11/netflix-search-sucks-flixed-fixes-it/
|
||||
[85]:https://www.google.com/search?q=building+a+search+engine
|
||||
[86]:http://airweb.cse.lehigh.edu/2005/gyongyi.pdf
|
||||
[87]:https://www.researchgate.net/profile/Gabriel_Sanchez-Perez/publication/262371199_Explicit_image_detection_using_YCbCr_space_color_model_as_skin_detection/links/549839cf0cf2519f5a1dd966.pdf
|
||||
[88]:https://en.wikipedia.org/wiki/Locality-sensitive_hashing
|
||||
[89]:https://en.wikipedia.org/wiki/Similarity_measure
|
||||
[90]:https://www.microsoft.com/en-us/research/wp-content/uploads/2011/02/RadlinskiBennettYilmaz_WSDM2011.pdf
|
||||
[91]:http://infolab.stanford.edu/~ullman/mmds/ch3.pdf
|
||||
[92]:https://en.wikipedia.org/wiki/Inverted_index
|
||||
[93]:https://en.wikipedia.org/wiki/Natural_language_processing
|
||||
[94]:http://backlinko.com/google-ranking-factors
|
||||
[95]:http://times.cs.uiuc.edu/czhai/pub/slmir-now.pdf
|
||||
[96]:https://en.wikipedia.org/wiki/Learning_to_rank
|
||||
[97]:https://papers.nips.cc/paper/3270-mcrank-learning-to-rank-using-multiple-classification-and-gradient-boosting.pdf
|
||||
[98]:https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/lambdarank.pdf
|
||||
[99]:https://yandex.com/company/technologies/matrixnet/
|
||||
[100]:https://arxiv.org/abs/1708.02702
|
||||
[101]:http://cnn.com/
|
||||
[102]:http://static.googleusercontent.com/media/www.google.com/en//insidesearch/howsearchworks/assets/searchqualityevaluatorguidelines.pdf
|
||||
[103]:https://googleblog.blogspot.co.uk/2008/08/search-experiments-large-and-small.html
|
||||
[104]:https://spark.apache.org/
|
||||
[105]:https://www.algolia.com/
|
||||
[106]:https://lucene.apache.org/
|
||||
[107]:https://lucy.apache.org/
|
||||
[108]:https://www.brainyquote.com/quotes/quotes/p/pierredefe204944.html
|
||||
[109]:https://www.linkedin.com/in/grigorev/
|
||||
[110]:mailto:forwidur@gmail.com
|
||||
[111]:https://twitter.com/forwidur
|
||||
[112]:https://github.com/open-guides/og-aws
|
||||
[113]:https://upscri.be/d29cfe/
|
||||
[114]:https://twitter.com/ojoshe
|
||||
[115]:https://twitter.com/lpolovets
|
||||
[116]:https://www.linkedin.com/in/abhishek-das-3280053/
|
||||
[117]:https://www.behance.net/gallery/3530289/-HORIZON-
|
||||
[118]:https://en.wikipedia.org/wiki/Henry_O._Studley
|
@ -1,95 +0,0 @@
|
||||
如何使用 pull requests 来改善你的代码审查
|
||||
============================================================
|
||||
|
||||
在 Github 上使用 pull requests 来做代码审查,花费更多的时间去构建,而更少的时间去修改。
|
||||
|
||||

|
||||
|
||||
|
||||
>看看 Brent 和 Peter’s 的[ _Introducing GitHub_ ][5]的书, 了解更多有关创建项目,开始 pull requests 和团队软件开发流程的概述。
|
||||
|
||||
|
||||
如果你每天不编写代码,你可能不知道软件开发人员每天面临的一些问题。
|
||||
|
||||
* 代码中的安全漏洞
|
||||
* 导致应用程序崩溃的代码
|
||||
* 被称作 “技术债务” 和之后需要重写的代码
|
||||
* 已经重写在你所不知道地方的代码
|
||||
|
||||
|
||||
代码审查让其他人和工具来检查我们的工作,帮助我们改善所编写的软件。这种审查可以通过自动化代码分析或测试覆盖率工具来进行,它们是软件开发过程中重要的两个部分,能节省数小时的手工劳动;也可以通过同行评审来进行。同行评审是开发人员互相审查彼此工作的过程。在软件开发的过程中,速度和紧迫性是经常面临的两个重要问题。如果你没有尽快地发布,你的竞争对手可能会在你之前发布相似的产品;如果你不经常发布新的版本,你的用户可能会怀疑你是否仍然关心应用程序的改进优化。
|
||||
|
||||
### 衡量时间权衡:代码审查 vs. bug 修复
|
||||
|
||||
如果有人能够以最小争议的方式汇集多种类型的代码审查,那么随着时间的推移,该软件的质量将会得到改善。认为引入新的工具或流程在最初不会推迟时间,这是天真的想法。但是更昂贵的是:修复生产中的错误的时候,在还是软件生产之前改进软件,即使新工具延迟了时间,客户可以发布和欣赏新功能,随着软件开发人员提高自己的技能,软件开发周期将会回升到以前的水平,同时应该减少错误。
|
||||
|
||||
通过代码审查实现提升代码质量目标的关键之一就是使用一个足够灵活的平台,允许软件开发人员快速编写代码,使用他们熟悉的工具,并行彼此进行同行评审码。 GitHub 就是这样一个平台的。然而,把你的代码放在 [GitHub][9] 上并不只是神奇地使代码审查发生; 你必须使用 pull requests ,来开始这个美妙的旅程。
|
||||
|
||||
### Pull requests: 一个代码的生活讨论的工具
|
||||
|
||||
[Pull requests][10] 是 GitHub 上的一个工具,允许软件开发人员讨论并提出对项目主要代码库的更改,稍后可以让所有用户看到。这个工具创建于 2008 年 2 月,目的是在接受(合并)某人建议的更改之前先进行讨论,然后再部署到生产环境中,供最终用户看到这些变化。
|
||||
|
||||
Pull requests 开始是一种松散的方式为某人的项目提供改变,但是它已经演变成:
|
||||
|
||||
* 关于你想要合并的代码的生活讨论
|
||||
* 增加功能,这种可见性的修改(更改)
|
||||
* 整合你最喜爱的工具
|
||||
* 作为受保护的分支工作流程的一部分可能需要显式提取请求评估
|
||||
|
||||
### 考虑带代码: URL 是永久的
|
||||
|
||||
看看上面的前两个点,pull requests 促成了一个正在进行的代码讨论,使代码变化非常明显,并且使您很容易在回顾的过程中找到所需的代码。对于新人和有经验的开发人员来说,能够回顾以前的讨论,了解为什么一个功能被开发出来,或者与另一个关于相关功能的讨论这样的联系方式是便捷的。当跨多个项目协调功能并使每个人尽可能接近代码时,前后讨论的内容也非常重要。如果这些功能仍在开发中,重要的是能够看到上次审查以来更改了哪些内容。毕竟,[对小的更改比大的修改要容易得多][11],但大的功能并不总是可能的。因此,重要的是能够拿起你上次审查,并只看到从那时以来的变化。
|
||||
|
||||
### 集成工具: 软件开发人员的建议
|
||||
|
||||
考虑到上述第三点,GitHub 上的 pull requests 有很多功能,但开发人员将始终对第三方工具有偏好。代码质量是代码审查的整个领域,涉及到其他组件的代码评审,而这些评审不一定是人的。检测“低效”或缓慢、潜在的安全漏洞或不符合公司标准的代码是留给自动化工具的任务。
|
||||
[SonarQube][12] 和 [Code Climate][13] 是可以分析你的代码的工具,而像 [Codecov][14] 和 [Coveralls][15] 这样的工具可以在你刚写的新代码没有得到很好的测试时提醒你。这些工具最令人惊奇的特点是,它们可以接入 GitHub,直接在 pull requests 中报告它们的发现!这意味着不仅有人在检查代码,工具也在那里报告情况,每个人都能及时了解一个功能的开发进展。
|
||||
|
||||
最后,根据您的团队的偏好,您可以利用[受保护的分支工作流][16]所需的状态特性来进行工具和同行评审。
|
||||
|
||||
虽然您可能只是开始您的软件开发之旅,一个希望知道一个项目正在做什么的业务利益相关者,或者是想要确保项目的及时性和质量的项目经理,可以通过设置参与 pull requests 批准工作流程,并考虑与其他工具集成以确保质量,在任何级别的软件开发中都很重要。
|
||||
|
||||
无论是为您的个人网站,贵公司的在线商店,还是最新的组合,以最大的收益收获今年的玉米,编写好的软件都需要进行良好的代码审查。良好的代码审查涉及到正确的工具和平台。要了解有关 GitHub 和软件开发过程的更多信息,请参阅 O'Reilly 的 [ _GitHub 简介_ ][17] 一书, 您可以在其中了解创建项目,启动拉取请求以及概述团队的“软件开发流程”。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
**Brent Beer**
|
||||
|
||||
Brent Beer 使用 Git 和 GitHub 已经超过五年了,通过大学的课程,对开源项目的贡献,以及专业网站开发人员。在担任 GitHub 上的培训师时,他也成为 O’Reilly 的 “GitHub简介” 的出版作者。他现在担任 Amsterdam GitHub 上的解决方案工程师,帮助 Git 和 GitHub 向世界各地的开发人员提供服务。
|
||||
|
||||
**Peter Bell**
|
||||
|
||||
Peter Bell 是 Ronin 实验室的创始人以及 CTO。Training is broken - we're fixing it through technology enhanced training!他是一位有经验的企业家,技术专家,敏捷教练和CTO,专门从事 EdTech 项目。他为 O'Reilly 撰写了 “ GitHub 简介” ,为代码学校创建了“掌握 GitHub ”课程,为 Pearson 创建了“ Git 和 GitHub LiveLessons ”课程。他经常在国际和国际会议上提供 ruby , nodejs , NoSQL (尤其是 MongoDB 和 neo4j ),云计算,软件工艺,java,groovy,j ...
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311
|
||||
|
||||
作者:[Brent Beer][a],[Peter Bell][b]
|
||||
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
|
||||
[b]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
|
||||
[1]:https://pixabay.com/en/measure-measures-rule-metro-106354/
|
||||
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[3]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
|
||||
[4]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
|
||||
[5]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews
|
||||
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
|
||||
[8]:https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311
|
||||
[9]:https://github.com/about
|
||||
[10]:https://help.github.com/articles/about-pull-requests/
|
||||
[11]:https://blog.skyliner.io/ship-small-diffs-741308bec0d1
|
||||
[12]:https://github.com/integrations/sonarqube
|
||||
[13]:https://github.com/integrations/code-climate
|
||||
[14]:https://github.com/integrations/codecov
|
||||
[15]:https://github.com/integrations/coveralls
|
||||
[16]:https://help.github.com/articles/about-protected-branches/
|
||||
[17]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews-lower
|
@ -0,0 +1,118 @@
|
||||
当你只想将事情搞定时,为什么开放式工作这么难?
|
||||
============================================================
|
||||
|
||||
### 学习使用开放式决策框架来写一本书
|
||||
|
||||

|
||||
>图片来源 : opensource.com
|
||||
|
||||
GSD(get stuff done 的缩写,即搞定)指导着我的工作方式。数年来,我将各种方法论融入我日常工作的习惯中,包括精益方法的反馈循环,和敏捷开发的迭代优化,以此来更好地 GSD(如果把 GSD 当作动词的话)。这意味着我必须非常有效地利用我的时间:列出清晰,各自独立的目标;标记已完成的项目;用迭代的方式持续推进项目进度。但是当我们默认以开放的方式工作时,仍然能够 GSD 吗?又或者 GSD 的方法完全行不通呢?大多数人都认为这会导致糟糕的状况,但我发现事实并不一定这样。
|
||||
|
||||
在开放的环境中工作,遵循[开放式决策框架][6]中的指导,会让项目起步变慢。但是在最近的一个项目中,我们作出了一个决定,一个从开始就正确的决定:以开放的方式工作,并与我们的社群一起合作。
|
||||
|
||||
关于开放式组织的资料
|
||||
|
||||
* [下载《开放式组织 IT 文化变革指南》][1]
|
||||
* [下载《开放式组织领袖手册》][2]
|
||||
* [什么是开放式组织][3]
|
||||
* [什么是开放决策][4]
|
||||
|
||||
这是我们能做的最好的决定。
|
||||
|
||||
我们来看看这次经历带来的意想不到的结果,再看看我们如何将 GSD 思想融入开放式组织框架。
|
||||
|
||||
### 建立社区
|
||||
|
||||
2014 年 10 月,我接手了一个新的项目:当时红帽的 CEO Jim Whitehurst 即将推出一本新书《开放式组织》,我要根据书中提出的概念,建立一个社区。“太棒了,这听起来是一个挑战,我加入了!”我这样想。但不久,[冒牌者综合征][7]便出现了,我又开始想:“我们究竟要做什么呢?怎样才算成功呢?”
|
||||
|
||||
让我剧透一下,在这本书的结尾处,Jim 鼓励读者访问 Opensource.com,继续探讨 21 世纪的开放和管理。所以,在 2015 年 5 月,我们的团队在网站上建立了一个新的板块来讨论这些想法。我们计划讲一些故事,就像我们在 Opensource.com 上常做的那样,只不过这次围绕着书中的观点与概念。之后,我们每周都发布新的文章,在 Twitter 上举办了一个在线的读书俱乐部,还将《开放式组织》打造成了系列书籍。
|
||||
|
||||
我们内部独自完成了该系列书籍的前三期,每隔六个月发布一期。每完成一期,我们就向社区发布。然后我们继续完成下一期的工作,如此循环下去。
|
||||
|
||||
这种工作方式,让我们看到了很大的成功。近 3000 人订阅了[该系列的新书][9],《开放式组织领袖手册》。我们用 6 个月的周期来完成这个项目,这样新书的发行日正好是前书的两周年纪念日。
|
||||
|
||||
在这样的背景下,我们完成这本书的方式是简单直接的:针对开放工作这个主题,我们收集了最好的故事,并将它们组织起来形成文章,招募作者填补一些内容上的空白,使用开源工具调整字体样式,与设计师一起完成封面,最终发布这本书。这样的工作方式使得我们能按照自己的时间线(GSD)全速前进。到[第三本书][10]时,我们的工作流已经基本完善了。
|
||||
|
||||
然而这一切在我们计划开始《开放式组织》的最后一本书时改变了,这本书将重点放在开放式组织和 IT 文化的交融上。我提议使用开放式决策框架来完成这本书,因为我想通过这本书证明开放式的工作方法能得到更好的结果,尽管我知道这可能会完全改变我们的工作方式。时间非常紧张(只有两个半月),但我们还是决定试一试。
|
||||
|
||||
### 用开放式决策框架来完成一本书
|
||||
|
||||
开放式决策框架列出了组成开放决策制定过程的 4 个阶段。下面是我们在每个阶段中的工作情况(以及开放是如何帮助完成工作的)。
|
||||
|
||||
### 1\. 构思
|
||||
|
||||
我们首先写了一份草稿,罗列了对项目设想的愿景。我们需要拿出东西来和潜在的“顾客”分享(在这个例子中,“顾客”指潜在的利益相关者和作者)。然后我们约了一些领域专家面谈,这些专家能够给我们直接的诚实的意见。这些专家表现出的热情与他们提供的指导验证了我们的想法,同时提出了反馈意见使我们能继续向前。如果我们没有得到这些验证,我们会退回到我们最初的想法,再决定从哪里重新开始。
|
||||
|
||||
### 2\. 计划与研究
|
||||
|
||||
经过几次面谈,我们准备在 [Opensource.com 上公布这个项目][11]。同时,我们在 [Github 上也公布了这个项目][12], 提供了项目描述,预计的时间线,并阐明了我们所受的约束。这次公布得到了很好的效果,我们最初计划的目录中欠缺了一些内容,在项目公布之后的 72 小时内就被补充完整了。另外(也是更重要的),读者针对一些章节,提出了本不在我们计划中的想法,但是读者觉得这些想法能够补充我们最初设想的版本。
|
||||
|
||||
我们体会到了 [Linus 法则][16]: "With more eyes, all _typos_ are shallow."
|
||||
|
||||
回顾过去,我觉得在项目的第一和第二个阶段,开放项目并不会影响我们搞定项目的能力。事实上,这样工作有一个很大的好处:发现并填补内容的空缺。我们不只是填补了空缺,我们是迅速地就填补了空缺,并且还是用我们自己从未考虑过的点子。这并不一定要求我们做更多的工作,只是改变了我们的工作方式。我们动用有限的人脉,邀请别人来写作,再组织收到的内容,设置上下文,将人们导向正确的方向。
|
||||
|
||||
### 3\. 设计,开发和测试
|
||||
|
||||
项目的这个阶段完全围绕项目管理,管理一些像猫一样特立独行的人,并处理项目的预期。我们有明确的截止时间,我们提前沟通,频繁沟通。我们还使用了一个战略:列出了贡献者和利益相关者,在项目的整个过程中向他们告知项目的进度,尤其是我们在 Github 上标出的里程碑。
|
||||
|
||||
最后,我们的书需要一个名字。我们收集了许多反馈,指出书名应该是什么,更重要的反馈指出了书名不应该是什么。我们通过 [Github 上的 issue][13] 收集反馈意见,并公开表示我们的团队将作最后的决定。当我们准备宣布最后的书名时,我的同事 Bryan Behrenshausen 做了很好的工作,[分享了我们作出决定的过程][14]。人们似乎对此感到高兴——即使他们不同意我们最后的书名。
|
||||
|
||||
书的“测试”阶段需要大量的[校对][15]。社区成员真的参与到回答这个“求助”贴中来。我们在 GitHub issue 上收到了大约 80 条意见,汇报校对工作的进度(更不用说通过电子邮件和其他反馈渠道获得的许多额外的反馈)。
|
||||
|
||||
关于搞定任务:在这个阶段,我们亲身体会了 [Linus 法则][16]:"With more eyes, all _typos_ are shallow." 如果我们像前三本书一样自己独立完成,那么整个校对的负担就会落在我们的肩上(就像这些书一样)!相反,社区成员慷慨地帮我们承担了校对的重担,我们的工作从自己校对(尽管我们仍然做了很多工作)转向管理所有的 change requests。对我们团队来说,这是一个受大家欢迎的改变;对社区来说,这是一个参与的机会。如果我们自己做的话,我们肯定能更快地完成校对,但是在开放的情况下,我们在截止日期之前发现了更多的错误,这一点毋庸置疑。
|
||||
|
||||
|
||||
|
||||
### 4\. 发布
|
||||
|
||||
好了,我们现在推出了这本书的最终版本。(或者只是第一版?)
|
||||
|
||||
遵循开放决策框架是《IT 文化变革指南》成功的关键。
|
||||
|
||||
我们把发布分为两个阶段。首先,根据我们公开的项目时间表,在最终日期之前的几天,我们安静地推出了这本书,以便让我们的社区贡献者帮助我们测试[下载表格][17]。第二阶段也就是现在,这本书的[通用版][18]正式公布。当然,我们在发布后仍然接受反馈,开源方式也正是如此。
|
||||
|
||||
### 成就解锁
|
||||
|
||||
遵循开放式决策框架是《IT 文化变革指南》成功的关键。通过与客户和利益相关者的合作,分享我们的制约因素,工作透明化,我们甚至超出了自己对图书项目的期望。
|
||||
|
||||
我对整个项目中的合作,反馈和活动感到非常满意。虽然有一段时间内没有像我想要的那样快速完成任务,这让我有一种焦虑感,但我很快就意识到,开放这个过程实际上让我们能完成更多的事情。基于上面我的概述这一点显而易见。
|
||||
|
||||
所以也许我应该重新考虑我的 GSD 心态,并将其扩展到 GMD:Get **more** done,搞定**更多**工作,并且就这个例子说,取得更好的结果。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Jason Hibbets - Jason Hibbets 是 Red Hat 企业营销中的高级社区传播者,也是 Opensource.com 的社区经理。 他自2003年以来一直在 Red Hat,并且是开源城市基金会的创立者。之前的职位包括高级营销专员,项目经理,Red Hat 知识库维护人员和支持工程师。
|
||||
|
||||
-----------
|
||||
|
||||
via: https://opensource.com/open-organization/17/6/working-open-and-gsd
|
||||
|
||||
作者:[Jason Hibbets ][a]
|
||||
译者:[explosic4](https://github.com/explosic4)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jhibbets
|
||||
[1]:https://opensource.com/open-organization/resources/culture-change?src=too_resource_menu
|
||||
[2]:https://opensource.com/open-organization/resources/leaders-manual?src=too_resource_menu
|
||||
[3]:https://opensource.com/open-organization/resources/open-org-definition?src=too_resource_menu
|
||||
[4]:https://opensource.com/open-organization/resources/open-decision-framework?src=too_resource_menu
|
||||
[5]:https://opensource.com/open-organization/17/6/working-open-and-gsd?rate=ZgpGc0D07SjGkTOf708lnNqbF_HvkhXTXeSzRKMhvVM
|
||||
[6]:https://opensource.com/open-organization/resources/open-decision-framework
|
||||
[7]:https://opensource.com/open-organization/17/5/team-impostor-syndrome
|
||||
[8]:https://opensource.com/open-organization/resources
|
||||
[9]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[10]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[11]:https://opensource.com/open-organization/17/3/announcing-it-culture-book
|
||||
[12]:https://github.com/open-organization-ambassadors/open-org-it-culture
|
||||
[13]:https://github.com/open-organization-ambassadors/open-org-it-culture/issues/20
|
||||
[14]:https://github.com/open-organization-ambassadors/open-org-it-culture/issues/20#issuecomment-297970303
|
||||
[15]:https://github.com/open-organization-ambassadors/open-org-it-culture/issues/29
|
||||
[16]:https://en.wikipedia.org/wiki/Linus%27s_Law
|
||||
[17]:https://opensource.com/open-organization/resources/culture-change
|
||||
[18]:https://opensource.com/open-organization/resources/culture-change
|
||||
[19]:https://opensource.com/user/10530/feed
|
||||
[20]:https://opensource.com/users/jhibbets
|
101
translated/tech/20141107 Dont Waste Time Writing Perfect Code.md
Normal file
101
translated/tech/20141107 Dont Waste Time Writing Perfect Code.md
Normal file
@ -0,0 +1,101 @@
|
||||
不要浪费时间写出完美的代码
|
||||
============================================================
|
||||
|
||||
|
||||
系统可以持续 5 年或 10 年甚至 20 年或者更多年。但是,特定代码行的生命,即使是设计,通常要短得多:当你通过不同的方法来解决问题,它会有几个月或几天甚至几分钟的生命。
|
||||
|
||||
### 一些代码比其他代码重要
|
||||
|
||||
通过研究[代码如何随时间变化][4],Michael Feathers 确定了[一个代码库曲线][5]。每个系统都有代码,通常有很多是一次性写的,永远都不会改变。但是有少量的代码,包括最重要和最有用的代码,会一次又一次地改变、几次重构或者从头重写。
|
||||
|
||||
当你在一个系统中有更多体验,或者有一个问题领域或体系结构方法时,应该更容易了解并预测什么代码将永远改变,哪些代码将永远不会改变:什么代码重要,什么代码不重要。
|
||||
|
||||
### 我们应该尝试编写完美的代码么?
|
||||
|
||||
我们知道我们应该写[干净的代码][6],代码是一致的,很明显也要尽可能简单的。
|
||||
|
||||
有些人把这变成极端,他们迫使自己写出[美丽][7]、优雅,接近[完美][8]的代码,[痴迷重构][9]并且纠结每个细节。
|
||||
|
||||
但是,如果代码只写一次就不再改变,或者走向另一个极端、一直在改变的话,那么就像尝试写出完美的需求或者做出完美的前期设计那样,写完美的代码难道不是既浪费又没有必要(也不可能实现)的么?
|
||||
|
||||
> “你不能写完美的软件。这受伤么?作为生活的公理接受它、拥抱它、庆祝它。因为完美的软件不存在。计算机的短暂历史中没有人写过一个完美的软件。你不可能成为第一个。除非你接受这个事实,否则你最终会浪费时间和精力追逐不可能的梦想。”
|
||||
> Andrew Hunt,[务实的程序员: 从熟练工到大师][10]
|
||||
|
||||
一次性写的代码不需要美观优雅。但它必须是正确的、可以理解的 - 因为不会改变的代码在系统的整个生命周期内可能仍然被阅读很多次。它不需要干净和紧凑 - 只要干净够了。代码中[复制和粘贴][11]和其他小的裁剪是允许的,至少要达到这点。这是永远不需要打磨的代码。即使周围的其他代码正在更改,这也是不需要重构的代码(除非你需要更改它)。这是不值得花费额外时间的代码。
|
||||
|
||||
你一直在改变的代码怎么样?纠结风格以及提出最优雅的解决方案是浪费时间,因为这段代码可能会再次更改,甚至可能会在几天或几周内重写。因此,每当你进行更改时,都会[痴迷重构][12]代码,或者没有重构没有改变的代码,因为它可能会更好。代码总是可以更好。但这并不重要。
|
||||
|
||||
重要的是:代码是否做了应该做的 - 是正确的、可用的和高效的?它可以[处理错误和不良数据][13]而不会崩溃 - 至少[安全地失败][14]?调试容易吗?改变是否容易安全?这些不是美的主观方面。这些是成功与失败实际措施之间的差异。
|
||||
|
||||
### 务实编码和重构
|
||||
|
||||
精益发展的核心思想是:不要浪费时间在不重要的事情上。这应该提醒我们该如何编写代码,以及我们如何重构它、审查它、测试它。
|
||||
|
||||
为了让工作完成,只[重构你需要的][15] - [Martin Fowler][16] 称之为机会主义重构(理解、清理、[童子军规则][17] )和准备重构。足够使变化更加容易和安全,而不是更多。如果你不改变代码,那么它并不会如看起来的那么重要。
|
||||
|
||||
在代码审查中,只聚焦在[重要的事上][18]。代码是否正确?有防御吗?是否安全?你能理解么?改变是否安全?
|
||||
|
||||
忘记风格(除非风格变成无法理解)。让你的 IDE 处理格式化。不要争议代码是否应该是“更多的 OO”。只要它有意义,它是否适当地遵循这种或那种模式并不重要。无论你喜欢还是不喜欢都没关系。无论你有更好的方式做到这一点并不重要 - 除非你在教新接触这个平台或者语言的人,而且需要在做代码审查时做一部分指导。
|
||||
|
||||
写测试很重要。测试涵盖主要流程和重要的意外情况。测试让你用最少的工作获得最多的信息和最大的信心。[大面积覆盖测试,或小型测试][19] - 都没关系,只要他们做这个工作,在编写代码之前或之后编写测试并不重要。
|
||||
|
||||
### 不是(只是)关于代码
|
||||
|
||||
建筑和工程隐喻从未对软件有效。我们不是设计和建造几年或几代将保持基本不变的桥梁或摩天大楼。我们构建的更加弹性和抽象,更加短暂的东西。代码写来是被修改的 - 这就是为什么它被称为“软件”。
|
||||
|
||||
> “经过五年的使用和修改,成功的软件程序的源码通常和它的原始形式完全无法识别,而五年后的成功建筑几乎没有变化。”
|
||||
> Kevin Tate,[可持续软件开发][20]
|
||||
|
||||
我们需要将代码看作是我们工作的一个暂时的人工品:
|
||||
|
||||
> 有时候面对更重要的事情时,我们被引导迷信代码。我们经常有一个错觉,让发出的产品有价值的是代码,然而实际上可能是对问题领域的了解、设计难题的进展甚至是客户反馈。
|
||||
> Dan Grover,[Code and Creative Destruction][21]
|
||||
|
||||
迭代开发教会我们来体验和研究我们工作的结果 - 我们是否解决了这个问题,如果没有,我们学到了什么,我们如何改进?软件构建从不会完成。即使设计和代码是正确的,它们也可能只是一段时间,直到环境要求再次更改或替换为更好的东西。
|
||||
|
||||
我们需要编写好的代码:代码可以理解、正确、安全和可靠。我们需要重构和审查它,并写出好的有用的测试,同时知道这其中一些或者所有的代码,可能会很快被抛弃,或者它可能永远不会被再被查看,或者它可能根本不用。我们需要认识到,我们的一些工作必然会被浪费,并为此而进行优化。做需要做的,没有别的了。不要浪费时间尝试编写完美的代码。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Jim Bird
|
||||
|
||||
|
||||
我是一名经验丰富的软件开发经理,项目经理和 CTO,专注于软件开发和维护、软件质量和安全性方面的困难问题。在过去 15 年中,我一直在管理建立全球证券交易所和投资银行电子交易平台的团队。我特别感兴趣的是,小团队在构建真正的软件中如何有效率:在可靠性,性能和适应性极限限制下的高质量,安全系统。
|
||||
|
||||
------
|
||||
|
||||
via: https://dzone.com/articles/dont-waste-time-writing
|
||||
|
||||
作者:[Jim Bird][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://dzone.com/users/722527/jim.bird.html
|
||||
[1]:https://dzone.com/users/722527/jim.bird.html
|
||||
[2]:https://dzone.com/users/722527/jim.bird.html
|
||||
[3]:https://dzone.com/articles/dont-waste-time-writing?utm_source=wanqu.co&utm_campaign=Wanqu%20Daily&utm_medium=website#
|
||||
[4]:http://www.youtube.com/watch?v=0eAhzJ_KM-Q
|
||||
[5]:http://swreflections.blogspot.ca/2012/10/bad-things-happen-to-good-code.html
|
||||
[6]:http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882
|
||||
[7]:http://www.makinggoodsoftware.com/2011/03/27/the-obsession-with-beautiful-code-the-refactor-syndrome/
|
||||
[8]:http://stackoverflow.com/questions/1196405/how-to-keep-yourself-from-perfectionism-when-coding
|
||||
[9]:http://programmers.stackexchange.com/questions/43506/is-it-bad-to-have-an-obsessive-refactoring-disorder
|
||||
[10]:https://pragprog.com/the-pragmatic-programmer
|
||||
[11]:http://swreflections.blogspot.com/2012/03/is-copy-and-paste-programming-really.html
|
||||
[12]:http://programmers.stackexchange.com/questions/43506/is-it-bad-to-have-an-obsessive-refactoring-disorder
|
||||
[13]:http://swreflections.blogspot.com/2012/03/defensive-programming-being-just-enough.html
|
||||
[14]:https://buildsecurityin.us-cert.gov/articles/knowledge/principles/failing-securely
|
||||
[15]:http://swreflections.blogspot.com/2012/04/what-refactoring-is-and-what-it-isnt.html
|
||||
[16]:http://martinfowler.com/articles/workflowsOfRefactoring/
|
||||
[17]:http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule
|
||||
[18]:http://randomthoughtsonjavaprogramming.blogspot.com/2014/08/building-real-software-dont-waste-time.html
|
||||
[19]:http://swreflections.blogspot.com/2012/08/whats-better-big-fat-tests-or-little.html
|
||||
[20]:http://www.amazon.com/Sustainable-Software-Development-Agile-Perspective/dp/0321286081
|
||||
[21]:http://dangrover.com/2013/07/16/code-and-creative-destruction/
|
||||
[22]:https://dzone.com/devops-tutorials-tools-news
|
||||
[23]:https://dzone.com/articles/dont-waste-time-writing?utm_source=wanqu.co&utm_campaign=Wanqu%20Daily&utm_medium=website#
|
||||
[24]:https://dzone.com/go?i=228233&u=https%3A%2F%2Foffers.automic.com%2Fblueprint-to-continuous-delivery-with-automic-release-automation%3Futm_campaign%3DAMER%252520Online%252520Syndication%252520DZone%252520Platinum%252520Sponsorship%252520Ads%252520JULY-2017%26utm_source%3DDzone%252520Ads%26utm_medium%3DBlueprint%252520to%252520CD
|
@ -0,0 +1,113 @@
|
||||
用户报告:Steam Machines 与 SteamOS 发布一周年记
|
||||
====
|
||||
|
||||
去年今日,在非常符合 Valve 风格的跳票之后大众迎来了 [Steam Machine 的发布][2]。即使是在 Linux 桌面环境对于游戏的支持大步进步的今天,Steam Machines 作为一个平台依然没有飞跃,而 SteamOS 似乎也止步不前。这些由 Valve 发起的项目究竟怎么了?这些项目为何被发起,又是如何失败的?一些改进又是否曾有机会挽救这些项目的成败?
|
||||
|
||||
**行业环境**
|
||||
|
||||
在 2012 年 Windows 8 发布的时候,微软像 iOS 与 Android 那样,为 Windows 集成了一个应用商店。在微软试图推广对触摸体验友好的界面时,为了更好的提供 “Metro” UI 语言指导下的沉浸式触摸体验,他们同时推出了一系列叫做 “WinRT” 的 API。然而为了能够使用这套 API,应用开发者们必须把应用程序通过 Windows 应用商城发布,并且正如其他应用商城那样,微软从中抽成30%。对于 Valve 的 CEO,Gabe Newell (G胖) 而言,这种限制发布平台和抽成行为是让人无法接受的,而且他前瞻地看到了微软利用行业龙头地位来推广 Windows 商店和 Metro 应用对于 Valve 潜在的危险,正如当年微软用 IE 浏览器击垮 Netscape 浏览器一样。
|
||||
|
||||
对于 Valve 来说,运行 Windows 的 PC 的优势在于任何人都可以不受操作系统和硬件方的限制运行各种软件。当像 Windows 这样的专有平台对像 Steam 这样的第三方软件限制越来越严格时,应用开发者们自然会想要寻找一个对任何人都更开放和自由的替代品,他们很自然地会想到 Linux。Linux 本质上只是一套内核,但你可以轻易地使用 GNU 组件、GNOME 等软件在这套内核上开发出一个操作系统,比如 Ubuntu 就是这么来的。推行 Ubuntu 或者其他 Linux 发行版自然可以为 Valve 提供一个无拘无束的平台,以防止微软或者苹果变成 Valve 作为第三方平台之路上的敌人,但 Linux 甚至给了 Valve 一个创造新的操作系统平台的机会。
|
||||
|
||||
**概念化**
|
||||
|
||||
如果我们把 Steam Machines 叫做主机的话,Valve 当时似乎认定了主机平台是一个机会。为了迎合用户对于电视主机平台用户界面的审美期待,同时也为了让玩家更好地从稍远的距离上在电视上玩游戏,Valve 为 Steam 推出了 Big Picture 模式。Steam Machines 的核心要点是开放性;比方说所有的软件都被设计成可以脱离 Windows 工作,又比如说 Steam Machines 手柄的 CAD 图纸也被公布出来以便支持玩家二次创作。
|
||||
|
||||
原初计划中,Valve 打算设计一款官方的 Steam Machine 作为旗舰机型。但最终,这些机型只在 2013 年的时候作为原型机给予了部分测试者用于测试。Valve 后来也允许像戴尔这样的 OEM 厂商们制造 Steam Machines,并且也赋予了他们定制价格和配置规格的权利。有一家叫做 “Xi3” 的公司展示了他们设计的 Steam Machine 小型机型,那款机型小到可以放在手掌上,这一新闻创造了围绕 Steam Machines 的更多热烈讨论。最终,Valve 决定不自己设计制造 Steam Machines,而全权交给 OEM 合作厂商们。
|
||||
|
||||
这一过程中还有很多天马行空的创意被列入考量,比如在手柄上加入生物识别技术,眼球追踪以及动作控制等。在这些最初的想法里,陀螺仪被加入了 Steam Controller 手柄,HTC Vive 的手柄也有各种动作追踪仪器;这些想法可能最初都来源于 Steam 手柄的设计过程中。手柄最初还有些更激进的设计,比如在中心放置一块可定制化并且会随着游戏内容变化的触摸屏。但最后的最后,发布会上的手柄偏向保守了许多,但也有诸如双触摸板和内置软件等黑科技。Valve 也考虑过制作面向笔记本类型硬件的 Steam Machines 和 SteamOS。这个企划最终没有任何成果,但也许 “Smach Z” 手持游戏机会是发展的方向之一。
|
||||
|
||||
在 [2013年九月][3],Valve 对外界宣布了 Steam Machines 和 SteamOS, 并且预告会在 2014 年中发布。前述的 300 台原型机在当年 12 月被分发给了测试者们,随后次年 1 月,2000 台原型机又被分发给了开发者们。SteamOS 也在那段时间被分发给有 Linux 经验的测试者们试用。根据当时的测试反馈,Valve 最终决定把产品发布延期到 2015 年 11 月。
|
||||
|
||||
SteamOS 的延期跳票给合作伙伴带来了问题;戴尔的 Steam Machine 由于早发售了一年结果不得不改为搭配了额外软件甚至运行着 Windows 操作系统的 Alienware Alpha。
|
||||
|
||||
**正式发布**
|
||||
|
||||
在最终的正式发布会上,Valve 和 OEM 合作商们发布了 Steam Machines,同时 Valve 还推出了 Steam Controller 手柄和 Steam Link 串流游戏设备。Valve 也在线下零售行业比如 GameStop 里开辟了货架空间。在发布会前,有几家 OEM 合作商退出了与 Valve 的合作;比如 Origin PC 和 Falcon Northwest 这两家高端精品主机设计商。他们宣称 Steam 生态的性能问题和一些限制迫使他们决定弃用 SteamOS。
|
||||
|
||||
Steam Machines 在发布后收到了褒贬不一的评价。另一方面 Steam Link 则普遍受到好评,很多人表示愿意在客厅电视旁为他们已有的 PC 系统购买 Steam Link, 而不是购置一台全新的 Steam Machine。Steam Controller 手柄则受到其丰富功能伴随而来的陡峭学习曲线影响,评价一败涂地。然而针对 Steam Machines 的批评则是最猛烈的。诸如 LinusTechTips 这样的评测团体 (译者:YouTube硬件界老大,个人也经常看他们节目) 注意到了主机的明显的不足,其中甚至不乏性能问题。很多厂商的 Machines 都被批评为性价比极低,特别是经过和玩家们自己组装的同配置机器或者电视主机做对比之后。SteamOS 则被批评为兼容性有问题,Bugs 太多,以及性能不及 Windows。在所有 Machines 里,戴尔的 Alienware Alpha 被评价为最有意思的一款,主要是由于品牌价值和机型外观极小的缘故。
|
||||
|
||||
通过把 Debian Linux 操作系统作为开发基础,Valve 得以为 SteamOS 平台找到很多原本就存在与 Steam 平台上的 Linux 兼容游戏来作为“首发游戏”。所以起初大家认为在“首发游戏”上 Steam Machines 对比其他新发布的主机优势明显。然而,很多宣称会在新平台上发布的游戏要么跳票要么被中断了。Rocket League 和 Mad Max 在宣布支持新平台整整一年后才真正发布,而 巫师3 和蝙蝠侠:阿克汉姆骑士 甚至从来没有发布在新平台上。就 巫师3 的情况而言,他们的开发者 CD Projekt Red 拒绝承认他们曾经说过要支持新平台;然而他们的游戏曾在宣布支持 Linux 和 SteamOS 的游戏列表里赫然醒目。雪上加霜的是,很多 AAA 级的大作甚至没被宣布移植,虽然最近这种情况稍有所好转了。
|
||||
|
||||
**被忽视的**
|
||||
|
||||
在 Stame Machines 发售后,Valve 的开发者们很快转移到了其他项目的工作中去了。在当时,VR 项目最为被内部所重视,6 月份的时候大约有 1/3 的员工都在相关项目上工作。Valve 把 VR 视为亟待开发的一片领域,而他们的 Steam 则应该作为分发 VR 内容的生态环境。通过与 HTC 合作生产,Valve 设计并制造出了他们自己的 VR 头戴和手柄,并计划在将来更新换代。然而与此同时,Linux 和 Steam Machines 都渐渐淡出了视野。SteamVR 甚至直到最近才刚刚支持 Linux (其实还没对普通消费者开放使用,只在 SteamDevDays 上展示过对 Linux 的支持),而这一点则让我们怀疑 Valve 在 Stame Machines 和 Linux 的开发上是否下定了足够的决心。
|
||||
|
||||
SteamOS 自发布以来几乎止步不前。SteamOS 2.0 作为上一个大版本号更新,几乎只是同步了 Debian 上游的变化,而且还需要用户重新安装整个系统,而之后的小补丁也只是在做些上游更新的配合。当 Valve 在其他事关性能和用户体验的项目,例如 Mesa,上进步匪浅的时候,针对 Steam Machines 的相关项目则少有顾及。
|
||||
|
||||
很多原本应有的功能都从未完成。Steam 的内置功能,例如聊天和直播,都依然处于较弱的状态,而且这种落后会影响所有平台上的 Steam 用户体验。更具体来说,Steam 没有像其他主流主机平台一样把诸如 Netflix,Twitch 和 Spotify 之类的服务集成到客户端里,而通过 Steam 内置的浏览器使用这些服务则体验极差,甚至无法使用;而如果要使用第三方软件则需要开启 Terminal,而且很多软件甚至无法支持控制手柄 —— 无论从哪方面讲这样的用户界面体验都糟糕透顶。
|
||||
|
||||
Valve 同时也几乎没有花任何力气去推广他们的新平台,而选择把一切都交由 OEM 厂商们去做。然而,几乎所有 OEM 合作商们要么是高端主机定制商,要么是电脑生产商,要么是廉价电脑公司(译者:简而言之没有一家有大型宣传渠道)。在所有 OEM 中,只有戴尔是 PC 市场的大腕,也只有他们真正给 Steam Machines 做了广告宣传。
|
||||
|
||||
最终销量也不尽人意。截至 2016 年 6 月,7 个月间 Steam Controller 手柄的销量在包括捆绑销售的情况下仅销售 500,000 件。这让 Steam Machines 的零售情况差到只能被归类到十万俱乐部的最底部。对比已经存在的巨大 PC 和主机游戏平台,可以说销量极低。
|
||||
|
||||
**事后诸葛亮**
|
||||
|
||||
既然知道了 Steam Machines 的历史,我们又能否总结出失败的原因以及可能存在的翻身改进呢?
|
||||
|
||||
_视野与目标_
|
||||
|
||||
Steam Machines 从来没搞清楚他们在市场里的定位究竟是什么,也从来没说清楚他们具体有何优势。从 PC 市场的角度来说,自己搭建台式机已经非常普及并且往往让电脑的可以匹配玩家自己的目标,同时升级性也非常好。从主机平台的角度来说,Steam Machines 又被主机本身的相对廉价所打败,虽然算上游戏可能稍微便宜一些,但主机上的用户体验也直白很多。
|
||||
|
||||
PC 用户会把多功能性看得很重,他们不仅能用电脑打游戏,也一样能办公和做各种各样的事情。即使 Steam Machines 本质上也是运行着 SteamOS 操作系统的自由的 Linux 电脑,但操作系统和市场宣传加固了 PC 玩家们对 Steam Machines 是不可定制硬件、低价的更接近主机的印象。即使这些 PC 用户能接受在客厅里购置一台 Steam Machines,他们也有 Steam Link 可以选择,而且很多更小型机比如 NUC 和 Mini-ITX 主板定制机可以让他们搭建更适合放在客厅里的电脑。SteamOS 软件也允许把这些硬件转变为 Steam Machines,但寻求灵活性和兼容性的用户通常都会使用一般 Linux 发行版或者 Windows。而且最近的 Windows 和 Linux 桌面环境都让维护一般用户的操作系统变得自动化和简单了。
|
||||
|
||||
电视主机用户们则把易用性放在第一。虽然近年来主机的功能也逐渐扩展,比如可以播放视频或者串流,但总体而言,用户还是把即插即用即玩、不用担心兼容性和性能问题以及低门槛放在第一。主机的使用寿命也往往较长,一般在 4-7 年左右,而统一固定的硬件也让游戏开发者们能针对其更好的优化和调试软件。现在刚刚兴起的中代升级,例如天蝎和 PS 4 Pro 则可能会打破这样统一的游戏体验,但无论如何厂商还是会要求开发者们需要保证游戏在原机型上的体验。为了提高用户粘性,主机也会有自己的社交系统和独占游戏。而主机上的游戏也有实体版,以便将来重用或者二手转卖,这对零售商和用户都是好事儿。Steam Machines 则完全没有这方面的保证;即使长得像一台客厅主机,他们却有 PC 高昂的价格和复杂的硬件情况。
|
||||
|
||||
_妥协_
|
||||
|
||||
综上所述,Steam Machines 可以说是“集大成者”,吸取了两边的缺点,又没有自己明确的定位。更糟糕的是 Steam Machines 还展现了 PC 和主机都没有的毛病,比如没有 AAA 大作,又没有 Netflix 这样的客户端。抛开这些不说,Valve 在提高他们产品这件事上几乎没有出力,甚至没有尝试着解决 PC 和主机两头定位矛盾这一点。
|
||||
|
||||
然而在有些事情上也许原本 PC 和主机就没法折中妥协。像图像设定和 Mod 等内容的加入会无法保证“傻瓜机”一般的可靠,而且系统下层的复杂性也会逐渐暴露。
|
||||
|
||||
而最复杂的是 Steam Machines 多变的硬件情况,这使得用户不仅要考虑价格还要考虑配置,还要考虑这个价格下和别的系统(PC 和主机)比起来划算与否。更关键的是,Valve 无论如何也应该做出某种自动硬件检测机制,这样玩家才能知道是否能玩某个游戏,而且这个测试既得简单明了,又要能支持 Steam 上几乎所有游戏。同时,Valve 还要操心未来游戏对配置需求的变化,比如2016 年的 "A" 等主机三年后该给什么评分呢?
|
||||
|
||||
_Valve, 个人努力与公司结构_
|
||||
|
||||
尽管 Valve 在 Steam 上创造了辉煌,但其公司的内部结构可能对于开发一个像 Steam Machines 一样的平台是有害的。他们几乎没有领导的自由办公结构,以及所有人都可以自由移动到想要工作的项目组里决定了他们具有极大的创新,研发,甚至开发能力。据说 Valve 只愿意招他们眼中的的 "顶尖人才",通过极其严格的筛选标准,并通过让他们在自己认为“有意义”的项目里工作以保持热情。然而这种思路很可能是错误的;拉帮结派总是存在,而 G胖 的话或许比公司手册上写的还管用,而又有人时不时会由于特殊原因被雇佣或解雇。
|
||||
|
||||
正因为如此,很多虽不闪闪发光甚至维护起来有些无聊但又需要大量时间的项目很容易枯萎。Valve 的客服已是被人诟病已久的毛病,玩家经常觉得被无视了,而 Valve 则经常不到万不得已法律要求的情况下绝不行动:例如自动退款系统,就是在澳大利亚和欧盟法律的要求下才被加入的;更有目前还没结案的华盛顿州 CS:GO 物品在线赌博网站一案。
|
||||
|
||||
各种因素最后也反映在 Steam Machines 这一项目上。Valve 方面的跳票迫使一些合作方做出了尴尬的决定,比如戴尔提前一年发布了 Alienware Alpha 外观的 Steam Machine 就在一年后的正式发布时显得硬件状况落后了。跳票很可能也导致了游戏数量上的问题。开发者和硬件合作商的对跳票和最终毫无轰动的发布也不明朗。Valve 的 VR 平台干脆直接不支持 Linux,而直到最近,SteamVR 都风风火火迭代了好几次之后,SteamOS 和 Linux 依然不支持 VR。
|
||||
|
||||
_“长线钓鱼”_
|
||||
|
||||
尽管 Valve 方面对未来的规划毫无透露,有些人依然认为 Valve 在 Steam Machine 和 SteamOS 上是放长线钓大鱼。他们论点是 Steam 本身也是这样的项目 —— 一开始作为游戏补丁平台出现,到现在无敌的游戏零售和玩家社交网络。虽然 Valve 的独占游戏比如 Half-Life 2 和 CS 也帮助了 Steam 平台的传播。但现今我们完全无法看到 Valve 像当初对 Steam 那样上心 Steam Machines。同时现在 Steam Machines 也面临着 Steam 从没碰到过的激烈竞争。而这些竞争里自然也包含 Valve 自己的那些把 Windows 作为平台的 Steam 客户端。
|
||||
|
||||
_真正目的_
|
||||
|
||||
介于投入在 Steam Machines 上的努力如此之少,有些人怀疑整个产品平台是不是仅仅作为某种博弈的筹码才被开发出来。原初 Steam Machines 就发家于担心微软和苹果通过自己的应用市场垄断游戏的反制手段当中,Valve 寄希望于 Steam Machines 可以在不备之时脱离那些操作系统的支持而运行,同时也是提醒开发者们,也许有一日整个 Steam 平台会独立出来。而当微软和苹果等方面的风口没有继续收紧的情况下,Valve 自然就放慢了开发进度。然而我不这样认为;Valve 其实已经花了不少精力与硬件商和游戏开发者们共同推行这件事,不可能仅仅是为了吓吓他人就终止项目。你可以把这件事想成,微软和 Valve 都在吓唬对方 —— 微软推出了突然收紧的 Windows 8 而 Valve 则展示了一下可以独立门户的能力。
|
||||
|
||||
但即使如此,谁能保证开发者不会愿意跟着微软的封闭环境跑了呢?万一微软最后能提供更好的待遇和用户群体呢?更何况,微软现在正大力推行 Xbox 和 Windows 的交叉和整合,甚至 Xbox 独占游戏也出现在 Windows 上,这一切都没有损害 Windows 原本的平台性定位 —— 谁还能说微软方面不是 Steam 的直接竞争对手呢?
|
||||
|
||||
还会有人说这一切一切都是为了推进 Linux 生态环境尽快接纳 PC 游戏,而 Steam Machines 只是想为此大力推一把。但如果是这样,那这个目的实在是性价比极低,因为本愿意支持 Linux 的自然会开发,而 Steam Machines 这一出甚至会让开发者对平台期待额落空从而伤害到他们。
|
||||
|
||||
**大家眼中 Valve 曾经的机会**
|
||||
|
||||
我认为 Steam Machines 的创意还是很有趣的,而也有一个与之匹配的市场,但就结果而言 Valve 投入的创意和努力还不够多,而定位模糊也伤害了这个产品。我认为 Steam Machines 的优势在于能砍掉 PC 游戏传统的复杂性,比如硬件问题,整机寿命和维护等;但又能拥有游戏便宜,可以打 Mod 等好处,而且也可以做各种定制化以满足用户需求。但他们必须要让产品的核心内容:价格,市场营销,机型产品线还有软件的质量有所保证才行。
|
||||
|
||||
我认为 Steam Machines 可以做出一点妥协,比如硬件升级性(尽管这一点还是有可能被保留下来的 —— 但也要极为小心整个过程对用户体验的影响)和产品选择性,来减少摩擦成本。PC 一直会是一个并列的选项。想给用户产品可选性带来的只有一个困境,成吨的质量低下的 Steam Machines 根本不能解决。Valve 得自己造一台旗舰机型来指明 Steam Machines 的方向。毫无疑问,Alienware 的产品是最接近理想目标的,但他说到底也不是 Valve 的官方之作。Valve 内部不乏优秀的工业设计人才,如果他们愿意投入足够多的重视,我认为结果也许会值得他们努力。而像戴尔和 HTC 这样的公司则可以用他们丰富的经验帮 Valve 制造成品。直接钦定 Steam Machines 的硬件周期,并且在期间只推出 1-2 台机型也有助于帮助解决问题,更不用说他们还可以依次和开发商们确立性能的基准线。我不知道 OEM 合作商们该怎么办;如果 Valve 专注于自己的几台设备里,OEM 们很可能会变得多余甚至拖平台后腿。
|
||||
|
||||
我觉得修复软件问题是最关键的。很多问题在严重拖着 Steam Machines 的后退,比如缺少主机上遍地都是,又能轻易安装在 PC 上的的 Netflix 和 Twitch,即使做好了客厅体验问题依然是严重的败笔。即使 Valve 已经在逐步购买电影的版权以便在 Steam 上发售,我觉得用户还是会倾向于去使用已经在市场上建立口碑的一些串流服务。这些问题需要被严肃地对待,因为玩家日益倾向于把主机作为家庭影院系统的一部分。同时,修复 Steam 客户端和平台的问题也很重要,和更多第三方服务商合作增加内容应该会是个好主意。性能问题和 Linux 下的显卡问题也很严重,不过好在他们最近在慢慢进步。移植游戏也是个问题。类似 Feral Interactive 或者 Aspyr Media 这样的游戏移植商可以帮助扩展 Steam 的商店游戏数量,但联系开发者和出版社可能会有问题,而且这两家移植商经常在移植的容器上搞自己的花样。Valve 已经在帮助游戏工作室自己移植游戏了,比如 Rocket League,不过这种情况很少见,而且就算 Valve 去帮忙了,也是非常符合 Valve 风格的拖拉。而 AAA 大作这一块内容也绝不应该被忽略 —— 近来这方面的情况已经有极大好转了,虽然 Linux 平台的支持好了很多,但在玩家数量不够以及 Valve 为 Steam Machines 提供的开发帮助甚少的情况下,Bethesda 这样的开发商依然不愿意移植游戏;同时,也有像 Denuvo 一样缺乏数字版权管理的公司难以向 Steam Machines 移植游戏。
|
||||
|
||||
在我看来 Valve 需要在除了软件和硬件的地方也多花些功夫。如果他们只有一个机型的话,他们可以很方便的在硬件生产上贴点钱。这样 Steam Machines 的价格就能跻身主机的行列,而且还能比自己组装 PC 要便宜。针对正确的市场群体做营销也很关键,即便我们还不知道目标玩家应该是谁(我个人会对这样的 Steam Machines 感兴趣,而且我有一整堆已经在 Steam 上以相当便宜的价格买好的游戏)。最后,我觉得零售商们其实不会对 Valve 的计划很感冒,毕竟他们要靠卖和倒卖实体游戏赚钱。
|
||||
|
||||
就算 Valve 在产品和平台上采纳过这些改进,我也不知道怎样才能激活 Steam Machines 的全市场潜力。总的来说,Valve 不仅得学习自己的经验教训,还应该参考曾经有过类似尝试的厂商们,比如尝试依靠开放平台的 3DO 和 Pippin;又或者那些从台式机体验的竞争力退赛的那些公司,其实 Valve 如今的情况和他们也有几分相似。亦或者他们也可以观察一下任天堂 Switch —— 毕竟任天堂也在尝试跨界的创新。
|
||||
|
||||
_注解: 上述点子由 liamdawe 整理,所有的想法都由用户提交。_
|
||||
|
||||
本文是被一位访客提交的,我们欢迎大家前来[提交文章][1]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.gamingonlinux.com/articles/user-editorial-steam-machines-steamos-after-a-year-in-the-wild.8474
|
||||
|
||||
作者:[calvin][a]
|
||||
译者:[Moelf](https://github.com/Moelf)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.gamingonlinux.com/profiles/5163
|
||||
[1]:https://www.gamingonlinux.com/submit-article/
|
||||
[2]:https://www.gamingonlinux.com/articles/steam-machines-steam-link-steam-controller-officially-released-steamos-sale.6201
|
||||
[3]:https://www.gamingonlinux.com/articles/valve-announces-steam-machines-you-can-win-one-too.2469
|
@ -1,49 +1,49 @@
|
||||
探索传统 JavaScript 基准测试
|
||||
============================================================
|
||||
|
||||
可以很公平地说,[JavaScript][22] 是当下软件工程最重要的技术。对于那些深入接触过编程语言、编译器和虚拟机的人来说,这仍然有点令人惊讶,因为在语言设计者们看来,JavaScript 不是十分优雅;在编译器工程师们看来,它没有多少可优化的地方;而且还没有一个伟大的标准库。这取决于你和谁吐槽,JavaScript 的缺点你花上数周都枚举不完,不过你总会找到一些你从所未知的神奇的东西。尽管这看起来明显困难重重,不过 JavaScript 还是成为了当今 web 的核心,并且还(通过 [Node.js][23])成为服务器端/云端的主导技术,甚至还开辟了进军物联网空间的道路。
|
||||
可以很公平地说,[JavaScript][22] 是当下软件工程中*最重要的技术*。对于那些深入接触过编程语言、编译器和虚拟机的人来说,这仍然有点令人惊讶,因为在语言设计者们看来,JavaScript 不是十分优雅;在编译器工程师们看来,它没有多少可优化的地方;甚至还没有一个伟大的标准库。这取决于你和谁吐槽,JavaScript 的缺点你花上数周都枚举不完,而你总会找到一些你从所未知的奇怪的东西。尽管这看起来明显困难重重,不过 JavaScript 还是成为了当今 web 的核心,并且还(通过 [Node.js][23])成为服务器端和云端的主导技术,甚至还开辟了进军物联网领域的道路。
|
||||
|
||||
问题来了,为什么 JavaScript 如此受欢迎?或者说如此成功?我知道没有一个很好的答案。如今我们有许多使用 JavaScript 的好理由,或许最重要的是围绕其构建的庞大的生态系统,以及今天大量可用的资源。但所有这一切实际上是发展到一定程度的后果。为什么 JavaScript 变得流行起来了?嗯,你或许会说,这是 web 多年来的通用语了。但是在很长一段时间里,人们极其讨厌 JavaScript。回顾过去,似乎第一波 JavaScript 浪潮爆发在上个年代的后半段。那个时候 JavaScript 引擎加速了各种不同的任务的执行,很自然的,这可能让很多人对 JavaScript 刮目相看。
|
||||
那么问题来了,为什么 JavaScript 如此受欢迎?或者说如此成功?我知道没有一个很好的答案。如今我们有许多使用 JavaScript 的好理由,或许最重要的是围绕其构建的庞大的生态系统,以及现今大量可用的资源。但所有这一切实际上是发展到一定程度的后果。为什么 JavaScript 变得流行起来了?嗯,你或许会说,这是 web 多年来的通用语了。但是在很长一段时间里,人们极其讨厌 JavaScript。回顾过去,似乎第一波 JavaScript 浪潮爆发在上个年代的后半段。那个时候 JavaScript 引擎加速了各种不同的任务的执行,很自然的,这可能让很多人对 JavaScript 刮目相看。
|
||||
|
||||
回到过去那些日子,这些加速测试使用了现在所谓的传统 JavaScript 基准——从苹果的 [SunSpider 基准][24](JavaScript 微基准之母)到 Mozilla 的 [Kraken 基准][25] 和谷歌的 V8 基准。后来,V8 基准被 [Octane 基准][26] 取代,而苹果发布了新的 [JetStream 基准][27]。这些传统的 JavaScript 基准测试驱动了无数人的努力,使 JavaScript 的性能达到了本世纪初没人能预料到的水平。据报道其性能加速达到了 1000 倍,一夜之间在网站使用 `<script>` 标签不再是魔鬼的舞蹈,做客户端不再仅仅是可能的了,甚至是被鼓励的。
|
||||
回到过去那些日子,这些加速使用了现在所谓的传统 JavaScript 基准进行测试——从苹果的 [SunSpider 基准][24](JavaScript 微基准之母)到 Mozilla 的 [Kraken 基准][25] 和谷歌的 V8 基准。后来,V8 基准被 [Octane 基准][26] 取代,而苹果发布了新的 [JetStream 基准][27]。这些传统的 JavaScript 基准测试驱动了无数人的努力,使 JavaScript 的性能达到了本世纪初没人能预料到的水平。据报道其性能加速达到了 1000 倍,一夜之间在网站使用 `<script>` 标签不再是与魔鬼共舞,做客户端不再仅仅是可能的了,甚至是被鼓励的。
|
||||
|
||||
[][28]
|
||||
|
||||
(来源: [Advanced JS performance with V8 and Web Assembly](https://www.youtube.com/watch?v=PvZdTZ1Nl5o), Chrome Developer Summit 2016, @s3ththompson。)
|
||||
|
||||
现在是 2016 年,所有(相关的)JavaScript 引擎的性能都达到了一个令人难以置信的水平,web 应用像原生应用一样快(或者能够像原生应用一样快)。引擎配有复杂的优化编译器,通过收集之前的关于类型/形状的反馈来推测某些操作(例如属性访问、二进制操作、比较、调用等),生成高度优化的机器代码的短序列。大多数优化是由 SunSpider 或 Kraken 等微基准以及 Octane 和 JetStream 等静态测试套件驱动的。由于有像 [asm.js][29] 和 [Emscripten][30] 这样的 JavaScript 技术,我们甚至可以将大型 C++ 应用程序编译成 JavaScript,并在你的浏览器上运行,而无需下载或安装任何东西。例如,现在你可以在 web 上玩 [AngryBots][31],无需沙盒,而过去的 web 游戏需要安装一堆诸如 Adobe Flash 或 Chrome PNaCl 的插件。
|
||||
现在是 2016 年,所有(相关的)JavaScript 引擎的性能都达到了一个令人难以置信的水平,web 应用像原生应用一样快(或者能够像原生应用一样快)。引擎配有复杂的优化编译器,通过收集之前的关于类型/形状的反馈来推测某些操作(例如属性访问、二进制操作、比较、调用等),生成高度优化的机器代码的短序列。大多数优化是由 SunSpider 或 Kraken 等微基准以及 Octane 和 JetStream 等静态测试套件驱动的。由于有像 [asm.js][29] 和 [Emscripten][30] 这样的 JavaScript 技术,我们甚至可以将大型 C++ 应用程序编译成 JavaScript,并在你的浏览器上运行,而无需下载或安装任何东西。例如,现在你可以在 web 上玩 [AngryBots][31],无需沙盒,而过去的 web 游戏需要安装一堆诸如 Adobe Flash 或 Chrome PNaCl 的特殊插件。
|
||||
|
||||
这些成就绝大多数都要归功于这些微基准和静态性能测试套件的出现,以及与这些传统 JavaScript 基准间的竞争的结果。你可以对 SunSpider 表示不满,但很显然,没有 SunSpider,JavaScript 的性能可能达不到今天的高度。好吧,赞美到此为止。现在看看另一方面,所有静态性能测试——无论是微基准还是大型应用的宏基准,都注定要随着时间的推移变成噩梦!为什么?因为在开始摆弄它之前,基准只能教你这么多。一旦达到某个阔值以上(或以下),那么有益于特定基准的优化的一般适用性将呈指数下降。例如,我们将 Octane 作为现实世界中 web 应用性能的代表,并且在相当长的一段时间里,它可能做得很不错,但是现在,Octane 与现实场景中的时间分布是截然不同的,因此即使眼下再优化 Octane 乃至超越自身,可能在现实世界中还是得不到任何显著的改进(无论是通用 web 还是 Node.js 的工作负载)。
|
||||
这些成就绝大多数都要归功于这些微基准和静态性能测试套件的出现,以及与这些传统的 JavaScript 基准间的竞争的结果。你可以对 SunSpider 表示不满,但很显然,没有 SunSpider,JavaScript 的性能可能达不到今天的高度。好吧,赞美到此为止。现在看看另一方面,所有的静态性能测试——无论是<ruby>微基准<rt>micro-benchmark</rt></ruby>还是大型应用的<ruby>宏基准<rt>macro-benchmark</rt></ruby>,都注定要随着时间的推移变成噩梦!为什么?因为在开始摆弄它之前,基准只能教你这么多。一旦达到某个阔值以上(或以下),那么有益于特定基准的优化的一般适用性将呈指数级下降。例如,我们将 Octane 作为现实世界中 web 应用性能的代表,并且在相当长的一段时间里,它可能做得很不错,但是现在,Octane 与现实场景中的时间分布是截然不同的,因此即使眼下再优化 Octane 乃至超越自身,可能在现实世界中还是得不到任何显著的改进(无论是通用 web 还是 Node.js 的工作负载)。
|
||||
|
||||
[][32]
|
||||
|
||||
(来源:[Real-World JavaScript Performance](https://youtu.be/xCx4uC7mn6Y),BlinkOn 6 conference,@tverwaes)
|
||||
|
||||
由于传统 JavaScript 基准(包括最新版的 JetStream 和 Octane)可能已经背离其有用性变得越来越远,我们开始在年初寻找新的方法来测量现实场景的性能,为 V8 和 Chrome 添加了大量新的性能追踪钩子。我们还特意添加一些机制来查看我们在浏览 web 时的时间开销,例如,是否是脚本执行、垃圾回收、编译等,并且这些调查的结果非常有趣和令人惊讶。从上面的幻灯片可以看出,运行 Octane 花费超过 70% 的时间去执行 JavaScript 和回收垃圾,而浏览 web 的时候,通常执行 JavaScript 花费的时间不到 30%,垃圾回收占用的时间永远不会超过 5%。在 Octane 中并没有体现出它花费了大量时间来解析和编译。因此,将更多的时间用在优化 JavaScript 执行上将提高你的 Octane 跑分,但不会对加载 [youtube.com][33] 有任何积极的影响。事实上,花费更多的时间来优化 JavaScript 执行甚至可能有损你现实场景的性能,因为编译器需要更多的时间,或者你需要跟踪更多的反馈,最终在编译、IC 和运行时桶开销了更多的时间。
|
||||
由于传统 JavaScript 基准(包括最新版的 JetStream 和 Octane)可能已经背离其有用性变得越来越远,我们开始在 2016 年初寻找新的方法来测量现实场景的性能,为 V8 和 Chrome 添加了大量新的性能追踪钩子。我们还特意添加一些机制来查看我们在浏览 web 时的时间究竟开销在哪里,例如,是脚本执行、垃圾回收、编译,还是什么地方?而这些调查的结果非常有趣和令人惊讶。从上面的幻灯片可以看出,运行 Octane 花费了 70% 以上的时间去执行 JavaScript 和垃圾回收,而浏览 web 的时候,通常执行 JavaScript 花费的时间不到 30%,垃圾回收占用的时间永远不会超过 5%。在 Octane 中并没有体现出它花费了大量时间来解析和编译。因此,将更多的时间用在优化 JavaScript 执行上将提高你的 Octane 跑分,但不会对加载 [youtube.com][33] 有任何积极的影响。事实上,花费更多的时间来优化 JavaScript 执行甚至可能有损你现实场景的性能,因为编译器需要更多的时间,或者你需要跟踪更多的反馈,最终在编译、垃圾回收和<ruby>运行时桶<rt>Runtime bucket</rt></ruby>等方面开销了更多的时间。
|
||||
|
||||
[][34]
|
||||
|
||||
还有另外一组基准测试用于测量浏览器整体性能(包括 JavaScript 和 DOM 性能),最新推出的是 [Speedometer 基准][35]。该基准试图通过运行一个用不同的主流 web 框架实现的简单的 [TodoMVC][36] 应用(现在看来有点过时了,不过新版本正在研发中)以捕获真实性能。上述幻灯片中的各种测试 (Angular、Ember、React、Vanilla、Flight 和 Backbone)挨着放在 Octane 之后,你可以看到这些测试似乎更好地代表了现在的性能指标。但是请注意,这些数据收集在本文撰写将近 6 个月以前,而且我们优化了更多的现实场景模式(例如我们正在重构垃圾回收系统以显著地降低开销,并且 [解析器也正在重新设计][37])。还要注意的是,虽然这看起来像是只和浏览器相关,但我们有非常强有力的证据表明传统的峰值性能基准也不能很好的代表现实场景中 Node.js 应用性能。
|
||||
还有另外一组基准测试用于测量浏览器整体性能(包括 JavaScript 和 DOM 性能),最新推出的是 [Speedometer 基准][35]。该基准试图通过运行一个用不同的主流 web 框架实现的简单的 [TodoMVC][36] 应用(现在看来有点过时了,不过新版本正在研发中)以捕获更真实的现实场景的性能。上述幻灯片中的各种测试 (Angular、Ember、React、Vanilla、Flight 和 Backbone)挨着放在 Octane 之后,你可以看到,此时此刻这些测试似乎更好地代表了现实世界的性能指标。但是请注意,这些数据收集在本文撰写将近 6 个月以前,而且我们优化了更多的现实场景模式(例如我们正在重构垃圾回收系统以显著地降低开销,并且 [解析器也正在重新设计][37])。还要注意的是,虽然这看起来像是只和浏览器相关,但我们有非常强有力的证据表明传统的峰值性能基准也不能很好的代表现实场景中 Node.js 应用性能。
|
||||
|
||||
[][38]
|
||||
|
||||
(来源: [Real-World JavaScript Performance](https://youtu.be/xCx4uC7mn6Y), BlinkOn 6 conference, @tverwaes.)
|
||||
|
||||
所有这一切可能已经路人皆知了,因此我将用本文剩下的部分强调一些具体案例,它们对关于我为什么认为这不仅有用,而且必须停止关注某一阔值的静态峰值性能基准测试对于 JavaScript 社区的健康是很关键的。让我通过一些例子说明 JavaScript 引擎怎样来玩弄基准的。
|
||||
所有这一切可能已经路人皆知了,因此我将用本文剩下的部分强调一些具体案例,它们对关于我为什么认为这不仅有用,而且必须停止关注某一阈值的静态峰值性能基准测试对于 JavaScript 社区的健康是很关键的。让我通过一些例子说明 JavaScript 引擎怎样来玩弄基准的。
|
||||
|
||||
### 臭名昭著的 SunSpider 案例
|
||||
|
||||
一篇关于传统 JavaScript 基准测试的博客,如果不指出 SunSpider 那个明显的问题,就是不完整的。让我们从一个在现实场景中没什么适用性的性能测试的典型例子开始:[`bitops-bitwise-and.js` 性能测试][39]。
|
||||
|
||||
[][40]
|
||||
|
||||
有一些算法需要进行快速的 AND 位运算,特别是从 `C/C++` 转译成 JavaScript 的地方,所以快速执行该操作确实有点意义。然而,现实场景中的网页可能不关心引擎在循环中执行 AND 位运算是否比另一个引擎快两倍。但是再盯着这段代码几秒钟后,你可能会注意到在第一次循环迭代之后 `bitwiseAndValue` 将变成 `0`,并且在接下来的 599999 次迭代中将保持为 `0`。所以一旦你让此获得了好的性能,比如在差不多的硬件上所有测试均低于 5ms,在经过尝试之后你会意识到,只有循环的第一次是必要的,而剩余的迭代只是在浪费时间(例如 [loop peeling][41] 后面的死代码),那你现在就可以开始玩弄这个基准测试了。这需要 JavaScript 中的一些机制来执行这种转换,即你需要检查 `bitwiseAndValue` 是全局对象的常规属性还是在执行脚本之前不存在,全局对象或者它的原型上必须没有拦截器。但如果你真的想要赢得这个基准测试,并且你愿意全力以赴,那么你可以在不到 1ms 的时间内完成这个测试。然而,这种优化将局限于这种特殊情况,并且测试的轻微修改可能不再触发它。
|
||||
|
||||
好吧,那么 [bitops-bitwise-and.js][42] 测试彻底肯定是微基准最失败的案例。让我们继续转移到 SunSpider 中更逼真的场景——[string-tagcloud.js][43] 测试,它基本上是运行一个较早版本的 `json.js polyfill`。该测试可以说看起来比位运算测试更合理,但是花点时间查看基准的配置之后立刻会发现:大量的时间浪费在一条 `eval` 表达式(高达 20% 的总执行时间被用于解析和编译,再加上实际执行编译后代码的 10% 的时间)。
|
||||
|
||||
[][44]
|
||||
|
||||
仔细看看,这个 `eval` 只执行了一次,并传递一个 JSON 格式的字符串,它包含一个由 2501 个含有 `tag` 和 `popularity` 属性的对象组成的数组:
|
||||
|
||||
```
|
||||
([
|
||||
@ -83,7 +83,7 @@
|
||||
])
|
||||
```
|
||||
|
||||
显然,解析这些对象字面量,为其生成本地代码,然后执行该代码的成本很高。将输入的字符串解析为 JSON 并生成适当的对象图的开销将更加低廉。所以,加快这个基准测试的一个小把戏就是模拟 `eval`,并尝试总是将数据首先作为 JSON 解析,如果以 JSON 方式读取失败,才回退进行真实的解析、编译、执行(尽管需要一些额外的黑魔法来跳过括号)。早在 2007 年,这甚至不算是一个坏点子,因为没有 [JSON.parse][45],不过在 2017 年这只是 JavaScript 引擎的技术债,可能会让 `eval` 的合法使用遥遥无期。
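下面用 Python 写一个小小的示意(这不是原文内容,只是为了说明“先按数据解析、失败再退回完整求值”这个思路;`fast_eval` 这个函数名是假设的,也不是 V8 的真实实现):

```python
import ast

def fast_eval(source):
    """先尝试把输入当作纯数据字面量解析(相当于按 JSON 解析),
    解析失败时才退回到真正的解析、编译、执行。"""
    try:
        return ast.literal_eval(source)        # 只接受字面量,走“快速路径”
    except (ValueError, SyntaxError):
        return eval(source)                    # 回退到完整求值

print(fast_eval("[{'tag': 'linux', 'popularity': 30}]"))  # 命中快速路径
print(fast_eval("1 + 2 * 3"))                             # 回退到 eval
```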
|
||||
|
||||
```
|
||||
--- string-tagcloud.js.ORIG 2016-12-14 09:00:52.869887104 +0100
|
||||
@ -99,7 +99,7 @@
|
||||
}
|
||||
```
|
||||
|
||||
事实上,将基准测试更新到现代 JavaScript 会立刻让性能暴增,正如今天的 `V8 LKGR` 从 36ms 降到了 26ms,性能足足提升了 30%!
|
||||
|
||||
```
|
||||
$ node string-tagcloud.js.ORIG
|
||||
@ -111,9 +111,9 @@ v8.0.0-pre
|
||||
$
|
||||
```
|
||||
|
||||
这是静态基准和性能测试套件常见的一个问题。今天,没有人会正儿八经地用 `eval` 解析 JSON 数据(不仅是因为性能问题,还出于严重的安全性考虑),而是坚持在最近五年写的所有代码中使用 [JSON.parse][46]。事实上,使用 `eval` 解析 JSON 可能会被视作产品级代码的一个漏洞!所以引擎作者致力于新代码性能所作的努力并没有反映在这个古老的基准中;相反,为了赢得 `string-tagcloud.js` 测试,反而把 `eval` 搞得不必要地~~更智能~~复杂化。
|
||||
|
||||
好吧,让我们看看另一个例子:[3d-cube.js][47]。这个基准测试做了很多矩阵运算,即便是最聪明的编译器对此也无可奈何,只能说执行而已。基本上,该基准测试花了大量的时间执行 `Loop` 函数及其调用的函数。
|
||||
|
||||
[][48]
|
||||
|
||||
@ -121,55 +121,59 @@ $
|
||||
|
||||
[][49]
|
||||
|
||||
这意味着我们基本上总是为 [Math.sin][50] 和 [Math.cos][51] 计算相同的值,每次执行都要计算 204 次。只有 3 个不同的输入值:
|
||||
|
||||
* 0.017453292519943295
|
||||
* 0.05235987755982989
|
||||
* 0.08726646259971647
|
||||
|
||||
显然,你可以在这里做的一件事情就是通过缓存以前的计算值来避免重复计算相同的正弦值和余弦值。事实上,这是 V8 以前的做法,而其它引擎例如 `SpiderMonkey` 目前仍然在这样做。我们从 V8 中删除了所谓的<ruby>超载缓存<rt>transcendental cache</rt></ruby>,因为缓存的开销在实际的工作负载中是不可忽视的,你不可能总是在一行代码中计算相同的值,这在其它地方倒不稀奇。当我们在 2013 和 2014 年移除这个特定的基准优化时,我们对 SunSpider 基准产生了强烈的冲击,但我们完全相信,为基准而优化并没有任何意义,并同时以这种方式批判了现实场景中的使用案例。
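下面是一个极简的 Python 示意(非原文内容,仅用来说明“缓存以前的计算值”这一思路,并非任何引擎的真实实现):

```python
import math
from functools import lru_cache

# 用 lru_cache 给 math.sin 包一层缓存:相同输入只真正计算一次。
cached_sin = lru_cache(maxsize=None)(math.sin)

angles = [0.017453292519943295, 0.05235987755982989, 0.08726646259971647]
for _ in range(204):            # 基准中每次执行要计算 204 次
    for a in angles:
        cached_sin(a)           # 除最初 3 次外全部命中缓存

print(cached_sin.cache_info())  # hits 远大于 misses
```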
|
||||
|
||||
[][52]
|
||||
|
||||
(来源:[arewefastyet.com](https://arewefastyet.com/#machine=12&view=single&suite=ss&subtest=cube&start=1343350217&end=1415382608))
|
||||
|
||||
显然,处理恒定正弦/余弦输入的更好的方法是一个内联的启发式算法,它试图平衡内联因素与其它不同的因素,例如在调用位置优先选择内联,其中<ruby>常量折叠<rt>constant folding</rt></ruby>可以是有益的,例如在 `RotateX`、`RotateY` 和 `RotateZ` 调用位置的案例中。但是出于各种原因,这对于 `Crankshaft` 编译器并不可行。使用 `Ignition` 和 `TurboFan` 倒是一个明智的选择,我们已经在开发更好的[内联启发式算法][53]。
|
||||
|
||||
#### 垃圾回收(GC)是有害的
|
||||
|
||||
除了这些非常具体的测试问题,SunSpider 基准测试还有一个根本性的问题:总体执行时间。目前 V8 在适当的英特尔硬件上运行整个基准测试大概只需要 200ms(使用默认配置)。<ruby>次垃圾回收<rt>minor GC</rt></ruby>在 1ms 到 25ms 之间(取决于新空间中的存活对象和旧空间的碎片),而<ruby>主垃圾回收<rt>major GC</rt></ruby>暂停的话可以轻松减掉 30ms(甚至不考虑增量标记的开销),这超过了 SunSpider 套件总体执行时间的 10%!因此,任何不想因垃圾回收循环而造成减速 10-20% 的引擎,必须用某种方式确保它在运行 SunSpider 时不会触发垃圾回收。
|
||||
|
||||
[][54]
|
||||
|
||||
就实现而言,有不同的方案,不过就我所知,没有一个在现实场景中产生了任何积极的影响。V8 使用了一个相当简单的技巧:由于每个 SunSpider 套件都运行在一个新的 `<iframe>` 中,这对应于 V8 中一个新的本地上下文,我们只需检测快速的 `<iframe>` 创建和处理(所有的 SunSpider 测试每个花费的时间小于 50ms),在这种情况下,在处理和创建之间执行垃圾回收,以确保我们在实际运行测试的时候不会触发垃圾回收。这个技巧运行的很好,在 99.9% 的案例中没有与实际用途冲突;除了时不时的你可能会受到打击,不管出于什么原因,如果你做的事情让你看起来像是 V8 的 SunSpider 测试驱动程序,你就可能被强制的垃圾回收打击到,这有可能对你的应用导致负面影响。所以谨记一点:**不要让你的应用看起来像 SunSpider!**
|
||||
|
||||
我可以继续展示更多 SunSpider 示例,但我不认为这非常有用。到目前为止,应该清楚的是,为刷新 SunSpider 评分而做的进一步优化在现实场景中没有带来任何好处。事实上,世界可能会因为没有 SunSpider 而更美好,因为引擎可以放弃只是用于 SunSpider 的奇淫技巧,或者甚至可以伤害到现实中的用例。不幸的是,SunSpider 仍然被(科技)媒体大量地用来比较他们眼中的浏览器性能,或者甚至用来比较手机!所以手机制造商和安卓制造商对于让 SunSpider(以及其它现在毫无意义的基准 FWIW) 上的 Chrome 看起来比较体面自然有一定的兴趣。手机制造商通过销售手机来赚钱,所以获得良好的评价对于电话部门甚至整间公司的成功至关重要。其中一些部门甚至在其手机中配置在 SunSpider 中得分较高的旧版 V8,将他们的用户置于各种未修复的安全漏洞之下(在新版中早已被修复),而让用户被最新版本的 V8 带来的任何现实场景的性能优势拒之门外!
|
||||
|
||||
[][55]
|
||||
|
||||
(来源:[www.engadget.com](https://www.engadget.com/2016/03/08/galaxy-s7-and-s7-edge-review/))
|
||||
|
||||
作为 JavaScript 社区的一员,如果我们真的想认真对待 JavaScript 领域的现实场景的性能,我们需要让各大技术媒体停止使用传统的 JavaScript 基准来比较浏览器或手机。能够在每个浏览器中运行一个基准测试,并比较它的得分自然是好的,但是请使用一个与当今世界相关的基准,例如真实的 web 页面;如果你觉得需要通过浏览器基准来比较两部手机,请至少考虑使用 [Speedometer][56]。
|
||||
|
||||
#### 轻松一刻
|
||||
|
||||

|
||||
|
||||
我一直很喜欢这个 [Myles Borins][57] 谈话,所以我不得不无耻地向他偷师。现在我们从 SunSpider 的谴责中回过头来,让我们继续检查其它经典基准。
|
||||
|
||||
### 不是那么显眼的 Kraken 案例
|
||||
|
||||
Kraken 基准是 [Mozilla 于 2010 年 9 月发布的][58],据说它包含了现实场景应用的片段/内核,并且与 SunSpider 相比更不像一个微基准测试集。我不想在 Kraken 上花太多口舌,因为我认为它不像 SunSpider 和 Octane 一样对 JavaScript 性能有着深远的影响,所以我将强调一个特别的案例——[audio-oscillator.js][59] 测试。
|
||||
|
||||
[][60]
|
||||
|
||||
正如你所见,测试调用了 `calcOsc` 函数 500 次。`calcOsc` 首先在全局的 `sine` `Oscillator` 上调用 `generate`,然后创建一个新的 `Oscillator`,调用它的 `generate` 方法并将其添加到全局的 `sine` `Oscillator` 里。没有详细说明测试为什么是这样做的,让我们看看 `Oscillator` 原型上的 `generate` 方法。
|
||||
|
||||
[][61]
|
||||
|
||||
让我们看看代码,你也许会觉得这里主要是循环中的数组访问或者乘法或者 [Math.round][62] 调用,但令人惊讶的是 `offset % this.waveTableLength` 表达式完全支配了 `Oscillator.prototype.generate` 的运行。在任何的英特尔机器上的分析器中运行此基准测试显示,超过 20% 的时间占用都属于我们为模数生成的 `idiv` 指令。然而一个有趣的发现是,`Oscillator` 实例的 `waveTableLength` 字段总是包含相同的值——2048,因为它在 `Oscillator` 构造器中只分配一次。
|
||||
|
||||
[][63]
|
||||
|
||||
如果我们知道整数模数运算的右边是 2 的幂,我们可以生成显然[更好的代码][64],完全避免了英特尔上的 `idiv` 指令。所以我们需要获取一种信息使 `this.waveTableLength` 从 `Oscillator` 构造器到 `Oscillator.prototype.generate` 中的模运算都是 2048。一个显而易见的方法是尝试依赖于将所有内容内嵌到 `calcOsc` 函数,并让 `load/store` 消除为我们进行的常量传播,但这对于在 `calcOsc` 函数之外分配的 `sine` `oscillator` 无效。
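这个优化背后的等价关系可以用一个简单的 Python 片段来验证(非原文内容,仅作演示):对非负整数而言,当除数是 2 的幂(例如 2048)时,`x % 2048` 与 `x & 2047` 结果相同,因此可以用一次按位与代替昂贵的除法指令。

```python
# waveTableLength 固定为 2048(2 的幂),掩码为 2047(十六进制 0x7ff)。
WAVE_TABLE_LENGTH = 2048
MASK = WAVE_TABLE_LENGTH - 1

for offset in range(0, 100000, 997):
    assert offset % WAVE_TABLE_LENGTH == offset & MASK

print("对非负整数,x % 2048 == x & 0x7ff 恒成立")
```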
|
||||
|
||||
因此,我们所做的就是添加对把某些常数值作为模运算符右侧反馈进行跟踪的支持。这在 V8 中是有意义的,因为我们为诸如 `+`、`*` 和 `%` 这样的二元操作跟踪类型反馈,也就是说,该操作会跟踪输入的类型和产生的输出的类型(参见最近的圆桌讨论中关于[动态语言的快速运算][65]的幻灯片)。当然,把它挂接到 `fullcodegen` 和 `Crankshaft` 上也相当容易,`MOD` 的 `BinaryOpIC` 也可以跟踪右侧已知的 2 的幂。
|
||||
|
||||
```
|
||||
$ ~/Projects/v8/out/Release/d8 --trace-ic audio-oscillator.js
|
||||
@ -179,7 +183,7 @@ $ ~/Projects/v8/out/Release/d8 --trace-ic audio-oscillator.js
|
||||
$
|
||||
```
|
||||
|
||||
事实上,以默认配置运行的 V8 (带有 Crankshaft 和 fullcodegen)表明 `BinaryOpIC` 正在为模数的右侧拾取适当的恒定反馈,并正确跟踪左侧始终是一个小整数(以 V8 的话叫做 `Smi`),我们也总是产生一个小整数结果。 使用 `--print-opt-code -code-comments` 查看生成的代码,很快就显示出,`Crankshaft` 利用反馈在 `Oscillator.prototype.generate` 中为整数模数生成一个有效的代码序列:
|
||||
|
||||
```
|
||||
[...SNIP...]
|
||||
@ -203,11 +207,11 @@ $
|
||||
[...SNIP...]
|
||||
```
|
||||
|
||||
所以你看到我们加载 `this.waveTableLength`(`rbx` 持有 `this` 的引用)的值,检查它仍然是 2048(十六进制的 0x800),如果是这样,就只用适当的掩码 0x7ff(`r11` 包含循环感应变量 `i` 的值)执行一个位操作 AND ,而不是使用 `idiv` 指令(注意保留左侧的符号)。
|
||||
|
||||
#### 过度特定的问题
|
||||
|
||||
所以这个技巧酷毙了,但正如许多针对基准的技巧一样,它有一个主要的缺点:太过于特定了!一旦右侧发生变化,所有优化过的代码就会被去优化(假设右侧不再总是我们能处理的那个 2 的幂),任何进一步的优化尝试都必须再次使用 `idiv`,因为 `BinaryOpIC` 很可能以 `Smi * Smi -> Smi` 的形式报告反馈。例如,假设我们实例化另一个 `Oscillator`,在其上设置不同的 `waveTableLength`,并为它调用 `generate`,那么即使我们实际上感兴趣的那个 `Oscillator` 不受影响,我们也会损失 20% 的性能(也就是说,引擎在这里施加了非局部的惩罚)。
|
||||
|
||||
```
|
||||
--- audio-oscillator.js.ORIG 2016-12-15 22:01:43.897033156 +0100
|
||||
@ -224,7 +228,7 @@ $
|
||||
sine.generate();
|
||||
```
|
||||
|
||||
将原始的 `audio-oscillator.js` 执行时间与包含额外未使用的 `Oscillator` 实例与修改的 `waveTableLength` 的版本进行比较,显示的是预期的结果:
|
||||
|
||||
```
|
||||
$ ~/Projects/v8/out/Release/d8 audio-oscillator.js.ORIG
|
||||
@ -234,9 +238,9 @@ Time (audio-oscillator-once): 81 ms.
|
||||
$
|
||||
```
|
||||
|
||||
这是一个非常可怕的性能悬崖的例子:假设开发人员为一个库编写代码,并使用某些样本输入值进行仔细的调整和优化,性能还不错。现在,用户在读过性能说明之后开始使用该库,却不知为何跌落了性能悬崖,因为她/他以一种稍微不同的方式使用该库,以某种方式污染了特定 `BinaryOpIC` 的类型反馈,从而遭受了 20% 的减速(与该库作者的测量相比),而库的作者和用户都对此无法解释,这看起来就像是随机发生的。
|
||||
|
||||
现在这种情况在 JavaScript 领域并不少见,不幸的是,这些悬崖中有几个是不可避免的,因为 JavaScript 的性能本来就是建立在乐观的假设和猜测之上的。我们已经花了 **大量** 时间和精力来试图找到既避免这些性能悬崖、又能提供(几乎)相同性能的方法。事实证明,尽可能避免 `idiv` 是很有意义的,即使你不一定知道右边总是一个 2 的幂(通过动态反馈),这也是为什么 `TurboFan` 的做法有异于 `Crankshaft`:它总是在运行时检查输入是否是 2 的幂。所以一般情况下,对右侧为(未知的)2 的幂的有符号整数取模,优化后的代码看起来像这样(伪代码):
|
||||
|
||||
```
|
||||
if 0 < rhs then
|
||||
@ -265,7 +269,7 @@ Time (audio-oscillator-once): 69 ms.
|
||||
$
|
||||
```
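把上面的伪代码用 Python 粗略地写出来大致如下(非原文内容,仅为示意;`fast_mod` 是假设的函数名,也不是 TurboFan 生成的真实代码):

```python
import math

def fast_mod(lhs, rhs):
    # 运行时检查右操作数是否为 2 的幂:是则用按位与代替除法,
    # 否则退回一般的取模(使用截断语义,与 JavaScript 的 % 一致)。
    if rhs > 0 and (rhs & (rhs - 1)) == 0:
        mask = rhs - 1
        return (lhs & mask) if lhs >= 0 else -((-lhs) & mask)  # 保留被除数符号
    return math.fmod(lhs, rhs)

assert fast_mod(100, 2048) == 100
assert fast_mod(-5, 2048) == -5
assert fast_mod(7, 3) == 1
```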
|
||||
|
||||
基准和过度特定化的问题在于:基准可以提示你该看看哪里、该怎么做,但它不会告诉你应该做到什么程度,也不能保护合理的优化。例如,所有 JavaScript 引擎都使用基准来防止性能回退,但是运行 Kraken 并不能保护我们在 `TurboFan` 中使用的常规方法,也就是说,我们可以把 `TurboFan` 中的模优化降级成 `Crankshaft` 那种过度特定的版本,而基准不会告诉我们这其实是一种回退,因为从基准的角度来看这样很好!现在你可以扩展基准,也许用和我们上面相同的方式,并试图用基准覆盖一切——引擎实现者在一定程度上确实是这样做的——但这种方法无法任意扩展。即使基准测试方便、易于用来沟通和竞争,你还是需要给常识留下空间,否则过度特定化将支配一切,你得到的将是一条非常非常细的可接受性能的线,线外则是巨大的性能悬崖。
|
||||
|
||||
Kraken 测试还有许多其它的问题,不过现在让我们继续讨论过去五年中最有影响力的 JavaScript 基准测试—— Octane 测试。
|
||||
|
||||
@ -328,11 +332,11 @@ function(t) {
|
||||
}
|
||||
```
|
||||
|
||||
更准确地说,时间并不是开销在这个函数本身,而是开销在由此触发的操作和内置库函数上。结果,我们把基准调用的总体执行时间的 4-7% 花在了 [Compare 运行时函数][76]上,它实现了[抽象关系比较][77]的一般情况。
|
||||
|
||||

|
||||
|
||||
几乎所有对运行时函数的调用都来自 [CompareICStub][78],它用于内部函数中的两个关系比较:
|
||||
|
||||
```
|
||||
x.proxyA = t < m ? t : m;
|
||||
@ -343,12 +347,12 @@ x.proxyB = t >= m ? t : m;
|
||||
|
||||
1. 调用 [ToPrimitive][12](`t`, `hint Number`)。
|
||||
2. 运行 [OrdinaryToPrimitive][13](`t`, `"number"`),因为这里没有 `Symbol.toPrimitive`。
|
||||
3. 执行 `t.valueOf()`,这会获得 `t` 自身的值,因为它调用了默认的 [Object.prototype.valueOf][14]。
|
||||
4. 接着执行 `t.toString()`,这会生成 `"[object Object]"`,因为调用了默认的 [Object.prototype.toString][15],并且没有找到 `L` 的 [Symbol.toStringTag][16]。
|
||||
5. 调用 [ToPrimitive][17](`m`, `hint Number`)。
|
||||
6. 运行 [OrdinaryToPrimitive][18](`m`, `"number"`),因为这里没有 `Symbol.toPrimitive`。
|
||||
7. 执行 `m.valueOf()`,这会获得 `m` 自身的值,因为它调用了默认的 [Object.prototype.valueOf][19]。
|
||||
8. 接着执行 `m.toString()`,这会生成 `"[object Object]"`,因为调用了默认的 [Object.prototype.toString][20],并且没有找到 `L` 的 [Symbol.toStringTag][21]。
|
||||
9. 执行比较 `"[object Object]" < "[object Object]"`,结果是 `false`。
|
||||
|
||||
至于 `t >= m` 亦复如是,它总会输出 `true`。所以这里是一个漏洞——使用抽象关系比较这种方法没有意义。而利用它的方法是使编译器常数折叠,即给基准打补丁:
|
||||
@ -410,7 +414,7 @@ $
|
||||
|
||||
由此可见,通过检测 `global_init` 并避免昂贵的预解析步骤,我们几乎获得了 2 倍的性能提升。我们不太确定这是否会对真实用例产生负面影响,不过可以肯定的是,在不必急切地预解析大函数时(因为它们不会立即执行),你将会受益匪浅。
|
||||
|
||||
让我们来看看另一个稍有争议的基准测试:[splay.js][89] 测试,这是一个用于处理伸展树(二叉查找树的一种)、同时锻炼自动内存管理子系统(也被称为垃圾回收器)的数据操作基准。它自带一个延迟测试,这会引导 `Splay` 代码通过频繁的测量检测点,检测点之间的长时间停顿表明垃圾回收器的延迟很高。此测试测量延迟暂停的频率,将它们分类到桶中,并以较低的分数惩罚频繁的长暂停。这听起来很棒!没有 `GC` 停顿,没有垃圾。纸上谈兵到此为止。让我们看看这个基准,以下是整个伸展树业务的核心:
|
||||
|
||||
[][90]
|
||||
|
||||
@ -560,7 +564,6 @@ $
|
||||
|
||||
作者简介:
|
||||
|
||||

|
||||
|
||||
我是 Benedikt Meurer,住在 Ottobrunn(德国巴伐利亚州慕尼黑东南部的一个市镇)的一名软件工程师。我于 2007 年在锡根大学获得应用计算机科学与电气工程的文凭,打那以后的 5 年里我在编译器和软件分析领域担任研究员(2007 至 2008 年间还研究过微系统设计)。2013 年我加入了谷歌的慕尼黑办公室,我的工作目标主要是 V8 JavaScript 引擎,目前是 JavaScript 执行性能优化团队的一名技术领导。
|
||||
|
||||
|
@ -1,142 +0,0 @@
|
||||
### Kubernetes 是什么?
|
||||
|
||||
Kubernetes,简称 k8s(k,8个字符,s)或者 “kube”,是一个开源的 [Linux 容器][3]自动化平台,消除了容器化应用程序在部署、伸缩时涉及到的许多手动操作。换句话说,你可以将多台主机组合成集群来运行 Linux 容器,Kubernetes 能帮助你简单高效地管理集群。而且构成这些集群的主机还可以跨越[公有云][4]、[私有云][5]以及混合云。
|
||||
|
||||
Kubernetes 最开始是由 Google 的工程师设计开发的。Google 作为 [Linux 容器技术的早期贡献者][6]之一,曾公开演讲介绍 [Google 如何将一切都运行于容器之中][7](这是 Google 云服务背后的技术)。Google 一周的容器部署超过 20 亿次,全部的工作都由内部平台 [Borg][8] 支撑。Borg 是 Kubernetes 的前身,几年来开发 Borg 的经验教训也成了影响 Kubernetes 中许多技术的主要因素。
|
||||
|
||||
_趣闻: Kubernetes logo 中的七个辐条来源于项目原先的名称, “[Seven of Nine 项目][1]”(译者:Borg 是「星际迷航」中的一个宇宙种族,Seven of Nine 是该种族的一名女性角色)。_
|
||||
|
||||
红帽作为最早与 Google 合作开发 Kubernetes 的公司之一(甚至早于 Kubernetes 的发行),已经是 Kubernetes 上游项目的第二大贡献者。Google 在 2015 年把 Kubernetes 项目捐献给了新成立的 [CNCF(Cloud Native Computing Foundation)基金会][11]。
|
||||
|
||||
* * *
|
||||
|
||||
### 为什么你需要 Kubernetes ?
|
||||
|
||||
真实的生产环境应用会包含多个容器,而这些容器还很可能会跨越服务器主机。Kubernetes 提供了为工作负载大规模部署容器的编排与管理能力。Kubernetes 的编排器让你能够构建多容器的应用服务,在集群上调度或伸缩这些容器,以及管理它们随时间变化的健康状态。
|
||||
|
||||
Kubernetes 需要与网络、存储、安全、监控等其它服务集成才能提供综合性的容器基础设施。
|
||||
|
||||

|
||||
|
||||
当然,这取决于你如何在你的环境中使用容器。一个初步的 Linux 容器应用程序把容器作为高效快速的虚拟机。一旦把它部署到生产环境或者扩展为多个应用,你需要许多组托管在相同位置的容器合作提供某个单一的服务。随着这些容器的累积,你的运行环境中容器的数量会急剧增加,复杂度也随之增长。
|
||||
|
||||
Kubernetes 通过将容器分类组成 “pod” 来解决容器增殖带来的问题。Pod 为容器分组提供了一层抽象,以此协助你调度工作负载以及为这些容器提供类似网络与存储这类必要的服务。Kubernetes 的其它组件帮助你对 pod 进行负载均衡,以保证有合适数量的容器支撑你的工作负载。
|
||||
|
||||
正确执行的 Kubernetes,结合类似 [Atomic Registry][12]、[Open vSwitch][13]、[heapster][14]、[OAuth][15] 和 [SELinux][16] 的开源项目,让你可以管理你自己的整个容器基础设施。
|
||||
|
||||
* * *
|
||||
|
||||
### Kubernetes 能做些什么?
|
||||
|
||||
在生产环境中使用 Kubernetes 的主要优势在于它提供了在物理机或虚拟机集群上调度和运行容器的平台。更宽泛地说,它能帮你在生产环境中实现可以依赖的基于容器的基础设施。而且,由于 Kubernetes 本质上就是作业任务的自动化平台,你可以执行一些其它应用程序平台或管理系统支持的操作,只不过操作对象变成了容器。
|
||||
|
||||
有了 Kubernetes,你可以:
|
||||
|
||||
* 跨主机编排容器。
|
||||
|
||||
* 更充分地利用硬件资源来最大化地满足企业应用的需求。
|
||||
|
||||
* 控制与自动化应用的部署与升级。
|
||||
|
||||
* 为有状态的应用程序挂载和添加存储器。
|
||||
|
||||
* 线上扩展或裁剪容器化应用程序与它们的资源。
|
||||
|
||||
* 声明式的容器管理,保证应用按照我们部署的方式运作。
|
||||
|
||||
* 通过自动布局、自动重启、自动复制、自动伸缩实现应用的状态检查与自我修复。
|
||||
|
||||
然而 Kubernetes 依赖其它项目来提供完整的编排服务。结合其它开源项目作为其组件,你才能充分感受到 Kubernetes 的能力。这些必要组件包括:
|
||||
|
||||
* 仓库:Atomic Registry、Docker Registry 等。
|
||||
|
||||
* 网络:OpenvSwitch 和 智能边缘路由等。
|
||||
|
||||
* 监控:heapster、kibana、hawkular 和 elastic。
|
||||
|
||||
* 安全:LDAP、SELinux、 RBAC 与 支持多租户的 OAUTH。
|
||||
|
||||
* 自动化:通过 Ansible 的 playbook 进行集群的安装和生命周期管理。
|
||||
|
||||
* 服务:大量事先创建好的常用应用模板。
|
||||
|
||||
[红帽 OpenShift 为容器部署预先集成了上面这些组件。][17]
|
||||
|
||||
* * *
|
||||
|
||||
### Kubernetes 入门
|
||||
|
||||
和其它技术一样,大量的专有名词有可能成为入门的障碍。下面解释一些通用的术语,希望帮助你理解 Kubernetes。
|
||||
|
||||
**Master(主节点):** 控制 Kubernetes 节点的机器,也是创建作业任务的地方。
|
||||
|
||||
**Node(节点):** 这些机器在 Kubernetes 主节点的控制下执行被分配的任务。
|
||||
|
||||
**Pod:** 由一个或多个容器构成的集合,作为一个整体被部署到一个单一节点。同一个 pod 中的容器共享 IP 地址、进程间通讯(IPC)、主机名以及其它资源。Pod 将底层网络和存储抽象出来,使得集群内的容器迁移更为便捷。
|
||||
|
||||
**Replication controller(复制控制器):** 控制一个 pod 在集群上运行的实例数量。
|
||||
|
||||
**Service(服务):** 将服务内容与具体的 pod 分离。Kubernetes 服务代理负责自动将服务请求分发到正确的 pod 处,用户无需考虑 pod 部署的位置甚至可以把它替换掉。
|
||||
|
||||
**Kubelet:** 这个守护进程运行在各个工作节点上,负责获取容器列表,保证被声明的容器已经启动并且正常运行。
|
||||
|
||||
**kubectl:** 这是 Kubernetes 的命令行配置工具。
|
||||
|
||||
[上面这些知识就足够了吗?不,这仅仅是一小部分,更多内容请查看 Kubernetes 术语表。][18]
|
||||
|
||||
* * *
|
||||
|
||||
### 生产环境中使用 Kubernetes
|
||||
|
||||
Kubernetes 是开源的,所以没有正式的技术支持组织为你的商业业务提供支持。如果在生产环境使用 Kubernetes 时遇到问题,你恐怕不会太愉快,当然你的客户也不会太高兴。
|
||||
|
||||
这就是[红帽 OpenShift][2] 要解决的问题。OpenShift 是为企业提供的 Kubernetes ——并且集成了更多的组件。OpenShift 包含了强化 Kubernetes 功能,使其更适用于企业场景的额外部件,包括仓库、网络、监控、安全、自动化和服务在内。OpenShift 使得开发者能够在具有伸缩性、控制和编排能力的云端开发、托管和部署容器化的应用,快速便捷地把想法转变为业务。
|
||||
|
||||
而且,OpenShift 还是由头号开源领导公司红帽支持和开发的。
|
||||
|
||||
* * *
|
||||
|
||||
### Kubernetes 如何适配你的基础设施
|
||||
|
||||

|
||||
|
||||
Kubernetes 运行在操作系统(例如 [Red Hat Enterprise Linux Atomic Host][19])之上,操作着节点上运行的容器。Kubernetes 主节点(master)从管理员(或者 DevOps 团队)处接受命令,再把指令转交给附属的节点。转交工作由 service 自动决定接受任务的节点,然后在该节点上分配资源并指派 pod 来完成任务请求。
|
||||
|
||||
所以从基础设施的角度,管理容器的方式发生了一点小小的变化。对容器的控制在更高的层次进行,这不再需要用户管理每个单独的容器或者节点,提供了更佳的控制方式。必要的工作则主要集中在如何指派 Kubernetes 主节点,定义节点和 pod 等问题上。
|
||||
|
||||
### docker 在 Kubernetes 中的角色
|
||||
|
||||
[Docker][20] 依然执行它原本的任务。当 Kubernetes 把 pod 调度到节点上,节点上的 kubelet 会指示 docker 启动特定的容器。接着,kubelet 会通过 docker 持续地收集容器的信息,然后提交到主节点上。Docker 如往常一样拉取容器镜像、启动或停止容器。不同点仅仅在于这是由自动化系统控制而非管理员在每个节点上手动操作的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.redhat.com/en/containers/what-is-kubernetes
|
||||
|
||||
作者:[www.redhat.com ][a]
|
||||
译者:[haoqixu](https://github.com/haoqixu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.redhat.com/
|
||||
[1]:https://cloudplatform.googleblog.com/2016/07/from-Google-to-the-world-the-Kubernetes-origin-story.html
|
||||
[2]:https://www.redhat.com/en/technologies/cloud-computing/openshift
|
||||
[3]:https://www.redhat.com/en/containers/whats-a-linux-container
|
||||
[4]:https://www.redhat.com/en/topics/cloud-computing/what-is-public-cloud
|
||||
[5]:https://www.redhat.com/en/topics/cloud-computing/what-is-private-cloud
|
||||
[6]:https://en.wikipedia.org/wiki/Cgroups
|
||||
[7]:https://speakerdeck.com/jbeda/containers-at-scale
|
||||
[8]:http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html
|
||||
[9]:http://stackalytics.com/?project_type=kubernetes-group&metric=commits
|
||||
[10]:https://techcrunch.com/2015/07/21/as-kubernetes-hits-1-0-google-donates-technology-to-newly-formed-cloud-native-computing-foundation-with-ibm-intel-twitter-and-others/
|
||||
[11]:https://www.cncf.io/
|
||||
[12]:http://www.projectatomic.io/registry/
|
||||
[13]:http://openvswitch.org/
|
||||
[14]:https://github.com/kubernetes/heapster
|
||||
[15]:https://oauth.net/
|
||||
[16]:https://selinuxproject.org/page/Main_Page
|
||||
[17]:https://www.redhat.com/en/technologies/cloud-computing/openshift
|
||||
[18]:https://kubernetes.io/docs/reference/
|
||||
[19]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux/options
|
||||
[20]:https://www.redhat.com/en/containers/what-is-docker
|
@ -1,149 +0,0 @@
|
||||
# Docker Swarm 模式 - 添加 worker 节点教程
|
||||
|
||||
让我们继续几周前在 CentOS 7.2 中开始的工作。 在本[指南][1]中,我们学习了如何初始化以及启动 Docker 1.12 中内置的本地集群以及编排功能。但是我们只有管理节点还没有其他 worker 节点。今天我们会展开这个。
|
||||
|
||||
我将向你展示如何将不同类型的节点添加到 Swarm 中,也就是说,[Fedora 24][2] 将与 CentOS 比肩而立,它们都将加入到集群中,还有负载均衡等等很棒的内容。当然这并不是轻而易举的,我们会遇到一些障碍,所以它应该是非常有趣的。
|
||||
|
||||

|
||||
|
||||
### 先决条件
|
||||
|
||||
在将其他节点成功加入 Swarm 之前,我们需要做几件事情。理想情况下,所有节点都应该运行相同版本的 Docker,为了支持本地编排,它的版本至少应该为 1.12。像 CentOS 一样,Fedora 内置的仓库没有最新的构建,所以你需要手动或者使用 Docker 仓库手动[添加并安装][3]正确的版本,并修复一些依赖冲突。我已经向你展示了如何在 CentOS 中操作,练习是相同的。
|
||||
|
||||
此外,所有节点都需要能够相互通信。这就需要有正确的路由和防火墙规则,这样管理和 worker 节点才能互相通信。否则,你无法将节点加入 Swarm 中。最简单的解决方法是临时刷新防火墙规则 (iptables -F),但这可能会损害你的安全。请确保你完全了解你正在做什么,并为你的节点和端口创建正确的规则。
|
||||
|
||||
守护进程的错误响应:节点加入之前已超时。尝试加入 Swarm 的请求将在后台继续进行。使用 “docker info” 命令查看节点的当前 Swarm 状态。
|
||||
|
||||
你需要在主机上提供相同的 Docker 镜像。在上一个教程中我们创建了一个 Apache 映像,你需要在你的 worker 节点上执行相同操作,或者分发创建的镜像。如果你不这样做,你会遇到错误。如果你在设置 Docker 上需要帮助,请阅读我的[介绍指南][4]和[网络教程][5]。
|
||||
|
||||
```
|
||||
7vwdxioopmmfp3amlm0ulimcu \_ websky.11 my-apache2:latest
|
||||
localhost.localdomain Shutdown Rejected 7 minutes ago
|
||||
"No such image: my-apache2:lat&"
|
||||
```
|
||||
|
||||
### 现在开始
|
||||
|
||||
现在我们有一台 CentOS 机器并启动了,并成功创建了容器。你可以使用主机端口连接到服务,这一切都看起来很好。目前,你的 Swarm 只有管理节点。
|
||||
|
||||

|
||||
|
||||
### 加入 workers
|
||||
|
||||
要添加新的节点,你需要使用 join 命令。但是你首先必须提供令牌、IP 地址和端口,以便 woker 节点能正确地对 Swarm 管理器进行身份验证。接着执行(在 Fedora 上):
|
||||
|
||||
```
|
||||
[root@localhost ~]# docker swarm join-token worker
|
||||
要将 worker 添加到这个 Swarm 中,运行下面的命令:
|
||||
|
||||
docker swarm join \
|
||||
--token SWMTKN-1-0xvojvlza90nrbihu6gfu3qm34ari7lwnza ... \
|
||||
192.168.2.100:2377
|
||||
```
|
||||
|
||||
如果你不修复防火墙和路由规则,你会得到超时错误。如果你已经加入了 Swarm,重复 join 命令会收到错误:
|
||||
|
||||
```
|
||||
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
|
||||
```
|
||||
|
||||
如果有疑问,你可以离开 Swarm,然后重试:
|
||||
|
||||
```
|
||||
[root@localhost ~]# docker swarm leave
|
||||
Node left the swarm.
|
||||
|
||||
docker swarm join --token
|
||||
SWMTKN-1-0xvojvlza90nrbihu6gfu3qnza4 ... 192.168.2.100:2377
|
||||
This node joined a swarm as a worker.
|
||||
```
|
||||
|
||||
在 worker 节点中,你可以使用 “docker info” 来检查状态:
|
||||
|
||||
```
|
||||
Swarm: active
|
||||
NodeID: 2i27v3ce9qs2aq33nofaon20k
|
||||
Is Manager: false
|
||||
Node Address: 192.168.2.103
|
||||
|
||||
同样,在管理节点上:
|
||||
|
||||
Swarm: active
|
||||
NodeID: cneayene32jsb0t2inwfg5t5q
|
||||
Is Manager: true
|
||||
ClusterID: 8degfhtsi7xxucvi6dxvlx1n4
|
||||
Managers: 1
|
||||
Nodes: 3
|
||||
Orchestration:
|
||||
Task History Retention Limit: 5
|
||||
Raft:
|
||||
Snapshot Interval: 10000
|
||||
Heartbeat Tick: 1
|
||||
Election Tick: 3
|
||||
Dispatcher:
|
||||
Heartbeat Period: 5 seconds
|
||||
CA Configuration:
|
||||
Expiry Duration: 3 months
|
||||
Node Address: 192.168.2.100
|
||||
```
|
||||
|
||||
### 创建或缩放服务
|
||||
|
||||
现在,我们需要看下 Docker 是否以及如何在节点间分发容器。我的测试展示了一个在非常轻的负载下相当简单的平衡算法。试了一两次之后,即使在我尝试缩放并更新之后,Docker 也没有将运行的服务重新分配给新的 worker。同样,有一次,它在 worker 节点上创建了一个新的服务。也许这是最好的选择。
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
在新的 worker 节点上创建完整新的服务。
|
||||
|
||||
过了一段时间,两个容器之间的现有服务有一些重新分配,但这需要一些时间。新服务工作正常。这只是一个前期观察,所以我现在不能说更多。现在是开始探索和调整的新起点。
|
||||
|
||||

|
||||
|
||||
负载均衡过了一会工作了。
|
||||
|
||||
### 总结
|
||||
|
||||
Docker 是一只灵巧的小野兽,它只会继续扩大,更复杂,更强大,当然也更优雅。它被一个大企业吃掉只是一个时间问题。当它涉及本地编排时,Swarm 模式运行得很好,但是它不仅仅需要几个容器来充分利用其算法和可扩展性。
|
||||
|
||||
我的教程展示了如何将 Fedora 节点添加到由 CentOS 运行的群集中,并且两者能并行工作。关于负载平衡还有一些问题,但这是我将在以后的文章中探讨的。总而言之,我希望这是一个值得记住的教训。我们已经解决了在尝试设置 Swarm 时可能遇到的一些先决条件和常见问题,同时我们启动了一堆容器,我们甚至简要介绍了如何缩放和分发服务。要记住,这只是一个开始。
|
||||
|
||||
干杯。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
我是 Igor Ljubuncic。现在大约 38 岁,已婚但还没有孩子。我现在在一个大胆创新的云科技公司做首席工程师。直到大约 2015 年初,我还在一个全世界最大的 IT 公司之一中做系统架构工程师,和一个工程计算团队开发新的基于 Linux 的解决方案,优化内核以及攻克 Linux 的问题。在那之前,我是一个为高性能计算环境设计创新解决方案的团队的技术领导。还有一些其他花哨的头衔,包括系统专家、系统程序员等等。所有这些都曾是我的爱好,但从 2008 年开始成为了我的付费工作。还有什么比这更令人满意的呢?
|
||||
|
||||
从 2004 年到 2008 年间,我曾通过作为医学影像行业的物理学家来糊口。我的工作专长集中在解决问题和算法开发。为此,我广泛地使用了 Matlab,主要用于信号和图像处理。另外,我得到了几个主要的工程方法学的认证,包括 MEDIC 六西格玛绿带、试验设计以及统计工程学。
|
||||
|
||||
我也开始写书,包括奇幻类和 Linux 上的技术性工作。彼此交融。
|
||||
|
||||
要查看我开源项目、出版物和专利的完整列表,请滚动到下面。
|
||||
|
||||
有关我的奖项,提名和 IT 相关认证的完整列表,请稍等一下。
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: http://www.dedoimedo.com/computers/docker-swarm-adding-worker-nodes.html
|
||||
|
||||
作者:[Igor Ljubuncic][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.dedoimedo.com/faq.html
|
||||
[1]:http://www.dedoimedo.com/computers/docker-swarm-intro.html
|
||||
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
|
||||
[3]:http://www.dedoimedo.com/computers/docker-centos-upgrade-latest.html
|
||||
[4]:http://www.dedoimedo.com/computers/docker-guide.html
|
||||
[5]:http://www.dedoimedo.com/computers/docker-networking.html
|
@ -1,75 +0,0 @@
|
||||
开发者定义的应用交付
|
||||
============================================================
|
||||
|
||||
负载均衡器如何帮助你管理分布式系统的复杂性。
|
||||
|
||||
|
||||

|
||||
|
||||
原生云应用旨在利用分布式系统的性能、可扩展性和可靠性优势。不幸的是,分布式系统往往以额外的复杂性为代价。由于你程序的各个组件分布在网络中,并且这些网络有通信障碍或者性能降级,因此你的分布式程序组件需要继续独立运行。
|
||||
|
||||
为了避免程序状态的不一致,分布式系统在设计时就应该认识到组件会失效,而这一点在网络上表现得最为突出。因此,分布式系统的核心在很大程度上依赖于负载均衡——把请求分布到两个或多个系统上——以便在网络中断时保持弹性,并在系统负载波动时进行水平扩展。
|
||||
|
||||
|
||||
随着分布式系统在原生云程序的设计和交付中越来越普及,负载平衡器在现代应用程序体系结构的各个层次都浸透了基础结构设计。在常见配置中,负载平衡器部署在应用程序前面,处理来自外部世界的请求。然而,微服务的出现意味着负载平衡器在幕后发挥关键作用:即管理_服务_之间的流。
|
||||
|
||||
因此,当你使用原生云程序和分布式系统时,负载均衡器将承担其他角色:
|
||||
|
||||
* 作为提供缓存和增加安全性的反向代理,因为它成为外部客户端的中间件。
|
||||
* 作为提供协议转换(例如 REST 到 AMQP)的 API 网关。
|
||||
* 它可以处理安全性(即运行 Web 应用程序防火墙)。
|
||||
* 它可能承担应用程序管理任务,如速率限制和 HTTP/2 支持。
|
||||
|
||||
鉴于它们的扩展能力远大于平衡流量,负载平衡器可以更广泛地称为应用交付控制器(ADC)。
|
||||
|
||||
### 开发人员定义基础设施
|
||||
|
||||
从历史上看,ADC 是由 IT 专业人员购买、部署和管理的,最常见的是运行企业架构的应用程序。对于物理负载平衡器设备(如 F5、Citrix、Brocade等),这种情况在很大程度上仍然存在。具有分布式系统设计和临时基础结构的云原生应用要求负载平衡器与它们运行时的基础结构 (如容器) 一样具有动态特性。这些通常是软件负载均衡器(例如来自公共云提供商的 NGINX 和负载平衡器)。云原生应用通常是开发人员主导的计划,这意味着开发人员正在创建应用程序(例如微服务器)和基础设施(Kubernetes 和 NGINX)。开发人员越来越多地对负载平衡 (和其他) 基础结构的决策做出或产生大量影响。
|
||||
|
||||
作为决策者,云原生应用的开发人员通常不会意识到企业基础架构要求或现有部署的影响,同时考虑到这些部署通常是新的,并且经常在公共或私有云环境中进行部署。云技术将基础设施抽象为可编程 API,开发人员正在定义应用程序在该基础架构的每一层构建的方式。在有负载平衡器的情况下,开发人员会选择要使用的类型,部署方式以及启用哪些功能。它们以编程方式对负载平衡器的行为进行编码 - 随着程序在部署的生存期内增长、收缩和功能上进化时,它如何动态响应应用程序的需要。开发人员将基础结构定义为代码-包括基础结构配置和代码操作。
|
||||
|
||||
### 开发者为什么定义基础架构?
|
||||
|
||||
编写这个代码-_如何构建和部署应用程序_-的实践已经发生了根本性的转变,它体现在很多方面。令人遗憾的是,这种根本性的转变是由两个因素驱动的:将新的应用功能推向市场(_上市时间_)所需的时间以及应用用户从产品(_时间到价值_)中获得价值所需的时间。因此,新的程序写出来被持续地交付(作为服务),没有下载和安装。
|
||||
|
||||
上市时间和时间价值的压力并不是新的,但由于其他因素的加剧,这些因素正在加强开发者的决策权力:
|
||||
|
||||
* 云:通过 API 定义基础架构作为代码的能力。
|
||||
* 伸缩:需要在大型环境中高效运行操作。
|
||||
* 速度:马上需要交付应用功能,为企业争取竞争力。
|
||||
* 微服务:抽象框架和工具选择,进一步赋予开发人员基础架构决策权力。
|
||||
|
||||
除了上述因素外,值得注意的是开源的影响。随着开源软件的普及和发展,开发人员掌握了许多应用程序基础设施 - 语言、运行时、框架、数据库、负载均衡器、托管服务等。微服务的兴起使应用程序基础设施的选择民主化,允许开发人员选择最佳的工具。在选择负载平衡器的情况下,与云原生应用的动态性质紧密集成并响应的那些应用程序将上升到最高。
|
||||
|
||||
### 总结
|
||||
|
||||
当你在仔细考虑你的云原生应用设计时,请与我一起讨论_[在云中使用 NGINX 和 Kubernetes 进行负载均衡][8]_。我们将考察不同公共云和容器平台的负载均衡功能,并通过一个单体应用的案例研究,看看它是如何被拆分成较小的、独立的服务,以及 NGINX 和 Kubernetes 的能力是如何拯救它的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Lee Calcote 是一位创新的思想领袖,对开发者平台和云、容器、基础设施和应用的管理软件充满热情。先进的和新兴的技术一直是 Calcote 在 SolarWinds、Seagate、Cisco 和 Pelco 时的关注重点。技术会议和聚会的组织者、写作者、作家、演讲者,他活跃在技术社区。
|
||||
|
||||
----------------------------
|
||||
|
||||
via: https://www.oreilly.com/learning/developer-defined-application-delivery
|
||||
|
||||
作者:[Lee Calcote][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.oreilly.com/people/7f693-lee-calcote
|
||||
[1]:https://pixabay.com/en/ship-containers-products-shipping-84139/
|
||||
[2]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
|
||||
[3]:https://www.oreilly.com/people/7f693-lee-calcote
|
||||
[4]:http://www.oreilly.com/pub/e/3864?intcmp=il-webops-webcast-reg-webcast_new_site_developer_defined_application_delivery_text_cta
|
||||
[5]:https://www.oreilly.com/learning/developer-defined-application-delivery?imm_mid=0ee8c5&cmp=em-webops-na-na-newsltr_20170310
|
||||
[6]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
|
||||
[7]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
|
||||
[8]:http://www.oreilly.com/pub/e/3864?intcmp=il-webops-webcast-reg-webcast_new_site_developer_defined_application_delivery_body_text_cta
|
@ -1,253 +0,0 @@
|
||||
函数式编程简介
|
||||
============================================================
|
||||
|
||||
> 我们来解释函数式编程的什么,它的优点是哪些,并且寻找一些函数式编程的学习资源。
|
||||
|
||||
|
||||

|
||||
图片来源于:
|
||||
|
||||
opensource.com
|
||||
|
||||
取决于你问的是谁,_函数式编程_(FP)或者是一种理应广泛传播的开明的编程方法,或者是一种过于学术化、在现实中没有什么实际用处的编程方法。在这篇文章中,我将讲解函数式编程,探究其优点,并推荐学习函数式编程的资源。
|
||||
|
||||
### 语法入门
|
||||
|
||||
本文的代码示例使用的是 [Haskell][40] 编程语言。因而你需要理解这篇文章的基本函数语法:
|
||||
|
||||
```
|
||||
even :: Int -> Bool
|
||||
even = ... -- implementation goes here
|
||||
```
|
||||
|
||||
示例定义了一个含有一个参数的函数 **even**,第一行是 _类型声明_,具体来说就是 **even** 函数接受一个 Int 类型的参数,返回一个 Bool 类型的值;其后是由一个或多个等式组成的实现。在这里我们将忽略具体实现(有名称和类型就已经足够了):
|
||||
|
||||
```
|
||||
map :: (a -> b) -> [a] -> [b]
|
||||
map = ...
|
||||
```
|
||||
|
||||
这个示例, **map** 是一个有两个参数的函数:
|
||||
|
||||
1. **(a -> b)** : 将**a** 转换成 **b** 的匿名函数
|
||||
2. **[a]**: 将匿名函数作用到 **[a]** (List 序列与其它语言的数组对应)的每一个元素上,将每次所得结果放到另一个 **[b]** ,最后返回这个结果 **[b]** 。
|
||||
|
||||
同样我们不去关心是要如何实现,我们只感兴趣它的定义类型。
|
||||
**a** 和 **b** 是可以指代任何类型的 _类型变量_ 。就像上一个示例中, **a** 是 **Int** 类型, **b** 是 **Bool** 类型:
|
||||
|
||||
```
|
||||
map even [1,2,3]
|
||||
```
|
||||
|
||||
这个是一个bool类型的序列:
|
||||
|
||||
```
|
||||
[False,True,False]
|
||||
```
|
||||
|
||||
如果你看到你不理解的其他语法,不要惊慌;对语法的充分理解不是必要的。
|
||||
|
||||
### 函数式编程的误区
|
||||
|
||||
编程与开发
|
||||
|
||||
* [我们最新的 JavaScript 文章][1]
|
||||
* [最近 Perl 的帖子][2]
|
||||
* [新的 Python 内容][3]
|
||||
* [红帽开发者博客][4]
|
||||
* [红帽开发者工具][5]
|
||||
|
||||
我们先来解释一下常见的误区:
|
||||
|
||||
* 函数式编程并不是命令式编程或者面向对象编程的对立面,把它们看作互相对立是一种错误的二分法。
|
||||
* 函数式编程不只属于学术领域。诚然,函数式编程在学术界有着丰富的历史,像 Haskell 和 OCaml 这样的语言也位列最流行的研究语言之中。但是今天,许多公司使用函数式编程来构建大型系统、小型专用程序,以及介于两者之间的一切。甚至还有一个面向函数式编程商业用户的年度会议;过去的会议日程让我们得以了解函数式编程在工业界的用途,以及由谁在使用它。
|
||||
* 函数式编程并不全是关于 monad 的,也不是关于任何其他某个特定抽象的。抛开围绕它的种种神秘感,monad 也只是一种抽象而已,有些函数式语言有它,有些则没有。
|
||||
* 函数式编程并不是特别难学。有些语言的语法可能与你已经熟悉的不同,但这些差异是表面的。函数式编程中确实有一些艰深的概念,但其他编程方法中也同样存在。
|
||||
|
||||
### 什么是函数式编程?
|
||||
|
||||
函数式编程的核心是只使用 _纯_ 函数来编程:纯函数的结果只取决于它的参数,并且没有像 I/O 或者修改状态这样的副作用。程序是通过 _组合函数_ 的方法构建的:
|
||||
|
||||
```
|
||||
(.) :: (b -> c) -> (a -> b) -> (a -> c)
|
||||
(g . f) x = g (f x)
|
||||
```
|
||||
|
||||
这里的 _(.)_ 表示把两个函数组合成一个:先应用 **f**,再把 **g** 应用到 **f** 的结果上。我们将在下一个示例中看到它的使用。下面是用 Python 写出的对应函数:
|
||||
|
||||
```
|
||||
def compose(g, f):
|
||||
return lambda x: g(f(x))
|
||||
```
|
||||
|
||||
函数式编程的优点在于:由于函数是确定性的,所以总是可以用函数应用的结果来替换函数调用本身,这种替换使得 _等式推理_ 成为可能。每个程序员都需要对自己和别人的代码进行推理,而等式推理正是解决这类问题的好工具。来看一个示例。假设你遇到了这个表达式:
|
||||
|
||||
```
|
||||
map even . map (+1)
|
||||
```
|
||||
|
||||
这段代码是做什么的?可以简化吗?通过等式推理,可以通过一系列替换来分析代码:
|
||||
|
||||
```
|
||||
map even . map (+1)
|
||||
map (even . (+1)) -- from definition of 'map'
|
||||
map (\x -> even (x + 1)) -- lambda abstraction
|
||||
map odd -- from definition of 'even'
|
||||
```
|
||||
|
||||
我们可以使用等式推理来理解程序并优化它。Haskell 编译器也使用等式推理来执行多种程序优化。没有纯函数,等式推理要么不可能,要么需要程序员付出多得多的努力。
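下面用 Python 粗略地验证一下上面这条等式推理得到的结论(非原文内容,仅作演示):对任意整数列表,“先加 1 再判断偶数”与“直接判断奇数”给出相同的结果。

```python
def even(x): return x % 2 == 0
def odd(x):  return x % 2 == 1

xs = list(range(-5, 6))
lhs = [even(x + 1) for x in xs]   # 相当于 map even . map (+1)
rhs = [odd(x) for x in xs]        # 相当于 map odd
assert lhs == rhs
print(lhs)
```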
|
||||
|
||||
### 函数式编程语言
|
||||
|
||||
你需要一种编程语言来做函数式编程。
|
||||
|
||||
在没有高阶函数(把函数作为参数传递和返回函数的能力)、_lambda_(匿名函数)和泛型的语言中,很难有意义地进行函数式编程。大多数现代语言都有这些特性,但不同语言对函数式编程的支持程度存在差异。支持最好的语言被称为函数式编程语言,这些包括静态类型的 _Haskell_、_OCaml_、_F#_ 和 _Scala_,以及动态类型的 _Erlang_ 和 _Clojure_。
|
||||
|
||||
在函数式语言之间可以在多大程度上利用函数编程有很大差异。有一个类型系统会有很大的帮助,特别是它支持 _类型推断_ (所以你并不总是必须键入类型)。这篇文章中没有详细介绍这部分,但足以说明,并非所有类型的系统都是平等的。
|
||||
|
||||
与所有语言一样,不同的函数的语言强调不同的概念,技术或用例。选择语言时,考虑到它支持函数式编程的程度以及是否适合您的用例很重要。如果您使用某些非 FP 语言,会受益于在语言支持的范围内的函数式编程。
|
||||
|
||||
### 不要打开表面没什么但却是陷阱的门
|
||||
|
||||
回想一下,纯函数的结果只取决于它的输入。遗憾的是,几乎所有的编程语言都有破坏这一点的“特性”。空值、类型判断(instanceof)、类型转换、异常,以及无限递归的可能性,都是打破等式推理、削弱程序员对程序行为和正确性进行推理能力的陷阱。(完全没有这类陷阱的语言包括 Agda、Idris 和 Coq。)
|
||||
|
||||
幸运的是,作为程序员,我们可以选择避免这些陷阱,如果我们受到严格的规范,我们可以假装陷阱不存在。 这个方法叫做 _快速推理_ 。它不需要任何条件,几乎任何程序都可以在不使用陷阱的情况下进行编写,并且通过避免这些程序可以进行等式推理,可组合性和可重用性。
|
||||
|
||||
让我们详细讨论一下。 这个陷阱打破了等式推理,因为异常终止的可能性没有反映在类型中。(如果文档中提到可能抛出的异常,请自己计算一下)。但是没有理由我们无法包含所有故障模式的返回类型。
|
||||
|
||||
避开陷阱是语言特征中产生巨大影响的一个领域。为避免例外, 代数数据类型可用于模型误差的条件下,就像:
|
||||
|
||||
```
|
||||
-- new data type for results of computations that can fail
|
||||
--
|
||||
data Result e a = Error e | Success a
|
||||
|
||||
-- new data type for three kinds of arithmetic errors
|
||||
--
|
||||
data ArithError = DivByZero | Overflow | Underflow
|
||||
|
||||
-- integer division, accounting for divide-by-zero
|
||||
--
|
||||
safeDiv :: Int -> Int -> Result ArithError Int
|
||||
safeDiv x y =
|
||||
if y == 0
|
||||
then Error DivByZero
|
||||
else Success (div x y)
|
||||
```
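如果想在没有代数数据类型的语言里表达同样的思路,也可以用显式的结果类型来代替异常。下面是一个 Python 示意(非原文内容,`Error`、`Success`、`safe_div` 这些名字都是假设的):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Error:
    reason: str

@dataclass
class Success:
    value: int

Result = Union[Error, Success]

def safe_div(x: int, y: int) -> Result:
    """可能失败的整数除法:失败通过返回值表达,而不是抛出异常。"""
    if y == 0:
        return Error("DivByZero")
    return Success(x // y)

print(safe_div(10, 2))   # Success(value=5)
print(safe_div(10, 0))   # Error(reason='DivByZero')
```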
|
||||
|
||||
这个例子中的权衡是:你现在处理的是 `Result ArithError Int` 类型的值,而不是以前简单的 `Int`,但这同时也是解决方案的一部分。你不再需要处理异常,可以放心地进行推理,总体来说这是一个胜利。
|
||||
|
||||
### 免费的定理
|
||||
|
||||
大多数现代静态类型语言都具有 _泛型_(也称为 _参数多态_),这样的函数是针对一个或多个抽象类型定义的。例如,考虑一个作用于序列(List)的函数:
|
||||
|
||||
```
|
||||
f :: [a] -> [a]
|
||||
f = ...
|
||||
```
|
||||
|
||||
Java中的相同函数如下所示:
|
||||
|
||||
```
|
||||
static <A> List<A> f(List<A> xs) { ... }
|
||||
```
|
||||
|
||||
编译程序的过程是一个证明的过程是将 _a_ 类型做出选择的过程。考虑到这一点,采用快速推理的方法,你能够创造出怎样的函数。
|
||||
|
||||
在这种情况下,该类型并不能告诉我们函数的功能(它可以改变序列,删除第一个元素或许多其他的东西),但它确实告诉了我们很多信息。只是从类型,我们可以得出关于函数的定理:
|
||||
|
||||
* **定理 1**:输出中的每一个元素都来自输入;这个函数不可能往序列里添加新的 **a** 类型的值,因为它不知道 **a** 是什么,也不知道如何构造一个。
|
||||
* **定理 2**:先对序列 map 任意一个函数再应用 **f**,与先应用 **f** 再 map 该函数,结果相同。
|
||||
|
||||
定理 1 帮助我们了解代码的作用,定理 2 则对程序优化提供了帮助。我们仅仅从类型就学到了这一切!这种从类型中获取有用信息的能力称为参数化(parametricity)。因此,类型是函数行为的部分(有时是完整的)规范,也是一种检查机制。
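下面用 Python 粗略地体会一下定理 2(非原文内容,仅作演示):对于只重排元素、不关心元素内容的 `f`(例如反转列表),“先 map 再 f”与“先 f 再 map”是等价的。

```python
def f(xs):                # 类型相当于 [a] -> [a]:反转列表
    return list(reversed(xs))

def g(x):                 # 任意的逐元素函数
    return x * 10

xs = [1, 2, 3, 4]
assert f([g(x) for x in xs]) == [g(x) for x in f(xs)]
print("这个自由定理在本例中成立")
```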
|
||||
|
||||
现在你可以自己利用参数化来探寻一番了。从 **map**、**(.)** 或者下面这些函数的类型中,你能发现什么呢?
|
||||
|
||||
* **foo :: a -> (a, a)**
|
||||
* **bar :: a -> a -> a**
|
||||
* **baz :: b -> a -> a**
|
||||
|
||||
### 学习函数式编程的资源
|
||||
|
||||
也许你已经相信函数式编程是编写软件的不错方式,想知道该如何开始?学习函数式编程有几种途径;这里有一些我推荐的(我承认,我对 Haskell 有所偏爱):
|
||||
|
||||
* UPenn's 的 [CIS 194: 介绍 Haskell][35] 是函数式编程概念和 Haskell 开发的不错选择。可以当课程材料使用,讲座(您可以查看几年前 Brisbane 函数式编程小组的 [系列 CIS 194 讲座][36]。
|
||||
* 不错的入门书籍有 _[ Scala 的函数式编程][30]_ , _[ Haskell 对函数的思考][31]_ , 和 _[ Haskell 编程原理][32]_ .
|
||||
* [Data61 FP 课程][37] (f.k.a., _NICTA_ 课程) 通过 _类型驱动_ 开发来教授抽象和数据结构的概念。这是十分困难,但收获也是丰富的,如果你有一名愿意引导你函数式编程的程序员,你可以尝试。
|
||||
* 在你的工作学习中使用函数式编程书写代码,写一些纯粹的函数(避免不确定性和异常的出现),使用高阶函数而不是循环和递归,利用参数化来提高可读性和重用性。许多人从函数式编程开始,体验各种语言的美妙。
|
||||
* 加入到你区域中的一些函数式编程小组或者学习小组中,也可以是参加一些函数式编程的会议(新的会议总是不断的出现)。
|
||||
|
||||
### 总结
|
||||
|
||||
在本文中,我讨论了函数式编程是什么、不是什么,以及它的优点,包括等式推理和参数化。我们了解到,在大多数编程语言中都可以进行一些函数式编程,但语言的选择会影响你能获益的程度,而像 Haskell 这样的函数式编程语言能带来最多的好处。我也推荐了学习函数式编程的资源。
|
||||
|
||||
函数式编程是一个丰富的领域,还有许多更深入(更神秘)的主题正在等待探索。我没有提到那些具有实际意义的事情,比如:
|
||||
|
||||
* lenses and prisms (是一流的设置值的方式;非常适合使用嵌套数据);
|
||||
* 定理证明(既然可以证明代码是正确的,为什么还要测试呢?);
|
||||
* 惰性求值(让你可以处理潜在无限的数据结构);
|
||||
* 类型理论 (函数式编程中许多美丽实用的抽象的起源).
|
||||
|
||||
我希望你喜欢这个函数式编程的介绍,并且启发你使用这个有趣和实用的软件开发方法。
|
||||
|
||||
_本文根据 [CC BY 4.0][38] 许可证发布。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
红帽软件工程师。对函数式编程、范畴论和数学感兴趣。狂爱墨西哥辣椒(jalapeño)。
|
||||
|
||||
----------------------
|
||||
|
||||
via: https://opensource.com/article/17/4/introduction-functional-programming
|
||||
|
||||
作者:[Fraser Tweedale ][a]
|
||||
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/frasertweedale
|
||||
[1]:https://opensource.com/tags/javascript?src=programming_resource_menu
|
||||
[2]:https://opensource.com/tags/perl?src=programming_resource_menu
|
||||
[3]:https://opensource.com/tags/python?src=programming_resource_menu
|
||||
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
|
||||
[5]:https://developers.redhat.com/products/#developer_tools?intcmp=7016000000127cYAAQ&src=programming_resource_menu
|
||||
[6]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
|
||||
[7]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
|
||||
[8]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
|
||||
[9]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:div
|
||||
[10]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[11]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
|
||||
[12]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Bool
|
||||
[13]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[14]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[15]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[16]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[17]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[18]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[19]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[20]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[21]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[22]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[23]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[24]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[25]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[26]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[27]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
|
||||
[28]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
|
||||
[29]:http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:odd
|
||||
[30]:https://www.manning.com/books/functional-programming-in-scala
|
||||
[31]:http://www.cambridge.org/gb/academic/subjects/computer-science/programming-languages-and-applied-logic/thinking-functionally-haskell
|
||||
[32]:http://haskellbook.com/
|
||||
[33]:http://cufp.org/
|
||||
[34]:https://www.haskell.org/tutorial/monads.html
|
||||
[35]:https://www.cis.upenn.edu/~cis194/fall16/
|
||||
[36]:https://github.com/bfpg/cis194-yorgey-lectures
|
||||
[37]:https://github.com/data61/fp-course
|
||||
[38]:https://creativecommons.org/licenses/by/4.0/
|
||||
[39]:https://opensource.com/article/17/4/introduction-functional-programming?rate=_tO5hNzT4hRKNMJtWwQM-K3Jmxm10iPeqoy3bbS12MQ
|
||||
[40]:https://wiki.haskell.org/Introduction
|
||||
[41]:https://opensource.com/user/123116/feed
|
||||
[42]:https://opensource.com/users/frasertweedale
|
@ -0,0 +1,110 @@
|
||||
# [开源社交机器人套件运行在 Raspberry Pi 和 Arduino 上][22]
|
||||
|
||||
|
||||
|
||||

|
||||
Thecorpora 的发布的 “Q.bo One” 机器人基于 RPi 3 和 Arduino,并提供立体相机、麦克风、扬声器,以及视觉和语言识别。
|
||||
|
||||
2010 年,机器人开发商 Francisco Paz 及其巴塞罗那的 Thecorpora 公司推出了首款 [Qbo][6] “Cue-be-oh” 机器人作为一个开源概念验证和用于探索 AI 在多传感器、交互式机器人的能力的研究项目。目前,在 2 月移动世界大会上的预览之后,Thecorpora 把它放到了 Indiegogo 上,与 Arrow 合作推出了第一个批量生产的社交机器人版本。
|
||||
|
||||
[][7] [][8]
|
||||
**Q.bo One 的左侧和顶部**
|
||||
|
||||
|
||||
像原来一样,新的 Q.bo One 有一个带眼睛的球形头(双立体相机)、耳朵(3 个麦克风)和嘴(扬声器),并由 WiFi 和蓝牙控制。 Q.bo One 也同样有开源 Linux 软件和开放规格硬件。然而,它不是使用基于 Intel Atom 的 Mini-ITX 板,而是在与 Arduino 兼容的主板相连的 Raspberry Pi 3 上运行 Raspbian。
|
||||
|
||||
|
||||
[][9]
|
||||
**Q.bo One side views**
|
||||
|
||||
|
||||
Q.bo One 于 7 月中旬在 Indiegogo 上架,起价为 369 美元(早期买家)或 399 美元,有包括内置的 Raspberry Pi 3 和基于 Arduino 的 “Qboard” 控制器板。它还有售价 $499 的完整套装。目前,Indiegogo 上的弹性目标是 $100,000,现在大概达成了 15%,并且它 12 月出货。
|
||||
|
||||
更专业的机器人工程师和嵌入式开发人员可能会想要使用只有 RPI 和 Qboard PCB 和软件的价值 $99 的版本,或者提供没有电路板的机器人套件的 $249 版本。使用此版本,你可以用自己的 Arduino 控制器替换 Qboard,并将 RPi 3 替换为另一个 Linux SBC。该公司列出了 Banana Pi、BeagleBone、Tinker Board 以及[即将退市的 Intel Edison][10],作为兼容替代品的示例。
|
||||
|
||||
<center>
|
||||
[][11]
|
||||
**Q.bo One kit**
|
||||
(click image to enlarge)
|
||||
</center>
|
||||
|
||||
与 2010 年的 Qbo 不同,Q.bo One 本身无法移动,只有球形头部可以动:在双伺服系统的帮助下,它可以在底座上转动,以便跟踪声音和动作。这款 Robotis Dynamixel 舵机同样出现在开源的、基于 Raspberry Pi 的 [TurtleBot 3][23] 机器人套件中,除了左右转动之外,还可以上下转动。
|
||||
|
||||
<center>
|
||||
[][12] [][13]
|
||||
**Q.bo One detail view (left) and Qboard detail**
|
||||
(click images to enlarge)
|
||||
</center>
|
||||
|
||||
Q.bo One 也可类似地与基于 Linux 的 [Jibo][24] “社交机器人”相比,它于 2014 年在 Indiegogo 推出,最后达到 360 万美元。然而,Jibo 还没有出货,[最近的推迟][25]迫使它在今年的某个时候发布一个版本。
|
||||
|
||||
|
|
||||

|
||||
|
||||
**Q.bo One** |
|
||||
|
||||
我们大胆预测 Q.bo One 将会在 2017 年接近 12 月出货。核心技术和 AI 软件已被证明,而 Raspberry Pi 和 Arduino 技术也是如此。Qboard 主板已经由 Arrow 制造和认证。
|
||||
|
||||
开源设计表明, 即使是移动版本也不会有问题。这使它更像是滚动的人形生物 [Pepper][14],一个来自 Softbank 和 Aldeberan 类似的人工智能对话机器人。
|
||||
|
||||
Q.bo One 自原版以来添加了一些技巧,例如由 20 个 LED 组成的“嘴巴”, 它以不同的、可编程的方式在语音中模仿嘴唇移动。如果你想点击机器人获得关注,那么它的头上还有三个触摸传感器。但是,你真正需要做的就是说话,而 Q.bo One 会像一个可卡犬一样转身并凝视着你。
|
||||
|
||||
接口和你在 Raspberry Pi 3 上的一样,它在我们的[2017 黑客电路板调查][15]中消灭了其他对手。为 RPi 3 的 WiFi 和蓝牙安装了天线。
|
||||
|
||||
<center>
|
||||
[][16] [][17]
|
||||
**Q.bo One software architecture (left) and Q.bo One with Scratch screen**
|
||||
(click images to enlarge)
|
||||
</center>
|
||||
|
||||
Qboard(也称为 Q.board)在 Atmel ATSAMD21 MCU 上运行 Arduino 代码,并有三个麦克风、扬声器、触摸传感器、Dynamixel 控制器和用于嘴巴的 LED 矩阵。其他功能包括 GPIO、I2C接口和可连接到台式机的 micro-USB 口。
|
||||
|
||||
Q.bo One 可以识别脸部和追踪移动,机器人甚至可以在镜子中识别自己。在云连接的帮助下,机器人可以识别并与其他 Q.bo One 机器人交谈。机器人可以在自然语言处理的帮助下回答问题,并通过文字转语音朗读。
|
||||
|
||||
可以使用 Scratch 编程,它是机器人的主要功能,可以教孩子关于机器人和编程。机器人也是为教育者和制造者设计的,可以作为老年人的伴侣。
|
||||
|
||||
基于 Raspbian 的软件使用 OpenCV 进行视觉处理,并可以使用各种语言(包括 C++)进行编程。该软件还提供了 IBM Bluemix、NodeRED 和 ROS 的钩子。大概你也可以整合 [Alexa][18] 或 [Google Assistant][19]语音代理,虽然 Thecorpora 没有提及这一点。
|
||||
|
||||
|
||||
|
||||
**更多信息**
|
||||
|
||||
Q.bo One 于 7 月中旬在 Indiegogo 上架,完整套件起价为 $369,一体化组合为 $499,预计 2017 年 12 月出货。更多信息请参见 [Q.bo One 的 Indiegogo 页面][20] 和 [Thecorpora 网站][21]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
|
||||
作者:[ Eric Brown][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[1]:http://twitter.com/share?url=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/&text=Open+source+social+robot+kit+runs+on+Raspberry+Pi+and+Arduino+
|
||||
[2]:https://plus.google.com/share?url=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[3]:http://www.facebook.com/sharer.php?u=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[4]:http://www.linkedin.com/shareArticle?mini=true&url=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[5]:http://reddit.com/submit?url=http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/&title=Open%20source%20social%20robot%20kit%20runs%20on%20Raspberry%20Pi%20and%20Arduino
|
||||
[6]:http://linuxdevices.linuxgizmos.com/open-source-robot-is-all-eyes/
|
||||
[7]:http://linuxgizmos.com/files/thecorpora_qboone.jpg
|
||||
[8]:http://linuxgizmos.com/files/thecorpora_qboone2.jpg
|
||||
[9]:http://linuxgizmos.com/files/thecorpora_qboone_side.jpg
|
||||
[10]:http://linuxgizmos.com/intel-pulls-the-plug-on-its-joule-edison-and-galileo-boards/
|
||||
[11]:http://linuxgizmos.com/files/thecorpora_qboone_kit.jpg
|
||||
[12]:http://linuxgizmos.com/files/thecorpora_qboone_detail.jpg
|
||||
[13]:http://linuxgizmos.com/files/thecorpora_qboone_qboard.jpg
|
||||
[14]:http://linuxgizmos.com/worlds-first-emotional-robot-runs-linux/
|
||||
[15]:http://linuxgizmos.com/2017-hacker-board-survey-raspberry-pi-still-rules-but-x86-sbcs-make-gains/
|
||||
[16]:http://linuxgizmos.com/files/thecorpora_qboone_arch.jpg
|
||||
[17]:http://linuxgizmos.com/files/thecorpora_qboone_scratch.jpg
|
||||
[18]:http://linuxgizmos.com/how-to-add-alexa-to-your-raspberry-pi-3-gizmo/
|
||||
[19]:http://linuxgizmos.com/free-raspberry-pi-voice-kit-taps-google-assistant-sdk/
|
||||
[20]:https://www.indiegogo.com/projects/q-bo-one-an-open-source-robot-for-everyone#/
|
||||
[21]:http://thecorpora.com/
|
||||
[22]:http://linuxgizmos.com/open-source-social-robot-kit-runs-on-raspberry-pi-and-arduino/
|
||||
[23]:http://linuxgizmos.com/ubuntu-driven-turtlebot-gets-a-major-rev-with-a-pi-or-joule-in-the-drivers-seat/
|
||||
[24]:http://linuxgizmos.com/cheery-social-robot-owes-it-all-to-its-inner-linux/
|
||||
[25]:https://www.slashgear.com/jibo-delayed-to-2017-as-social-robot-hits-more-hurdles-20464725/
|
70
translated/tech/20170629 Kubernetes Why does it matter.md
Normal file
70
translated/tech/20170629 Kubernetes Why does it matter.md
Normal file
@ -0,0 +1,70 @@
|
||||
Kubernetes:为什么这么重要?
|
||||
============================================================
|
||||
|
||||
### 运行容器化负载的 Kubernetes 平台在开发和部署云原生应用程序时将会承担一定作用。
|
||||
|
||||
|
||||

|
||||
>图片来源: opensource.com
|
||||
|
||||
开发和部署云原生应用程序之所以变得非常流行,是有充分理由的。对于一个允许快速部署、持续交付缺陷修复和新功能的流程来说,它的优势显而易见,但是没有人谈到鸡生蛋、蛋生鸡的问题:怎样才能走到那一步?从头开始构建用于开发和维护云原生应用程序的基础设施和开发流程,是一项既不简单又耗时的任务。
|
||||
|
||||
[Kubernetes][3] 是一个相对较新的运行容器化负载的平台,它解决了这些问题。它原本是 Google 内部的一个项目,Kubernetes 在 2015 年被捐赠给了[云原生计算基金会][4],并吸引了来自世界各地开源社区的开发人员。Kubernetes 的设计基于 15 年的运行生产和开发负载的经验。由于它是开源的,任何人都可以下载并使用它,并从中受益。
|
||||
|
||||
那么为什么 Kubernetes 会引起这么大的轰动呢?我相信它在基础架构即服务(IaaS,比如 OpenStack)和完整的平台即服务(PaaS,其底层运行时实现完全由供应商控制)之间找到了最佳平衡点。Kubernetes 提供了两方面的优势:管理基础设施的抽象,以及深入到裸机进行故障排除的工具和功能。
|
||||
|
||||
### IaaS 与 PaaS
|
||||
|
||||
OpenStack 被大多数人分类为 IaaS 解决方案,其中物理资源池(如处理器、网络和存储)在不同用户之间分配和共享。使用传统的基于硬件的虚拟化实现用户之间的隔离。
|
||||
|
||||
OpenStack 的 REST API 允许使用代码自动创建基础架构,但是这就是问题。IaaS 产品输出的是更多的基础设施。创建后,支持和管理额外基础设施的服务方式并不多。在一定程度上,OpenStack 生产的底层基础架构(如服务器和 IP 地址)成为管理工作的重中之重。一个众所周知的结果是虚拟机(VM)扩张,但是同样的概念适用于网络、加密密钥和存储卷。这样,开发人员可以减少建立和维护应用程序的时间。
|
||||
|
||||
像其他基于集群的解决方案一样,Kubernetes 以单个服务器级别运行,以实现水平缩放。它可以轻松添加新的服务器,并立即在硬件上安排负载。类似地,当服务器没有被有效利用或需要维护时,可以从集群中删除服务器。编排活动,如工作调度、健康监测和维护高可用性是 Kubernetes 自动处理的其他任务。
|
||||
|
||||
网络是另一个可能难以在 IaaS 环境中可靠编排的领域。微服务之间通过 IP 地址通信可能是很棘手的。Kubernetes 实现了 IP 地址管理、负载均衡、服务发现和 DNS 名称注册,以在集群内提供无痛、透明的网络环境。
|
||||
|
||||
### 专为部署而设计
|
||||
|
||||
一旦创建了运行应用程序的环境,部署就是一件小事了。可靠地部署一个应用程序是其中一个很容易说出来但并不容易做到的任务 - 它并不是最简单的。Kubernetes 相对其他环境的巨大优势是部署是一等公民。
|
||||
|
||||
使用 Kubernetes 的命令行界面(CLI),只需一条命令,就能根据对应用程序的描述把它部署到集群上。Kubernetes 实现了应用程序从初始部署、推出新版本,到(当关键功能出现问题时)回滚的整个生命周期。进行中的部署也可以暂停和恢复。拥有现成的、内置的工具和对应用程序部署的支持,而不用自己构建部署系统,这一点的优势不容小觑。Kubernetes 用户既不必重新发明应用程序部署的轮子,也不会发现这是一项艰巨的任务。
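作为示意,下面用 Kubernetes 官方的 Python 客户端描述并创建一个简单的 Deployment(非原文内容;原文讲的是 CLI,这里只是等价思路的演示,名称和镜像等参数均为假设,需要先安装 `kubernetes` 这个 Python 包并配置好 kubeconfig):

```python
from kubernetes import client, config

config.load_kube_config()                      # 读取本地 kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,                            # 期望的副本数
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.21"),
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
print("deployment 'demo' 已提交,由 Kubernetes 负责后续的调度与健康维护")
```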
|
||||
|
||||
Kubernetes 还可以监控进行中的部署状态。虽然你可以在 IaaS 环境中编写这个,例如部署过程本身,但这是一个非常困难的任务,这样的情况还比比皆是。
|
||||
|
||||
### 专为 DevOps 而设计
|
||||
|
||||
随着你在开发和部署 Kubernetes 应用程序方面获得更多经验,你将沿着与 Google 和其他人相同的路径前行。你将发现有几种 Kubernetes 功能对于有效地开发和多服务程序的故障排除是非常重要的。
|
||||
|
||||
首先,Kubernetes 能够通过日志或 SSH(安全 shell)轻松检查正在运行的服务,这一点非常重要。只需一条命令,管理员就可以检查在 Kubernetes 下运行的服务的日志。这听起来可能像一个简单的任务,但在 IaaS 环境中,除非你已经做了一些额外工作,否则这并不容易。大型应用程序通常具有专门用于日志收集和分析的硬件和人员。Kubernetes 中的日志功能也许不能替代功能完整的日志和度量解决方案,但它足以提供基本的故障排除能力。
|
||||
|
||||
第二,Kubernetes 提供内置的秘密管理。从头开发自己的部署系统的团队知道的另一个问题是,将敏感数据(如密码和 API 令牌)安全地部署到虚拟机上很困难。通过将秘密变成一等公民,Kubernetes 阻止你的团队发明自己的不安全的方法、错误的秘密分发系统或在部署脚本中硬编码的凭据。
|
||||
|
||||
最后,Kubernetes 有一些功能用于自动缩放、负载均衡和重新启动应用程序。同样,这些功能是开发人员在使用 IaaS 或裸机时要编写的。你的 Kubernetes 应用程序的缩放和运行状况检查在服务定义中声明,而 Kubernetes 确保正确数量的实例健康运行。
|
||||
|
||||
### 总结
|
||||
|
||||
IaaS 和 PaaS 系统之间的差异是巨大的,包括 PaaS 可以节省大量的开发和调试时间。作为 PaaS,Kubernetes 实现了强大而有效的功能,可帮助你开发、部署和调试云原生应用程序。它的架构和设计代表了数十年的难得的经验,让你的团队能够免费获得好处。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Tim Potter - Tim 是 Hewlett Packard Enterprise 的高级软件工程师。近二十年来,他一直致力于免费和开源软件的开发工作,其中包括 Samba、Wireshark、OpenPegasus 和 Docker 等多个项目。Tim博客在 https://elegantinfrastructure.com/ ,关于 Docker、Kubernetes 和其他基础设施相关主题。
|
||||
|
||||
-----
|
||||
|
||||
|
||||
via: https://opensource.com/article/17/6/introducing-kubernetes
|
||||
|
||||
作者:[ Tim Potter][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tpot
|
||||
[1]:https://opensource.com/article/17/6/introducing-kubernetes?rate=RPoUoHXYQXbTb7DHQCDsHgR1ZcfLSoquZ8xVZzfMtxM
|
||||
[2]:https://opensource.com/user/63281/feed
|
||||
[3]:https://kubernetes.io/
|
||||
[4]:https://www.cncf.io/
|
||||
[5]:https://opensource.com/users/tpot
|
@ -0,0 +1,315 @@
|
||||
[GitHub 的 MySQL 基础架构自动化测试][31]
|
||||
============================================================
|
||||
|
||||
我们的 MySQL 数据库基础架构是 GitHub 的关键组件。MySQL 为 GitHub.com、GitHub 的 API 和身份验证等提供服务,每一次 `git` 请求都会以某种方式触及 MySQL。即使在 MySQL 集群承载流量的同时,我们也需要能够执行诸如重型清理、临时更新、在线模式迁移、集群拓扑重构、连接池化和负载均衡之类的任务。我们拥有将这类操作自动化的基础架构,在这篇文章中,我们将分享几个例子,说明我们如何通过持续不断的测试来建立对基础架构的信任。这就是我们晚上能睡个好觉的原因。
|
||||
|
||||
### 备份[][36]
|
||||
|
||||
备份数据是非常重要的。如果你还没有备份数据库,这迟早会成为一个大问题。Percona [Xtrabackup][37] 是我们一直用来为 MySQL 数据库做完整备份的工具。如果有需要确保安全保存的数据,我们就会把它备份到另一台专门存放备份数据的服务器上。
|
||||
|
||||
除了完整的二进制备份外,我们每天还会多次运行逻辑备份。这些备份数据允许我们的工程师获取最新数据的副本。有时候,他们希望获得某个表的完整数据集,以便在一个生产规模的表上测试索引更改,或者查看某个时间点以来的数据。Hubot 允许我们恢复一个已备份的表,并且当表可以使用时会通知(ping)我们。
|
||||
|
||||

|
||||
**tomkrouper**.mysql 备份列表的位置
|
||||

|
||||
**Hubot**
|
||||
```
|
||||
+-----------+------------+---------------+---------------------+---------------------+----------------------------------------------+
|
||||
| Backup ID | Table Name | Donor Host | Backup Start | Backup End | File Name |
|
||||
+-----------+------------+---------------+---------------------+---------------------+----------------------------------------------+
|
||||
| 1699494 | locations | db-mysql-0903 | 2017-07-01 22:09:17 | 2017-07-01 22:09:17 | backup-mycluster-locations-1498593122.sql.gz |
|
||||
| 1699133 | locations | db-mysql-0903 | 2017-07-01 16:11:37 | 2017-07-01 16:11:39 | backup-mycluster-locations-1498571521.sql.gz |
|
||||
| 1698772 | locations | db-mysql-0903 | 2017-07-01 10:09:21 | 2017-07-01 10:09:22 | backup-mycluster-locations-1498549921.sql.gz |
|
||||
| 1698411 | locations | db-mysql-0903 | 2017-07-01 04:12:32 | 2017-07-01 04:12:32 | backup-mycluster-locations-1498528321.sql.gz |
|
||||
| 1698050 | locations | db-mysql-0903 | 2017-06-30 22:18:23 | 2017-06-30 22:18:23 | backup-mycluster-locations-1498506721.sql.gz |
|
||||
| ...
|
||||
| 1262253 | locations | db-mysql-0088 | 2016-08-01 01:58:51 | 2016-08-01 01:58:54 | backup-mycluster-locations-1470034801.sql.gz |
|
||||
| 1064984 | locations | db-mysql-0088 | 2016-04-04 13:07:40 | 2016-04-04 13:07:43 | backup-mycluster-locations-1459494001.sql.gz |
|
||||
+-----------+------------+---------------+---------------------+---------------------+----------------------------------------------+
|
||||
|
||||
```
|
||||
|
||||

|
||||
**tomkrouper**.mysql 恢复 1699133
|
||||

|
||||
**Hubot**已为备份作业 1699133 创建还原作业,还原完成后,将在 #database-ops 中收到通知。
|
||||

|
||||
**Hubot**[@tomkrouper][1]: db-mysql-0482 上数据库中的 locations 表已恢复为 locations_2017_07_01_16_11
|
||||
|
||||
数据被加载到非生产环境的数据库,该数据库可供请求恢复的工程师访问。
|
||||
|
||||
我们保留数据“备份”的最后一种手段是使用[延迟副本(delayed replicas)][38]。严格来说这并不是备份,而是又一层保护。每个生产集群都有一台延迟 4 个小时复制的主机。如果有人运行了不该运行的查询,我们可以在 chatops 中执行 `mysql panic`,这会让所有延迟副本立即停止复制,并呼叫值班 DBA。接下来我们就可以利用延迟副本验证问题,然后把二进制日志快进到出错之前的位置,再将这些数据恢复到主服务器,从而把数据恢复到那个时间点。
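作为补充说明,下面是在 MySQL 副本上配置 4 小时复制延迟的一种常见方式的示意;这只是标准的 MySQL 语句,并不代表 GitHub 实际使用的工具或脚本:

```
-- 在副本上执行:先停止复制,设置 4 小时(14400 秒)的延迟,再恢复复制
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 14400;
START SLAVE;
```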
|
||||
|
||||
备份固然很好,但如果某个未知或未被捕获的错误破坏了它们,备份就毫无价值了。使用脚本来恢复备份的一个好处,是可以通过 cron 自动执行备份验证。我们为每个集群设置了一台专用主机,不断恢复该集群最新的备份。这样可以确保备份流程正常,并且我们确实能从备份中取回数据。
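下面是一个通过 cron 自动触发备份验证的假想示意;其中的脚本名称和参数都是虚构的,仅用来说明思路:

```
# 假想的 crontab 条目:每天在专用恢复主机上恢复所在集群的最新备份
# restore-latest-backup 与 notify-dba 均为虚构的脚本名
30 5 * * * /usr/local/bin/restore-latest-backup --cluster mycluster || /usr/local/bin/notify-dba "mycluster 备份恢复失败"
```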
|
||||
|
||||
根据数据集的大小,我们每天会运行多次恢复。恢复完成后,这些服务器会加入复制流并追上复制进度。这不仅验证了我们的备份是可恢复的,也验证了我们正确记录了备份的时间点,并且可以从该时间点继续应用后续变更。如果恢复过程中出现问题,我们会收到通知。
|
||||
|
||||
我们还追踪恢复所需的时间,所以我们知道在紧急情况下建立新的副本或还原需要多长时间。
|
||||
|
||||
以下是由 Hubot 在我们的机器人聊天室中编写的自动恢复过程的输出。
|
||||
|
||||

|
||||
**Hubot**
|
||||
gh-mysql-backup-restore: db-mysql-0752: restore_log.id = 4447
|
||||
gh-mysql-backup-restore: db-mysql-0752: 确定要为集群 “prodcluster” 还原备份。
|
||||
gh-mysql-backup-restore: db-mysql-0752: 启用维护模式
|
||||
gh-mysql-backup-restore: db-mysql-0752: 设置协调器停机时间
|
||||
gh-mysql-backup-restore: db-mysql-0752: 禁用 Puppet
|
||||
gh-mysql-backup-restore: db-mysql-0752: 停止 MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: 删除 MySQL 文件
|
||||
gh-mysql-backup-restore: db-mysql-0752: 运行中 gh-xtrabackup-restore
|
||||
gh-mysql-backup-restore: db-mysql-0752: 恢复文件: xtrabackup-notify-2017-07-02_0000.xbstream
|
||||
gh-mysql-backup-restore: db-mysql-0752: 运行 gh-xtrabackup-prepare
|
||||
gh-mysql-backup-restore: db-mysql-0752: 启动 MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: 更新文件权限
|
||||
gh-mysql-backup-restore: db-mysql-0752: 升级 MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: 停止 MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: 启动 MySQL
|
||||
gh-mysql-backup-restore: db-mysql-0752: 备份 Host: db-mysql-0034
|
||||
gh-mysql-backup-restore: db-mysql-0752: 设置开启复制
|
||||
gh-mysql-backup-restore: db-mysql-0752: 启动复制
|
||||
gh-mysql-backup-restore: db-mysql-0752: 响应缓存
|
||||
gh-mysql-backup-restore: db-mysql-0752: 还原完成(复制运行中)
|
||||
gh-mysql-backup-restore: db-mysql-0752: 开启操作
|
||||
gh-mysql-backup-restore: db-mysql-0752: 关闭维护模式
|
||||
gh-mysql-backup-restore: db-mysql-0752: 设置协调器停机时间
|
||||
gh-mysql-backup-restore: db-mysql-0752: 恢复过程完成
|
||||
|
||||
备份的用途之一,是向现有的 MySQL 服务器集群中添加新的副本。我们会发起一台新服务器的构建,一旦收到它准备就绪的通知,就可以开始恢复对应集群的最新备份。我们有一个脚本,可以执行原本需要手动运行的所有恢复命令,而自动恢复系统基本上也使用同一个脚本。这简化了系统构建过程:只需几条聊天命令,就能让一台主机启动并投入运行,几乎不需要手动操作。下面显示的是在聊天中手动发起的一次恢复:
|
||||
|
||||

|
||||
**jessbreckenridge**.mysql backup-restore -H db-mysql-0007 -o -r magic_word=daily_rotating_word
|
||||

|
||||
**Hubot**
|
||||
[@jessbreckenridge][2] gh-mysql-backup-restore: db-mysql-0007: 确定要还原群集'mycluster'的备份
|
||||
[@jessbreckenridge][3] gh-mysql-backup-restore: db-mysql-0007: restore_log.id = 4449
|
||||
[@jessbreckenridge][4] gh-mysql-backup-restore: db-mysql-0007: 启用维护模式
|
||||
[@jessbreckenridge][5] gh-mysql-backup-restore: db-mysql-0007: 设置协调器停机时间
|
||||
[@jessbreckenridge][6] gh-mysql-backup-restore: db-mysql-0007: 禁用操作
|
||||
[@jessbreckenridge][7] gh-mysql-backup-restore: db-mysql-0007: 停止 MySQL
|
||||
[@jessbreckenridge][8] gh-mysql-backup-restore: db-mysql-0007: 删除 MySQL 文件
|
||||
[@jessbreckenridge][9] gh-mysql-backup-restore: db-mysql-0007: 运行 gh-xtrabackup-restore
|
||||
[@jessbreckenridge][10] gh-mysql-backup-restore: db-mysql-0007: 恢复文件: xtrabackup-mycluster-2017-07-02_0015.xbstream
|
||||
[@jessbreckenridge][11] gh-mysql-backup-restore: db-mysql-0007: 运行 gh-xtrabackup-prepare
|
||||
[@jessbreckenridge][12] gh-mysql-backup-restore: db-mysql-0007: 更新文件权限
|
||||
[@jessbreckenridge][13] gh-mysql-backup-restore: db-mysql-0007: 启用 MySQL
|
||||
[@jessbreckenridge][14] gh-mysql-backup-restore: db-mysql-0007: 升级 MySQL
|
||||
[@jessbreckenridge][15] gh-mysql-backup-restore: db-mysql-0007: 停止 MySQL
|
||||
[@jessbreckenridge][16] gh-mysql-backup-restore: db-mysql-0007: 开启 MySQL
|
||||
[@jessbreckenridge][17] gh-mysql-backup-restore: db-mysql-0007: 设置开启复制
|
||||
[@jessbreckenridge][18] gh-mysql-backup-restore: db-mysql-0007: 启动复制
|
||||
[@jessbreckenridge][19] gh-mysql-backup-restore: db-mysql-0007: 备份 Host: db-mysql-0201
|
||||
[@jessbreckenridge][20] gh-mysql-backup-restore: db-mysql-0007: 复制缓存
|
||||
[@jessbreckenridge][21] gh-mysql-backup-restore: db-mysql-0007: 复制落后 4589 秒,等待 1800 秒后再次检查
|
||||
[@jessbreckenridge][22] gh-mysql-backup-restore: db-mysql-0007: 还原完成(复制运行中)
|
||||
[@jessbreckenridge][23] gh-mysql-backup-restore: db-mysql-0007: 禁用操作
|
||||
[@jessbreckenridge][24] gh-mysql-backup-restore: db-mysql-0007: 禁用维护模式
|
||||
|
||||
### 故障转移[][39]
|
||||
|
||||
[我们使用 orchestrator][40] 对主库和从库执行自动化故障切换。我们期望 `orchestrator` 能够正确检测主库故障、指定一个副本进行提升、在该副本之下修复拓扑并完成提升;我们也期望 VIP 发生切换、连接池更新、客户端重连、`puppet` 在新主库上运行必需的组件,等等。故障转移是一项复杂的任务,涉及我们基础架构的许多方面。
|
||||
|
||||
为了建立对故障转移机制的信任,我们搭建了一个 _类似生产的测试集群_,并且不断地让它崩溃,来观察故障转移的表现。
|
||||
|
||||
_类似生产的测试集群_ 是一个复制设置,在各个方面都与我们的生产集群相同:硬件类型、操作系统、MySQL 版本、网络环境、VIP、`puppet` 配置、[haproxy 设置][41] 等。唯一的不同是它不承载生产流量。
|
||||
|
||||
我们在测试集群上模拟写入负载,同时避免产生复制延迟。写入负载并不大,但其中包含一些刻意去竞争写入同一数据集的查询。这在平时并没有什么特别之处,但在故障转移时会被证明很有用,我们稍后会简要说明。
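下面是这类“刻意竞争写入”的一个极简示意(表名、列名均为虚构);多个客户端并发地反复执行这样的语句,一旦复制拓扑配置有误,就很容易暴露出 `DUPLICATE KEY` 之类的错误:

```
-- 多个客户端并发地反复写入同一行(表结构为虚构示例)
INSERT INTO test_heartbeat (id, ts) VALUES (1, NOW())
  ON DUPLICATE KEY UPDATE ts = NOW();
```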
|
||||
|
||||
我们的测试集群在三个数据中心都有代表性的服务器。我们希望故障转移时能在同一个数据中心内提升一个替代副本,并希望在这个约束下尽可能多地保留其余副本;两者都要尽可能同时满足。`orchestrator` 对拓扑结构没有任何预设假设,它必须根据崩溃发生时的实际状态作出反应。
|
||||
|
||||
然而,我们还希望为故障切换构造复杂而多变的场景。我们的故障转移测试脚本会为一次故障转移做如下准备:
|
||||
|
||||
* 识别现有的主库
|
||||
|
||||
* 重构拓扑结构,使三个数据中心都有服务器复制于该主库之下。不同的数据中心有不同的网络延迟,预期会在不同的时间点对主库崩溃做出反应。
|
||||
|
||||
* 选择一种“弄崩”主库的方法:直接杀死主库进程(`kill -9`),或者对它做网络隔离,即 `iptables -j REJECT`(相对温和)或 `iptables -j DROP`(完全无响应)。命令示意见下。
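以下命令示意假设 MySQL 监听默认的 3306 端口;实际测试脚本未必与此完全相同:

```
# 方式一:直接杀死 mysqld 进程
kill -9 $(pidof mysqld)

# 方式二:网络隔离,立即拒绝连接(相对温和)
iptables -A INPUT -p tcp --dport 3306 -j REJECT

# 方式三:网络隔离,静默丢弃数据包,表现为完全无响应
iptables -A INPUT -p tcp --dport 3306 -j DROP
```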
|
||||
|
||||
脚本接着用选定的方法让主库崩溃,并等待 `orchestrator` 可靠地检测到崩溃并执行故障转移。虽然我们期望检测和提升都在 30 秒内完成,脚本还是会放宽这个预期,先等待一段指定的时间,然后再检查故障转移的结果。之后它会:
|
||||
|
||||
* 检查一个新的(不同的)master 是否到位
|
||||
|
||||
* 检查集群中的副本数量是否足够
|
||||
|
||||
* master 是可写的
|
||||
|
||||
* 对 master 的写入在副本上可见
|
||||
|
||||
* 更新内部服务发现条目(新 master 的身份如预期;旧 master 已删除)
|
||||
|
||||
* 其他内部检查
|
||||
|
||||
这些测试确认故障转移是成功的,不仅在 MySQL 层面,而且在更大的基础设施范围内:VIP 已经切换、相关服务已经启动、信息到达了它应该去的地方。
|
||||
|
||||
该脚本进一步继续恢复失败的服务器:
|
||||
|
||||
* 从备份恢复,从而隐含地测试我们的备份/恢复过程
|
||||
|
||||
* 验证服务器配置是否符合预期(该服务器不再认为自己是主库)
|
||||
|
||||
* 重新加入复制集群,并期望能找到故障前写入主库的数据
|
||||
|
||||
来看下面这个计划内故障转移测试的可视化过程:从一个运行良好的集群开始,到某些副本出现问题,到诊断出主库(`7136`)已经宕机,选择一台服务器(`a79d`)准备提升,重构它下面的拓扑,将它提升为主库(故障切换成功),最后恢复宕机的旧主库并把它放回集群。
|
||||
|
||||

|
||||
|
||||
#### 测试失败怎么样?
|
||||
|
||||
我们的测试脚本采用“出错即停(stop-the-world)”的方式。故障切换流程中任何组件出现一次故障,整个测试就会失败,在有人介入解决问题之前,后续的自动化测试不会继续运行。我们会收到警报,然后去检查状态和日志。
|
||||
|
||||
脚本会在以下情况下判定测试失败:检测或故障转移耗时超出可接受范围、备份/恢复出现问题、丢失太多服务器、故障切换后出现意外配置,等等。
|
||||
|
||||
我们需要确保 `orchestrator` 正确地重新连接各个服务器。这正是竞争性写入负载发挥作用的地方:如果拓扑设置不正确,复制很容易中断,我们就会看到 `DUPLICATE KEY` 或其他错误,从而发现问题。
|
||||
|
||||
这一点尤其重要:当我们改进 `orchestrator` 并引入新的行为时,可以在一个安全的环境中测试这些变化。
|
||||
|
||||
#### 即将到来:混沌测试(chaos testing)
|
||||
|
||||
上面所示的测试程序将捕获(并已经捕获)我们基础设施许多部分的问题。这些够了吗?
|
||||
|
||||
生产环境中总会有别的情况。上面这种特定的测试方法并不直接适用于我们的生产集群:生产集群没有相同的流量和流量操纵方式,服务器集合也不完全一样,故障类型同样可能不同。
|
||||
|
||||
我们正在为生产集群设计混沌测试。混沌测试会真的在生产环境中搞破坏,但会按照预定的时间表、以受到充分控制的方式破坏其中一部分组件。混沌测试能为恢复机制带来更高层次的信任,并影响(因而也测试到)基础设施和应用程序中更大的部分。
|
||||
|
||||
这是一项需要拿捏分寸的工作:在承认需要混沌测试的同时,我们也希望避免对服务造成不必要的影响。不同的测试在风险级别和影响范围上各不相同,我们会努力保证服务的可用性。
|
||||
|
||||
### 模式迁移[][42]
|
||||
|
||||
[我们使用 gh-ost][43] 来执行在线模式迁移。`gh-ost` 已经很稳定,但仍在快速开发中,不断增加或规划重要的新功能。
|
||||
|
||||
`gh-ost` 的迁移方式是:把数据复制到一张 _ghost_ 表,同时持续把从二进制日志中截获的变更应用到这张 _ghost_ 表上(即使原始表还在被写入),最后用 _ghost_ 表替换掉原始表。迁移完成后,GitHub 继续使用这张由 `gh-ost` 生成并填充的表。
|
||||
|
||||
到目前为止,GitHub 几乎所有的 MySQL 数据都被 `gh-ost` 重建过,其中大部分被重建过多次。我们必须高度信任 `gh-ost`,即使它在快速迭代,也要放心让它一次次地改动我们的数据。下面是我们获得这种信任的方式。
|
||||
|
||||
`gh-ost` 自带在生产环境中测试的能力:它支持在副本上运行迁移,方式与在主库上运行大致相同。`gh-ost` 会连接到副本并把它当作主库来对待,解析二进制日志的方式也与真正的主库迁移相同;不同之处在于,它把行复制和二进制日志事件都应用在副本上,避免对主库产生任何写入。
|
||||
|
||||
我们在生产环境中运行着专门用于 `gh-ost` 测试的副本,它们不承载生产流量。每个这样的副本会获取当前生产表的列表,并以随机顺序遍历:逐个选择一张表,在它上面执行一次“在副本上测试”的迁移。这种迁移实际上并不修改表结构,而只是执行一个无实际变更的 `ENGINE=InnoDB`。即便对应的表正被生产环境使用,这个测试迁移仍会复制真实的生产数据,并应用来自二进制日志的真实生产流量。
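作为参考,下面是在测试副本上发起一次这种“无实际变更”迁移时,`gh-ost` 命令行大致的样子;主机名、库名和表名取自上文的示例输出,具体参数只是示意,不代表 GitHub 实际使用的选项组合:

```
gh-ost \
  --host=ghost-db-mysql-0007 \
  --database=prod \
  --table=pull_requests \
  --alter="ENGINE=InnoDB" \
  --test-on-replica \
  --execute
```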
|
||||
|
||||
这些迁移可以被审计。以下是我们如何从聊天中检查运行测试的状态:
|
||||
|
||||

|
||||
**ggunson**.migration test-status
|
||||

|
||||
**Hubot**
|
||||
```
|
||||
# Migrating `prod`.`pull_requests`; Ghost table is `prod`.`_pull_requests_gho`
|
||||
# Migrating ghost-db-mysql-0007:3306; inspecting ghost-db-mysql-0007:3306; executing on ghost-db-mysql-0007
|
||||
# Migration started at Mon Jan 30 02:13:39 -0800 2017
|
||||
# chunk-size: 2500; max-lag-millis: 1500ms; max-load: Threads_running=30; critical-load: Threads_running=1000; nice-ratio: 0.000000
|
||||
# throttle-additional-flag-file: /tmp/gh-ost.throttle
|
||||
# panic-flag-file: /tmp/ghost-test-panic.flag
|
||||
# Serving on unix socket: /tmp/gh-ost.test.sock
|
||||
Copy: 57992500/86684838 66.9%; Applied: 57708; Backlog: 1/100; Time: 3h28m38s(total), 3h28m36s(copy); streamer: mysql-bin.000576:142993938; State: migrating; ETA: 1h43m12s
|
||||
```
|
||||
|
||||
当测试迁移完成表数据的复制后,它会停止复制并执行切换:先用 _ghost_ 表替换原始表,然后再交换回来。我们并不关心真的替换数据,而是让原始表和 _ghost_ 表同时保留下来,两者的数据应当完全一致。我们通过对两张表的全部数据做校验和来验证这一点。
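比较两张表的数据是否一致,一种直接的做法是使用 MySQL 自带的 `CHECKSUM TABLE`;文中没有说明 GitHub 具体使用的校验方式,这里仅作示意,表名取自上文的示例输出:

```
CHECKSUM TABLE prod.pull_requests, prod._pull_requests_gho;
```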
|
||||
|
||||
测试可能以如下几种状态结束:
|
||||
|
||||
* _成功_ : 一切顺利,校验和相同。我们期待看到这一点。
|
||||
|
||||
* _失败_ :执行出现问题。这种情况偶尔会发生,比如迁移进程被杀死、复制出问题等,通常与 `gh-ost` 本身无关。
|
||||
|
||||
* _校验失败_ :表数据不一致。对于被测试的分支来说,这意味着需要修复;对于正在进行的 `master` 分支测试来说,则意味着立即阻止生产环境的迁移。好在后者从未发生过。
|
||||
|
||||
测试结果经过审核,发送到机器人聊天室,作为事件发送到我们的度量系统。下图中的每条垂直线代表成功的迁移测试:
|
||||
|
||||

|
||||
|
||||
这些测试不断运行。如果发生故障,我们会收到通知。当然,我们可以随时访问机器人聊天室,了解发生了什么。
|
||||
|
||||
#### 测试新版本
|
||||
|
||||
我们在不断改进 `gh-ost`。我们的开发流程基于 `git` 分支,通过[拉取请求(PR)][44]进行合并。
|
||||
|
||||
提交的 `gh-ost` 拉取请求会经过持续集成(CI),做基本的编译和单元测试。一旦通过,这个 PR 在技术上就可以合并了;但更有意思的是,它同时也[可以通过 Heaven 进行部署][45]。由于 `gh-ost` 是我们基础架构中的敏感组件,我们会格外小心:在分支合并进 master 之前,先把它部署出去进行密集测试。
|
||||
|
||||

|
||||
**shlomi-noach**.deploy gh-ost/fix-reappearing-throttled-reasons to prod/ghost-db-mysql-0007
|
||||

|
||||
**Hubot**[@shlomi-noach][25] 正在部署 gh-ost/fix-reappearing-throttled-reasons (baee4f6) 到生产中 (ghost-db-mysql-0007).
|
||||
[@shlomi-noach][26] gh-ost/fix-reappearing-throttled-reasons (baee4f6) 的生产部署已经完成了! (2s)
|
||||
[@shlomi-noach][27] 确保你在 haystack 中看到异常
|
||||

|
||||
**jonahberquist**.deploy gh-ost/interactive-command-question to prod/ghost-db-mysql-0012
|
||||

|
||||
**Hubot**[@jonahberquist][28] 正在部署 gh-ost/interactive-command-question (be1ab17) 到生产 (ghost-db-mysql-0012).
|
||||
[@jonahberquist][29] gh-ost/interactive-command-question (be1ab17) 的生产部署已经完成了!(2s)
|
||||
[@jonahberquist][30] 确保你在 haystack 中看到异常
|
||||

|
||||
**shlomi-noach**.wcid gh-ost
|
||||

|
||||
**Hubot**shlomi-noach 测试 fix-reappearing-throttled-reasons 41 秒前: ghost-db-mysql-0007
|
||||
jonahberquist 测试 interactive-command-question 7 秒前: ghost-db-mysql-0012
|
||||
|
||||
没人在排队。
|
||||
|
||||
一些 PR 很小,不会影响数据本身,比如对状态消息、交互式命令等的修改,对 `gh-ost` 应用的影响较小;而另一些则会对迁移逻辑和操作方式带来重大变化。对于后者,我们会在全部生产表上反复进行严格测试,直到确信这些变化不会带来数据损坏的风险。
|
||||
|
||||
### 总结[][46]
|
||||
|
||||
通过持续的测试,我们建立起对系统的信任。把这些测试自动化并运行在生产环境中,让我们可以反复确认一切都按预期工作。随着基础设施不断演进,我们也会调整测试来覆盖最新的变化。
|
||||
|
||||
生产环境总会出现测试没有覆盖到的意外场景。我们在生产环境中做的测试越多,就越了解应用程序的预期行为和基础设施的能力。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://githubengineering.com/mysql-testing-automation-at-github/
|
||||
|
||||
作者:[tomkrouper ][a], [Shlomi Noach][b]
|
||||
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/tomkrouper
|
||||
[b]:https://github.com/shlomi-noach
|
||||
[1]:https://github.com/tomkrouper
|
||||
[2]:https://github.com/jessbreckenridge
|
||||
[3]:https://github.com/jessbreckenridge
|
||||
[4]:https://github.com/jessbreckenridge
|
||||
[5]:https://github.com/jessbreckenridge
|
||||
[6]:https://github.com/jessbreckenridge
|
||||
[7]:https://github.com/jessbreckenridge
|
||||
[8]:https://github.com/jessbreckenridge
|
||||
[9]:https://github.com/jessbreckenridge
|
||||
[10]:https://github.com/jessbreckenridge
|
||||
[11]:https://github.com/jessbreckenridge
|
||||
[12]:https://github.com/jessbreckenridge
|
||||
[13]:https://github.com/jessbreckenridge
|
||||
[14]:https://github.com/jessbreckenridge
|
||||
[15]:https://github.com/jessbreckenridge
|
||||
[16]:https://github.com/jessbreckenridge
|
||||
[17]:https://github.com/jessbreckenridge
|
||||
[18]:https://github.com/jessbreckenridge
|
||||
[19]:https://github.com/jessbreckenridge
|
||||
[20]:https://github.com/jessbreckenridge
|
||||
[21]:https://github.com/jessbreckenridge
|
||||
[22]:https://github.com/jessbreckenridge
|
||||
[23]:https://github.com/jessbreckenridge
|
||||
[24]:https://github.com/jessbreckenridge
|
||||
[25]:https://github.com/shlomi-noach
|
||||
[26]:https://github.com/shlomi-noach
|
||||
[27]:https://github.com/shlomi-noach
|
||||
[28]:https://github.com/jonahberquist
|
||||
[29]:https://github.com/jonahberquist
|
||||
[30]:https://github.com/jonahberquist
|
||||
[31]:https://githubengineering.com/mysql-testing-automation-at-github/
|
||||
[32]:https://github.com/tomkrouper
|
||||
[33]:https://github.com/tomkrouper
|
||||
[34]:https://github.com/shlomi-noach
|
||||
[35]:https://github.com/shlomi-noach
|
||||
[36]:https://githubengineering.com/mysql-testing-automation-at-github/#backups
|
||||
[37]:https://www.percona.com/software/mysql-database/percona-xtrabackup
|
||||
[38]:https://dev.mysql.com/doc/refman/5.6/en/replication-delayed.html
|
||||
[39]:https://githubengineering.com/mysql-testing-automation-at-github/#failovers
|
||||
[40]:http://githubengineering.com/orchestrator-github/
|
||||
[41]:https://githubengineering.com/context-aware-mysql-pools-via-haproxy/
|
||||
[42]:https://githubengineering.com/mysql-testing-automation-at-github/#schema-migrations
|
||||
[43]:http://githubengineering.com/gh-ost-github-s-online-migration-tool-for-mysql/
|
||||
[44]:https://github.com/github/gh-ost/pulls
|
||||
[45]:https://githubengineering.com/deploying-branches-to-github-com/
|
||||
[46]:https://githubengineering.com/mysql-testing-automation-at-github/#summary
|
@ -1,377 +0,0 @@
|
||||
Samba 系列(十五):用 SSSD 和 Realm 集成 Ubuntu 到 Samba4 AD DC
|
||||
============================================================
|
||||
|
||||
|
||||
本教程将告诉你如何将 Ubuntu 桌面版机器加入到 Samba4 活动目录域中,用 SSSD 和 Realm 服务来针对活动目录认证用户。
|
||||
|
||||
#### 要求:
|
||||
|
||||
1. [在 Ubuntu 上用 Samba4 创建一个活动目录架构][1]
|
||||
|
||||
### 第 1 步: 初始配置
|
||||
|
||||
1. 在把 Ubuntu 加入活动目录前确保主机名被正确设置了。使用 hostnamectl 命令设置机器名字或者手动编辑 /etc/hostname 文件。
|
||||
|
||||
```
|
||||
$ sudo hostnamectl set-hostname your_machine_short_hostname
|
||||
$ cat /etc/hostname
|
||||
$ hostnamectl
|
||||
```
|
||||
|
||||
2. 接下来,编辑机器的网络接口设置,配置合适的 IP 地址,并将 DNS 服务器地址指向 Samba 活动目录域控制器,如下图所示。
|
||||
|
||||
如果你已经在本地配置了 DHCP 服务来自动分配 IP 设置,给你局域网内机器合适的 AD DNS IP 地址,那么你可以跳过这一步。
|
||||
|
||||
[][2]
|
||||
|
||||
设置网络接口
|
||||
|
||||
上图中,192.168.1.254 和 192.168.1.253 代表 Samba4 域控制器的 IP 地址。
|
||||
|
||||
3. 用 GUI(图形用户界面) 或命令行重启网络服务来应用修改并且对你的域名发起一系列 ping 请求来测试 DNS 解析如期工作。 也用 host 命令来测试 DNS 解析。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart networking.service
|
||||
$ host your_domain.tld
|
||||
$ ping -c2 your_domain_name
|
||||
$ ping -c2 adc1
|
||||
$ ping -c2 adc2
|
||||
```
|
||||
|
||||
4. 最后, 确保机器时间和 Samba4 AD 同步。安装 ntpdate 包并用下列指令和 AD 同步时间。
|
||||
|
||||
```
|
||||
$ sudo apt-get install ntpdate
|
||||
$ sudo ntpdate your_domain_name
|
||||
```
|
||||
|
||||
### 第 2 步:安装需要的包
|
||||
|
||||
5. 这一步安装将 Ubuntu 加入 Samba4 活动目录域控制器所必须的软件和依赖: Realmd 和 SSSD 服务.
|
||||
|
||||
```
|
||||
$ sudo apt install adcli realmd krb5-user samba-common-bin samba-libs samba-dsdb-modules sssd sssd-tools libnss-sss libpam-sss packagekit policykit-1
|
||||
```
|
||||
|
||||
6. 输入大写的默认 realm 名称然后按下回车继续安装。
|
||||
|
||||
[][3]
|
||||
|
||||
输入 Realm 名称
|
||||
|
||||
7. 接着,创建包含以下内容的 SSSD 配置文件。
|
||||
|
||||
```
|
||||
$ sudo nano /etc/sssd/sssd.conf
|
||||
```
|
||||
|
||||
加入下面的内容到 sssd.conf 文件。
|
||||
|
||||
```
|
||||
[nss]
|
||||
filter_groups = root
|
||||
filter_users = root
|
||||
reconnection_retries = 3
|
||||
[pam]
|
||||
reconnection_retries = 3
|
||||
[sssd]
|
||||
domains = tecmint.lan
|
||||
config_file_version = 2
|
||||
services = nss, pam
|
||||
default_domain_suffix = TECMINT.LAN
|
||||
[domain/tecmint.lan]
|
||||
ad_domain = tecmint.lan
|
||||
krb5_realm = TECMINT.LAN
|
||||
realmd_tags = manages-system joined-with-samba
|
||||
cache_credentials = True
|
||||
id_provider = ad
|
||||
krb5_store_password_if_offline = True
|
||||
default_shell = /bin/bash
|
||||
ldap_id_mapping = True
|
||||
use_fully_qualified_names = True
|
||||
fallback_homedir = /home/%d/%u
|
||||
access_provider = ad
|
||||
auth_provider = ad
|
||||
chpass_provider = ad
|
||||
access_provider = ad
|
||||
ldap_schema = ad
|
||||
dyndns_update = true
|
||||
dyndsn_refresh_interval = 43200
|
||||
dyndns_update_ptr = true
|
||||
dyndns_ttl = 3600
|
||||
```
|
||||
|
||||
确保在下面这些参数中将域名替换为你自己的域名:
|
||||
|
||||
```
|
||||
domains = tecmint.lan
|
||||
default_domain_suffix = TECMINT.LAN
|
||||
[domain/tecmint.lan]
|
||||
ad_domain = tecmint.lan
|
||||
krb5_realm = TECMINT.LAN
|
||||
```
|
||||
|
||||
8. 接着,用下列命令给 SSSD 文件适当的权限:
|
||||
|
||||
```
|
||||
$ sudo chmod 700 /etc/sssd/sssd.conf
|
||||
```
|
||||
|
||||
9. 现在,打开并编辑 Realmd 配置文件,加入下面的内容。
|
||||
|
||||
```
|
||||
$ sudo nano /etc/realmd.conf
|
||||
```
|
||||
|
||||
Realmd.conf 文件摘录:
|
||||
|
||||
```
|
||||
[active-directory]
|
||||
os-name = Linux Ubuntu
|
||||
os-version = 17.04
|
||||
[service]
|
||||
automatic-install = yes
|
||||
[users]
|
||||
default-home = /home/%d/%u
|
||||
default-shell = /bin/bash
|
||||
[tecmint.lan]
|
||||
user-principal = yes
|
||||
fully-qualified-names = no
|
||||
```
|
||||
|
||||
10. 最后需要修改的是 Samba 守护进程的配置文件。编辑 /etc/samba/smb.conf 文件,在文件开头的 [global] 部分之后加入下面这段配置,如下图所示。
|
||||
|
||||
```
|
||||
workgroup = TECMINT
|
||||
client signing = yes
|
||||
client use spnego = yes
|
||||
kerberos method = secrets and keytab
|
||||
realm = TECMINT.LAN
|
||||
security = ads
|
||||
```
|
||||
[][4]
|
||||
|
||||
配置 Samba 服务器
|
||||
|
||||
确保把其中的域名替换为你自己的域名,尤其是 realm 对应的值,然后运行 testparm 命令检验配置文件是否有错误。
|
||||
|
||||
```
|
||||
$ sudo testparm
|
||||
```
|
||||
[][5]
|
||||
|
||||
测试 Samba 配置
|
||||
|
||||
11. 在做完所有必需的修改之后,用 AD 管理员帐号验证 Kerberos 认证并用下面的命令列出票据。
|
||||
|
||||
```
|
||||
$ sudo kinit ad_admin_user@DOMAIN.TLD
|
||||
$ sudo klist
|
||||
```
|
||||
[][6]
|
||||
|
||||
检验 Kerberos 认证
|
||||
|
||||
### 第 3 步: 加入 Ubuntu 到 Samba4 Realm
|
||||
|
||||
12. 要把 Ubuntu 机器加入 Samba4 活动目录,请键入下列命令。使用一个有管理员权限的 AD DC 账户来绑定 realm,使其按预期工作,并替换为你对应的域名值。
|
||||
|
||||
```
|
||||
$ sudo realm discover -v DOMAIN.TLD
|
||||
$ sudo realm list
|
||||
$ sudo realm join TECMINT.LAN -U ad_admin_user -v
|
||||
$ sudo net ads join -k
|
||||
```
|
||||
[][7]
|
||||
|
||||
加入 Ubuntu 到 Samba4 Realm
|
||||
|
||||
[][8]
|
||||
|
||||
表列 Realm Domain 信息
|
||||
|
||||
[][9]
|
||||
|
||||
添加用户到 Realm Domain
|
||||
|
||||
[][10]
|
||||
|
||||
添加 Domain 到 Realm
|
||||
|
||||
13. realm 绑定好了之后,运行下面的命令,允许所有域账户在这台机器上进行认证。
|
||||
|
||||
```
|
||||
$ sudo realm permit -all
|
||||
```
|
||||
|
||||
然后你可以使用下面列举的 realm 命令,允许或禁止某些域用户帐号或组的访问。
|
||||
|
||||
```
|
||||
$ sudo realm deny -a
|
||||
$ realm permit --groups 'domain.tld\Linux Admins'
|
||||
$ realm permit user@domain.lan
|
||||
$ realm permit DOMAIN\\User2
|
||||
```
|
||||
|
||||
14. 在一台[安装了 RSAT 工具的][11] Windows 机器上,你可以打开 AD UC,浏览“计算机”容器,检验是否已经创建了一个以你机器名命名的计算机对象帐号。
|
||||
|
||||
[][12]
|
||||
|
||||
确保域被加入 AD DC
|
||||
|
||||
### 第 4 步: 配置 AD 账户认证
|
||||
|
||||
15. 为了在 Ubuntu 机器上用域账户进行认证,你需要以 root 权限运行 pam-auth-update 命令,启用所有 PAM 配置项,包括在域账户第一次登录时自动创建主目录的选项。
|
||||
|
||||
按 [空格] 键勾选所有条目,然后选择 OK 应用配置。
|
||||
|
||||
```
|
||||
$ sudo pam-auth-update
|
||||
```
|
||||
[][13]
|
||||
|
||||
PAM 配置
|
||||
|
||||
16. 接着,手动编辑系统上的 /etc/pam.d/common-account 文件,加入下面这行,以便为认证过的域用户自动创建主目录。
|
||||
|
||||
```
|
||||
session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
|
||||
```
|
||||
|
||||
17. 如果活动目录用户无法在 Linux 命令行中修改自己的密码,请打开 /etc/pam.d/common-password 文件,在 password 行中移除 use_authtok 参数,修改后如下所示。
|
||||
|
||||
```
|
||||
password [success=1 default=ignore] pam_winbind.so try_first_pass
|
||||
```
|
||||
|
||||
18. 最后,用下面的命令重启并应用 Realmd 和 SSSD 服务的修改:
|
||||
|
||||
```
|
||||
$ sudo systemctl restart realmd sssd
|
||||
$ sudo systemctl enable realmd sssd
|
||||
```
|
||||
|
||||
19. 为了测试 Ubuntu 机器是否成功集成到 realm 中,先安装 winbind 包,然后运行 wbinfo 命令列出域账户和组,如下所示。
|
||||
|
||||
```
|
||||
$ sudo apt-get install winbind
|
||||
$ wbinfo -u
|
||||
$ wbinfo -g
|
||||
```
|
||||
[][14]
|
||||
|
||||
列出区域账户
|
||||
|
||||
20. 同样, 也可以针对特定的域用户或群组使用 getent 命令检验 Winbind nsswitch 模式。
|
||||
|
||||
```
|
||||
$ sudo getent passwd your_domain_user
|
||||
$ sudo getent group 'domain admins'
|
||||
```
|
||||
[][15]
|
||||
|
||||
检验 Winbind Nsswitch
|
||||
|
||||
21. 你也可以用 Linux id 命令获取 AD 账户的信息,命令如下。
|
||||
|
||||
```
|
||||
$ id tecmint_user
|
||||
```
|
||||
[][16]
|
||||
|
||||
检验 AD 用户信息
|
||||
|
||||
22. 用 su - 后跟域用户名的方式,在 Ubuntu 主机上以 Samba4 AD 账户进行认证;再运行 id 命令可以获取该 AD 账户的更多信息。
|
||||
|
||||
```
|
||||
$ su - your_ad_user
|
||||
```
|
||||
[][17]
|
||||
|
||||
AD 用户认证
|
||||
|
||||
用 pwd 命令查看你的域用户当前工作目录和 passwd 命令修改密码。
|
||||
|
||||
23. 如果想让某个域账户在 Ubuntu 上拥有 root 权限,需要用下面的命令把该 AD 用户名添加到 sudo 系统组:
|
||||
|
||||
```
|
||||
$ sudo usermod -aG sudo your_domain_user@domain.tld
|
||||
```
|
||||
|
||||
用该域账户登录 Ubuntu 并运行 apt update 命令更新系统,以检验其 root 权限。
|
||||
|
||||
24. 要给某个域组授予 root 权限,用 visudo 命令打开并编辑 /etc/sudoers 文件,加入如下行。
|
||||
|
||||
```
|
||||
%domain\ admins@tecmint.lan ALL=(ALL:ALL) ALL
|
||||
```
|
||||
|
||||
25. 要让 Ubuntu 桌面可以用域账户登录,需要调整 LightDM 显示管理器:编辑 /usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf 文件,增加以下两行,然后重启 lightdm 服务或重启机器使修改生效。
|
||||
|
||||
```
|
||||
greeter-show-manual-login=true
|
||||
greeter-hide-users=true
|
||||
```
|
||||
|
||||
域账户可以用 your_domain_username 或 your_domain_username@your_domain.tld 的格式登录 Ubuntu 桌面。
|
||||
|
||||
26. 若想使用 Samba AD 账户的短名称格式,编辑 /etc/sssd/sssd.conf 文件,在 [sssd] 配置块中加入下面这行。
|
||||
|
||||
```
|
||||
full_name_format = %1$s
|
||||
```
|
||||
|
||||
并重启 SSSD 后台程序应用改变。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart sssd
|
||||
```
|
||||
|
||||
你会注意到,AD 用户登录后 bash 提示符会发生变化,显示的是不带域名后缀的用户短名称。
|
||||
|
||||
27. 万一你因为 sssd.conf 里的 enumerate=true 参数设定而不能登录,你得用下面的命令清空 sssd 缓存数据:
|
||||
|
||||
```
|
||||
$ rm /var/lib/sss/db/cache_tecmint.lan.ldb
|
||||
```
|
||||
|
||||
这就是全部内容了!虽然本教程主要针对与 Samba4 活动目录的集成,同样的步骤也可以用于通过 Realmd 和 SSSD 服务把 Ubuntu 集成到微软 Windows Server 活动目录中。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Matei Cezar
|
||||
我是一个沉迷电脑的家伙,开源和基于 Linux 的系统软件的粉丝,在 Linux 发行版桌面、服务器和 bash 脚本方面有大约 4 年的经验。
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/integrate-ubuntu-to-samba4-ad-dc-with-sssd-and-realm/
|
||||
|
||||
作者:[ Matei Cezar][a]
|
||||
译者:[XYenChi](https://github.com/XYenChi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/cezarmatei/
|
||||
[1]:https://www.tecmint.com/install-samba4-active-directory-ubuntu/
|
||||
[2]:https://www.tecmint.com/wp-content/uploads/2017/07/Configure-Network-Interface.jpg
|
||||
[3]:https://www.tecmint.com/wp-content/uploads/2017/07/Set-realm-name.png
|
||||
[4]:https://www.tecmint.com/wp-content/uploads/2017/07/Configure-Samba-Server.jpg
|
||||
[5]:https://www.tecmint.com/wp-content/uploads/2017/07/Test-Samba-Configuration.jpg
|
||||
[6]:https://www.tecmint.com/wp-content/uploads/2017/07/Check-Kerberos-Authentication.jpg
|
||||
[7]:https://www.tecmint.com/wp-content/uploads/2017/07/Join-Ubuntu-to-Samba4-Realm.jpg
|
||||
[8]:https://www.tecmint.com/wp-content/uploads/2017/07/List-Realm-Domain-Info.jpg
|
||||
[9]:https://www.tecmint.com/wp-content/uploads/2017/07/Add-User-to-Realm-Domain.jpg
|
||||
[10]:https://www.tecmint.com/wp-content/uploads/2017/07/Add-Domain-to-Realm.jpg
|
||||
[11]:https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
|
||||
[12]:https://www.tecmint.com/wp-content/uploads/2017/07/Confirm-Domain-Added.jpg
|
||||
[13]:https://www.tecmint.com/wp-content/uploads/2017/07/PAM-Configuration.jpg
|
||||
[14]:https://www.tecmint.com/wp-content/uploads/2017/07/List-Domain-Accounts.jpg
|
||||
[15]:https://www.tecmint.com/wp-content/uploads/2017/07/check-Winbind-nsswitch.jpg
|
||||
[16]:https://www.tecmint.com/wp-content/uploads/2017/07/Check-AD-User-Info.jpg
|
||||
[17]:https://www.tecmint.com/wp-content/uploads/2017/07/AD-User-Authentication.jpg
|
||||
[18]:https://www.tecmint.com/author/cezarmatei/
|
||||
[19]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[20]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -1,51 +0,0 @@
|
||||
# IoT 边缘计算框架的新进展
|
||||
---
|
||||

|
||||
|
||||
开源项目 EdgeX Foundry 旨在开发一个标准化的、可互操作的物联网边缘计算框架。[图片经许可使用](https://www.linux.com/licenses/category/used-permission)。
|
||||
|
||||
4 月份,Linux 基金会[启动](http://linuxgizmos.com/open-source-group-focuses-on-industrial-iot-gateway-middleware/)了开源项目 [EdgeX Foundry](https://www.edgexfoundry.org/),旨在开发一个标准化的、可互操作的物联网边缘计算框架。就在最近,EdgeX Foundry 又[宣布](https://www.edgexfoundry.org/announcement/2017/07/17/edgex-foundry-builds-momentum-for-a-iot-interoperability-and-a-unified-marketplace-with-eight-new-members/)新增 8 个成员,其成员总数达到 58 个。
|
||||
|
||||
这些新成员是 Absolute, IoT Impact LABS, inwinStack, Parallel Machines, Queen's University Belfast, RIOT, Toshiba Digital Solutions Corporation, 和 Tulip Interfaces. 其原有成员包括 AMD, Analog Devices, Canonical/Ubuntu, Cloud Foundry, Dell, Linaro, Mocana, NetFoundry, Opto 22, RFMicron 和 VMWare 等其他公司或组织.
|
||||
|
||||
戴尔贡献出其基于 Apache2.0 协议的[FUSE](https://medium.com/@gigastacey/dell-plans-an-open-source-iot-stack-3dde43f24feb)框架源码作为 EdgeX Foundry 项目的种子,其中包括十几个微服务和超过 12.5 万行代码. Linux 基金会和 Dell 将合并 FUSE 和 AllJoyn-compliant IoTX 项目, 后者是由现有 EdgeX Foundry 成员 Two Bulls 和 Beechwood 发起的与 FUSE 相似的一个项目. 待合并完成 Linux 基金组织将正式宣布启动 EdgeX Foundry 项目.
|
||||
|
||||
EdgeX Foundry 将创造一个互操作性的, 即插即用的物联网边缘计算组件生态系统. 开源 EdgeX 栈将协调多样的传感器网络与后台数据处理云平台间的消息协议. 该框架旨在充分挖掘横跨边缘计算, 安全, 系统管理和微服务等模块间的通用代码.
|
||||
|
||||
对于项目成员及其客户来说,关注焦点在于能够把经过预先认证的软件方便地集成到物联网网关和智能边缘设备中。在 Linux.com 的一次采访中,[IoT Impact LABS](https://iotimpactlabs.com/) 的首席工程师 Dan Mahoney 说:“现实中,EdgeX Foundry 降低了我们在部署多供应商解决方案时所面临的挑战。”
|
||||
|
||||
既然 Linux 基金会已经在把 AllSeen Alliance 的 AllJoyn 项目合并到 [IoTivity](https://www.linux.com/news/how-iotivity-and-alljoyn-could-combine) 中,为什么还要发起另外一个物联网标准化项目(EdgeX Foundry)?原因之一是,EdgeX Foundry 不同于 IoTivity:IoTivity 主要解决工业物联网的问题,而 EdgeX Foundry 旨在一站式解决消费级和工业级物联网的全部问题。更具体地说,EdgeX Foundry 旨在成为网关和智能终端的通用中间件。EdgeX Foundry 与 IoTivity 的另一个不同在于,前者希望借助预连接的终端催生新的产品,后者更多是解决现有产品之间的互操作性。
|
||||
|
||||
Linux 基金会 IoT 高级总监 Philip DesAutels 说:"IoTivity 提供实现设备之间无缝连接的协议, 而 EdgeX Foundry 提供了一个边缘计算框架. EdgeX Foundry 能够兼容如 IoTivity, BacNet, EtherCat 等任何协议设备, 从而实现集成多协议通信系统的通用边缘计算框架, 该项目的目标是为构建互操作组件的生态系统的过程中, 降低不确定性, 缩短市场化时间, 更好地产生规模效应."
|
||||
|
||||
上个月, 由 [Open Connectivity Foundation](https://openconnectivity.org/developer/specifications/international-standards) (OCF) 和 Linux 基金组织共同发起的 IoTivity项目发布了 [IoTivity 1.3](https://wiki.iotivity.org/release_note_1.3.0), 该版本增加 了与其曾经的对手 AllJoyn spec 的纽带, 也增加了对于 OCF 的 UPnP 设备的接口. 预计在 [IoTivity 2.0](https://www.linux.com/news/iotivity-20-whats-store) 中, IoTivity 和 AllJoyn 将会更进一步深入集成.
|
||||
|
||||
DesAutels 告诉 linux.com, IoTivity 和 EdgeX 是高度互补的, 其原因是 EdgeX 项目和IoTivity 项目有好几个共同成员, 如此更强化了 IoTivity 和 EdgeX 的互补关系.
|
||||
|
||||
尽管 IoTivity 和 EdgeX 都宣称跨平台(包括 CPU 架构和操作系统),但二者还是存在一定区别。IoTivity 最初是基于 Linux 平台设计的,兼容 Ubuntu、Tizen 和 Android 等 Linux 系列操作系统,后来逐步扩展到 Windows 和 iOS;与之对应,EdgeX 在设计之初就基于跨平台的理念,兼容各种 CPU 架构,以及 Linux、Windows 和 Mac OS 等操作系统,未来还将兼容实时操作系统(RTOS)。
|
||||
|
||||
EdgeX 的新成员 [RIOT](https://riot-os.org/) 提供了开源的 RIOT 实时操作系统。RIOT 的主要维护者 Thomas Eichinger 在一份声明中说:“由于 RIOT 的初衷就是致力于解决 Linux 不太适合的场景,对 RIOT 社区来说,参与并支持像 EdgeX Foundry 这样与 Linux 互补的社区是自然而然的事。”
|
||||
|
||||
## 传感器集成的简化
|
||||
IoT Impact LABS(又称 Impact LABS,或直接称为 LABS)是另一个 EdgeX 新成员。该公司有一项独特的业务,旨在帮助中小企业完成物联网解决方案的试点部署。它的大部分客户(其中包括几个 EdgeX Foundry 项目成员)致力于建设智慧城市、基础设施再利用、提高食品安全,以及应对社会面临的自然资源短缺等挑战。
|
||||
|
||||
Dan Mahoney 说:"在 LABS 我们花费了很多时间来调和试点客户的解决方案之间的差异性. EdgeX Foundry 可以最小化部署边缘软件系统的工作量,从而使我们能够更快更好地部署高质量的解决方案."
|
||||
|
||||
该框架在涉及多个供应商, 多种类型传感器的场景尤其凸显优势. "Edgex Foundry 将为我们提供快速构建网关的能力, 以及快速部署传感器的能力." Mahoney 补充说到. 传感器制造商将借助 EdgeX SDK 烧写应用层协议驱动到边缘设备, 该协议能够兼容多供应商和解决方案.
|
||||
|
||||
## 边缘分析能力的构建
|
||||
当被问到希望看到 EdgeX Foundry 怎样发展时,Mahoney 说:“我们喜闻乐见的一个目标,是有更多可用的工业协议以设备服务的形式出现,以及一条更清晰的边缘计算实现路径。”
|
||||
|
||||
在工业物联网和消费级物联网中边缘计算都呈现增长趋势. 在后者, 我们已经看到如 Alexa 的智能声控以及录像分析等几个智能家居系统集成了边缘计算分析技术. 这减轻了云服务平台的计算负荷, 但同时也带来了安全, 隐私, 以及由于政策和供应商中断引起的服务中断问题.
|
||||
|
||||
对于工业物联网网关, 隐私问题成为首要的问题. 因此, 在物联网网关方面出现了一些类似于云服务功能的扩展. 其中一个解决方案是, 为了安全将一些云服务上的安全保障应用借助容器如 [RIOS 与 Ubuntu 内核快照机制](https://www.linux.com/news/future-iot-containers-aim-solve-security-crisis)等方式集成到嵌入式设备. 另一种方案是, 开发 IoT 系统迁移云功能到边缘. 上个月, Amazon 为基于 linux 的网关发布了实现 [AWS Greengrass](http://linuxgizmos.com/amazon-releases-aws-greengrass-for-local-iot-processing-on-linux-devices/) 物联网协议栈的 AWS lambda. 该软件能够使计算, 消息路由, 数据收集和同步能力在边缘设备上完成,如物联网网关.
|
||||
|
||||
分析能力是 EdgeX Foundry 的一个关键功能点。发起成员 Cloud Foundry 旨在把其主要的工业应用平台与边缘设备集成;另一个新成员 [Parallel Machines](https://www.parallelmachines.com/) 则计划利用 EdgeX 把 AI 带到边缘设备。
|
||||
|
||||
EdgeX Foundry 项目仍处于早期阶段,软件还在 alpha 阶段,其成员上个月才刚刚举行了第一次全体成员大会。项目也已经为新开发者准备了一些入门培训课程,你可以从[这里](https://wiki.edgexfoundry.org/)获取更多信息。
|
||||
|
||||
原文连接: [https://www.linux.com/blog/2017/7/iot-framework-edge-computing-gains-ground](https://www.linux.com/blog/2017/7/iot-framework-edge-computing-gains-ground)
|
||||
|
||||
作者: [ERIC BROWN](https://www.linux.com/users/ericstephenbrown) 译者:penghuster 校对:校对者ID
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,335 @@
|
||||
开发一个 Linux 调试器(九):处理变量
|
||||
============================================================
|
||||
|
||||
变量是很“狡猾”的。有时,它们会安安稳稳地待在寄存器中,但一转头就跑到了堆栈上;为了优化,编译器甚至可能把它们完全优化掉。无论变量在内存中怎样挪来挪去,我们都需要一些方法在调试器中跟踪和操作它们。这篇文章将会教你如何在调试器中处理变量,并使用 `libelfin` 演示一个简单的实现。
|
||||
|
||||
* * *
|
||||
|
||||
### 系列文章索引
|
||||
|
||||
1. [设置][1]
|
||||
|
||||
2. [断点][2]
|
||||
|
||||
3. [寄存器和内存][3]
|
||||
|
||||
4. [ELF 和 DWARF][4]
|
||||
|
||||
5. [源和信号][5]
|
||||
|
||||
6. [源码级单步调试][6]
|
||||
|
||||
7. [源码级断点][7]
|
||||
|
||||
8. [堆栈展开][8]
|
||||
|
||||
9. [处理变量][9]
|
||||
|
||||
10. [高级话题][10]
|
||||
|
||||
* * *
|
||||
|
||||
在开始之前,请确保你使用的 `libelfin` 版本是[我分支上的 `fbreg`][11]。这包含了一些 hack 来支持获取当前堆栈帧的基址并评估位置列表,这些都不是由原生的 `libelfin` 提供的。你可能需要给 GCC 传递 `-gdwarf-2` 参数使其生成兼容的 DWARF 信息。但是在实现之前,我将详细说明 DWARF 5 最新规范中的位置编码方式。如果你想要了解更多信息,那么你可以从[这里][12]获取标准。
|
||||
|
||||
### DWARF 位置
|
||||
|
||||
使用 `DW_AT_location` 属性在 DWARF 信息中编码给定时刻内存中变量的位置。位置描述可以是单个位置描述,复合位置描述或位置列表。
|
||||
|
||||
* 简单的位置描述描述对象的一个连续的部分(通常是所有)的位置。简单位置描述可以描述可寻址存储器或寄存器中的位置,或缺少位置(具有或不具有已知值)。
|
||||
* 比如:
|
||||
* `DW_OP_fbreg -32`
|
||||
|
||||
* 一个整体存储在“堆栈帧基址偏移 -32 字节”处的变量
|
||||
|
||||
* 复合位置描述根据片段描述对象,每个对象可以包含在寄存器的一部分中或存储在与其他片段无关的存储器位置中。
|
||||
* 比如:
|
||||
* `DW_OP_reg3 DW_OP_piece 4 DW_OP_reg10 DW_OP_piece 2`
|
||||
|
||||
* 前四个字节位于寄存器 3 中,后两个字节位于寄存器 10 中的一个变量。
|
||||
|
||||
* 位置列表描述了具有有限周期或在周期内更改位置的对象。
|
||||
* 比如:
|
||||
* `<loclist with 3 entries follows>`
|
||||
* `[ 0]<lowpc=0x2e00><highpc=0x2e19>DW_OP_reg0`
|
||||
|
||||
* `[ 1]<lowpc=0x2e19><highpc=0x2e3f>DW_OP_reg3`
|
||||
|
||||
* `[ 2]<lowpc=0x2ec4><highpc=0x2ec7>DW_OP_reg2`
|
||||
|
||||
* 根据程序计数器的当前值,位置在寄存器之间移动的变量
|
||||
|
||||
根据位置描述的种类,`DW_AT_location` 以三种不同的方式进行编码。`exprloc` 编码简单和复合的位置描述。它们由一个字节长度组成,后跟一个 DWARF 表达式或位置描述。`loclist` 和 `loclistptr` 的编码位置列表。它们在 `.debug_loclists` 部分中提供索引或偏移量,该部分描述了实际的位置列表。
|
||||
|
||||
### DWARF 表达式
|
||||
|
||||
变量的实际位置需要通过计算 DWARF 表达式来得到。DWARF 表达式由一系列对栈上的值进行操作的运算组成。可用的 DWARF 操作有很多,所以我不会逐一详细解释;相反,我会从每一类操作中举几个例子,让你对可用的东西有个大致印象。另外,不用被这些吓到,`libelfin` 会为我们处理所有这些复杂性。
|
||||
|
||||
* 字面编码
|
||||
* `DW_OP_lit0`、`DW_OP_lit1`。。。`DW_OP_lit31`
|
||||
* 将字面值压入堆栈
|
||||
|
||||
* `DW_OP_addr <addr>`
|
||||
* 将地址操作数压入堆栈
|
||||
|
||||
* `DW_OP_constu <unsigned>`
|
||||
* 将无符号值压入堆栈
|
||||
|
||||
* 寄存器值
|
||||
* `DW_OP_fbreg <offset>`
|
||||
* 将堆栈帧基址加上给定偏移量后得到的值压入堆栈
|
||||
|
||||
* `DW_OP_breg0`、`DW_OP_breg1`。。。 `DW_OP_breg31 <offset>`
|
||||
* 将给定寄存器的内容加上给定的偏移量压入堆栈
|
||||
|
||||
* 堆栈操作
|
||||
* `DW_OP_dup`
|
||||
* 复制堆栈顶部的值
|
||||
|
||||
* `DW_OP_deref`
|
||||
* 将堆栈顶部视为内存地址,并将其替换为该地址的内容
|
||||
|
||||
* 算术和逻辑运算
|
||||
* `DW_OP_and`
|
||||
* 弹出堆栈顶部的两个值,并压回它们的逻辑 `AND`
|
||||
|
||||
* `DW_OP_plus`
|
||||
* 与 `DW_OP_and` 相同,但是会添加值
|
||||
|
||||
* 控制流操作
|
||||
* `DW_OP_le`、`DW_OP_eq`、`DW_OP_gt` 等
|
||||
* 弹出前两个值,比较它们,并且如果条件为真,则压入 `1`,否则为 `0`
|
||||
|
||||
* `DW_OP_bra <offset>`
|
||||
* 条件分支:如果堆栈的顶部不是 `0`,则按 `offset` 在表达式中向前或向后跳转
|
||||
|
||||
* 类型转换
|
||||
* `DW_OP_convert <DIE offset>`
|
||||
* 将堆栈顶部的值转换为不同的类型,它由给定偏移量的 DWARF 信息条目描述
|
||||
|
||||
* 特殊操作
|
||||
* `DW_OP_nop`
|
||||
* 什么都不做!
|
||||
|
||||
### DWARF 类型
|
||||
|
||||
DWARF 的类型表示需要足够强大来为调试器用户提供有用的变量表示。用户经常希望能够在应用程序级别进行调试,而不是在机器级别进行调试,并且他们需要了解他们的变量正在做什么。
|
||||
|
||||
DWARF 类型与大多数其他调试信息一起编码在 DIE 中。它们可以具有指示其名称、编码、大小、字节等的属性。无数的类型标签可用于表示指针、数组、结构体、typedef 以及 C 或 C++ 程序中可以看到的任何其他内容。
|
||||
|
||||
以这个简单的结构体为例:
|
||||
|
||||
```
|
||||
struct test{
|
||||
int i;
|
||||
float j;
|
||||
int k[42];
|
||||
test* next;
|
||||
};
|
||||
```
|
||||
|
||||
这个结构体的父 DIE 是这样的:
|
||||
|
||||
```
|
||||
< 1><0x0000002a> DW_TAG_structure_type
|
||||
DW_AT_name "test"
|
||||
DW_AT_byte_size 0x000000b8
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000001
|
||||
|
||||
```
|
||||
|
||||
上面说的是我们有一个叫做 `test` 的结构体,大小为 `0xb8`,在 `test.cpp` 的第 `1` 行声明。接下来有许多描述成员的子 DIE。
|
||||
|
||||
```
|
||||
< 2><0x00000032> DW_TAG_member
|
||||
DW_AT_name "i"
|
||||
DW_AT_type <0x00000063>
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000002
|
||||
DW_AT_data_member_location 0
|
||||
< 2><0x0000003e> DW_TAG_member
|
||||
DW_AT_name "j"
|
||||
DW_AT_type <0x0000006a>
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000003
|
||||
DW_AT_data_member_location 4
|
||||
< 2><0x0000004a> DW_TAG_member
|
||||
DW_AT_name "k"
|
||||
DW_AT_type <0x00000071>
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000004
|
||||
DW_AT_data_member_location 8
|
||||
< 2><0x00000056> DW_TAG_member
|
||||
DW_AT_name "next"
|
||||
DW_AT_type <0x00000084>
|
||||
DW_AT_decl_file 0x00000001 test.cpp
|
||||
DW_AT_decl_line 0x00000005
|
||||
DW_AT_data_member_location 176(as signed = -80)
|
||||
|
||||
```
|
||||
|
||||
每个成员都有一个名称、一个类型(以 DIE 偏移量的形式给出)、声明所在的文件和行号,以及该成员在结构体内的字节偏移量。接下来列出的是这些成员所指向的类型。
|
||||
|
||||
```
|
||||
< 1><0x00000063> DW_TAG_base_type
|
||||
DW_AT_name "int"
|
||||
DW_AT_encoding DW_ATE_signed
|
||||
DW_AT_byte_size 0x00000004
|
||||
< 1><0x0000006a> DW_TAG_base_type
|
||||
DW_AT_name "float"
|
||||
DW_AT_encoding DW_ATE_float
|
||||
DW_AT_byte_size 0x00000004
|
||||
< 1><0x00000071> DW_TAG_array_type
|
||||
DW_AT_type <0x00000063>
|
||||
< 2><0x00000076> DW_TAG_subrange_type
|
||||
DW_AT_type <0x0000007d>
|
||||
DW_AT_count 0x0000002a
|
||||
< 1><0x0000007d> DW_TAG_base_type
|
||||
DW_AT_name "sizetype"
|
||||
DW_AT_byte_size 0x00000008
|
||||
DW_AT_encoding DW_ATE_unsigned
|
||||
< 1><0x00000084> DW_TAG_pointer_type
|
||||
DW_AT_type <0x0000002a>
|
||||
|
||||
```
|
||||
|
||||
如你所见,我笔记本电脑上的 `int` 是一个 4 字节的有符号整数类型,`float` 是一个 4 字节的浮点数。整数数组类型以 `int` 作为其元素类型,以 `sizetype`(可以认为就是 `size_t`)作为索引类型,共有 `0x2a`(42)个元素。`test *` 类型是 `DW_TAG_pointer_type`,它引用了 `test` 的 DIE。
|
||||
|
||||
* * *
|
||||
|
||||
### 实现简单的变量读取器
|
||||
|
||||
如上所述,`libelfin` 将处理我们大部分的复杂性。但是,它并没有实现用于表示可变位置的所有不同方法,并且在我们的代码中处理这些将变得非常复杂。因此,我现在选择只支持 `exprloc`。请随意添加对更多类型表达式的支持。如果你真的有勇气,请提交补丁到 `libelfin` 中来帮助完成必要的支持!
|
||||
|
||||
处理变量的主要工作,是定位它们的各个部分在内存或寄存器中的位置,然后像之前一样进行读取或写入。为了简单起见,我只会告诉你如何实现读取。
|
||||
|
||||
首先我们需要告诉 `libelfin` 如何从我们的进程中读取寄存器。我们创建一个继承自 `expr_context` 的类并使用 `ptrace` 来处理所有内容:
|
||||
|
||||
```
|
||||
class ptrace_expr_context : public dwarf::expr_context {
|
||||
public:
|
||||
ptrace_expr_context (pid_t pid) : m_pid{pid} {}
|
||||
|
||||
dwarf::taddr reg (unsigned regnum) override {
|
||||
return get_register_value_from_dwarf_register(m_pid, regnum);
|
||||
}
|
||||
|
||||
dwarf::taddr pc() override {
|
||||
struct user_regs_struct regs;
|
||||
ptrace(PTRACE_GETREGS, m_pid, nullptr, ®s);
|
||||
return regs.rip;
|
||||
}
|
||||
|
||||
dwarf::taddr deref_size (dwarf::taddr address, unsigned size) override {
|
||||
//TODO take into account size
|
||||
return ptrace(PTRACE_PEEKDATA, m_pid, address, nullptr);
|
||||
}
|
||||
|
||||
private:
|
||||
pid_t m_pid;
|
||||
};
|
||||
```
|
||||
|
||||
读取将由我们 `debugger` 类中的 `read_variables` 函数处理:
|
||||
|
||||
```
|
||||
void debugger::read_variables() {
|
||||
using namespace dwarf;
|
||||
|
||||
auto func = get_function_from_pc(get_pc());
|
||||
|
||||
//...
|
||||
}
|
||||
```
|
||||
|
||||
我们上面做的第一件事是找到我们目前进入的函数,然后我们需要循环访问该函数中的条目来寻找变量:
|
||||
|
||||
```
|
||||
for (const auto& die : func) {
|
||||
if (die.tag == DW_TAG::variable) {
|
||||
//...
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
我们通过查找 DIE 中的 `DW_AT_location` 条目获取位置信息:
|
||||
|
||||
```
|
||||
auto loc_val = die[DW_AT::location];
|
||||
```
|
||||
|
||||
接着我们确保它是一个 `exprloc`,并请求 `libelfin` 来评估我们的表达式:
|
||||
|
||||
```
|
||||
if (loc_val.get_type() == value::type::exprloc) {
|
||||
ptrace_expr_context context {m_pid};
|
||||
auto result = loc_val.as_exprloc().evaluate(&context);
|
||||
```
|
||||
|
||||
现在我们已经评估了表达式,我们需要读取变量的内容。它可以在内存或寄存器中,因此我们将处理这两种情况:
|
||||
|
||||
```
|
||||
switch (result.location_type) {
|
||||
case expr_result::type::address:
|
||||
{
|
||||
auto value = read_memory(result.value);
|
||||
std::cout << at_name(die) << " (0x" << std::hex << result.value << ") = "
|
||||
<< value << std::endl;
|
||||
break;
|
||||
}
|
||||
|
||||
case expr_result::type::reg:
|
||||
{
|
||||
auto value = get_register_value_from_dwarf_register(m_pid, result.value);
|
||||
std::cout << at_name(die) << " (reg " << result.value << ") = "
|
||||
<< value << std::endl;
|
||||
break;
|
||||
}
|
||||
|
||||
default:
|
||||
throw std::runtime_error{"Unhandled variable location"};
|
||||
}
|
||||
```
|
||||
|
||||
你可以看到,我只是简单地打印出了值,并没有根据变量的类型去解释它。希望通过这段代码,你能看出如何进一步支持写入变量,或者按给定名字查找变量。
|
||||
|
||||
最后我们可以将它添加到我们的命令解析器中:
|
||||
|
||||
```
|
||||
else if(is_prefix(command, "variables")) {
|
||||
read_variables();
|
||||
}
|
||||
```
|
||||
|
||||
### 测试一下
|
||||
|
||||
编写一个包含若干变量的小程序,在不开启优化、带调试信息的情况下编译它,然后看看能否读取这些变量的值。再尝试写入存储变量的内存地址,观察程序行为的变化。
|
||||
|
||||
* * *
|
||||
|
||||
已经有九篇文章了,还剩最后一篇!下一次我会讨论一些你可能会感兴趣的更高级的概念。现在你可以在[这里][13]找到这个帖子的代码。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.tartanllama.xyz/writing-a-linux-debugger-variables/
|
||||
|
||||
作者:[ Simon Brand][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/writing-a-linux-debugger-setup/
|
||||
[2]:https://blog.tartanllama.xyz/writing-a-linux-debugger-breakpoints/
|
||||
[3]:https://blog.tartanllama.xyz/writing-a-linux-debugger-registers/
|
||||
[4]:https://blog.tartanllama.xyz/writing-a-linux-debugger-elf-dwarf/
|
||||
[5]:https://blog.tartanllama.xyz/writing-a-linux-debugger-source-signal/
|
||||
[6]:https://blog.tartanllama.xyz/writing-a-linux-debugger-dwarf-step/
|
||||
[7]:https://blog.tartanllama.xyz/writing-a-linux-debugger-source-break/
|
||||
[8]:https://blog.tartanllama.xyz/writing-a-linux-debugger-unwinding/
|
||||
[9]:https://blog.tartanllama.xyz/writing-a-linux-debugger-variables/
|
||||
[10]:https://blog.tartanllama.xyz/writing-a-linux-debugger-advanced-topics/
|
||||
[11]:https://github.com/TartanLlama/libelfin/tree/fbreg
|
||||
[12]:http://dwarfstd.org/
|
||||
[13]:https://github.com/TartanLlama/minidbg/tree/tut_variable
|
@ -0,0 +1,150 @@
|
||||
开发一个 Linux 调试器(十):高级主题
|
||||
============================================================
|
||||
|
||||
我们终于来到这个系列的最后一篇文章!这一次,我将对调试中的一些更高级的概念进行高层的概述:远程调试、共享库支持、表达式计算和多线程支持。这些想法实现起来比较复杂,所以我不会详细说明如何做,但是如果有的话,我很乐意回答有关这些概念的问题。
|
||||
|
||||
* * *
|
||||
|
||||
### 系列索引
|
||||
|
||||
1. [准备环境][1]
|
||||
|
||||
2. [断点][2]
|
||||
|
||||
3. [寄存器和内存][3]
|
||||
|
||||
4. [Elves 和 dwarves][4]
|
||||
|
||||
5. [源码和信号][5]
|
||||
|
||||
6. [源码层逐步执行][6]
|
||||
|
||||
7. [源码层断点][7]
|
||||
|
||||
8. [调用栈][8]
|
||||
|
||||
9. [处理变量][9]
|
||||
|
||||
10. [高级主题][10]
|
||||
|
||||
* * *
|
||||
|
||||
### 远程调试
|
||||
|
||||
远程调试对于嵌入式系统或不同环境的调试非常有用。它还在高级调试器操作和与操作系统和硬件的交互之间设置了一个很好的分界线。事实上,像 GDB 和 LLDB 这样的调试器即使在调试本地程序时也可以作为远程调试器运行。一般架构是这样的:
|
||||
|
||||

|
||||
|
||||
调试器是我们通过命令行交互的组件。也许如果你使用的是 IDE,那么顶层中另一个层可以通过_机器接口_与调试器进行通信。在目标机器上(可能与本机一样)有一个 _debug stub_ ,它理论上是一个非常小的操作系统调试库的包装程序,它执行所有的低级调试任务,如在地址上设置断点。我说“在理论上”,因为如今 stub 变得越来越大。例如,我机器上的 LLDB debug stub 大小是 7.6MB。debug stub 通过一些使用特定于操作系统的功能(在我们的例子中是 “ptrace”)和被调试进程以及通过远程协议的调试器通信。
|
||||
|
||||
最常见的远程调试协议是 GDB 远程协议。这是一种基于文本的数据包格式,用于在调试器和 debug stub 之间传递命令和信息。我不会详细介绍它,但你可以在[这里][11]阅读你想知道的。如果你启动 LLDB 并执行命令 `log enable gdb-remote packets`,那么你将获得通过远程协议发送的所有数据包的跟踪。在 GDB 上,你可以用 `set remotelogfile <file>` 做同样的事情。
|
||||
|
||||
作为一个简单的例子,这是数据包设置断点:
|
||||
|
||||
```
|
||||
$Z0,400570,1#43
|
||||
|
||||
```
|
||||
|
||||
`$` 标记数据包的开始。`Z0` 是插入内存断点的命令。`400570` 和 `1` 是参数,其中前者是设置断点的地址,后者是特定目标的断点类型说明符。最后,`#43` 是校验值,以确保数据没有损坏。
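校验值的计算方式是:把 `$` 和 `#` 之间所有字符的字节值相加,再对 256 取模,写成两位十六进制数。下面用一小段 C++ 验证正文中的例子(仅为示意,并非本系列调试器的代码):

```
// 计算 GDB 远程协议数据包的校验值:对负载内所有字符求和并对 256 取模
#include <cstdio>
#include <string>

unsigned char gdb_checksum(const std::string& payload) {
    unsigned sum = 0;
    for (unsigned char c : payload) {
        sum += c;
    }
    return static_cast<unsigned char>(sum % 256);
}

int main() {
    // 对 "Z0,400570,1" 计算结果为 0x43,与数据包末尾的 "#43" 一致
    std::printf("%02x\n", gdb_checksum("Z0,400570,1"));
}
```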
|
||||
|
||||
GDB 远程协议非常易于扩展自定义数据包,这对于实现平台或语言特定的功能非常有用。
|
||||
|
||||
* * *
|
||||
|
||||
### 共享库和动态加载支持
|
||||
|
||||
调试器需要知道被调试程序加载了哪些共享库,以便设置断点、获取源码级的信息和符号等。除了查找动态链接的库之外,调试器还必须跟踪程序在运行时通过 `dlopen` 加载的库。为了达到这个目的,动态链接器维护着一个 _交汇结构体(rendezvous structure)_,其中保存了共享库描述符的链表,以及一个函数指针,每当链表更新时该函数就会被调用。这个结构存储在 ELF 文件的 `.dynamic` 段中,在程序执行之前被初始化。
|
||||
|
||||
一个简单的跟踪算法:
|
||||
|
||||
* 追踪程序在 ELF 头中查找程序的入口(或者可以使用存储在 `/proc/<pid>/aux` 中的辅助向量)
|
||||
|
||||
* 追踪程序在程序的入口处设置一个断点,并开始执行。
|
||||
|
||||
* 当到达断点时,通过在 ELF 文件中查找 `.dynamic` 的加载地址找到交汇结构体的地址。
|
||||
|
||||
* 检查交汇结构体以获取当前加载的库的列表。
|
||||
|
||||
* 在链接器的更新函数上设置断点
|
||||
|
||||
* 每当到达断点时,列表都会更新
|
||||
|
||||
* 追踪程序在循环中不断继续执行程序并等待信号,直到被追踪程序发出退出信号。
|
||||
|
||||
我给这些概念写了一个小例子,你可以在[这里][12]找到。如果有人有兴趣,我可以将来写得更详细一点。
|
||||
|
||||
* * *
|
||||
|
||||
### 表达式计算
|
||||
|
||||
表达式计算是程序的一项功能,允许用户在调试程序时对原始源语言中的表达式进行计算。例如,在 LLDB 或 GDB 中,可以执行 `print foo()` 来调用 `foo` 函数并打印结果。
|
||||
|
||||
根据表达式的复杂程度,有几种不同的计算方法。如果表达式只是一个简单的标识符,那么调试器可以查看调试信息,找到该变量并打印出它的值,就像我们在本系列上一篇中所做的那样。如果表达式更复杂一些,则可以把代码编译成中间表示(IR)并解释执行来获得结果。例如,对于某些表达式,LLDB 会使用 Clang 把表达式编译为 LLVM IR 并解释执行。如果表达式更加复杂,或者需要调用某些函数,那么代码可能需要被 JIT 编译到目标进程,并在被调试者的地址空间中执行。这涉及调用 `mmap` 分配一些可执行内存,把编译好的代码复制到这块内存中再执行。LLDB 通过 LLVM 的 JIT 功能来实现这一点。
|
||||
|
||||
如果你想更多地了解 JIT 编译,我强烈推荐[ Eli Bendersky 关于这个主题的文章][13]。
|
||||
|
||||
* * *
|
||||
|
||||
### 多线程调试支持
|
||||
|
||||
本系列展示的调试器仅支持单线程应用程序,但是为了调试大多数真实程序,多线程支持是非常需要的。支持这一点的最简单的方法是跟踪线程创建并解析 procfs 以获取所需的信息。
|
||||
|
||||
Linux 线程库称为 `pthreads`。当调用 `pthread_create` 时,库会使用 `clone` 系统调用来创建一个新的线程,我们可以用 `ptrace` 跟踪这个系统调用(前提是你的内核不早于 2.5.46)。为此,你需要在连接到被调试进程之后设置一些 `ptrace` 选项:
|
||||
|
||||
```
|
||||
ptrace(PTRACE_SETOPTIONS, m_pid, nullptr, PTRACE_O_TRACECLONE);
|
||||
```
|
||||
|
||||
现在当 `clone` 被调用时,被调试进程会收到我们的老朋友 `SIGTRAP` 信号。对于本系列中的调试器,你可以在 `handle_sigtrap` 中添加一个 case 分支来处理新线程的创建:
|
||||
|
||||
```
|
||||
case (SIGTRAP | (PTRACE_EVENT_CLONE << 8)):
|
||||
//get the new thread ID
|
||||
unsigned long event_message = 0;
|
||||
ptrace(PTRACE_GETEVENTMSG, pid, nullptr, &event_message);
|
||||
|
||||
//handle creation
|
||||
//...
|
||||
```
|
||||
|
||||
一旦收到新线程的 ID,你就可以查看 `/proc/<pid>/task/`、读取内存映射之类的信息,来获得所需的一切。
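下面是一个用 C++17 编写的小示意(不属于本系列调试器的代码),演示如何通过 `/proc/<pid>/task/` 枚举一个进程的所有线程 ID:

```
// 枚举 /proc/<pid>/task/ 下的目录名,每个目录名就是一个线程 ID
#include <sys/types.h>
#include <unistd.h>
#include <filesystem>
#include <iostream>
#include <string>

void list_threads(pid_t pid) {
    const std::filesystem::path task_dir{"/proc/" + std::to_string(pid) + "/task"};
    for (const auto& entry : std::filesystem::directory_iterator(task_dir)) {
        std::cout << "thread id: " << entry.path().filename().string() << '\n';
    }
}

int main() {
    list_threads(getpid()); // 演示:列出当前进程自己的线程
}
```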
|
||||
|
||||
GDB 使用 `libthread_db`,它提供了一堆帮助函数,这样你就不需要自己解析和处理。设置这个库很奇怪,我不会在这展示它如何工作,但如果你想使用它,你可以去阅读[这个教程][14]。
|
||||
|
||||
多线程支持中最复杂的部分是调试器中线程状态的建模,特别是如果你希望支持[不间断模式][15]或当你计算中涉及不止一个 CPU 的某种异构调试。
|
||||
|
||||
* * *
|
||||
|
||||
### 最后!
|
||||
|
||||
呼!这个系列花了很长时间才写完,但我在这个过程中学到了很多东西,希望它对你有帮助。如果你想讨论调试或本系列中的任何问题,可以在 Twitter [@TartanLlama][16] 或评论区联系我。如果你还有想看的其他调试主题,告诉我,我或许会再写一些文章。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.tartanllama.xyz/writing-a-linux-debugger-advanced-topics/
|
||||
|
||||
作者:[Simon Brand ][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/writing-a-linux-debugger-setup/
|
||||
[2]:https://blog.tartanllama.xyz/writing-a-linux-debugger-breakpoints/
|
||||
[3]:https://blog.tartanllama.xyz/writing-a-linux-debugger-registers/
|
||||
[4]:https://blog.tartanllama.xyz/writing-a-linux-debugger-elf-dwarf/
|
||||
[5]:https://blog.tartanllama.xyz/writing-a-linux-debugger-source-signal/
|
||||
[6]:https://blog.tartanllama.xyz/writing-a-linux-debugger-dwarf-step/
|
||||
[7]:https://blog.tartanllama.xyz/writing-a-linux-debugger-source-break/
|
||||
[8]:https://blog.tartanllama.xyz/writing-a-linux-debugger-unwinding/
|
||||
[9]:https://blog.tartanllama.xyz/writing-a-linux-debugger-variables/
|
||||
[10]:https://blog.tartanllama.xyz/writing-a-linux-debugger-advanced-topics/
|
||||
[11]:https://sourceware.org/gdb/onlinedocs/gdb/Remote-Protocol.html
|
||||
[12]:https://github.com/TartanLlama/dltrace
|
||||
[13]:http://eli.thegreenplace.net/tag/code-generation
|
||||
[14]:http://timetobleed.com/notes-about-an-odd-esoteric-yet-incredibly-useful-library-libthread_db/
|
||||
[15]:https://sourceware.org/gdb/onlinedocs/gdb/Non_002dStop-Mode.html
|
||||
[16]:https://twitter.com/TartanLlama
|
@ -1,219 +0,0 @@
|
||||
编译器简介
|
||||
============================================================
|
||||
|
||||
### 如何对计算机说话 - Pre-Siri
|
||||
|
||||
简单说来,一个编译器不过是一个可以翻译其他程序的程序。传统的编译器可以把源代码翻译成你的机器能够理解的可执行机器代码。(一些编译器将源代码翻译成别的程序语言,这样的编译器称为源到源翻译器或转化器。)[LLVM][7] 是一个广泛使用的编译器项目,包含许多模块化的编译工具。
|
||||
|
||||
传统的编译器设计包含三个部分:
|
||||
|
||||

|
||||
|
||||
* 前端将源代码翻译为中间表示(IR)。[`clang`][1] 是 LLVM 中用于 C 家族语言的前端工具。
|
||||
* 优化程序分析指令然后将其转化为更高效的形式。[`opt`][2]是 LLVM 的优化工具。
|
||||
* 后端工具把指令映射到目标硬件的指令集,从而生成机器代码。[`llc`][3] 是 LLVM 的后端工具。
|
||||
* LLVM IR 是一种和汇编类似的低级语言。然而,它抽象出了特定硬件信息。
|
||||
|
||||
### Hello, Compiler
|
||||
|
||||
下面是一个打印 "Hello, Compiler!" 到标准输出的简单 C 程序。C 语法是人类可读的,但是计算机却不能理解,不知道该程序要干什么。我将通过三个编译阶段使该程序变成机器可执行的程序。
|
||||
|
||||
```
|
||||
// compile_me.c
|
||||
// Wave to the compiler. The world can wait.
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
int main() {
|
||||
printf("Hello, Compiler!\n");
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
### 前端
|
||||
|
||||
正如我在上面所提到的,`clang` 是 LLVM 中用于 C 家族语言的前端工具。Clang 包含一个 C 预处理器、词法分析器、语法解析器、语义分析器和 IR生成器。
|
||||
|
||||
* C 预处理器在将源程序翻译成 IR 前修改源程序。预处理器处理外部包含文件,比如上面的 `#include <stdio.h>`。 它将会把这一行替换为 `stdio.h` C 标准库文件的完整内容,其中包含 `printf` 函数的声明。
|
||||
|
||||
*通过运行下面的命令来查看预处理步骤的输出:*
|
||||
|
||||
```
|
||||
clang -E compile_me.c -o preprocessed.i
|
||||
|
||||
```
|
||||
|
||||
* 词法分析器(也叫扫描器或分词器)把字符流转化为记号流。每一个单词或记号被归入五种语法类别之一:标点符号、关键字、标识符、字面量或注释。
|
||||
|
||||
*compile_me.c 的分词过程*
|
||||

|
||||
|
||||
* 语法分析器确定源程序中的记号流是否组成了合法的句子。在对记号流进行语法分析之后,它会输出一棵抽象语法树(AST)。Clang 的 AST 中的节点表示声明、语句和类型。
|
||||
|
||||
_compile_me.c 的语法树_
|
||||
|
||||

|
||||
|
||||
* 语义分析器遍历抽象语法树,从而确定代码语句是否有正确意义。这个阶段会检查类型错误。如果 compile_me.c 的 main 函数返回 `"zero"`而不是 `0`, 那么语义分析器将会抛出一个错误,因为 `"zero"` 不是 `int` 类型。
|
||||
|
||||
* IR 生成器将抽象语法树翻译为 IR 。
|
||||
|
||||
*对 compile_me.c 运行 clang 来生成 LLVM IR:*
|
||||
|
||||
```
|
||||
clang -S -emit-llvm -o llvm_ir.ll compile_me.c
|
||||
|
||||
```
|
||||
|
||||
在 llvm_ir.ll 中的 main 函数
|
||||
|
||||
```
|
||||
; llvm_ir.ll
|
||||
@.str = private unnamed_addr constant [18 x i8] c"Hello, Compiler!\0A\00", align 1
|
||||
|
||||
define i32 @main() {
|
||||
%1 = alloca i32, align 4 ; <- memory allocated on the stack
|
||||
store i32 0, i32* %1, align 4
|
||||
%2 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([18 x i8], [18 x i8]* @.str, i32 0, i32 0))
|
||||
ret i32 0
|
||||
}
|
||||
|
||||
declare i32 @printf(i8*, ...)
|
||||
```
|
||||
|
||||
### 优化程序
|
||||
|
||||
优化程序的工作是基于对程序运行时行为的理解来提高代码效率。优化程序以 IR 作为输入,生成改进后的 IR 作为输出。LLVM 的优化工具 opt 可以通过标记 `-O2`(大写 O,数字 2)来优化执行速度,通过标记 `-Os`(大写 O,小写 s)来减少指令数目。
|
||||
|
||||
看一看上面的前端工具生成的 LLVM IR 代码和运行下面的命令生成的结果之间的区别:
|
||||
|
||||
```
|
||||
opt -O2 -S llvm_ir.ll -o optimized.ll
|
||||
|
||||
```
|
||||
|
||||
_在 optimized.ll 中的 main 函数_
|
||||
|
||||
```
|
||||
optimized.ll
|
||||
|
||||
@str = private unnamed_addr constant [17 x i8] c"Hello, Compiler!\00"
|
||||
|
||||
define i32 @main() {
|
||||
%puts = tail call i32 @puts(i8* getelementptr inbounds ([17 x i8], [17 x i8]* @str, i64 0, i64 0))
|
||||
ret i32 0
|
||||
}
|
||||
|
||||
declare i32 @puts(i8* nocapture readonly)
|
||||
```
|
||||
|
||||
优化后的版本中, main 函数没有在栈中分配内存,因为它不使用任何内存。优化后的代码中调用 `puts` 函数而不是 `printf` 函数,因为程序中并没有使用 `printf` 函数的格式化功能。
|
||||
|
||||
当然,优化程序不仅仅知道何时可以把 `printf` 函数用 `puts` 函数代替。优化程序也能展开循环和内联简单计算的结果。考虑下面的程序,它将两个整数相加并打印出结果。
|
||||
|
||||
```
|
||||
// add.c
|
||||
#include <stdio.h>
|
||||
|
||||
int main() {
|
||||
int a = 5, b = 10, c = a + b;
|
||||
printf("%i + %i = %i\n", a, b, c);
|
||||
}
|
||||
```
|
||||
|
||||
_下面是未优化的 LLVM IR:_
|
||||
|
||||
```
|
||||
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1
|
||||
|
||||
define i32 @main() {
|
||||
%1 = alloca i32, align 4 ; <- allocate stack space for var a
|
||||
%2 = alloca i32, align 4 ; <- allocate stack space for var b
|
||||
%3 = alloca i32, align 4 ; <- allocate stack space for var c
|
||||
store i32 5, i32* %1, align 4 ; <- store 5 at memory location %1
|
||||
store i32 10, i32* %2, align 4 ; <- store 10 at memory location %2
|
||||
%4 = load i32, i32* %1, align 4 ; <- load the value at memory address %1 into register %4
|
||||
%5 = load i32, i32* %2, align 4 ; <- load the value at memory address %2 into register %5
|
||||
%6 = add nsw i32 %4, %5 ; <- add the values in registers %4 and %5\. put the result in register %6
|
||||
store i32 %6, i32* %3, align 4 ; <- put the value of register %6 into memory address %3
|
||||
%7 = load i32, i32* %1, align 4 ; <- load the value at memory address %1 into register %7
|
||||
%8 = load i32, i32* %2, align 4 ; <- load the value at memory address %2 into register %8
|
||||
%9 = load i32, i32* %3, align 4 ; <- load the value at memory address %3 into register %9
|
||||
%10 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i32 0, i32 0), i32 %7, i32 %8, i32 %9)
|
||||
ret i32 0
|
||||
}
|
||||
|
||||
declare i32 @printf(i8*, ...)
|
||||
```
|
||||
|
||||
_下面是优化后的 LLVM IR_
|
||||
|
||||
```
|
||||
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1
|
||||
|
||||
define i32 @main() {
|
||||
%1 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i64 0, i64 0), i32 5, i32 10, i32 15)
|
||||
ret i32 0
|
||||
}
|
||||
|
||||
declare i32 @printf(i8* nocapture readonly, ...)
|
||||
```
|
||||
|
||||
优化后的 main 函数本质上是未优化版本的第 17 行和 18 行,伴有变量值内联。`opt` 计算加法,因为所有的变量都是常数。很酷吧,对不对?
|
||||
|
||||
### 后端
|
||||
|
||||
LLVM 的后端工具是 `llc`。它以 LLVM IR 作为输入,分三个阶段生成机器代码。
|
||||
|
||||
* 指令选择是将 IR 指令映射到目标机器的指令集。这个步骤使用虚拟寄存器的无限名字空间。
|
||||
* 寄存器分配是将虚拟寄存器映射到目标体系结构的实际寄存器。我的 CPU 是 x86 结构,它只有 16 个寄存器。然而,编译器将会尽可能少的使用寄存器。
|
||||
* 指令调度是对操作进行重排,以反映目标机器的性能约束。
|
||||
|
||||
_运行下面这个命令将会产生一些机器代码:_
|
||||
|
||||
```
|
||||
llc -o compiled-assembly.s optimized.ll
|
||||
|
||||
```
|
||||
|
||||
```
|
||||
_main:
|
||||
pushq %rbp
|
||||
movq %rsp, %rbp
|
||||
leaq L_str(%rip), %rdi
|
||||
callq _puts
|
||||
xorl %eax, %eax
|
||||
popq %rbp
|
||||
retq
|
||||
L_str:
|
||||
.asciz "Hello, Compiler!"
|
||||
```
|
||||
|
||||
这个程序是 x86 汇编语言,它是计算机所说的语言,并具有人类可读语法。某些人最后也许能理解我。
|
||||
|
||||
* * *
|
||||
|
||||
相关资源:
|
||||
|
||||
1. [设计一个编译器][4]
|
||||
|
||||
2. [开始探索 LLVM 核心库][5]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://nicoleorchard.com/blog/compilers
|
||||
|
||||
作者:[Nicole Orchard][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://nicoleorchard.com/
|
||||
[1]:http://clang.llvm.org/
|
||||
[2]:http://llvm.org/docs/CommandGuide/opt.html
|
||||
[3]:http://llvm.org/docs/CommandGuide/llc.html
|
||||
[4]:https://www.amazon.com/Engineering-Compiler-Second-Keith-Cooper/dp/012088478X
|
||||
[5]:https://www.amazon.com/Getting-Started-LLVM-Core-Libraries/dp/1782166920
|
||||
[6]:https://twitter.com/norchard/status/864246049266958336
|
||||
[7]:http://llvm.org/
|
@ -1,343 +0,0 @@
|
||||
# 常用的 GDB 命令中文释义
|
||||
|
||||
## 目录
|
||||
|
||||
- [break](#break) -- 缩写 `b`,在指定的行或函数处设置断点
|
||||
- [info breakpoints](#info-breakpoints) -- 简写 `i b`,打印未删除的所有断点,观察点和捕获点的列表
|
||||
- [disable](#disable) -- 禁用断点,可以缩写为 `dis`
|
||||
- [enable](#enable) -- 启用断点
|
||||
- [clear](#clear) -- 清除指定行或函数处的断点
|
||||
- [delete](#delete) -- 缩写 `d`,删除断点
|
||||
- [tbreak](#tbreak) -- 设置临时断点,参数同 `break`,但在程序第一次停住后会被自动删除
|
||||
- [watch](#watch) -- 为表达式(或变量)设置观察点,当表达式(或变量)的值有变化时,停住程序
|
||||
- [step](#step) -- 缩写 `s`,单步跟踪,如果有函数调用,会进入该函数
|
||||
- [reverse-step](#reverse-step) -- 反向单步跟踪,如果有函数调用,会进入该函数
|
||||
- [next](#next) -- 缩写 `n`,单步跟踪,如果有函数调用,不会进入该函数
|
||||
- [reverse-next](#reverse-next) -- 反向单步跟踪,如果有函数调用,不会进入该函数
|
||||
- [return](#return) -- 使选定的栈帧返回到其调用者
|
||||
- [finish](#finish) -- 缩写 `fin`,执行直到选择的栈帧返回
|
||||
- [until](#until) -- 缩写 `u`,执行直到...(用于跳过循环、递归函数调用)
|
||||
- [continue](#continue) -- 同义词 `c`,恢复程序执行
|
||||
- [print](#print) -- 缩写 `p`,打印表达式 EXP 的值
|
||||
- [x](#x) -- 查看内存
|
||||
- [display](#display) -- 每次程序停止时打印表达式 EXP 的值(自动显示)
|
||||
- [info display](#info-display) -- 打印早先设置为自动显示的表达式列表
|
||||
- [disable display](#disable-display) -- 禁用自动显示
|
||||
- [enable display](#enable-display) -- 启用自动显示
|
||||
- [undisplay](#undisplay) -- 删除自动显示项
|
||||
- [help](#help) -- 缩写 `h`,打印命令列表(带参数时查找命令的帮助)
|
||||
- [attach](#attach) -- 挂接到已在运行的进程来调试
|
||||
- [run](#run) -- 缩写 `r`,启动被调试的程序
|
||||
- [backtrace](#backtrace) -- 缩写 `bt`,查看程序调用栈的信息
|
||||
- [ptype](#ptype) -- 打印类型 TYPE 的定义
|
||||
|
||||
------
|
||||
|
||||
## break
|
||||
|
||||
使用 `break` 命令(缩写 `b`)来设置断点。 参见[官方文档][1]。
|
||||
|
||||
- `break` 当不带参数时,在所选栈帧中执行的下一条指令处设置断点。
|
||||
- `break <function-name>` 在函数体入口处打断点,在 C++ 中可以使用 `class::function` 或 `function(type, ...)` 格式来指定函数名。
|
||||
- `break <line-number>` 在当前源码文件指定行的开始处打断点。
|
||||
- `break -N` `break +N` 在当前源码行前面或后面的 `N` 行开始处打断点,`N` 为正整数。
|
||||
- `break <filename:linenum>` 在源码文件 `filename` 的 `linenum` 行处打断点。
|
||||
- `break <filename:function>` 在源码文件 `filename` 的 `function` 函数入口处打断点。
|
||||
- `break <address>` 在程序指令的地址处打断点。
|
||||
- `break ... if <cond>` 设置条件断点,`...` 代表上述参数之一(或无参数),`cond` 为条件表达式,仅在 `cond` 值非零时停住程序(综合示例见下)。
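例如(在 gdb 提示符下输入;文件名、函数名和变量名均为示意):

```
(gdb) break main
(gdb) break myfile.c:42
(gdb) break my_func if count > 10
```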
|
||||
|
||||
## info breakpoints
|
||||
|
||||
查看断点,观察点和捕获点的列表。用法:
|
||||
`info breakpoints [list…]`
|
||||
`info break [list…]`
|
||||
`list…` 用来指定若干个断点的编号(可省略),可以是 `2`, `1-3`, `2 5` 等。
|
||||
|
||||
## disable
|
||||
|
||||
禁用一些断点。 参见[官方文档][2]。
|
||||
参数是用空格分隔的断点编号。
|
||||
要禁用所有断点,不加参数。
|
||||
禁用的断点不会被忘记,但直到重新启用才有效。
|
||||
用法: `disable [breakpoints] [list…]`
|
||||
`breakpoints` 是 `disable` 的子命令(可省略),`list…` 同 `info breakpoints` 中的描述。
|
||||
|
||||
## enable
|
||||
|
||||
启用一些断点。 参见[官方文档][2]。
|
||||
给出断点编号(以空格分隔)作为参数。
|
||||
没有参数时,所有断点被启用。
|
||||
|
||||
- `enable [breakpoints] [list…]` 启用指定的断点(或所有定义的断点)。
|
||||
- `enable [breakpoints] once list…` 临时启用指定的断点。GDB 在停止您的程序后立即禁用这些断点。
|
||||
- `enable [breakpoints] delete list…` 使指定的断点启用一次,然后删除。一旦您的程序停止,GDB 就会删除这些断点。等效于用 `tbreak` 设置的断点。
|
||||
|
||||
`breakpoints` 同 `disable` 中的描述。
|
||||
|
||||
## clear
|
||||
|
||||
在指定行或函数处清除断点。 参见[官方文档][3]。
|
||||
参数可以是行号,函数名称或 `*` 跟一个地址。
|
||||
|
||||
- `clear` 当不带参数时,清除所选栈帧在执行的源码行中的所有断点。
|
||||
- `clear <function>`, `clear <filename:function>` 删除在命名函数的入口处设置的任何断点。
|
||||
- `clear <linenum>`, `clear <filename:linenum>` 删除在指定的文件指定的行号的代码中设置的任何断点。
|
||||
- `clear <address>` 清除指定程序指令的地址处的断点。
|
||||
|
||||
## delete
|
||||
|
||||
删除一些断点或自动显示表达式。 参见[官方文档][3]。
|
||||
参数是用空格分隔的断点编号。
|
||||
要删除所有断点,不加参数。
|
||||
用法: `delete [breakpoints] [list…]`
|
||||
|
||||
## tbreak
|
||||
|
||||
设置临时断点。参数形式同 `break` 一样。 参见[官方文档][1]。
|
||||
除了断点是临时的之外像 `break` 一样,所以在命中时会被删除。
|
||||
|
||||
## watch
|
||||
|
||||
为表达式设置观察点。 参见[官方文档][4]。
|
||||
用法: `watch [-l|-location] <expr>`
|
||||
每当一个表达式的值改变时,观察点就会停止执行您的程序。
|
||||
如果给出了 `-l` 或者 `-location`,则它会对 `expr` 求值并观察它所指向的内存。
|
||||
例如,`watch *(int *)0x12345678` 将在指定的地址处观察一个 4 字节的区域(假设 int 占用 4 个字节)。
|
||||
|
||||
## step
|
||||
|
||||
单步执行程序,直到到达不同的源码行。 参见[官方文档][5]。
|
||||
用法: `step [N]`
|
||||
参数 `N` 表示执行 N 次(或由于另一个原因直到程序停止)。
|
||||
警告:如果在控制流位于未带调试信息编译的函数中时使用 `step` 命令,则执行将继续进行,
|
||||
直到控制到达具有调试信息的函数。 同样,它不会进入没有调试信息编译的函数。
|
||||
要执行没有调试信息的函数,请使用 `stepi` 命令,后文再述。
|
||||
|
||||
## reverse-step

Step the program backward until it reaches the beginning of a different source line. See the [official documentation][6].

Usage: `reverse-step [N]`

The argument `N` means do this N times (or until the program stops for another reason).

## next

Step the program, proceeding through subroutine calls. See the [official documentation][5].

Usage: `next [N]`

Unlike `step`, if the current source line calls a subroutine, this command does not enter the subroutine; it continues execution and treats the call as a single source line.

## reverse-next

Step the program backward, proceeding through subroutine calls. See the [official documentation][6].

Usage: `reverse-next [N]`

If the source line to be executed calls a subroutine, this command does not enter the subroutine; the call is treated as one instruction.

The argument `N` means do this N times (or until the program stops for another reason).

## return

You can cancel the execution of a function call with the `return` command. See the [official documentation][7].

If you give an expression argument, its value is used as the function's return value.

`return <expression>` uses the value of `expression` as the function's return value and makes the function return immediately.

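An illustrative session (the function name, frame, and line are hypothetical); GDB asks for confirmation before popping the frame and then shows the caller:

```
(gdb) return 0
Make compute return now? (y or n) y
#0  0x00000000004006f2 in main () at main.c:42
42	    int result = compute(x);
```
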
## finish

Execute until the selected stack frame returns. See the [official documentation][5].

Usage: `finish`

Upon return, the returned value is printed and saved in the value history.

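A brief sketch (function, addresses, and values are hypothetical):

```
(gdb) finish
Run till exit from #0  compute (x=42) at main.c:31
0x00000000004006f2 in main () at main.c:42
42	    int result = compute(x);
Value returned is $1 = 84
```
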
## until

Execute until the program reaches a source line greater than the current one, or a specified location within the current stack frame (the arguments are the same as for the [break](#break) command). See the [official documentation][5].

This command is used to run through a loop that iterates many times without single-stepping every pass.

`until <location>` or `u <location>` continues running the program until the specified location is reached, or until the current stack frame returns.

## continue

Resume execution of the program being debugged after a signal or breakpoint. See the [official documentation][5].

Usage: `continue [N]`

If the program stopped at a breakpoint, the number `N` can be given as an argument, which sets that breakpoint's ignore count to `N - 1` (so the breakpoint does not break until the Nth time it is reached).

If non-stop mode is on (use `show non-stop` to check), only the current thread is resumed; otherwise all threads in the program are resumed.

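For example, to skip the next few hits of the breakpoint the program is currently stopped at (the exact wording of the message may vary between GDB versions):

```
(gdb) continue 5
Will ignore next 4 crossings of breakpoint 1.  Continuing.
```
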
## print

Evaluate and print the value of expression EXP. See the [official documentation][8].

The variables accessible are those of the lexical environment of the selected stack frame, plus all variables whose scope is global or file-wide.

Usage: `print [expr]` or `print /f [expr]`

`expr` is an expression (in the source language).

By default, the value of `expr` is printed in a format appropriate to its data type; you can choose a different format by specifying `/f`, where `f` is a letter specifying the format; see [Output Formats][9].

If `expr` is omitted, GDB displays the last value again.

To print structure variables with one member per line and indentation, use the command `set print pretty on`; to turn this off, use `set print pretty off`.

Use the command `show print` to see all print-related settings.

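A small sketch (the variable `count` and its value are hypothetical):

```
(gdb) print count
$1 = 10
(gdb) print /x count
$2 = 0xa
(gdb) print count * 2 + 1
$3 = 21
```
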
## x

Examine memory. See the [official documentation][10].

Usage: `x/nfu <addr>` or `x <addr>`

`n`, `f`, and `u` are all optional parameters that specify how much memory to display and how to format it.

`addr` is an expression giving the address where the display should start.

`n`, the repeat count (default 1), specifies how many units of memory (as given by `u`) to display.

`f`, the display format (initial default `x`), is one of the formats used by `print` (`x`, `d`, `u`, `o`, `t`, `a`, `c`, `f`, `s`), plus `i` (machine instructions).

`u`, the unit size: `b` for single bytes, `h` for halfwords (2 bytes), `w` for words (4 bytes), and `g` for giant words (8 bytes).

For example:

`x/3uh 0x54320` displays 3 memory values starting at address 0x54320, as unsigned decimal integers in units of 2 bytes.

`x/16xb 0x7f95b7d18870` displays 16 memory values starting at address 0x7f95b7d18870, as hexadecimal integers in units of single bytes.

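An illustrative session (the address and the string stored there are hypothetical):

```
(gdb) x/s 0x400700
0x400700:	"hello"
(gdb) x/6xb 0x400700
0x400700:	0x68	0x65	0x6c	0x6c	0x6f	0x00
```
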
## display

Print the value of expression EXP each time the program stops. See the [official documentation][11].

Usage: `display <expr>`, `display/fmt <expr>`, or `display/fmt <addr>`

`fmt` specifies the display format, just like `/f` in the [print](#print) command.

For the formats `i` or `s`, or when a unit size or a number of units is included, the expression `addr` is added as a memory address to be examined each time the program stops.

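For instance (the loop variable `i` and the source line are hypothetical):

```
(gdb) display i
1: i = 0
(gdb) next
23	        sum += i;
1: i = 1
```
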
## info display

Print the list of expressions set up for auto-display, each with its item number, but without showing their values.

This includes disabled expressions as well as expressions that cannot be displayed right now (automatic variables that are not currently available).

## undisplay

Cancel the automatic display of certain expressions when the program stops.

The arguments are the item numbers of the expressions (use `info display` to look up the numbers).

With no argument, all auto-display expressions are cancelled.

`delete display` has the same effect as this command.

## disable display

Disable the automatic display of certain expressions when the program stops.

A disabled display item is not printed automatically, but it is not forgotten; it may be enabled again later.

The arguments are the item numbers of the expressions (use `info display` to look up the numbers).

With no argument, all auto-display expressions are disabled.

## enable display

Enable the automatic display of certain expressions when the program stops.

The arguments are the item numbers of the expressions to redisplay (use `info display` to look up the numbers).

With no argument, all auto-display expressions are enabled.

## help

Print a list of commands. See the [official documentation][12].

You can use `help` (abbreviated `h`) with no argument to display a short list of command class names.

With `help <class>` you get a list of the individual commands in that class.

With `help <command>` you get a brief description of how to use that command.

## attach

Attach to a process or file outside of GDB. See the [official documentation][13].

This command can take a process ID or a device file as its argument.

For a process ID, you must have permission to send the process a signal, and it must have the same effective uid as the debugger.

Usage: `attach <process-id>`

The first thing GDB does after arranging to debug the specified process is to stop it.

You can examine and modify an attached process with all the GDB commands that are available when you start a process with `run`.

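A typical sketch (the process ID and program name are hypothetical; the exact symbol-loading messages depend on your system):

```
(gdb) attach 12345
Attaching to process 12345
Reading symbols from /usr/local/bin/myserver...
(gdb) bt
```
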
## run

Start the program being debugged. See the [official documentation][14].

Arguments can be given directly, or set beforehand (before starting) with [set args][15].

For example, `run arg1 arg2 ...` is equivalent to

```
set args arg1 arg2 ...
run
```

Input and output redirection with `>`, `<`, or `>>` is also allowed.

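For instance, to read standard input from a file and capture standard output (the file names are hypothetical):

```
(gdb) run < input.txt > output.log
```
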
## backtrace

Print a backtrace of the entire stack. See the [official documentation][16].

- `bt` Prints a backtrace of the entire stack, one line per frame.
- `bt n` Similar, but prints only the innermost n frames.
- `bt -n` Similar, but prints only the outermost n frames.
- `bt full n` Like `bt n`, but also prints the values of local variables.

`where` and `info stack` (abbreviated `info s`) are aliases for `backtrace`. Call stack information looks like the following:

```
(gdb) where
#0 vconn_stream_run (vconn=0x99e5e38) at lib/vconn-stream.c:232
#1 0x080ed68a in vconn_run (vconn=0x99e5e38) at lib/vconn.c:276
#2 0x080dc6c8 in rconn_run (rc=0x99dbbe0) at lib/rconn.c:513
#3 0x08077b83 in ofconn_run (ofconn=0x99e8070, handle_openflow=0x805e274 <handle_openflow>) at ofproto/connmgr.c:1234
#4 0x08075f92 in connmgr_run (mgr=0x99dc878, handle_openflow=0x805e274 <handle_openflow>) at ofproto/connmgr.c:286
#5 0x08057d58 in ofproto_run (p=0x99d9ba0) at ofproto/ofproto.c:1159
#6 0x0804f96b in bridge_run () at vswitchd/bridge.c:2248
#7 0x08054168 in main (argc=4, argv=0xbf8333e4) at vswitchd/ovs-vswitchd.c:125
```

## ptype

Print the definition of type TYPE. See the [official documentation][17].

Usage: `ptype[/FLAGS] TYPE-NAME | EXPRESSION`

The argument can be a type name defined with `typedef`, or `struct STRUCT-TAG`, `class CLASS-NAME`, `union UNION-TAG`, or `enum ENUM-TAG`.

The lexical context of the selected stack frame is used to look up the name.

A similar command is `whatis`; the difference is that `whatis` does not expand data types defined with `typedef`, whereas `ptype` does. For example:

```
/* type declarations and variable definitions */
typedef double real_t;
struct complex {
    real_t real;
    double imag;
};
typedef struct complex complex_t;

complex_t var;
real_t *real_pointer_var;
```

The two commands give the following output:

```
(gdb) whatis var
type = complex_t
(gdb) ptype var
type = struct complex {
    real_t real;
    double imag;
}
(gdb) whatis complex_t
type = struct complex
(gdb) whatis struct complex
type = struct complex
(gdb) ptype struct complex
type = struct complex {
    real_t real;
    double imag;
}
(gdb) whatis real_pointer_var
type = real_t *
(gdb) ptype real_pointer_var
type = double *
```

------

## References

- [Debugging with GDB](https://sourceware.org/gdb/current/onlinedocs/gdb/)

------

Compiled by: [robot527](https://github.com/robot527)

[1]: https://sourceware.org/gdb/current/onlinedocs/gdb/Set-Breaks.html
[2]: https://sourceware.org/gdb/current/onlinedocs/gdb/Disabling.html
[3]: https://sourceware.org/gdb/current/onlinedocs/gdb/Delete-Breaks.html
[4]: https://sourceware.org/gdb/current/onlinedocs/gdb/Set-Watchpoints.html
[5]: https://sourceware.org/gdb/current/onlinedocs/gdb/Continuing-and-Stepping.html
[6]: https://sourceware.org/gdb/current/onlinedocs/gdb/Reverse-Execution.html
[7]: https://sourceware.org/gdb/current/onlinedocs/gdb/Returning.html
[8]: https://sourceware.org/gdb/current/onlinedocs/gdb/Data.html
[9]: https://sourceware.org/gdb/current/onlinedocs/gdb/Output-Formats.html
[10]: https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html
[11]: https://sourceware.org/gdb/current/onlinedocs/gdb/Auto-Display.html
[12]: https://sourceware.org/gdb/current/onlinedocs/gdb/Help.html
[13]: https://sourceware.org/gdb/current/onlinedocs/gdb/Attach.html
[14]: https://sourceware.org/gdb/current/onlinedocs/gdb/Starting.html
[15]: https://sourceware.org/gdb/current/onlinedocs/gdb/Arguments.html
[16]: https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html
[17]: https://sourceware.org/gdb/current/onlinedocs/gdb/Symbols.html