Merge pull request #1 from LCTT/master

update
This commit is contained in:
rockychan 2017-11-24 21:32:07 +08:00 committed by GitHub
commit 466410f241
27 changed files with 1687 additions and 1159 deletions

View File

@ -60,6 +60,7 @@ LCTT 的组成
* 2017/03/13 制作了 LCTT 主页、成员列表和成员主页LCTT 主页将移动至 https://linux.cn/lctt 。
* 2017/03/16 提升 GHLandy、bestony、rusking 为新的 Core 成员。创建 Comic 小组。
* 2017/04/11 启用头衔制,为各位重要成员颁发头衔。
* 2017/11/21 鉴于 qhwdw 快速而上佳的翻译质量,提升 qhwdw 为新的 Core 成员。
核心成员
-------------------------------
@ -86,6 +87,7 @@ LCTT 的组成
- 核心成员 @Locez,
- 核心成员 @ucasFL,
- 核心成员 @rusking,
- 核心成员 @qhwdw,
- 前任选题 @DeadFire,
- 前任校对 @reinoir222,
- 前任校对 @PurlingNayuki,

View File

@ -0,0 +1,347 @@
理解多区域配置中的 firewalld
============================================================
现在的新闻里充斥着服务器被攻击和数据失窃事件。对于一个阅读过安全公告博客的人来说,通过访问错误配置的服务器,利用最新暴露的安全漏洞或通过窃取的密码来获得系统控制权,并不是件多困难的事情。在一个典型的 Linux 服务器上的任何互联网服务都可能存在漏洞,允许未经授权的系统访问。
因为在应用程序层面上强化系统以防范所有可能的威胁是不可能做到的事情,而防火墙可以通过限制对系统的访问提供安全保证。防火墙基于源 IP、目标端口和协议来过滤入站包这样一来与系统交互的就只有少数几个 IP/端口/协议的组合,其余流量都会被过滤掉。
Linux 的防火墙功能是由 netfilter 处理的,它是一个内核级别的框架。这十几年来iptables 一直被作为 netfilter 的用户态抽象层LCTT 译注userland一个基本的 UNIX 系统是由 kernel 和 userland 两部分构成,除 kernel 以外的部分称为 userland。iptables 将包通过一系列的规则进行检查,如果包与特定的 IP/端口/协议的组合匹配,规则就会被应用到这个包上,以决定包是被通过、拒绝还是丢弃。
Firewalld 是最新的 netfilter 用户态抽象层。遗憾的是,由于缺乏描述多区域配置的文档,它强大而灵活的功能被低估了。这篇文章提供了一个示例去改变这种情况。
### Firewalld 的设计目标
firewalld 的设计者认识到,大多数的 iptables 使用案例仅涉及几个单播源 IP仅让白名单中的每个服务通过而拒绝其它的一切。这种模式的好处是firewalld 可以通过定义的源 IP 和/或网络接口,将入站流量分类到不同的<ruby>区域<rt>zone</rt></ruby>中。每个区域按自己的配置,基于指定的准则去通过或拒绝包。
另一个改进是在 iptables 基础上的语法简化。firewalld 通过使用服务名而不是端口和协议来指定服务,使其更易于使用:例如,使用 samba而不是 UDP 端口 137、138 加上 TCP 端口 139、445。它还消除了 iptables 中对语句顺序的依赖,进一步简化了语法。
最后firewalld 允许交互式地修改 netfilter防火墙的改变可以独立于存储在 XML 中的永久配置而进行。因此,下面的临时修改将在下次重新加载时被覆盖:
```
# firewall-cmd <some modification>
```
而以下的改变在重新加载后会永久保存:
```
# firewall-cmd --permanent <some modification>
# firewall-cmd --reload
```
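举一个假设的例子(这里的 `http` 服务仅用于示意),临时修改与永久修改的对比如下:
```
# firewall-cmd --add-service=http              # 临时放行,重新加载后失效
# firewall-cmd --permanent --add-service=http  # 写入永久配置
# firewall-cmd --reload                        # 重新加载,使永久配置生效
```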
### 区域
在 firewalld 中,最上层的组织单位是区域。如果一个包匹配某个区域所关联的网络接口或源 IP/掩码它就属于该区域。firewalld 预定义了几个可用的区域:
```
# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
```
任何配置了一个**网络接口**和/或一个**源**的区域就是一个<ruby>活动区域<rt>active zone</rt></ruby>。列出活动的区域:
```
# firewall-cmd --get-active-zones
public
interfaces: eno1 eno2
```
**Interfaces**(接口)是系统中硬件和虚拟网络适配器的名字,正如你在上面的示例中所看到的那样。所有活动的接口都会被分配到区域,要么是默认区域,要么是用户指定的区域。但是,一个接口不能被分配给多个区域。
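如果想把某个接口从一个区域改挂到另一个区域,大致可以这样操作(沿用上文的接口名 `eno1`,仅作示意):
```
# firewall-cmd --permanent --zone=internal --change-interface=eno1
# firewall-cmd --reload
```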
在缺省配置中firewalld 设置所有接口为 public 区域,并且不对任何区域设置源。其结果是,`public` 区域是唯一的活动区域。
**Sources**(源)是入站 IP 地址的范围,它也可以被分配到区域。一个源(或相互重叠的源)不能被分配到多个区域,否则会产生未定义的行为,因为不清楚应该将哪些规则应用于该源。
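为一个区域分配源的方式大致如下(这里的网段 192.0.2.0/24 是虚构的示例):
```
# firewall-cmd --permanent --zone=internal --add-source=192.0.2.0/24
# firewall-cmd --reload
```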
因为指定源不是必需的,所以对于每个包,总会有一个接口与之匹配的区域,但不一定有源与之匹配的区域。这意味着存在某种优先级机制,其中源区域的优先级更高,稍后将详细说明这种情况。首先,我们来检查 `public` 区域的配置:
```
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno1 eno2
sources:
services: dhcpv6-client ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
```
逐行说明如下:
* `public (default, active)` 表示 `public` 区域是默认区域(接口启用时会自动归入该区域),并且它是活动的,因为它至少有一个接口或源分配给它。
* `interfaces: eno1 eno2` 列出了这个区域上关联的接口。
* `sources:` 列出了这个区域的源。现在这里什么都没有,但是,如果这里有内容,它们应该是这样的格式 xxx.xxx.xxx.xxx/xx。
* `services: dhcpv6-client ssh` 列出了允许通过这个防火墙的服务。你可以通过运行 `firewall-cmd --get-services` 得到一个防火墙预定义服务的详细列表。
* `ports:` 列出了允许通过这个防火墙的目标端口。当你需要放行一个没有在 firewalld 中定义为服务的端口时,就可以用到它(参见列表后的示例)。
* `masquerade: no` 表示这个区域是否允许 IP 伪装。如果允许,就会启用 IP 转发,让你的计算机可以充当路由器。
* `forward-ports:` 列出转发的端口。
* `icmp-blocks:` 阻塞的 icmp 流量的黑名单。
* `rich rules:` 在一个区域中优先处理的高级配置。
* `default` 是该区域的<ruby>目标<rt>target</rt></ruby>,它决定了对与该区域匹配、但没有被上面任何设置显式处理的包所采取的动作。
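对于上面提到的 `ports:` 字段,可以用类似下面的命令添加一个自定义端口(端口 8080 只是一个假设的例子):
```
# firewall-cmd --permanent --zone=public --add-port=8080/tcp
# firewall-cmd --reload
```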
### 一个简单的单区域配置示例
如果想锁定你的防火墙,只需删除 public 区域当前允许的服务,并重新加载:
```
# firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --reload
```
这些命令执行后,防火墙的配置如下:
```
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno1 eno2
sources:
services:
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
```
本着尽可能严格地保证安全的精神,如果需要在你的防火墙上临时开放一个服务(假设是 ssh你可以把这个服务添加到当前会话中省略 `--permanent`),并且指示防火墙在指定的时间之后自动撤销该修改:
```
# firewall-cmd --zone=public --add-service=ssh --timeout=5m
```
`timeout` 选项的值是一个以秒(`s`)、分(`m`)或小时(`h`)为单位的时间。
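例如,临时放行一小时的 `http` 访问(服务和时长仅作示意):
```
# firewall-cmd --zone=public --add-service=http --timeout=1h
```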
### 目标
当一个区域收到来自其源或接口的一个包,却没有显式处理该包的规则时,区域的<ruby>目标<rt>target</rt></ruby>决定了相应行为:
* `ACCEPT`:通过这个包。
* `%%REJECT%%`:拒绝这个包,并返回一个拒绝的回复。
* `DROP`:丢弃这个包,不回复任何信息。
* `default`:不做任何事情。该区域不再管它,把它踢到“楼上”。
在 firewalld 0.3.9 中有一个 bug已经在 0.3.10 中修复):对于目标不是 `default` 的源区域,不管服务是否在白名单中,这个目标都会被应用。例如,一个目标为 `DROP` 的源区域会丢弃所有的包,甚至是白名单中的包。遗憾的是,这个版本的 firewalld 被打包到了 RHEL7 及其衍生版中,使它成为一个相当常见的 bug。本文中的示例避免了可能触发这种行为的情况。
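如果想确认你的系统是否受此影响,可以查看 firewalld 的版本(命令仅作示意,第二条适用于 RHEL 系发行版):
```
# firewall-cmd --version
# rpm -q firewalld
```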
### 优先权
活动区域扮演着两种不同的角色:关联了接口的区域作为接口区域,关联了源的区域作为源区域一个区域可以同时扮演这两种角色。firewalld 按下列顺序处理一个包:
1. 相应的源区域。可以存在零个或一个这样的区域。如果这个包满足某条<ruby>富规则<rt>rich rule</rt></ruby>、其服务在白名单中、或者该区域的目标不是 `default`,那么源区域处理这个包,并且在这里结束。否则,向上传递这个包。
2. 相应的接口区域。肯定有一个这样的区域。如果接口处理这个包,那么到这里结束。否则,向上传递这个包。
3. firewalld 默认动作。接受 icmp 包并拒绝其它的一切。
这里的关键信息是,源区域优先于接口区域。因此,多区域 firewalld 配置的一般设计模式是:创建一个有优先权的源区域,允许指定的 IP 对系统服务进行更高级别的访问;同时创建一个限制性的接口区域,约束其他人的访问。
### 一个简单的多区域示例
为演示优先级,让我们在 `public` 区域中将 `ssh` 替换成 `http`,并且为我们信任的 IP 地址(如 1.1.1.1)设置 `internal` 区域。以下命令可以完成这个任务:
```
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --permanent --zone=internal --add-source=1.1.1.1
# firewall-cmd --reload
```
这些命令的结果是生成如下的配置:
```
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno1 eno2
sources:
services: dhcpv6-client http
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
# firewall-cmd --zone=internal --list-all
internal (active)
interfaces:
sources: 1.1.1.1
services: dhcpv6-client mdns samba-client ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=internal --get-target
default
```
在上面的配置中,如果有人尝试从 1.1.1.1 进行 `ssh`,这个请求将会成功,因为源区域(`internal`)被首先应用,并且它允许 `ssh` 访问。
如果有人尝试从其它地址(如 2.2.2.2)访问 `ssh`,因为与源区域不匹配,这个请求不会由源区域处理,而是被直接转到接口区域(`public`),而它没有显式处理 `ssh`。由于 public 的目标是 `default`,请求被传递给 firewalld 的默认动作,即被拒绝。
如果 1.1.1.1 尝试进行 `http` 访问会怎样?源区域(`internal`)不允许这个服务,但其目标是 `default`,因此请求被传递到接口区域(`public`),而后者允许 `http` 访问。
现在,让我们假设有人从 3.3.3.3 骚扰你的网站。要限制来自那个 IP 的访问,只需把它加到预定义的 `drop` 区域,正如其名,该区域会丢弃所有连接:
```
# firewall-cmd --permanent --zone=drop --add-source=3.3.3.3
# firewall-cmd --reload
```
下一次 3.3.3.3 再尝试访问你的网站时firewalld 会把请求交给源区域(`drop`)处理。因为其目标是 `DROP`,请求将被直接丢弃,不会被传递到接口区域(`public`)。
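如果之后想解除封禁,把这个源从 `drop` 区域移除并重新加载即可(示意):
```
# firewall-cmd --permanent --zone=drop --remove-source=3.3.3.3
# firewall-cmd --reload
```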
### 一个实用的多区域示例
假设你要为你的组织的一台服务器配置防火墙。你希望允许全世界使用 `http` 和 `https` 访问,允许你的组织1.1.0.0/16,其中包含你的工作组 1.1.1.0/8)使用 `ssh` 访问,并且允许你的工作组访问 `samba` 服务。使用 firewalld 中的区域,你可以用一种很直观的方式实现这个配置。
从命名逻辑上看,把面向全世界的访问指定给 `public` 区域、把本地使用指定给 `internal` 区域是合理的。首先,在 `public` 区域中用 `http` 和 `https` 服务替换 `dhcpv6-client` 和 `ssh` 服务:
```
# firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --permanent --zone=public --add-service=https
```
然后,移除 `internal` 区域的 `mdns`、`samba-client` 和 `dhcpv6-client` 服务(仅保留 `ssh`),并把你的组织网段添加为该区域的源:
```
# firewall-cmd --permanent --zone=internal --remove-service=mdns
# firewall-cmd --permanent --zone=internal --remove-service=samba-client
# firewall-cmd --permanent --zone=internal --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=internal --add-source=1.1.0.0/16
```
为了授予你的工作组更高的 `samba` 访问权限,增加一条富规则:
```
# firewall-cmd --permanent --zone=internal --add-rich-rule='rule family=ipv4 source address="1.1.1.0/8" service name="samba" accept'
```
最后,重新加载,使这些变化在当前会话中生效:
```
# firewall-cmd --reload
```
仅剩下一些细节了。目前,从 `internal` 区域以外的 IP 尝试 `ssh` 到你的服务器,会收到一条拒绝消息,这是 firewalld 的默认行为。更安全的做法是表现得像一个不活跃的 IP 那样,直接丢弃连接。把 `public` 区域的目标从 `default` 改为 `DROP` 即可实现:
```
# firewall-cmd --permanent --zone=public --set-target=DROP
# firewall-cmd --reload
```
但是,等等,你发现不能 ping 了,甚至从内部网络也不行!而 icmpping 使用的协议)并不在 firewalld 可以列入白名单的服务列表中。这是因为 icmp 是第 3 层的 IP 协议,没有端口的概念,不像那些绑定了端口的服务。在把 public 区域设置为 `DROP` 之前ping 之所以能够通过防火墙,是因为 `default` 目标会把它上交给 firewalld 的默认动作,而默认动作允许它通过。但现在这条通路已经不存在了。
为恢复内部网络的 ping使用一个富规则
```
# firewall-cmd --permanent --zone=internal --add-rich-rule='rule protocol value="icmp" accept'
# firewall-cmd --reload
```
结果如下,这里是两个活动区域的配置:
```
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno1 eno2
sources:
services: http https
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=public --get-target
DROP
# firewall-cmd --zone=internal --list-all
internal (active)
interfaces:
sources: 1.1.0.0/16
services: ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
rule family=ipv4 source address="1.1.1.0/8" service name="samba" accept
rule protocol value="icmp" accept
# firewall-cmd --permanent --zone=internal --get-target
default
```
这个设置演示了一个三层嵌套的防火墙。最外层,`public`,是一个接口区域,包含全世界的访问。紧接着的一层,`internal`,是一个源区域,包含你的组织,它是 `public` 的一个子集。最后,一个富规则增加到最内层,包含了你的工作组,它是 `internal` 的一个子集。
这里的关键信息是:当一个场景可以分解为嵌套的层次时,最外层应使用接口区域,向内一层使用源区域,更内层则在源区域中附加富规则。
### 调试
firewalld 采用直观的范式来设计防火墙,但也因此比它的前任 iptables 更容易产生歧义。如果出现无法预料的行为,或者为了更好地理解 firewalld 是怎么工作的,可以使用 iptables 查看 netfilter 实际是如何配置的。前一个示例对应的输出如下,为了简单起见,对输出进行了修剪(略去了与转发、输出和日志相关的行):
```
# iptables -S
-P INPUT ACCEPT
... (forward and output lines) ...
-N INPUT_ZONES
-N INPUT_ZONES_SOURCE
-N INPUT_direct
-N IN_internal
-N IN_internal_allow
-N IN_internal_deny
-N IN_public
-N IN_public_allow
-N IN_public_deny
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j INPUT_ZONES_SOURCE
-A INPUT -j INPUT_ZONES
-A INPUT -p icmp -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -j REJECT --reject-with icmp-host-prohibited
... (forward and output lines) ...
-A INPUT_ZONES -i eno1 -j IN_public
-A INPUT_ZONES -i eno2 -j IN_public
-A INPUT_ZONES -j IN_public
-A INPUT_ZONES_SOURCE -s 1.1.0.0/16 -g IN_internal
-A IN_internal -j IN_internal_deny
-A IN_internal -j IN_internal_allow
-A IN_internal_allow -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/8 -p udp -m udp --dport 137 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/8 -p udp -m udp --dport 138 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/8 -p tcp -m tcp --dport 139 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/8 -p tcp -m tcp --dport 445 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -p icmp -m conntrack --ctstate NEW -j ACCEPT
-A IN_public -j IN_public_deny
-A IN_public -j IN_public_allow
-A IN_public -j DROP
-A IN_public_allow -p tcp -m tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
```
在上面的 iptables 输出中,首先声明的是新的链(以 `-N` 开始的行),剩下的规则是附加(以 `-A` 开始的行)上去的。已建立的连接和本地流量被允许通过,入站包先被转到 `INPUT_ZONES_SOURCE` 链在那里如果存在相应的区域IP 将被发送到那个区域。之后,流量被转到 `INPUT_ZONES` 链,从那里它被路由到相应的接口区域。如果在那里它没有被处理icmp 包会被允许通过,无效的包被丢弃,其余的一切都被拒绝。
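排查问题时,还可以查看各条链上的包计数器,以确认流量实际经过了哪条路径(示意):
```
# iptables -L INPUT_ZONES -v -n
```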
### 结论
firewalld 是一个文档不足的防火墙配置工具它的功能远比大多数人认识到的更为强大。凭借创新的区域范式firewalld 允许系统管理员把流量分解到各个按独特方式处理的分类中,简化了配置过程。凭借直观的设计和语法,它在实践中既适用于简单的单区域配置,也适用于复杂的多区域配置。
--------------------------------------------------------------------------------
via: https://www.linuxjournal.com/content/understanding-firewalld-multi-zone-configurations?page=0,0
作者:[Nathan Vance][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxjournal.com/users/nathan-vance
[1]:https://www.linuxjournal.com/tag/firewalls
[2]:https://www.linuxjournal.com/tag/howtos
[3]:https://www.linuxjournal.com/tag/networking
[4]:https://www.linuxjournal.com/tag/security
[5]:https://www.linuxjournal.com/tag/sysadmin
[6]:https://www.linuxjournal.com/users/william-f-polik
[7]:https://www.linuxjournal.com/users/nathan-vance

View File

@ -1,35 +1,28 @@
#Translating by aiwhj
Ubuntu Core in LXD containers
使用 LXD 容器运行 Ubuntu Core
============================================================
### Share or save
![LXD logo](https://linuxcontainers.org/static/img/containers.png)
### What's Ubuntu Core?
### Ubuntu Core 是什么?
Ubuntu Core is a version of Ubuntu that's fully transactional and entirely based on snap packages.
Ubuntu Core 是完全基于 snap 包构建,并且完全事务化的 Ubuntu 版本。
Most of the system is read-only. All installed applications come from snap packages and all updates are done using transactions. Meaning that should anything go wrong at any point during a package or system update, the system will be able to revert to the previous state and report the failure.
该系统大部分是只读的,所有已安装的应用全部来自 snap 包,完全使用事务化更新。这意味着不管在系统更新还是安装软件的时候遇到问题,整个系统都可以回退到之前的状态并且记录这个错误。
The current release of Ubuntu Core is called series 16 and was released in November 2016.
最新版是在 2016 年 11 月发布的 Ubuntu Core 16。
Note that on Ubuntu Core systems, only snap packages using confinement can be installed (no “classic” snaps) and that a good number of snaps will not fully work in this environment or will require some manual intervention (creating user and groups, …). Ubuntu Core gets improved on a weekly basis as new releases of snapd and the “core” snap are put out.
注意Ubuntu Core 限制只能够安装 snap 包(而非 “传统” 软件包),并且有相当数量的 snap 包在当前环境下不能正常运行,或者需要人工干预(创建用户和用户组等)才能正常运行。随着新版的 snapd 和 “core” snap 包发布Ubuntu Core 每周都会得到改进。
### Requirements
### 环境需求
As far as LXD is concerned, Ubuntu Core is just another Linux distribution. That being said, snapd does require unprivileged FUSE mounts and AppArmor namespacing and stacking, so you will need the following:
就 LXD 而言Ubuntu Core 仅仅相当于另一个 Linux 发行版。也就是说snapd 需要非特权的 FUSE 挂载以及 AppArmor 的命名空间和堆叠stacking功能所以你需要如下环境
* An up to date Ubuntu system using the official Ubuntu kernel
* 一个新版的使用 Ubuntu 官方内核的系统
* 一个新版的 LXD
* An up to date version of LXD
### 创建一个 Ubuntu Core 容器
### Creating an Ubuntu Core container
The Ubuntu Core images are currently published on the community image server.
You can launch a new container with:
当前 Ubuntu Core 镜像发布在社区的镜像服务器。你可以像这样启动一个新的容器:
```
stgraber@dakara:~$ lxc launch images:ubuntu-core/16 ubuntu-core
@ -37,9 +30,9 @@ Creating ubuntu-core
Starting ubuntu-core
```
The container will take a few seconds to start, first executing a first stage loader that determines what read-only image to use and setup the writable layers. You don't want to interrupt the container in that stage and “lxc exec” will likely just fail as pretty much nothing is available at that point.
这个容器启动需要一点时间,它会先执行第一阶段的加载程序,由加载程序确定使用哪一个只读镜像,并在其上设置可写层。你不要在这一阶段中断容器执行,这个时候几乎什么服务都不可用,所以执行 `lxc exec` 多半会失败。
Seconds later, “lxc list” will show the container IP address, indicating that it's booted into Ubuntu Core:
几秒钟之后,执行 `lxc list` 将会展示容器的 IP 地址,这表明已经启动了 Ubuntu Core
```
stgraber@dakara:~$ lxc list
@ -50,7 +43,7 @@ stgraber@dakara:~$ lxc list
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
```
You can then interact with that container the same way you would any other:
之后你就可以像使用其他容器一样和这个容器进行交互:
```
stgraber@dakara:~$ lxc exec ubuntu-core bash
@ -62,11 +55,11 @@ pc-kernel 4.4.0-45-4 37 canonical -
root@ubuntu-core:~#
```
### Updating the container
### 更新容器
If you've been tracking the development of Ubuntu Core, you'll know that those versions above are pretty old. That's because the disk images that are used as the source for the Ubuntu Core LXD images are only refreshed every few months. Ubuntu Core systems will automatically update once a day and then automatically reboot to boot onto the new version (and revert if this fails).
如果你一直关注着 Ubuntu Core 的开发,你应该知道上面的版本已经很老了。这是因为用作 Ubuntu Core LXD 镜像来源的磁盘镜像每隔几个月才会刷新一次。Ubuntu Core 系统每天会自动更新一次,然后自动重启以进入新版本(如果失败则回退)。
If you want to immediately force an update, you can do it with:
如果你想现在强制更新,你可以这样做:
```
stgraber@dakara:~$ lxc exec ubuntu-core bash
@ -80,7 +73,7 @@ series 16
root@ubuntu-core:~#
```
And then reboot the system and check the snapd version again:
然后重启系统,再次查看 snapd 的版本:
```
root@ubuntu-core:~# reboot
@ -94,7 +87,7 @@ series 16
root@ubuntu-core:~#
```
You can get a history of all snapd interactions with
你可以像下面这样获取所有 snapd 交互的历史记录:
```
stgraber@dakara:~$ lxc exec ubuntu-core snap changes
@ -104,9 +97,9 @@ ID Status Spawn Ready Summary
3 Done 2017-01-31T05:21:30Z 2017-01-31T05:22:45Z Refresh all snaps in the system
```
### Installing some snaps
### 安装 Snap 软件包
Let's start with the simplest snaps of all, the good old Hello World:
以一个最简单的例子开始,经典的 Hello World
```
stgraber@dakara:~$ lxc exec ubuntu-core bash
@ -116,7 +109,7 @@ root@ubuntu-core:~# hello-world
Hello World!
```
And then move on to something a bit more useful:
接下来让我们看一些更有用的:
```
stgraber@dakara:~$ lxc exec ubuntu-core bash
@ -124,9 +117,9 @@ root@ubuntu-core:~# snap install nextcloud
nextcloud 11.0.1snap2 from 'nextcloud' installed
```
Then hit your container over HTTP and you'll get to your newly deployed Nextcloud instance.
之后通过 HTTP 访问你的容器就可以看到刚才部署的 Nextcloud 实例。
If you feel like testing the latest LXD straight from git, you can do so with:
如果你想直接通过 git 测试最新版 LXD你可以这样做
```
stgraber@dakara:~$ lxc config set ubuntu-core security.nesting true
@ -155,7 +148,7 @@ What IPv6 address should be used (CIDR subnet notation, “auto” or “none”
LXD has been successfully configured.
```
And because container inception never gets old, let's run Ubuntu Core 16 inside Ubuntu Core 16:
由于“容器套容器”的玩法永远不会过时,让我们在这个 Ubuntu Core 16 容器中再运行一个 Ubuntu Core 16 容器:
```
root@ubuntu-core:~# lxc launch images:ubuntu-core/16 nested-core
@ -169,28 +162,29 @@ root@ubuntu-core:~# lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
```
### Conclusion
### 写在最后
If you ever wanted to try Ubuntu Core, this is a great way to do it. It's also a great tool for snap authors to make sure their snap is fully self-contained and will work in all environments.
如果你只是想试用一下 Ubuntu Core这是一个不错的方法。对于 snap 包作者来说,这也是一个很好的工具,可以确保他们的 snap 包完全自包含,能在任何环境下正常运行。
Ubuntu Core is a great fit for environments where you want to ensure that your system is always up to date and is entirely reproducible. This does come with a number of constraints that may or may not work for you.
如果你希望你的系统总是最新的并且整体可复制Ubuntu Core 是一个很不错的方案,不过这也会带来一些相应的限制,所以可能不太适合你。
And lastly, a word of warning. Those images are considered as good enough for testing, but aren't officially supported at this point. We are working towards getting fully supported Ubuntu Core LXD images on the official Ubuntu image server in the near future.
最后提醒一句,这些镜像用于测试已经足够好,但目前还没有获得官方支持。我们正在努力,在不久的将来让官方 Ubuntu 镜像服务器提供受到完整支持的 Ubuntu Core LXD 镜像。
### Extra information
### 附录
The main LXD website is at: [https://linuxcontainers.org/lxd][2] Development happens on Github at: [https://github.com/lxc/lxd][3]
Mailing-list support happens on: [https://lists.linuxcontainers.org][4]
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: [https://linuxcontainers.org/lxd/try-it][5]
- LXD 主站:[https://linuxcontainers.org/lxd][2]
- Github[https://github.com/lxc/lxd][3]
- 邮件列表:[https://lists.linuxcontainers.org][4]
- IRC#lxcontainers on irc.freenode.net
- 在线试用:[https://linuxcontainers.org/lxd/try-it][5]
--------------------------------------------------------------------------------
via: https://insights.ubuntu.com/2017/02/27/ubuntu-core-in-lxd-containers/
作者:[Stéphane Graber ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
作者:[Stéphane Graber][a]
译者:[aiwhj](https://github.com/aiwhj)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,48 @@
肯特·贝克:改变人生的代码整理魔法
==========
> 本文作者<ruby>肯特·贝克<rt>Kent Beck</rt></ruby>,是最早研究软件开发的模式和重构的人之一,是敏捷开发的开创者之一,更是极限编程和测试驱动开发的创始人,同时还是 Smalltalk 和 JUnit 的作者,对当今世界的软件开发影响深远。现在就职于 Facebook。
本周我一直在整理 Facebook 代码,而且我喜欢这个工作。我的职业生涯中已经整理了数千小时的代码,我有一套使这种整理更加安全、有趣和高效的规则。
整理工作是通过一系列短小而安全的步骤进行的。事实上,规则一就是**如果这很难,那就不要去做**。我以前在晚上做填字游戏,如果卡住了,那就去睡觉,第二天晚上那些没有发现的线索往往很容易找到。与其一心想要搞个大的,不如在遇到阻力的时候停下来。
整理会陷入这样一种感觉:你错失的要比你从一个个成功中获得的更多(稍后会细说)。第二条规则是**当你充满活力时开始,当你累了时停下来**。起来走走。如果还没有恢复精神,那这一天的工作就算做完了。
只有在把整理与其它变更仔细地分开追踪时(两者混在一起的差异会让我犯晕),整理工作才能与开发同步进行。第三条规则是**立即完成每一轮整理**。与功能开发所不同的是,功能开发只有在完成一大块工作时才有意义,而整理是基于时间一小块一小块完成的。
整理在任何步骤中都只需要付出不多的努力,所以我会在任何步骤遇到麻烦的时候放弃。所以,规则四是**两次失败后恢复**。如果我整理代码,运行测试,并遇到测试失败,那么我会立即修复它。如果我修复失败,我会立即恢复到上次已知最好的状态。
即便没有闪亮的新设计的愿景,整理也是有用的。不过,有时候我想看看事情会如何发展,所以第五条就是**实践**。执行一系列的整理和还原。第二次将更快,你会更加熟悉避免哪些坑。
只有在附带损害的风险较低,审查整理变化的成本也较低的时候整理才有用。规则六是**隔离整理**。如果你错过了在编写代码中途整理的机会,那么接下来可能很困难。要么完成并接着整理,要么还原、整理并进行修改。
试试这些整理:将临时声明的变量移动到它第一次使用的位置,简化布尔表达式(把 `return expression == True` 写成 `return expression`,提取一个辅助函数helper把逻辑或状态的作用范围缩小到实际使用的位置。
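下面用一小段 Python 代码示意这几种整理(纯属举例,函数和变量名都是虚构的):
```python
# 整理前:变量声明得太早,布尔表达式也绕了一圈
def is_eligible(user):
    threshold = 18                  # 声明离首次使用的位置太远
    result = user.age >= threshold
    if result == True:              # 多余的 == True 比较
        return True
    return False

# 整理后:变量移到首次使用处,直接返回表达式
def is_eligible(user):
    threshold = 18
    return user.age >= threshold
```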
### 规则
- 规则一、 如果这很难,那就不要去做
- 规则二、 当你充满活力时开始,当你累了时停下来
- 规则三、 立即完成每个环节工作
- 规则四、 两次失败后恢复
- 规则五、 实践
- 规则六、 隔离整理
### 尾声
我通过严格的整理改变了架构、提取了框架。用这种方式可以安全地做出重大改变。我认为这是因为,虽然每次整理的成本是不变的,但回报是指数级的;不过我还需要数据和模型来验证这个假说。
--------------------------------------------------------------------------------
via: https://www.facebook.com/notes/kent-beck/the-life-changing-magic-of-tidying-up-code/1544047022294823/
作者:[KENT BECK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.facebook.com/kentlbeck
[1]:https://www.facebook.com/notes/kent-beck/the-life-changing-magic-of-tidying-up-code/1544047022294823/?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#
[2]:https://www.facebook.com/kentlbeck
[3]:https://www.facebook.com/notes/kent-beck/the-life-changing-magic-of-tidying-up-code/1544047022294823/

View File

@ -0,0 +1,33 @@
Let's Encrypt 将于 2018 年 1 月发布通配证书
============================================================
Let's Encrypt 将于 2018 年 1 月开始发放通配证书。通配证书是一个经常被需要的功能,并且我们知道在一些情况下它可以使 HTTPS 部署更简单。我们希望提供通配证书有助于加速网络向 100% HTTPS 迈进。
Let's Encrypt 目前通过我们的全自动 DV 证书颁发和管理 API 保护了 4700 万个域名。自从 Let's Encrypt 的服务于 2015 年 12 月发布以来,它已经将加密网页的比例从 40% 大大地提高到了 58%。如果你对通配证书的可用性以及我们实现 100% 加密网页的使命感兴趣,我们请求你为我们的[夏季筹款活动][1]LCTT 译注:指之前的夏季活动,原文发布于今年夏季)做出贡献。
通配符证书可以保护基本域的任何数量的子域名(例如 *.example.com。这使得管理员可以为一个域及其所有子域使用单个证书和密钥对这可以使 HTTPS 部署更加容易。
通配符证书将通过我们[即将发布的 ACME v2 API 端点][2]免费提供。最初,我们只支持通过 DNS 验证来签发通配符证书,但随着时间的推移可能会探索更多的验证方式。我们鼓励大家在我们的[社区论坛][3]上提出任何关于通配证书支持的问题。
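作为参考,届时通过支持 ACME v2 的客户端申请通配证书的流程大致会像下面这样(这里以 certbot 为例,域名是虚构的,具体参数以届时的官方文档为准):
```
$ certbot certonly --manual --preferred-challenges dns -d '*.example.com'
```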
我们决定在夏季筹款活动中宣布这一令人兴奋的进展,因为我们是一个非营利组织,这要感谢使用我们服务的社区的慷慨支持。如果你想支持一个更安全和保密的网络,[现在捐赠吧][4]
我们要感谢我们的[社区][5]和我们的[赞助者][6],使我们所做的一切成为可能。如果你的公司或组织能够赞助 Let's Encrypt请发送电子邮件至 [sponsor@letsencrypt.org][7]。
--------------------------------------------------------------------------------
via: https://letsencrypt.org/2017/07/06/wildcard-certificates-coming-jan-2018.html
作者:[Josh Aas][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://letsencrypt.org/2017/07/06/wildcard-certificates-coming-jan-2018.html
[1]:https://letsencrypt.org/donate/
[2]:https://letsencrypt.org/2017/06/14/acme-v2-api.html
[3]:https://community.letsencrypt.org/
[4]:https://letsencrypt.org/donate/
[5]:https://letsencrypt.org/getinvolved/
[6]:https://letsencrypt.org/sponsors/
[7]:mailto:sponsor@letsencrypt.org

View File

@ -0,0 +1,149 @@
Linux 用户的手边工具Guide to Linux
============================================================
![Guide to Linux](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/guide-to-linux.png?itok=AAcrxjjc "Guide to Linux")
> “Guide to Linux” 这个应用并不完美,但它是一个非常好的工具,可以帮助你学习 Linux 命令。
还记得你初次使用 Linux 时的情景吗?对于有些人来说,其学习曲线可能颇具挑战性。比如,仅在 `/usr/bin` 中就能找到许多命令。在我目前使用的 Elementary OS 系统中,这个数字是 1944。当然这并不全是真正的命令也不全是我会用到的命令但这个数目已经很多了。
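顺便一提,如果你想在自己的系统上统计这个数字,可以用类似这样的命令(仅作示意):
```
$ ls /usr/bin | wc -l
```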
正因为如此(并且不同平台不一样),现在,新用户(和一些已经熟悉的用户)需要一些帮助。
对于每个管理员来说,这些技能是必须具备的:
* 熟悉平台
* 理解命令
* 编写 Shell 脚本
当你寻求帮助时有时你需要去“阅读那些该死的手册”Read the Fine/Freaking/Funky ManualLCTT 译注:一个网络用语,简写为 RTFM但是当你自己都不知道要找什么的时候它就没办法帮到你了。在那个时候你就会为你拥有像 [Guide to Linux][15] 这样的手机应用而感到高兴。
不像你在 Linux.com 上看到的大多数内容,这篇文章介绍的是一个 Android 应用。为什么呢?因为这个特殊的应用是用来帮助用户学习 Linux 的。
而且,它做的很好。
关于这个应用,我先把丑话说在前头 —— 它并不完美。Guide to Linux 里面充斥着很烂的英文、糟糕的标点符号,并且(如果你是一个纯粹主义者)它从来没有提到过 GNU。除此之外它的一个特有功能通常这类功能对用户非常有帮助表现得并不好LCTT 译注:指终端模拟器,后面会详细解释)。即便如此,我敢说 Guide to Linux 可能是 Linux 平台上最好的移动端“口袋指南”。
对于这个应用,你可能会喜欢它的如下特性:
* 离线使用
* Linux 教程
* 基础的和高级的 Linux 命令的详细介绍
* 包含了命令示例和语法
* 专用的 Shell 脚本模块
除此以外Guide to Linux 是免费提供的尽管里面有一些广告。如果你想去除广告它提供了一个应用内购买2.99 美元/年)。
让我们来安装这个应用,来看一看它的构成。
### 安装
像所有的 Android 应用一样,安装 Guide to Linux 是非常简单的。按照以下简单的几步就可以安装它了:
1. 打开你的 Android 设备上的 Google Play 商店
2. 搜索 Guide to Linux
3. 找到 Essence Infotech 的那个,并轻触进入
4. 轻触 Install
5. 允许安装
安装完成后,你可以在你的<ruby>应用抽屉<rt>App Drawer</rt></ruby>或主屏幕上(或者两者都有)上找到它去启动 Guide to Linux 。轻触图标去启动这个应用。
### 使用
让我们看一下 Guide to Linux 的每个功能。我发现某些功能比其它的更有帮助,或许你的体验会不一样。在我们分别讲解之前,我要重点说一下它的界面。开发者在为这个应用创建易于使用的界面方面做得很好。
从主窗口中(图 1你可以获取四个易于访问的功能。
![Guide to Linux main window](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux1.jpg?itok=UJhPP80J "Guide to Linux main window")
*图 1 The Guide to Linux 主窗口。[已获授权][1]*
轻触四个图标中的任何一个去启动一个功能,然后,准备去学习。
### 教程
让我们从这个应用教程的最 “新手友好” 的功能开始。打开“Tutorial”功能然后将看到该教程的欢迎部分“Linux 操作系统介绍”(图 2
![The Tutorial](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux2.jpg?itok=LiJ8pHdS "The Tutorial")
*图 2教程开始。[已获授权][2]*
如果你轻触 “汉堡包菜单” (左上角的三个横线),显示了内容列表(图 3因此你可以在教程中选择任何一个可用部分。
![Tutorial TOC](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux3_0.jpg?itok=5nJNeYN- "Tutorial TOC")
*图 3教程的内容列表。[已获授权][3]*
如果你现在还没有注意到Guide to Linux 的教程部分是每个主题下一系列短文的集合。短文中包含图片,有时还有链接,链接会根据主题需要把你带到指定的网站。这里没有交互,只能阅读。但这是一个很好的起点,因为开发者在描述各个部分方面做得很好(虽然有语法问题)。
尽管你可以在窗口的顶部看到一个搜索选项,但我没有发现这个功能有任何效果 —— 不过你可以试一下。
对于 Linux 新手来说,如果希望获得 Linux 管理的技能,你需要去阅读整个教程。完成之后,转到下一个主题。
### 命令
命令功能就像手机上的 man 手册页,收录了大量常用的 Linux 命令。当你首次打开它,欢迎页面会详细解释使用命令的益处。
读完之后,你可以轻触向右的箭头(在屏幕底部)或轻触 “汉堡包菜单” ,然后从侧边栏中选择你想去学习的其它命令。(图 4
![Commands](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux4.jpg?itok=Rmzfb8Or "Commands")
*图 4命令侧边栏允许你去查看列出的命令。[已获授权][4]*
轻触任意一个命令,就可以阅读这个命令的解释。每个命令的解释页面都给出了命令及其选项的使用示例。
### Shell 脚本
到这个时候,你开始熟悉 Linux 了,并对命令已经有一定程度的掌握。现在,是时候去熟悉 shell 脚本了。这个部分的组织方式与教程部分和命令部分相同。
你可以打开内容列表的侧边栏,然后打开包含 shell 脚本教程的任意部分(图 5
![Shell Script](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux-5-new.jpg?itok=EDlZ92IA "Shell Script")
*图 5Shell 脚本节看上去很熟悉。[已获授权][5]*
开发者在解释如何最大限度地利用 shell 脚本方面做的很好。对于任何有兴趣学习 shell 脚本细节的人来说,这是个很好的起点。
### 终端
现在我们到了一个有趣的地方,开发者在这个应用中包含了一个终端模拟器。遗憾的是,当你在一个没有 “root” 权限的 Android 设备上安装这个应用时,你会发现自己被限制在一个只读文件系统中,在那里,大部分命令根本无法工作。不过,我在一台 Pixel 2 (通过 Android 应用商店安装)上使用 Guide to Linux 时,这个功能的可用性要好一些(但仍然有限)。在一台 OnePlus 3 (没有 root 过)上,不管我切换到哪个目录,都会得到相同的错误信息 “permission denied”甚至一个简单的命令也是如此。
而在 Chromebook 上,不管怎么操作,它都工作正常(图 6。不过可以说它一直工作在一个只读操作系统中因此你无法用它做真正的工作或创建新文件。
![Permission denied](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/guidetolinux6_0.jpg?itok=cVENH5lM "Permission denied")
*图 6 可以完美地(可以这么说)用一个终端模拟器去工作。[已获授权][6]*
记住,这并不是真正的完整终端,但它是新用户熟悉终端工作方式的一种途径。遗憾的是,大多数用户只会对这个工具的终端功能感到沮丧,仅仅是因为他们无法在这里使用在其它部分学到的东西。开发者或许可以把这个终端功能打造成一个 Linux 文件系统沙箱,让用户能真正地用它来学习,每次打开该工具时,沙箱恢复到原始状态。这只是我的一个想法。
### 写在最后…
尽管终端功能被只读文件系统限制到几乎无法使用的程度Guide to Linux 仍然是新手学习 Linux 的好工具。通过 Guide to Linux你将学到关于 Linux、命令和 shell 脚本的很多知识,在安装你的第一个发行版之前,它能让你的 Linux 学习有一个好的起点。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/8/guide-linux-app-handy-tool-every-level-linux-user
作者:[JACK WALLEN][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/used-permission
[7]:https://www.linux.com/licenses/category/used-permission
[8]:https://www.linux.com/files/images/guidetolinux1jpg
[9]:https://www.linux.com/files/images/guidetolinux2jpg
[10]:https://www.linux.com/files/images/guidetolinux3jpg-0
[11]:https://www.linux.com/files/images/guidetolinux4jpg
[12]:https://www.linux.com/files/images/guidetolinux-5-newjpg
[13]:https://www.linux.com/files/images/guidetolinux6jpg-0
[14]:https://www.linux.com/files/images/guide-linuxpng
[15]:https://play.google.com/store/apps/details?id=com.essence.linuxcommands
[16]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,53 +1,51 @@
[放弃你的代码,而不是你的时间][23]
放弃你的代码,而不是你的时间
============================================================
作为软件开发人员,我认为我们可以认同开源代码已经[改变了世界][9]。它的公共性质去除了软件可以变成最好的阻碍。问题是太多有价值的项目由于领导者的经历耗尽而停滞不前:
作为软件开发人员,我认为我们可以认同开源代码^注1 已经[改变了世界][9]。它的公共性质去除了壁垒,可以让软件可以变的最好。但问题是,太多有价值的项目由于领导者的精力耗尽而停滞不前:
>“我没有时间和精力去投入开源了。我在开源上没有得到任何收入,所以我在那上面花的时间,我可以用在“生活上的事”,或者写作。。。正因为如此,我决定现在结束我所有的开源工作。”
>“我没有时间和精力去投入开源了。我在开源上没有得到任何收入,所以我在那上面花的时间,我可以用在‘生活上的事’,或者写作上……正因为如此,我决定现在结束我所有的开源工作。”
>
>—[Ryan Bigg几个 Ruby 和 Elixir 项目的前任维护者][1]
>
>“这也是一个巨大的机会成本,由于同时很多事情我无法学习或者完成,因为 FubuMVC 占用了我很多的时间,这是它现在必须停下来的主要原因。”
>“这也是一个巨大的机会成本,由于我无法同时学习或者完成很多事情FubuMVC 占用了我很多的时间,这是它现在必须停下来的主要原因。”
>
>—[前 FubuMVC 项目负责人 Jeremy Miller][2]
>
>“当我们决定要孩子的时候,我可能会放弃开源,我预计将是最终解决我问题的方案:最终选择。”
>“当我们决定要孩子的时候,我可能会放弃开源,我预计最终解决我问题的方案将是:核武器。”
>
>—[Nolan LawsonPouchDB 的维护者之一][3]
我们需要的是一种新的行业规范即项目领导者将_一直_获得时间上的补偿。我们还需要抛弃的想法是, 任何提交问题或合并请求的开发人员都自动会得到维护者的注意。
我们需要的是一种新的行业规范即项目领导者将_总是_能获得其付出的时间上的补偿。我们还需要抛弃的想法是 任何提交问题或合并请求的开发人员都自动会得到维护者的注意。
我们先来回顾一下开源库在市场上的作用。它是一个积木。它是[实用软件][10],一个企业为了在别处获利而必须承担的成本。如果用户能够理解代码并且发现它比替代方案(闭源专用、定制的内部解决方案等)更有价值,那么围绕软件的社区就会不断增长。它可以更好,更便宜,或两者兼而有之。
我们先来回顾一下开源代码在市场上的作用。它是一个积木。它是[实用软件][10],是企业为了在别处获利而必须承担的成本。如果用户能够理解代码的用途并且发现它比替代方案(闭源专用、定制的内部解决方案等)更有价值,那么围绕软件的社区就会不断增长。它可以更好,更便宜,或两者兼而有之。
如果一个组织需要改进代码,他们可以自由地聘请任何他们想要的开发人员。通常情况下[为了他们的利益][11]会将改进贡献给社区,因为由于合并的复杂性,这是他们能够轻松地从其他用户获得未来改进的唯一方式。这种“引力”倾向于把社区聚集在一起。
如果一个组织需要改进代码,他们可以自由地聘请任何他们想要的开发人员。通常情况下[为了他们的利益][11]会将改进贡献给社区,因为由于代码合并的复杂性,这是他们能够轻松地从其他用户获得未来改进的唯一方式。这种“引力”倾向于把社区聚集在一起。
但是它也会加重项目维护者的负担,因为他们必须对这些改进做出反应。他们得到了什么回报?最好的情况是,社区贡献可能是他们将来可以使用的东西,但现在不是。最坏的情况下,这只不过是一个带着利他主义面具的自私请求罢了。
但是它也会加重项目维护者的负担,因为他们必须对这些改进做出反应。他们得到了什么回报?最好的情况是,这些社区贡献可能是他们将来可以使用的东西,但现在不是。最坏的情况下,这只不过是一个带着利他主义面具的自私请求罢了。
一类开源项目避免了这个陷阱。Linux、MySQL、Android、Chromium 和 .NET Core 除了有名有什么共同点么他们都对一个或多个大型企业具有_战略性重要意义_因为它们满足了这些利益。[聪明的公司商品化他们的商品][12],没有比开源软件便宜的商品。红帽需要使用 Linux 的公司来销售企业级 LinuxOracle 使用 MySQL 销售 MySQL Enterprise谷歌希望世界上每个人都拥有电话和浏览器而微软则试图将开发者锁定在平台上然后将它们拉入 Azure 云。这些项目全部由各自公司直接资助。
一类开源项目避免了这个陷阱。Linux、MySQL、Android、Chromium 和 .NET Core 除了有名有什么共同点么他们都对一个或多个大型企业具有_战略性重要意义_因为它们满足了这些利益。[聪明的公司商品化他们的商品][12],没有什么比开源软件便宜的商品。红帽需要那些使用 Linux 的公司来销售企业级 LinuxOracle 使用 MySQL 作为销售 MySQL Enterprise 的引子,谷歌希望世界上每个人都拥有电话和浏览器,而微软则试图将开发者锁定在平台上然后将它们拉入 Azure 云。这些项目全部由各自公司直接资助。
但是那些其他的项目呢,那些不是大玩家核心战略的项目呢?
如果你是其中一个项目的负责人,请社区成员收取年费。_开放的源码封闭的社区_。给用户的信息应该是“尽你所愿地使用代码但如果你想影响项目的未来请_为我们的时间支付_。”将非付费用户锁定在论坛和问题跟踪之外并忽略他们的电子邮件。不支付的人应该觉得他们错过了派对。
如果你是其中一个项目的负责人,请社区成员收取年费。_开放的源码封闭的社区_。给用户的信息应该是“尽你所愿地使用代码但如果你想影响项目的未来请_为我们的时间支付_。”将非付费用户锁定在论坛和问题跟踪之外并忽略他们的电子邮件。不支付的人应该觉得他们错过了派对。
还要向贡献者收取合并非普通合并请求的时间花费。如果一个特定的提交不会立即给你带来好处,请为你的时间收取全价。要有纪律并[记住 YAGNI][13]。
还要向贡献者收取合并非普通合并请求的时间花费。如果一个特定的提交不会立即给你带来好处,请为你的时间收取全价。要有原则并[记住 YAGNI][13]。
这会导致一个极小的社区和更多的分支么绝对。但是如果你坚持不懈地构建自己的愿景并为其他人创造价值他们会尽快为要做的贡献支付。_合并贡献的意愿是[稀缺资源][4]_。如果没有它用户必须反复地将它们的变化与你发布的每个新版本进行协调。
这会导致一个极小的社区和更多的分支么?绝对。但是,如果你坚持不懈地构建自己的愿景,并为其他人创造价值,他们会尽快为要做的贡献支付。_合并贡献的意愿是[稀缺资源][4]_。如果没有它用户必须反复地将它们的变化与你发布的每个新版本进行协调。
如果你想在代码库中保持高水平的[概念完整性][14],那么限制社区是特别重要的。有[宽大贡献政策][15]的无领导者项目没有必要收费。
如果你想在代码库中保持高水平的[概念完整性][14],那么限制社区是特别重要的。有[自由贡献政策][15]的无领导者项目没有必要收费。
为了实现更大的愿景,而不是单独为自己的业务支付成本,而是可能使其他人受益,去[众筹][16]。有许多成功的故事:
为了实现更大的愿景,而不是单独为自己的业务支付成本,而是可能使其他人受益,去[众筹][16]。有许多成功的故事:
> [Font Awesome 5][5]
>
> [Ruby enVironment Management (RVM)][6]
>
> [Django REST framework 3][7]
- [Font Awesome 5][5]
- [Ruby enVironment Management (RVM)][6]
- [Django REST framework 3][7]
[众筹有局限性][17]。它[不适合][18][大型项目][19]。但是,开源代码也是实用软件,它不需要雄心勃勃,冒险的破局者。它已经[渗透到每个行业][20],只有增量更新
[众筹有局限性][17]。它[不适合][18][大型项目][19]。但是,开源代码也是实用软件,它不需要雄心勃勃、冒险的破局者。它已经一点点地[渗透到每个行业][20]。
这些观点代表着一条可持续发展的道路,也可以解决[开源的多样性问题][21],这可能源于其历史上无偿的性质。但最重要的是,我们要记住,[我们一生中只留下那么多的按键次数][22],而且我们总有一天会后悔那些我们浪费的东西。
 _当我说“开源”时,我的意思是代码[许可][8]以某种方式来构建专有的东西。这通常意味着一个宽松许可证MIT 或 Apache 或 BSD但并非总是如此。Linux 是当今科技行业的核心,但是是以 GPL 授权的。_
- 注 1当我说“开源”时我的意思是代码[许可][8]以某种方式来构建专有的东西。这通常意味着一个宽松许可证MIT、Apache 或 BSD但并非总是如此。Linux 是当今科技行业的核心,但它是以 GPL 授权的。
感谢 Jason Haley、Don McNamara、Bryan Hogan 和 Nadia Eghbal 阅读了这篇文章的草稿。
@ -57,7 +55,7 @@ via: http://wgross.net/essays/give-away-your-code-but-never-your-time
作者:[William Gross][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,97 @@
很遗憾,我也不知道什么是容器!
========================
![But I dont know what a container is!](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/container-ship.png?itok=pqZYgQ7K "But I don't know what a container is")
> 题图抽象地形容了容器和虚拟机是那么的相似,又是那么的不同!
在近期的一些会议和学术交流会上,我一直在讲述有关 DevOps 的安全问题(亦称为 DevSecOps^注1 。通常,我首先都会问一个问题:“在座的各位有谁知道什么是容器吗?” 通常并没有很多人举手^注2 ,所以我都会先简单介绍一下什么是容器^注3 ,然后再进行深层次的讨论交流。
更准确的说,在运用 DevOps 或者 DevSecOps 的时候,容器并不是必须的。但容器能很好的融于 DevOps 和 DevSecOps 方案中,结果就是,虽然不用容器便可以运用 DevOps ,但我还是假设大部分人依然会使用容器。
### 什么是容器
几个月前的一个会议上,一位同事正在演示容器的操作,因为大家都不是这个方面的专家,所以他就很简单地开始了他的演示。他说了诸如“在 Linux 内核源码中没有一处提及到<ruby>容器<rt>container</rt></ruby>”之类的话。事实上,在这样的特殊群体中,这种描述是很危险的。就在几秒钟内,我和我的老板(坐在我旁边)下载了最新版本的内核源代码,并且查找统计了其中 “container” 一词出现的次数。很显然,这位同事的说法并不准确。更准确来说,我在旧版本内核4.9.2)代码中发现有 15273 行代码包含 “container” 一词^注4 。我和我老板会心一笑,确认同事的说法有误,并在休息时纠正了他这个有误的描述。
后来我们搞清楚了,同事想表达的意思是 Linux 内核中并没有一个明确的容器概念。换句话说,容器使用了 Linux 内核中的一些概念、组件、工具以及机制,并没有什么特殊的东西;这些东西也可以用于其他目的^注 5 。所以才会有“从 Linux 内核角度来看,并没有容器这样的东西”的说法。
然后,什么是容器呢?我有着虚拟化(<ruby>管理器<rt>hypervisor</rt></ruby>和虚拟机)技术的背景,在我看来, 容器既像虚拟机VM又不像虚拟机。我知道这种解释好像没什么用不过请听我细细道来。
### 容器和虚拟机相似之处有哪些?
容器和虚拟机相似的一个主要方面就是它是一个可执行单元。将文件打包生成镜像文件,然后它就可以运行在合适的主机平台上。和虚拟机一样,它运行于主机上,同样,它的运行也受制于该主机。主机平台为容器的运行提供软件环境和硬件资源(诸如 CPU 资源、网络环境、存储资源等等),除此之外,主机还需要负责以下的任务:
1. 为每一个工作单元(这里指虚拟机和容器)提供保护机制,这样可以保证即使某一个工作单元出现恶意的、有害的以及不能写入的情况时不会影响其他的工作单元。
2. 主机保护自己不会受一些恶意运行或出现故障的工作单元影响。
虚拟机和容器实现这种隔离的原理并不一样,虚拟机的隔离是由管理器划分硬件资源实现的,而容器的隔离则是通过 Linux 内核提供的软件功能实现的^注6 。这种软件控制机制通过不同的“命名空间”保证了每一个容器的文件、用户以及网络连接等互不可见,当然容器和主机之间也互不可见。这种功能也能由 SELinux 之类的软件提供,它们提供了进一步隔离容器的功能。
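如果想直观地感受一下命名空间,可以用 util-linux 提供的 `unshare` 做个小实验(以 UTS 命名空间为例,输出中的主机名是虚构的):
```
$ sudo unshare --uts sh -c 'hostname container-demo; hostname'
container-demo
$ hostname    # 宿主机的主机名不受影响
myhost
```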
### 容器和虚拟机不同之处又有哪些?
以上描述有个问题,如果你对<ruby>管理器<rt>hypervisor</rt></ruby>机制概念比较模糊,也许你会认为容器就是虚拟机,但它确实不是。
首先,最为重要的一点^注7 ,容器是一种包格式。也许你会惊讶的反问我“什么,你不是说过容器是某种可执行文件么?” 对,容器确实是可执行文件,但容器如此迷人的一个主要原因就是它能很容易的生成比虚拟机小很多的实体化镜像文件。由于这些原因,容器消耗很少的内存,并且能非常快的启动与关闭。你可以在几分钟或者几秒钟(甚至毫秒级别)之内就启动一个容器,而虚拟机则不具备这些特点。
正因为容器是如此轻量级且易于替换,人们使用它们来创建微服务——应用程序拆分而成的最小组件,它们可以和一个或多个其它微服务构成任何你想要的应用。假使你只在一个容器内运行某个特定功能或者任务,你也可以让容器变得很小,这样丢弃旧容器创建新容器将变得很容易。我将在后续的文章中继续跟进这个问题以及它们对安全性的可能影响,当然,也包括 DevSecOps 。
希望这是一次对容器的有用的介绍,并且能带动你有动力去学习 DevSecOps 的知识(如果你不是,假装一下也好)。
---
- 注 1我觉得 DevSecOps 读起来很奇怪,而 DevOpsSec 往往有多元化的理解,然后所讨论的主题就不一样了。
- 注 2我应该注意到这不仅仅会被比较保守、不太喜欢被人注意的英国听众所了解也会被加拿大人和美国人所了解他们的性格则和英国人不一样。
- 注 3当然我只是想讨论 Linux 容器。我知道关于这个问题,是有历史根源的,所以它也值得注意,而不是我故弄玄虚。
- 注 4如果你感兴趣的话我使用的是命令 `grep -ir container linux-4.9.2 | wc -l`
- 注 5公平的说我们快速浏览一下一些用途与我们讨论容器的方式无关我们讨论的是 Linux 容器,它是抽象的,可以用来包含其他元素,因此在逻辑上被称为容器。
- 注 6也有一些巧妙的方法可以将容器和虚拟机结合起来以发挥它们各自的优势那个不在我今天的主题范围内。
- 注 7很明显除了我们刚才介绍的执行位。
*原文来自 [Alice, Eve, and Bob—a security blog][7] ,转载请注明*
(题图: opensource.com
---
**作者简介**
原文作者 Mike Bursell 是一名居住在英国、喜欢威士忌的开源爱好者, Red Hat 首席安全架构师。其自从 1997 年接触开源世界以来,生活和工作中一直使用 Linux (尽管不是一直都很容易)。更多信息请参考作者的博客 https://aliceevebob.com ,作者会不定期的更新一些有关安全方面的文章。
---
via: https://opensource.com/article/17/10/what-are-containers
作者:[Mike Bursell][a]
译者:[jrglinux](https://github.com/jrglinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[1]: https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&amp;intcmp=70160000000h1s6AAA
[2]: https://opensource.com/resources/what-docker?utm_campaign=containers&amp;intcmp=70160000000h1s6AAA
[3]: https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&amp;intcmp=70160000000h1s6AAA
[4]: https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&amp;intcmp=70160000000h1s6AAA
[5]: https://opensource.com/article/17/10/what-are-containers?rate=sPHuhiD4Z3D3vJ6ZqDT-wGp8wQjcQDv-iHf2OBG_oGQ
[6]: https://opensource.com/article/17/10/what-are-containers#*******
[7]: https://aliceevebob.wordpress.com/2017/07/04/but-i-dont-know-what-a-container-is/
[8]: https://opensource.com/user/105961/feed
[9]: https://opensource.com/article/17/10/what-are-containers#*
[10]: https://opensource.com/article/17/10/what-are-containers#**
[11]: https://opensource.com/article/17/10/what-are-containers#***
[12]: https://opensource.com/article/17/10/what-are-containers#******
[13]: https://opensource.com/article/17/10/what-are-containers#*****
[14]: https://opensource.com/users/mikecamel
[15]: https://opensource.com/users/mikecamel
[16]: https://opensource.com/article/17/10/what-are-containers#****

View File

@ -0,0 +1,198 @@
借助 minikube 上手 OpenFaaS
============================================================
本文将介绍如何借助 [minikube][4] 在 Kubernetes 1.8 上搭建 OpenFaaS让 Serverless Function 变得更简单。minikube 是一个 [Kubernetes][5] 分发版,借助它,你可以在笔记本电脑上运行 Kubernetes 集群minikube 支持 Mac 和 Linux 操作系统,但是在 MacOS 上使用得更多一些。
> 本文基于我们最新的部署手册 [Kubernetes 官方部署指南][6]
![](https://cdn-images-1.medium.com/max/1600/1*C9845SlyaaT1_xrAGOBURg.png)
### 安装部署 Minikube
1、 安装 [xhyve driver][1] 或 [VirtualBox][2] ,然后在上面安装 Linux 虚拟机以部署 minikube 。根据我的经验VirtualBox 更稳定一些。
2、 [参照官方文档][3] 安装 minikube 。
3、 使用 `brew` 或 `curl -sL cli.openfaas.com | sudo sh` 安装 `faas-cli`。
4、 通过 `brew install kubernetes-helm` 安装 `helm` 命令行。
5、 启动 minikube`minikube start`。
> Docker 船长小贴士Mac 和 Windows 版本的 Docker 已经集成了对 Kubernetes 的支持。现在我们使用 Kubernetes 的时候,已经不需要再安装额外的软件了。
### 在 minikube 上面部署 OpenFaaS
1、 为 Helm 的服务器组件 tiller 新建服务账号:
```
kubectl -n kube-system create sa tiller \
&& kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
```
2、 安装 Helm 的服务端组件 tiller
```
helm init --skip-refresh --upgrade --service-account tiller
```
3、 克隆 Kubernetes 的 OpenFaaS 驱动程序 faas-netes 
```
git clone https://github.com/openfaas/faas-netes && cd faas-netes
```
4、 Minikube 没有配置 RBAC这里我们需要把 RBAC 关闭:
```
helm upgrade --install --debug --reset-values --set async=false --set rbac=false openfaas openfaas/
```
LCTT 译注RBACRole-Based access control基于角色的访问权限控制在计算机权限管理中较为常用详情请参考以下链接https://en.wikipedia.org/wiki/Role-based_access_control
现在,你可以看到 OpenFaaS pod 已经在你的 minikube 集群上运行起来了。输入 `kubectl get pods` 以查看 OpenFaaS pod
```
NAME READY STATUS RESTARTS AGE
alertmanager-6dbdcddfc4-fjmrf 1/1 Running 0 1m
faas-netesd-7b5b7d9d4-h9ftx 1/1 Running 0 1m
gateway-965d6676d-7xcv9 1/1 Running 0 1m
prometheus-64f9844488-t2mvn 1/1 Running 0 1m
```
### 30,000ft 概览
该 API 网关包含了一个 [用于测试功能的最小化 UI][7],同时开放了用于功能管理的 [RESTful API][8] 。
faas-netesd 守护进程是一种 Kubernetes 控制器,它连接 Kubernetes API 服务器来管理服务、部署和机密secret。
Prometheus 和 AlertManager 进程协同工作,实现 OpenFaaS Function 的弹性缩放,以满足业务需求。通过 Prometheus 指标我们可以查看系统的整体运行状态,还可以用来开发功能强悍的仪表盘。
Prometheus 仪表盘示例:
![](https://cdn-images-1.medium.com/max/1600/1*b0RnaFIss5fOJXkpIJJgMw.jpeg)
### 构建/迁移/运行
和很多其他的 FaaS 项目不同OpenFaaS 使用 Docker 镜像格式来进行 Function 的创建和版本控制,这意味着可以在生产环境中使用 OpenFaaS 实现以下目标:
* 漏洞扫描LCTT 译注:此处我觉得应该理解为更快地实现漏洞补丁)
* 持续集成/持续开发
* 滚动更新
你也可以在现有的生产环境集群中利用空闲资源部署 OpenFaaS。其核心服务组件内存占用大概在 10-30MB 。
> OpenFaaS 一个关键的优势在于,它使用容器编排平台的 API这样可以和 Kubernetes 以及 Docker Swarm 进行原生集成。同时,由于使用 Docker <ruby>存储库<rt>registry</rt></ruby>进行 Function 的版本控制,所以可以按需扩展 Function而不会像“按需构建 Function”的框架那样产生额外的延时。
### 新建 Function
```
faas-cli new --lang python hello
```
以上命令创建了文件 `hello.yml` 以及文件夹 `hello`,文件夹中有两个文件:`handler.py` 和 `requirements.txt`,后者可以声明你可能需要的 pip 模块(列表后附有一个 `handler.py` 的示意)。你可以随时编辑这些文件和文件夹,不需要担心如何维护 Dockerfile —— 我们会为你维护它,其特点如下:
* 分级创建
* 非 root 用户
* 以官方的 Docker Alpine Linux 版本为基础进行镜像构建 (可替换)
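作为参考,默认 python 模板生成的 `handler.py` 大致是下面这个样子(仅作示意,具体的函数签名和返回方式取决于模板版本):
```python
def handle(req):
    # req 是请求正文字符串,返回值将作为响应返回(具体约定取决于模板版本)
    return "Hello! 你说的是: " + req
```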
### 构建你的 Function
先在本地构建 Function然后推送到 Docker 存储库。我们这里使用 Docker Hub打开文件 `hello.yml`,然后输入你的账号名:
```
provider:
name: faas
gateway: http://localhost:8080
functions:
hello:
lang: python
handler: ./hello
image: alexellis2/hello
```
现在,发起构建。你的本地系统上需要安装 Docker 。
```
faas-cli build -f hello.yml
```
把封装好 Function 的 Docker 镜像推送到 Docker Hub。如果还没有登录 Docker Hub继续之前需要先执行命令 `docker login` 。
```
faas-cli push -f hello.yml
```
当系统中有多个 Function 的时候,可以使用 `--parallel=N` 来调用多核并行处理构建或推送任务。该命令也支持这些选项: `--no-cache`、`--squash` 。
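例如(这里的栈文件名 stack.yml 和并发数只是示意):
```
$ faas-cli build -f stack.yml --parallel=4 --no-cache
```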
### 部署及测试 Function
现在,可以部署、列出、调用 Function 了。每次调用 Function 时,可以通过 Prometheus 收集指标值。
```
$ export gw=http://$(minikube ip):31112
$ faas-cli deploy -f hello.yml --gateway $gw
Deploying: hello.
No existing function to remove
Deployed.
URL: http://192.168.99.100:31112/function/hello
```
上面给到的是部署时调用 Function 的标准方法,你也可以使用下面的命令:
```
$ echo test | faas-cli invoke hello --gateway $gw
```
现在可以通过以下命令列出部署好的 Function你将看到调用计数器数值增加。
```
$ faas-cli list --gateway $gw
Function Invocations Replicas
hello 1 1
```
_提示这条命令也可以加上 `--verbose` 选项获得更详细的信息。_
由于我们是在远端集群Linux 虚拟机)上面运行 OpenFaaS命令里面加上一条 `--gateway` 用来覆盖环境变量。 这个选项同样适用于云平台上的远程主机。除了加上这条选项以外,还可以通过编辑 .yml 文件里面的 `gateway` 值来达到同样的效果。
### 迁移到 minikube 以外的环境
一旦你熟悉了在 minikube 上运行 OpenFaaS就可以在任意 Linux 主机上搭建 Kubernetes 集群来部署 OpenFaaS 了。下图是由来自 WeaveWorks 的 Stefan Prodan 做的 OpenFaaS Demo这个 Demo 部署在 Google GKE 平台上的 Kubernetes 上面。图片上展示的是 OpenFaaS 内置的自动扩容功能:
![](https://twitter.com/stefanprodan/status/931490255684939777/photo/1)
### 继续学习
我们的 Github 上面有很多手册和博文,可以带你轻松“上车”,把我们的页面保存成书签吧:[openfaas/faas][9] 。
2017 哥本哈根 Dockercon Moby 峰会上,我做了关于 Serverless 和 OpenFaaS 的概述演讲,这里我把视频放上来,视频不长,大概 15 分钟左右。
[Youtube视频](https://youtu.be/UaReIKa2of8)
最后,别忘了关注 [OpenFaaS on Twitter][11] 这里有最潮的新闻、最酷的技术和 Demo 展示。
--------------------------------------------------------------------------------
via: https://medium.com/@alexellisuk/getting-started-with-openfaas-on-minikube-634502c7acdf
作者:[Alex Ellis][a]
译者:[mandeler](https://github.com/mandeler)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@alexellisuk?source=post_header_lockup
[1]:https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver
[2]:https://www.virtualbox.org/wiki/Downloads
[3]:https://kubernetes.io/docs/tasks/tools/install-minikube/
[4]:https://kubernetes.io/docs/getting-started-guides/minikube/
[5]:https://kubernetes.io/
[6]:https://github.com/openfaas/faas/blob/master/guide/deployment_k8s.md
[7]:https://github.com/openfaas/faas/blob/master/TestDrive.md
[8]:https://github.com/openfaas/faas/tree/master/api-docs
[9]:https://github.com/openfaas/faas/tree/master/guide
[10]:https://github.com/openfaas/faas/tree/master/guide
[11]:https://twitter.com/openfaas

View File

@ -0,0 +1,84 @@
# [The One in Which I Call Out Hacker News][14]
> “Implementing caching would take thirty hours. Do you have thirty extra hours? No, you don't. I actually have no idea how long it would take. Maybe it would take five minutes. Do you have five minutes? No. Why? Because I'm lying. It would take much longer than five minutes. That's the eternal optimism of programmers.”
>
> — Professor [Owen Astrachan][1] during 23 Feb 2004 lecture for [CPS 108][2]
[Accusing open-source software of being a royal pain to use][5] is not a new argument; it's been said before, by those much more eloquent than I, and even by some who are highly sympathetic to the open-source movement. Why go over it again?
On Hacker News on Monday, I was amused to read some people saying that [writing StackOverflow was hilariously easy][6]—and proceeding to back up their claim by [promising to clone it over July 4th weekend][7]. Others chimed in, pointing to [existing][8] [clones][9] as a good starting point.
Let's assume, for sake of argument, that you decide it's okay to write your StackOverflow clone in ASP.NET MVC, and that I, after being hypnotized with a pocket watch and a small club to the head, have decided to hand you the StackOverflow source code, page by page, so you can retype it verbatim. We'll also assume you type like me, at a cool 100 WPM ([a smidge over eight characters per second][10]), and unlike me,  _you_  make zero mistakes. StackOverflow's *.cs, *.sql, *.css, *.js, and *.aspx files come to 2.3 MB. So merely typing the source code back into the computer will take you about eighty hours if you make zero mistakes.
Except, of course, you're not doing that; you're going to implement StackOverflow from scratch. So even assuming that it took you a mere ten times longer to design, type out, and debug your own implementation than it would take you to copy the real one, that already has you coding for several weeks straight—and I don't know about you, but I am okay admitting I write new code  _considerably_  less than one tenth as fast as I copy existing code.
_Well, okay_ , I hear you relent. *So not the whole thing. But I can do **most** of it.*
Okay, so what's “most”? There's simply asking and responding to questions—that part's easy. Well, except you have to implement voting questions and answers up and down, and the questioner should be able to accept a single answer for each question. And you can't let people upvote or accept their own answers, so you need to block that. And you need to make sure that users don't upvote or downvote another user too many times in a certain amount of time, to prevent spambots. Probably going to have to implement a spam filter, too, come to think of it, even in the basic design, and you also need to support user icons, and you're going to have to find a sanitizing HTML library you really trust and that interfaces well with Markdown (provided you do want to reuse [that awesome editor][11] StackOverflow has, of course). You'll also need to purchase, design, or find widgets for all the controls, plus you need at least a basic administration interface so that moderators can moderate, and you'll need to implement that scaling karma thing so that you give users steadily increasing power to do things as they go.
But if you do  _all that_ , you  _will_  be done.
Except…except, of course, for the full-text search, especially its appearance in the search-as-you-ask feature, which is kind of indispensable. And user bios, and having comments on answers, and having a main page that shows you important questions but that bubbles down steadily à la reddit. Plus you'll totally need to implement bounties, and support multiple OpenID logins per user, and send out email notifications for pertinent events, and add a tagging system, and allow administrators to configure badges by a nice GUI. And you'll need to show users' karma history, upvotes, and downvotes. And the whole thing has to scale really well, since it could be slashdotted/reddited/StackOverflown at any moment.
But  _then_ ! **Then** you're done!
…right after you implement upgrades, internationalization, karma caps, a CSS design that makes your site not look like ass, AJAX versions of most of the above, and G-d knows what else that's lurking just beneath the surface that you currently take for granted, but that will come to bite you when you start to do a real clone.
Tell me: which of those features do you feel you can cut and still have a compelling offering? Which ones go under “most” of the site, and which can you punt?
Developers think cloning a site like StackOverflow is easy for the same reason that open-source software remains such a horrible pain in the ass to use. When you put a developer in front of StackOverflow, they don't really  _see_ StackOverflow. What they actually  _see_  is this:
```
create table QUESTION (ID identity primary key,
TITLE varchar(255), --- why do I know you thought 255?
BODY text,
UPVOTES integer not null default 0,
DOWNVOTES integer not null default 0,
USER integer references USER(ID));
create table RESPONSE (ID identity primary key,
BODY text,
UPVOTES integer not null default 0,
DOWNVOTES integer not null default 0,
QUESTION integer references QUESTION(ID))
```
If you then tell a developer to replicate StackOverflow, what goes into his head are the above two SQL tables and enough HTML to display them without formatting, and that really  _is_  completely doable in a weekend. The smarter ones will realize that they need to implement login and logout, and comments, and that the votes need to be tied to a user, but that's still totally doable in a weekend; it's just a couple more tables in a SQL back-end, and the HTML to show their contents. Use a framework like Django, and you even get basic users and comments for free.
But that's  _not_  what StackOverflow is about. Regardless of what your feelings may be on StackOverflow in general, most visitors seem to agree that the user experience is smooth, from start to finish. They feel that they're interacting with a polished product. Even if I didn't know better, I would guess that very little of what actually makes StackOverflow a continuing success has to do with the database schema—and having had a chance to read through StackOverflow's source code, I know how little really does. There is a  _tremendous_  amount of spit and polish that goes into making a major website highly usable. A developer, asked how hard something will be to clone, simply  _does not think about the polish_ , because  _the polish is incidental to the implementation._
That is why an open-source clone of StackOverflow will fail. Even if someone were to manage to implement most of StackOverflow “to spec,” there are some key areas that would trip them up. Badges, for example, if you're targeting end-users, either need a GUI to configure rules, or smart developers to determine which badges are generic enough to go on all installs. What will actually happen is that the developers will bitch and moan about how you can't implement a really comprehensive GUI for something like badges, and then bikeshed any proposals for standard badges so far into the ground that they'll hit escape velocity coming out the other side. They'll ultimately come up with the same solution that bug trackers like Roundup use for their workflow: the developers implement a generic mechanism by which anyone, truly anyone at all, who feels totally comfortable working with the system API in Python or PHP or whatever, can easily add their own customizations. And when PHP and Python are so easy to learn and so much more flexible than a GUI could ever be, why bother with anything else?
Likewise, the moderation and administration interfaces can be punted. If you're an admin, you have access to the SQL server, so you can do anything really genuinely administrative-like that way. Moderators can get by with whatever django-admin and similar systems afford you, since, after all, few users are mods, and mods should understand how the sites  _work_ , dammit. And, certainly, none of StackOverflow's interface failings will be rectified. Even if StackOverflow's stupid requirement that you have to have and know how to use an OpenID (its worst failing) eventually gets fixed, I'm sure any open-source clones will rabidly follow it—just as GNOME and KDE for years slavishly copied off Windows, instead of trying to fix its most obvious flaws.
Developers may not care about these parts of the application, but end-users do, and take it into consideration when trying to decide what application to use. Much as a good software company wants to minimize its support costs by ensuring that its products are top-notch before shipping, so, too, savvy consumers want to ensure products are good before they purchase them so that they won't  _have_  to call support. Open-source products fail hard here. Proprietary solutions, as a rule, do better.
That's not to say that open-source doesn't have its place. This blog runs on Apache, [Django][12], [PostgreSQL][13], and Linux. But let me tell you, configuring that stack is  _not_  for the faint of heart. PostgreSQL needs vacuuming configured on older versions, and, as of recent versions of Ubuntu and FreeBSD, still requires the user set up the first database cluster. MS SQL requires neither of those things. Apache…dear heavens, don't even get me  _started_  on trying to explain to a novice user how to get virtual hosting, MovableType, a couple Django apps, and WordPress all running comfortably under a single install. Hell, just trying to explain the forking vs. threading variants of Apache to a technically astute non-developer can be a nightmare. IIS 7 and Apache with OS X Server's very much closed-source GUI manager make setting up those same stacks vastly simpler. Django's a great product, but it's nothing  _but_  infrastructure—exactly the thing that I happen to think open-source  _does_  do well,  _precisely_  because of the motivations that drive developers to contribute.
The next time you see an application you like, think very long and hard about all the user-oriented details that went into making it a pleasure to use, before decrying how you could trivially reimplement the entire damn thing in a weekend. Nine times out of ten, when you think an application was ridiculously easy to implement, you're completely missing the user side of the story.
--------------------------------------------------------------------------------
via: https://bitquabit.com/post/one-which-i-call-out-hacker-news/
作者:[Benjamin Pollack][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://bitquabit.com/meta/about/
[1]:http://www.cs.duke.edu/~ola/
[2]:http://www.cs.duke.edu/courses/cps108/spring04/
[3]:https://bitquabit.com/categories/programming
[4]:https://bitquabit.com/categories/technology
[5]:http://blog.bitquabit.com/2009/06/30/one-which-i-say-open-source-software-sucks/
[6]:http://news.ycombinator.com/item?id=678501
[7]:http://news.ycombinator.com/item?id=678704
[8]:http://code.google.com/p/cnprog/
[9]:http://code.google.com/p/soclone/
[10]:http://en.wikipedia.org/wiki/Words_per_minute
[11]:http://github.com/derobins/wmd/tree/master
[12]:http://www.djangoproject.com/
[13]:http://www.postgresql.org/
[14]:https://bitquabit.com/post/one-which-i-call-out-hacker-news/

View File

@ -1,4 +1,4 @@
Network automation with Ansible
Translating by qhwdw Network automation with Ansible
================
### Network Automation

View File

@ -1,75 +0,0 @@
translating---geekpi
INTRODUCING MOBY PROJECT: A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE CONTAINERIZATION MOVEMENT
============================================================
![Moby Project](https://i0.wp.com/blog.docker.com/wp-content/uploads/1-2.png?resize=763%2C275&ssl=1)
Since Docker democratized software containers four years ago, a whole ecosystem grew around containerization and in this compressed time period it has gone through two distinct phases of growth. In each of these two phases, the model for producing container systems evolved to adapt to the size and needs of the user community as well as the project and the growing contributor ecosystem.
The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. It provides a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas.
Let's review how we got where we are today. In 2013-2014 pioneers started to use containers and collaborate in a monolithic open source codebase, Docker and a few other projects, to help tools mature.
![Docker Open Source](https://i0.wp.com/blog.docker.com/wp-content/uploads/2-2.png?resize=975%2C548&ssl=1)
Then in 2015-2016, containers were massively adopted in production for cloud-native applications. In this phase, the user community grew to support tens of thousands of deployments that were backed by hundreds of ecosystem projects and thousands of contributors. It is during this phase, that Docker evolved its production model to an open component based approach. In this way, it allowed us to increase both the surface area of innovation and collaboration.
What sprung up were new independent Docker component projects that helped spur growth in the partner ecosystem and the user community. During that period, we extracted and rapidly innovated on components out of the Docker codebase so that systems makers could reuse them independently as they were building their own container systems: [runc][7], [HyperKit][8], [VPNKit][9], [SwarmKit][10], [InfraKit][11], [containerd][12], etc..
![Docker Open Components](https://i1.wp.com/blog.docker.com/wp-content/uploads/3-2.png?resize=975%2C548&ssl=1)
From our position at the forefront of the container wave, one trend we see emerging in 2017 is containers going mainstream, spreading to every category of computing: server, data center, cloud, desktop, Internet of Things, and mobile. They are spreading to every industry and vertical market (finance, healthcare, government, travel, manufacturing) and to every use case (modern web applications, traditional server applications, machine learning, industrial control systems, robotics). What many new entrants in the container ecosystem have in common is that they build specialized systems, targeted at a particular infrastructure, industry or use case.
As a company, Docker uses open source as our innovation lab, in collaboration with a whole ecosystem. Docker's success is tied to the success of the container ecosystem: if the ecosystem succeeds, we succeed. Hence we have been planning for the next phase of container ecosystem growth: what production model will help us scale the container ecosystem to fulfill the promise of making containers mainstream?
Last year, our customers started to ask for Docker on many platforms beyond Linux: Mac and Windows desktop, Windows Server, and cloud platforms like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform, so we created a [dozen Docker editions][13] specialized for these platforms. In order to build and ship these specialized editions in a relatively short time, with small teams, in a scalable way, and without having to reinvent the wheel, it was clear we needed a new approach. We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing [an idea from the car industry][14] where assemblies of components are reused to build completely different cars.
![Docker production model](https://i1.wp.com/blog.docker.com/wp-content/uploads/4-2.png?resize=975%2C548&ssl=1)
We think the best way to scale the container ecosystem to the next level and take containers mainstream is to collaborate on assemblies at the ecosystem level.
![Moby Project](https://i0.wp.com/blog.docker.com/wp-content/uploads/5-2.png?resize=975%2C548&ssl=1)
In order to enable this new level of collaboration, today we are announcing the Moby Project, a new open-source project to advance the software containerization movement. It provides a “Lego set” of dozens of components, a framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas. Think of Moby as the “Lego Club” of container systems.
Moby comprises:
1. A **library** of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
2. A **framework** for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
3. A reference assembly, called **Moby Origin**, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.
Moby is designed for system builders who want to build their own container-based systems, not for application developers, who can use Docker or other container platforms. Participants in the Moby project can choose from the library of components derived from Docker, or they can elect to “bring your own components” (BYOC) packaged as containers, with the option to mix and match among all of the components to create a customized container system.
Docker uses the Moby Project as an open R&D lab to experiment, develop new components, and collaborate with the ecosystem on the future of container technology. All our open source collaboration will move to the Moby project. Docker is, and will remain, an open source product that lets you build, ship and run containers. It is staying exactly the same from a user's perspective. Users can continue to download Docker from the docker.com website. See [more information about the respective roles of Docker and Moby on the Moby website][15].
Please join us in helping take software containers mainstream, and grow our ecosystem and our user community to the next level by collaborating on components and assemblies.
--------------------------------------------------------------------------------
via: https://blog.docker.com/2017/04/introducing-the-moby-project/
作者:[Solomon Hykes][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.docker.com/author/solomon/
[1]:https://blog.docker.com/author/solomon/
[2]:https://blog.docker.com/tag/containerization/
[3]:https://blog.docker.com/tag/moby-library/
[4]:https://blog.docker.com/tag/moby-origin/
[5]:https://blog.docker.com/tag/moby-project/
[6]:https://blog.docker.com/tag/open-source/
[7]:https://github.com/opencontainers/runc
[8]:https://github.com/docker/hyperkit
[9]:https://github.com/docker/vpnkit
[10]:https://github.com/docker/swarmkit
[11]:https://github.com/docker/infrakit
[12]:https://github.com/containerd/containerd
[13]:https://blog.docker.com/2017/03/docker-enterprise-edition/
[14]:https://en.wikipedia.org/wiki/List_of_Volkswagen_Group_platforms
[15]:https://mobyproject.org/#moby-and-docker

View File

@ -1,3 +1,4 @@
Translating by aiwhj
# How to Improve a Legacy Codebase

View File

@ -0,0 +1,84 @@
translating---geekpi
# Understanding Docker "Container Host" vs. "Container OS" for Linux and Windows Containers
Let's explore the relationship between the “Container Host” and the “Container OS” and how they differ between Linux and Windows containers.
#### Some Definitions:
* Container Host: Also called the Host OS. The Host OS is the operating system on which the Docker client and Docker daemon run. In the case of Linux and non-Hyper-V containers, the Host OS shares its kernel with running Docker containers (a quick demonstration follows this list). In the case of Hyper-V containers, each container runs on its own kernel inside a lightweight Hyper-V VM.
* Container OS: Also called the Base OS. The Base OS refers to an image that contains an operating system such as Ubuntu, CentOS, or windowsservercore. Typically, you would build your own image on top of a Base OS image so that you can utilize parts of the OS. Note that Windows containers require a Base OS, while Linux containers do not.
* Operating System Kernel: The kernel manages lower-level functions such as memory management, the file system, networking, and process scheduling.
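One easy way to see the shared-kernel point in practice is to compare kernel versions on the host and inside a container. A minimal sketch, assuming a Linux host with Docker installed (the busybox image is just a convenient example):

```
# Kernel version reported by the host
uname -r

# Kernel version reported inside a container: the same string,
# because the container shares the host's kernel
docker run --rm busybox uname -r
```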
#### Now for some pictures:
![Linux Containers](http://floydhilton.com/images/2017/03/2017-03-31_14_50_13-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
In the above example
* The Host OS is Ubuntu.
* The Docker Client and the Docker Daemon (together called the Docker Engine) are running on the Host OS.
* Each container shares the Host OS kernel.
* CentOS and BusyBox are Linux Base OS images.
* The “No OS” container demonstrates that you do not NEED a Base OS to run a container in Linux. You can create a Dockerfile that has a base image of [scratch][1] and then runs a binary that uses the kernel directly (see the sketch after this list).
* Check out [this][2] article for a comparison of Base OS sizes.
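As a rough illustration of the “No OS” case, here is a minimal sketch (the Go program and file names are hypothetical; any statically linked binary would do):

```
# Build a statically linked binary (assumes a Go toolchain and a
# hypothetical hello.go that prints a message)
CGO_ENABLED=0 go build -o hello hello.go

# A Dockerfile based on the empty "scratch" image
cat > Dockerfile <<'EOF'
FROM scratch
COPY hello /hello
ENTRYPOINT ["/hello"]
EOF

docker build -t hello-scratch .
docker run --rm hello-scratch
```

The resulting image contains nothing but the single binary, which talks to the shared host kernel directly.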
![Windows Containers - Non Hyper-V](http://floydhilton.com/images/2017/03/2017-03-31_15_04_03-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
In the above example
* The Host OS is Windows 10 or Windows Server.
* Each container shares the Host OS kernel.
* All Windows containers require a Base OS of either [nanoserver][3] or [windowsservercore][4].
![Windows Containers - Hyper-V](http://floydhilton.com/images/2017/03/2017-03-31_15_41_31-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
In the above example
* The Host OS is Windows 10 or Windows Server.
* Each container is hosted in its own lightweight Hyper-V VM (a sketch of selecting this isolation mode follows this list).
* Each container uses the kernel inside the Hyper-V VM which provides an extra layer of separation between containers.
* All Windows containers require a Base OS of either [nanoserver][5] or [windowsservercore][6].
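On Windows hosts that support both modes, the isolation level can be chosen per container with the `--isolation` flag. A sketch, assuming Docker on Windows Server (the image tag is illustrative):

```
# Run in its own lightweight Hyper-V VM (own kernel)
docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c ver

# Run as a process-isolated container (shared host kernel)
docker run --rm --isolation=process microsoft/nanoserver cmd /c ver
```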
A couple of good links:
* [About windows containers][7]
* [Deep dive into the implementation Windows Containers including multiple User Modes and “copy-on-write” to save resources][8]
* [How Linux containers save resources by using “copy-on-write”][9]
--------------------------------------------------------------------------------
via: http://floydhilton.com/docker/2017/03/31/Docker-ContainerHost-vs-ContainerOS-Linux-Windows.html
作者:[Floyd Hilton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://floydhilton.com/about/
[1]:https://hub.docker.com/_/scratch/
[2]:https://www.brianchristner.io/docker-image-base-os-size-comparison/
[3]:https://hub.docker.com/r/microsoft/nanoserver/
[4]:https://hub.docker.com/r/microsoft/windowsservercore/
[5]:https://hub.docker.com/r/microsoft/nanoserver/
[6]:https://hub.docker.com/r/microsoft/windowsservercore/
[7]:https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/
[8]:http://blog.xebia.com/deep-dive-into-windows-server-containers-and-docker-part-2-underlying-implementation-of-windows-server-containers/
[9]:https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#the-copy-on-write-strategy

View File

@ -1,38 +0,0 @@
translating---geekpi
# The Life-Changing Magic of Tidying Up Code
I have been tidying up Facebook code this week and loving it. I've tidied up for thousands of hours over my career, and I have some rules for making this style of cleanup safe, fun, and efficient.

Tidying up works through a series of small, safe steps. In fact, Rule #1 is If it's hard, don't do it. I used to do crossword puzzles at night. If I got stuck and went to sleep, the next night those same impossible clues were often easy. Instead of stressing about the big effects I want to create, I am better off just stopping when I encounter resistance.

Tidying up is concave in the sense that you have a lot more to lose by a mistake than you have to win by any individual success (more on that later). Rule #2 is Start when you're fresh and stop when you're tired. Get up and walk around. If I don't come back refreshed, I'm done for the day.

Tidying up can happen in parallel with development, but only if you carefully track other changes (I messed this up with my latest diff). Rule #3 is Land each session's work immediately. Unlike feature development, where it sometimes makes sense to land only when a chunk of work is done, tidying up is time based.

Tidying up requires little effort for any step, so I am willing to discard any step at the first sign of trouble. For example, Rule #4 is Two reds is a revert. If I tidy, run the tests, and encounter a failed test, then if I can fix it immediately I do. If I try to fix it and fail, I immediately revert to the last known good state.

Tidying up works even without a vision of the shiny new design. However, sometimes I want to see how things might play out, so Rule #5 is Practice. Perform a sequence of tidyings and revert. The second time will go much faster and you'll be more familiar with which bumpy spots to avoid.

Tidying up works only if the risk of collateral damage is low and the cost of reviewing tidying changes is also low. Rule #6 is Isolate tidying. This can be tough when you run across the chance to tidy in the midst of writing new code. Either finish and then tidy or revert, tidy, and make your changes.

Try it. Move the declaration of a temp adjacent to its first use. Simplify a boolean expression (“return expression == True” anyone?). Extract a helper. Reduce the scope of logic or state to where it is actually used.
### The Rules
1. If its hard, dont do it
2. Start when youre fresh and stop when youre tired
3. Land each sessions work immediately
4. Two reds is a revert
5. Practice
6. Isolate tidying
### Coda
I've made architectural changes strictly by tidying. I've extracted frameworks strictly by tidying. You can make big changes safely in this style. I think this is because, while the cost of each tidying is constant, the payoff is power-law distributed. I need both data and a model to explain this hypothesis.
--------------------------------------------------------------------------------
via: https://www.facebook.com/notes/kent-beck/the-life-changing-magic-of-tidying-up-code/1544047022294823/
作者:[KENT BECK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.facebook.com/kentlbeck
[1]:https://www.facebook.com/notes/kent-beck/the-life-changing-magic-of-tidying-up-code/1544047022294823/?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#
[2]:https://www.facebook.com/kentlbeck
[3]:https://www.facebook.com/notes/kent-beck/the-life-changing-magic-of-tidying-up-code/1544047022294823/

View File

@ -1,34 +0,0 @@
Wildcard Certificates Coming January 2018
============================================================
Let's Encrypt will begin issuing wildcard certificates in January of 2018. Wildcard certificates are a commonly requested feature and we understand that there are some use cases where they make HTTPS deployment easier. Our hope is that offering wildcards will help to accelerate the Web's progress towards 100% HTTPS.
Let's Encrypt is currently securing 47 million domains via our fully automated DV certificate issuance and management API. This has contributed heavily to the Web going from 40% to 58% encrypted page loads since Let's Encrypt's service became available in December 2015. If you're excited about wildcard availability and our mission to get to a 100% encrypted Web, we ask that you contribute to our [summer fundraising campaign][1].
A wildcard certificate can secure any number of subdomains of a base domain (e.g. *.example.com). This allows administrators to use a single certificate and key pair for a domain and all of its subdomains, which can make HTTPS deployment significantly easier.
Wildcard certificates will be offered free of charge via our [upcoming ACME v2 API endpoint][2]. We will initially only support base domain validation via DNS for wildcard certificates, but may explore additional validation options over time. We encourage people to ask any questions they might have about wildcard certificate support on our [community forums][3].
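Once the ACME v2 endpoint is live, requesting a wildcard certificate from a client such as certbot might look roughly like this. Treat it as a sketch: the exact flags depend on the client gaining v2 and wildcard support, and the v2 directory URL below is an assumption until the endpoint ships:

```
# Prove control of the base domain via a DNS challenge and request
# a certificate covering the base domain and all of its subdomains
certbot certonly --manual --preferred-challenges dns \
    --server https://acme-v02.api.letsencrypt.org/directory \
    -d example.com -d '*.example.com'
```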
We decided to announce this exciting development during our summer fundraising campaign because we are a nonprofit that exists thanks to the generous support of the community that uses our services. If you'd like to support a more secure and privacy-respecting Web, [donate today][4]!
We'd like to thank our [community][5] and our [sponsors][6] for making everything we've done possible. If your company or organization is able to sponsor Let's Encrypt, please email us at [sponsor@letsencrypt.org][7].
--------------------------------------------------------------------------------
via: https://letsencrypt.org/2017/07/06/wildcard-certificates-coming-jan-2018.html
作者:[Josh Aas][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://letsencrypt.org/2017/07/06/wildcard-certificates-coming-jan-2018.html
[1]:https://letsencrypt.org/donate/
[2]:https://letsencrypt.org/2017/06/14/acme-v2-api.html
[3]:https://community.letsencrypt.org/
[4]:https://letsencrypt.org/donate/
[5]:https://letsencrypt.org/getinvolved/
[6]:https://letsencrypt.org/sponsors/
[7]:mailto:sponsor@letsencrypt.org

View File

@ -0,0 +1,163 @@
Lessons from my first year of live coding on Twitch
============================================================
I gave streaming a go for the first time last July. Instead of gaming, which the majority of streamers on Twitch do, I wanted to stream the open source work I do in my personal time. I work on NodeJS hardware libraries a fair bit (most of them my own). Given that I was already in a niche on Twitch, why not be in an even smaller niche, like JavaScript-powered hardware? ;) I signed up for [my own channel][1], and have been streaming regularly since.
Of course I'm not the first to do this. [Handmade Hero][2] was one of the first programmers I watched code online, quickly followed by the developers at Vlambeer who [developed Nuclear Throne live on Twitch][3]. I was fascinated by Vlambeer especially.
What tipped me over the edge of  _wishing_  I could do it to  _actually doing it_  is credited to [Nolan Lawson][4], a friend of mine. I watched him [streaming his open source work one weekend][5], and it was awesome. He explained everything he was doing along the way. Everything. Replying to issues on GitHub, triaging bugs, debugging code in branches, you name it. I found it fascinating, as Nolan maintains open source libraries that get a lot of use and activity. His open source life is very different to mine.
You can even see this comment I left under his video:
![](https://cdn-images-1.medium.com/max/1600/0*tm8xC8CJV9ZimCCI.png)
I gave it a go myself a week or so later, after setting up my Twitch channel and bumbling my way through using OBS. I believe I worked on [Avrgirl-Arduino][6], which I still frequently work on while streaming. It was a rough first stream. I was very nervous, and I had stayed up late rehearsing everything I was going to do the night before.
The tiny number of viewers I got that Saturday were really encouraging though, so I kept at it. These days I have more than a thousand followers, and a lovely subset of them are regular visitors who I call “the noopkat fam”.
We have a lot of fun, and I like to call the live coding parts “massively multiplayer online pair programming”. I am truly touched by the kindness and wit of everyone joining me each weekend. One of the funniest moments I have had was when one of the fam pointed out that my Arduino board was not working with my software because the microchip was missing from the board.
I have logged off a stream many a time, only to find in my inbox that someone has sent a pull request for some work that I had mentioned I didn't have the time to start on. I can honestly say that my open source work has been changed for the better, thanks to the generosity and encouragement of my Twitch community.
I have so much more to say about the benefits that streaming on Twitch has brought me, but that's probably for another blog post. Instead, I want to share the lessons I have learned for anyone else who would like to try live coding in this way for themselves. Recently I've been asked by a few developers how they can get started, so I'm publishing the same advice I have given them!
Firstly, I'm linking you to a guide called [“Streaming and Finding Success on Twitch”][7] which helped me a lot. It's focused towards Twitch and gaming streams specifically, but there are still relevant sections and great advice in there. I'd recommend reading this first before considering any other details about starting your channel (like equipment or software choices).
My own advice is below, which I have acquired from my own mistakes and the sage wisdom of fellow streamers (you know who you are!).
### Software
There's a lot of free streaming software out there. I use [Open Broadcaster Software (OBS)][8]. It's available on most platforms. I found it really intuitive to get up and going, but others sometimes take a while to learn how it works. Your mileage may vary! Here is a screen-grab of what my OBS desktop scene setup looks like as of today (click for larger image):
![](https://cdn-images-1.medium.com/max/1600/0*s4wyeYuaiThV52q5.png)
You essentially switch between scenes while streaming. A scene is a collection of sources, layered and composited with each other. A source can be things like a camera, microphone, your desktop, a webpage, live text, images, the list goes on. OBS is very powerful.
This desktop scene above is where I do all of my live coding, and I mostly live here for the duration of the stream. I use iTerm and vim, and also have a browser window handy to switch to in order to look up documentation and triage things on GitHub, etc.
The bottom black rectangle is my webcam, so folks can see me work and have a more personal connection. 
I have a handful of labels for my scenes, many of which are to do with the stats and info in the top banner. The banner just adds personality, and is a nice persistent source of info while streaming. It's an image I made in [GIMP][9], and you import it as a source in your scene. Some labels are live stats that pull from text files (such as the most recent follower). Another label is a [custom one I made][10] which shows the live temperature and humidity of the room I stream from.
I also have alerts set up in my scenes, which show cute banners over the top of my stream whenever someone follows or donates money. I use the web service [Stream Labs][11] to do this, importing it as a browser webpage source into the scene. Stream Labs also creates my recent-followers live text file to show in my banner.
I also have a standby screen that I use when I'm about to be live:
![](https://cdn-images-1.medium.com/max/1600/0*cbkVjKpyWaWZLSfS.png)
I additionally need a scene for when I'm entering secret tokens or API keys. It shows me on the webcam but hides my desktop with an entertaining webpage, so I can work in privacy:
![](https://cdn-images-1.medium.com/max/1600/0*gbhowQ37jr3ouKhL.png)
As you can see, I don't take stuff too seriously when streaming, but I like to have a good setup for my viewers to get the most out of my stream.
But now for an actual secret: I use OBS to crop out the bottom and right edges of my screen, while keeping the same video size ratio as what Twitch expects. That leaves me with space to watch my events (follows, etc) on the bottom, and look at and respond to my channel chat box on the right. Twitch allows you to pop out the chatbox in a new window which is really helpful.
This is what my full desktop _really_ looks like:
![](https://cdn-images-1.medium.com/max/1600/0*sENLkp3Plh7ZTjJt.png)
I started doing this a few months ago and haven't looked back. I'm not even sure my viewers realise this is how my setup works. I think they take for granted that I can see everything, even though I cannot see what is actually being streamed live when I'm busy programming!
You might be wondering why I only use one monitor. It's because two monitors was just too much to manage on top of everything else I was doing while streaming. I figured this out quickly and have stuck with one screen since.
### Hardware
I used cheaper stuff to start out, and slowly bought nicer stuff as I realised that streaming was going to be something I stuck with. Use whatever you have when getting started, even if it's your laptop's built-in microphone and camera.
Nowadays I use a Logitech Pro C920 webcam, and a Blue Yeti microphone on a microphone arm with a mic shock. Totally worth the money in the end if you have it to spend. It made a difference to the quality of my streams.
I use a large monitor (27"), because as I mentioned earlier, using two monitors just didn't work for me. I was missing things in the chat because I was not looking over to the second laptop screen enough, and so on. Your mileage may vary here, but having everything on one screen was key for me to pay attention to everything happening.
That's pretty much it on the hardware side; I don't have a very complicated setup.
If you were interested, my desk looks pretty normal except for the obnoxious looming microphone:
![](https://cdn-images-1.medium.com/max/1600/0*EyRimlrHNEKeFmS4.jpg)
### Tips
This last section has some general tips I've picked up that have made my stream better and more enjoyable overall.
#### Panels
Spend some time on creating great panels. Panels are the little content boxes on the bottom of everyone's channel page. I see them as the new MySpace profile boxes (lol, but really). Panel ideas could be things like chat rules, information about when you stream, what computer and equipment you use, or your favourite cat breed; anything that creates a personal touch. Look at other channels (especially popular ones) for ideas!
An example of one of my panels:
![](https://cdn-images-1.medium.com/max/1600/0*HlLs6xlnJtPwN4D6.png)
#### Chat
Chat is really important. You're going to get the same questions over and over as people join your stream halfway through, so having chat macros can really help. “What are you working on?” is the most common question asked while I'm coding. I have chat shortcut commands for that, which I made with [Nightbot][12]. It will post an explanation I have entered ahead of time whenever someone types a small one-word command like _!whatamidoing_.
When folks ask questions or leave nice comments, talk back to them! Say thanks, say their Twitch handle, and they'll really appreciate the attention and acknowledgement. This is SUPER hard to stay on top of when you first start streaming, but multitasking will come easier as you do more. Try to take a few seconds every couple of minutes to look at the chat for new messages.
When programming, _explain what you're doing_. Talk a lot. Make jokes. Even when I'm stuck, I'll say, “oh, crap, I forget how to use this method, lemme Google it hahaha” and folks are always nice and sometimes they'll even read along with you and help you out. It's fun and engaging, and keeps folks watching.
I lose interest quickly when I'm watching programming streams where the streamer is sitting in silence typing code, ignoring the chat and their new follower alerts.
It's highly likely that 99% of folks who find their way to your channel will be friendly and curious. I get the occasional troll, but the moderation tools offered by Twitch and Nightbot really help to discourage this.
#### Prep time
Automate your setup as much as possible. My terminal is iTerm, and it lets you save window arrangements and font sizes so you can restore them later. I have one window arrangement for streaming and one for non-streaming. It's a massive time saver. I hit one command and everything is the perfect size and in the right position, ready to go.
There are other apps out there that automate all of your app window placements, have a look to see if any of them would also help.
Make your font size really large in your terminal and code editor so everyone can see.
#### Regularity
Be regular with your schedule. I only stream once a week, but always at the same time. Let folks know if you're not able to stream at a time you normally would. This has netted me a regular audience. Some folks love routine, and it's exactly like catching up with a friend. You're in a social circle with your community, so treat it that way.
I want to stream more often, but I know I can't commit to more than once a week because of travel. I am trying to come up with a way to stream in high quality when on the road, or perhaps just have casual chats and save programming for my regular Sunday stream. I'm still trying to figure this out!
#### Awkwardness
It's going to feel weird when you get started. You're going to feel nervous about folks watching you code. That's normal! I felt that really strongly at the beginning, even though I have public speaking experience. I felt like there was nowhere for me to hide, and it scared me. I thought, “everyone is going to think my code is bad, and that I'm a bad developer”. This is a thought pattern that has plagued me my _entire career_ though; it's nothing new. I knew that with this, I couldn't quietly refactor code before pushing to GitHub, which is generally much safer for my reputation as a developer.
I learned a lot about my programming style by live coding on Twitch. I learned that I'm definitely the “make it work, then make it readable, then make it fast” type. I don't rehearse the night before anymore (I gave that up after 3 or 4 streams right at the beginning), so I write pretty rough code on Twitch and have to be okay with that. I write my best code when alone with my thoughts and not watching a chat box and talking aloud, and that's okay. I forget method signatures that I've used a thousand times, and make silly mistakes in almost every single stream. For most, it's not a productive environment for being at your best.
My Twitch community never judges me for this, and they help me out a lot. They understand I'm multitasking, and are really great about pragmatic advice and suggestions. Sometimes they bail me out, and other times I have to explain to them why their suggestion won't work. It's really just like regular pair programming!
I think the warts-and-all approach to this medium is a strength, not a weakness. It makes you more relatable, and it's important to show that there's no such thing as the perfect programmer, or the perfect code. It's probably quite refreshing for new coders to see, and humbling for myself as a more experienced coder.
### Conclusion
If you've been wanting to get into live coding on Twitch, I encourage you to give it a try! I hope this post helped you if you have been wondering where to start.
If you'd like to join me on Sundays, you can [follow my channel on Twitch][13] :)
On my last note, I'd like to personally thank [Mattias Johansson][14] for his wisdom and encouragement early on in my streaming journey. He was incredibly generous, and his [FunFunFunction YouTube channel][15] is a continuous source of inspiration.
Update: a bunch of folks have been asking about my keyboard and other parts of my workstation. [Here is the complete list of what I use][16]. Thanks for the interest!
--------------------------------------------------------------------------------
via: https://medium.freecodecamp.org/lessons-from-my-first-year-of-live-coding-on-twitch-41a32e2f41c1
作者:[Suz Hinton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@suzhinton
[1]:https://www.twitch.tv/noopkat
[2]:https://www.twitch.tv/handmade_hero
[3]:http://nuclearthrone.com/twitch/
[4]:https://twitter.com/nolanlawson
[5]:https://www.youtube.com/watch?v=9FBvpKllTQQ
[6]:https://github.com/noopkat/avrgirl-arduino
[7]:https://www.reddit.com/r/Twitch/comments/4eyva6/a_guide_to_streaming_and_finding_success_on_twitch/
[8]:https://obsproject.com/
[9]:https://www.gimp.org/
[10]:https://github.com/noopkat/study-temp
[11]:https://streamlabs.com/
[12]:https://beta.nightbot.tv/
[13]:https://www.twitch.tv/noopkat
[14]:https://twitter.com/mpjme
[15]:https://www.youtube.com/channel/UCO1cgjhGzsSYb1rsB4bFe4Q
[16]:https://gist.github.com/noopkat/5de56cb2c5917175c5af3831a274a2c8

View File

@ -1,173 +0,0 @@
Translating by qhwdw Guide to Linux App Is a Handy Tool for Every Level of Linux User
============================================================
![Guide to Linux](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/guide-to-linux.png?itok=AAcrxjjc "Guide to Linux")
The Guide to Linux app is not perfect, but it's a great tool to help you learn your way around Linux commands.[Used with permission][7]Essence Infotech LLP
Remember when you first started out with Linux? Depending on the environment you're coming from, the learning curve can be somewhat challenging. Take, for instance, the number of commands found in _/usr/bin_ alone. On my current Elementary OS system, that number is 1,944. Of course, not all of those are actual commands (or commands I would use), but the number is significant.
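If you want the equivalent number on your own system, a quick (and equally rough) way to count the entries is:

```
# Count the entries in /usr/bin; not everything here is a command
# you would actually use, but it gives a sense of scale
ls /usr/bin | wc -l
```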
Because of that (and many other differences from other platforms), new users (and some already skilled users) need a bit of help now and then.
For every administrator, there are certain skills that are must-have:
* Understanding of the platform
* Understanding commands
* Shell scripting
When you seek out assistance, sometimes you'll be met with RTFM (Read the Fine/Freaking/Funky Manual). That doesn't always help when you have no idea what you're looking for. That's when you'll be glad for apps like [Guide to Linux][15].
Unlike most of the content you'll find here on Linux.com, this particular article is about an Android app. Why? Because this particular app happens to be geared toward helping users learn Linux.
And it does a fine job.
I'm going to give you fair warning about this app—it's not perfect. Guide to Linux is filled with broken English, bad punctuation, and (if you're a purist) it never mentions GNU. On top of that, one particular feature (one that would normally be very helpful to users) doesn't function well enough to be useful. Outside of that, Guide to Linux might well be one of your best bets for having a mobile “pocket guide” to the Linux platform.
With this app, you'll enjoy:
* Offline usage.
* Linux Tutorial.
* Details of all basic and advanced Linux commands.
* Includes command examples and syntax.
* Dedicated Shell Script module.
On top of that, Guide to Linux is free (although it does contain ads). If you want to get rid of the ads, there's an in-app purchase ($2.99 USD/year) to take care of that.
Let's install this app and then take a look at the constituent parts.
### Installation
Like all Android apps, installation of Guide to Linux is incredibly simple. All you have to do is follow these easy steps:
1. Open up the Google Play Store on your Android device
2. Search for Guide to Linux
3. Locate and tap the entry by Essence Infotech
4. Tap Install
5. Allow the installation to complete
### [guidetolinux1.jpg][8]
![Guide to Linux main window](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux1.jpg?itok=UJhPP80J "Guide to Linux main window")
Figure 1: The Guide to Linux main window.[Used with permission][1]
Once installed, youll find the launcher for Guide to Linux in either your App Drawer or on your home screen (or both). Tap the icon to launch the app.
### Usage
Let's take a look at the individual features that make up Guide to Linux. You will probably find some features more helpful than others, and your experience will vary. Before we break it down, I'll make mention of the interface. The developer has done a great job of creating an easy-to-use interface for the app.
From the main window (Figure 1), you can gain easy access to the four features.
Tap any one of the four icons to launch a feature and you're ready to learn.
### [guidetolinux2.jpg][9]
![The Tutorial](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux2.jpg?itok=LiJ8pHdS "The Tutorial")
Figure 2: The tutorial begins at the beginning.[Used with permission][2]
### Tutorial
Let's start out with the most newbie-friendly feature of the app—Tutorial. Open up that feature and you'll be greeted by the first section of the tutorial, “Introduction to the Linux Operating System” (Figure 2).
If you tap the “hamburger menu” (three horizontal lines in the top left corner), the Table of Contents is revealed (Figure 3), so you can select any of the available sections within the Tutorial.
### [guidetolinux3.jpg][10]
![Tutorial TOC](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux3_0.jpg?itok=5nJNeYN- "Tutorial TOC")
Figure 3: The Tutorial Table of Contents.[Used with permission][3]
In case you haven't figured it out by now, the Tutorial section of Guide to Linux is a collection of short essays on each topic. The essays include pictures and (in some cases) links that will send you to specific web sites (as needed to suit a topic). There is no interaction here, just reading. However, this is a great place to start, as the developer has done a solid job of describing the various sections (grammar notwithstanding).
Although you will see a search option at the top of the window, I haven't found that feature to be even remotely effective—but it's there for you to try.
For new Linux users looking to add Linux administration to their toolkit, this entire Tutorial is worth reading through. Once you've done that, move on to the next section.
### Commands
The Commands feature is like having the man pages, in hand, for many of the most frequently used Linux commands. When you first open this, you will be greeted by an introduction that explains the advantage of using commands.
### [guidetolinux4.jpg][11]
![Commands](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux4.jpg?itok=Rmzfb8Or "Commands")
Figure 4: The Commands sidebar allows you to check out any of the listed commands.[Used with permission][4]
Once you've read through that, you can either tap the right-facing arrow (at the bottom of the screen) or tap the “hamburger menu” and then select the command you want to learn about from the sidebar (Figure 4).
Tap on one of the commands and you can then read through the explanation of the command in question. Each page explains the command and its options as well as offers up examples of how to use the command.
### Shell Script
At this point, you're starting to understand Linux and you have a solid grasp on commands. Now it's time to start understanding shell scripts. This section is set up in the same fashion as the Tutorial and Commands sections.
You can open up the sidebar Table of Contents and then open any of the sections that comprise the Shell Script tutorial (Figure 5).
### [guidetolinux-5-new.jpg][12]
![Shell Script](https://www.linux.com/sites/lcom/files/styles/floated_images/public/guidetolinux-5-new.jpg?itok=EDlZ92IA "Shell Script")
Figure 5: The Shell Script section should look familiar by now.[Used with permission][5]
Once again, the developer has done a great job of explaining how to get the most out of shell scripting. For anyone interested in learning the ins and outs of shell scripting, this is a pretty good place to start.
### Terminal
Now we get to the section where your mileage may vary. The developer has included a terminal emulator with the app. Unfortunately, when installing this on an unrooted Android device, you'll find yourself locked into a read-only file system, where most of the commands simply won't work. However, I did install Guide to Linux on a Pixel 2 (via the Android app store) and was able to get a bit more usage from the feature (if only slightly). On a OnePlus 3 (not rooted), no matter what directory I change into, I get the same “permission denied” error, even for a simple ls command.
On the Chromebook, however, all is well (Figure 6). Sort of. We're still working with a read-only file system (so you cannot actually work with or create new files).
### [guidetolinux6.jpg][13]
![Permission denied](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/guidetolinux6_0.jpg?itok=cVENH5lM "Permission denied")
Figure 6: Finally able to (sort of) work with the terminal emulator.[Used with permission][6]
Remember, this isn't actually a full-blown terminal, but a way for new users to understand how the terminal works. Unfortunately, most users are going to find themselves frustrated with this feature of the tool, simply because they cannot put to use what they've learned within the other sections. It might behoove the developer to re-tool the terminal feature as a sandboxed Linux file system, so users could actually learn with it. Every time a user opened that tool, it could revert to its original state. Just a thought.
### In the end…
Even with the terminal feature being a bit hamstrung by the read-only filesystem (almost to the point of being useless), Guide to Linux is a great tool for users new to Linux. With this app, you'll learn enough about Linux, commands, and shell scripting to feel like you have a head start, even before you install that first distribution.
_Learn more about Linux through the free [“Introduction to Linux”][16] course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/8/guide-linux-app-handy-tool-every-level-linux-user
作者:[JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/used-permission
[7]:https://www.linux.com/licenses/category/used-permission
[8]:https://www.linux.com/files/images/guidetolinux1jpg
[9]:https://www.linux.com/files/images/guidetolinux2jpg
[10]:https://www.linux.com/files/images/guidetolinux3jpg-0
[11]:https://www.linux.com/files/images/guidetolinux4jpg
[12]:https://www.linux.com/files/images/guidetolinux-5-newjpg
[13]:https://www.linux.com/files/images/guidetolinux6jpg-0
[14]:https://www.linux.com/files/images/guide-linuxpng
[15]:https://play.google.com/store/apps/details?id=com.essence.linuxcommands
[16]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[17]:https://www.addtoany.com/share#url=https%3A%2F%2Fwww.linux.com%2Flearn%2Fintro-to-linux%2F2017%2F8%2Fguide-linux-app-handy-tool-every-level-linux-user&title=Guide%20to%20Linux%20App%20Is%20a%20Handy%20Tool%20for%20Every%20Level%20of%20Linux%20User

View File

@ -1,3 +1,5 @@
translating by flowsnow
High Dynamic Range (HDR) Imaging using OpenCV (C++/Python)
============================================================
@ -396,8 +398,8 @@ via: http://www.learnopencv.com/high-dynamic-range-hdr-imaging-using-opencv-cpp-
[4]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr.zip
[5]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr.zip
[6]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
[7]:https://itunes.apple.com/us/app/autobracket-hdr/id923626339?mt=8&ign-mpt=uo%3D8
[8]:https://play.google.com/store/apps/details?id=com.almalence.opencam&hl=en
[7]:https://itunes.apple.com/us/app/autobracket-hdr/id923626339?mt=8&amp;ign-mpt=uo%3D8
[8]:https://play.google.com/store/apps/details?id=com.almalence.opencam&amp;hl=en
[9]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-image-sequence.jpg
[10]:https://www.howtogeek.com/289712/how-to-see-an-images-exif-data-in-windows-and-macos
[11]:https://www.sno.phy.queensu.ca/~phil/exiftool
@ -410,9 +412,9 @@ via: http://www.learnopencv.com/high-dynamic-range-hdr-imaging-using-opencv-cpp-
[18]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Drago.jpg
[19]:https://people.csail.mit.edu/fredo/PUBLI/Siggraph2002/DurandBilateral.pdf
[20]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Durand.jpg
[21]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.8100&rep=rep1&type=pdf
[21]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.8100&amp;rep=rep1&amp;type=pdf
[22]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Reinhard.jpg
[23]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.4077&rep=rep1&type=pdf
[23]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.4077&amp;rep=rep1&amp;type=pdf
[24]:http://www.learnopencv.com/wp-content/uploads/2017/10/hdr-Mantiuk.jpg
[25]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/
[26]:https://bigvisionllc.leadpages.net/leadbox/143948b73f72a2%3A173c9390c346dc/5649050225344512/

View File

@ -1,104 +0,0 @@
jrglinux is translating!!!
But I don't know what a container is
============================================================
### Here's how containers are both very much like — and very much unlike — virtual machines.
![But I don't know what a container is](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/container-ship.png?itok=pqZYgQ7K "But I don't know what a container is")
Image by : opensource.com
I've been speaking about security in DevOps—also known as "DevSecOps"[*][9]—at a few conferences and seminars recently, and I've started to preface the discussion with a quick question: "Who here understands what a container is?" Usually I don't see many hands going up,[**][10]  so I've started briefly explaining what containers[***][11] are before going much further.
To be clear: You  _can_  do DevOps without containers, and you  _can_  do DevSecOps without containers. But containers lend themselves so well to the DevOps approach—and to DevSecOps, it turns out—that even though it's possible to do DevOps without them, I'm going to assume that most people will use containers.
### What is a container?
Linux Containers
* [What are Linux containers?][1]
* [What is Docker?][2]
* [What is Kubernetes?][3]
* [An introduction to container terminology][4]
I was in a meeting with colleagues a few months ago, and one of them was presenting on containers. Not everybody around the table was an expert on the technology, so he started simply. He said something like, "There's no mention of containers in the Linux kernel source code." This, it turned out, was a dangerous statement to make in this particular group, and within a few seconds, both my boss (sitting next to me) and I were downloading the recent kernel source tarballs and performing a count of the exact number of times that the word "container" occurred. It turned out that his statement wasn't entirely correct. To give you an idea, I just tried it on an old version (4.9.2) I have on a laptop—it turns out 15,273 lines in that version include the word "container."[****][16] My boss and I had a bit of a smirk and ensured we corrected him at the next break.
What my colleague meant to say—and clarified later—is that the concept of a container doesn't really exist as a clear element within the Linux kernel. In other words, containers use a number of abstractions, components, tools, and mechanisms from the Linux kernel, but there's nothing very special about these; they can also be used for other purposes. So, there's "no such thing as a container, according to the Linux kernel."
What, then, is a container? Well, I come from a virtualization—hypervisor and virtual machine (VM)—background, and, in my mind, containers are both very much like and very much unlike VMs. I realize that this may not sound very helpful, but let me explain.
### How is a container like a VM?
The main way in which a container is like a VM is that it's a unit of execution. You bundle something up—an image—which you can then run on a suitably equipped host platform. Like a VM, it's a workload on a host, and like a VM, it runs at the mercy of that host. Beyond providing workloads with the resources they need to do their job (CPU cycles, networking, storage access, etc.), the host has a couple of jobs that it needs to do:
1. Protect workloads from each other, and make sure that a malicious, compromised, or poorly written workload cannot affect the operation of any others.
2. Protect itself (the host) from workloads, and make sure that a malicious, compromised, or poorly written workload cannot affect the operation of the host.
The ways VMs and containers achieve this isolation are fundamentally different: VMs are isolated by hypervisors making use of hardware capabilities, while containers are isolated via software controls provided by the Linux kernel.[******][12] These controls revolve around various "namespaces" that ensure one container can't see other containers' files, users, network connections, etc.—nor those of the host. These can be supplemented by tools such as SELinux, which provide capabilities controls for further isolation of containers.
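You can poke at one of these kernel namespaces yourself, with no container runtime involved, using the `unshare` tool from util-linux. A minimal sketch (flag support varies by version, so consider this an approximation):

```
# Start a shell in new PID and mount namespaces; --mount-proc
# remounts /proc so the isolation is visible to ps
sudo unshare --pid --fork --mount-proc bash

# Inside the new namespace, the shell believes it is PID 1 and
# sees none of the host's other processes
ps aux
```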
### How is a container unlike a VM?
The problem with the description above is that if you're even vaguely hypervisor-aware, you probably think that a container is just like a VM, and it  _really_  isn't.
A container, first and foremost,[*******][6] is a packaging format. "WHAT?" you say, "but you just said it was something that was executed." Well, yes, but the main reason containers are so interesting is that it's very easy to create the images from which they're instantiated, and those images are typically much,  _much_  smaller than for VMs. For this reason, they take up very little memory and can be spun up and spun down very, very quickly. Having a container that sits around for just a few minutes or even seconds (OK, milliseconds, if you like) is an entirely sensible and feasible idea. For VMs, not so much.
Given that containers are so lightweight and easy to replace, people are using them to create microservices—minimal components split out of an application that can be used by one or many other microservices to build into whatever you want. Given that you plan to put only what you need for a particular function or service within a container, you're now free to make it very small, which means that writing new ones and throwing away the old ones becomes very practicable. I'll follow up on this and some of the impacts this might have on security, and hence DevSecOps, in a future article.
Hopefully this has been a useful intro to containers, and you're motivated to learn more about DevSecOps. (And if you aren't, just pretend.)
* * *
* I think SecDevOps reads oddly, and DevOpsSec tends to get pluralized, and then you're on an entirely different topic.
** I should note that this isn't just with British audiences, who are reserved and don't like drawing attention to themselves. This also happens with Canadian and U.S. audiences who, well … are different in that regard.
*** I'm going to be talking about Linux containers. I'm aware there's history here, so it's worth noting. In case of pedantry.
**** I used **grep -ir container linux-4.9.2 | wc -l** in case you're interested.[*****][13]
***** To be fair, at a quick glance, a number of those uses have nothing to do with containers in the way we're discussing them as "Linux containers," but refer to abstractions, which can be said to contain other elements, and are, therefore, logically referred to as containers.
****** There are clever ways to combine VMs and containers to benefit from the strengths of each. I'm not going into those today.
_*******_ Well, apart from the execution bit that we just covered, obviously.
_This article originally appeared on [Alice, Eve, and Bob—a security blog][7] and is republished with permission._
--------------------------------------------------------------------------------
作者简介:
Mike Bursell - I've been in and around Open Source since around 1997, and have been running (GNU) Linux as my main desktop at home and work since then: not always easy... I'm a security bod and architect, and am currently employed as Chief Security Architect for Red Hat. I have a blog - "Alice, Eve & Bob" - where I write (sometimes rather parenthetically) about security. I live in the UK and like single malts.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/10/what-are-containers
作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&intcmp=70160000000h1s6AAA
[2]:https://opensource.com/resources/what-docker?utm_campaign=containers&intcmp=70160000000h1s6AAA
[3]:https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&intcmp=70160000000h1s6AAA
[4]:https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&intcmp=70160000000h1s6AAA
[5]:https://opensource.com/article/17/10/what-are-containers?rate=sPHuhiD4Z3D3vJ6ZqDT-wGp8wQjcQDv-iHf2OBG_oGQ
[6]:https://opensource.com/article/17/10/what-are-containers#*******
[7]:https://aliceevebob.wordpress.com/2017/07/04/but-i-dont-know-what-a-container-is/
[8]:https://opensource.com/user/105961/feed
[9]:https://opensource.com/article/17/10/what-are-containers#*
[10]:https://opensource.com/article/17/10/what-are-containers#**
[11]:https://opensource.com/article/17/10/what-are-containers#***
[12]:https://opensource.com/article/17/10/what-are-containers#******
[13]:https://opensource.com/article/17/10/what-are-containers#*****
[14]:https://opensource.com/users/mikecamel
[15]:https://opensource.com/users/mikecamel
[16]:https://opensource.com/article/17/10/what-are-containers#****

View File

@ -1,3 +1,5 @@
translating----geekpi
# [File better bugs with coredumpctl][1]
![](https://fedoramagazine.org/wp-content/uploads/2017/11/coredump.png-945x400.jpg)

View File

@ -0,0 +1,278 @@
A CEO's Guide to Emacs
============================================================
Years—no, decades—ago, I lived in Emacs. I wrote code and documents, managed email and calendar, and shelled, all in the editor/OS. I was quite happy. Years went by and I moved to newer, shinier things. As a result, I forgot how to do tasks as basic as efficiently navigating files without a mouse. About three months ago, noticing just how much of my time was spent switching between applications and computers, I decided to give Emacs another try. It was a good decision for several reasons that will be covered in this post. Covered too are `.emacs` and Dropbox tips so that you can set up a good, movable environment.
For those who haven't used Emacs, it's something you'll likely hate, but may love. It's sort of a Rube Goldberg machine the size of a house that, at first glance, performs all the functions of a toaster. That hardly sounds like an endorsement, but the key phrase is "at first glance." Once you grok Emacs, you realize that it's a thermonuclear toaster that can also serve as the engine for... well, just about anything you want to do with text. When you think about how much your computing life revolves around text, this is a rather bold statement. Bold, but true.
Perhaps more importantly to me though, it's the one application I've ever used that makes me feel like I really own it instead of casting me as an anonymous "user" whose wallet is cleverly targeted by product marketing departments in fancy offices somewhere near [Soma][30] or Redmond. Modern productivity and authoring applications (e.g., Pages or IDEs) are like carbon fiber racing bikes. They come kitted out very nicely and fully assembled. Emacs is like a box of classic [Campagnolo][31] parts and a beautiful lugged steel frame that's missing one crank arm and a brake lever that you have to find in some tiny subculture on the Internet. The first one is faster and complete. The second is a source of endless joy or annoyance depending on your personality—and will last until your dying day. I'm the sort of person who feels equal joy at finding an old stash of Campy parts or tweaking my editor with eLisp. YMMV.
![1933 steel bicycle](https://blog.fugue.co/assets/images/bicycle.jpg)
A 1933 steel bicycle that I still ride. Check out this comparison of frame tubes: [https://www.youtube.com/watch?v=khJQgRLKMU0][6].
This may give the impression that Emacs is anachronistic or old-fashioned. It's not. It's powerful and timeless, but demands that you patiently understand it on its terms. The terms are pretty far off the beaten path and seem odd, but there is a logic to them that is both compelling and charming. To me, Emacs feels like the future rather than the past. Just as the lugged steel frame will be useful and comfortable in decades to come and the carbon fiber wunderbike will be in a landfill, having shattered on impact, so will Emacs persist as a useful tool when the latest trendy app is long forgotten.
If the notion of building your own personal working environment by editing Lisp code and having that fits-like-a-glove environment follow you to any computer is appealing to you, you may really like Emacs. If you like the new and shiny and want to get straight to work without much investment of time and mental cycles, it's likely not for you. I don't write code any more (other than Ludwig and Emacs Lisp), but many of the engineers at Fugue use Emacs to good effect. I'd say our engineers are about 30% Emacs, 40% IDEs, and 30% Vim users. But, this post is about Emacs for CEOs and other [Pointy-Haired Bosses][32] (PHB)[1][7] (and, hey, anyone who's curious), so I'm going to explain and/or rationalize why I love it and how I use it. I also hope to provide you with enough detail that you can have a successful experience with it, without hours of Googling.
### Lasting Advantages
The long-term advantages that come with using Emacs just make life easier. The net gain makes the initial lift entirely worthwhile. Consider these:
### No More Context Switching
Org Mode alone is worth investing some serious time in, but if you are like me, you are usually working on a dozen or so documents—from blog posts to lists of what you need to do for a conference to employee reviews. In the modern world of computing, this generally means using several applications, all of which have distracting user interfaces and different ways to store, sort, and search. The result is that you need to constantly switch mental contexts and remember minutiae. I hate context switching because it is an imposition put on me due to a broken interface model[2][8] and I hate having to remember things my computer should remember for me in any rational world. In providing a single environment, Emacs is even more powerful for the PHB than the programmer, since programmers tend to spend a greater percentage of their day in a single application. Switching mental contexts has a higher cost than is often apparent. OS and application vendors have tarted up interfaces to distract us from this reality. If you're technical, having access to a powerful [language interpreter][33] in a single keyboard shortcut (`M-:`) is especially useful.[3][9]
Many applications can be full screened all day and used to edit text. Emacs is singular because it is both an editor and a Lisp interpreter. In essence, you have a Turing complete machine a keystroke or two away at all times, while you go about your business. If you know a little or a lot about programming, you'll recognize that this means you can do  _anything_  in Emacs. The full power of your computer is available to you in near real time while you work, once you have the commands in memory. You won't want to re-create Excel in eLisp, but most things you might do in Excel are smaller in scope and easy to accomplish in a line or two of code. If I need to crunch numbers, I'm more likely to jump over to the scratch buffer and write a little code than open a spreadsheet. Even if I have an email to write that isn't a one-liner, I'll usually just write it in Emacs and paste it into my email client. Why context switch when you can just flow? You might start with a simple calculation or two, but, over time, anything you need computed can be added with relative ease to Emacs. This is perhaps unique in applications that also provide rich features for creating things for other humans. Remember those magical terminals in Isaac Asimov's books? Emacs is the closest thing I've encountered to them.[4][10] I no longer decide what app to use for this or that thing. Instead, I just work. There is real power and efficiency to having a great tool and committing to it.
### Creating Things in Peace and Quiet
What's the end result of having the best text editing features I've ever found? Having a community of people making all manner of useful additions? Having the full power of Lisp a keychord away? It's that I use Emacs for all my creative work, aside from making music or images.
I have a dual monitor set up at my desk. One of them is in portrait mode with Emacs full screened all day long. The other one has web browsers for researching and reading; it usually has a terminal open as well. I keep my calendar, email, etc., on another desktop in OS X, which is hidden while I'm in Emacs, and I keep all notifications turned off. This allows me to actually concentrate on what I'm doing. I've found eliminating distractions to be almost impossible in the more modern UI applications due to their efforts to be helpful and easy to use. I don't need to be constantly reminded how to do operations I've done tens of thousands of times, but I do need a nice, clean white sheet of paper to be thoughtful. Maybe I'm just bad at living in noisy environments due to age and abuse, but I'd suggest it's worth a try for anyone. See what it's like to have some actual peace and quiet in your computing environment. Of course, lots of apps now have modes that hide the interface and, thankfully, both Apple and Microsoft now have meaningful full-screen modes. But, no other application is powerful enough to “live in” for most things. Unless you are writing code all day or perhaps working on a very long document like a book, you're still going to face the noise of other apps. Also, most modern applications seem simultaneously patronizing and lacking in functionality and usability.[5][11] The only applications I dislike more than office apps are the online versions.
![My desktop arrangement](https://blog.fugue.co/assets/images/desktop.jpg)
My desktop arrangement. Emacs on the left.
But what about communicating? The difference between creating and communicating is substantial. I'm much more productive at both when I set aside distinct time for each. We use Slack at Fugue, which is both wonderful and hellish. I keep it on a messaging desktop alongside my calendar and email, so that, while I'm actually making things, I'm blissfully unaware of all the chatter in the world. It takes just one Slackstorm or an email from a VC or Board Director to immediately throw me out of my work. But, most things can usually wait an hour or two.
### Taking Everything with You and Keeping It Forever
The third reason I find Emacs more advantageous than other environments is that it's easy to take all your stuff with you. By this, I mean that, rather than having a plethora of apps interacting and syncing in their own ways, all you need is one or two directories syncing via Dropbox or the like. Then, you can have all your work follow you anywhere in the environment you have crafted to suit your purposes. I do this across OS X, Windows, and sometimes Linux. It's dead simple and reliable. I've found this capability to be so useful that I dread dealing with Pages, GDocs, Office, or other kinds of files and applications that force me back into finding stuff somewhere on the filesystem or in the cloud.
The limiting factor in keeping things forever on a computer is file format. Assuming that humans have now solved the problem of storage [6][12] for good, the issue we face over time is whether we can continue to access the information we've created. Text files are the most long-lived format for computing. You easily can open a text file from 1970 in Emacs. That's not so true for Office applications. Text files are also nice and small—radically smaller than Office application data files. As a digital pack rat and as someone who makes lots of little notes as things pop into my head, having a simple, light, permanent collection of stuff that is always available is important to me.
If you're feeling ready to give Emacs a try, read on! The sections that follow don't take the place of a full tutorial, but will have you operational by the time you finish reading.
### Learning To Ride Emacs - A Technical Setup
The price of all this power and mental peace and quiet is that you have a steep learning curve with Emacs and it does everything differently than you're used to. At first, this will make you feel like you're wasting time on an archaic and strange application that the modern world passed by. It's a bit like learning to ride a bicycle[7][13] if you've only driven cars.
### Which Emacs?
I use the plain vanilla Emacs from GNU for OS X and Windows. You can get the OS X version at [http://emacsformacosx.com/][35] and the Windows version at [http://www.gnu.org/software/emacs/][37]. There are a bunch of other versions out there, especially for the Mac, but I've found the learning curve for doing powerful stuff (which involves Lisp and lots of modes) to be much lower with the real deal. So download it, and we can get started![8][14]
### First, You'll Need To Learn How To Navigate
I use the Emacs conventions for keys and combinations in this document. These are 'C' for control, 'M' for meta (which is usually mapped to Alt or Option), and the hyphen for holding down the keys in combination. So `C-h t` means to hold down control and type h, then release control and type t. This is the command for bringing up the tutorial, which you should go ahead and do.
Don't use the arrow keys or the mouse. They work, but you should give yourself a week of using the native navigation commands in Emacs. Once you have them committed to muscle memory, you'll likely enjoy them and miss them badly everywhere else you go. The Emacs tutorial does a pretty good job of walking you through them, but I'll summarize so you don't need to read the whole thing. The boring stuff is that, instead of arrows, you use `C-b` for back, `C-f` for forward, `C-p` for previous (up), and `C-n` for next (down). You may be thinking "why in the world would I do that, when I have perfectly good arrow keys?" There are several reasons. First, you don't have to move your hands from the typing position. Second, the forward and back keys used with Alt (or Meta, in Emacs-speak) navigate a word at a time, which is more handy than is obvious. The third good reason is that, if you want to repeat a command, you can precede it with a number. I often use this when editing documents by estimating how many words I need to go back or lines up or down and doing something like `C-9 C-p` or `M-5 M-b`. The other really important navigation commands are based on `a` for the beginning of a thing and `e` for the end of a thing. `C-a` and `C-e` operate on lines, while `M-a` and `M-e` operate on sentences. For the sentence commands to work properly, you'll need to double space after periods, which simultaneously provides a useful feature and takes a shibboleth of [opinion][38] off the mental table. If you need to export the document to a single space [publication environment][39], you can write a macro in moments to do so.
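As an illustration of how quickly such a macro comes together, a throwaway command along these lines (a rough sketch I'm adding here, not polished elisp) would collapse the double spaces on the way out:

```
;; rough sketch: collapse double spaces after sentences before export
(defun my-single-space-sentences ()
  (interactive)
  (save-excursion
    (goto-char (point-min))
    ;; find each ".  " and replace it with ". "
    (while (search-forward ".  " nil t)
      (replace-match ". "))))
```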
It genuinely is worth going through the tutorial that ships with Emacs. I'll cover a few important commands for the truly impatient, but the tutorial is gold. Reminder: `C-h t` for the tutorial.
### Learn To Copy and Paste
You can put Emacs into `CUA` mode, which will work in familiar ways, but the native Emacs way is pretty great and plenty easy once you learn it. You mark regions (like selecting) by using Shift with the navigation commands. So `C-F` selects one character forward from the cursor, etc. You copy with `M-w`, you cut with `C-w`, and you paste with `C-y`. These are actually called killing and yanking, but it's very similar to cut and paste. There is magic under the hood here in the kill ring, but for now, just worry about cut, copy, and paste. If you start fumbling around at this point, `C-x u` is undo...
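If you do want the familiar keys while you learn, CUA mode ships with Emacs, and a single line in your `.emacs` turns it on:

```
;; optional: familiar C-c/C-x/C-v/C-z copy/cut/paste/undo bindings
(cua-mode 1)
```

CUA mode is clever about only remapping `C-c` and `C-x` when a region is selected, so the native prefix commands keep working.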
### Next, Learn Ido Mode
Trust me. Ido makes working with files much easier. You don't generally use a separate Finder|Explorer window to work with files in Emacs. Instead you use the editor's commands to create, open, and save files. This is a bit of a pain without Ido, so I recommend installing it before learning the other way. Ido comes with Emacs beginning with version 22, but you'll want to make some tweaks to your `.emacs` file so that it is always used. This is a good excuse to get your environment set up.
Most features in Emacs come in modes. To install any given mode, you'll need to do two things. Well, at first you'll need to do a few extra things, but these only need to be done once, and thereafter only two things. So the extra things are that you'll need a single place to put all your eLisp files and you'll need to tell Emacs where that place is. I suggest you make a single directory in, say, Dropbox that is your Emacs home. Inside this, you'll want to create an `.emacs` file and an `.emacs.d` directory. Inside the `.emacs.d`, make a directory called `lisp`. So you should have:
```
home
|
+.emacs
|
-.emacs.d
|
-lisp
```
You'll put the `.el` files for things like modes into the `home/.emacs.d/lisp` directory, and you'll point to that in your `.emacs` like so:
`(add-to-list 'load-path "~/.emacs.d/lisp/")`
Ido Mode comes with Emacs, so you won't need to put an `.el` file into your Lisp directory for this, but you'll be adding other stuff soon that will go in there.
### Symlinks are Your Friend
But wait, that says that `.emacs` and `.emacs.d` are in your home directory, and we put them in some dumb folder in Dropbox! Correct. This is how you make it easy to have your environment anywhere you go. Keep everything in Dropbox and make symbolic links to `.emacs`, `.emacs.d`, and your main document directories in `~`. On OS X, this is super easy with the `ln -s` command, but on Windows this is a pain. Fortunately, Emacs provides an easy alternative to symlinking on Windows, the HOME environment variable. Go into Environment Variables in Windows (as of Windows 10, you can just hit the Windows key and type "Environment Variables" to find this with search, which is the best part of Windows 10), and make a HOME environment variable in your account that points to the Dropbox folder you made for Emacs. If you want to make it easy to navigate to local files that aren't in Dropbox, you may instead want to make a symbolic link to the Dropbox Emacs home in your actual home directory.
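Assuming your Dropbox Emacs home lives at `~/Dropbox/emacs` (a made-up path; substitute your own), the OS X incantation looks something like:

```
ln -s ~/Dropbox/emacs/.emacs ~/.emacs
ln -s ~/Dropbox/emacs/.emacs.d ~/.emacs.d
ln -s ~/Dropbox/emacs/org ~/org
```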
So now you've done all the jiggery-pokery needed to get any machine pointed to your Emacs setup and files. If you get a new computer or use someone else's for an hour or a day, you get your entire work environment. This seems a little difficult the first time you do it, but it's about a ten minute (at most) operation once you know what you're doing.
But we were configuring Ido...
`C-x` `C-f` and type `~/.emacs RET RET` to create your `.emacs` file. Add these lines to it:
```
;; set up ido mode
(require 'ido)
(setq ido-enable-flex-matching t)
(setq ido-everywhere t)
(ido-mode 1)
```
With the `.emacs` buffer open, do an `M-x eval-buffer` command, and you'll either get an error if you munged something or you'll get Ido. Ido changes how the minibuffer works when doing file operations. There is great documentation on it, but I'll point out a few tips. Use the `~/` effectively; you can just type `~/` at any point in the minibuffer and it'll jump back to home. Implicit in this is that you should have most of your stuff a short hop off your home. I use `~/org` for all my non-code stuff and `~/code` for code. Once you're in the right directory, you'll often have a collection of files with different extensions, especially if you use Org Mode and publish from it. You can type period and the extension you want no matter where you are in the file name and Ido will limit the choices to files with that extension. For example, I'm writing this blog post in Org Mode, so the main file is:
`~/org/blog/emacs.org`
I also occasionally push it out to HTML using Org Mode publishing, so I've got an `emacs.html` file in the same directory. When I want to open the Org file, I will type:
`C-x C-f ~/o[RET]/bl[RET].or[RET]`
The [RET]s are me hitting return for auto completion for Ido Mode. So, that's 12 characters typed and, if you're used to it, a _lot_ less time than opening Finder|Explorer and clicking around. Ido Mode is plenty useful, but really is a utility mode for operating Emacs. Let's explore some modes that are useful for getting work done.
### Fonts and Styles
I recommend getting the excellent Input family of typefaces for use in Emacs. They are customizable with different braces, zeroes, and other characters. You can build extra line spacing into the font files themselves. I recommend a 1.5X line spacing and using their excellent proportional fonts for code and data. I use Input Serif for my writing, which has a funky but modern feel. You can find them at [http://input.fontbureau.com/][40], where you can customize to your preferences. You can manually set the fonts using menus in Emacs, but this puts code into your `.emacs` file and, if you use multiple devices, you may find you want some different settings. I've set up my `.emacs` to look for the machine I'm using by name and configure the screen appropriately. The code for this is:
```
;; set up fonts for different OSes. OSX toggles to full screen.
(setq myfont "InputSerif")
(cond
((string-equal system-name "Sampo.local")
(set-face-attribute 'default nil :font myfont :height 144)
(toggle-frame-fullscreen))
((string-equal system-name "Morpheus.local")
(set-face-attribute 'default nil :font myfont :height 144))
((string-equal system-name "ILMARINEN")
(set-face-attribute 'default nil :font myfont :height 106))
((string-equal system-name "UKKO")
(set-face-attribute 'default nil :font myfont :height 104)))
```
You should replace the `system-name` values with what you get when you evaluate `(system-name)` in your copy of Emacs. Note that on Sampo (my MacBook), I also set Emacs to full screen. I'd like to do this on Windows as well, but Windows and Emacs don't really love each other and it always ends up in some wonky state when I try this. Instead, I just fullscreen it manually after launch.
I also recommend getting rid of the awful toolbar that Emacs got sometime in the 90s when the cool thing to do was to have toolbars in your application. I also got rid of some other "chrome" so that I have a simple, productive interface. Add these to your `.emacs` file to get rid of the toolbar and scroll bars, but to keep your menu available (on OS X, it'll be hidden unless you mouse to the top of the screen anyway):
```
(if (fboundp 'scroll-bar-mode) (scroll-bar-mode -1))
(if (fboundp 'tool-bar-mode) (tool-bar-mode -1))
(if (fboundp 'menu-bar-mode) (menu-bar-mode 1))
```
### Org Mode
I pretty much live in Org Mode. It is my go-to environment for authoring documents, keeping notes, making to-do lists and 90% of everything else I do. Org was originally conceived as a combination note-taking and to-do list utility by a fellow who is a laptop-in-meetings sort. I am against use of laptops in meetings and don't do it myself, so my use cases are a little different than his. For me, Org is primarily a way to handle all manner of content within a structure. There are heads and subheads, etc., in Org Mode, and they function like an outline. Org allows you to expand or hide the contents of the tree and also to rearrange the tree. This fits how I think very nicely and I find it to be just a pleasure to use in this way.
Org Mode also has a lot of little things that make life pleasant. For example, the footnote handling is excellent and the LaTeX/PDF output is great. Org has the ability to generate agendas based on the to-do's in all your documents and a nice way to relate them to dates/times. I don't use this for any sort of external commitments, which are handled on a shared calendar, but for creating things and keeping track of what I need to create in the future, it's invaluable. Installing it is as easy as adding the `org-mode.el` to your Lisp directory and adding these lines to your `.emacs`, if you want it to indent based on tree location and to open documents fully expanded:
```
;; set up org mode
(setq org-startup-indented t)
(setq org-startup-folded "showall")
(setq org-directory "~/org")
```
The last line is there so that Org knows where to look for files to include in agendas and some other things. I keep Org right in my home directory, i.e., a symlink to the directory that lives in Dropbox, as described earlier.
I have a `stuff.org` file that is always open in a buffer. I use it like a notepad. Org makes it easy to extract things like TODOs and stuff with deadlines. It's especially useful when you can inline Lisp code and evaluate it whenever you need. Having code with content is super handy. Again, you have access to the actual computer with Emacs, and this is a liberation.
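To make that concrete, here is a sketch of what a live block in `stuff.org` can look like; with the cursor inside it, `C-c C-c` evaluates the code and inserts the result (the numbers are invented):

```
* Offsite budget
#+BEGIN_SRC emacs-lisp
(* 12 450) ; 12 attendees at $450 each
#+END_SRC

#+RESULTS:
: 5400
```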
#### Publishing with Org Mode
I care about the appearance and formatting of my documents. I started my career as a designer, and I think information can and should be presented clearly and beautifully. Org has great support for generating PDFs via LaTeX, which has a bit of its own learning curve, but doing simple things is pretty easy.
If you want to use fonts and styles other than the typical LaTeX ones, you've got a few things to do. First, you'll want XeLaTeX so you can use normal system fonts rather than LaTeX specific fonts. Next, you'll want to add this to `.emacs`:
```
(setq org-latex-pdf-process
'("xelatex -interaction nonstopmode %f"
"xelatex -interaction nonstopmode %f"))
```
I put this right at the end of my Org section of `.emacs` to keep things tidy. This will allow you to use more formatting options when publishing from Org. For example, I often use:
```
#+LaTeX_HEADER: \usepackage{fontspec}
#+LATEX_HEADER: \setmonofont[Scale=0.9]{Input Mono}
#+LATEX_HEADER: \setromanfont{Maison Neue}
#+LATEX_HEADER: \linespread{1.5}
#+LATEX_HEADER: \usepackage[margin=1.25in]{geometry}
#+TITLE: Document Title Here
```
These simply go somewhere in your `.org` file. Our corporate font for body copy is Maison Neue, but you can put whatever is appropriate here. I strongly discourage the use of Maison Neue. It's a terrible font and no one should ever use it.
This file is an example of PDF output using these settings; compare it to what out-of-the-box LaTeX always looks like. The default is fine, I suppose, but the fonts are boring and a little odd. Also, if you use the standard format, people will assume they are reading something that is or pretends to be an academic paper. You've been warned.
### Ace Jump Mode
This is more of a gem than a major feature, but you want it. It works a bit like Jef Raskin's Leap feature from days gone by.[9][15] The way it works is you type `C-c` `C-SPC` and then type the first letter of the word you want to jump to. It highlights all occurrences of words with that initial character, replacing it with a letter of the alphabet. You simply type the letter of the alphabet for the location you want and your cursor jumps to it. I find myself using this as often as the more typical nav keys or search. Download the `.el` to your Lisp directory and put this in your `.emacs`:
```
;; set up ace-jump-mode
(add-to-list 'load-path "which-folder-ace-jump-mode-file-in/")
(require 'ace-jump-mode)
(define-key global-map (kbd "C-c C-SPC" ) 'ace-jump-mode)
```
### More Later
That's enough for one post—this may get you somewhere you'd like to be. I'd love to hear about your uses for Emacs aside from programming (or for programming!) and whether this was useful at all. There are likely some boneheaded PHBisms in how I use Emacs, and if you want to point them out, I'd appreciate it. I'll probably write some updates over time to introduce additional features or modes. I'll certainly show you how to use Fugue with Emacs and Ludwig-mode as we evolve it into something more useful than code highlighting. Send your thoughts to [@fugueHQ][41] on Twitter.
* * *
#### Footnotes
1. [^][16] If you are now a PHB of some sort, but were never technical, Emacs likely isn't for you. There may be a handful of folks for whom Emacs will form a path into the more technical aspects of computing, but this is probably a small population. It's helpful to know how to use a Unix or Windows terminal, to have edited a dotfile or two, and to have written some code at some point in your life for Emacs to make much sense.
2. [^][17] [http://archive.wired.com/wired/archive/2.08/tufte.html][19]
3. [^][20] I mainly use this to perform calculations while writing. For example, I was writing an offer letter to a new employee and wanted to calculate how many options to include in the offer. Since I have a variable defined in my `.emacs` for outstanding-shares, I can simply type `M-: (* .001 outstanding-shares)` and get a tenth of a point without opening a calculator or spreadsheet. I keep _lots_ of numbers in variables like this so I can avoid context switching.
4. [^][21] The missing piece of this is the web. There is an Emacs web browser called eww that will allow you to browse in Emacs. I actually use this, as it is both a great ad-blocker and removes most of the poor choices in readability from the web designer's hands. It's a bit like Reading Mode in Safari. Unfortunately, most websites have lots of annoying cruft and navigation that translates poorly into text.
5. [^][22] Usability is often confused with learnability. Learnability is how difficult it is to learn a tool. Usability is how useful the tool is. Often, these are at odds, such as with the mouse and menus. Menus are highly learnable, but have poor usability, so there have been keyboard shortcuts from the earliest days. Raskin was right on many points where he was ignored about GUIs in general. Now, OSes are putting things like decent search onto a keyboard shortcut. On OS X and Windows, my default method of navigation is search. Ubuntu's search is badly broken, as is the rest of its GUI.
6. [^][23] AWS S3 has effectively solved file storage for as long as we have the Internet. Trillions of objects are stored in S3 and they've never lost one of them. Most every service out there that offers cloud storage is built on S3 or imitates it. No one has the scale of S3, so I keep important stuff there, via Dropbox.
7. [^][24] By now, you might be thinking "what is it with this guy and bicycles?" ... I love them on every level. They are the most mechanically efficient form of transportation ever invented. They can be objects of real beauty. And, with some care, they can last a lifetime. I had Rivendell Bicycle Works build a frame for me back in 2001 and it still makes me happy every time I look at it. Bicycles and UNIX are the two best inventions I've interacted with. Well, they and Emacs.
8. [^][25] This is not a tutorial for Emacs. It comes with one and it's excellent. I do walk through some of the things that I find most important to getting a useful Emacs setup, but this is not a replacement in any way.
9. [^][26] Jef Raskin designed the Canon Cat computer in the 1980s after falling out with Steve Jobs on the Macintosh project, which he originally led. The Cat had a document-centric interface (as all computers should) and used the keyboard in innovative ways that you can now imitate with Emacs. If I could have a modern, powerful Cat with a giant high-res screen and Unix underneath, I'd trade my Mac for it right away. [https://youtu.be/o_TlE_U_X3c?t=19s][28]
--------------------------------------------------------------------------------
via: https://blog.fugue.co/2015-11-11-guide-to-emacs.html
Author: [Josh Stella][a]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://blog.fugue.co/authors/josh.html
[1]:https://blog.fugue.co/2013-10-16-vpc-on-aws-part3.html
[2]:https://blog.fugue.co/2013-10-02-vpc-on-aws-part2.html
[3]:http://ww2.fugue.co/2017-05-25_OS_AR_GartnerCoolVendor2017_01-LP-Registration.html
[4]:https://blog.fugue.co/authors/josh.html
[5]:https://twitter.com/joshstella
[6]:https://www.youtube.com/watch?v=khJQgRLKMU0
[7]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#phb
[8]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#tufte
[9]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#interpreter
[10]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#eww
[11]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#usability
[12]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#s3
[13]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#bicycles
[14]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#nottutorial
[15]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#canoncat
[16]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#phbOrigin
[17]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#tufteOrigin
[18]:http://archive.wired.com/wired/archive/2.08/tufte.html
[19]:http://archive.wired.com/wired/archive/2.08/tufte.html
[20]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#interpreterOrigin
[21]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#ewwOrigin
[22]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#usabilityOrigin
[23]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#s3Origin
[24]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#bicyclesOrigin
[25]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#nottutorialOrigin
[26]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#canoncatOrigin
[27]:https://youtu.be/o_TlE_U_X3c?t=19s
[28]:https://youtu.be/o_TlE_U_X3c?t=19s
[29]:https://blog.fugue.co/authors/josh.html
[30]:http://www.huffingtonpost.com/zachary-ehren/soma-isnt-a-drug-san-fran_b_987841.html
[31]:http://www.campagnolo.com/US/en
[32]:http://www.businessinsider.com/best-pointy-haired-boss-moments-from-dilbert-2013-10
[33]:http://www.webopedia.com/TERM/I/interpreter.html
[34]:http://emacsformacosx.com/
[35]:http://emacsformacosx.com/
[36]:http://www.gnu.org/software/emacs/
[37]:http://www.gnu.org/software/emacs/
[38]:http://www.huffingtonpost.com/2015/05/29/two-spaces-after-period-debate_n_7455660.html
[39]:http://practicaltypography.com/one-space-between-sentences.html
[40]:http://input.fontbureau.com/
[41]:https://twitter.com/fugueHQ

View File

@ -1,3 +1,5 @@
Translated by cnobelw
5 Coolest Linux Terminal Emulators
============================================================

View File

@ -0,0 +1,59 @@
Language engineering for great justice
============================================================
Whole-systems engineering, when you get good at it, goes beyond being entirely or even mostly about technical optimizations. Every artifact we make is situated in a context of human action that widens out to the economics of its use, the sociology of its users, and the entirety of what Austrian economists call “praxeology”, the science of purposeful human behavior in its widest scope.
This isn't just abstract theory for me. When I wrote my papers on open-source development, they were exactly praxeology; they weren't about any specific software technology or objective but about the context of human action within which technology is worked. An increase in praxeological understanding of technology can reframe it, leading to tremendous increases in human productivity and satisfaction, not so much because of changes in our tools but because of changes in the way we grasp them.

In this, the third of my unplanned series of posts about the twilight of C and the huge changes coming as we actually begin to see forward into a new era of systems programming, I'm going to try to cash that general insight out into some more specific and generative ideas about the design of computer languages, why they succeed, and why they fail.
In my last post I noted that every computer language is an embodiment of a relative-value claim, an assertion about the optimal tradeoff between spending machine resources and spending programmer time, all of this in a context where the cost of computing power steadily falls over time while programmer-time costs remain relatively stable or may even rise. I also highlighted the additional role of transition costs in pinning old tradeoff assertions into place. I described what language designers do as seeking a new optimum for present and near-future conditions.
Now I'm going to focus on that last concept. A language designer has lots of possible moves in language-design space from where the state of the art is now. What kind of type system? GC or manual allocation? What mix of imperative, functional, or OO approaches? But in praxeological terms his choice is, I think, usually much simpler: attack a near problem or a far problem?

“Near” and “far” are measured along the curves of falling hardware costs, rising software complexity, and increasing transition costs from existing languages. A near problem is one the designer can see right in front of him; a far problem is a set of conditions that can be seen coming but won't necessarily arrive for some time. A near solution can be deployed immediately, to great practical effect, but may age badly as conditions change. A far solution is a bold bet that may smother under the weight of its own overhead before its future arrives, or never be adopted at all because moving to it is too expensive.

Back at the dawn of computing, FORTRAN was a near-problem design, LISP a far-problem one. Assemblers are near solutions; so, illustrating that the categories apply to non-general-purpose languages, is roff markup. Later in the game: PHP and Javascript. Far solutions? Oberon. Ocaml. ML. XML-Docbook. Academic languages tend to be far because the incentive structure around them rewards originality and intellectual boldness (note that this is a praxeological cause, not a technical one!). The failure mode of academic languages is predictable: high inward transition costs, nobody goes there, failure to achieve community critical mass sufficient for mainstream adoption, isolation, and stagnation. (That's a potted history of LISP in one sentence, and I say that as an old LISP-head with a deep love for the language…)

The failure modes of near designs are uglier. The best outcome to hope for is a graceful death and transition to a newer design. If they hang on (most likely to happen when transition costs out are high), features often get piled on them to keep them relevant, increasing complexity until they become teetering piles of cruft. Yes, C++, I'm looking at you. You too, Javascript. And (alas) Perl, though Larry Wall's good taste mitigated the problem for many years; that same good taste eventually moved him to blow up the whole thing for Perl 6.
This way of thinking about language design encourages reframing the designer's task in terms of two objectives: (1) picking a sweet spot on the near-far axis away from you into the projected future; and (2) minimizing inward transition costs from one or more existing languages so you co-opt their userbases. And now let's talk about how C took over the world.

In the entire history of computing, there is no more breathtaking example of nailing the near-far sweet spot than C. All I need to do to prove this is point at its extreme longevity as a practical, mainstream language that successfully saw off many competitors for its roles over much of its range. That timespan has now passed about 35 years (counting from when it swamped its early competitors) and is not yet with certainty ended.

OK, you can attribute some of C's persistence to inertia if you want, but what are you really adding to the explanation if you use the word “inertia”? What it means is exactly that nobody made an offer that actually covered the transition costs out of the language!

Conversely, an underappreciated strength of the language was the low inward transition costs. C is an almost uniquely protean tool that, even at the beginning of its long reign, could readily accommodate programming habits acquired from languages as diverse as FORTRAN, Pascal, assemblers and LISP. I noticed back in the 1980s that I could often spot a new C programmer's last language by his coding style, which was just the flip side of saying that C was damn good at gathering all those tribes unto itself.

C++ also benefited from having low transition costs in. Later, most new languages at least partly copied C syntax in order to minimize them. Notice what this does to the context of future language designs: it raises the value of being as C-like as possible in order to minimize inward transition costs from anywhere.
Another way to minimize inward transition costs is to simply be ridiculously easy to learn, even to people with no prior programming experience. This, however, is remarkably hard to pull off. I evaluate that only one language, Python, has made the major leagues by relying on this quality. I mention it only in passing because it's not a strategy I expect to see a _systems_ language execute successfully, though I'd be delighted to be wrong about that.

So here we are in late 2017, and…the next part is going to sound to some easily-annoyed people like Go advocacy, but it isn't. Go, itself, could turn out to fail in several easily imaginable ways. It's troubling that the Go team is so impervious to some changes their user community is near-unanimously and rightly (I think) insisting it needs. Worst-case GC latency, or the throughput sacrifices made to lower it, could still turn out to drastically narrow the language's application range.

That said, there is a grand strategy expressed in the Go design that I think is right. To understand it, we need to review what the near problem for a C replacement is. As I noted in the prequels, it is rising defect rates as systems projects scale up, and specifically memory-management bugs, because that category so dominates crash bugs and security exploits.

We've now identified two really powerful imperatives for a C replacement: (1) solve the memory-management problem, and (2) minimize inward-transition costs from C. And the history (the praxeological context) of programming languages tells us that if a C successor candidate doesn't address the transition-cost problem effectively enough, it almost doesn't matter how good a job it does on anything else. Conversely, a C successor that _does_ address transition costs well buys itself a lot of slack for not being perfect in other ways.
This is what Go does. It's not a theoretical jewel; it has annoying limitations; GC latency presently limits how far down the stack it can be pushed. But what it is doing is replicating the Unix/C infective strategy of being easy-entry and _good enough_ to propagate faster than alternatives that, if it didn't exist, would look like better far bets.

Of course, the proboscid in the room when I say that is Rust. Which is, in fact, positioning itself as the better far bet. I've explained in previous installments why I don't think it's really ready to compete yet. The TIOBE and PYPL indices agree; it's never made the TIOBE top 20 and on both indices does quite poorly against Go.

Where Rust will be in five years is a different question, of course. My advice to the Rust community, if they care, is to pay some serious attention to the transition-cost problem. My personal experience says the C to Rust energy barrier is _[nasty][2]_. Code-lifting tools like Corrode won't solve it if all they do is map C to unsafe Rust, and if there were an easy way to automate ownership/lifetime annotations they wouldn't be needed at all; the compiler would just do that for you. I don't know what a solution would look like here, but I think they'd better find one.

I will finally note that Ken Thompson has a history of designs that look like minimal solutions to near problems but turn out to have an amazing quality of openness to the future, the capability to _be improved_. Unix is like this, of course. It makes me very cautious about supposing that any of the obvious annoyances in Go that look like future-blockers to me (like, say, the lack of generics) actually are. Because for that to be true, I'd have to be smarter than Ken, which is not an easy thing to believe.
--------------------------------------------------------------------------------
via: http://esr.ibiblio.org/?p=7745
Author: [Eric Raymond][a]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://esr.ibiblio.org/?author=2
[1]:http://esr.ibiblio.org/?author=2
[2]:http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931
[3]:http://esr.ibiblio.org/?p=7745

View File

@ -1,411 +0,0 @@
Understanding Firewalld in Multi-Zone Configurations
============================================================
Stories of compromised servers and data theft fill today's news. It isn't difficult for anyone who has spent time reading an infosec blog to gain control of a system by accessing a misconfigured server, exploiting a recently exposed vulnerability, or using a stolen password. Any internet service on a typical Linux server could be the weakness that allows unauthorized access to the system.

Since hardening a system at the application level against every possible threat is impossible, firewalls provide security by limiting access to the system. Firewalls filter incoming packets based on their source IP, destination port, and protocol, so that only a few IP/port/protocol combinations interact with the system and the rest do not.

Linux firewalls are handled by netfilter, a kernel-level framework. For more than a decade, iptables has served as the userland abstraction layer for netfilter (translator's note: a basic UNIX system consists of the kernel plus userland; everything other than the kernel is userland). iptables subjects packets to a series of rules; if a packet matches a rule's IP/port/protocol combination, the rule is applied to it, determining whether the packet is accepted, rejected, or dropped.

Firewalld is the newest userland abstraction layer for netfilter. Unfortunately, its power and flexibility are underappreciated due to a lack of documentation describing multi-zone configurations. This article provides examples to remedy that situation.
### Firewalld Design Goals
The designers of firewalld realized that most iptables use cases involve only a few unicast source IPs, with everything matching a whitelisted service let through and everything else rejected. The benefit of this pattern is that firewalld can classify incoming traffic into zones defined by the source IP and/or the network interface. Each zone has its own configuration to accept or deny packets based on specified criteria.

Another improvement over iptables is simplified syntax. firewalld makes itself easier to use by letting you specify services by name rather than by their ports and protocols: for example, samba rather than UDP ports 137 and 138 and TCP ports 139 and 445. It simplifies the syntax further by removing the dependence on the order of statements that iptables has.

Finally, firewalld allows interactive modification of netfilter, letting changes to the firewall happen independently of the permanent configuration stored in XML. Thus, the following temporary modification will be overridden at the next reload:
```
# firewall-cmd <some modification>
```
whereas the following changes will persist across reloads:
```
# firewall-cmd --permanent <some modification>
# firewall-cmd --reload
```
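To make the runtime/permanent distinction concrete (http here is an arbitrary example service):

```
# firewall-cmd --zone=public --add-service=http              # runtime only, lost on reload
# firewall-cmd --permanent --zone=public --add-service=http  # written to the XML configuration
# firewall-cmd --reload                                      # activate the permanent configuration
```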
### Zones

The top layer of organization in firewalld is zones. A packet becomes part of a zone if it matches that zone's associated network interface or source IP/mask. Several predefined zones are available:
```
# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
```
Any zone that is configured with a network interface and/or a source is an active zone. To list the active zones:
```
# firewall-cmd --get-active-zones
public
interfaces: eno1 eno2
```
**Interfaces** are the system's names for hardware and virtual network adapters, as you can see in the example above. All active interfaces are assigned to a zone, either to the default zone or to a user-specified one. An interface cannot be assigned to more than one zone, however.

In its default configuration, firewalld pairs all interfaces with the public zone and doesn't set up sources for any zone. As a result, public is the only active zone.
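Both of those defaults can be adjusted; as a quick aside (the zone and interface names below are illustrative), the relevant commands look like:

```
# firewall-cmd --set-default-zone=home                               # pick a new default zone
# firewall-cmd --permanent --zone=internal --change-interface=eno2   # reassign an interface
# firewall-cmd --reload
```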
**Sources** are ranges of incoming IP addresses, and they also can be assigned to zones. A source (or overlapping sources) cannot be assigned to multiple zones. Doing so results in undefined behavior, since it would be unclear which rules should apply to that source.

Since specifying a source is not required, for every packet there will be a zone that matches by interface, but there won't necessarily be one that matches by source. This indicates a precedence ordering in favor of the more specific source zones, which will be detailed later. First, let's inspect the public zone's configuration:
```
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno1 eno2
sources:
services: dhcpv6-client ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
```
A line-by-line explanation follows:

* `public (default, active)` indicates that the public zone is the default zone (interfaces default to it when they come up), and that it is active because it has at least one interface or source assigned to it.
* `interfaces: eno1 eno2` lists the interfaces associated with the zone.
* `sources:` lists the zone's sources. There are none right now, but if there were, they would take the form xxx.xxx.xxx.xxx/xx.
* `services: dhcpv6-client ssh` lists the services allowed through the firewall. You can get an exhaustive list of firewalld's predefined services by running `firewall-cmd --get-services`.
* `ports:` lists destination ports allowed through the firewall. It is useful when you need to allow a service that isn't defined in firewalld.
* `masquerade: no` indicates whether IP masquerading is allowed for the zone. If allowed, it enables IP forwarding, which lets your computer act as a router.
* `forward-ports:` lists ports that are forwarded.
* `icmp-blocks:` a blacklist of blocked icmp traffic.
* `rich rules:` advanced configurations, processed first within the zone.
* `default` is the target of the zone, which determines the action taken on a packet that matches the zone yet isn't explicitly handled by one of the settings above.
### A Simple Single-Zone Example

Say you simply want to lock down your firewall. Just remove the services currently allowed by the public zone and reload:
```
# firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --reload
```
These commands result in the following firewall:
```
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno1 eno2
sources:
services:
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
```
In the spirit of keeping security as tight as possible, if a situation arises where you need to temporarily open a service on your firewall (say, ssh), you can add the service to the current session only (omitting `--permanent`) and instruct firewalld to revert the modification after a specified amount of time:
```
# firewall-cmd --zone=public --add-service=ssh --timeout=5m
```
The timeout option takes a time value in seconds (s), minutes (m), or hours (h).

### Targets

When a zone processes a packet due to its source or interface, but there is no explicit rule handling the packet, the target of the zone determines the behavior:

* `ACCEPT`: accept the packet.
* `%%REJECT%%`: reject the packet, returning a reject reply.
* `DROP`: drop the packet, returning no reply at all.
* `default`: don't do anything. The zone washes its hands of the problem and kicks it “upstairs”.

There was a bug in firewalld 0.3.9 (fixed in 0.3.10) in which, for a source zone with a target other than `default`, the target was applied regardless of allowed services. For example, a source zone with the target `DROP` would drop all packets, even whitelisted ones. Unfortunately, this version of firewalld was packaged for RHEL7 and its derivatives, making it a fairly common bug. The examples in this article avoid situations in which this behavior would manifest.
### Precedence

Active zones fill two different roles. Zones associated with interfaces act as interface zones, and zones associated with sources act as source zones (a zone can fill both roles). firewalld handles a packet in the following order:

1. The corresponding source zone. Zero or one such zone may exist. If the source zone deals with the packet because the packet satisfies a rich rule, the service is whitelisted, or the target is not default, things end here. Otherwise, the packet is passed on.
2. The corresponding interface zone. Exactly one such zone always exists. If the interface zone deals with the packet, things end here. Otherwise, the packet is passed on.
3. The firewalld default action: accept icmp packets and reject everything else.

The take-away here is that source zones have precedence over interface zones. Therefore, the general design pattern for multi-zone firewalld configurations is to create a privileged source zone that allows specific IPs elevated access to system services, and a restrictive interface zone that limits access for everyone else.

### A Simple Multi-Zone Example

To demonstrate precedence, let's swap ssh for http in the public zone and set up the default internal zone for our favorite IP address, say 1.1.1.1. The following commands accomplish this:
```
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --permanent --zone=internal --add-source=1.1.1.1
# firewall-cmd --reload
```
These commands result in the following configuration:
```
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno1 eno2
sources:
services: dhcpv6-client http
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
# firewall-cmd --zone=internal --list-all
internal (active)
interfaces:
sources: 1.1.1.1
services: dhcpv6-client mdns samba-client ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=internal --get-target
default
```
With this configuration, if someone attempts to `ssh` in from 1.1.1.1, the request succeeds because the source zone (internal) is applied first, and it allows ssh access.

If someone attempts ssh from some other address, say 2.2.2.2, there is no source zone, because no source zone matches that address. The request therefore passes directly to the interface zone (public), which does not explicitly handle ssh. Since public's target is `default`, the request passes to the firewalld default action, which rejects it.

What if 1.1.1.1 attempts http access? The source zone (internal) doesn't allow it, but the target is `default`, so the request passes to the interface zone (public), which grants access.

Now let's suppose someone at 3.3.3.3 is spamming your website. To restrict access for that IP, simply add it to the predefined drop zone, aptly named because it drops all connections:
```
# firewall-cmd --permanent --zone=drop --add-source=3.3.3.3
# firewall-cmd --reload
```
The next time 3.3.3.3 attempts to access your website, firewalld will forward the request to the source zone (drop). Because the target is `DROP`, the request will be dropped, and it won't be passed on to the interface zone (public).

### A Practical Multi-Zone Example

Suppose you are setting up a firewall for a server at your organization. You want the whole world to have http and https access, your organization (1.1.0.0/16) to have ssh access, and your workgroup (1.1.1.0/8) to have samba access. Using zones in firewalld, you can set up this configuration in an intuitive manner.

Given the naming, it seems logical to commit the public zone to worldwide access and the internal zone to local use. Start by replacing the dhcpv6-client and ssh services in the public zone with http and https:
```
# firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --permanent --zone=public --add-service=https
```
Then, trim the mdns, samba-client, and dhcpv6-client services from the internal zone (leaving only ssh) and add your organization as a source:
```
# firewall-cmd --permanent --zone=internal --remove-service=mdns
# firewall-cmd --permanent --zone=internal --remove-service=samba-client
# firewall-cmd --permanent --zone=internal --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=internal --add-source=1.1.0.0/16
```
To accommodate your workgroup's elevated samba privileges, add a rich rule:
```
# firewall-cmd --permanent --zone=internal --add-rich-rule='rule
↪family=ipv4 source address="1.1.1.0/8" service name="samba"
↪accept'
```
Finally, reload, pulling the changes into the active session:
```
# firewall-cmd --reload
```
Only a few details remain. Attempting to `ssh` in to your server from an IP outside the internal zone results in a reject message, which is the firewalld default. It is more secure to exhibit the behavior of an inactive IP and instead drop the connection. Change the public zone's target to `DROP`, rather than `default`, to accomplish this:
```
# firewall-cmd --permanent --zone=public --set-target=DROP
# firewall-cmd --reload
```
But wait, you can no longer ping, even from the internal zone! And icmp (the protocol ping uses) isn't on the list of services that firewalld can whitelist. That's because icmp is a layer-3 IP protocol; it has no notion of ports, unlike services, which are tied to ports. Before setting the public zone to `DROP`, pings could pass through the firewall because your `default` target passed them along to the firewalld default action, which allowed them. Now they are dropped.

To restore pings for the internal network, use a rich rule:
```
# firewall-cmd --permanent --zone=internal --add-rich-rule='rule
↪protocol value="icmp" accept'
# firewall-cmd --reload
```
The result is the following configuration for the two active zones:
```
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno1 eno2
sources:
services: http https
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# firewall-cmd --permanent --zone=public --get-target
DROP
# firewall-cmd --zone=internal --list-all
internal (active)
interfaces:
sources: 1.1.0.0/16
services: ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
rule family=ipv4 source address="1.1.1.0/8"
↪service name="samba" accept
rule protocol value="icmp" accept
# firewall-cmd --permanent --zone=internal --get-target
default
```
This setup demonstrates a three-layer nested firewall. The outermost layer, public, is an interface zone and encompasses the whole world. The next layer, internal, is a source zone and encompasses your organization, a subset of public. Finally, a rich rule adds the innermost layer, encompassing your workgroup, a subset of internal.

The take-away here is that when a scenario can be broken into nested layers, the outermost layer should use an interface zone, the next layer should use a source zone, and additional layers should use rich rules within the source zone.

### Debugging

firewalld employs an intuitive paradigm for designing a firewall, yet it is more prone to ambiguity than its predecessor, iptables. Should unexpected behavior occur, or to understand better how firewalld works, it can be useful to obtain an iptables description of how netfilter has been configured to operate. Output for the previous example follows, trimmed for brevity:
```
# iptables -S
-P INPUT ACCEPT
... (forward and output lines) ...
-N INPUT_ZONES
-N INPUT_ZONES_SOURCE
-N INPUT_direct
-N IN_internal
-N IN_internal_allow
-N IN_internal_deny
-N IN_public
-N IN_public_allow
-N IN_public_deny
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j INPUT_ZONES_SOURCE
-A INPUT -j INPUT_ZONES
-A INPUT -p icmp -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -j REJECT --reject-with icmp-host-prohibited
... (forward and output lines) ...
-A INPUT_ZONES -i eno1 -j IN_public
-A INPUT_ZONES -i eno2 -j IN_public
-A INPUT_ZONES -j IN_public
-A INPUT_ZONES_SOURCE -s 1.1.0.0/16 -g IN_internal
-A IN_internal -j IN_internal_deny
-A IN_internal -j IN_internal_allow
-A IN_internal_allow -p tcp -m tcp --dport 22 -m conntrack
↪--ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/8 -p udp -m udp --dport 137
↪-m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/8 -p udp -m udp --dport 138
↪-m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/8 -p tcp -m tcp --dport 139
↪-m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/8 -p tcp -m tcp --dport 445
↪-m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -p icmp -m conntrack --ctstate NEW
↪-j ACCEPT
-A IN_public -j IN_public_deny
-A IN_public -j IN_public_allow
-A IN_public -j DROP
-A IN_public_allow -p tcp -m tcp --dport 80 -m conntrack
↪--ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 443 -m conntrack
↪--ctstate NEW -j ACCEPT
```
In the iptables output above, new chains (lines starting with `-N`) are declared first. The rest are rules appended (lines starting with `-A`) to iptables. Established connections and local traffic are accepted, and incoming packets go to the `INPUT_ZONES_SOURCE` chain, at which point IPs are sent to the corresponding source zone, if one exists. After that, traffic goes to the `INPUT_ZONES` chain, at which point it is routed to an interface zone. If the packet isn't handled there, icmp is accepted, invalids are dropped, and everything else is rejected.

### Conclusion

firewalld is an under-documented firewall configuration tool with more capabilities than most people realize. With its innovative paradigm of zones, firewalld allows the system administrator to break up traffic into categories in which each receives unique treatment, simplifying the configuration process. Because of its intuitive design and syntax, it is practical for both simple single-zone and complex multi-zone configurations.
--------------------------------------------------------------------------------
via: https://www.linuxjournal.com/content/understanding-firewalld-multi-zone-configurations?page=0,0
Author: [Nathan Vance][a]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [Proofreader ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://www.linuxjournal.com/users/nathan-vance
[1]:https://www.linuxjournal.com/tag/firewalls
[2]:https://www.linuxjournal.com/tag/howtos
[3]:https://www.linuxjournal.com/tag/networking
[4]:https://www.linuxjournal.com/tag/security
[5]:https://www.linuxjournal.com/tag/sysadmin
[6]:https://www.linuxjournal.com/users/william-f-polik
[7]:https://www.linuxjournal.com/users/nathan-vance

View File

@ -0,0 +1,73 @@
Introducing the Moby Project: A New Open-Source Project to Advance the Software Containerization Movement
============================================================
![Moby Project](https://i0.wp.com/blog.docker.com/wp-content/uploads/1-2.png?resize=763%2C275&ssl=1)
Since Docker democratized software containers four years ago, a whole ecosystem has grown around containerization, and in this compressed period it has gone through two distinct phases of growth. In each phase, the model for producing container systems evolved to adapt to the size and needs of the user community as well as to the project and its growing ecosystem of contributors.

Moby is a new open-source project intended to advance the software containerization movement and help the ecosystem take containers mainstream. It provides a library of components, a framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.

Let's review how we got to where we are today. In 2013-2014, pioneers started using containers and collaborating in a monolithic open-source codebase, Docker, along with a few other projects, to help the tools mature.
![Docker Open Source](https://i0.wp.com/blog.docker.com/wp-content/uploads/2-2.png?resize=975%2C548&ssl=1)
Then, in 2015-2016, containers were massively adopted in production for cloud-native applications. In this phase, the user community grew to support tens of thousands of deployments, backed by hundreds of ecosystem projects and thousands of contributors. It was during this phase that Docker evolved its production model into an approach based on open components. Doing so allowed us to increase the surface area of innovation and collaboration.

What sprang up were new, independent Docker component projects that helped spur the growth of the partner ecosystem and the user community. During that period, we extracted components from the Docker codebase and iterated on them rapidly, so that systems makers could reuse them independently when building their own container systems: [runc][7], [HyperKit][8], [VPNKit][9], [SwarmKit][10], [InfraKit][11], [containerd][12], and so on.
![Docker Open Components](https://i1.wp.com/blog.docker.com/wp-content/uploads/3-2.png?resize=975%2C548&ssl=1)
Standing at the forefront of the container wave, one trend we see emerging in 2017 is containers going mainstream, spreading to every category of computing: servers, data centers, clouds, desktops, the Internet of Things, and mobile; to every industry and vertical market: finance, healthcare, government, travel, manufacturing; and to every use case: modern web applications, traditional server applications, machine learning, industrial control systems, robotics. What many of the new entrants in the container ecosystem have in common is that they build specialized systems targeted at particular infrastructure, industries, or use cases.

As a company, Docker uses open source as its innovation lab, in collaboration with the whole ecosystem. Docker's success is tied to the success of the container ecosystem: if the ecosystem succeeds, we succeed. Hence, we have been planning for the next phase of container ecosystem growth: what production model will help us scale the container ecosystem and deliver on the promise of making containers mainstream?

Last year, our customers began asking for Docker on many platforms beyond Linux: Mac and Windows desktops, Windows Server, and cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform, and we created [many specialized Docker editions][13] for these platforms. In order to build and ship these specialized editions in a relatively short time, with small teams, in a scalable way, and without reinventing the wheel, it was clear we needed a new approach. We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing [an idea from the car industry][14], where assemblies of components are reused to build completely different cars.
![Docker production model](https://i1.wp.com/blog.docker.com/wp-content/uploads/4-2.png?resize=975%2C548&ssl=1)
We believe the best way to move the container ecosystem to the next level and make containers mainstream is to collaborate at the ecosystem level.
![Moby Project](https://i0.wp.com/blog.docker.com/wp-content/uploads/5-2.png?resize=975%2C548&ssl=1)
To enable this new level of collaboration, today we are announcing the Moby Project, a new open-source project to advance the software containerization movement. It provides a “Lego set” of dozens of components, a framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas. Think of Moby as the “Lego Club” of container systems.

Moby includes:

1. A **library** of containerized backend components (e.g., a low-level builder, a logging facility, volume management, networking, image management, containerd, SwarmKit, etc.)
2. A **framework** for assembling the components into a standalone container platform, and tooling to build, test, and deploy artifacts for these components.
3. A reference assembly called **Moby Origin**, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.

Moby is designed for system builders who want to build their own container-based systems, not for application developers, who can use Docker or other container platforms. Participants in Moby can choose from a library of components derived from Docker, or they can elect to “bring your own components” (BYOC) packaged as containers, and mix and match among all of the components to create a customized container system.

Docker uses the Moby Project as an open R&D lab to experiment, develop new components, and collaborate with the ecosystem on the future of container technology. All of our open-source collaboration will move to Moby. Docker is, and will remain, an open-source product that lets you create, ship, and run containers. From a user's perspective it stays exactly the same, and users can continue to download Docker from docker.com. See [more about the respective roles of Docker and Moby][15] on the Moby website.

Please join us in helping software containers go mainstream, and in growing our ecosystem and user community to the next level by collaborating on components and assemblies.
--------------------------------------------------------------------------------
via: https://blog.docker.com/2017/04/introducing-the-moby-project/
Author: [Solomon Hykes][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [Proofreader ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://blog.docker.com/author/solomon/
[1]:https://blog.docker.com/author/solomon/
[2]:https://blog.docker.com/tag/containerization/
[3]:https://blog.docker.com/tag/moby-library/
[4]:https://blog.docker.com/tag/moby-origin/
[5]:https://blog.docker.com/tag/moby-project/
[6]:https://blog.docker.com/tag/open-source/
[7]:https://github.com/opencontainers/runc
[8]:https://github.com/docker/hyperkit
[9]:https://github.com/docker/vpnkit
[10]:https://github.com/docker/swarmkit
[11]:https://github.com/docker/infrakit
[12]:https://github.com/containerd/containerd
[13]:https://blog.docker.com/2017/03/docker-enterprise-edition/
[14]:https://en.wikipedia.org/wiki/List_of_Volkswagen_Group_platforms
[15]:https://mobyproject.org/#moby-and-docker

View File

@ -1,251 +0,0 @@
Getting Started with OpenFaaS on minikube
============================================================
This article shows how to set up OpenFaaS on Kubernetes 1.8 with the help of [minikube][4], making Serverless Functions simple. minikube is a [Kubernetes][5] distribution that lets you run a Kubernetes cluster on your laptop. It supports both Mac and Linux, but is used mostly on macOS.

> This article is based on our latest deployment guide, the [Official Kubernetes Deployment guide][6].
![](https://cdn-images-1.medium.com/max/1600/1*C9845SlyaaT1_xrAGOBURg.png)
### Installing and deploying minikube

1. Install the [xhyve driver][1] or [VirtualBox][2]; either one can run the Linux virtual machine that hosts minikube. In my experience, VirtualBox is more stable.
2. Install minikube [following the official documentation][3].
3. Install `faas-cli`, either with brew: `brew install faas-cli`, or with the official install script: `curl -sL cli.openfaas.com | sudo sh`
4. Install `helm` with brew: `brew install kubernetes-helm`
5. Start minikube: `minikube start`

> Docker Captain's tip: Docker for Mac and Docker for Windows now ship with built-in support for Kubernetes, so no extra software is needed to try Kubernetes out anymore.
### 在 minikube 上面部署 OpenFaaS
1. 给 Helms 服务器组件新建账号 tiller:
`kubectl -n kube-system create sa tiller && kubectl create clusterrolebinding tiller \`
`--verbose_rbose_usterrole cluster-admin \`
`--verbose_rviceaccount=kube-system:tiller`
2. 安装 Helm 的服务端组件 tiller:
  `helm init --skip-refresh --upgrade --service-account tiller`
3. Git Clone faas-netes (Kubernetes 上面的 OpenFaaS 驱动程序): 
`git clone https://github.com/openfaas/faas-netes && cd faas-netes`
4. Minikube 没有配置 RBAC, 这里我们需要把 RBAC 关闭:
  `helm upgrade --install --debug --reset-values --set async=false --set rbac=false openfaas openfaas/`
译者注RBACRole-Based access control基于角色的访问权限控制在计算机权限管理中较为常用详情请参考以下链接
https://en.wikipedia.org/wiki/Role-based_access_control
Now the OpenFaaS pods should be up and running on your minikube cluster. Run `kubectl get pods` to see them:
```
NAME                            READY     STATUS    RESTARTS   AGE
alertmanager-6dbdcddfc4-fjmrf   1/1       Running   0          1m
faas-netesd-7b5b7d9d4-h9ftx     1/1       Running   0          1m
gateway-965d6676d-7xcv9         1/1       Running   0          1m
prometheus-64f9844488-t2mvn     1/1       Running   0          1m
```
The view from 30,000 ft:

The API gateway process hosts a [minimal UI for testing][7] and exposes a [RESTful API][8] for managing functions.

The faas-netesd daemon is a Kubernetes controller that talks to the Kubernetes API server to manage functions, deployments, and secrets.

The Prometheus and AlertManager processes work together to scale OpenFaaS functions elastically to meet demand. Prometheus metrics show the overall health of the system and can also power rich dashboards.
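If you want to poke at those metrics directly rather than through a dashboard, you can port-forward to the Prometheus pod. The label selector and the metric name below are assumptions based on the faas-netes chart and the gateway's metric conventions, so check `/metrics` on your own deployment:

```
# Find the Prometheus pod (label selector is an assumption) and forward port 9090
PROM_POD=$(kubectl get pods -l app=prometheus -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "$PROM_POD" 9090:9090 &

# Query the invocation counter (metric name is an assumption; verify against /metrics)
curl "http://127.0.0.1:9090/api/v1/query?query=gateway_function_invocation_total"
```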
An example Prometheus dashboard:
![](https://cdn-images-1.medium.com/max/1600/1*b0RnaFIss5fOJXkpIJJgMw.jpeg)
### Deploy, ship, and run

Unlike many other FaaS projects, OpenFaaS builds and versions functions using the Docker image format, which means that in production OpenFaaS gives you:
* Vulnerability scanning (translator's note: I read this as enabling faster patching of vulnerabilities)
* CI/CD (continuous integration and continuous deployment)
* Rolling updates
You can also deploy OpenFaaS onto spare capacity in an existing production cluster; each core service component uses roughly 10-30 MB of memory.

> A key advantage of OpenFaaS is that it works against the container orchestrator's API, giving native integration with Kubernetes and Docker Swarm. And because functions are versioned through a Docker registry, you can scale them on demand without the framework adding extra latency to functions created on the fly.
### Create a function
`faas-cli new --lang python hello`
The command above creates the file `hello.yml` plus a folder for the handler, containing two files, `handler.py` and `requirements.txt`, where you can list any pip modules you need. You can edit these files at any time without worrying about how to maintain a Dockerfile; we maintain it for you, following these best practices (a sketch of the resulting scaffold follows the list):

* Multi-stage builds
* Non-root users
* Images built on Docker's Alpine Linux base (can be changed)
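As a rough sketch of what the scaffold looks like on disk (the file layout is assumed from the Python template and the `handler: ./hello` entry in the stack file; your version of `faas-cli` may differ slightly):

```
# List what `faas-cli new --lang python hello` generated
ls
# hello.yml  hello/          <- stack file plus the handler folder

ls hello/
# handler.py  requirements.txt

# The handler exposes an entry point you are free to edit
cat hello/handler.py
```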
### Build the function

Functions are built locally first and then pushed to a Docker registry. To use Docker Hub, open `hello.yml` and fill in your account name:
```
provider:
  name: faas
  gateway: http://localhost:8080

functions:
  hello:
    lang: python
    handler: ./hello
    image: alexellis2/hello
```
Now kick off a build. Docker needs to be installed on your machine:
`faas-cli build -f hello.yml`
Push the Docker image containing your function to Docker Hub. If you are not yet logged in, run `docker login` first:
`faas-cli push -f hello.yml`
When you have more than one function, you can use `--parallel=N` to build or push across multiple cores. Both commands also support the `--no-cache` and `--squash` options.
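For example, with several functions in one stack file, a build-and-push pass might look like this (the stack file name `stack.yml` is illustrative; the flags are the ones named above):

```
# Build and push all functions in the stack, four at a time, with a clean build
faas-cli build -f stack.yml --parallel=4 --no-cache
faas-cli push -f stack.yml --parallel=4
```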
### Deploy and test the function

Now you can deploy, list, and invoke your functions. Each invocation increments a metric that Prometheus collects.
```
$ export gw=http://$(minikube ip):31112
$ faas-cli deploy -f hello.yml --gateway $gw
Deploying: hello.
No existing function to remove
Deployed.
URL: http://192.168.99.100:31112/function/hello
```
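You can hit that URL directly with any HTTP client. A quick check with curl (the IP and port come from the deploy output above and will vary on your machine):

```
# POST a request body to the function and print the response
curl -d "test" http://192.168.99.100:31112/function/hello
```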
Invoking the function through its URL, as above, is the standard method; you can also use the CLI:
```
$ echo test | faas-cli invoke hello --gateway $gw
```
You can now list the deployed functions, and you will see the invocation counter increase:
```
$ faas-cli list --gateway $gw
Function                        Invocations     Replicas
hello                           1               1
```
_提示这条命令也可以 also accepts a `_--verbose_` flag for more information._
由于我们是在远端群集( Linux 虚拟机)上面运行 OpenFaaS命令里面加上一条 `--gateway` 用来来覆盖环境变量. 这个选项同样适用于云平台上的远程主机。除了加上这条选项以外,还可以通过编辑 .yml 文件里面的 gateway 值来达到同样的效果。
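Concretely, the two approaches look like this (the IP and port are the example values from earlier and will differ on your cluster):

```
# Option 1: pass the gateway explicitly on each call
faas-cli list --gateway http://$(minikube ip):31112

# Option 2: set it once in hello.yml and drop the flag, e.g.
#   gateway: http://192.168.99.100:31112
faas-cli deploy -f hello.yml
```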
### Moving beyond minikube

Once you are comfortable running OpenFaaS on minikube, you can build a Kubernetes cluster on any Linux host and deploy OpenFaaS there. Below is an OpenFaaS demo by Stefan Prodan of WeaveWorks, running on Kubernetes on Google's GKE and showing OpenFaaS's built-in auto-scaling in action:
>译者注下面图片来自twitter可能访问不了
![](https://twitter.com/stefanprodan/status/931490255684939777/photo/1)
### Keep learning

Our GitHub repository has plenty of guides and blog posts to get you on board smoothly; bookmark this page:
[openfaas/faas - OpenFaaS: Serverless Functions Made Simple for Docker & Kubernetes (github.com)][9]
At the DockerCon Moby Summit in Copenhagen in 2017, I gave an overview talk on serverless and OpenFaaS. The video is posted below and runs about 15 minutes (translator's note: the video is on YouTube and may require a proxy to access).

[YouTube video](https://youtu.be/UaReIKa2of8)

Finally, don't forget to follow [OpenFaaS on Twitter][11] for the latest news, the coolest tech, and demos.
--------------------------------------------------------------------------------
via: https://medium.com/@alexellisuk/getting-started-with-openfaas-on-minikube-634502c7acdf
Author: [Alex Ellis][a]

Translator: [mandeler](https://github.com/mandeler)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://medium.com/@alexellisuk?source=post_header_lockup
[1]:https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver
[2]:https://www.virtualbox.org/wiki/Downloads
[3]:https://kubernetes.io/docs/tasks/tools/install-minikube/
[4]:https://kubernetes.io/docs/getting-started-guides/minikube/
[5]:https://kubernetes.io/
[6]:https://github.com/openfaas/faas/blob/master/guide/deployment_k8s.md
[7]:https://github.com/openfaas/faas/blob/master/TestDrive.md
[8]:https://github.com/openfaas/faas/tree/master/api-docs
[9]:https://github.com/openfaas/faas/tree/master/guide
[10]:https://github.com/openfaas/faas/tree/master/guide
[11]:https://twitter.com/openfaas