Merge remote-tracking branch 'upstream/master'

This commit is contained in:
Morisun029 2020-02-07 20:26:08 +08:00
commit dbbaa6bfd6
485 changed files with 16747 additions and 51126 deletions

分布式跟踪系统的四大功能模块如何协同工作
======
> 了解分布式跟踪中的主要体系结构决策,以及各部分如何组合在一起。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/touch-tracing.jpg?itok=rOmsY-nU)
十年前,认真研究过分布式跟踪的基本上只有学者和一小部分大型互联网公司里的人;而如今,对于任何采用微服务的组织来说,它已经成了基本配置。理由很充分:微服务常常以让人意想不到的方式出错,而分布式跟踪是描述和诊断这类错误的最好方法。
也就是说,一旦你准备将分布式跟踪集成到你自己的应用程序中,你将很快意识到对于不同的人来说“<ruby>分布式跟踪<rt>Distributed Tracing</rt></ruby>”一词意味着不同的事物。此外,跟踪生态系统里挤满了具有相似内容的重叠项目。本文介绍了分布式跟踪系统中四个(可能)独立的功能模块,并描述了它们间将如何协同工作。
### 分布式跟踪:一种思维模型
大多数用于跟踪的思维模型来源于 [Google 的 Dapper 论文][1]。[OpenTracing][2] 使用相似的术语,因此,我们从该项目借用了以下术语:
![Tracing][3]
* <ruby>跟踪<rt>Trace</rt></ruby>:对一个事务在分布式系统中运行全过程的描述。
* <ruby>跨度<rt>Span</rt></ruby>:一种命名的定时操作,表示工作流的一部分。跨度可接受键值对标签以及附加到特定跨度实例的细粒度的、带有时间戳的结构化日志。
* <ruby>跨度上下文<rt>Span context</rt></ruby>:随分布式事务一起传递的跟踪信息,包括事务经由网络或消息总线从一个服务传递到另一个服务的时候。跨度上下文包含跟踪标识符、跨度标识符,以及跟踪系统需要传播给下游服务的任何其他数据。
如果你想要深入研究这种思维模型的细节,请仔细阅读 [OpenTracing 技术规范][4]。
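为了让这些术语更具体一些,下面用 OpenTracing 的 Python 绑定给出一个最小示意(假设已安装 `opentracing` 包;在没有接入具体监控系统时,这些调用都是无操作的):
```
import opentracing

# 未配置具体跟踪器时global_tracer() 返回一个无操作no-op实现
tracer = opentracing.global_tracer()

# 一个跨度:一次命名的定时操作
with tracer.start_active_span('checkout') as scope:
    scope.span.set_tag('http.method', 'POST')    # 键值对标签
    scope.span.log_kv({'event': 'cache_miss'})   # 带时间戳的结构化日志
```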
### 四大功能模块
从应用层分布式跟踪系统的观点来看,现代软件系统架构如下图所示:
![Tracing][5]
现代软件系统的组件可分为三类:
* **应用程序和业务逻辑**:你的代码。
* **广泛共享库**:他人的代码。
* **广泛共享服务**:他人的基础架构。
这三类组件有着不同的需求,这些需求驱动着监控应用程序的分布式跟踪系统的设计,并最终催生出四个重要的组成部分:
* <ruby>跟踪检测 API<rt>A tracing instrumentation API</rt></ruby>:修饰应用程序代码
* <ruby>线路协议<rt>Wire protocol</rt></ruby>:规定在 RPC 请求中与应用程序数据一同发送哪些内容
* <ruby>数据协议<rt>Data protocol</rt></ruby>:规定如何将异步(带外)信息发送到你的分析系统
* <ruby>分析系统<rt>Analysis system</rt></ruby>:用于处理跟踪数据的数据库和交互式用户界面
为了更深入地解释这个概念,我们将深入研究驱动该设计的细节。如果你只想听我的建议,请直接跳到下面的“四个方面的解决方案”。
### 需求,细节和解释
应用程序代码、共享库以及共享式服务在操作上有显著的差别,这种差别深刻影响了对它们进行检测的需求。
#### 检测应用程序代码和业务逻辑
在任何特定的微服务中,由微服务开发者编写的大部分代码是应用程序逻辑或业务逻辑,也就是定义特定领域操作的代码。通常,它包含了那些特殊的、独一无二的逻辑,正是这些逻辑在最初证明了创建一个新微服务的合理性。基本上按照定义,**这部分代码通常不会在多个服务中共享或以其他方式重复出现。**
也即是说你仍需了解它,这也意味着需要以某种方式对它进行检测。一些监控和跟踪分析系统使用<ruby>黑盒代理<rt>black-box agents</rt></ruby>自动检测代码,另一些系统更想使用显式的白盒检测工具。对于后者,抽象跟踪 API 提供了许多对于微服务的应用程序代码来说更为实用的优势:
* 抽象 API 允许你在不重写检测代码的情况下换用新的监控工具。你可能想要更换云服务提供商、供应商和监控技术,而一大堆不可移植的检测代码将会为这一过程增添不小的开销和麻烦。
* 事实证明,除了生产监控之外,该工具还有其他有趣的用途。现有的项目使用相同的跟踪工具来驱动测试工具、分布式调试器、“混沌工程”故障注入器和其他元应用程序。
* 但更重要的是:如果应用程序组件被提取到共享库中,会发生什么?这就引出了下面的内容:
#### 检测共享库
在大多数应用程序中出现的实用程序代码(处理网络请求、数据库调用、磁盘写操作、线程、并发管理等)通常情况下是通用的,而非特别应用于某个特定应用程序。这些代码会被打包成库和框架,而后就可以被装载到许多的微服务上并且被部署到多种不同的环境中。
其真正的不同在于:对于共享代码,使用者是其他人,而大多数使用者有着不同的依赖关系和操作风格。如果你尝试为这些共享代码编写检测,将会注意到几个常见的问题:
* 你需要一个 API 来编写检测代码。然而,你的库并不知道最终会使用哪个分析系统。可能的选择有很多,而运行在同一个应用程序中的所有库,不能各自做出互不兼容的选择。
* 由于这些包封装了所有网络处理代码,因此从请求报头注入和提取跨度上下文的任务往往落在 RPC 库身上。然而,共享库无法预知每个应用程序使用的是哪种跟踪协议。
* 最后,你不想把相互冲突的依赖强加给用户。即使他们都使用 gRPC所绑定的 gRPC 版本也未必相同。因此,你的库随附的任何跟踪检测 API 都必须是零依赖的。
**因此一个a没有依赖、b与线路协议无关、c能配合各种流行供应商和分析系统工作的抽象 API就是对共享库检测代码的要求。**
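下面的草图展示了这样一个抽象 API 如何让共享的 RPC 库注入和提取跨度上下文,而无需关心具体的线路协议或分析系统(仍以 OpenTracing 的 Python 绑定作示意,报头的实际内容由接入的跟踪器决定):
```
import opentracing
from opentracing.propagation import Format

tracer = opentracing.global_tracer()

# 客户端:把当前跨度上下文注入到随 RPC 请求发送的报头里
headers = {}
with tracer.start_active_span('client_call') as scope:
    tracer.inject(scope.span.context, Format.HTTP_HEADERS, headers)

# 服务端:从收到的报头中提取上下文,作为父跨度继续这个分布式事务
parent_ctx = tracer.extract(Format.HTTP_HEADERS, headers)
with tracer.start_active_span('server_work', child_of=parent_ctx):
    pass  # 处理请求
```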
#### 检测共享式服务
最后,有时整个服务(或微服务集合体)的通用性足以使许多独立的应用程序使用它们。这种共享式服务通常由第三方托管和管理,例如缓存服务器、消息队列以及数据库。
从应用程序开发者的角度来看,关键是要明白:共享式服务本质上是黑盒子。你无法把自己的检测代码注入到共享式服务中;恰恰相反,托管服务通常运行着它自己的监控方案。
### 四个方面的解决方案
因此,抽象的跟踪 API 可以帮助库发出跟踪数据并注入/提取跨度上下文;标准的线路协议可以帮助黑盒服务相互连接;而标准的数据格式可以帮助各自独立的分析系统合并它们的数据。让我们来看一下几个有希望解决这些问题的方案。
#### 跟踪 APIOpenTracing 项目
如你所见,我们需要一个跟踪 API 来检测应用程序代码。而要将这种检测扩展到执行大部分跨度上下文注入和提取工作的共享库中,该 API 就必须以某些关键的方式进行抽象。
[OpenTracing][2] 项目主要就是为了解决库开发者的问题。OpenTracing 是一个与供应商无关的跟踪 API它没有任何依赖并且迅速得到了许多监控系统的支持。这意味着如果库附带了内置的原生 OpenTracing 检测,那么只要应用程序启动时接入了某个监控系统,跟踪就会自动开始。
就个人而言,作为一个编写、发布和运维开源软件已有十多年的人,能在 OpenTracing 项目上工作并终于解决这个可观察性难题,令我十分满足。
除了 API 之外OpenTracing 项目还维护了一个不断增长的工具列表,其中一些可以在[这里][6]找到。如果你想参与进来,无论是通过提供一个检测插件,对你自己的 OSS 库进行本地测试,或者仅仅只想问个问题,都可以通过 [Gitter][7] 向我们打招呼。
#### 线路协议HTTP 报头 trace-context
为了监控系统能进行互操作,以及减轻从一个监控系统切换为另外一个时带来的迁移问题,需要标准的线路协议来传播跨度上下文。
[w3c 分布式跟踪上下文社区小组][8]在努力制定此标准。目前的重点是制定一系列标准的 HTTP 报头。该规范的最新草案可以在[此处][9]找到。如果你对此小组有任何的疑问,[邮件列表][10]和[Gitter 聊天室][11]是很好的解惑地点。
LCTT 译注:本文原文发表于 2018 年 5 月,可能现在社区已有不同进展)
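LCTT 译注:该规范后来于 2020 年正式定稿为 W3C Trace Context 标准。最终标准定义的 `traceparent` 报头形如下例,四段依次为版本、跟踪标识符、父跨度标识符和标志,均为十六进制:)
```
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```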
#### 数据协议(还未出现!!)
对于黑盒服务,在无法安装跟踪程序或无法与程序进行交互的情况下,需要使用数据协议从系统中导出数据。
目前,这种数据格式和协议的开发工作尚处于初级阶段,并且大多是在 w3c 分布式跟踪上下文社区小组的范围内进行的。需要特别关注的是在标准数据模式中定义更高级别的概念,例如 RPC 调用、数据库语句等,这将允许跟踪系统对可用的数据类型做出假设。OpenTracing 项目也通过定义一套[标准标签集][12]来处理这个问题,计划是让这两项工作的成果相互配合。
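作为示意,下面是在检测代码中使用这类标准标签的样子(标签键取自 OpenTracing 的标准标签集,操作名和语句仅为举例):
```
import opentracing

tracer = opentracing.global_tracer()

with tracer.start_active_span('db_query') as scope:
    # 标准化的标签键让分析系统能够识别出这是一次 SQL 客户端调用
    scope.span.set_tag('span.kind', 'client')
    scope.span.set_tag('db.type', 'sql')
    scope.span.set_tag('db.statement', 'SELECT * FROM users WHERE id = ?')
```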
注意:当前还有一个中间地带。对于那些由应用程序开发者运维、但开发者又不想重新编译或修改其代码的“网络设备”,动态链接可以派上用场,典型的例子就是服务网格和代理,比如 Envoy 或者 NGINX。针对这种情况可以将兼容 OpenTracing 的跟踪器编译为共享对象,然后在运行时动态链接到可执行文件中。目前 [C++ OpenTracing API][13] 提供了该选项,而 Java 的 OpenTracing [跟踪器解析][14]也在开发中。
这些解决方案适用于支持动态链接、并由应用程序开发者部署的服务。但从长远来看,标准的数据协议可以更广泛地解决该问题。
#### 分析系统:从跟踪数据中提取有见解的服务
最后但同样重要的是,现在已经有足够多的跟踪监控解决方案。可以在[此处][15]找到已知与 OpenTracing 兼容的监控系统列表,但除此之外还有更多选择。我鼓励你自行研究这些解决方案,并希望你在比较它们时发现本文提供的框架能派上用场。除了根据监控系统的操作特性对其进行评估外(更不用说你是否喜欢其 UI 和功能),还要确保你考虑到了上述三个重要方面、它们对你的相对重要性,以及你感兴趣的跟踪系统如何为它们提供解决方案。
### 结论
最后,每个部分的重要性在很大程度上取决于你是谁以及正在建立什么样的系统。举个例子,开源库的作者对 OpenTracing API 非常感兴趣,而服务开发者对 trace-context 规范更感兴趣。当有人说一部分比另一部分重要时,他们的意思通常是“一部分对我来说比另一部分重要”。
然而,事实是:分布式跟踪已经成为监控现代系统所必不可少的事物。在为这些系统设计构建模块时,“尽可能解耦”的老办法仍然适用。在构建像分布式监控系统这样横跨多个系统的系统时,干净地解耦组件是保持灵活性和前向兼容性的最佳方式。
感谢你的阅读!现在,当你准备在自己的应用程序中实现跟踪时,你已有一份指南,可以了解人们谈论的是哪个部分,以及这些部分如何协同工作。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/distributed-tracing
作者:[Ted Young][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[chenmu-kk](https://github.com/chenmu-kk)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tedsuo
[1]:https://research.google.com/pubs/pub36356.html
[2]:http://opentracing.io/
[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing1_0.png?itok=dvDTX0JJ (Tracing)
[4]:https://github.com/opentracing/specification/blob/master/specification.md
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing2_0.png?itok=yokjNLZk (Tracing)
[6]:https://github.com/opentracing-contrib/
[7]:https://gitter.im/opentracing/public
[8]:https://www.w3.org/community/trace-context/
[9]:https://w3c.github.io/distributed-tracing/report-trace-context.html
[10]:http://lists.w3.org/Archives/Public/public-trace-context/
[11]:https://gitter.im/TraceContext/Lobby
[12]:https://github.com/opentracing/specification/blob/master/semantic_conventions.md
[13]:https://github.com/opentracing/opentracing-cpp
[14]:https://github.com/opentracing-contrib/java-tracerresolver
[15]:http://opentracing.io/documentation/pages/supported-tracers
[16]:https://events.linuxfoundation.org/kubecon-eu-2018/
[17]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/

MidnightBSD 或许是你通往 FreeBSD 的大门
======
![](https://www.linux.com/wp-content/uploads/2019/08/midnight_4_0.jpg)
[FreeBSD][1] 是一个开源操作系统,衍生自著名的 <ruby>[伯克利软件套件][2]<rt>Berkeley Software Distribution</rt></ruby>BSD。FreeBSD 的第一个版本发布于 1993 年并且仍然在继续发展。2007 年左右Lucas Holt 想要创建一个 FreeBSD 的分支,以便利用 [GNUstep][3] 对 OpenStep现在是 Cocoa的 Objective-C 框架、widget 工具包和应用程序开发工具的实现。为此,他开始开发 MidnightBSD 桌面发行版。
MidnightBSD以 Lucas 的猫 Midnight 命名)仍然在积极地(尽管缓慢)开发。从 2017 年 8 月开始可以获得最新的稳定发布版本0.8.6LCTT 译注:截止至本译文发布时,当前是 2019/10/31 发布的 1.2 版)。尽管 BSD 发行版算不上用户友好,但动手安装一次,是熟悉如何应对文本式ncurses安装过程、并通过命令行完成收尾工作的好方法。
这样,你最终会得到一个非常可靠的、基于 FreeBSD 分支的桌面发行版。这需要花费一点精力,但是如果你是一名想要扩展自己技能的 Linux 用户……这是一个很好的起点。
我将带你走过安装 MidnightBSD 的流程,如何添加一个图形桌面环境,然后如何安装应用程序。
### 安装
正如我所提到的这是一个文本ncurses安装过程因此在这里找不到可以用鼠标点击的地方。相反你将使用你键盘的 `Tab` 键和箭头键。在你下载[最新的发布版本][4]后,将它刻录到一个 CD/DVD 或 USB 驱动器,并启动你的机器(或者在 [VirtualBox][5] 中创建一个虚拟机)。安装程序将打开并给你三个选项(图 1。使用你的键盘的箭头键选择 “Install”并敲击回车键。
![MidnightBSD installer][6]
*图 1: 启动 MidnightBSD 安装程序。*
在这里要经历相当多的屏幕。其中很多屏幕是一目了然的:
1. 设置非默认键盘映射(是/否)
2. 设置主机名称
3. 添加可选系统组件(文档、游戏、32 位兼容性、系统源代码)
4. 对硬盘分区
5. 管理员密码
6. 配置网络接口
7. 选择地区(时区)
8. 启用服务(例如 ssh
9. 添加用户(图 2
![Adding a user][7]
*图 2: 向系统添加一个用户。*
在你向系统添加用户后,你将进入一个窗口(图 3),在这里你可以处理任何可能忘记配置或想要重新配置的项目。如果你不需要做任何更改,选择 “Exit”你的配置就会被应用。
![Applying your configurations][8]
*图 3: 应用你的配置。*
在接下来的窗口中,当出现提示时,选择 “No”接下来系统将重启。在 MidnightBSD 重启后,你已经为下一阶段的安装做好了准备。
### 后安装阶段
当你新安装的 MidnightBSD 启动后,你会发现自己处于命令提示符下,此刻还没有图形界面。要安装应用程序MidnightBSD 依赖 `mport` 工具。比如说你想安装 Xfce 桌面环境,为此,登录到 MidnightBSD 中,并发出下面的命令:
```
sudo mport index
sudo mport install xorg
```
你现在已经安装好 Xorg 窗口服务器了,它允许你安装桌面环境。使用命令来安装 Xfce
```
sudo mport install xfce
```
现在 Xfce 已经安装好。不过,我们必须让它能够通过 `startx` 命令启动。为此,让我们先安装 nano 编辑器。发出命令:
```
sudo mport install nano
```
随着 nano 安装好,发出命令:
```
nano ~/.xinitrc
```
这个文件仅包含一行内容:
```
exec startxfce4
```
保存并关闭这个文件。如果你现在发出命令 `startx`Xfce 桌面环境就会启动。你应该会感到有点熟悉了吧(图 4
![ Xfce][9]
*图 4: Xfce 桌面界面已准备好服务。*
因为你不会总想手动发出 `startx` 命令,所以你会希望启用登录守护进程。然而它尚未安装,要安装这个子系统,发出命令:
```
sudo mport install mlogind
```
安装完成后,在 `/etc/rc.conf` 文件的底部添加一个条目,让 mlogind 在开机时启用:
```
mlogind_enable="YES"
```
保存并关闭该文件。现在,当你启动(或重启)机器时,你应该会看到图形登录屏幕。在写这篇文章的时候,在登录后我最后得到一个空白屏幕和讨厌的 X 光标。不幸的是,目前似乎并没有这个问题的解决方法。所以,要访问你的桌面环境,你必须使用 `startx` 命令。
### 安装应用
默认情况下,你找不到多少可用的应用程序,而且如果你尝试用 `mport` 安装应用程序,很快就会因为找得到的应用太少而感到沮丧。为解决这个问题,我们需要用 `svnlite` 命令检出可用的 mport 软件列表。回到终端窗口,并发出命令:
```
svnlite co http://svn.midnightbsd.org/svn/mports/trunk mports
```
在你完成这些后,你应该会看到一个名为 `~/mports` 的新目录。使用命令 `cd ~/mports` 进入这个目录,再发出 `ls` 命令,你应该会看到许多分类目录(图 5
![applications][10]
*图 5: mport 现在可用的应用程序类别。*
你想安装 Firefox 吗?如果你查看 `www` 目录,你将看到一个 `linux-firefox` 列表。发出命令:
```
sudo mport install linux-firefox
```
现在你应该会在 Xfce 桌面菜单中看到一个 Firefox 项。翻找所有的类别,并使用 `mport` 命令来安装你需要的所有软件。
### 一个悲哀的警告
一个悲哀的小警告是:`mport`(通过 `svnlite`)能找到的唯一一个办公套件是非常过时的 OpenOffice 3。尽管在 `~/mports/editors` 目录中能找到 Abiword但它看起来无法安装甚至在装好 OpenOffice 3 之后,它也会报出执行格式错误。换句话说,在办公生产力方面,你没法用 MidnightBSD 做太多事情。不过,嘿,如果你手边正好有一台旧的 Palm Pilot你可以安装 pilot-link。总之可用的软件还不足以构成一个对普通用户来说极其有用的桌面发行版。但如果你想在 MidnightBSD 上做开发,你会找到很多可以安装的工具(查看 `~/mports/devel` 目录)。你甚至可以使用命令安装 Drupal
```
sudo mport install drupal7
```
当然在此之后你将需要创建一个数据库MySQL 已随之安装)、安装 Apache`sudo mport install apache24`),并完成必要的 Apache 配置。
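要让这些服务开机自启,可以沿用前面 mlogind 的做法,在 `/etc/rc.conf` 中添加相应条目。下面是一个最小示意(假设 MidnightBSD 沿用 FreeBSD 的 rc.conf 约定,变量名以你实际安装的服务版本为准):
```
apache24_enable="YES"
mysql_enable="YES"
```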
显然地,已安装的和可以安装的是一个应用程序、系统和服务的大杂烩。但是随着足够多的工作,你最终可以得到一个能够服务于特殊目的的发行版。
### 享受 \*BSD 的优点
以上就是安装 MidnightBSD 并把它配置成一个还算有用的桌面发行版的方法。它不像很多其它 Linux 发行版那样快速简便,但如果你想要一个促使你思考的发行版,这可能正是你要找的。尽管大多数竞争对手都预备了大量可安装的应用软件MidnightBSD 无疑是 Linux 爱好者或系统管理员值得一试的有趣挑战。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/5/midnightbsd-could-be-your-gateway-freebsd
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.freebsd.org/
[2]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[3]:https://en.wikipedia.org/wiki/GNUstep
[4]:http://www.midnightbsd.org/download/
[5]:https://www.virtualbox.org/
[6]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_1.jpg (MidnightBSD installer)
[7]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_2.jpg (Adding a user)
[8]:https://lcom.static.linuxfound.org/sites/lcom/files/mightnight_3.jpg (Applying your configurations)
[9]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_4.jpg (Xfce)
[10]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_5.jpg (applications)
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11810-1.html)
[#]: subject: (Getting started with OpenSSL: Cryptography basics)
[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-1)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen)
OpenSSL 入门:密码学基础知识
======
> 想要入门密码学的基础知识,尤其是有关 OpenSSL 的入门知识吗?继续阅读。
![](https://img.linux.net.cn/data/attachment/album/202001/23/142249fpnhyqz9y2cz1exe.jpg)
本文是使用 [OpenSSL][2] 的密码学基础知识的两篇文章中的第一篇OpenSSL 是在 Linux 和其他系统上流行的生产级库和工具包。(要安装 OpenSSL 的最新版本,请参阅[这里][3]。OpenSSL 实用程序可在命令行使用,程序也可以调用 OpenSSL 库中的函数。本文的示例程序使用的是 C 语言,即 OpenSSL 库的源语言。
本系列的两篇文章涵盖了加密哈希、数字签名、加密和解密以及数字证书。你可以从[我的网站][4]的 ZIP 文件中找到这些代码和命令行示例。
让我们首先回顾一下 OpenSSL 名称中的 SSL。
### OpenSSL 简史
<ruby>[安全套接字层][5]<rt>Secure Socket Layer</rt></ruby>SSL是 Netscape 在 1995 年发布的一种加密协议。该协议层可以位于 HTTP 之上,从而为 HTTPS 提供了 S<ruby>安全<rt>secure</rt></ruby>。SSL 协议提供了各种安全服务,其中包括两项在 HTTPS 中至关重要的服务:
* <ruby>对等身份验证<rt>Peer authentication</rt></ruby>(也称为相互质询):连接的每一边都对另一边的身份进行身份验证。如果 Alice 和 Bob 要通过 SSL 交换消息,则每个人首先验证彼此的身份。
* <ruby>机密性<rt>Confidentiality</rt></ruby>:发送者在通过通道发送消息之前先对其进行加密。然后,接收者解密每个接收到的消息。此过程可保护网络对话。即使窃听者 Eve 截获了从 Alice 到 Bob 的加密消息(即*中间人*攻击Eve 会发现他无法在计算上解密此消息。
反过来,这两个关键 SSL 服务与其他不太受关注的服务相关联。例如SSL 支持消息完整性,从而确保接收到的消息与发送的消息相同。此功能是通过哈希函数实现的,哈希函数也随 OpenSSL 工具箱一起提供。
SSL 有多个版本(例如 SSLv2 和 SSLv3并且在 1999 年出现了一个基于 SSLv3 的类似协议<ruby>传输层安全性<rt>Transport Layer Security</rt></ruby>TLS。TLSv1 和 SSLv3 相似,但不足以相互配合工作。不过,通常将 SSL/TLS 称为同一协议。例如,即使正在使用的是 TLS而非 SSLOpenSSL 函数也经常在名称中包含 SSL。此外调用 OpenSSL 命令行实用程序以 `openssl` 开始。
除了 man 页面之外OpenSSL 的文档是零零散散的,而鉴于 OpenSSL 工具包非常庞大,这些文档也很难检索使用。命令行和代码示例可以让主要议题落到实处。让我们从一个熟悉的示例开始(使用 HTTPS 访问网站),然后以该示例来挑出我们感兴趣的加密部分进行讲述。
### 一个 HTTPS 客户端
此处显示的 `client` 程序通过 HTTPS 连接到 Google
```
/* compilation: gcc -o client client.c -lssl -lcrypto */

#include <stdio.h>
#include <stdlib.h>
#include <openssl/bio.h> /* BasicInput/Output streams */
#include <openssl/err.h> /* errors */
#include <openssl/ssl.h> /* core library */

#define BuffSize 1024

void report_and_exit(const char* msg) {
  perror(msg);
  ERR_print_errors_fp(stderr);
  exit(-1);
}

void init_ssl() {
  SSL_load_error_strings();
  SSL_library_init();
}

void cleanup(SSL_CTX* ctx, BIO* bio) {
  SSL_CTX_free(ctx);
  BIO_free_all(bio);
}

void secure_connect(const char* hostname) {
  char name[BuffSize];
  char request[BuffSize];
  char response[BuffSize];

  const SSL_METHOD* method = TLSv1_2_client_method();
  if (NULL == method) report_and_exit("TLSv1_2_client_method...");

  SSL_CTX* ctx = SSL_CTX_new(method);
  if (NULL == ctx) report_and_exit("SSL_CTX_new...");

  BIO* bio = BIO_new_ssl_connect(ctx);
  if (NULL == bio) report_and_exit("BIO_new_ssl_connect...");

  SSL* ssl = NULL;

  /* 链路 bio 通道SSL 会话和服务器端点 */
  sprintf(name, "%s:%s", hostname, "https");
  BIO_get_ssl(bio, &ssl); /* 会话 */
  SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* 鲁棒性 */
  BIO_set_conn_hostname(bio, name); /* 准备连接 */

  /* 尝试连接 */
  if (BIO_do_connect(bio) <= 0) {
    cleanup(ctx, bio);
    report_and_exit("BIO_do_connect...");
  }

  /* 验证信任库,检查证书 */
  if (!SSL_CTX_load_verify_locations(ctx,
                                     "/etc/ssl/certs/ca-certificates.crt", /* 信任库 */
                                     "/etc/ssl/certs/")) /* 其它信任库 */
    report_and_exit("SSL_CTX_load_verify_locations...");

  long verify_flag = SSL_get_verify_result(ssl);
  if (verify_flag != X509_V_OK)
    fprintf(stderr,
            "##### Certificate verification error (%i) but continuing...\n",
            (int) verify_flag);

  /* 获取主页作为示例数据 */
  sprintf(request,
          "GET / HTTP/1.1\x0D\x0AHost: %s\x0D\x0A\x43onnection: Close\x0D\x0A\x0D\x0A",
          hostname);
  BIO_puts(bio, request);

  /* 从服务器读取 HTTP 响应并打印到输出 */
  while (1) {
    memset(response, '\0', sizeof(response));
    int n = BIO_read(bio, response, BuffSize);
    if (n <= 0) break; /* 0 代表流结束,< 0 代表有错误 */
    puts(response);
  }

  cleanup(ctx, bio);
}

int main() {
  init_ssl();

  const char* hostname = "www.google.com:443";
  fprintf(stderr, "Trying an HTTPS connection to %s...\n", hostname);
  secure_connect(hostname);

  return 0;
}
```
可以从命令行编译和执行该程序(请注意 `-lssl``-lcrypto` 中的小写字母 `L`
```
gcc -o client client.c -lssl -lcrypto
```
该程序尝试打开与网站 [www.google.com][13] 的安全连接。在与 Google Web 服务器的 TLS 握手过程中,`client` 程序会收到一个或多个数字证书,该程序会尝试对其进行验证(但在我的系统上失败了)。尽管如此,`client` 程序仍继续通过安全通道获取 Google 主页。该程序依赖于前面提到的安全工件,尽管上述代码中只着重突出了数字证书,但其它工件仍在幕后发挥作用,稍后将对它们进行详细说明。
通常,打开 HTTP非安全通道的 C 或 C++ 客户端程序会使用诸如*文件描述符*或*网络套接字*之类的结构,它们是两个进程(例如这个 `client` 程序和 Google Web 服务器)之间连接的端点。顺带一提,文件描述符是一个非负整数,用于在程序中标识该程序打开的任何类似文件的结构。这样的程序还会使用一种结构来指定 Web 服务器地址的详细信息。
这些相对较低级别的结构不会出现在客户端程序中,因为 OpenSSL 库会将套接字基础设施和地址规范等封装在更高层面的安全结构中。其结果是一个简单的 API。下面首先看一下 `client` 程序示例中的安全性详细信息。
* 该程序首先加载相关的 OpenSSL 库,我的函数 `init_ssl` 中对 OpenSSL 进行了两次调用:
```
SSL_load_error_strings();
SSL_library_init();
```
* 下一个初始化步骤尝试获取安全*上下文*,这是建立和维护通往 Web 服务器的安全通道所需的信息框架。如对 OpenSSL 库函数的调用所示,在示例中使用了 TLS 1.2
```
const SSL_METHOD* method = TLSv1_2_client_method(); /* TLS 1.2 */
```
如果调用成功,则 `method` 指针会被传递给库函数,该函数创建类型为 `SSL_CTX` 的上下文:
```
SSL_CTX* ctx = SSL_CTX_new(method);
```
`client` 程序会检查每个关键的库调用的错误,如果其中一个调用失败,则程序终止。
* 现在还有另外两个 OpenSSL 工件也在发挥作用SSL 类型的安全会话,从头到尾管理安全连接;以及类型为 BIO<ruby>基本输入/输出<rt>Basic Input/Output</rt></ruby>)的安全流,用于与 Web 服务器进行通信。BIO 流是通过以下调用生成的:
```
BIO* bio = BIO_new_ssl_connect(ctx);
```
请注意,那个至关重要的上下文正是该调用的参数。`BIO` 类型是 C 语言中 `FILE` 类型的 OpenSSL 封装器。此封装器可保护 `client` 程序与 Google 的网络服务器之间的输入和输出流的安全。
* 有了 `SSL_CTX``BIO`,然后程序在 SSL 会话中将它们组合在一起。三个库调用可以完成工作:
```
BIO_get_ssl(bio, &ssl); /* 会话 */
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* 鲁棒性 */
BIO_set_conn_hostname(bio, name); /* 准备连接 */
```
安全连接本身是通过以下调用建立的:
```
BIO_do_connect(bio);
```
如果最后一个调用不成功,则 `client` 程序终止;否则,该连接已准备就绪,可以支持 `client` 程序与 Google Web 服务器之间的机密对话。
在与 Web 服务器握手期间,`client` 程序会接收一个或多个数字证书,以认证服务器的身份。但是,`client` 程序不会发送自己的证书这意味着这个身份验证是单向的。Web 服务器通常配置为**不**需要客户端证书。)尽管对 Web 服务器证书的验证失败,但 `client` 程序仍通过连接到 Web 服务器的安全通道继续获取了 Google 主页。
为什么验证 Google 证书的尝试会失败?典型的 OpenSSL 安装目录为 `/etc/ssl/certs`,其中包含 `ca-certificates.crt` 文件。该目录和文件包含着 OpenSSL 自带的数字证书,以此构成<ruby>信任库<rt>truststore</rt></ruby>。可以根据需要更新信任库,尤其是可以包括新信任的证书,并删除不再受信任的证书。
`client` 程序从 Google Web 服务器收到了三个证书,但是我的计算机上的 OpenSSL 信任库并不包含完全匹配的证书。如目前所写,`client` 程序不会进一步处理此问题,例如去验证 Google 证书上的数字签名(一个为该证书作保的签名)。如果该签名是受信任的,则包含该签名的证书也应受信任。尽管如此,`client` 程序仍继续获取页面,然后打印出 Google 的主页。下一节将更详细地介绍这些。
### 客户端程序中隐藏的安全性
让我们从客户端示例中可见的安全工件(数字证书)开始,然后考虑其他安全工件如何与之相关。数字证书的主要格式标准是 X509生产级的证书由诸如 [Verisign][14] 的<ruby>证书颁发机构<rt>Certificate Authority</rt></ruby>CA颁发。
数字证书中包含各种信息(例如,激活日期和失效日期以及所有者的域名),也包括发行者的身份和*数字签名*(这是加密过的*加密哈希*值)。证书还具有未加密的哈希值,用作其标识*指纹*。
哈希值来自将任意数量的二进制位映射到固定长度的摘要。这些位代表什么(会计报告、小说或数字电影)无关紧要。例如,<ruby>消息摘要版本 5<rt>Message Digest version 5</rt></ruby>MD5哈希算法将任意长度的输入位映射到 128 位哈希值,而 SHA1<ruby>安全哈希算法版本 1<rt>Secure Hash Algorithm version 1</rt></ruby>)算法将输入位映射到 160 位哈希值。不同的输入位会导致不同的(实际上在统计学上是唯一的)哈希值。下一篇文章将会进行更详细的介绍,并着重介绍什么使哈希函数具有加密功能。
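用 Python 标准库的 hashlib 可以直观地看到这一点:无论输入多长,摘要长度都是固定的(此处的输入内容仅为示例):
```
import hashlib

data = b"any number of bits"
print(hashlib.md5(data).hexdigest())   # 32 个十六进制字符 = 128 位
print(hashlib.sha1(data).hexdigest())  # 40 个十六进制字符 = 160 位
```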
数字证书的类型有所不同(例如根证书、中间证书和最终实体证书),并形成了反映这些证书类型的层次结构。顾名思义,*根*证书位于层次结构的顶部其下的证书继承了根证书所具有的信任。OpenSSL 库和大多数现代编程语言都具有 X509 数据类型以及处理此类证书的函数。来自 Google 的证书具有 X509 格式,`client` 程序会检查该证书是否为 `X509_V_OK`
X509 证书基于<ruby>公共密钥基础结构<rt>public-key infrastructure</rt></ruby>PKI其中包括的算法RSA 是占主导地位的算法)用于生成*密钥对*:公共密钥及其配对的私有密钥。公钥是一种身份:[Amazon][15] 的公钥对其进行标识,而我的公钥对我进行标识。私钥应由其所有者负责保密。
成对出现的密钥具有标准用途。可以使用公钥对消息进行加密,然后可以使用同一个密钥对中的私钥对消息进行解密。私钥也可以用于对文档或其他电子工件(例如程序或电子邮件)进行签名,然后可以使用该对密钥中的公钥来验证签名。以下两个示例补充了一些细节。
在第一个示例中Alice 将她的公钥分发给全世界,包括 Bob。然后Bob 用 Alice 的公钥加密邮件,然后将加密的邮件发送给 Alice。用 Alice 的公钥加密的邮件将可以用她的私钥解密(假设是她自己的私钥),如下所示:
```
+------------------+ encrypted msg +-------------------+
Bob's msg--->|Alice's public key|--------------->|Alice's private key|---> Bob's msg
+------------------+ +-------------------+
```
理论上可以在没有 Alice 的私钥的情况下解密消息,但在实际情况中,如果使用像 RSA 这样的加密密钥对系统,则在计算上做不到。
现在来看第二个示例:对文档签名以证明其真实性。签名算法使用密钥对中的私钥来处理要签名的文档的加密哈希:
```
+-------------------+
Hash of document--->|Alice's private key|--->Alice's digital signature of the document
+-------------------+
```
假设 Alice 以数字方式签署了发送给 Bob 的合同。然后Bob 可以使用 Alice 密钥对中的公钥来验证签名:
```
+------------------+
Alice's digital signature of the document--->|Alice's public key|--->verified or not
+------------------+
```
假若没有 Alice 的私钥,就无法轻松伪造 Alice 的签名因此Alice 有必要保密她的私钥。
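下面的 Python 草图把上述两个示例串了起来(假设已安装第三方的 `cryptography` 库,它的底层同样基于 OpenSSL正文中的 C 示例才是 OpenSSL 的原生用法):
```
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Alice 的密钥对
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# 第一个示例:用 Alice 的公钥加密,只有她的私钥能解密
ciphertext = public_key.encrypt(b"Bob's msg", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"Bob's msg"

# 第二个示例:用 Alice 的私钥签名,任何人都可用她的公钥验证
signature = private_key.sign(b"contract", pss, hashes.SHA256())
public_key.verify(signature, b"contract", pss, hashes.SHA256())  # 验证失败会抛出 InvalidSignature
```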
`client` 程序中,除了数字证书以外,这些安全性都没有明确展示。下一篇文章使用使用 OpenSSL 实用程序和库函数的示例填充更多详细的信息。
### 命令行的 OpenSSL
同时,让我们看一下 OpenSSL 命令行实用程序:特别是在 TLS 握手期间检查来自 Web 服务器的证书的实用程序。调用 OpenSSL 实用程序可以使用 `openssl` 命令,然后添加参数和标志的组合以指定所需的操作。
看看以下命令:
```
openssl list-cipher-algorithms
```
该输出是组成<ruby>加密算法套件<rt>cipher suite</rt></ruby>的相关算法的列表。下面是列表的开头,加了澄清首字母缩写词的注释:
```
AES-128-CBC ## Advanced Encryption Standard, Cipher Block Chaining
AES-128-CBC-HMAC-SHA1 ## Hash-based Message Authentication Code with SHA1 hashes
AES-128-CBC-HMAC-SHA256 ## ditto, but SHA256 rather than SHA1
...
```
下一条命令使用参数 `s_client` 将打开到 [www.google.com][13] 的安全连接,并在屏幕上显示有关此连接的所有信息:
```
openssl s_client -connect www.google.com:443 -showcerts
```
端口号 443 是 Web 服务器用于接收 HTTPS而不是 HTTP 连接)的标准端口号。(对于 HTTP标准端口为 80Web 地址 www.google.com:443 也出现在 `client` 程序的代码中。如果尝试连接成功,则将显示来自 Google 的三个数字证书以及有关安全会话、正在使用的加密算法套件以及相关项目的信息。例如,这是开头的部分输出,它声明*证书链*即将到来。证书的编码为 base64
```
Certificate chain
0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com
i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
-----BEGIN CERTIFICATE-----
MIIEijCCA3KgAwIBAgIQdCea9tmy/T6rK/dDD1isujANBgkqhkiG9w0BAQsFADBU
MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMSUw
...
```
诸如 Google 之类的主要网站通常会发送多个证书进行身份验证。
输出以有关 TLS 会话的摘要信息结尾,包括加密算法套件的详细信息:
```
SSL-Session:
    Protocol : TLSv1.2
    Cipher : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: A2BBF0E4991E6BBBC318774EEE37CFCB23095CC7640FFC752448D07C7F438573
...
```
`client` 程序中使用了协议 TLS 1.2`Session-ID` 唯一地标识了 `openssl` 实用程序和 Google Web 服务器之间的连接。`Cipher` 条目可以按以下方式进行解析:
* `ECDHE`<ruby>椭圆曲线 Diffie-Hellman临时<rt>Elliptic Curve Diffie Hellman Ephemeral</rt></ruby>)是一种用于管理 TLS 握手的高效且有效的算法。尤其是ECDHE 通过确保连接双方(例如,`client` 程序和 Google Web 服务器)使用相同的加密/解密密钥(称为*会话密钥*)来解决“密钥分发问题”。后续文章会深入探讨该细节。
* `RSA`Rivest Shamir Adleman是主要的公共密钥密码系统并以 1970 年代末首次描述了该系统的三位学者的名字命名。这个正在使用的密钥对是使用 RSA 算法生成的。
* `AES128`<ruby>高级加密标准<rt>Advanced Encryption Standard</rt></ruby>)是一种<ruby>块式加密算法<rt>block cipher</rt></ruby>,用于加密和解密<ruby>位块<rt>blocks of bits</rt></ruby>。(另一种算法是<ruby>流式加密算法<rt>stream cipher</rt></ruby>它一次加密和解密一个位。这个加密算法是对称加密算法因为使用同一个密钥进行加密和解密这首先引起了密钥分发问题。AES 支持 128此处使用、192 和 256 位的密钥大小:密钥越大,安全性越好。
通常,像 AES 这样的对称加密系统的密钥大小要小于像 RSA 这样的非对称基于密钥对系统的密钥大小。例如1024 位 RSA 密钥相对较小,而 256 位密钥则当前是 AES 最大的密钥。
* `GCM`<ruby>伽罗瓦计数器模式<rt>Galois Counter Mode</rt></ruby>)处理在安全对话期间重复应用的加密算法(在这种情况下为 AES128。AES128 块的大小仅为 128 位,安全对话很可能包含从一侧到另一侧的多个 AES128 块。GCM 非常有效,通常与 AES128 搭配使用。
* `SHA256`<ruby>256 位安全哈希算法<rt>Secure Hash Algorithm 256 bits</rt></ruby>)是我们正在使用的加密哈希算法。生成的哈希值的大小为 256 位,尽管使用 SHA 甚至可以更大。
加密算法套件正在不断演进。例如不久前Google 还在使用 RC4 流加密算法RC4 即 Ron's Cipher 第 4 版,以 RSA 三人组中的 Ron Rivest 命名,他后来开发了该算法。RC4 现在有已知的漏洞,这大概在一定程度上促使了 Google 转向 AES128。
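作为对照,你也可以用 Python 标准库的 ssl 模块查看与某个站点协商出的协议版本和加密算法套件(示意代码,假设网络可达):
```
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("www.google.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.google.com") as tls:
        print(tls.version())  # 例如 'TLSv1.2' 或 'TLSv1.3'
        print(tls.cipher())   # (套件名称, 协议版本, 密钥位数)
```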
### 总结
我们通过安全的 C Web 客户端和各种命令行示例对 OpenSSL 做了首次了解,使一些需要进一步阐明的主题脱颖而出。[下一篇文章会详细介绍][17],从加密散列开始,到对数字证书如何应对密钥分发挑战为结束的更全面讨论。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://www.openssl.org/
[3]: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/
[4]: http://condor.depaul.edu/mkalin
[5]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[6]: https://en.wikipedia.org/wiki/Netscape
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/sprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/memset.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[13]: http://www.google.com
[14]: https://www.verisign.com
[15]: https://www.amazon.com
[16]: http://www.google.com:443
[17]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11822-1.html)
[#]: subject: (How the Linux screen tool can save your tasks and your sanity if SSH is interrupted)
[#]: via: (https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
如果 SSH 被中断Linux screen 工具如何拯救你的任务以及理智
======
> 当你需要确保长时间运行的任务不会在 SSH 会话中断时被杀死时Linux screen 命令可以成为救生员。以下是使用方法。
![](https://images.idgesg.net/images/article/2019/09/working_w_screen-shs-100812448-large.jpg)
如果因 SSH 会话断开而不得不重启一个耗时的进程,那么你可能会很高兴了解一个有趣的工具,可以用来避免此问题:`screen` 工具。
`screen` 是一个终端多路复用器,它使你可以在单个 SSH 会话中运行多个终端会话,并随时从它们之中脱离或重新接驳。做到这一点的过程非常简单,仅涉及少数命令。
要启动 `screen` 会话,只需在 SSH 会话中键入 `screen`。 然后,你可以开始启动需要长时间运行的进程,并在适当的时候键入 `Ctrl + A Ctrl + D` 从会话中脱离,然后键入 `screen -r` 重新接驳。
如果你要运行多个 `screen` 会话,更好的选择是为每个会话指定一个有意义的名称,以帮助你记住正在处理的任务。使用这种方法,你可以在启动每个会话时使用如下命令命名:
```
$ screen -S slow-build
```
一旦运行了多个会话,要重新接驳到一个会话,需要从列表中选择它。在以下命令中,我们列出了当前正在运行的会话,然后再重新接驳其中一个。请注意,一开始这两个会话都被标记为已脱离。
```
$ screen -ls
There are screens on:
6617.check-backups (09/26/2019 04:35:30 PM) (Detached)
1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
2 Sockets in /run/screen/S-shs
```
然后,重新接驳到该会话要求你提供分配给会话的名称。例如:
```
$ screen -r slow-build
```
在脱离的会话中,保持运行状态的进程会继续进行处理,而你可以执行其他工作。如果你使用这些 `screen` 会话之一来查询 `screen` 会话情况,可以看到当前重新接驳的会话再次显示为 `Attached`
```
$ screen -ls
There are screens on:
6617.check-backups (09/26/2019 04:35:30 PM) (Attached)
1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
2 Sockets in /run/screen/S-shs.
```
你可以使用 `-version` 选项查询正在运行的 `screen` 版本。
```
$ screen -version
Screen version 4.06.02 (GNU) 23-Oct-17
```
### 安装 screen
如果 `which screen` 没有输出任何信息,则你的系统上可能未安装该工具;下面的输出则表明它已安装:
```
$ which screen
/usr/bin/screen
```
如果你需要安装它,则以下命令之一可能适合你的系统:
```
sudo apt install screen
sudo yum install screen
```
当你需要运行耗时的进程、而 SSH 会话又可能因故断开并中断该进程时,`screen` 工具就会派上用场。而且,如你所见,它非常易于使用和管理。
以下是上面使用的命令的摘要:
```
screen -S <process description> 开始会话
Ctrl+A Ctrl+D 从会话中脱离
screen -ls 列出会话
screen -r <process description> 重新接驳会话
```
尽管还有更多关于 `screen` 的知识,包括可以在 `screen` 会话之间进行操作的其他方式,但这已经足够帮助你开始使用这个便捷的工具了。
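例如,以下是 GNU screen 默认键绑定中几个常用的会话内窗口操作(在 `screen` 会话内按下):
```
Ctrl+A c    在当前会话内新建一个窗口
Ctrl+A n    切换到下一个窗口
Ctrl+A p    切换到上一个窗口
Ctrl+A "    列出当前会话中的所有窗口
Ctrl+A d    从会话中脱离(与 Ctrl+A Ctrl+D 等效)
```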
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11806-1.html)
[#]: subject: (How GNOME uses Git)
[#]: via: (https://opensource.com/article/19/10/how-gnome-uses-git)
[#]: author: (Molly de Blanc https://opensource.com/users/mollydb)
一个非技术人员对 GNOME 项目使用 GitLab 的感受
======
> 将 GNOME 项目集中在 GitLab 上的决定为整个社区(不只是开发人员)带来了好处。
![red panda][1]
“您的 GitLab 是什么?”这是我在 [GNOME 基金会][2]工作的第一天被问到的第一个问题之一,该基金会是支持 GNOME 项目(包括[桌面环境][3]、[GTK][4] 和 [GStreamer][5])的非盈利组织。此人问的是我在 [GNOME 的 GitLab 实例][6]上的用户名。我在 GNOME 期间,经常有人要求我提供我的 GitLab。
我们使用 GitLab 进行几乎所有操作。通常,我会开一些<ruby>提案<rt>issue</rt></ruby>、引用错误报告有时还要修改文件。我不是以开发人员或系统管理员的身份做这些事的。我参与的是“参与度、包容性和多样性ID”团队。我为 GNOME 朋友们撰写新闻通讯,并采访该项目的贡献者。我为 GNOME 活动提供赞助支持。我不写代码,但我每天都在使用 GitLab。
在过去的二十年中GNOME 项目的管理采用了各种方式。该项目的不同部分使用不同的系统来跟踪代码更改、协作以及作为项目和社交空间共享信息。但是,该项目决定,它需要更加地一体化,这从构思到完成大约花费了一年的时间。
GNOME 希望切换到单个工具供整个社区使用的原因很多。外部项目与 GNOME 息息相关,并为它们提供更简单的与资源交互的方式对于项目至关重要,无论是支持社区还是发展生态系统。我们还希望更好地跟踪 GNOME 的指标,即贡献者的数量、贡献的类型和数量以及项目不同部分的开发进度。
当需要选择一种协作工具时,我们考虑了我们需要的东西。最重要的要求之一是它必须由 GNOME 社区托管。由第三方托管并不是一种选择,因此像 GitHub 和 Atlassian 这样的服务就不在考虑之中。而且,当然了,它必须是自由软件。很快,唯一真正的竞争者出现了,它就是 GitLab。我们希望确保进行贡献很容易。GitLab 具有诸如单点登录的功能,该功能允许人们使用 GitHub、Google、GitLab.com 和 GNOME 帐户登录。
我们认为 GitLab 是一条出路我们开始从许多工具迁移到单个工具。GNOME 董事会成员 [Carlos Soriano][7] 领导这项改变。在 GitLab 和 GNOME 社区的大力支持下,我们于 2018 年 5 月完成了该过程。
人们非常希望迁移到 GitLab 有助于社区的发展,并使贡献更加容易。由于 GNOME 以前使用了许多不同的工具,包括 Bugzilla 和 CGit因此很难定量地评估这次切换对贡献量的影响。但是我们可以更清楚地跟踪一些统计数据例如在 2018 年 6 月至 2018 年 11 月之间关闭了近 10,000 个提案,合并了 7,085 个合并请求。人们感到社区在发展壮大,越来越受欢迎,而且贡献实际上也更加容易。
人们因不同的原因而开始使用自由软件重要的是可以通过为需要软件的人提供更好的资源和更多的支持来公平竞争。Git 作为一种工具已被广泛使用,并且越来越多的人使用这些技能来参与到自由软件当中。自托管的 GitLab 提供了将 Git 的熟悉度与 GitLab 提供的功能丰富、用户友好的环境相结合的绝佳机会。
切换到 GitLab 已经一年多了变化确实很明显。持续集成CI为开发带来了巨大的好处并且已经完全集成到 GNOME 的几乎每个部分当中。不进行代码开发的团队也转而使用 GitLab 生态系统进行工作。无论是使用问题跟踪来管理分配的任务还是使用版本控制来共享和管理资产就连“参与度、包容性和多样性ID”这样的团队都已经使用了 GitLab。
一个社区,即使是一个开发自由软件的社区,要适应新技术或新工具也很困难。对于类似 GNOME 的项目来说尤其如此,它[最近已经 22 岁了][8]。像 GNOME 这样经过了 20 多年建设的项目,有太多的人和组织在使用它太多的组件,但迁移工作之所以能实现,要归功于 GNOME 社区的辛勤工作和 GitLab 的慷慨帮助。
在为使用 Git 进行版本控制的项目工作时,我感到很方便。这是一个令人感觉舒适和熟悉的系统,是一个在工作场所和爱好项目之间保持一致的工具。作为 GNOME 社区的新成员,能够参与并使用 GitLab 真是太好了。作为社区建设者,看到这样的结果是令人鼓舞的:越来越多的相关项目加入并进入生态系统;新的贡献者和社区成员为该项目做出首次贡献;我们衡量自身工作、了解其成效的能力也得到了增强。
这么多从事着完全不同工作、使用着不同技能的团队,能够同意汇集到一个工具上(尤其是一个被视为开源界通用标准的工具),这一点很棒。作为 GNOME 的贡献者,我真的非常感激我们用上了 GitLab。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/how-gnome-uses-git
作者:[Molly de Blanc][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mollydb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/redpanda_firefox_pet_animal.jpg?itok=aSpKsyna (red panda)
[2]: https://www.gnome.org/foundation/
[3]: https://gnome.org/
[4]: https://www.gtk.org/
[5]: https://gstreamer.freedesktop.org/
[6]: https://gitlab.gnome.org/
[7]: https://twitter.com/csoriano1618?lang=en
[8]: https://opensource.com/article/19/8/poll-favorite-gnome-version

[#]: collector: (lujun9972)
[#]: translator: (alim0x)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11831-1.html)
[#]: subject: (My Linux story: Learning Linux in the 90s)
[#]: via: (https://opensource.com/article/19/11/learning-linux-90s)
[#]: author: (Mike Harris https://opensource.com/users/mharris)
我的 Linux 故事:在 90 年代学习 Linux
======
> 这是一个关于我如何在 WiFi 时代之前学习 Linux 的故事,那时的发行版还以 CD 的形式出现。
![](https://img.linux.net.cn/data/attachment/album/202001/29/213829t00wmwu2w0z502zg.jpg)
大部分人可能不记得 1996 年时计算产业或日常生活世界的样子。但我很清楚地记得那一年。我那时候是堪萨斯中部一所高中的二年级学生那是我的自由与开源软件FOSS旅程的开端。
我的起点还要更早:在 1996 年之前,我就已经对计算机产生了兴趣。我在我家的第一台 Apple ][e 上启蒙成长,多年之后又用上了 IBM Personal System/2。是的在这过程中有一些代际的跨越。IBM PS/2 有一个非常激动人心的特性:一个 1200 波特的 Hayes 调制解调器。
我不记得是怎样了,但在那不久之前,我得到了一个本地 [BBS][2] 的电话号码。一旦我拨号进去,我可以得到本地的一些其他 BBS 的列表,我的网络探险就此开始了。
在 1995 年,[足够幸运][3]的人拥有了家庭互联网连接,每月可以使用不到 30 分钟。那时的互联网不像我们现代的服务那样通过卫星、光纤、有线电视同轴电缆或任何版本的铜线提供。大多数家庭通过一个调制解调器拨号它连接到他们的电话线上。这时离移动电话无处不在的时代还早得很大多数人只有一部家庭电话。尽管这还要取决你所在的位置但我不认为那时有很多独立的互联网服务提供商ISP所以大多数人从仅有的几家大公司获得服务包括 America OnlineCompuServe 以及 Prodigy。
你能获取到的服务速率非常低,即便拨号技术革命性地达到了 56K 的顶峰,你也只能期望最高约 3.5KB/s每秒千字节的实际速率。如果你想尝试 Linux下载一个 200MB 到 800MB 的 ISO 镜像,或者(更切合实际的)一套软盘镜像,既要付出时间和毅力,还得牺牲电话的使用。
我走了一条简单一点的路:在 1996 年,我从一家主要的 Linux 发行商订购了一套 “tri-Linux” CD 集。这些光盘提供了三个发行版,我的这套包含了 Debian 1.1Debian 的第一个稳定版本、Red Hat Linux 3.0.3 以及 Slackware 3.1(代号 Slackware '96。据我回忆这些光盘是从一家叫做 [Linux Systems Labs][4] 的在线商店购买的。这家在线商店如今已经不存在了,但在 90 年代和 00 年代早期,这样的发行商很常见。这些是多光盘 Linux 套件。这是 1998 年的一套光盘,你可以了解到他们都包含了什么:
![A tri-linux CD set][5]
![A tri-linux CD set][6]
在 1996 年夏天一个命中注定般的日子,那时我住在堪萨斯一个新的并且相对较为乡村的城市,我做出了安装并使用 Linux 的第一次尝试。在 1996 年的整个夏天,我尝试了那套三张 Linux CD 套件里的全部三个发行版。他们都在我母亲的老 Pentium 75MHz 电脑上完美运行。
我最终选择了 [Slackware][7] 3.1 作为我的首选发行版,相比其它发行版可能更多的是因为它的终端的外观,这是决定选择一个发行版前需要考虑的重要因素。
我将系统设置完毕并运行了起来。我连接到一家 “不太知名的” ISP一家这个区域的本地服务商通过我家的第二条电话线拨号为了满足我的所有互联网使用而订购。那就像在天堂一样。我有一台完美运行的双系统Microsoft Windows 95 和 Slackware 3.1)电脑。我依然拨号进入我所知道和喜爱的 BBS游玩在线 BBS 游戏,比如 Trade Wars、Usurper 以及 Legend of the Red Dragon。
我能够记得在 EFNetIRC#Linux 频道上渡过的日子,帮助其他用户,回答他们的 Linux 问题以及和版主们互动。
在我第一次在家尝试使用 Linux 系统的 20 多年后,已经是我进入作为 Red Hat 顾问的第五年,我仍然在使用 Linux现在是 Fedora作为我的日常系统并且依然在 IRC 上帮助想要使用 Linux 的人们。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/learning-linux-90s
作者:[Mike Harris][a]
选题:[lujun9972][b]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mharris
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
[2]: https://en.wikipedia.org/wiki/Bulletin_board_system
[3]: https://en.wikipedia.org/wiki/Global_Internet_usage#Internet_users
[4]: https://web.archive.org/web/19961221003003/http://lsl.com/
[5]: https://opensource.com/sites/default/files/20191026_142009.jpg (A tri-linux CD set)
[6]: https://opensource.com/sites/default/files/20191026_142020.jpg (A tri-linux CD set)
[7]: http://slackware.com

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11814-1.html)
[#]: subject: (What's your favorite terminal emulator?)
[#]: via: (https://opensource.com/article/19/12/favorite-terminal-emulator)
[#]: author: (Opensource.com https://opensource.com/users/admin)
你最喜欢的终端模拟器是什么?
======
> 我们让社区讲述他们在终端仿真器方面的经验。以下是我们收到的一些回复。
![](https://img.linux.net.cn/data/attachment/album/202001/24/000846qsmpz7s7spig77qg.jpg)
终端仿真器的偏好可以反映一个人的工作流程。是否必须支持无鼠标操作?你想要标签页还是多窗口?你还会出于什么原因选择某个终端仿真器?是否有“酷”的因素?欢迎参加调查或给我们留言,告诉我们你最喜欢的终端仿真器,以及你尝试过多少种。
我们让社区讲述他们在终端仿真器方面的经验。以下是我们收到的一些回复。
“我最喜欢的终端仿真器是用 Powerline 定制的 Tilix。我喜欢它支持在一个窗口中打开多个终端。” —Dan Arel
“[urxvt][2]。它可以通过文件简单配置,轻巧,并且在大多数程序包管理器存储库中都很容易找到。” —Brian Tomlinson
“即使我不再使用 GNOMEgnome-terminal 仍然是我的首选。:)” —Justin W. Flory
“现在 FC31 上的 Terminator。我刚刚开始使用它我喜欢它的分屏功能对我来说感觉很轻巧。我正在研究它的插件。” —Marc Maxwell
“不久前,我切换到了 Tilix它完成了我需要终端执行的所有工作。:) 多个窗格、通知,很精简,用来运行我的 tmux 会话很棒。” —Kevin Fenzi
“alacritty。它针对速度进行了优化是用 Rust 实现的,并且具有很多常规功能,但是老实说,我只关心一个功能:可配置的字形间距,使我可以进一步压缩字体。” —Alexander Sosedkin
“我是个老古板KDE Konsole。如果是远程会话请使用 tmux。” —Marcin Juszkiewicz
“在 macOS 上用 iTerm2。是的它是开源的。:-) 在 Linux 上是 Terminator。” —Patrick Mullins
“我现在已经使用 alacritty 一两年了,但是最近我在全屏模式下使用 cool-retro-term因为我必须运行一个输出内容有很多的脚本而它看起来很酷让我感觉很酷。这对我很重要。” —Nick Childers
“我喜欢 Tilix部分是因为它擅长免打扰我通常全屏运行它里面是 tmux而且还提供自定义热链接支持在我的终端中rhbz1234 之类的文本是将我带到 Bugzilla 的热链接。类似的还有 LaunchPad 提案OpenStack 的 Gerrit 更改 ID 等。” —Lars Kellogg-Stedman
“Eterm在使用 Vintage 配置文件的 cool-retro-term 中,演示效果也最好。” —Ivan Horvath
“Tilix +1。这是 GNOME 用户最好的选择,我是这么觉得的!” —Eric Rich
“urxvt。快速、小型、可配置、可通过 Perl 插件扩展,这使其可以无鼠标操作。” —Roman Dobosz 
“Konsole 是最好的,也是 KDE 项目中我唯一使用的应用程序。所有搜索结果都高亮显示是一个杀手级功能,据我所知没有任何其它 Linux 终端有这个功能(如果能证明我错了,那我也很高兴)。最适合搜索编译错误和输出日志。” —Jan Horak
“我过去经常使用 Terminator。现在我在 Tilix 中克隆了它的主题(深色主题),而感受一样好。它可以在选项卡之间轻松移动。就是这样。” —Alberto Fanjul Alonso
“我开始使用的是 Terminator自从差不多过去这三年我已经完全切换到 Tilix。” —Mike Harris
“我使用下拉式终端 X。这是 GNOME 3 的一个非常简单的扩展,使我始终可以通过一个按键(对于我来说是`F12`)拉出一个终端。它还支持制表符,这正是我所需要的。 ” —Germán Pulido
“xfce4-terminal支持 Wayland、缩放、无边框、无标题栏、无滚动条 —— 这就是我在 tmux 之外全部想要的终端仿真器的功能。我希望我的终端仿真器可以尽可能多地使用屏幕空间,我通常在 tmux 窗格中并排放着编辑器Vim和 repl。” —Martin Kourim
“别问,问就是 Fish ;-)” —Eric Schabell
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/favorite-terminal-emulator
作者:[Opensource.com][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/admin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals_0.png?itok=XwIRERsn (Terminal window with green text)
[2]: https://opensource.com/article/19/10/why-use-rxvt-terminal

[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11819-1.html)
[#]: subject: (Enable your Python game player to run forward and backward)
[#]: via: (https://opensource.com/article/19/12/python-platformer-game-run)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
使你的 Python 游戏玩家能够向前和向后跑
======
> 使用 Pygame 模块来使你的 Python 平台开启侧滚效果,来让你的玩家自由奔跑。
![](https://img.linux.net.cn/data/attachment/album/202001/25/220636x5mabbl47xvtsk55.jpg)
这是仍在进行中的、关于使用 Pygame 模块在 Python 3 中创建电脑游戏的系列文章的第九部分。先前的文章是:
* [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][2]
* [使用 Python 和 Pygame 模块构建一个游戏框架][3]
* [如何在你的 Python 游戏中添加一个玩家][4]
* [用 Pygame 使你的游戏角色移动起来][5]
* [如何向你的 Python 游戏中添加一个敌人][6]
* [在 Pygame 游戏中放置平台][12]
* [在你的 Python 游戏中模拟引力][7]
* [为你的 Python 平台类游戏添加跳跃功能][8]
在这一系列关于使用 [Pygame][10] 模块在 [Python 3][9] 中创建电脑游戏的先前文章中,你已经设计好了你的关卡布局,但是关卡的一部分很可能已经超出了屏幕的可视区域。平台类游戏对这个问题的普遍解决方案,正如术语“<ruby>侧滚<rt>side-scroller</rt></ruby>”所表明的,就是滚动。
滚动的关键是:当玩家精灵接近屏幕边缘时,让玩家精灵周围的平台移动。这样就造成一种错觉:屏幕是一台在游戏世界中跟拍的“摄像机”。
这种滚动机制需要在屏幕两侧边缘各设置一个触发点:当你的化身到达触发点时,化身保持静止,而世界开始滚动。
### 在侧滚屏游戏中设置滚动触发点
如果你希望玩家能够后退,你就需要向前和向后各一个触发点。这两个点就是两个变量,把它们分别设置在距屏幕两侧边缘大约 100 或 200 像素处。在你代码的设置部分中创建这些变量。在下面的代码中,前两行用于展示上下文,所以只需添加其后的两行:
```
player_list.add(player)
steps = 10
forwardx  = 600
backwardx = 230
```
在主循环中,查看玩家精灵是否到达了 `forwardx` 或 `backwardx` 滚动点。如果是,就把所有平台向左或向右移动,这取决于世界是在向前还是向后移动。在下面的代码中,最后三行仅供你参考:
```
        # scroll the world forward
        if player.rect.x >= forwardx:
                scroll = player.rect.x - forwardx
                player.rect.x = forwardx
                for p in plat_list:
                        p.rect.x -= scroll
        # scroll the world backward
        if player.rect.x <= backwardx:
                scroll = backwardx - player.rect.x
                player.rect.x = backwardx
                for p in plat_list:
                        p.rect.x += scroll
        ## scrolling code above
    world.blit(backdrop, backdropbox)
    player.gravity() # check gravity
    player.update()
```
启动你的游戏,并尝试它。
![Scrolling the world in Pygame][11]
滚动如预期般工作,但你可能会注意到一个小问题:当你滚动玩家和非玩家精灵周围的世界时,敌人精灵并不会随世界一起滚动。除非你想让敌人精灵无休止地追逐玩家,否则你需要修改敌人代码,让敌人在你的玩家快速撤退时被留在后面。
### 滚动敌人
在主循环中,你必须把滚动平台位置的规则同样应用于敌人的位置。因为你的游戏世界(很可能)会有不止一个敌人,这些规则应该应用于敌人列表,而不是单个敌人精灵。这正是把类似元素分组到列表中的优点之一。
前两行用于上下文注释,所以只需添加这两行后面的代码到你的主循环中:
```
    # scroll the world forward
    if player.rect.x >= forwardx:
        scroll = player.rect.x - forwardx
        player.rect.x = forwardx
        for p in plat_list:
            p.rect.x -= scroll
        for e in enemy_list:
            e.rect.x -= scroll
```
要向另一个方向滚动:
```
    # scroll the world backward
    if player.rect.x <= backwardx:
        scroll = backwardx - player.rect.x
        player.rect.x = backwardx
        for p in plat_list:
            p.rect.x += scroll
        for e in enemy_list:
            e.rect.x += scroll
```
再次启动游戏,看看发生什么。
这里是到目前为止你为这个 Python 平台类游戏所写的所有代码:
```
#!/usr/bin/env python3
# draw a world
# add a player and player control
# add player movement
# add enemy and basic collision
# add platform
# add gravity
# add jumping
# add scrolling

# GNU All-Permissive License
# Copying and distribution of this file, with or without modification,
# are permitted in any medium without royalty provided the copyright
# notice and this notice are preserved. This file is offered as-is,
# without any warranty.

import pygame
import sys
import os

'''
Objects
'''

class Platform(pygame.sprite.Sprite):
    # x location, y location, img width, img height, img file
    def __init__(self, xloc, yloc, imgw, imgh, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img)).convert()
        self.image.convert_alpha()
        self.rect = self.image.get_rect()
        self.rect.y = yloc
        self.rect.x = xloc

class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.movex = 0
        self.movey = 0
        self.frame = 0
        self.health = 10
        self.collide_delta = 0
        self.jump_delta = 6
        self.score = 1
        self.images = []
        for i in range(1, 9):
            img = pygame.image.load(os.path.join('images', 'hero' + str(i) + '.png')).convert()
            img.convert_alpha()
            img.set_colorkey(ALPHA)
            self.images.append(img)
        self.image = self.images[0]
        self.rect = self.image.get_rect()

    def jump(self, platform_list):
        self.jump_delta = 0

    def gravity(self):
        self.movey += 3.2  # how fast player falls
        if self.rect.y > worldy and self.movey >= 0:
            self.movey = 0
            self.rect.y = worldy - ty

    def control(self, x, y):
        '''
        control player movement
        '''
        self.movex += x
        self.movey += y

    def update(self):
        '''
        Update sprite position
        '''
        self.rect.x = self.rect.x + self.movex
        self.rect.y = self.rect.y + self.movey

        # moving left
        if self.movex < 0:
            self.frame += 1
            if self.frame > ani * 3:
                self.frame = 0
            self.image = self.images[self.frame // ani]

        # moving right
        if self.movex > 0:
            self.frame += 1
            if self.frame > ani * 3:
                self.frame = 0
            self.image = self.images[(self.frame // ani) + 4]

        # collisions
        enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
        for enemy in enemy_hit_list:
            self.health -= 1
            # print(self.health)

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.collide_delta = 0  # stop jumping
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y + ty
            else:
                self.rect.y = p.rect.y - ty

        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.movey = 0
            self.rect.y = worldy - ty - ty
            self.collide_delta = 0  # stop jumping
            if self.rect.y > g.rect.y:
                self.health -= 1
                print(self.health)

        if self.collide_delta < 6 and self.jump_delta < 6:
            self.jump_delta = 6 * 2
            self.movey -= 33  # how high to jump
            self.collide_delta += 6
            self.jump_delta += 6

class Enemy(pygame.sprite.Sprite):
    '''
    Spawn an enemy
    '''
    def __init__(self, x, y, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img))
        self.movey = 0
        # self.image.convert_alpha()
        # self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0

    def move(self):
        '''
        enemy movement
        '''
        distance = 80
        speed = 8
        self.movey += 3.2

        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance * 2:
            self.rect.x -= speed
        else:
            self.counter = 0
        self.counter += 1

        if not self.rect.y >= worldy - ty - ty:
            self.rect.y += self.movey

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y + ty
            else:
                self.rect.y = p.rect.y - ty

        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.rect.y = worldy - ty - ty

class Level():
    def bad(lvl, eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0], eloc[1], 'yeti.png')  # spawn enemy
            enemy_list = pygame.sprite.Group()  # create enemy group
            enemy_list.add(enemy)  # add enemy to group
        if lvl == 2:
            print("Level " + str(lvl))
        return enemy_list

    def loot(lvl, lloc):
        print(lvl)

    def ground(lvl, gloc, tx, ty):
        ground_list = pygame.sprite.Group()
        i = 0
        if lvl == 1:
            while i < len(gloc):
                ground = Platform(gloc[i], worldy - ty, tx, ty, 'ground.png')
                ground_list.add(ground)
                i = i + 1
        if lvl == 2:
            print("Level " + str(lvl))
        return ground_list

    def platform(lvl, tx, ty):
        plat_list = pygame.sprite.Group()
        ploc = []
        i = 0
        if lvl == 1:
            ploc.append((0, worldy - ty - 128, 3))
            ploc.append((300, worldy - ty - 256, 3))
            ploc.append((500, worldy - ty - 128, 4))
            while i < len(ploc):
                j = 0
                while j <= ploc[i][2]:
                    plat = Platform((ploc[i][0] + (j * tx)), ploc[i][1], tx, ty, 'ground.png')
                    plat_list.add(plat)
                    j = j + 1
                print('run' + str(i) + str(ploc[i]))
                i = i + 1
        if lvl == 2:
            print("Level " + str(lvl))
        return plat_list

'''
Setup
'''
worldx = 960
worldy = 720
fps = 40  # frame rate
ani = 4  # animation cycles
clock = pygame.time.Clock()
pygame.init()
main = True

BLUE = (25, 25, 200)
BLACK = (23, 23, 23)
WHITE = (254, 254, 254)
ALPHA = (0, 255, 0)

world = pygame.display.set_mode([worldx, worldy])
backdrop = pygame.image.load(os.path.join('images', 'stage.png')).convert()
backdropbox = world.get_rect()
player = Player()  # spawn player
player.rect.x = 0
player.rect.y = 0
player_list = pygame.sprite.Group()
player_list.add(player)
steps = 10
forwardx = 600
backwardx = 230

eloc = []
eloc = [200, 20]
gloc = []
# gloc = [0,630,64,630,128,630,192,630,256,630,320,630,384,630]
tx = 64  # tile size
ty = 64  # tile size

i = 0
while i <= (worldx / tx) + tx:
    gloc.append(i * tx)
    i = i + 1

enemy_list = Level.bad(1, eloc)
ground_list = Level.ground(1, gloc, tx, ty)
plat_list = Level.platform(1, tx, ty)

'''
Main loop
'''
while main == True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit(); sys.exit()
            main = False

        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                print("LEFT")
                player.control(-steps, 0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                print("RIGHT")
                player.control(steps, 0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                print('jump')

        if event.type == pygame.KEYUP:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                player.control(steps, 0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                player.control(-steps, 0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                player.jump(plat_list)
            if event.key == ord('q'):
                pygame.quit()
                sys.exit()
                main = False

    # scroll the world forward
    if player.rect.x >= forwardx:
        scroll = player.rect.x - forwardx
        player.rect.x = forwardx
        for p in plat_list:
            p.rect.x -= scroll
        for e in enemy_list:
            e.rect.x -= scroll

    # scroll the world backward
    if player.rect.x <= backwardx:
        scroll = backwardx - player.rect.x
        player.rect.x = backwardx
        for p in plat_list:
            p.rect.x += scroll
        for e in enemy_list:
            e.rect.x += scroll

    world.blit(backdrop, backdropbox)
    player.gravity()  # check gravity
    player.update()
    player_list.draw(world)  # refresh player position
    enemy_list.draw(world)  # refresh enemies
    ground_list.draw(world)  # refresh ground
    plat_list.draw(world)  # refresh platforms
    for e in enemy_list:
        e.move()
    pygame.display.flip()
    clock.tick(fps)
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/python-platformer-game-run
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_gaming_games_roundup_news.png?itok=KM0ViL0f (Gaming artifacts with joystick, GameBoy, paddle)
[2]: https://linux.cn/article-9071-1.html
[3]: https://linux.cn/article-10850-1.html
[4]: https://linux.cn/article-10858-1.html
[5]: https://linux.cn/article-10874-1.html
[6]: https://linux.cn/article-10883-1.html
[7]: https://linux.cn/article-11780-1.html
[8]: https://linux.cn/article-11790-1.html
[9]: https://www.python.org/
[10]: https://www.pygame.org/news
[11]: https://opensource.com/sites/default/files/uploads/pygame-scroll.jpg (Scrolling the world in Pygame)
[12]:https://linux.cn/article-10902-1.html

[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11803-1.html)
[#]: subject: (4 ways to volunteer this holiday season)
[#]: via: (https://opensource.com/article/19/12/ways-volunteer)
[#]: author: (John Jones https://opensource.com/users/johnjones4)
假期志愿服务的 4 种方式
======
> 想要洒播些节日的快乐吗?为开源组织做贡献,帮助有需要的社区。
![](https://img.linux.net.cn/data/attachment/album/202001/20/223730f7983z8atxp1tf4l.jpg)
当领导者们配置人员和资源以做出积极改变时,就会产生社会影响。但是,许多社会努力都缺乏能够为这些改变者提供服务的技术资源。然而,有些组织通过将想要做出改变的开发人员与迫切需要更好技术的社区和非营利组织联系起来,来促进技术进步。这些组织通常为特定的受众提供服务,并招募特定种类的技术人员,它们有一个共同点:开源。
作为开发人员,我们出于各种原因试图加入开源社区。有些是为了专业发展,有些是为了能够与广阔的网络上令人印象深刻的技术人员合作,还有其他人则是因为他们清楚自己的贡献对于项目的成功的必要性。为什么不将你作为开发人员的才华投入到需要它的地方,而同时又为开源组织做贡献呢?以下组织是实现此目标的一些主要事例。
### Code for America
“Code for America” 是数字时代里政府如何依靠人民、服务人民的一个例子。通过其 Brigade Network该组织在美国各个城市组织了一个由志愿程序员、数据科学家、热心公民和设计师组成的全国联盟。这些本地分支机构定期举行向社区开放的聚会这样既可以向小组介绍新项目又可以协调正在进行的工作。为了使志愿者与项目相匹配该网站经常列出项目所需的特定技能例如数据分析、内容创建和 JavaScript。同时Brigade 网站也会关注当地问题,分享应对自然灾害等共同经验,促进成员之间的合作。例如,新奥尔良、休斯敦和坦帕湾的团队合作开发了一个飓风响应网站,当灾难发生时,不同城市可以快速地改造该网站以应对各自的灾情。
想要加入该组织,请访问 [该网站][2] 获取 70 多个 Brigade 的清单,以及个人加入组织的指南。
### Code for Change
“Code for Change” 表明,即使是高中生也可以为社会做贡献。印第安纳波利斯一群高中生年纪的开发爱好者成立了自己的俱乐部,他们通过创建针对社区问题的开源软件解决方案来回馈当地组织。“Code for Change” 鼓励当地组织提出项目构想,学生团体则加入进来,开发完全自由和开源的解决方案。该小组已经开发了诸如“蓝宝石”之类的项目,该项目优化了当地难民组织的志愿者管理系统;还为民权委员会建立了投诉表格,方便公民就他们关心的问题在网上发表意见。
有关如何在你自己的社区中创建 “Code for Change”[访问他们的网站][3]。
### Python for Good/Ruby for Good
“Python for Good” 和 “Ruby for Good” 是在俄勒冈州波特兰市和弗吉尼亚州费尔法克斯市举办的双年展活动,该活动将人们聚集在一起,为各自的社区开发和制定解决方案。
在周末,人们聚在一起聆听当地非营利组织的陈述,并通过构建开源解决方案来解决他们的问题。2017 年,“Ruby for Good” 的参与者创建了 “Justice for Juniors”该计划指导现在和曾经被监禁的年轻人帮助他们重新融入社区。参与者还创建了 “Diaperbase”这是一种库存管理系统为美国各地的<ruby>尿布库<rt>diaper bank</rt></ruby>所使用。这些活动的主要目标之一,是将看似不同的行业和思维方式的组织和个人聚集在一起,以谋求共同利益。公司可以赞助活动,非营利组织可以提交项目构想,各种技能的人都可以注册参加活动并做出贡献。通过美国东西两岸(大西洋沿岸和太平洋沿岸)的努力,“Ruby for Good” 和 “Python for Good” 一直恪守“使世界变得更好”的座右铭。
“[Ruby for Good][4]” 在夏天举行,举办地点在弗吉尼亚州费尔法克斯的乔治•梅森大学。
### Social Coder
英国的 Ed Guiness 创建了 “Social Coder”将志愿者和慈善机构召集在一起为六大洲的非营利组织创建和使用开源项目。“Social Coder” 积极招募来自世界各地的熟练 IT 志愿者,并将其与通过 Social Coder 注册的慈善机构和非营利组织进行匹配。项目范围从简单的网站更新到整个移动应用程序的开发。
例如PHASE Worldwide 是一个在尼泊尔支持工作的小型非政府组织,因为 “Social Coder”它获得了利用开源技术的关键支持和专业知识。
有许多慈善机构已经与英国的 “Social Coder”进行了合作也欢迎其它国家的组织加入。通过他们的网站个人可以注册为社会软件项目工作找到寻求帮助的组织和慈善机构。
对 “Social Coder” 的志愿服务感兴趣的个人可以[在此][5]注册。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/ways-volunteer
作者:[John Jones][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/johnjones4
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1 (Gift box opens with colors coming out)
[2]: https://brigade.codeforamerica.org/
[3]: http://codeforchange.herokuapp.com/
[4]: https://rubyforgood.org/
[5]: https://socialcoder.org/Home/Programmer

[#]: collector: (lujun9972)
[#]: translator: (algzjh)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11816-1.html)
[#]: subject: (The best resources for agile software development)
[#]: via: (https://opensource.com/article/19/12/agile-resources)
[#]: author: (Leigh Griffin https://opensource.com/users/lgriffin)
敏捷软件开发的最佳资源
======
> 请阅读我们的热门文章,这些文章着重讨论了敏捷的过去、现在和未来。
![](https://img.linux.net.cn/data/attachment/album/202001/25/121308jrs4speu2y09u09e.jpg)
对于 Opensource.com 上的敏捷主题来说2019 年是非常棒的一年。随着 2020 年的到来,我们回顾了我们读者所读的与敏捷相关的热门文章。
### 小规模 Scrum 指南
Opensource.com 关于[小规模 Scrum][2] 的指南(我曾参与合著)由六部分组成,为小型团队提供了关于如何将敏捷引入到他们的工作中的建议。在官方的 [Scrum 指南][3]的概述中,传统的 Scrum 框架推荐至少三个人来实现,以充分发挥其潜力。但是,它并没有为一两个人的团队如何成功遵循 Scrum 提供指导。我们的六部分系列旨在规范化小规模的 Scrum并检验我们在现实世界中使用它的经验。该系列受到了读者的热烈欢迎以至于这六篇文章占据了前 10 名文章的 60%。因此,如果你还没有阅读的话,一定要从我们的[小规模 Scrum 介绍页面][2]下载。
### 全面的敏捷项目管理指南
遵循传统项目管理方法的团队最初对敏捷持怀疑态度现在已经热衷于敏捷的工作方式。目前敏捷已被接受并且一种更加灵活的混合风格已经找到了归宿。Matt Shealy 撰写的[有关敏捷项目管理的综合指南][4]涵盖了敏捷项目管理的 12 条指导原则,对于希望为其项目带来敏捷性的传统项目经理而言,它是完美的选择。
### 成为出色的敏捷开发人员的 4 个步骤
DevOps 文化已经出现在许多现代软件团队中这些团队采用了敏捷软件开发原则利用了最先进的工具和自动化技术。但是这种机械的敏捷方法并不能保证开发人员在日常工作中遵循敏捷实践。Daniel Oh 在[成为出色的敏捷开发人员的 4 个步骤][5]中给出了一些很棒的技巧,通过关注设计思维,使用可预测的方法,以质量为中心并不断学习和探索来提高你的敏捷性。用你的敏捷工具补充这些方法将形成非常灵活和强大的敏捷开发人员。
### Scrum 和 kanban哪种敏捷框架更好
对于以敏捷方式运行的团队来说Scrum 和 kanban 是两种最流行的方法。在 “[Scrum 与 kanban哪种敏捷框架更好][6]” 中Taz Brown 探索了两者的历史和目的。在阅读本文时,我想起一句名言:“如果你的工具箱里只有锤子,那么所有问题看起来都像钉子。”知道何时使用 kanban 以及何时使用 Scrum 非常重要,本文有助于说明两者都有一席之地,这取决于你的团队、挑战和目标。
### 开发人员对敏捷发表意见的 4 种方式
当采用敏捷的话题出现时,开发人员常常会担心自己会被强加上一种工作风格。在“[开发人员对敏捷发表意见的 4 种方式][7]”中,[Clément Verna][8] 着眼于开发人员通过帮助确定敏捷在其团队中的表现形式来颠覆这种说法的方法。检查敏捷的起源和基础是一个很好的起点但是真正的价值在于拥有可帮助指导你的过程的指标。知道你将面临什么样的挑战会给你的前进提供坚实的基础。根据经验进行决策不仅可以增强团队的能力还可以使他们对整个过程有一种主人翁意识。Verna 的文章还探讨了将人置于过程之上并作为一个团队来实现目标的重要性。
### 敏捷的现在和未来
今年Opensource.com 的作者围绕敏捷的过去、现在以及未来可能会是什么样子进行了大量的讨论。感谢他们所有人,请一定于 2020 年在这里分享[你自己的敏捷故事][9]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/agile-resources
作者:[Leigh Griffin][a]
选题:[lujun9972][b]
译者:[algzjh](https://github.com/algzjh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lgriffin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G "Women programming"
[2]: https://opensource.com/downloads/small-scale-scrum
[3]: https://scrumguides.org/scrum-guide.html
[4]: https://opensource.com/article/19/8/guide-agile-project-management
[5]: https://opensource.com/article/19/2/steps-agile-developer
[6]: https://opensource.com/article/19/8/scrum-vs-kanban
[7]: https://opensource.com/article/19/10/ways-developers-what-agile
[8]: https://twitter.com/clemsverna
[9]: https://opensource.com/how-submit-article

[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11828-1.html)
[#]: subject: (Put some loot in your Python platformer game)
[#]: via: (https://opensource.com/article/20/1/loot-python-platformer-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
在你的 Python 平台类游戏中放一些奖励
======
> 这部分内容是关于如何在使用 Python 的 Pygame 模块开发的视频游戏中,为你的玩家提供可收集的宝物和经验值。
![](https://img.linux.net.cn/data/attachment/album/202001/29/131158jkwnhgd1nnawzn86.jpg)
这是正在进行的关于使用 [Python 3][2] 的 [Pygame][3] 模块创建视频游戏的系列文章的第十部分。以前的文章有:
* [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][4]
* [使用 Python 和 Pygame 模块构建一个游戏框架][5]
* [如何在你的 Python 游戏中添加一个玩家][6]
* [用 Pygame 使你的游戏角色移动起来][7]
* [如何向你的 Python 游戏中添加一个敌人][8]
* [在 Pygame 游戏中放置平台][13]
* [在你的 Python 游戏中模拟引力][9]
* [为你的 Python 平台类游戏添加跳跃功能][10]
* [使你的 Python 游戏玩家能够向前和向后跑][11]
如果你已经阅读了本系列的前几篇文章,那么你已经了解了编写游戏的所有基础知识,现在可以在这些基础上创造一个全功能的游戏。初学时,跟着本系列这样的“用例”代码示例走是有帮助的,但用例也会束缚你。现在是时候运用你学到的知识,以新的方式去应用它们了。
说起来容易做起来难,所以这篇文章给出了一个把已有知识用于新目的的例子。具体来说,就是它涵盖了如何利用你在之前课程中学到的内容来实现一个奖励系统。
在大多数电子游戏中,你有机会在游戏世界中获得“奖励”或收集到宝物和其他物品。奖励通常会增加你的分数或者你的生命值,或者为你的下一次任务提供信息。
在游戏中加入奖励物品与编写平台很类似:像平台一样,奖励物品不受用户控制,会随着游戏世界一起滚动,并且必须检查与玩家的碰撞。
### 创建奖励函数
奖励物品和平台非常相似,以至于你甚至不需要一个专门的奖励类。你可以直接重用 `Platform` 类,并把生成的结果称为“奖励”。
由于奖励的类型和位置可能因关卡而异,如果你的 `Level` 类中还没有名为 `loot` 的函数,就创建一个。因为奖励物品不是平台,你还必须创建一个新的 `loot_list` 组,然后把奖励物品添加进去。与平台、地面和敌人一样,该组用于检查与玩家的碰撞:
```
def loot(lvl, lloc):
    if lvl == 1:
        loot_list = pygame.sprite.Group()
        loot = Platform(300, ty * 7, tx, ty, 'loot_1.png')
        loot_list.add(loot)
    if lvl == 2:
        print(lvl)
    return loot_list
```
你可以随意添加任意数量的奖励对象,只要记住把每一个都添加到你的奖励列表中。`Platform` 类的参数是奖励图标的 X 位置、Y 位置、宽度和高度(通常让奖励精灵和其他所有方块保持一样的大小最为简单),以及你想用作奖励的图片。奖励的放置可以和平台贴图一样复杂,所以请参照你创建关卡时用的关卡设计文档。
在脚本的设置部分调用新的奖励函数。在下面的代码中,前三行是上下文,所以只需添加第四行:
```
enemy_list = Level.bad( 1, eloc )
ground_list = Level.ground( 1,gloc,tx,ty )
plat_list = Level.platform( 1,tx,ty )
loot_list = Level.loot(1,tx,ty)
```
正如你现在所知道的,除非把它包含在主循环中,否则奖励不会被绘制到屏幕上。把下面代码示例的最后一行添加到你的循环中:
```
    enemy_list.draw(world)
    ground_list.draw(world)
    plat_list.draw(world)
    loot_list.draw(world)
```
启动你的游戏看看会发生什么。
![Loot in Python platformer][12]
你的奖励物品会显示出来了,但是当玩家碰到它们时,它们不会有任何反应;当玩家经过时,它们也不会滚动。接下来解决这些问题。
### 滚动奖励
像平台一样,当玩家在游戏世界中移动时,奖励必须滚动。逻辑与平台滚动相同。要向前滚动奖励物品,添加最后两行:
```
        for e in enemy_list:
            e.rect.x -= scroll
        for l in loot_list:
            l.rect.x -= scroll
```
要向后滚动,请添加最后两行:
```
        for e in enemy_list:
            e.rect.x += scroll
        for l in loot_list:
            l.rect.x += scroll
```
再次启动你的游戏,看看你的奖励物品现在表现得像在游戏世界里一样了,而不是仅仅画在上面。
### 检测碰撞
就像平台和敌人一样,你可以检查奖励物品和玩家之间的碰撞。逻辑与其他碰撞相同,只是撞击不会(必然)影响重力或生命值;取而代之的是,命中会使奖励物品消失,并增加玩家的分数。
当你的玩家触摸到一个奖励对象时,你可以从 `loot_list` 中移除该对象。这意味着当你的主循环在 `loot_list` 中重绘所有奖励物品时,它不会重绘那个特定的对象,所以看起来玩家已经获得了奖励物品。
`Player` 类的 `update` 函数中的平台碰撞检测之上添加以下代码(最后一行仅用于上下文):
```
        loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
        for loot in loot_hit_list:
            loot_list.remove(loot)
            self.score += 1
        print(self.score)

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
```
当碰撞发生时,你不仅要把奖励物品从它所在的组中移除,还要给玩家加分。你还没有创建分数变量,所以请把它添加到 `Player` 类的 `__init__` 函数所创建的玩家属性中。在下面的代码中,前两行是上下文,只需添加分数变量:
```
        self.frame = 0
        self.health = 10
        self.score = 0
```
当在主循环中调用 `update` 函数时,需要包括 `loot_list`
```
        player.gravity()
        player.update()
```
如你所见,你已经掌握了所有的基本知识。你现在要做的就是用新的方式使用你所知道的。
在下一篇文章中还有一些提示,但是与此同时,用你学到的知识来制作一些简单的单关卡游戏。限制你想要创造的东西的范围很重要,这样你才不会让自己不堪重负。这也会让最终的成品看起来、玩起来更接近完成。
以下是迄今为止你为这个 Python 平台类游戏编写的所有代码:
```
#!/usr/bin/env python3
# draw a world
# add a player and player control
# add player movement
# add enemy and basic collision
# add platform
# add gravity
# add jumping
# add scrolling

# GNU All-Permissive License
# Copying and distribution of this file, with or without modification,
# are permitted in any medium without royalty provided the copyright
# notice and this notice are preserved. This file is offered as-is,
# without any warranty.

import pygame
import sys
import os

'''
Objects
'''

class Platform(pygame.sprite.Sprite):
    # x location, y location, img width, img height, img file
    def __init__(self, xloc, yloc, imgw, imgh, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img)).convert()
        self.image.convert_alpha()
        self.rect = self.image.get_rect()
        self.rect.y = yloc
        self.rect.x = xloc

class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.movex = 0
        self.movey = 0
        self.frame = 0
        self.health = 10
        self.collide_delta = 0
        self.jump_delta = 6
        self.score = 0
        self.images = []
        for i in range(1, 9):
            img = pygame.image.load(os.path.join('images', 'hero' + str(i) + '.png')).convert()
            img.convert_alpha()
            img.set_colorkey(ALPHA)
            self.images.append(img)
        self.image = self.images[0]
        self.rect = self.image.get_rect()

    def jump(self, platform_list):
        self.jump_delta = 0

    def gravity(self):
        self.movey += 3.2  # how fast player falls
        if self.rect.y > worldy and self.movey >= 0:
            self.movey = 0
            self.rect.y = worldy - ty

    def control(self, x, y):
        '''
        control player movement
        '''
        self.movex += x
        self.movey += y

    def update(self):
        '''
        Update sprite position
        '''
        self.rect.x = self.rect.x + self.movex
        self.rect.y = self.rect.y + self.movey

        # moving left
        if self.movex < 0:
            self.frame += 1
            if self.frame > ani * 3:
                self.frame = 0
            self.image = self.images[self.frame // ani]

        # moving right
        if self.movex > 0:
            self.frame += 1
            if self.frame > ani * 3:
                self.frame = 0
            self.image = self.images[(self.frame // ani) + 4]

        # collisions
        enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
        for enemy in enemy_hit_list:
            self.health -= 1
            #print(self.health)

        loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
        for loot in loot_hit_list:
            loot_list.remove(loot)
            self.score += 1
            print(self.score)

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.collide_delta = 0  # stop jumping
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y + ty
            else:
                self.rect.y = p.rect.y - ty

        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.movey = 0
            self.rect.y = worldy - ty - ty
            self.collide_delta = 0  # stop jumping
            if self.rect.y > g.rect.y:
                self.health -= 1
                print(self.health)

        if self.collide_delta < 6 and self.jump_delta < 6:
            self.jump_delta = 6 * 2
            self.movey -= 33  # how high to jump
            self.collide_delta += 6
            self.jump_delta += 6

class Enemy(pygame.sprite.Sprite):
    '''
    Spawn an enemy
    '''
    def __init__(self, x, y, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img))
        self.movey = 0
        #self.image.convert_alpha()
        #self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0

    def move(self):
        '''
        enemy movement
        '''
        distance = 80
        speed = 8
        self.movey += 3.2

        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance * 2:
            self.rect.x -= speed
        else:
            self.counter = 0
        self.counter += 1

        if not self.rect.y >= worldy - ty - ty:
            self.rect.y += self.movey

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y + ty
            else:
                self.rect.y = p.rect.y - ty

        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.rect.y = worldy - ty - ty

class Level():
    def bad(lvl, eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0], eloc[1], 'yeti.png')  # spawn enemy
            enemy_list = pygame.sprite.Group()           # create enemy group
            enemy_list.add(enemy)                        # add enemy to group
        if lvl == 2:
            print("Level " + str(lvl))
        return enemy_list

    def loot(lvl, tx, ty):
        if lvl == 1:
            loot_list = pygame.sprite.Group()
            loot = Platform(200, ty * 7, tx, ty, 'loot_1.png')
            loot_list.add(loot)
        if lvl == 2:
            print(lvl)
        return loot_list

    def ground(lvl, gloc, tx, ty):
        ground_list = pygame.sprite.Group()
        i = 0
        if lvl == 1:
            while i < len(gloc):
                ground = Platform(gloc[i], worldy - ty, tx, ty, 'ground.png')
                ground_list.add(ground)
                i = i + 1
        if lvl == 2:
            print("Level " + str(lvl))
        return ground_list

    def platform(lvl, tx, ty):
        plat_list = pygame.sprite.Group()
        ploc = []
        i = 0
        if lvl == 1:
            ploc.append((20, worldy - ty - 128, 3))
            ploc.append((300, worldy - ty - 256, 3))
            ploc.append((500, worldy - ty - 128, 4))
            while i < len(ploc):
                j = 0
                while j <= ploc[i][2]:
                    plat = Platform((ploc[i][0] + (j * tx)), ploc[i][1], tx, ty, 'ground.png')
                    plat_list.add(plat)
                    j = j + 1
                print('run' + str(i) + str(ploc[i]))
                i = i + 1
        if lvl == 2:
            print("Level " + str(lvl))
        return plat_list

'''
Setup
'''
worldx = 960
worldy = 720

fps = 40  # frame rate
ani = 4   # animation cycles
clock = pygame.time.Clock()
pygame.init()
main = True

BLUE = (25, 25, 200)
BLACK = (23, 23, 23)
WHITE = (254, 254, 254)
ALPHA = (0, 255, 0)

world = pygame.display.set_mode([worldx, worldy])
backdrop = pygame.image.load(os.path.join('images', 'stage.png')).convert()
backdropbox = world.get_rect()
player = Player()  # spawn player
player.rect.x = 0
player.rect.y = 0
player_list = pygame.sprite.Group()
player_list.add(player)
steps = 10
forwardx = 600
backwardx = 230

eloc = []
eloc = [200, 20]
gloc = []
#gloc = [0,630,64,630,128,630,192,630,256,630,320,630,384,630]
tx = 64  # tile size
ty = 64  # tile size

i = 0
while i <= (worldx / tx) + tx:
    gloc.append(i * tx)
    i = i + 1

enemy_list = Level.bad(1, eloc)
ground_list = Level.ground(1, gloc, tx, ty)
plat_list = Level.platform(1, tx, ty)
loot_list = Level.loot(1, tx, ty)

'''
Main loop
'''
while main == True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit(); sys.exit()
            main = False

        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                print("LEFT")
                player.control(-steps, 0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                print("RIGHT")
                player.control(steps, 0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                print('jump')

        if event.type == pygame.KEYUP:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                player.control(steps, 0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                player.control(-steps, 0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                player.jump(plat_list)
            if event.key == ord('q'):
                pygame.quit()
                sys.exit()
                main = False

    # scroll the world forward
    if player.rect.x >= forwardx:
        scroll = player.rect.x - forwardx
        player.rect.x = forwardx
        for p in plat_list:
            p.rect.x -= scroll
        for e in enemy_list:
            e.rect.x -= scroll
        for l in loot_list:
            l.rect.x -= scroll

    # scroll the world backward
    if player.rect.x <= backwardx:
        scroll = backwardx - player.rect.x
        player.rect.x = backwardx
        for p in plat_list:
            p.rect.x += scroll
        for e in enemy_list:
            e.rect.x += scroll
        for l in loot_list:
            l.rect.x += scroll

    world.blit(backdrop, backdropbox)
    player.gravity()          # check gravity
    player.update()
    player_list.draw(world)   # refresh player position
    enemy_list.draw(world)    # refresh enemies
    ground_list.draw(world)   # refresh ground
    plat_list.draw(world)     # refresh platforms
    loot_list.draw(world)     # refresh loot

    for e in enemy_list:
        e.move()
    pygame.display.flip()
    clock.tick(fps)
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/loot-python-platformer-game
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp (Hearts, stars, and dollar signs)
[2]: https://www.python.org/
[3]: https://www.pygame.org/news
[4]: https://linux.cn/article-9071-1.html
[5]: https://linux.cn/article-10850-1.html
[6]: https://linux.cn/article-10858-1.html
[7]: https://linux.cn/article-10874-1.html
[8]: https://linux.cn/article-10883-1.html
[9]: https://linux.cn/article-11780-1.html
[10]: https://linux.cn/article-11790-1.html
[11]: https://linux.cn/article-11819-1.html
[12]: https://opensource.com/sites/default/files/uploads/pygame-loot.jpg (Loot in Python platformer)
[13]: https://linux.cn/article-10902-1.html

View File

@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11832-1.html)
[#]: subject: (Introducing the guide to inter-process communication in Linux)
[#]: via: (https://opensource.com/article/20/1/inter-process-communication-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
免费电子书《Linux 进程间通信指南》介绍
======
> 这本免费的电子书使经验丰富的程序员更深入了解 Linux 中进程间通信IPC的核心概念和机制。
![](https://img.linux.net.cn/data/attachment/album/202001/30/115631jthl0h61zhhmwpv1.jpeg)
让一个软件进程与另一个软件进程进行对话是一个微妙的平衡行为。但是,它对于应用程序而言可能是至关重要的功能,因此这是任何从事复杂项目的程序员都必须解决的问题。无论你的应用程序是需要启动由其它软件处理的工作,监视外设或网络上正在执行的操作,还是检测来自其它来源的信号,当你的软件需要依赖其自身代码之外的东西来知道下一步做什么或什么时候做时,你就需要考虑<ruby>进程间通信<rt>inter-process communication</rt></ruby>(IPC)。
这在 Unix 操作系统上已经由来已久了这可能是因为人们早期预期软件会来自各种来源。按照相同的传统Linux 提供了一些同样的 IPC 接口和一些新接口。Linux 内核具有多种 IPC 方法,[util-linux 包][2]包含了 `ipcmk`、`ipcrm`、`ipcs` 和 `lsipc` 命令,用于监视和管理 IPC 消息。
### 显示进程间通信信息
在尝试 IPC 之前,你应该知道系统上已经有哪些 IPC 设施。`lsipc` 命令提供了该信息。
```
RESOURCE DESCRIPTION LIMIT USED USE%
MSGMNI Number of message queues 32000 0 0.00%
MSGMAX Max size of message (byt.. 8192 - -
MSGMNB Default max size of queue 16384 - -
SHMMNI Shared memory segments 4096 79 1.93%
SHMALL Shared memory pages 184[...] 25452 0.00%
SHMMAX Max size of shared memory 18446744073692774399
SHMMIN Min size of shared memory 1 - -
SEMMNI Number of semaphore ident 32000 0 0.00%
SEMMNS Total number of semaphore 1024000.. 0 0.00%
SEMMSL Max semaphores per semap 32000 - -
SEMOPM Max number of operations p 500 - -
SEMVMX Semaphore max value 32767 - -
```
你可能注意到,这个示例清单包含三种不同类型的 IPC 机制,每种机制在 Linux 内核中都是可用的消息MSG、共享内存SHM和信号量SEM。你可以用 `ipcs` 命令查看每个子系统的当前活动:
```
$ ipcs
------ Message Queues Creators/Owners ---
msqid perms cuid cgid [...]
------ Shared Memory Segment Creators/Owners
shmid perms cuid cgid [...]
557056 700 seth users [...]
3571713 700 seth users [...]
2654210 600 seth users [...]
2457603 700 seth users [...]
------ Semaphore Arrays Creators/Owners ---
semid perms cuid cgid [...]
```
这表明当前没有消息或信号量阵列,但是使用了一些共享内存段。
你可以在自己的系统上运行一个简单的示例,看看其中一种机制是如何工作的。它涉及到一些 C 代码,所以你必须在系统上有构建工具。必须安装这些软件包才能从源代码构建软件,这些软件包的名称取决于发行版,因此请参考文档以获取详细信息。例如,在基于 Debian 的发行版上,你可以在 wiki 的[构建教程][3]部分了解构建需求,而在基于 Fedora 的发行版上,你可以参考该文档的[从源代码安装软件][4]部分。
### 创建一个消息队列
你的系统已经有一个默认的消息队列,但是你可以使用 `ipcmk` 命令创建你自己的消息队列:
```
$ ipcmk --queue
Message queue id: 32764
```
编写一个简单的 IPC 消息发送器,为了简单起见,将队列 ID 硬编码在程序里:
```
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdio.h>
#include <string.h>
struct msgbuffer {
    char text[24];
} message;

int main() {
    int msqid = 32764;
    strcpy(message.text, "opensource.com");
    msgsnd(msqid, &message, sizeof(message), 0);
    printf("Message: %s\n", message.text);
    printf("Queue: %d\n", msqid);
    return 0;
}
```
编译该应用程序并运行:
```
$ gcc msgsend.c -o msg.bin
$ ./msg.bin
Message: opensource.com
Queue: 32764
```
你刚刚向你的消息队列发送了一条消息。你可以使用 `ipcs` 命令验证这一点,并用 `-q` 选项将输出限制为消息队列:
```
$ ipcs -q
------ Message Queues --------
key msqid owner perms used-bytes messages
0x7b341ab9 0 seth 666 0 0
0x72bd8410 32764 seth 644 24 1
```
你也可以检索这些消息:
```
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdio.h>
struct msgbuffer {
    char text[24];
} message;

int main() {
    int msqid = 32764;
    msgrcv(msqid, &message, sizeof(message), 0, 0);
    printf("\nQueue: %d\n", msqid);
    printf("Got this message: %s\n", message.text);
    msgctl(msqid, IPC_RMID, NULL);
    return 0;
}
```
编译并运行:
```
$ gcc get.c -o get.bin
$ ./get.bin
Queue: 32764
Got this message: opensource.com
```
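注意,上面的接收程序最后用 `msgctl(msqid, IPC_RMID, NULL)` 删除了这个队列。如果你想改从命令行完成同样的清理,可以使用文章开头提到的 `ipcrm` 命令,按队列 ID 删除(这里的 ID 沿用上文的示例值):

```
$ ipcrm -q 32764
```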
### 下载这本电子书
这只是 Marty Kalin 的《[Linux 进程间通信指南][5]》中课程的一个例子,这本最新的免费(且以 CC 协议授权)电子书可从 Opensource.com 下载。在短短的几节课中,你将从消息队列、共享内存和信号量、套接字、信号等方面了解 IPC 的 POSIX 方法。认真阅读 Marty 的书,你将成为一个博识的程序员。这不仅适用于经验丰富的编码人员:如果你编写的只是 shell 脚本,你也将获得有关管道(命名和未命名)和共享文件的大量实践知识,以及使用共享文件或外部消息队列时需要了解的重要概念。
如果你对制作具有动态和具有系统感知的优秀软件感兴趣,那么你需要了解 IPC。让[这本书][5]做你的向导。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/inter-process-communication-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_inter-process_communication_linux_520x292.png?itok=hPoen7oI (Inter-process Communication in Linux)
[2]: https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/
[3]: https://wiki.debian.org/BuildingTutorial
[4]: https://docs.pagure.org/docs-fedora/installing-software-from-source.html
[5]: https://opensource.com/downloads/guide-inter-process-communication-linux

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11809-1.html)
[#]: subject: (How to setup multiple monitors in sway)
[#]: via: (https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/)
[#]: author: (arte219 https://fedoramagazine.org/author/arte219/)
如何在 Sway 中设置多个显示器
======
![][1]
Sway 是一种平铺式 Wayland 合成器,具有与 [i3 X11 窗口管理器][2]相同的功能、外观和工作流程。由于 Sway 使用 Wayland 而不是 X11因此就不能一如既往地使用设置 X11 的工具。这包括 `xrandr` 之类的工具,这些工具在 X11 窗口管理器或桌面中用于设置显示器。这就是为什么必须通过编辑 Sway 配置文件来设置显示器的原因,这就是本文的目的。
### 获取你的显示器 ID
首先,你必须获得 Sway 用来指代显示器的名称。你可以通过运行以下命令进行操作:
```
$ swaymsg -t get_outputs
```
你将获得所有显示器的相关信息,每个显示器都用空行分隔。
你必须查看每个部分的第一行,以及 `Output` 之后的内容。例如,当你看到 `Output DVI-D-1 'Philips Consumer Electronics Company'` 之类的行时,则该输出 ID 为 `DVI-D-1`。记下这些 ID 及其对应的物理显示器。
### 编辑配置文件
如果你之前没有编辑过 Sway 配置文件,则必须通过运行以下命令将其复制到主目录中:
```
cp -r /etc/sway/config ~/.config/sway/config
```
现在,默认配置文件位于 `~/.config/sway` 中,名为 `config`。你可以使用任何文本编辑器进行编辑。
现在你需要做一点数学。想象有一个网格,其原点在左上角。X 和 Y 坐标的单位是像素。Y 轴是反转的,向下为正。这意味着,例如,如果你从原点开始,向右移动 100 像素,向下移动 80 像素,则坐标将为 `(100, 80)`。
你必须计算最终显示在此网格上的位置。显示器的位置由左上方的像素指定。例如如果我们要使用名称为“HDMI1”且分辨率为 1920×1080 的显示器,并在其右侧使用名称为 “eDP1” 且分辨率为 1600×900 的笔记本电脑显示器,则必须在配置文件中键入
```
output HDMI1 pos 0 0
output eDP1 pos 1920 0
```
你还可以使用 `res` 选项手动指定分辨率:
```
output HDMI1 pos 0 0 res 1920x1080
output eDP1 pos 1920 0 res 1600x900
```
### 将工作空间绑定到显示器上
在多个显示器上使用 Sway 时,工作区管理可能会有些棘手。幸运的是,你可以将工作区绑定到特定的显示器上,这样你就可以轻松地切换到该显示器并更有效地使用它。只需通过配置文件中的 `workspace` 命令即可完成。例如,如果要把工作区 1 和 2 绑定到显示器 “DVI-D-1”,把工作区 8 和 9 绑定到显示器 “HDMI-A-1”,则可以使用以下方法:
```
workspace 1 output DVI-D-1
workspace 2 output DVI-D-1
workspace 8 output HDMI-A-1
workspace 9 output HDMI-A-1
```
就是这样。这就是在 Sway 中设置多显示器的基础知识。可以在 <https://github.com/swaywm/sway/wiki#Wiki#Multihead> 中找到更详细的指南。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/
作者:[arte219][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/arte219/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/01/sway-multiple-monitors-816x345.png
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/

View File

@ -1,29 +1,30 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11804-1.html)
[#]: subject: (Keep your email in sync with OfflineIMAP)
[#]: via: (https://opensource.com/article/20/1/sync-email-offlineimap)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用 OfflineIMAP 同步邮件
======
将邮件镜像保存到本地是整理消息的第一步。在我们的 20 个使用开源提升生产力的系列的第三篇文章中了解该如何做。
![email or newsletters via inbox and browser][1]
> 将邮件镜像保存到本地是整理消息的第一步。在我们的 20 个使用开源提升生产力的系列的第三篇文章中了解该如何做。
![](https://img.linux.net.cn/data/attachment/album/202001/20/235324nbgfyuwl98syowta.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 OfflineIMAP 在本地同步你的邮件
我与邮件之间存在爱恨交织的关系。我喜欢它让我与世界各地的人交流的方式。但是,像你们中的许多人一样,我收到过很多邮件,许多是来自列表,但也有很多垃圾邮件、广告等。这些积累了很多。
我与邮件之间存在爱恨交织的关系。我喜欢它让我与世界各地的人交流的方式。但是,像你们中的许多人一样,我收到过很多邮件,许多是来自邮件列表的,但也有很多垃圾邮件、广告等。这些积累了很多。
![The OfflineIMAP "blinkenlights" UI][2]
我尝试过的大多数工具(除了大型提供商外)都可以很好地处理大量邮件,它们都有一个共同点:它们都依赖于以 [Maildir][3] 格式存储的本地邮件副本。这其中最有用的是 [OfflineIMAP][4]。OfflineIMAP 是将 IMAP 邮箱镜像到本地 Maildir 文件夹树的 Python 脚本。我用它来创建邮件的本地副本并使其保持同步。大多数 Linux 发行版都包含它,并且可以通过 Python 的 pip 包管理器获得。
示例的最小配置文件是一个很好的模板。首先将其复制到 **~/.offlineimaprc**。我的看起来像这样:
我尝试过的大多数工具(除了大型邮件服务商外)都可以很好地处理大量邮件,它们都有一个共同点:它们都依赖于以 [Maildir][3] 格式存储的本地邮件副本。这其中最有用的是 [OfflineIMAP][4]。OfflineIMAP 是将 IMAP 邮箱镜像到本地 Maildir 文件夹树的 Python 脚本。我用它来创建邮件的本地副本并使其保持同步。大多数 Linux 发行版都包含它,并且可以通过 Python 的 pip 包管理器获得。
示例的最小配置文件是一个很好的模板。首先将其复制到 `~/.offlineimaprc`。我的看起来像这样:
```
[general]
@ -54,9 +55,9 @@ createfolder = true
我的配置要做的是定义两个仓库:远程 IMAP 服务器和本地 Maildir 文件夹。还有一个**帐户**,告诉 OfflineIMAP 运行时要同步什么。你可以定义链接到不同仓库的多个帐户。除了本地复制外,这还允许你从一台 IMAP 服务器复制到另一台作为备份。
如果你有很多邮件,那么首次运行 OfflineIMAP 将花费一些时间。但是完成后,下次会花_少得多_的时间。你也可以将 CoffeeIMAP 作为 cron 任务(我的偏好)或作为守护程序在仓库之间不断进行同步。文档涵盖了所有这些内容以及 Gmail 等高级配置选项。
如果你有很多邮件,那么首次运行 OfflineIMAP 将花费一些时间。但是完成后,下次会花*少得多*的时间。你也可以将 OfflineIMAP 作为 cron 任务(我的偏好)或作为守护程序在仓库之间不断进行同步。文档涵盖了所有这些内容以及 Gmail 等高级配置选项。
现在,我的邮件已在本地复制,并有多种工具用来加快搜索、归档和管理邮件的速度。我明天再说。
现在,我的邮件已在本地复制,并有多种工具用来加快搜索、归档和管理邮件的速度。这些我明天再说。
--------------------------------------------------------------------------------
@ -65,7 +66,7 @@ via: https://opensource.com/article/20/1/sync-email-offlineimap
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,16 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11807-1.html)
[#]: subject: (Organize your email with Notmuch)
[#]: via: (https://opensource.com/article/20/1/organize-email-notmuch)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用 Notmuch 组织你的邮件
======
Notmuch 索引、标记和排序电子邮件。在我们的 20 个使用开源提升生产力的系列的第四篇文章中了解该如何使用。
![Filing cabinet for organization][1]
> Notmuch 可以索引、标记和排序电子邮件。在我们的 20 个使用开源提升生产力的系列的第四篇文章中了解该如何使用它。
![](https://img.linux.net.cn/data/attachment/album/202001/22/112231xg5dgv6f6g5a1iv1.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
@ -26,12 +28,11 @@ Notmuch 索引、标记和排序电子邮件。在我们的 20 个使用开源
![Notmuch's first run][7]
Notmuch 首次运行时,它将询问你一些问题,并在家目录中创建 **.notmuch-config** 文件。接下来,运行 **notmuch new** 来索引并标记所有邮件。你可以使用 **notmuch search tag:new** 进行验证,它会找到所有带有 “new” 标签的消息。这可能会有很多邮件,因为 Notmuch 使用 “new” 标签来指示新邮件,因此你需要对其进行清理。
Notmuch 首次运行时,它将询问你一些问题,并在家目录中创建 `.notmuch-config` 文件。接下来,运行 `notmuch new` 来索引并标记所有邮件。你可以使用 `notmuch search tag:new` 进行验证,它会找到所有带有 `new` 标签的消息。这可能会有很多邮件,因为 Notmuch 使用 `new` 标签来指示新邮件,因此你需要对其进行清理。
运行 **notmuch search tag:unread** 来查找未读消息,这会减少很多邮件。要从你已阅读的消息中删除 “new” 标签,请运行 **notmuch tag -new not tag:unread**,它将搜索所有没有 “unread” 标签的消息,并从其中删除 “new” 标签。现在,当你运行 **notmuch search tag:new**时,它将仅显示未读邮件。
但是,批量标记消息可能更有用,因为在每次运行时手动更新标记可能非常繁琐。**\--batch** 命令行选项告诉 Notmuch 读取多行命令并执行它们。还有一个 **\--input=filename** 选项,该选项从文件中读取命令并应用它们。我有一个名为 **tagmail.notmuch** 的文件,用于给”新“邮件添加标签;它看起来像这样:
运行 `notmuch search tag:unread` 来查找未读消息,这会减少很多邮件。要从你已阅读的消息中删除 `new` 标签,请运行 `notmuch tag -new not tag:unread`,它将搜索所有没有 `unread` 标签的消息,并从其中删除 `new` 标签。现在,当你运行 `notmuch search tag:new` 时,它将仅显示未读邮件。
但是,批量标记消息可能更有用,因为在每次运行时手动更新标记可能非常繁琐。`--batch` 命令行选项告诉 Notmuch 读取多行命令并执行它们。还有一个 `--input=filename` 选项,该选项从文件中读取命令并应用它们。我有一个名为 `tagmail.notmuch` 的文件,用于给“新”邮件添加标签;它看起来像这样:
```
# Manage sent, spam, and trash folders
@ -49,9 +50,9 @@ Notmuch 首次运行时,它将询问你一些问题,并在家目录中创建
-new tag:new
```
我可以在运行 **notmuch new** 后运行 **notmuch tag --input=tagmail.notmuch** 批量处理我的邮件,之后我也可以搜索这些标签。
我可以在运行 `notmuch new` 后运行 `notmuch tag --input=tagmail.notmuch` 批量处理我的邮件,之后我也可以搜索这些标签。
Notmuch 还支持 pre-new 和 post-new 钩子。这些脚本存放在 **Maildir/.notmuch/hooks** 中,它们定义了在使用 **notmuch new** 索引新邮件之前pre-new和之后post-new)要做的操作。在昨天的文章中,我谈到了使用 [OfflineIMAP][8] 同步来自 IMAP 服务器的邮件。从 “pre-new” 钩子运行它非常容易:
Notmuch 还支持 `pre-new``post-new` 钩子。这些脚本存放在 `Maildir/.notmuch/hooks` 中,它们定义了在使用 `notmuch new` 索引新邮件之前(`pre-new`)和之后(`post-new`)要做的操作。在昨天的文章中,我谈到了使用 [OfflineIMAP][8] 同步来自 IMAP 服务器的邮件。从 `pre-new` 钩子运行它非常容易:
```
@ -63,8 +64,7 @@ notmuch tag -new tag:new
offlineimap -a LocalSync -u quiet
```
你还可以使用可以操作 Notmuch 数据库的 Python 应用 [afew][9]来为你标记_邮件列表_和_垃圾邮件_。你可以用类似的方法在 post-new 钩子中使用 afew
你还可以使用可以操作 Notmuch 数据库的 Python 应用 [afew][9],来为你标记*邮件列表*和*垃圾邮件*。你可以用类似的方法在 `post-new` 钩子中使用 `afew`
```
#!/bin/bash
@ -75,7 +75,7 @@ notmuch tag --input=~/tagmail.notmuch
afew -t -n
```
我建议你在使用 afew 标记邮件时,不要使用 **[ListMailsFilter]**,因为某些邮件处理程序会在邮件中添加模糊或者完全的垃圾列表中的标头(我说的是你 Google
我建议你在使用 `afew` 标记邮件时,不要使用 `[ListMailsFilter]`,因为某些邮件处理程序会在邮件中添加模糊或者彻头彻尾是垃圾的列表标头(我说的就是你 Google
![alot email client][10]
@ -90,14 +90,14 @@ via: https://opensource.com/article/20/1/organize-email-notmuch
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_organize_letter.png?itok=GTtiiabr (Filing cabinet for organization)
[2]: https://opensource.com/article/20/1/sync-email-offlineimap
[2]: https://linux.cn/article-11804-1.html
[3]: https://opensource.com/sites/default/files/uploads/productivity_4-1.png (Notmuch)
[4]: https://en.wikipedia.org/wiki/Maildir
[5]: https://notmuchmail.org/

View File

@ -1,22 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11812-1.html)
[#]: subject: (Organize and sync your calendar with khal and vdirsyncer)
[#]: via: (https://opensource.com/article/20/1/open-source-calendar)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用 khal 和 vdirsyncer 组织和同步你的日历
======
保存和共享日历可能会有点麻烦。在我们的 20 个使用开源提升生产力的系列的第五篇文章中了解如何让它更简单。
![Calendar close up snapshot][1]
> 保存和共享日历可能会有点麻烦。在我们的 20 个使用开源提升生产力的系列的第五篇文章中了解如何让它更简单。
![](https://img.linux.net.cn/data/attachment/album/202001/23/150009wsr3d5ovg4g1vzws.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 khal 和 vdirsyncer 跟踪你的日程
处理日历很_麻烦_要找到好的工具总是很困难的。但是自从我去年将日历列为[我的“失败“之一][2]以来,我已经取得了一些进步。
处理日历很*麻烦*,要找到好的工具总是很困难的。但是自从我去年将日历列为[我的“失败”之一][2]以来,我已经取得了一些进步。
目前使用日历最困难的是一直需要以某种方式在线共享。两种最受欢迎的在线日历是 Google Calendar 和 Microsoft Outlook/Exchange。两者都在公司环境中大量使用这意味着我的日历必须支持其中之一或者两个。
@ -28,10 +30,9 @@
![vdirsyncer][6]
Vdirsyncer 是个 Python 3 程序,可以通过软件包管理器或 pip 安装。它可以同步 CalDAV、VCalendar/iCalendar、Google Calendar 和目录中的本地文件。由于我使用 Google Calendar尽管这不是最简单的设置我也将以它为例。
在 vdirsyncer 中设置 Google Calendar 是[有文档参考的][7],所以这里我不再赘述。重要的是确保同步对设置将 Google Calendar 设置为冲突解决的”赢家“。也就是说,如果同一事件有两个更新,那么需要知道哪个更新优先。类似这样做:
Vdirsyncer 是个 Python 3 程序,可以通过软件包管理器或 `pip` 安装。它可以同步 CalDAV、VCalendar/iCalendar、Google Calendar 和目录中的本地文件。由于我使用 Google Calendar尽管这不是最简单的设置我也将以它为例。
在 vdirsyncer 中设置 Google Calendar 是[有文档参考的][7],所以这里我不再赘述。重要的是确保设置你的同步对,将 Google Calendar 设置为冲突解决的“赢家”。也就是说,如果同一事件有两个更新,那么需要知道哪个更新优先。类似这样做:
```
[general]
@ -56,12 +57,11 @@ path = "~/.calendars/Personal"
fileext = ".ics"
```
在第一次 vdirsyncer 同步之后,你将在存储路径中看到一系列目录。每个文件夹都将包含多个文件,日历中的每个事件都是一个文件。下一步是导入 khal。首先运行 **khal configure** 进行初始设置。
在第一次 vdirsyncer 同步之后,你将在存储路径中看到一系列目录。每个文件夹都将包含多个文件,日历中的每个事件都是一个文件。下一步是导入 khal。首先运行 `khal configure` 进行初始设置。
![Configuring khal][8]
现在,运行 **khal interactive** 将显示本文开头的界面。输入 **n** 将打开“新事件”对话框。这里要注意的一件事:日历的名称与 vdirsyncer 创建的目录匹配,但是你可以更改 khal 配置文件来指定更清晰的名称。根据条目所在的日历向条目添加颜色还可以帮助你确定日历内容:
现在,运行 `khal interactive` 将显示本文开头的界面。输入 `n` 将打开“新事件”对话框。这里要注意的一件事:日历的名称与 vdirsyncer 创建的目录匹配,但是你可以更改 khal 配置文件来指定更清晰的名称。根据条目所在的日历,向条目添加颜色还可以帮助你确定日历内容:
```
[calendars]
@ -76,7 +76,7 @@ path = ~/.calendars/Personal/c5i68sj5edpm4rrfdchm6rreehgm6t3j81jn4rrle0n7cbj3c5m
color = brown
```
现在,当你运行 **khal interactive** 时,每个日历将被着色以区别于其他日历,并且当你添加新条目时,它将有更具描述性的名称。
现在,当你运行 `khal interactive` 时,每个日历将被着色以区别于其他日历,并且当你添加新条目时,它将有更具描述性的名称。
![Adding a new calendar entry][9]
@ -89,7 +89,7 @@ via: https://opensource.com/article/20/1/open-source-calendar
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -103,3 +103,4 @@ via: https://opensource.com/article/20/1/open-source-calendar
[6]: https://opensource.com/sites/default/files/uploads/productivity_5-2.png (vdirsyncer)
[7]: https://vdirsyncer.pimutils.org/en/stable/config.html#google
[8]: https://opensource.com/sites/default/files/uploads/productivity_5-3.png (Configuring khal)
[9]: https://opensource.com/sites/default/files/uploads/productivity_5-4.png

View File

@ -0,0 +1,169 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11837-1.html)
[#]: subject: (Root User in Ubuntu: Important Things You Should Know)
[#]: via: (https://itsfoss.com/root-user-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 中的 root 用户:你应该知道的重要事情
======
![][5]
当你刚开始使用 Linux 时,你将发现与 Windows 的很多不同。其中一个“不同的东西”是 root 用户的概念。
在这个初学者系列中,我将解释几个关于 Ubuntu 的 root 用户的重要的东西。
**请记住,尽管我正在从 Ubuntu 用户的角度编写这篇文章,它应该对大多数的 Linux 发行版也是有效的。**
你将在这篇文章中学到下面的内容:
* 为什么在 Ubuntu 中禁用 root 用户
* 像 root 用户一样运行命令
* 切换为 root 用户
* 解锁 root 用户
### 什么是 root 用户?为什么它在 Ubuntu 中被锁定?
在 Linux 中,有一个称为 [root][6] 的超级用户。这是超级管理员账号,它可以做任何事以及使用系统的一切东西。它可以在你的 Linux 系统上访问任何文件和运行任何命令。
能力越大责任越大。root 用户给予你完全控制系统的能力因此它应该被谨慎地使用。root 用户可以访问系统文件,运行更改系统配置的命令。因此,一个错误的命令可能会破坏系统。
这就是为什么 [Ubuntu][7] 和其它基于 Ubuntu 的发行版默认锁定 root 用户,以从意外的灾难中挽救你的原因。
对于你的日常任务,像移动你家目录中的文件,从互联网下载文件,创建文档等等,你不需要拥有 root 权限。
**打个比方来更好地理解它。假设你想要切一个水果,你可以使用一把厨房用刀;假设你想要砍一棵树,你就得使用一把锯子。现在,你也可以使用锯子来切水果,但是那不明智,不是吗?**
这意味着,你不能是 Ubuntu 中 root 用户或者不能使用 root 权限来使用系统吗?不,你仍然可以在 `sudo` 的帮助下来拥有 root 权限来访问(在下一节中解释)。
> **要点:** 对于常规任务来说,root 用户的权限太过强大,这就是为什么不建议一直使用 root 用户的原因。你仍然可以在需要时使用 root 权限来运行特定的命令。
### 如何在 Ubuntu 中像 root 用户一样运行命令?
![Image Credit: xkcd][8]
对于一些特殊的系统任务来说,你将需要 root 权限。例如,如果你想[通过命令行更新 Ubuntu][9],你不能作为一个常规用户运行该命令,它将给出权限被拒绝的错误。
```
apt update
Reading package lists... Done
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)
W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)
```
那么,你如何像 root 用户一样运行命令?简单的答案是,在命令前添加 `sudo`,来像 root 用户一样运行。
```
sudo apt update
```
Ubuntu 和很多其它的 Linux 发行版使用一个被称为 `sudo` 的特殊程序机制。`sudo` 是一个可以控制以 root 用户(或其它用户)身份运行命令的访问权限的程序。
实际上,`sudo` 是一个非常多用途的工具。它可以配置为允许一个用户像 root 用户一样来运行所有的命令,或者仅仅一些命令。你也可以配置为无需密码即可使用 sudo 运行命令。这个主题内容比较丰富,也许我将在另一篇文章中详细讨论它。
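为了让你有个直观的印象,下面是一个假设性的 `/etc/sudoers` 片段(其中的用户名和命令路径纯属示意):

```
# 允许 abhishek 以任何用户身份运行任何命令
# (Ubuntu 默认通过 “%sudo ALL=(ALL:ALL) ALL” 把这一能力授予 sudo 组成员)
abhishek ALL=(ALL:ALL) ALL

# 只允许 abhishek 免密码运行 apt(命令路径仅为示意)
abhishek ALL=(ALL) NOPASSWD: /usr/bin/apt
```

在真实系统上修改 sudo 策略时,请务必使用 `visudo` 来编辑,它会在保存前做语法检查。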
就目前而言,你应该知道[当你安装 Ubuntu 时][10],你必须创建一个用户账号。这个用户账号在你系统上以管理员身份来工作,并且按照 Ubuntu 中的默认 sudo 策略,它可以在你的系统上使用 root 用户权限来运行任何命令。
`sudo` 的问题是,运行 **sudo 不需要 root 用户密码,而是需要用户自己的密码**
并且这就是为什么当你使用 `sudo` 运行一个命令,会要求输入正在运行 `sudo` 命令的用户的密码的原因:
```
abhishek@itsfoss:~$ sudo apt update
[sudo] password for abhishek:
```
正如你在上面示例中所见 `abhishek` 在尝试使用 `sudo` 来运行 `apt update` 命令,系统要求输入 `abhishek` 的密码。
**如果你对 Linux 完全不熟悉,当你在终端中开始输入密码时,你可能会惊讶,在屏幕上什么都没有发生。这是十分正常的,因为作为默认的安全功能,在屏幕上什么都不会显示。甚至星号(`*`)都没有。输入你的密码并按回车键。**
> **要点:**为在 Ubuntu 中像 root 用户一样运行命令,在命令前添加 `sudo`。 当被要求输入密码时,输入你的账户的密码。当你在屏幕上输入密码时,什么都看不到。请继续输入密码,并按回车键。
### 如何在 Ubuntu 中成为 root 用户?
你可以使用 `sudo` 来像 root 用户一样运行命令。但是,在某些情况下,你必须以 root 用户身份来运行一些命令,而你总是忘了在命令前添加 `sudo`,那么你可以临时切换为 root 用户。
`sudo` 命令允许你模拟一个 root 用户登录的 shell,使用这个命令:
```
sudo -i
```
```
abhishek@itsfoss:~$ sudo -i
[sudo] password for abhishek:
root@itsfoss:~# whoami
root
root@itsfoss:~#
```
你将注意到,当你切换为 root 用户时shell 命令提示符从 `$`(美元符号)更改为 `#`(英镑符号)。我开个(拙劣的)玩笑,英镑比美元强大。
**虽然我已经向你展示了如何成为 root 用户,但是我必须警告你,你应该避免以 root 用户身份使用系统。毕竟,系统阻拦你使用 root 用户是有原因的。**
另外一种临时切换为 root 用户的方法是使用 `su` 命令:
```
sudo su
```
如果你尝试使用不带 `sudo` 的 `su` 命令,你将遇到 “su authentication failure” 错误。
你可以使用 `exit` 命令来恢复为正常用户。
```
exit
```
### 如何在 Ubuntu 中启用 root 用户?
现在你知道root 用户在基于 Ubuntu 发行版中是默认锁定的。
Linux 给予你在系统上想做什么就做什么的自由。解锁 root 用户就是这些自由之一。
如果出于某些原因,你决定启用 root 用户,你可以通过为其设置一个密码来做到:
```
sudo passwd root
```
再强调一次,不建议使用 root 用户,并且我也不鼓励你在桌面上这样做。如果你忘记了密码,你将不能再次[在 Ubuntu 中更改 root 用户密码][11]。LCTT 译注:可以通过单用户模式修改。)
你可以通过移除密码来再次锁定 root 用户:
```
sudo passwd -dl root
```
### 最后…
我希望你现在对 root 概念理解得更好一点。如果你仍然有些关于它的困惑和问题,请在评论中让我知道。我将尝试回答你的问题,并且也可能更新这篇文章。
--------------------------------------------------------------------------------
via: https://itsfoss.com/root-user-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/root_user_ubuntu.png?ssl=1
[6]: http://www.linfo.org/root.html
[7]: https://ubuntu.com/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/sudo_sandwich.png?ssl=1
[9]: https://itsfoss.com/update-ubuntu/
[10]: https://itsfoss.com/install-ubuntu/
[11]: https://itsfoss.com/how-to-hack-ubuntu-password/

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11823-1.html)
[#]: subject: (Why everyone is talking about WebAssembly)
[#]: via: (https://opensource.com/article/20/1/webassembly)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
为什么每个人都在谈论 WebAssembly
======
> 了解有关在 Web 浏览器中运行任何代码的最新方法的更多信息。
![](https://img.linux.net.cn/data/attachment/album/202001/27/125343ch0hxdfbzibrihfn.jpg)
如果你还没有听说过 [WebAssembly][2],那么你很快就会知道。这是业界保守得最好的秘密之一,但它无处不在。所有主流的浏览器都支持它,并且它也将在服务器端使用。它很快;它能用于游戏编程。这是主要的国际网络标准组织万维网联盟(W3C)的一个开放标准。
你可能会说:“哇,这听起来像是我应该学习编程的东西!”你可能是对的,但也是错的。你不需要用 WebAssembly 编程。让我们花一些时间来学习这种通常被缩写为“Wasm”的技术。
### 它从哪里来?
大约十年前,人们越来越认识到,广泛使用的 JavaScript 不够快速无法满足许多目的。JavaScript 无疑是成功和方便的。它可以在任何浏览器中运行,并启用了今天我们认为理所当然的动态网页类型。但这是一种高级语言,在设计时并没有考虑到计算密集型工作负载。
然而,尽管负责主流 web 浏览器的工程师们对性能问题的看法大体一致,但他们对如何解决这个问题却意见不一。出现了两个阵营,谷歌开始了它的<ruby>原生客户端<rt>Native Client</rt></ruby>项目,后来又推出了<ruby>可移植原生客户端<rt>Portable Native Client</rt></ruby>变体,着重于允许用 C/C++ 编写的游戏和其它软件在 Chrome 的一个安全隔间中运行。与此同时Mozilla 赢得了微软对 asm.js 的支持。该方法更新了浏览器,因此它可以非常快速地运行 JavaScript 指令的低级子集(有另一个项目可以将 C/C++ 代码转换为这些指令)。
由于这两个阵营都没有得到广泛采用,各方在 2015 年同意围绕一种称为 WebAssembly 的新标准,以 asm.js 所采用的基本方法为基础,联合起来。[如 CNET 的 Stephen Shankland 当时所写][3],“在当今的 Web 上,浏览器的 JavaScript 将这些指令转换为机器代码。但是,通过 WebAssembly程序员可以在此过程的早期阶段完成很多工作从而生成介于两种状态之间的程序。这使浏览器摆脱了创建机器代码的繁琐工作但也实现了 Web 的承诺 —— 该软件将在具有浏览器的任何设备上运行,而无需考虑基础硬件的细节。”
在 2017 年Mozilla 宣布了它的最小可行的产品MVP并使其脱离预览版阶段。到该年年底所有主流的浏览器都采用了它。[2019 年 12 月][4]WebAssembly 工作组发布了三个 W3C 推荐的 WebAssembly 规范。
WebAssembly 定义了一种可执行程序的可移植二进制代码格式、相应的文本汇编语言以及用于促进此类程序与其宿主环境之间的交互接口。WebAssembly 代码在低级虚拟机中运行这个可运行于许多微处理器之上的虚拟机可模仿这些处理器的功能。通过即时JIT编译或解释WebAssembly 引擎可以以近乎原生平台编译代码的速度执行。
### 为什么现在感兴趣?
当然,最近对 WebAssembly 感兴趣的部分原因是最初希望在浏览器中运行更多计算密集型代码。尤其是笔记本电脑用户,越来越多的时间都花在浏览器上(或者,对于 Chromebook 用户来说,基本上是所有时间)。这种趋势已经迫切需要消除在浏览器中运行各种应用程序的障碍。这些障碍之一通常是性能的某些方面,这正是 WebAssembly 及其前身最初旨在解决的问题。
但是WebAssembly 并不仅仅适用于浏览器。在 2019 年,[Mozilla 宣布了一个名为 WASI][5]<ruby>WebAssembly 系统接口<rt>WebAssembly System Interface</rt></ruby>)的项目,以标准化 WebAssembly 代码如何与浏览器上下文之外的操作系统进行交互。通过将浏览器对 WebAssembly 和 WASI 的支持结合在一起,编译后的二进制文件将能够以接近原生的速度,跨不同的设备和操作系统在浏览器内外运行。
WebAssembly 的低开销立即使它可以在浏览器之外使用,但这无疑是赌注;显然,还有其它不会引入性能瓶颈的运行应用程序的方法。为什么要专门使用 WebAssembly
一个重要的原因是它的可移植性。如今,像 C++ 和 Rust 这样的广泛使用的编译语言可能是与 WebAssembly 关联最紧密的语言。但是,[各种各样的其他语言][6]可以编译为 WebAssembly 或拥有它们的 WebAssembly 虚拟机。此外,尽管 WebAssembly 为其执行环境[假定了某些先决条件][7]但它被设计为在各种操作系统和指令集体系结构上有效执行。因此WebAssembly 代码可以使用多种语言编写,并可以在多种操作系统和处理器类型上运行。
另一个 WebAssembly 优势源于这样一个事实:代码在虚拟机中运行。因此,每个 WebAssembly 模块都在沙盒环境中执行,并使用故障隔离技术将其与宿主机运行时环境分开。这意味着,对于其它部分而言,应用程序独立于其宿主机环境的其余部分执行,如果不调用适当的 API就无法摆脱沙箱。
### WebAssembly 现状
这一切在实践中意味着什么?
如今在运作中的 WebAssembly 的一个例子是 [Enarx][8]。
Enarx 是一个提供硬件独立性的项目,可使用<ruby>受信任的执行环境<rt>Trusted Execution Environments</rt></ruby>TEE保护应用程序的安全。Enarx 使你可以安全地将编译为 WebAssembly 的应用程序始终交付到云服务商,并远程执行它。正如 Red Hat 安全工程师 [Nathaniel McCallum 指出的那样][9]:“我们这样做的方式是,我们将你的应用程序作为输入,并使用远程硬件执行认证过程。我们使用加密技术验证了远程硬件实际上是它声称的硬件。最终的结果不仅是我们对硬件的信任度提高了;它也是一个会话密钥,我们可以使用它将加密的代码和数据传递到我们刚刚要求加密验证的环境中。”
另一个例子是 OPA<ruby>开放策略代理<rt>Open Policy Agent</rt></ruby>,它[发布][10]于 2019 年 11 月,你可以[编译][11]他们的策略定义语言 Rego 为 WebAssembly。Rego 允许你编写逻辑来搜索和组合来自不同来源的 JSON/YAML 数据,以询问诸如“是否允许使用此 API”之类的问题。
OPA 已被用于支持策略的软件,包括但不限于 Kubernetes。使用 OPA 之类的工具来简化策略[被认为是在各种不同环境中正确保护 Kubernetes 部署的重要步骤][12]。WebAssembly 的可移植性和内置的安全功能非常适合这些工具。
我们的最后一个例子是 [Unity][13]。还记得我们在文章开头提到过 WebAssembly 可用于游戏吗?好吧,跨平台游戏引擎 Unity 是 WebAssembly 的较早采用者,它提供了在浏览器中运行 Wasm 的首个演示,并且自 2018 年 8 月以来,[已将 WebAssembly][14] 用作 Unity WebGL 构建目标的输出格式。
这些只是 WebAssembly 已经开始产生影响的几种方式。你可以在 <https://webassembly.org/> 上查找更多信息并了解 Wasm 的所有最新信息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/webassembly
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://opensource.com/article/19/8/webassembly-speed-code-reuse
[3]: https://www.cnet.com/news/the-secret-alliance-that-could-give-the-web-a-massive-speed-boost/
[4]: https://www.w3.org/blog/news/archives/8123
[5]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/
[6]: https://github.com/appcypher/awesome-wasm-langs
[7]: https://webassembly.org/docs/portability/
[8]: https://enarx.io
[9]: https://enterprisersproject.com/article/2019/9/application-security-4-facts-confidential-computing-consortium
[10]: https://blog.openpolicyagent.org/tagged/webassembly
[11]: https://github.com/open-policy-agent/opa/tree/master/wasm
[12]: https://enterprisersproject.com/article/2019/11/kubernetes-reality-check-3-takeaways-kubecon
[13]: https://opensource.com/article/20/1/www.unity.com
[14]: https://blogs.unity3d.com/2018/08/15/webassembly-is-here/

View File

@ -0,0 +1,141 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11834-1.html)
[#]: subject: (3 open source tools to manage your contacts)
[#]: via: (https://opensource.com/article/20/1/sync-contacts-locally)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
用于联系人管理的三个开源工具
======
> 通过将联系人同步到本地从而更快访问它。在我们的 20 个使用开源提升生产力的系列的第六篇文章中了解该如何做。
![](https://img.linux.net.cn/data/attachment/album/202001/30/194811bbtt449zfr9zppb3.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 用于联系人管理的开源工具
在本系列之前的文章中,我解释了如何在本地同步你的[邮件][2]和[日历][3]。希望这些加速了你访问邮件和日历。现在,我将讨论联系人同步,你可以给他们发送邮件和日历邀请。
![abook][4]
我目前收集了很多邮件地址。管理这些数据可能有点麻烦。有基于 Web 的服务,但它们不如本地副本快。
几天前,我谈到了用于管理日历的 [vdirsyncer][5]。vdirsyncer 还使用 CardDAV 协议处理联系人。vdirsyncer 除了可以使用**文件系统**存储日历外,还支持通过 **google_contacts****carddav** 进行联系人同步,但 `fileext` 设置会被更改,因此你无法在日历文件中存储联系人。
我在配置文件中添加了一段配置,并从 Google 镜像了我的联系人。设置它需要一个额外的步骤;完成之后,配置非常简单:
```
[pair address_sync]
a = "googlecard"
b = "localcard"
collections = ["from a", "from b"]
conflict_resolution = "a wins"
[storage googlecard]
type = "google_contacts"
token_file = "~/.vdirsyncer/google_token"
client_id = "my_client_id"
client_secret = "my_client_secret"
[storage localcard]
type = "filesystem"
path = "~/.calendars/Addresses/"
fileext = ".vcf"
```
现在,当我运行 `vdirsyncer discover` 时,它会找到我的 Google 联系人,并且 `vdirsyncer sync` 将它们复制到我的本地计算机。但同样,这只进行到一半。现在我想查看和使用联系人。需要 [khard][6] 和 [abook][7]。
![khard search][8]
为什么选择两个应用因为每个都有它自己的使用场景在这里越多越好。khard 用于管理地址,类似于 [khal][9] 用于管理日历条目。如果你的发行版附带了旧版本,你可能需要通过 `pip` 安装最新版本。安装 khard 后,你需要创建 `~/.config/khard/khard.conf`,因为 khard 没有与 khal 那样漂亮的配置向导。我的看起来像这样:
```
[addressbooks]
[[addresses]]
path = ~/.calendars/Addresses/default/
[general]
debug = no
default_action = list
editor = vim, -i, NONE
merge_editor = vimdiff
[contact table]
display = first_name
group_by_addressbook = no
reverse = no
show_nicknames = yes
show_uids = no
sort = last_name
localize_dates = yes
[vcard]
preferred_version = 3.0
search_in_source_files = yes
skip_unparsable = no
```
这会定义源通讯簿(并给它一个友好的名称)、显示内容和联系人编辑程序。运行 `khard list` 将列出所有条目,`khard list <some@email.adr>` 可以搜索特定条目。如果要添加或编辑条目,`add` 和 `edit` 命令将使用相同的基本模板打开配置的编辑器,唯一的区别是 `add` 命令的模板将为空。
![editing in khard][11]
abook 需要你导入和导出 VCF 文件,但它为查找提供了一些不错的功能。要将文件转换为 abook 格式,请先安装 abook 并创建 `~/.abook` 默认目录。然后让 abook 解析所有文件,并将它们放入 `~/.abook/addresses` 文件中:
```
apt install abook
ls ~/.calendars/Addresses/default/* | xargs cat | abook --convert --informat vcard --outformat abook > ~/.abook/addresses
```
现在运行 `abook`,你将有一个非常漂亮的 UI 来浏览、搜索和编辑条目。将它们导出到单个文件有点痛苦,所以我用 khard 进行大部分编辑,并有一个 cron 任务将它们导入到 abook 中。
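作者没有展示他的那个 cron 任务;一个假设性的写法是直接复用上面的导入命令,例如每小时重新导入一次:

```
# 假设性的 crontab 条目:每小时把 VCF 文件重新导入一次 abook
0 * * * * ls ~/.calendars/Addresses/default/* | xargs cat | abook --convert --informat vcard --outformat abook > ~/.abook/addresses
```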
abook 还可在命令行中搜索,并有大量有关将其与邮件客户端集成的文档。例如,你可以在 `.config/alot/config` 文件中添加一些信息,从而在 [Notmuch][12] 的邮件客户端 [alot][13] 中使用 abook 查询联系人:
```
[accounts]
[[Personal]]
realname = Kevin Sonney
address = kevin@sonney.com
alias_regexp = kevin\+.+@sonney.com
gpg_key = 7BB612C9
sendmail_command = msmtp --account=Personal -t
# ~ expansion works
sent_box = maildir://~/Maildir/Sent
draft_box = maildir://~/Maildir/Drafts
[[[abook]]]
type = abook
```
这样你就可以在邮件和日历中快速查找联系人了!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/sync-contacts-locally
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
[2]: https://linux.cn/article-11804-1.html
[3]: https://linux.cn/article-11812-1.html
[4]: https://opensource.com/sites/default/files/uploads/productivity_6-1.png (abook)
[5]: https://github.com/pimutils/vdirsyncer
[6]: https://github.com/scheibler/khard
[7]: http://abook.sourceforge.net/
[8]: https://opensource.com/sites/default/files/uploads/productivity_6-2.png (khard search)
[9]: https://khal.readthedocs.io/en/v0.9.2/index.html
[11]: https://opensource.com/sites/default/files/uploads/productivity_6-3.png (editing in khard)
[12]: https://opensource.com/article/20/1/organize-email-notmuch
[13]: https://github.com/pazz/alot

View File

@ -0,0 +1,477 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11825-1.html)
[#]: subject: (C vs. Rust: Which to choose for programming hardware abstractions)
[#]: via: (https://opensource.com/article/20/1/c-vs-rust-abstractions)
[#]: author: (Dan Pittman https://opensource.com/users/dan-pittman)
C 还是 Rust选择哪个用于硬件抽象编程
======
> 在 Rust 中使用类型级编程可以使硬件抽象更加安全。
![](https://img.linux.net.cn/data/attachment/album/202001/28/123350k2w4mr3tp7crd4m2.jpg)
Rust 是一种日益流行的编程语言,被视为硬件接口的最佳选择。通常会将其与 C 的抽象级别相比较。本文介绍了 Rust 如何通过多种方式处理按位运算,并提供了既安全又易于使用的解决方案。
语言 | 诞生于 | 官方描述 | 总览
---|---|---|---
C | 1972 年 | C 是一种通用编程语言,具有表达式简约、现代的控制流和数据结构,以及丰富的运算符集等特点。(来源:[CS 基础知识][2])| C 是(一种)命令式语言,旨在以相对简单的方式进行编译,从而提供对内存的低级访问。(来源:[W3schools.in][3])
Rust | 2010 年 | 一种赋予所有人构建可靠、高效的软件的能力的语言(来源:[Rust 网站][4])| Rust 是一种专注于安全性(尤其是安全并发性)的多范式系统编程语言。(来源:[维基百科][5])
### 在 C 语言中对寄存器值进行按位运算
在系统编程领域,你可能经常需要编写硬件驱动程序或直接与内存映射设备进行交互,而这些交互几乎总是通过硬件提供的内存映射寄存器来完成的。通常,你通过对某些固定宽度的数字类型进行按位运算来与这些寄存器进行交互。
例如,假设一个 8 位寄存器具有三个字段:
```
+----------+------+-----------+---------+
| (unused) | Kind | Interrupt | Enabled |
+----------+------+-----------+---------+
   5-7       2-4        1          0
```
字段名称下方的数字规定了该字段在寄存器中使用的位。要启用该寄存器,你将写入值 `1`(以二进制表示为 `0000_0001`)来设置 `Enabled` 字段的位。但是,通常情况下,你也不想干扰寄存器中的现有配置。假设你要在设备上启用中断功能,但也要确保设备保持启用状态。为此,必须将 `Interrupt` 字段的值与 `Enabled` 字段的值结合起来。你可以通过按位操作来做到这一点:
```
1 | (1 << 1)
```
通过将 1 和 2`1` 左移一位得到)进行“或”(`|`)运算得到二进制值 `0000_0011` 。你可以将其写入寄存器,使其保持启用状态,但也启用中断功能。
你的头脑中要记住很多事情,特别是当你要在一个完整的系统上和可能有数百个之多的寄存器打交道时。在实践中,你可以使用助记符来执行此操作,助记符可跟踪字段在寄存器中的位置以及字段的宽度(即它的上边界是什么)。
下面是这些助记符之一的示例。它们是 C 语言的宏,会用右侧的代码替换它们出现的地方。这是上面列出的寄存器的简写。`&` 的左侧把值移位到该字段的起始位置,而右侧则把值限制在该字段所占的位内:
```
#define REG_ENABLED_FIELD(x) (x << 0) & 1
#define REG_INTERRUPT_FIELD(x) (x << 1) & 2
#define REG_KIND_FIELD(x) (x << 2) & (7 << 2)
```
然后,你可以使用这些来抽象化寄存器值的操作,如下所示:
```
#include <stdint.h>
typedef uint8_t u8;

void set_reg_val(u8* reg, u8 val);

void enable_reg_with_interrupt(u8* reg) {
    set_reg_val(reg, REG_ENABLED_FIELD(1) | REG_INTERRUPT_FIELD(1));
}
```
这就是现在的做法。实际上,这就是大多数驱动程序在 Linux 内核中的使用方式。
有没有更好的办法?如果能够基于对现代编程语言研究得出新的类型系统,就可能能够获得安全性和可表达性的好处。也就是说,如何使用更丰富、更具表现力的类型系统来使此过程更安全、更持久?
### 在 Rust 语言中对寄存器值进行按位运算
继续用上面的寄存器作为例子:
```
+----------+------+-----------+---------+
| (unused) | Kind | Interrupt | Enabled |
+----------+------+-----------+---------+
   5-7       2-4        1          0
```
你想如何用 Rust 类型来表示它呢?
你将以类似的方式开始,为每个字段的*偏移*定义常量(即,距最低有效位有多远)及其掩码。*掩码*是一个值,其二进制表示形式可用于更新或读取寄存器内部的字段:
```
const ENABLED_MASK: u8 = 1;
const ENABLED_OFFSET: u8 = 0;
const INTERRUPT_MASK: u8 = 2;
const INTERRUPT_OFFSET: u8 = 1;
const KIND_MASK: u8 = 7 << 2;
const KIND_OFFSET: u8 = 2;
```
接下来,你将声明一个 `Field` 类型并进行操作,将给定值转换为与其位置相关的值,以供在寄存器内使用:
```
struct Field {
    value: u8,
}

impl Field {
    fn new(mask: u8, offset: u8, val: u8) -> Self {
        Field {
            value: (val << offset) & mask,
        }
    }
}
```
最后,你将使用一个 `Register` 类型,该类型会封装一个与你的寄存器宽度匹配的数字类型。 `Register` 具有 `update` 函数,可使用给定字段来更新寄存器:
```
struct Register(u8);

impl Register {
    fn update(&mut self, field: Field) {
        self.0 = self.0 | field.value;
    }
}

fn enable_register(reg: &mut Register) {
    reg.update(Field::new(ENABLED_MASK, ENABLED_OFFSET, 1));
}
```
使用 Rust你可以使用数据结构来表示字段将它们与特定的寄存器联系起来并在与硬件交互时提供简洁明了的工效。这个例子使用了 Rust 提供的最基本的功能。无论如何,添加的结构都会减轻上述 C 示例中的某些晦涩的地方。现在,字段是个带有名字的事物,而不是从模糊的按位运算符派生而来的数字,并且寄存器是具有状态的类型 —— 这在硬件上多了一层抽象。
### 一个易用的 Rust 实现
用 Rust 重写的第一个版本很好,但是并不理想。你必须记住要带上掩码和偏移量,并且要手工进行临时计算,这容易出错。人类不擅长精确且重复的任务 —— 我们往往会感到疲劳或失去专注力,这会导致错误。一次一个寄存器地手动记录掩码和偏移量几乎可以肯定会以糟糕的结局而告终。这是最好留给机器的任务。
其次,从结构上进行思考:如果有一种方法可以让字段的类型携带掩码和偏移信息呢?如果可以在编译时就发现硬件寄存器的访问和交互的实现代码中存在错误,而不是在运行时才发现,该怎么办?也许你可以依靠一种在编译时解决问题的常用策略,例如类型。
你可以使用 [typenum][6] 来修改前面的示例,该库在类型级别提供数字和算术。在这里,你将使用掩码和偏移量对 `Field` 类型进行参数化,使其可用于任何 `Field` 实例,而无需将其包括在调用处:
```
#[macro_use]
extern crate typenum;

use core::marker::PhantomData;
use typenum::*;

// Now we'll add Mask and Offset to Field's type
struct Field<Mask: Unsigned, Offset: Unsigned> {
    value: u8,
    _mask: PhantomData<Mask>,
    _offset: PhantomData<Offset>,
}

// We can use type aliases to give meaningful names to
// our fields (and not have to remember their offsets and masks).
type RegEnabled = Field<U1, U0>;
type RegInterrupt = Field<U2, U1>;
type RegKind = Field<op!(U7 << U2), U2>;
```
现在,当重新访问 `Field` 的构造函数时,你可以忽略掩码和偏移量参数,因为类型中包含该信息:
```
impl<Mask: Unsigned, Offset: Unsigned> Field<Mask, Offset> {
    fn new(val: u8) -> Self {
        Field {
            value: (val << Offset::U8) & Mask::U8,
            _mask: PhantomData,
            _offset: PhantomData,
        }
    }
}

// And to enable our register...
fn enable_register(reg: &mut Register) {
    reg.update(RegEnabled::new(1));
}
```
看起来不错,但是……如果你在给定的值是否*适合*该字段方面犯了错误,会发生什么?考虑一个简单的输入错误,你在其中放置了 `10` 而不是 `1`
```
fn enable_register(reg: &mut Register) {
    reg.update(RegEnabled::new(10));
}
```
在上面的代码中,预期结果是什么?好吧,代码会将启用位设置为 0,因为 `10 & 1 = 0`。那真不幸;最好在尝试写入之前,就知道你要写入字段的值是否适合该字段。事实上,我认为截掉错误字段值的高位是一种*未定义的行为*(哈)。
### 出于安全考虑使用 Rust
如何以一般方式检查字段的值是否适合其规定的位置?需要更多类型级别的数字!
你可以在 `Field` 中添加 `Width` 参数,并使用它来验证给定的值是否适合该字段:
```
struct Field<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> {
    value: u8,
    _mask: PhantomData<Mask>,
    _offset: PhantomData<Offset>,
    _width: PhantomData<Width>,
}

type RegEnabled = Field<U1, U1, U0>;
type RegInterrupt = Field<U1, U2, U1>;
type RegKind = Field<U3, op!(U7 << U2), U2>;

impl<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> Field<Width, Mask, Offset> {
    fn new(val: u8) -> Option<Self> {
        if val <= (1 << Width::U8) - 1 {
            Some(Field {
                value: (val << Offset::U8) & Mask::U8,
                _mask: PhantomData,
                _offset: PhantomData,
                _width: PhantomData,
            })
        } else {
            None
        }
    }
}
```
现在,只有给定值适合时,你才能构造一个 `Field` !否则,你将得到 `None` 信号,该信号指示发生了错误,而不是截掉该值的高位并静默写入意外的值。
但是请注意,这将在运行时环境中引发错误。但是,我们事先知道我们想写入的值,还记得吗?鉴于此,我们可以教编译器完全拒绝具有无效字段值的程序 —— 我们不必等到运行它!
这次,你将向 `new` 的新实现 `new_checked` 中添加一个特征绑定(`where` 子句),该函数要求输入值小于或等于给定字段用 `Width` 所能容纳的最大可能值:
```
struct Field<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> {
    value: u8,
    _mask: PhantomData<Mask>,
    _offset: PhantomData<Offset>,
    _width: PhantomData<Width>,
}

type RegEnabled = Field<U1, U1, U0>;
type RegInterrupt = Field<U1, U2, U1>;
type RegKind = Field<U3, op!(U7 << U2), U2>;

impl<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> Field<Width, Mask, Offset> {
    const fn new_checked<V: Unsigned>() -> Self
    where
        V: IsLessOrEqual<op!((U1 << Width) - U1), Output = True>,
    {
        Field {
            value: (V::U8 << Offset::U8) & Mask::U8,
            _mask: PhantomData,
            _offset: PhantomData,
            _width: PhantomData,
        }
    }
}
```
只有拥有此属性的数字才实现此特征,因此,如果使用不适合的数字,它将无法编译。让我们看一看!
```
fn enable_register(reg: &mut Register) {
    reg.update(RegEnabled::new_checked::<U10>());
}
```

编译它会产生如下错误:

```
12 | reg.update(RegEnabled::new_checked::<U10>());
   |            ^^^^^^^^^^^^^^^^ expected struct `typenum::B0`, found struct `typenum::B1`
   |
   = note: expected type `typenum::B0`
              found type `typenum::B1`
```
`new_checked` 将无法生成一个程序,因为该字段的值有错误的高位。你的输入错误不会在运行时环境中才爆炸,因为你永远无法获得一个可以运行的工件。
就使内存映射的硬件进行交互的安全性而言,你已经接近 Rust 的极致。但是,你在 C 的第一个示例中所写的内容比最终得到的一锅粥的类型参数更简洁。当你谈论潜在可能有数百甚至数千个寄存器时,这样做是否容易处理?
### 让 Rust 恰到好处:既安全又方便使用
早些时候,我认为手工计算掩码有问题,但我又做了同样有问题的事情 —— 尽管是在类型级别。虽然使用这种方法很不错,但要达到编写任何代码的地步,则需要大量样板和手动转录(我在这里谈论的是类型的同义词)。
我们的团队想要像 [TockOS mmio 寄存器][7]之类的东西,而以最少的手动转录生成类型安全的实现。我们得出的结果是一个宏,该宏生成必要的样板以获得类似 Tock 的 API 以及基于类型的边界检查。要使用它,请写下一些有关寄存器的信息,其字段、宽度和偏移量以及可选的[枚举][8]类的值(你应该为字段可能具有的值赋予“含义”):
```
register! {
    // The register's name
    Status,
    // The type which represents the whole register.
    u8,
    // The register's mode, ReadOnly, ReadWrite, or WriteOnly.
    RW,
    // And the fields in this register.
    Fields [
        On WIDTH(U1) OFFSET(U0),
        Dead WIDTH(U1) OFFSET(U1),
        Color WIDTH(U3) OFFSET(U2) [
            Red = U1,
            Blue = U2,
            Green = U3,
            Yellow = U4
        ]
    ]
}
```
由此,你可以生成寄存器和字段类型,如上例所示,其中索引:`Width`、`Mask` 和 `Offset` 是从一个字段定义的 `WIDTH``OFFSET` 部分的输入值派生的。另外,请注意,所有这些数字都是 “类型数字”;它们将直接进入你的 `Field` 定义!
生成的代码通过为寄存器及字段指定名称来为寄存器及其相关字段提供名称空间。这很绕口,看起来是这样的:
```
mod Status {
    struct Register(u8);

    mod On {
        struct Field; // There is of course more to this definition
    }

    mod Dead {
        struct Field;
    }

    mod Color {
        struct Field;
        pub const Red: Field = Field::<U1>::new();
        // &c.
    }
}
```
生成的 API 包含名义上期望的读取和写入的原语,以获取原始寄存器的值,但它也有办法获取单个字段的值、执行集合操作以及确定是否设置了任何(或全部)位集合的方法。你可以阅读[完整生成的 API][9]上的文档。
### 粗略检查
将这些定义用于实际设备会是什么样?代码中是否会充斥着类型参数,从而掩盖了视图中的实际逻辑?
不会!通过使用类型同义词和类型推断,你实际上根本不必考虑程序的类型层面部分。你可以直接与硬件交互,并自动获得与边界相关的保证。
这是一个 [UART][10] 寄存器块的示例。我会跳过寄存器本身的声明,因为包括在这里就太多了。而是从寄存器“块”开始,然后帮助编译器知道如何从指向该块开头的指针中查找寄存器。我们通过实现 `Deref``DerefMut` 来做到这一点:
```
#[repr(C)]
pub struct UartBlock {
    rx: UartRX::Register,
    _padding1: [u32; 15],
    tx: UartTX::Register,
    _padding2: [u32; 15],
    control1: UartControl1::Register,
}

pub struct Regs {
    addr: usize,
}

impl Deref for Regs {
    type Target = UartBlock;

    fn deref(&self) -> &UartBlock {
        unsafe { &*(self.addr as *const UartBlock) }
    }
}

impl DerefMut for Regs {
    fn deref_mut(&mut self) -> &mut UartBlock {
        unsafe { &mut *(self.addr as *mut UartBlock) }
    }
}
```
一旦到位,使用这些寄存器就像 `read()``modify()` 一样简单:
```
fn main() {
    // A pretend register block.
    let mut x = [0_u32; 33];

    let mut regs = Regs {
        // Some shenanigans to get at `x` as though it were a
        // pointer. Normally you'd be given some address like
        // `0xDEADBEEF` over which you'd instantiate a `Regs`.
        addr: &mut x as *mut [u32; 33] as usize,
    };

    assert_eq!(regs.rx.read(), 0);

    regs.control1
        .modify(UartControl1::Enable::Set + UartControl1::RecvReadyInterrupt::Set);

    // The first bit and the 10th bit should be set.
    assert_eq!(regs.control1.read(), 0b_10_0000_0001);
}
```
当我们使用运行时的值时,我们就要用到前面讨论过的 `Option` 类型。这里我使用的是 `unwrap`,但是在一个输入未知的真实程序中,你可能想检查一下 `new` 调用返回的是不是一个 `Some` [^1] [^2]:
```
fn main() {
    // A pretend register block.
    let mut x = [0_u32; 33];
    let mut regs = Regs {
        // Some shenanigans to get at `x` as though it were a
        // pointer. Normally you'd be given some address like
        // `0xDEADBEEF` over which you'd instantiate a `Regs`.
        addr: &mut x as *mut [u32; 33] as usize,
    };
    let input = regs.rx.get_field(UartRX::Data::Field::Read).unwrap();
    regs.tx.modify(UartTX::Data::Field::new(input).unwrap());
}
```
### 解码失败条件
根据你的个人痛苦忍耐程度,你可能已经注意到这些错误几乎是无法理解的。看一下我所说的不那么微妙的提醒:
```
error[E0271]: type mismatch resolving `<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UTerm, typenum::B1>, typenum::B0>, typenum::B1>, typenum::B0>, typenum::B0> as typenum::IsLessOrEqual<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UTerm, typenum::B1>, typenum::B0>, typenum::B1>, typenum::B0>>>::Output == typenum::B1`
--> src/main.rs:12:5
|
12 | less_than_ten::<U20>();
| ^^^^^^^^^^^^^^^^^^^^ expected struct `typenum::B0`, found struct `typenum::B1`
|
= note: expected type `typenum::B0`
found type `typenum::B1`
```
`expected struct typenum::B0, found struct typenum::B1` 部分是有意义的,但是 ` typenum::UInt<typenum::UInt, typenum::UInt...` 到底是什么呢?好吧,`typenum` 将数字表示为二进制 [cons][13] 单元!像这样的错误使操作变得很困难,尤其是当你将多个这些类型级别的数字限制在狭窄的范围内时,你很难知道它在说哪个数字。当然,除非你一眼就能将巴洛克式二进制表示形式转换为十进制表示形式。
在第 U100 次试图从这个混乱中破译出某些含义之后,我们的一个队友简直《<ruby>疯了,地狱了,不要再忍受了<rt>Mad As Hell And Wasn't Going To Take It Anymore</rt></ruby>》,并做了一个小工具 `tnfilt`,从这种命名空间的二进制 cons 单元的痛苦中解脱出来。`tnfilt` 将 cons 单元格式的表示法替换为可让人看懂的十进制数字。我们认为其他人也会遇到类似的困难,所以我们分享了 [tnfilt][14]。你可以像这样使用它:
```
$ cargo build 2>&1 | tnfilt
```
它将上面的输出转换为如下所示:
```
error[E0271]: type mismatch resolving `<U20 as typenum::IsLessOrEqual<U10>>::Output == typenum::B1`
```
现在*这*才有意义!
### 结论
当在软件与硬件进行交互时,普遍使用内存映射寄存器,并且有无数种方法来描述这些交互,每种方法在易用性和安全性上都有不同的权衡。我们发现使用类型级编程来取得内存映射寄存器交互的编译时检查可以为我们提供制作更安全软件的必要信息。该代码可在 [bounded-registers][15] crateRust 包)中找到。
我们的团队从安全性较高的一面开始,然后尝试找出如何将易用性滑块移近易用端。从这些雄心壮志中,“边界寄存器”就诞生了,我们在 Auxon 公司的冒险中遇到内存映射设备的任何时候都可以使用它。
* * *
[^1]: 从技术上讲,从定义上看,从寄存器字段读取的值只能在规定的范围内,但是我们当中没有一个人生活在一个纯净的世界中,而且你永远都不知道外部系统发挥作用时会发生什么。你是在这里接受硬件之神的命令,因此与其强迫你进入“可能的恐慌”状态,还不如给你提供处理“这将永远不会发生”的机会。
[^2]: `get_field` 看起来有点奇怪。我正在专门查看 `Field::Read` 部分。`Field` 是一种类型,你需要该类型的实例才能传递给 `get_field`。更干净的 API 可能类似于:`regs.rx.get_field::<UartRx::Data::Field>();` 但是请记住,`Field` 是一种具有固定的宽度、偏移量等索引的类型的同义词。要像这样对 `get_field` 进行参数化,你需要使用更高级的类型。
* * *
此内容最初发布在 [Auxon Engineering 博客][16]上,并经许可进行编辑和重新发布。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/c-vs-rust-abstractions
作者:[Dan Pittman][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dan-pittman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl (Tools illustration)
[2]: https://cs-fundamentals.com/c-programming/history-of-c-programming-language.php
[3]: https://www.w3schools.in/c-tutorial/history-of-c/
[4]: https://www.rust-lang.org/
[5]: https://en.wikipedia.org/wiki/Rust_(programming_language)
[6]: https://docs.rs/crate/typenum
[7]: https://docs.rs/tock-registers/0.3.0/tock_registers/
[8]: https://en.wikipedia.org/wiki/Enumerated_type
[9]: https://github.com/auxoncorp/bounded-registers#the-register-api
[10]: https://en.wikipedia.org/wiki/Universal_asynchronous_receiver-transmitter
[13]: https://en.wikipedia.org/wiki/Cons
[14]: https://github.com/auxoncorp/tnfilt
[15]: https://crates.io/crates/bounded-registers
[16]: https://blog.auxon.io/2019/10/25/type-level-registers/

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11835-1.html)
[#]: subject: (Get started with this open source to-do list manager)
[#]: via: (https://opensource.com/article/20/1/open-source-to-do-list)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
开始使用开源待办事项清单管理器
======
> 待办事项清单是跟踪任务列表的强大方法。在我们的 20 个使用开源提升生产力的系列的第七篇文章中了解如何使用它。
![](https://img.linux.net.cn/data/attachment/album/202001/31/111103kmv55ploshuso4ot.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 todo 跟踪任务
任务管理和待办事项清单是我非常喜欢的东西。我是一位生产效率的狂热粉丝(以至于我为此做了一个[播客][2]),我尝试了各种不同的应用。我甚至为此[做了演讲][3]并[写了些文章][4]。因此,当我谈到提高工作效率时,肯定会出现任务管理和待办事项清单工具。
![Getting fancy with Todo.txt][5]
说实话,由于简单、跨平台且易于同步,用 [todo.txt][6] 肯定不会错。它是我不断反复提到的两个待办事项清单以及任务管理应用之一(另一个是 [Org 模式][7])。让我反复使用它的原因是它简单、可移植、易于理解,并且有许多很好的附加组件,并且当一台机器有附加组件,而另一台没有,也不会破坏它。由于它是一个 Bash shell 脚本,我还没发现一个无法支持它的系统。
#### 设置 todo.txt
首先,你需要安装基本 shell 脚本并将默认配置文件复制到 `~/.todo` 目录:
```
git clone https://github.com/todotxt/todo.txt-cli.git
cd todo.txt-cli
make
sudo make install
mkdir ~/.todo
cp todo.cfg ~/.todo/config
```
接下来,设置配置文件。通常我会取消对颜色设置的注释,但必须马上设置的是 `TODO_DIR` 变量:
```
export TODO_DIR="$HOME/.todo"
```
#### 添加待办事件
要添加第一个待办事件,只需输入 `todo.sh add <NewTodo>`。这还将在 `$HOME/.todo/` 中创建三个文件:`todo.txt`、`done.txt` 和 `reports.txt`。
添加几个项目后,运行 `todo.sh ls` 查看你的待办事项。
![Basic todo.txt list][8]
#### 管理任务
你可以通过给项目设置优先级来稍微改善它。要向项目添加优先级,运行 `todo.sh pri # A`。这里的数字是该任务在列表中的编号,而字母 `A` 是优先级。你可以将优先级设置为从 A 到 Z,因为这是它的排序方式。
要完成任务,运行 `todo.sh do #` 来标记项目已完成并将它移动到 `done.txt`。运行 `todo.sh report` 会向 `report.txt` 写入已完成和未完成项的数量。
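把这些命令串起来,一个假设性的最小工作流程大致如下(任务编号以你清单中的实际编号为准):

```
$ todo.sh add "给服务器打补丁"   # 添加一个新任务
$ todo.sh pri 1 A                # 把第 1 项的优先级设为 A
$ todo.sh do 1                   # 完成后标记第 1 项
$ todo.sh report                 # 统计已完成和未完成的数量
```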
所有这三个文件的格式都有详细的说明,因此你可以使用你的文本编辑器修改。`todo.txt` 的基本格式是:
```
(Priority) YYYY-MM-DD Task
```
该日期表示任务的到期日期(如果已设置)。手动编辑文件时,只需在任务前面加一个 `x` 来标记为已完成。运行 `todo.sh archive` 会将这些项目移动到 `done.txt`,你可以编辑该文本文件,并在有时间时将已完成的项目归档。
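举一个假设性的例子,一个手工维护的 `todo.txt` 大致如下,第三行开头的 `x` 表示该任务已完成:

```
(A) 2020-02-01 写一篇博客文章
(B) 2020-02-03 给服务器打补丁
x 2020-01-28 备份家目录
```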
#### 设置重复任务
我有很多重复的任务,我需要以每天/周/月来计划。
![Recurring tasks with the ice_recur add-on][9]
这就是 `todo.txt` 的灵活性所在。通过在 `~/.todo.actions.d/` 中使用[附加组件][10],你可以添加命令并扩展基本 `todo.sh` 的功能。附加组件基本上是实现特定命令的脚本。对于重复执行的任务,插件 [ice_recur][11] 应该符合要求。按照其页面上的说明操作,你可以设置任务以非常灵活的方式重复执行。
![Todour on MacOS][12]
在该[附加组件目录][10]中有很多附加组件,包括同步到某些云服务,也有链接到桌面或移动端应用的组件,这样你可以随时看到待办列表。
我只是简单介绍了这个待办事项清单工具的功能,请花点时间深入了解这个工具的强大之处!它确实可以帮助我每天完成任务。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-to-do-list
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://productivityalchemy.com/
[3]: https://www.slideshare.net/AllThingsOpen/getting-to-done-on-the-command-line
[4]: https://opensource.com/article/18/2/getting-to-done-agile-linux-command-line
[5]: https://opensource.com/sites/default/files/uploads/productivity_7-1.png
[6]: http://todotxt.org/
[7]: https://orgmode.org/
[8]: https://opensource.com/sites/default/files/uploads/productivity_7-2.png (Basic todo.txt list)
[9]: https://opensource.com/sites/default/files/uploads/productivity_7-3.png (Recurring tasks with the ice_recur add-on)
[10]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory
[11]: https://github.com/rlpowell/todo-text-stuff
[12]: https://opensource.com/sites/default/files/uploads/productivity_7-4.png (Todour on MacOS)

View File

@ -0,0 +1,112 @@
[#]: collector: (lujun9972)
[#]: translator: (FSSlc)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11813-1.html)
[#]: subject: (Locking and unlocking accounts on Linux systems)
[#]: via: (https://www.networkworld.com/article/3513982/locking-and-unlocking-accounts-on-linux-systems.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
在 Linux 系统中禁用与解禁用户的账号
======
> 总有这样的时候:有时你需要禁用某位 Linux 用户的账号,有时你还需要反过来解禁用户的账号。
本文将介绍一些管理用户访问的命令,并介绍它们背后的原理。
![](https://images.idgesg.net/images/article/2019/10/cso_cybersecurity_mysterious_padlock_complex_circuits_gold_by_sqback_gettyimages-1177918748_2400x1600-100813830-large.jpg)
假如你正管理着一台 [Linux][1] 系统,那么很有可能将遇到需要禁用一个账号的情况。可能是某人已经换了职位,他们是否还需要该账号仍是个问题;或许有理由怀疑该账号已经被他人盗用。不管上述哪种情况,知晓如何禁用账号并解禁账号都是你需要掌握的知识。
需要你记住的一件重要的事是:尽管有多种方法来禁用账号,但它们并不都能达到相同的效果。假如用户使用公钥/私钥对而不是密码来访问该账号,那么某些用来阻止用户访问该账号的命令或许不会生效。
### 使用 passwd 来禁用一个账号
最为简单的用来禁用一个账号的方法是使用 `passwd -l` 命令。例如:
```
$ sudo passwd -l tadpole
```
上面这个命令的效果是在加密后的密码文件 `/etc/shadow` 中,用户对应的那一行的最前面加上一个 `!` 符号。这样就足够阻止用户使用密码来访问账号了。
在没有使用上述命令前,加密后的密码行如下所示(请注意第一个字符):
```
$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPCnXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7:::
```
而禁用该账号后,这一行将变为:
```
!$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPCnXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7:::
```
在 tadpole 下一次尝试登录时,他可能会使用原有的密码尝试多次登录,但就是无法再登录成功了。另一方面,你则可以使用下面的命令来查看这个账号的状态(`-S` 即 status
```
$ sudo passwd -S tadpole
tadpole L 10/15/2019 0 99999 7 -1
```
第二项的 `L` 告诉你这个账号已经被禁用了。在该账号被禁用前,这一项应该是 `P`。如果显示的是 `NP` 则意味着该账号还没有设置密码。
命令 `usermod -L` 也具有相同的效果(添加 `!` 来禁用账号的使用)。
使用这种方法来禁用某个账号的一个好处是当需要解禁某个账号时非常容易。只需要使用一个文本编辑器或者使用 `passwd -u` 命令来执行相反的操作,即将添加的 `!` 移除即可。
```
$ sudo passwd -u tadpole
passwd: password expiry information changed.
```
但使用这种方式的问题是如果用户使用公钥/私钥对的方式来访问他/她的账号,这种方式将不能阻止他们使用该账号。
### 使用 chage 命令来禁用账号
另一种禁用用户账号的方法是使用 `chage` 命令,它可以帮助管理用户账号的过期日期。
```
$ sudo chage -E0 tadpole
$ sudo passwd -S tadpole
tadpole P 10/15/2019 0 99999 7 -1
```
`chage` 命令将会稍微修改 `/etc/shadow` 文件。在这个使用 `:` 来分隔的文件(下面将进行展示)中,某行的第 8 项将被设置为 `0`(先前为空),这就意味着这个账号已经过期了。`chage` 命令会追踪密码更改期间的天数,通过选项也可以提供账号过期信息。第 8 项如果是 0 则意味着这个账号在 1970 年 1 月 1 日后的一天过期,当使用上面显示的那个命令时可以用来禁用账号。
```
$ sudo grep tadpole /etc/shadow | fold
tadpole:$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPC
nXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7::0:
^
|
+--- days until expiration
```
为了执行相反的操作,你可以简单地使用下面的命令将放置在 `/etc/shadow` 文件中的 `0` 移除掉:
```
$ sudo chage -E-1 tadpole
```
一旦一个账号使用这种方式被禁用,即便是无密码的 [SSH][4] 登录也不能再访问该账号了。
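顺便一提,你可以用 `chage -l` 查看账号的过期信息,来确认禁用是否生效(下面的输出仅为示意,具体的日期显示可能因系统而异):
```
$ sudo chage -l tadpole | grep -i 'account expires'
Account expires                 : Jan 01, 1970
```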
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3513982/locking-and-unlocking-accounts-on-linux-systems.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11817-1.html)
[#]: subject: (What's your favorite Linux terminal trick?)
[#]: via: (https://opensource.com/article/20/1/linux-terminal-trick)
[#]: author: (Opensource.com https://opensource.com/users/admin)
你有什么喜欢的 Linux 终端技巧?
======
> 告诉我们你最喜欢的终端技巧,无论是提高生产率的快捷方式还是有趣的彩蛋。
![](https://img.linux.net.cn/data/attachment/album/202001/25/135858accxc70tfxuifxx1.jpg)
新年伊始,总是评估提高效率的新方法的好时机。许多人尝试使用新的生产力工具,或者想找出如何优化其最常用的流程。终端是一个值得评估的领域,尤其是在开源世界中,有无数种方法可以通过快捷键和命令使终端上的生活更加高效(又有趣!)。
我们向作者们询问了他们最喜欢的终端技巧。他们分享了一些节省时间的技巧,甚至还有一个有趣的终端彩蛋。你会采用这些键盘快捷键或命令行技巧吗?你有喜欢分享的最爱吗?请发表评论来告诉我们。
“我找不出哪个是我最喜欢的;每天我都会使用这三个:
* `Ctrl + L` 来清除屏幕(而不是键入 `clear`)。
* `sudo !!``sudo` 特权运行先前的命令。
* `grep -Ev '^#|^$' <file>` 将显示文件内容,不带注释或空行。” —Mars Toktonaliev
“对我来说,如果我正在使用终端文本编辑器,并且希望将其丢开,以便可以快速执行其他操作,则可以使用 `Ctrl + Z` 将其放到后台,接着执行我需要做的一切,然后用 `fg` 将其带回前台。有时我也会对 `top``htop` 做同样的事情。我可以将其丢到后台并在我想检查当前性能时随时将其带回前台。我不会将通常很快能完成的任务在前后台之间切换它确实可以增强终端上的多任务处理能力。” —Jay LaCroix
“我经常在某一天在终端中做很多相同的事情,有两件事是每天都不变的:
* `Ctrl + R` 反向搜索我的 Bash 历史记录以查找我已经运行并且希望再次执行的命令。
* 插入号(`^`)替换是最好的,因为我经常做诸如 `sudo dnf search <package name>` 之类的事情,然后,如果我以这种方式找到合适的软件包,则执行 `^search^install` 来重新运行该命令,以 `install` 替换 `search`(效果见这段引言后面的示例)。
这些东西肯定是很基本的但是对我来说却节省了时间。” —Steve Morris
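插入号替换的效果大致如下(以 `htop` 软件包为例Bash 会先打印替换后的命令,然后再执行它):
```
$ sudo dnf search htop
$ ^search^install
sudo dnf install htop
```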
“我的炫酷终端技巧不是我在终端上执行的操作,而是我使用的终端。有时候我只是想要使用 Apple II 或旧式琥珀色终端的感觉,那我就启动了 Cool-Retro-Term。它的截屏可以在这个[网站][2]上找到。” —Jim Hall
“可能是用 `ssh -X` 来在其他计算机上运行图形程序。(在某些终端仿真器上,例如 gnome-terminal可以用 `C-S c` 和 `C-S v` 复制/粘贴。)我不确定它的价值有多大,但通过 ssh 启动图形程序确实很有趣。最近,我需要登录另一台计算机,但是我的孩子们可以在笔记本电脑的大屏幕上看到它。这个[链接][3]向我展示了一些我从未见过的内容:通过局域网从我的笔记本电脑上镜像来自另一台计算机屏幕上的活动会话(`x11vnc -desktop`并能够同时从两台计算机上进行控制。” —Kyle R. Conway
“你可以安装 `sl``$ sudo apt install sl` 或 `$ sudo dnf install sl`),并且当在 Bash 中输入命令 `sl`一个基于文本的蒸汽机车就会在显示屏上移动。” —Don Watkins
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/linux-terminal-trick
作者:[Opensource.com][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/admin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://github.com/Swordfish90/cool-retro-term
[3]: https://elinux.org/Screen_Casting_on_a_Raspberry_Pi

View File

@ -0,0 +1,202 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11830-1.html)
[#]: subject: (Setting up passwordless Linux logins using public/private keys)
[#]: via: (https://www.networkworld.com/article/3514607/setting-up-passwordless-linux-logins-using-publicprivate-keys.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
使用公钥/私钥对设定免密的 Linux 登录方式
======
> 使用一组公钥/私钥对让你不需要密码登录到远程 Linux 系统或使用 ssh 运行命令,这会非常方便,但是设置过程有点复杂。下面是帮助你的方法和脚本。
![](https://img.linux.net.cn/data/attachment/album/202001/29/141343ldps4muy4kp64k4l.jpg)
在 [Linux][1] 系统上设置一个允许你无需密码即可远程登录或运行命令的帐户并不难,但是要使它正常工作,你还需要掌握一些繁琐的细节。在本文,我们将完成整个过程,然后给出一个可以帮助处理琐碎细节的脚本。
设置好之后,如果希望在脚本中运行 `ssh` 命令,尤其是希望配置自动运行的命令,那么免密访问特别有用。
需要注意的是,你不需要在两个系统上使用相同的用户帐户。实际上,你可以把公用密钥用于系统上的多个帐户或多个系统上的不同帐户。
设置方法如下。
### 在哪个系统上启动?
首先,你需要从要发出命令的系统上着手。那就是你用来创建 `ssh` 密钥的系统。你还需要可以访问远程系统上的帐户并在其上运行这些命令。
为了使角色清晰明了,我们将场景中的第一个系统称为 “boss”因为它将发出要在另一个系统上运行的命令。
因此,命令提示符如下:
```
boss$
```
如果你还没有在 boss 系统上为你的帐户设置公钥/私钥对,请使用如下所示的命令创建一个密钥对。注意,你可以在各种加密算法之间进行选择。(一般使用 RSA 或 DSA。注意要在不输入密码的情况下访问系统你需要在下面的对话框中的两个提示符处都不输入密码。
如果你已经有一个与此帐户关联的公钥/私钥对,请跳过此步骤。
```
boss$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/myself/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <== 按下回车键即可
Enter same passphrase again: <== 按下回车键即可
Your identification has been saved in /home/myself/.ssh/id_rsa.
Your public key has been saved in /home/myself/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:1zz6pZcMjA1av8iyojqo6NVYgTl1+cc+N43kIwGKOUI myself@boss
The key's randomart image is:
+---[RSA 3072]----+
| . .. |
| E+ .. . |
| .+ .o + o |
| ..+.. .o* . |
| ... So+*B o |
| + ...==B . |
| . o . ....++. |
|o o . . o..o+ |
|=..o.. ..o o. |
+----[SHA256]-----+
```
上面显示的命令将创建公钥和私钥。其中公钥用于加密,私钥用于解密。因此,这些密钥之间的关系是关键的,私有密钥**绝不**应该被共享。相反,它应该保存在 boss 系统的 `.ssh` 文件夹中。
注意,在创建时,你的公钥和私钥将会保存在 `.ssh` 文件夹中。
下一步是将**公钥**复制到你希望从 boss 系统免密访问的系统。你可以使用 `scp` 命令来完成此操作,但此时你仍然需要输入密码。在本例中,该系统称为 “target”。
```
boss$ scp .ssh/id_rsa.pub myacct@target:/home/myaccount
myacct@target's password:
```
你需要将公钥安装在 target 系统(将运行命令的系统)上。如果你没有 `.ssh` 目录(例如,你从未在该系统上使用过 `ssh`),运行这样的命令将为你设置一个目录:
```
target$ ssh localhost date
target$ ls -la .ssh
total 12
drwx------ 2 myacct myacct 4096 Jan 19 11:48 .
drwxr-xr-x 6 myacct myacct 4096 Jan 19 11:49 ..
-rw-r--r-- 1 myacct myacct 222 Jan 19 11:48 known_hosts
```
仍然在目标系统上你需要将从“boss”系统传输的公钥添加到 `.ssh/authorized_keys` 文件中。如果该文件已经存在,使用下面的命令将把它添加到文件的末尾;如果文件不存在,则创建该文件并添加密钥。
```
target$ cat id_rsa.pub >> .ssh/authorized_keys
```
下一步,你需要确保你的 `authorized_keys` 文件权限为 600。如果还不是执行命令 `chmod 600 .ssh/authorized_keys`
```
target$ ls -l authorized_keys
-rw------- 1 myself myself 569 Jan 19 12:10 authorized_keys
```
还要检查目标系统上 `.ssh` 目录的权限是否设置为 700。如果需要执行 `chmod 700 .ssh` 命令修改权限。
```
target$ ls -ld .ssh
drwx------ 2 myacct myacct 4096 Jan 14 15:54 .ssh
```
此时,你应该能够从 boss 系统免密地在目标系统上远程运行命令了。除非目标系统上的该用户帐户中已经存在来自同一用户和主机的旧公钥,否则这应该可以正常工作;如果存在,你应该删除这条较早的(相互冲突的)条目。
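此时可以做一个快速验证(帐户与主机名沿用上文的示例);如果没有提示输入密码,就说明设置成功了:
```
boss$ ssh myacct@target /bin/hostname
target
```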
### 使用脚本
使用脚本可以使某些工作变得更加容易。但是,在下面的示例脚本中,你会遇到的一个烦人的问题是,在配置免密访问权限之前,你必须多次输入目标用户的密码。一种选择是将脚本分为两部分——需要在 boss 系统上运行的命令和需要在 target 系统上运行的命令。
这是“一步到位”版本的脚本:
```
#!/bin/bash
# NOTE: This script requires that you have the password for the remote acct
# in order to set up password-free access using your public key
LOC=`hostname` # the local system from which you want to run commands from
# w/o a password
# get target system and account
echo -n "target system> "
read REM
echo -n "target user> "
read user
# create a key pair if no public key exists
if [ ! -f ~/.ssh/id_rsa.pub ]; then
    ssh-keygen -t rsa
fi
# ensure a .ssh directory exists in the remote account
echo checking for .ssh directory on remote system
ssh $user@$REM "if [ ! -d /home/$user/.ssh ]; then mkdir /home/$user/.ssh; fi"
# share the public key (using local hostname)
echo copying the public key
scp ~/.ssh/id_rsa.pub $user@$REM:/home/$user/$user-$LOC.pub
# put the public key into the proper location
echo adding key to authorized_keys
ssh $user@$REM "cat /home/$user/$user-$LOC.pub >> /home/$user/.ssh/authorized_ke
ys"
# set permissions on authorized_keys and .ssh (might be OK already)
echo setting permissions
ssh $user@$REM "chmod 600 ~/.ssh/authorized_keys"
ssh $user@$REM "chmod 700 ~/.ssh"
# try it out -- should NOT ask for a password
echo testing -- if no password is requested, you are all set
ssh $user@$REM /bin/hostname
```
脚本已经配置为在你每次必须输入密码时告诉你它正在做什么。交互看起来是这样的:
```
$ ./rem_login_setup
target system> fruitfly
target user> lola
checking for .ssh directory on remote system
lola@fruitfly's password:
copying the public key
lola@fruitfly's password:
id_rsa.pub 100% 567 219.1KB/s 00:00
adding key to authorized_keys
lola@fruitfly's password:
setting permissions
lola@fruitfly's password:
testing -- if no password is requested, you are all set
fruitfly
```
在上面的场景之后,你就可以像这样登录到 lola 的帐户:
```
$ ssh lola@fruitfly
[lola@fruitfly ~]$
```
一旦设置了免密登录,你就可以不需要键入密码从 boss 系统登录到 target 系统,并且运行任意的 `ssh` 命令。以这种免密的方式运行并不意味着你的帐户不安全。然而,根据 target 系统的性质,保护你在 boss 系统上的密码可能变得更加重要。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3514607/setting-up-passwordless-linux-logins-using-publicprivate-keys.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.networkworld.com/article/3143050/linux/linux-hardening-a-15-step-checklist-for-a-secure-linux-server.html#tk.nww-fsb
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,165 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11827-1.html)
[#]: subject: (Wine 5.0 is Released! Heres How to Install it)
[#]: via: (https://itsfoss.com/wine-5-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Wine 5.0 发布了!
======
> Wine 的一个新的主要版本发布了。使用 Wine 5.0,在 Linux 上运行 Windows 应用程序和游戏的体验得到进一步改进。
通过一些努力,你可以使用 Wine [在 Linux 上运行 Windows 应用程序][1]。当你必须使用一个仅在 Windows 上可用的软件时Wine 是一个可以尝试的工具。它支持许多这样的软件。
距 4.0 版本发布几乎一年之后Wine 的一个新的主要版本已经降临,即 Wine 5.0。
Wine 5.0 发布版本引进了几个主要特性和很多显著的更改/改进。在这篇文章中,我将重点介绍新的特性是什么,并且也将提到安装说明。
### 在 Wine 5.0 中有什么新的特性?
![][2]
如他们的[官方声明][3]所述,这是 5.0 发布版本中的关键更改:
* PE 格式的内置模块。
* 支持多显示器。
* 重新实现了 XAudio2。
* 支持 Vulkan 1.1。
* 支持微软安装程序MSI补丁文件。
* 性能提升。
因此,随着 Vulkan 1.1 和对多显示器的支持 —— Wine 5.0 发布版本是一件大事。
除了上面强调的这些关键内容以外,在新版本包含的数千个更改/改进中,你还可以期待更好的控制器支持。
值得注意的是,此版本特别纪念了 **Józef Kucia**vkd3d 项目的首席开发人员)。
他们也已经在[发布说明][4]中提到这一点:
> 这个发布版本特别纪念了 Józef Kucia他于 2019 年 8 月去世,年仅 30 岁。Józef 是 Wine 的 Direct3D 实现的一个主要贡献者,并且是 vkd3d 项目的首席开发人员。我们都非常怀念他的技能和友善。
### 如何在 Ubuntu 和 Linux Mint 上安装 Wine 5.0
> 注意:
> 如果你以前安装过 Wine你应该将其完全移除以避免冲突。此外WineHQ 存储库的密钥最近已被更改,针对你的 Linux 发行版的更多操作指南,你可以参考它的[下载页面][5]。
Wine 5.0 的源码可在它的[官方网站][3]上获得。为了使其工作,你可以阅读更多关于[构建 Wine][6] 的信息。基于 Arch 的用户应该很快就会得到它。
在这里,我将向你展示在 Ubuntu 和其它基于 Ubuntu 的发行版上安装 Wine 5.0 的步骤。请耐心,并按照步骤一步一步安装和使用 Wine。这里涉及几个步骤。
请记住Wine 安装了太多软件包。你会看到大量的软件包列表,下载大小约为 1.3 GB。
### 在 Ubuntu 上安装 Wine 5.0(不适用于 Linux Mint
首先,使用这个命令来移除现存的 Wine
```
sudo apt remove winehq-stable wine-stable wine1.6 wine-mono wine-geco winetricks
```
然后确保添加 32 位体系结构支持:
```
sudo dpkg --add-architecture i386
```
下载并添加官方 Wine 存储库密钥:
```
wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add -
```
现在,接下来的步骤需要添加存储库。为此,你需要首先[知道你的 Ubuntu 版本][7]。
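如果你不确定自己系统的版本代号,可以用下面的命令查询(输出因你的系统而异,这里以 Ubuntu 18.04 的 bionic 为例):
```
$ lsb_release -cs
bionic
```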
对于 **Ubuntu 18.04 和 19.04**,用这个 PPA 添加 FAudio 依赖, **Ubuntu 19.10** 不需要它:
```
sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport
```
现在使用此命令添加存储库:
```
sudo apt-add-repository "deb https://dl.winehq.org/wine-builds/ubuntu $(lsb_release -cs) main"
```
现在你已经添加了正确的存储库,可以使用以下命令安装 Wine 5.0
```
sudo apt update && sudo apt install --install-recommends winehq-stable
```
请注意,尽管[在软件包列表中将 Wine 5 列为稳定版][8],但你仍可能会看到 winehq-stable 装上的是 wine 4.0.3。也许它还没有同步到所有地理位置的镜像。截至今天早上我已经可以看到 Wine 5.0 了。
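你也可以用 `apt policy` 来确认软件源中实际提供给你的版本(输出内容取决于软件源的同步情况):
```
$ apt policy winehq-stable
```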
### 在 Linux Mint 19.1、19.2 和 19.3 中安装 Wine 5.0
正如一些读者通知我的那样,[apt-add 存储库命令][9]不适用于 Linux Mint 19.x 系列。
这是添加自定义存储库的另一种方法。你必须执行与 Ubuntu 相同的步骤。如删除现存的 Wine 包:
```
sudo apt remove winehq-stable wine-stable wine1.6 wine-mono wine-geco winetricks
```
添加 32 位支持:
```
sudo dpkg --add-architecture i386
```
然后添加 GPG 密钥:
```
wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add -
```
添加 FAudio 依赖:
```
sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport
```
现在为 Wine 存储库创建一个新条目:
```
sudo sh -c "echo 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main' >> /etc/apt/sources.list.d/winehq.list"
```
更新软件包列表并安装 Wine
```
sudo apt update && sudo apt install --install-recommends winehq-stable
```
### 总结
你尝试过最新的 Wine 5.0 发布版本吗?如果是的话,在运行中你看到什么改进?
在下面的评论区域,让我知道你对新的发布版本的看法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/wine-5-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/use-windows-applications-linux/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/wine_5.png?ssl=1
[3]: https://www.winehq.org/news/2020012101
[4]: https://www.winehq.org/announce/5.0
[5]: https://wiki.winehq.org/Download
[6]: https://wiki.winehq.org/Building_Wine
[7]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[8]: https://dl.winehq.org/wine-builds/ubuntu/dists/bionic/main/binary-amd64/
[9]: https://itsfoss.com/add-apt-repository-command-not-found/

View File

@ -0,0 +1,456 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11857-1.html)
[#]: subject: (Data streaming and functional programming in Java)
[#]: via: (https://opensource.com/article/20/1/javastream)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
Java 中的数据流和函数式编程
======
> 学习如何使用 Java 8 中的流 API 和函数式编程结构。
![](https://img.linux.net.cn/data/attachment/album/202002/06/002505flazlb4cg4aavvb4.jpg)
当 Java SE 8又名核心 Java 8在 2014 年被推出时,它引入了一些更改,从根本上影响了用它进行的编程。这些更改中有两个紧密相连的部分:流 API 和函数式编程构造。本文使用代码示例,从基础到高级特性,介绍每个部分并说明它们之间的相互作用。
### 基础特性
流 API 是在数据序列中迭代元素的简洁而高级的方法。包 `java.util.stream``java.util.function` 包含了用于流 API 和相关函数式编程构造的新库。当然,代码示例胜过千言万语。
下面的代码段用大约 2,000 个随机整数值填充了一个 `List`
```
Random rand = new Random();
List<Integer> list = new ArrayList<Integer>(); // 空 list
for (int i = 0; i < 2048; i++) list.add(rand.nextInt()); // 填充它
```
另一个 `for` 循环可用于遍历这个填充好的列表,将偶数值收集到另一个列表中。
流 API 提供了一种更简洁的方法来执行此操作:
```
List<Integer> evens = list
    .stream()                      // 流化 list
    .filter(n -> (n & 0x1) == 0)   // 滤除奇数值
    .collect(Collectors.toList()); // 收集偶数值
```
这个例子有三个来自流 API 的函数:
- `stream` 函数可以将**集合**转换为流,而流是一个每次可访问一个值的传送带。流化是惰性的(因此也是高效的),因为值是根据需要产生的,而不是一次性产生的。
- `filter` 函数确定哪些流的值(如果有的话)通过了处理管道中的下一个阶段,即 `collect` 阶段。`filter` 函数是 <ruby>高阶的<rt>higher-order</rt></ruby>,因为它的参数是一个函数 —— 在这个例子中是一个 lambda 表达式,它是一个未命名的函数,并且是 Java 新的函数式编程结构的核心。
lambda 语法与传统的 Java 完全不同:
```
n -> (n & 0x1) == 0
```
箭头(一个减号后面紧跟着一个大于号)将左边的参数列表与右边的函数体分隔开。参数 `n` 没有显式声明类型,不过也可以显式声明。在任何情况下,编译器都会推断出 `n` 是个 `Integer`。如果有多个参数,这些参数将被括在括号中,并用逗号分隔。
在本例中,函数体检查一个整数的最低位(最右)是否为零,这用来表示偶数。过滤器应返回一个布尔值。尽管可以,但该函数的主体中没有显式的 `return`。如果主体没有显式的 `return`,则主体的最后一个表达式即是返回值。在这个例子中,主体按照 lambda 编程的思想编写,由一个简单的布尔表达式 `(n & 0x1) == 0` 组成。
- `collect` 函数将偶数值收集到引用为 `evens` 的列表中。如下例所示,`collect` 函数是线程安全的,因此,即使在多个线程之间共享了过滤操作,该函数也可以正常工作。
### 方便的功能和轻松实现多线程
在生产环境中,数据流的源可能是文件或网络连接。为了学习流 API, Java 提供了诸如 `IntStream` 这样的类型,它可以用各种类型的元素生成流。这里有一个 `IntStream` 的例子:
```
IntStream                         // 整型流
    .range(1, 2048)               // 生成此范围内的整型流
    .parallel()                   // 为多个线程分区数据
    .filter(i -> ((i & 0x1) > 0)) // 奇偶校验 - 只允许奇数通过
    .forEach(System.out::println); // 打印每个值
```
`IntStream` 类型包括一个 `range` 函数,该函数在指定的范围内生成一个整数值流,在本例中,以 1 为增量,从 1 递增到 2048。`parallel` 函数自动划分该工作到多个线程中,在各个线程中进行过滤和打印。(线程数通常与主机系统上的 CPU 数量匹配。)函数 `forEach` 的参数是一个*方法引用*,在本例中是对封装在 `System.out` 中的 `println` 方法的引用,而 `System.out` 的类型为 `PrintStream`。方法和构造器引用的语法将在稍后讨论。
由于具有多线程,因此整数值整体上以任意顺序打印,但在给定线程中是按顺序打印的。例如,如果线程 T1 打印 409 和 411那么 T1 将按照顺序 409-411 打印,但是其它某个线程可能会预先打印 2045。`parallel` 调用后面的线程是并发执行的,因此它们的输出顺序是不确定的。
### map/reduce 模式
*map/reduce* 模式在处理大型数据集方面变得很流行。一个 map/reduce 宏操作由两个微操作构成。首先,将数据分散(<ruby>映射<rt>mapped</rt></ruby>)到各个工作程序中,然后将单独的结果收集在一起 —— 也可能收集统计起来成为一个值,即<ruby>归约<rt>reduction</rt></ruby>。归约可以采用不同的形式,如以下示例所示。
下面 `Number` 类的实例用 `EVEN``ODD` 表示有奇偶校验的整数值:
```
public class Number {
    enum Parity { EVEN, ODD }
    private int value;
    public Number(int n) { setValue(n); }
    public void setValue(int value) { this.value = value; }
    public int getValue() { return this.value; }
    public Parity getParity() {
        return ((value & 0x1) == 0) ? Parity.EVEN : Parity.ODD;
    }
    public void dump() {
        System.out.format("Value: %2d (parity: %s)\n", getValue(),
                          (getParity() == Parity.ODD ? "odd" : "even"));
    }
}
```
下面的代码演示了用 `Number` 流进行 map/reduce 的情形,从而表明流 API 不仅可以处理 `int``float` 等基本类型,还可以处理程序员自定义的类类型。
在下面的代码段中,使用了 `parallelStream` 而不是 `stream` 函数对随机整数值列表进行流化处理。与前面介绍的 `parallel` 函数一样,`parallelStream` 变体也可以自动执行多线程。
```
final int howMany = 200;
Random r = new Random();
Number[] nums = new Number[howMany];
for (int i = 0; i < howMany; i++) nums[i] = new Number(r.nextInt(100));
List<Number> listOfNums = Arrays.asList(nums); // 将数组转化为 list
Integer sum4All = listOfNums
    .parallelStream()           // 自动执行多线程
    .mapToInt(Number::getValue) // 使用方法引用,而不是 lambda
    .sum();                     // 将流中的值求和
System.out.println("The sum of the randomly generated values is: " + sum4All);
```
高阶的 `mapToInt` 函数可以接受一个 lambda 作为参数,但在本例中,它接受一个方法引用,即 `Number::getValue`。`getValue` 方法不需要参数,它返回给定的 `Number` 实例的 `int` 值。语法并不复杂:类名 `Number` 后跟一个双冒号和方法名。回想一下先前的例子 `System.out::println`,它在 `System` 类中的 `static` 属性 `out` 后面有一个双冒号。
方法引用 `Number::getValue` 可以用下面的 lambda 表达式替换。参数 `n` 是流中的 `Number` 实例中的之一:
```
mapToInt(n -> n.getValue())
```
通常lambda 表达式和方法引用是可互换的:如果像 `mapToInt` 这样的高阶函数可以采用一种形式作为参数,那么这个函数也可以采用另一种形式。这两个函数式编程结构具有相同的目的 —— 对作为参数传入的数据执行一些自定义操作。在两者之间进行选择通常是为了方便。例如lambda 可以在没有封装类的情况下编写,而方法则不能。我的习惯是使用 lambda除非已经有了适当的封装方法。
当前示例末尾的 `sum` 函数通过结合来自 `parallelStream` 线程的部分和,以线程安全的方式进行归约。但是,程序员有责任确保在 `parallelStream` 调用引发的多线程过程中,程序员自己的函数调用(在本例中为 `getValue`)是线程安全的。
最后一点值得强调。lambda 语法鼓励编写<ruby>纯函数<rt>pure function</rt></ruby>,即函数的返回值仅取决于传入的参数(如果有);纯函数没有副作用,例如更新一个类中的 `static` 字段。因此,纯函数是线程安全的,并且如果传递给高阶函数的函数参数(例如 `filter``map` )是纯函数,则流 API 效果最佳。
对于更细粒度的控制,有另一个流 API 函数,名为 `reduce`,可用于对 `Number` 流中的值求和:
```
Integer sum4AllHarder = listOfNums
    .parallelStream()                          // 多线程
    .map(Number::getValue)                     // 每个 Number 的值
    .reduce(0, (sofar, next) -> sofar + next); // 求和
```
此版本的 `reduce` 函数带有两个参数,第二个参数是一个函数:
- 第一个参数(在这种情况下为零)是*特征*值,该值用作求和操作的初始值,并且在求和过程中流结束时用作默认值。
- 第二个参数是*累加器*,在本例中,这个 lambda 表达式有两个参数:第一个参数(`sofar`)是正在运行的和,第二个参数(`next`)是来自流的下一个值。运行的和以及下一个值相加,然后更新累加器。请记住,由于开始时调用了 `parallelStream`,因此 `map``reduce` 函数现在都在多线程上下文中执行。
在到目前为止的示例中,流值被收集,然后被归约,但是,通常情况下,流 API 中的 `Collectors` 可以累积值,而不需要将它们归约到单个值。正如下一个代码段所示,收集活动可以生成任意丰富的数据结构。该示例使用与前面示例相同的 `listOfNums`
```
Map<Number.Parity, List<Number>> numMap = listOfNums
    .parallelStream()
    .collect(Collectors.groupingBy(Number::getParity));
List<Number> evens = numMap.get(Number.Parity.EVEN);
List<Number> odds = numMap.get(Number.Parity.ODD);
```
第一行中的 `numMap` 指的是一个 `Map`,它的键是一个 `Number` 奇偶校验位(`ODD` 或 `EVEN`),其值是一个具有指定奇偶校验位值的 `Number` 实例的 `List`。同样,通过 `parallelStream` 调用进行多线程处理,然后 `collect` 调用(以线程安全的方式)将部分结果组装到 `numMap` 引用的 `Map` 中。然后,在 `numMap` 上调用 `get` 方法两次,一次获取 `evens`,第二次获取 `odds`
实用函数 `dumpList` 再次使用来自流 API 的高阶 `forEach` 函数:
```
private void dumpList(String msg, List<Number> list) {
    System.out.println("\n" + msg);
    list.stream().forEach(n -> n.dump()); // 或者使用 forEach(Number::dump)
}
```
这是示例运行中程序输出的一部分:
```
The sum of the randomly generated values is: 3322
The sum again, using a different method: 3322
Evens:
Value: 72 (parity: even)
Value: 54 (parity: even)
...
Value: 92 (parity: even)
Odds:
Value: 35 (parity: odd)
Value: 37 (parity: odd)
...
Value: 41 (parity: odd)
```
### 用于代码简化的函数式结构
函数式结构(如方法引用和 lambda 表达式)非常适合在流 API 中使用。这些构造代表了 Java 中对高阶函数的主要简化。即使在糟糕的过去Java 也通过 `Method``Constructor` 类型在技术上支持高阶函数,这些类型的实例可以作为参数传递给其它函数。由于其复杂性,这些类型在生产级 Java 中很少使用。例如,调用 `Method` 需要对象引用(如果方法是非**静态**的)或至少一个类标识符(如果方法是**静态**的)。然后,被调用的 `Method` 的参数作为**对象**实例传递给它如果没有发生多态那会出现另一种复杂性则可能需要显式向下转换。相比之下lambda 和方法引用很容易作为参数传递给其它函数。
但是,新的函数式结构在流 API 之外具有其它用途。考虑一个 Java GUI 程序,该程序带有一个供用户按下的按钮,例如,按下以获取当前时间。按钮按下的事件处理程序可能编写如下:
```
JButton updateCurrentTime = new JButton("Update current time");
updateCurrentTime.addActionListener(new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent e) {
        currentTime.setText(new Date().toString());
    }
});
```
这个简短的代码段很难解释。关注第二行,其中方法 `addActionListener` 的参数开始如下:
```
new ActionListener() {
```
这似乎是错误的,因为 `ActionListener` 是一个**抽象**接口,而**抽象**类型不能通过调用 `new` 实例化。但是,事实证明,这里实例化的是别的东西:一个实现此接口的未命名内部类。如果上面的代码封装在名为 `OldJava` 的类中,则该未命名的内部类将被编译为 `OldJava$1.class`。`actionPerformed` 方法在这个未命名的内部类中被重写。
现在考虑使用新的函数式结构进行这个令人耳目一新的更改:
```
updateCurrentTime.addActionListener(e -> currentTime.setText(new Date().toString()));
```
lambda 表达式中的参数 `e` 是一个 `ActionEvent` 实例,而 lambda 的主体是对按钮上的 `setText` 的简单调用。
### 函数式接口和函数组合
到目前为止,用到的 lambda 都是就地编写的。但是,为了方便起见,我们可以像引用封装方法一样引用 lambda 表达式。以下一系列简短示例说明了这一点。
考虑以下接口定义:
```
@FunctionalInterface // 可选,通常省略
interface BinaryIntOp {
    abstract int compute(int arg1, int arg2); // abstract 声明可以被删除
}
```
注释 `@FunctionalInterface` 适用于声明了*唯一*抽象方法的任何接口;在本例中,这个已声明的抽象方法是 `compute`。一些标准接口(例如具有唯一声明方法 `run``Runnable` 接口)同样符合这个要求。该接口可用作引用声明中的目标类型:
```
BinaryIntOp div = (arg1, arg2) -> arg1 / arg2;
div.compute(12, 3); // 4
```
`java.util.function` 提供各种函数式接口。以下是一些示例。
下面的代码段介绍了参数化的 `Predicate` 函数式接口。在此示例中,带有参数 `String``Predicate<String>` 类型可以引用具有 `String` 参数的 lambda 表达式或诸如 `isEmpty` 之类的 `String` 方法。通常情况下Predicate 是一个返回布尔值的函数。
```
Predicate<String> pred = String::isEmpty; // String 方法的 predicate 声明
String[] strings = {"one", "two", "", "three", "four"};
Arrays.asList(strings)
.stream()
.filter(pred) // 过滤掉非空字符串
.forEach(System.out::println); // 只打印空字符串
```
在字符串长度为零的情况下,`isEmpty` Predicate 判定结果为 `true`。 因此,只有空字符串才能进入管道的 `forEach` 阶段。
下一段代码将演示如何将简单的 lambda 或方法引用组合成更丰富的 lambda 或方法引用。考虑这一系列对 `IntUnaryOperator` 类型的引用的赋值,它接受一个整型参数并返回一个整型值:
```
IntUnaryOperator doubled = n -> n * 2;
IntUnaryOperator tripled = n -> n * 3;
IntUnaryOperator squared = n -> n * n;
```
`IntUnaryOperator` 是一个 `FunctionalInterface`,其唯一声明的方法为 `applyAsInt`。现在可以单独使用或以各种组合形式使用这三个引用 `doubled`、`tripled` 和 `squared`
```
int arg = 5;
doubled.applyAsInt(arg); // 10
tripled.applyAsInt(arg); // 15
squared.applyAsInt(arg); // 25
```
以下是一些函数组合的样例:
```
int arg = 5;
doubled.compose(squared).applyAsInt(arg); // 5 求 2 次方后乘 250
tripled.compose(doubled).applyAsInt(arg); // 5 乘 2 后再乘 330
doubled.andThen(squared).applyAsInt(arg); // 5 乘 2 后求 2 次方100
squared.andThen(tripled).applyAsInt(arg); // 5 求 2 次方后乘 375
```
函数组合可以直接使用 lambda 表达式实现,但是引用使代码更简洁。
### 构造器引用
构造器引用是另一种函数式编程构造,而这些引用在比 lambda 和方法引用更微妙的上下文中非常有用。再一次重申,代码示例似乎是最好的解释方式。
考虑这个 [POJO][13] 类:
```
public class BedRocker { // 基岩的居民
    private String name;
    public BedRocker(String name) { this.name = name; }
    public String getName() { return this.name; }
    public void dump() { System.out.println(getName()); }
}
```
该类只有一个构造函数,它需要一个 `String` 参数。给定一个名字数组,目标是生成一个 `BedRocker` 元素数组,每个名字代表一个元素。下面是使用了函数式结构的代码段:
```
String[] names = {"Fred", "Wilma", "Peebles", "Dino", "Baby Puss"};
Stream<BedRocker> bedrockers = Arrays.asList(names).stream().map(BedRocker::new);
BedRocker[] arrayBR = bedrockers.toArray(BedRocker[]::new);
Arrays.asList(arrayBR).stream().forEach(BedRocker::dump);
```
在较高的层次上,这个代码段将名字转换为 `BedRocker` 数组元素。具体来说,代码如下所示。`Stream` 接口(在包 `java.util.stream` 中)可以被参数化,而在本例中,生成了一个名为 `bedrockers``BedRocker` 流。
`Arrays.asList` 实用程序再次用于流化一个数组 `names`,然后将流的每一项传递给 `map` 函数,该函数的参数现在是构造器引用 `BedRocker::new`。这个构造器引用通过在每次调用时生成和初始化一个 `BedRocker` 实例来充当一个对象工厂。在第二行执行之后,名为 `bedrockers` 的流由五项 `BedRocker` 组成。
这个例子可以通过关注高阶 `map` 函数来进一步阐明。在通常情况下,一个映射将一个类型的值(例如,一个 `int`)转换为另一个*相同*类型的值(例如,一个整数的后继):
```
map(n -> n + 1) // 将 n 映射到其后继
```
然而,在 `BedRocker` 这个例子中,转换更加戏剧化,因为一个类型的值(代表一个名字的 `String`)被映射到一个*不同*类型的值,在这个例子中,就是一个 `BedRocker` 实例,这个字符串就是它的名字。转换是通过一个构造器调用来完成的,它是由构造器引用来实现的:
```
map(BedRocker::new) // 将 String 映射到 BedRocker
```
传递给构造器的值是 `names` 数组中的其中一项。
此代码示例的第二行还演示了一个你目前已经非常熟悉的转换:先将数组先转换成 `List`,然后再转换成 `Stream`
```
Stream<BedRocker> bedrockers = Arrays.asList(names).stream().map(BedRocker::new);
```
第三行则是另一种方式 —— 流 `bedrockers` 通过使用*数组*构造器引用 `BedRocker[]::new` 调用 `toArray` 方法:
```
BedRocker[] arrayBR = bedrockers.toArray(BedRocker[]::new);
```
该构造器引用不会创建单个 `BedRocker` 实例,而是创建这些实例的整个数组:该构造器引用现在为 `BedRocker[]::new`,而不是 `BedRocker::new`。为了进行确认,将 `arrayBR` 转换为 `List`,再次对其进行流式处理,以便可以使用 `forEach` 来打印 `BedRocker` 的名字。
```
Fred
Wilma
Peebles
Dino
Baby Puss
```
该示例对数据结构的微妙转换仅用几行代码即可完成,从而突出了可以将 lambda方法引用或构造器引用作为参数的各种高阶函数的功能。
### <ruby>柯里化<rt>Currying</rt></ruby>
*柯里化*函数是指减少函数执行任何工作所需的显式参数的数量(通常减少到一个)。(该术语是为了纪念逻辑学家 Haskell Curry。一般来说函数的参数越少调用起来就越容易也更健壮。回想一下一些需要半打左右参数的噩梦般的函数因此应将柯里化视为简化函数调用的一种尝试。`java.util.function` 包中的接口类型适合于柯里化,如以下示例所示。
引用的 `IntBinaryOperator` 接口类型是为函数接受两个整型参数,并返回一个整型值:
```
IntBinaryOperator mult2 = (n1, n2) -> n1 * n2;
mult2.applyAsInt(10, 20); // 200
mult2.applyAsInt(10, 30); // 300
```
引用 `mult2` 强调了需要两个显式参数,在本例中是 10 和 20。
前面介绍的 `IntUnaryOperator``IntBinaryOperator` 简单,因为前者只需要一个参数,而后者则需要两个参数。两者均返回整数值。因此,目标是将名为 `mult2` 的双参数 `IntBinaryOperator` 柯里化成一个单参数的 `IntUnaryOperator` 版本 `curriedMult2`
考虑 `IntFunction<R>` 类型。此类型的函数采用整型参数,并返回类型为 `R` 的结果,该结果可以是另一个函数 —— 更准确地说,是 `IntBinaryOperator`。让一个 lambda 返回另一个 lambda 很简单:
```
arg1 -> (arg2 -> arg1 * arg2) // 括号可以省略
```
完整的 lambda 以 `arg1` 开头,而该 lambda 的主体以及返回的值是另一个以 `arg2` 开头的 lambda。返回的 lambda 仅接受一个参数(`arg2`),但返回了两个数字的乘积(`arg1` 和 `arg2`)。下面的概述,再加上代码,应该可以更好地进行说明。
以下是如何柯里化 `mult2` 的概述:
- 类型为 `IntFunction<IntUnaryOperator>` 的 lambda 被写入并调用,其整型值为 10。返回的 `IntUnaryOperator` 缓存了值 10因此变成了已柯里化版本的 `mult2`,在本例中为 `curriedMult2`
- 然后使用单个显式参数例如20调用 `curriedMult2` 函数,该参数与缓存的参数(在本例中为 10相乘以生成返回的乘积。
这是代码的详细信息:
```
// 创建一个函数:它接受一个参数 n1返回一个单参数函数 n2 -> n1 * n2
// 后者返回n1 与 n2 乘积的)整型数。
IntFunction<IntUnaryOperator> curriedMult2Maker = n1 -> (n2 -> n1 * n2);
```
调用 `curriedMult2Maker` 生成所需的 `IntUnaryOperator` 函数:
```
// 使用 curriedMult2Maker 获取已柯里化版本的 mult2。
// 参数 10 是上面的 lambda 的 n1。
IntUnaryOperator curriedMult2 = curriedMult2Maker.apply(10);
```
`10` 现在缓存在 `curriedMult2` 函数中,以便 `curriedMult2` 调用中的显式整型参数乘以 10
```
curriedMult2.applyAsInt(20); // 200 = 10 * 20
curriedMult2.applyAsInt(80); // 800 = 10 * 80
```
缓存的值可以随意更改:
```
curriedMult2 = curriedMult2Maker.apply(50); // 缓存 50
curriedMult2.applyAsInt(101); // 5050 = 101 * 50
```
当然,可以通过这种方式创建多个已柯里化版本的 `mult2`,每个版本都有一个 `IntUnaryOperator`
柯里化充分利用了 lambda 的强大功能:可以很容易地编写 lambda 表达式来返回需要的任何类型的值,包括另一个 lambda。
### 总结
Java 仍然是基于类的面向对象的编程语言。但是,借助流 API 及其支持的函数式构造Java 向函数式语言(例如 Lisp迈出了决定性的同时也是受欢迎的一步。结果是 Java 更适合处理现代编程中常见的海量数据流。在函数式方向上迈出的这一步,也使得以前面代码示例中突出显示的管道方式,编写清晰简洁的 Java 代码变得更加容易:
```
dataStream
    .parallelStream() // 多线程以提高效率
    .filter(...)      // 阶段 1
    .map(...)         // 阶段 2
    .filter(...)      // 阶段 3
    ...
    .collect(...);    // 或者,也可以进行归约:阶段 N
```
自动多线程,以 `parallel``parallelStream` 调用为例,建立在 Java 的 fork/join 框架上,该框架支持 <ruby>任务窃取<rt>task stealing</rt></ruby> 以提高效率。假设 `parallelStream` 调用后面的线程池由八个线程组成,并且 `dataStream` 被分成八个分区。某个线程例如T1可能比另一个线程例如T7工作得更快这意味着应该将 T7 的某些任务移到 T1 的工作队列中。这会在运行时自动发生。
在这个简单的多线程世界中,程序员的主要职责是编写线程安全的函数,这些函数作为参数传递给在流 API 中占主导地位的高阶函数。尤其是lambda 鼓励编写纯函数(因而也是线程安全的)。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/javastream
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK (computer screen )
[2]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random
[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+list
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+number
[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+arrays
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+jbutton
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionlistener
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionevent
[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+date
[13]: https://en.wikipedia.org/wiki/Plain_old_Java_object

View File

@ -0,0 +1,626 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11839-1.html)
[#]: subject: (Add scorekeeping to your Python game)
[#]: via: (https://opensource.com/article/20/1/add-scorekeeping-your-python-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
添加计分到你的 Python 游戏
======
> 在本系列的第十一篇有关使用 Python Pygame 模块进行编程的文章中,显示玩家获得战利品或受到伤害时的得分。
![](https://img.linux.net.cn/data/attachment/album/202002/01/154838led0y08y2aqetz1q.jpg)
这是仍在进行中的关于使用 [Pygame][3] 模块在 [Python 3][2] 中创建电脑游戏系列的第十一部分。先前的文章是:
* [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][4]
* [使用 Python 和 Pygame 模块构建一个游戏框架][5]
* [如何在你的 Python 游戏中添加一个玩家][6]
* [用 Pygame 使你的游戏角色移动起来][7]
* [如何向你的 Python 游戏中添加一个敌人][8]
* [在 Pygame 游戏中放置平台][19]
* [在你的 Python 游戏中模拟引力][9]
* [为你的 Python 平台类游戏添加跳跃功能][10]
* [使你的 Python 游戏玩家能够向前和向后跑][11]
* [在你的 Python 平台类游戏中放一些奖励][12]
如果你已经跟随这一系列很久,那么你已经学习了使用 Python 创建一个视频游戏所需的所有基本语法和模式。然而,它仍然缺少一个至关重要的组成部分。这一组成部分不仅对用 Python 编写游戏很重要;无论你探究计算机科学的哪个分支,你都必须精通它:作为一名程序员,通过阅读一种语言或库的文档来学习新的技巧。
幸运的是,你正在阅读本文的事实表明你熟悉文档。为了使你的平台类游戏更加美观,在这篇文章中,你将在游戏屏幕上添加得分和生命值显示。不过,这节课要教的东西,即如何找到一个库的功能以及如何使用这些新的功能,其实并没有多神秘。
### 在 Pygame 中显示得分
现在,既然你有了可以被玩家收集的奖励,那就有充分的理由来记录分数,以便你的玩家看到他们收集了多少奖励。你也可以跟踪玩家的生命值,以便当他们被敌人击中时会有相应结果。
你已经有了跟踪分数和生命值的变量,但是这一切都发生在后台。这篇文章教你在游戏期间在游戏屏幕上以你选择的一种字体来显示这些统计数字。
### 阅读文档
大多数 Python 模块都有文档,即使那些没有文档的模块,也能通过 Python 的帮助功能来进行最小的文档化。[Pygame 的主页面][13] 链接了它的文档。不过Pygame 是一个带有很多文档的大模块,并且它的文档不像在 Opensource.com 上的文章一样,以同样易理解的(和友好的、易解释的、有用的)叙述风格来撰写的。它们是技术文档,并且列出在模块中可用的每个类和函数,各自要求的输入类型等等。如果你不适应参考代码组件描述,这可能会令人不知所措。
在烦恼于库的文档前,第一件要做的事,就是来想想你正在尝试达到的目标。在这种情况下,你想在屏幕上显示玩家的得分和生命值。
在你确定你需要的结果后,想想它需要什么样的组件。你可以从变量和函数的方面考虑这一点,或者,如果你还没有自然地想到这一点,你可以进行一般性思考。你可能意识到需要一些文本来显示一个分数,你希望 Pygame 在屏幕上绘制这些文本。如果你仔细思考,你可能会意识到它与在屏幕上渲染一个玩家、奖励或一个平台并没有多大的不同。
从技术上讲,你*可以*使用数字图形,并让 Pygame 显示这些数字图形。它不是达到你目标的最容易的方法,但是如果它是你唯一知道的方法,那么它是一个有效的方法。不过,如果你参考 Pygame 的文档,你会看到列出的模块之一是 `font`,这是 Pygame 用来使在屏幕上打印文本像打字一样容易的方法。
### 解密技术文档
`font` 文档页面以 `pygame.font.init()` 开始,它列出了用于初始化字体模块的函数。它由 `pygame.init()` 自动地调用,你已经在代码中调用了它。再强调一次,从技术上讲,你已经到达一个*足够好*的点。虽然你尚不知道*如何做*,你知道你*能够*使用 `pygame.font` 函数来在屏幕上打印文本。
然而,如果你阅读更多一些,你会找到这里还有一种更好的方法来打印字体。`pygame.freetype` 模块在文档中的描述方式如下:
> `pygame.freetype` 模块是 `pygame.font` 模块的一个替代品,用于加载和渲染字体。它有原模块的所有功能,外加很多新的功能。
`pygame.freetype` 文档页面的下方,有一些示例代码:
```
import pygame
import pygame.freetype
```
你的代码应该已经导入了 Pygame不过请修改你的 `import` 语句以包含 Freetype 模块:
```
import pygame
import sys
import os
import pygame.freetype
```
### 在 Pygame 中使用字体
`font` 模块的描述中可以看出,显然 Pygame 使用一种字体(不管它的你提供的或内置到 Pygame 的默认字体)在屏幕上渲染字体。滚动浏览 `pygame.freetype` 文档来找到 `pygame.freetype.Font` 函数:
```
pygame.freetype.Font
从支持的字体文件中创建一个新的字体实例。
Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font
pygame.freetype.Font.name
  符合规则的字体名称。
pygame.freetype.Font.path
  字体文件路径。
pygame.freetype.Font.size
  在渲染中使用的默认点大小
```
这描述了如何在 Pygame 中构建一个字体“对象”。把屏幕上的一个简单对象视为一些代码属性的组合对你来说可能不太自然,但是这与你构建英雄和敌人精灵的方式非常类似。你需要一个字体文件,而不是一个图像文件。在你有一个字体文件后,你可以在你的代码中使用 `pygame.freetype.Font` 函数来创建一个字体对象,然后使用该对象来在屏幕上渲染文本。
因为并不是世界上的每个人的电脑上都有完全一样的字体,因此将你选择的字体与你的游戏捆绑在一起是很重要的。要捆绑字体,首先在你的游戏文件夹中创建一个新的目录,放在你为图像而创建的文件目录旁边。称其为 `fonts`
即使你的计算机操作系统随附了几种字体,将这些字体分发给其他人也是非法的。这看起来很奇怪,但法律就是这样运作的。如果想与你的游戏一起随附一种字体,你必须找到一种开源或知识共享授权的字体,以允许你随游戏一起提供该字体。
专门提供自由和合法字体的网站包括:
* [Font Library][14]
* [Font Squirrel][15]
* [League of Moveable Type][16]
当你找到你喜欢的字体后,下载下来。解压缩 ZIP 或 [TAR][17] 文件,并移动 `.ttf``.otf` 文件到你的项目目录下的 `fonts` 文件夹中。
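下面是一个最小的操作示意(压缩包与解压后的文件名均为假设,请替换为你实际下载的文件):
```
$ mkdir -p fonts
$ unzip ~/Downloads/amazdoom.zip -d /tmp/font
$ mv /tmp/font/AmazDooM.ttf fonts/amazdoom.ttf
```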
你没有安装字体到你的计算机上。你只是放置字体到你游戏的 `fonts` 文件夹中,以便 Pygame 可以使用它。如果你想,你*可以*在你的计算机上安装该字体,但是没有必要。重要的是将字体放在你的游戏目录中,这样 Pygame 可以“描绘”字体到屏幕上。
如果字体文件的名称复杂且带有空格或特殊字符,只需要重新命名它即可。文件名称是完全任意的,并且对你来说,文件名称越简单,越容易将其键入你的代码中。
现在告诉 Pygame 你的字体。从文档中你知道,当你至少提供了字体文件路径给 `pygame.freetype.Font` 时(文档明确指出所有其余属性都是可选的),你将在返回中获得一个字体对象:
```
Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font
```
创建一个称为 `myfont` 的新变量来充当你在游戏中字体,并放置 `Font` 函数的结果到这个变量中。这个示例中使用 `amazdoom.ttf` 字体,但是你可以使用任何你想使用的字体。在你的设置部分放置这些代码:
```
font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),"fonts","amazdoom.ttf")
font_size = tx
myfont = pygame.freetype.Font(font_path, font_size)
```
### 在 Pygame 中显示文本
现在你已经创建一个字体对象,你需要一个函数来绘制你想绘制到屏幕上的文本。这和你在你的游戏中绘制背景和平台是相同的原理。
首先,创建一个函数,并使用 `myfont` 对象来创建一些文本,设置颜色为某些 RGB 值。这必须是一个全局函数;它不属于任何具体的类:
```
def stats(score,health):
    myfont.render_to(world, (4, 4), "Score:"+str(score), WHITE, None, size=64)
    myfont.render_to(world, (4, 72), "Health:"+str(health), WHITE, None, size=64)
```
当然,你此刻已经知道,如果它不在主循环中,你的游戏将不会发生任何事,所以在文件的底部添加一个对你的 `stats` 函数的调用:
```
    for e in enemy_list:
        e.move()
    stats(player.score,player.health) # draw text
    pygame.display.flip()
```
尝试你的游戏。
当玩家收集奖励品时,得分会上升。当玩家被敌人击中时,生命值下降。成功!
![Keeping score in Pygame][18]
不过,这里有一个问题。当一个玩家被敌人击中时,生命值会*一路*下降,这是不公平的。你刚刚发现了一个非致命的错误。非致命的错误是应用程序中的那些小问题,它们(通常)不会阻止应用程序启动,甚至不会导致其停止工作,但是它们要么没有意义,要么会惹恼用户。下面是解决这个问题的方法。
### 修复生命值计数
当前生命值系统的问题是敌人接触玩家时Pygame 时钟每滴答一次,生命值都会减少一次。这意味着一个缓慢移动的敌人可能在一次遭遇中就把玩家的生命值降到 -200这不公平。当然你可以给你的玩家设置 10000 的初始生命值而不必担心;这可以工作,并且可能没有人会注意。但是这里有一个更好的方法。
当前,你的代码只检测玩家和敌人何时发生碰撞。生命值问题的修复方法是检测*两个*独立的事件:玩家和敌人何时碰撞,以及碰撞之后它们何时*停止*碰撞。
首先,在你的玩家类中,创建一个变量来代表玩家和敌人碰撞在一起:
```
        self.frame = 0
        self.health = 10
        self.damage = 0
```
在你的 `Player` 类的 `update` 函数中,*移除*这块代码块:
```
        for enemy in enemy_hit_list:
            self.health -= 1
            #print(self.health)
```
并且在它的位置,只要玩家当前没有被击中,检查碰撞:
```
        if self.damage == 0:
            for enemy in enemy_hit_list:
                if not self.rect.contains(enemy):
                    self.damage = self.rect.colliderect(enemy)
```
你可能会在你删除的语句块和你刚刚添加的语句块之间看到相似之处。它们都在做相同的工作,但是新的代码更复杂。最重要的是,只有当玩家*当前*没有被击中时,新的代码才运行。这意味着,当一个玩家和敌人碰撞时,这些代码运行一次,而不是像以前那样一直发生碰撞。
新的代码使用两个新的 Pygame 函数。`self.rect.contains` 函数检查敌人当前是否处于玩家的边界框内;而 `self.rect.colliderect` 会在碰撞发生时,将你的新的 `self.damage` 变量设置为 1无论碰撞状态持续多少个时钟滴答。
现在,即使被一个敌人击中 3 秒,对 Pygame 来说仍然看作一次击中。
我通过通读 Pygame 的文档而发现了这些函数。你没有必要一次读完全部文档,也没有必要阅读每个函数的每个单词。不过,花时间阅读你正在使用的新库或模块的文档是很重要的;否则,你极有可能在重新发明轮子。不要花费一个下午的时间,尝试东拼西凑出一个解决方案,而这个问题你正在使用的框架其实早已解决。阅读文档,了解函数,并从别人的工作中获益!
最后,添加另一个代码语句块,来检测玩家和敌人何时不再接触。到那时,再减少玩家的一点生命值。
```
        if self.damage == 1:
            idx = self.rect.collidelist(enemy_hit_list)
            if idx == -1:
                self.damage = 0   # set damage back to 0
                self.health -= 1  # subtract 1 hp
```
注意,*只有*当玩家被击中时,这个新的代码才会被触发。这意味着,在你的玩家在你的游戏世界正在探索或收集奖励时,这个代码不会运行。它仅当 `self.damage` 变量被激活时运行。
当代码运行时,它使用 `self.rect.collidelist` 来查看玩家是否*仍然*接触敌人列表中的敌人(当未检测到碰撞时,`collidelist` 返回 -1。当玩家不再接触敌人时就是处理 `self.damage` 的时机:将 `self.damage` 变量重置为 0并减少一点生命值。
现在尝试你的游戏。
### 得分反应
现在,你有一个来让你的玩家知道它们分数和生命值的方法,当你的玩家达到某些里程碑时,你可以确保某些事件发生。例如,也许这里有一个特殊的恢复一些生命值的奖励项目。也许一个到达 0 生命值的玩家不得不从一个关卡的起始位置重新开始。
你可以在你的代码中检查这些事件,并且相应地操纵你的游戏世界。你已经知道该怎么做,所以请浏览文档来寻找新的技巧,并且独立地尝试这些技巧。
这里是到目前为止所有的代码:
```
#!/usr/bin/env python3
# draw a world
# add a player and player control
# add player movement
# add enemy and basic collision
# add platform
# add gravity
# add jumping
# add scrolling
# add loot
# add score
# GNU All-Permissive License
# Copying and distribution of this file, with or without modification,
# are permitted in any medium without royalty provided the copyright
# notice and this notice are preserved. This file is offered as-is,
# without any warranty.
import pygame
import sys
import os
import pygame.freetype
'''
Objects
'''
class Platform(pygame.sprite.Sprite):
    # x location, y location, img width, img height, img file
    def __init__(self,xloc,yloc,imgw,imgh,img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images',img)).convert()
        self.image.convert_alpha()
        self.rect = self.image.get_rect()
        self.rect.y = yloc
        self.rect.x = xloc
class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.movex = 0
        self.movey = 0
        self.frame = 0
        self.health = 10
        self.damage = 0
        self.collide_delta = 0
        self.jump_delta = 6
        self.score = 1
        self.images = []
        for i in range(1,9):
            img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
            img.convert_alpha()
            img.set_colorkey(ALPHA)
            self.images.append(img)
        self.image = self.images[0]
        self.rect = self.image.get_rect()
    def jump(self,platform_list):
        self.jump_delta = 0
    def gravity(self):
        self.movey += 3.2 # how fast player falls
        if self.rect.y > worldy and self.movey >= 0:
            self.movey = 0
            self.rect.y = worldy-ty
    def control(self,x,y):
        '''
        control player movement
        '''
        self.movex += x
        self.movey += y
    def update(self):
        '''
        Update sprite position
        '''
        self.rect.x = self.rect.x + self.movex
        self.rect.y = self.rect.y + self.movey
        # moving left
        if self.movex < 0:
            self.frame += 1
            if self.frame > ani*3:
                self.frame = 0
            self.image = self.images[self.frame//ani]
        # moving right
        if self.movex > 0:
            self.frame += 1
            if self.frame > ani*3:
                self.frame = 0
            self.image = self.images[(self.frame//ani)+4]
        # collisions
        enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
        if self.damage == 0:
            for enemy in enemy_hit_list:
                if not self.rect.contains(enemy):
                    self.damage = self.rect.colliderect(enemy)
        if self.damage == 1:
            idx = self.rect.collidelist(enemy_hit_list)
            if idx == -1:
                self.damage = 0 # set damage back to 0
                self.health -= 1 # subtract 1 hp
        loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
        for loot in loot_hit_list:
            loot_list.remove(loot)
            self.score += 1
            print(self.score)
        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.collide_delta = 0 # stop jumping
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y+ty
            else:
                self.rect.y = p.rect.y-ty
        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.movey = 0
            self.rect.y = worldy-ty-ty
            self.collide_delta = 0 # stop jumping
            if self.rect.y > g.rect.y:
                self.health -=1
                print(self.health)
        if self.collide_delta < 6 and self.jump_delta < 6:
            self.jump_delta = 6*2
            self.movey -= 33 # how high to jump
            self.collide_delta += 6
            self.jump_delta += 6
class Enemy(pygame.sprite.Sprite):
    '''
    Spawn an enemy
    '''
    def __init__(self,x,y,img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images',img))
        self.movey = 0
        #self.image.convert_alpha()
        #self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0
    def move(self):
        '''
        enemy movement
        '''
        distance = 80
        speed = 8
        self.movey += 3.2
        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance*2:
            self.rect.x -= speed
        else:
            self.counter = 0
        self.counter += 1
        if not self.rect.y >= worldy-ty-ty:
            self.rect.y += self.movey
        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y+ty
            else:
                self.rect.y = p.rect.y-ty
        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.rect.y = worldy-ty-ty
class Level():
    def bad(lvl,eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
            enemy_list = pygame.sprite.Group() # create enemy group
            enemy_list.add(enemy) # add enemy to group
        if lvl == 2:
            print("Level " + str(lvl) )
        return enemy_list
    def loot(lvl,tx,ty):
        if lvl == 1:
            loot_list = pygame.sprite.Group()
            loot = Platform(200,ty*7,tx,ty, 'loot_1.png')
            loot_list.add(loot)
        if lvl == 2:
            print(lvl)
        return loot_list
    def ground(lvl,gloc,tx,ty):
        ground_list = pygame.sprite.Group()
        i=0
        if lvl == 1:
            while i < len(gloc):
                ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
                ground_list.add(ground)
                i=i+1
        if lvl == 2:
            print("Level " + str(lvl) )
        return ground_list
    def platform(lvl,tx,ty):
        plat_list = pygame.sprite.Group()
        ploc = []
        i=0
        if lvl == 1:
            ploc.append((20,worldy-ty-128,3))
            ploc.append((300,worldy-ty-256,3))
            ploc.append((500,worldy-ty-128,4))
            while i < len(ploc):
                j=0
                while j <= ploc[i][2]:
                    plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
                    plat_list.add(plat)
                    j=j+1
                print('run' + str(i) + str(ploc[i]))
                i=i+1
        if lvl == 2:
            print("Level " + str(lvl) )
        return plat_list
def stats(score,health):
    myfont.render_to(world, (4, 4), "Score:"+str(score), SNOWGRAY, None, size=64)
    myfont.render_to(world, (4, 72), "Health:"+str(health), SNOWGRAY, None, size=64)
'''
Setup
'''
worldx = 960
worldy = 720
fps = 40 # frame rate
ani = 4 # animation cycles
clock = pygame.time.Clock()
pygame.init()
main = True
BLUE = (25,25,200)
BLACK = (23,23,23 )
WHITE = (254,254,254)
SNOWGRAY = (137,164,166)
ALPHA = (0,255,0)
world = pygame.display.set_mode([worldx,worldy])
backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
backdropbox = world.get_rect()
player = Player() # spawn player
player.rect.x = 0
player.rect.y = 0
player_list = pygame.sprite.Group()
player_list.add(player)
steps = 10
forwardx = 600
backwardx = 230
eloc = []
eloc = [200,20]
gloc = []
tx = 64 #tile size
ty = 64 #tile size
font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),"fonts","amazdoom.ttf")
font_size = tx
myfont = pygame.freetype.Font(font_path, font_size)
i=0
while i <= (worldx/tx)+tx:
    gloc.append(i*tx)
    i=i+1
enemy_list = Level.bad( 1, eloc )
ground_list = Level.ground( 1,gloc,tx,ty )
plat_list = Level.platform( 1,tx,ty )
loot_list = Level.loot(1,tx,ty)
'''
Main loop
'''
while main == True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit(); sys.exit()
            main = False
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                print("LEFT")
                player.control(-steps,0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                print("RIGHT")
                player.control(steps,0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                print('jump')
        if event.type == pygame.KEYUP:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                player.control(steps,0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                player.control(-steps,0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                player.jump(plat_list)
            if event.key == ord('q'):
                pygame.quit()
                sys.exit()
                main = False
    # scroll the world forward
    if player.rect.x >= forwardx:
        scroll = player.rect.x - forwardx
        player.rect.x = forwardx
        for p in plat_list:
            p.rect.x -= scroll
        for e in enemy_list:
            e.rect.x -= scroll
        for l in loot_list:
            l.rect.x -= scroll
    # scroll the world backward
    if player.rect.x <= backwardx:
        scroll = backwardx - player.rect.x
        player.rect.x = backwardx
        for p in plat_list:
            p.rect.x += scroll
        for e in enemy_list:
            e.rect.x += scroll
        for l in loot_list:
            l.rect.x += scroll
    world.blit(backdrop, backdropbox)
    player.gravity() # check gravity
    player.update()
    player_list.draw(world) #refresh player position
    enemy_list.draw(world) # refresh enemies
    ground_list.draw(world) # refresh ground
    plat_list.draw(world) # refresh platforms
    loot_list.draw(world) # refresh loot
    for e in enemy_list:
        e.move()
    stats(player.score,player.health) # draw text
    pygame.display.flip()
    clock.tick(fps)
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/add-scorekeeping-your-python-game
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_maze.png?itok=mZ5LP4-X (connecting yellow dots in a maze)
[2]: https://www.python.org/
[3]: https://www.pygame.org/news
[4]: https://linux.cn/article-9071-1.html
[5]: https://linux.cn/article-10850-1.html
[6]: https://linux.cn/article-10858-1.html
[7]: https://linux.cn/article-10874-1.html
[8]: https://linux.cn/article-10883-1.html
[9]: https://linux.cn/article-11780-1.html
[10]: https://linux.cn/article-11790-1.html
[11]: https://linux.cn/article-11819-1.html
[12]: https://linux.cn/article-11828-1.html
[13]: http://pygame.org/news
[14]: https://fontlibrary.org/
[15]: https://www.fontsquirrel.com/
[16]: https://www.theleagueofmoveabletype.com/
[17]: https://opensource.com/article/17/7/how-unzip-targz-file
[18]: https://opensource.com/sites/default/files/uploads/pygame-score.jpg (Keeping score in Pygame)
[19]: https://linux.cn/article-10902-1.html

View File

@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11841-1.html)
[#]: subject: (My favorite Bash hacks)
[#]: via: (https://opensource.com/article/20/1/bash-scripts-aliases)
[#]: author: (Katie McLaughlin https://opensource.com/users/glasnt)
我珍藏的 Bash 秘籍
======
> 通过别名和其他捷径来提高你经常忘记的那些事情的效率。
![bash logo on green background][1]
要是你整天使用计算机,能够找到那些需要重复执行的命令,并把它们记下来以便以后轻松使用,那就太棒了。它们全都呆在那里,藏在 `~/.bashrc` 中(或 [zsh 用户][2]的 `~/.zshrc` 中),等待着改善你的生活!
在本文中,我分享了我最喜欢的这些助手命令,对于我经常遗忘的事情,它们很有用,也希望这可以帮助到你,以及为你解决一些经常头疼的问题。
### 完事吱一声
当我执行一个需要长时间运行的命令时,我经常采用多任务的方式,然后就必须回头去检查该操作是否已完成。然而通过有用的 `say` 命令,现在就不用再这样了(这是在 MacOS 上;请根据你的本地环境更改为等效的方式):
```
function looooooooong {
    START=$(date +%s.%N)
    $*
    EXIT_CODE=$?
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    RES=$(python -c "diff = $DIFF; min = int(diff / 60); print('%s min' % min)")
    result="$1 completed in $RES, exit code $EXIT_CODE."
    echo -e "\n⏰ $result"
    ( say -r 250 $result 2>&1 > /dev/null & )
}
```
这个命令会记录命令的开始和结束时间,计算所需的分钟数,并“说”出调用的命令、花费的时间和退出码。当简单的控制台铃声无法使用时,我发现这个超级有用。
### 安装小助手
我在小时候就开始使用 Ubuntu而我需要学习的第一件事就是如何安装软件包。我曾经首先添加的别名之一是它的助手根据当天的流行梗命名的
```
alias canhas="sudo apt-get install -y"
```
### GPG 签名
有时候,我必须在没有 GPG 扩展程序或应用程序的情况下给电子邮件签署 [GPG][3] 签名,我会跳到命令行并使用以下令人讨厌的别名:
```
alias gibson="gpg --encrypt --sign --armor"
alias ungibson="gpg --decrypt"
```
### Docker
Docker 的子命令很多,而 Docker compose 的更多。我曾经总是忘记 `--rm` 标志,但有了这些有用的别名,就再也不会忘了:
```
alias dc="docker-compose"
alias dcr="docker-compose run --rm"
alias dcb="docker-compose run --rm --build"
```
### Google Cloud 的 gcurl 助手
对于我来说Google Cloud 是一个相对较新的东西,而它有[极多的文档][4]。`gcurl` 是一个别名,可确保在用带有身份验证标头的本地 `curl` 命令连接 Google Cloud API 时,可以获得所有正确的标头。
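链接文档中定义的 `gcurl` 大致是下面这样一个别名(具体细节请以 Google 的文档为准):
```
alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json"'
```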
### Git 和 ~/.gitignore
我工作中用 Git 很多,因此我有一个专门的部分来介绍 Git 助手。
我最有用的助手之一是我用来克隆 GitHub 存储库的。你不必运行:
```
git clone git@github.com:org/repo /Users/glasnt/git/org/repo
```
我设置了一个克隆函数:
```
clone(){
    echo Cloning $1 to ~/git/$1
    cd ~/git
    git clone git@github.com:$1 $1
    cd $1
}
```
尽管每次在 `~/.bashrc` 文件中看到它时,我总会忘记它的存在并傻笑一下,但我还有一个“刷新上游”命令:
```
alias yoink="git checkout master && git fetch upstream master && git merge upstream/master"
```
给 Git 一族的另一个助手是全局忽略文件。在你的 `git config --global --list` 中,你应该看到一个 `core.excludesfile`。如果没有,请[创建一个][6],然后将你总是放到各个 `.gitignore` 文件中的内容填满它。作为 MacOS 上的 Python 开发人员,对我来说,这些内容是:
```
.DS_Store     # macOS clutter
venv/         # I never want to commit my virtualenv
*.egg-info/*  # ... nor any locally compiled packages
__pycache__   # ... or source
*.swp         # ... nor any files open in vim
```
你可以在 [Gitignore.io][7] 或 GitHub 上的 [Gitignore 存储库][8]上找到其他建议。
### 轮到你了
你最喜欢的助手命令是什么?请在评论中分享。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/bash-scripts-aliases
作者:[Katie McLaughlin][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/glasnt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/19/9/getting-started-zsh
[3]: https://gnupg.org/
[4]: https://cloud.google.com/service-infrastructure/docs/service-control/getting-started
[5]: mailto:git@github.com
[6]: https://help.github.com/en/github/using-git/ignoring-files#create-a-global-gitignore
[7]: https://www.gitignore.io/
[8]: https://github.com/github/gitignore

View File

@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11846-1.html)
[#]: subject: (Keep a journal of your activities with this Python program)
[#]: via: (https://opensource.com/article/20/1/python-journal)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用这个 Python 程序记录你的活动
======
> jrnl 可以创建可搜索、带时间戳、可导出、加密的(如果需要)的日常活动日志。在我们的 20 个使用开源提升生产力的系列的第八篇文章中了解更多。
![](https://img.linux.net.cn/data/attachment/album/202002/03/105455tx03zo2pu7woyusp.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 jrnl 记录日志
在我的公司,许多人会在下班之前在 Slack 上发送一个“一天结束”的状态。在有着许多项目和全球化的团队里,这是一个分享你已完成、未完成以及你需要哪些帮助的一个很好的方式。但有时候我太忙了,以至于我忘了做了什么。这时候就需要记录日志了。
![jrnl][2]
打开一个文本编辑器并在你做一些事的时候添加一行很容易。但是在需要找出你在什么时候做的笔记,或者要快速提取相关的行时会有挑战。幸运的是,[jrnl][3] 可以提供帮助。
jrnl 能让你在命令行中快速输入条目、搜索过去的条目并导出为 HTML 和 Markdown 等富文本格式。你可以有多个日志,这意味着你可以将工作条目与私有条目分开。它将条目存储为纯文本,因此即使 jrnl 停止工作,数据也不会丢失。
由于 jrnl 是一个 Python 程序,最简单的安装方法是使用 `pip3 install jrnl`。这将确保你获得最新和最好的版本。第一次运行它会询问一些问题,接下来就能正常使用。
![jrnl's first run][4]
现在,每当你需要做笔记或记录日志时,只需输入 `jrnl <some text>`,它将带有时间戳的记录保存到默认文件中。你可以使用 `jrnl -on YYYY-MM-DD` 搜索特定日期条目,`jrnl -from YYYY-MM-DD` 搜索在那日期之后的条目,以及用 `jrnl -to YYYY-MM-DD` 搜索到那日期的条目。搜索词可以与 `-and` 参数结合使用,允许像 `jrnl -from 2019-01-01 -and -to 2019-12-31` 这类搜索。
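一次典型的使用过程大致如下(条目内容仅为示意):
```
$ jrnl 和设计团队讨论了新的登录页面
$ jrnl -from 2020-01-01 -and -to 2020-01-31
```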
你还可以使用 `--edit` 标志编辑日志中的条目。开始之前,通过编辑文件 `~/.config/jrnl/jrnl.yaml` 来设置默认编辑器。你还可以指定日志使用什么文件、用于标签的特殊字符以及一些其他选项。现在,重要的是设置编辑器。我使用 Vimjrnl 的文档中有一些使用其他编辑器如 VSCode 和 Sublime Text 的[有用提示][5]。
![Example jrnl config file][6]
jrnl 还可以加密日志文件。通过设置全局 `encrypt` 变量,你可以告诉 jrnl 加密你定义的所有日志;也可以在配置文件中针对单个日志设置 `encrypt: true` 来单独加密它。
```
journals:
  default: ~/journals/journal.txt
  work: ~/journals/work.txt
  private:
    journal: ~/journals/private.txt
    encrypt: true
```
如果日志尚未加密,在对它进行任何操作时,系统都会提示你输入密码。日志文件将加密保存在磁盘上,以免受窥探。[jrnl 文档][7] 中包含其工作原理、使用哪些加密方式等的更多信息。
![Encrypted jrnl file][8]
日志记录帮助我记住什么时候做了什么事,并在我需要的时候能够找到它。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/python-journal
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/notebook-writing-pen.jpg?itok=uA3dCfu_ (Writing in a notebook)
[2]: https://opensource.com/sites/default/files/uploads/productivity_8-1.png (jrnl)
[3]: https://jrnl.sh/
[4]: https://opensource.com/sites/default/files/uploads/productivity_8-2.png (jrnl's first run)
[5]: https://jrnl.sh/recipes/#external-editors
[6]: https://opensource.com/sites/default/files/uploads/productivity_8-3.png (Example jrnl config file)
[7]: https://jrnl.sh/encryption/
[8]: https://opensource.com/sites/default/files/uploads/productivity_8-4.png (Encrypted jrnl file)

View File

@ -0,0 +1,128 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11838-1.html)
[#]: subject: (How to Set or Change Timezone in Ubuntu Linux [Beginners Tip])
[#]: via: (https://itsfoss.com/change-timezone-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
如何在 Ubuntu Linux 中设置或更改时区
======
[你安装 Ubuntu 时][1],它会要求你设置时区。如果你选择一个错误的时区,或者你移动到世界的一些其它地方,你可以很容易地在以后更改它。
### 如何在 Ubuntu 和其它 Linux 发行版中更改时区
这里有两种方法来更改 Ubuntu 中的时区。你可以使用图形化设置或在终端中使用 `timedatectl` 命令。你也可以直接更改 `/etc/timezone` 文件,但是我不建议这样做。
在这篇初学者教程中,我将向你展示图形化和终端两种方法:
* [通过 GUI 更改 Ubuntu 中的时区][2] (适合桌面用户)
* [通过命令行更改 Ubuntu 中的时区][3] (桌面和服务器都工作)
![][4]
#### 方法 1: 通过终端更改 Ubuntu 时区
[Ubuntu][5] 或一些使用 systemd 的其它发行版可以在 Linux 终端中使用 `timedatectl` 命令来设置时区。
你可以使用没有任何参数的 `timedatectl` 命令来检查当前是日期和时区设置:
```
$ timedatectl
Local time: Sat 2020-01-18 17:39:52 IST
Universal time: Sat 2020-01-18 12:09:52 UTC
RTC time: Sat 2020-01-18 12:09:52
Time zone: Asia/Kolkata (IST, +0530)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
```
正如你在上面的输出中所见,我的系统使用的是 Asia/Kolkata 时区。它还告诉我,本地时间比世界时早 5 小时 30 分钟。
要在 Linux 中设置时区,你需要知道准确的时区名称,并且必须使用正确的时区格式(时区格式是“洲/城市”)。
为获取时区列表,使用 `timedatectl` 命令的 `list-timezones` 参数:
```
timedatectl list-timezones
```
它将向你显示大量可用的时区列表。
![Timezones List][6]
你可以使用向上箭头和向下箭头或 `PgUp``PgDown` 键来在页面之间移动。
你也可以 `grep` 输出,并搜索你的时区。例如,假如你正在寻找欧洲的时区,你可以使用:
```
timedatectl list-timezones | grep -i europe
```
比方说,你想把时区设置为巴黎时区。这里要使用的时区值是 Europe/Paris
```
timedatectl set-timezone Europe/Paris
```
它虽然不显示任何成功信息,但是时区会立即更改。你不需要重新启动或注销。
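除了下面要介绍的 `date` 命令,你也可以再次运行 `timedatectl` 来确认更改(输出仅为示意):
```
$ timedatectl | grep "Time zone"
Time zone: Europe/Paris (CET, +0100)
```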
记住,虽然你不需要成为 root 用户并对命令使用 `sudo`,但是你的账户仍然需要拥有管理员权限才能更改时区。
你可以使用 [date 命令][7] 来验证更改后的时间和时区:
```
$ date
Sat Jan 18 13:56:26 CET 2020
```
#### 方法 2: 通过 GUI 更改 Ubuntu 时区
按下 `super` 键(Windows 键),并搜索“设置”:
![Applications Menu Settings][8]
在左侧边栏中向下滚动一点,找到“详细信息”:
![Go to Settings -> Details][9]
在详细信息中,你会在左侧边栏中找到“日期和时间”。在这里,你应该关闭自动时区选项(如果它已经启用),然后单击“时区”:
![In Details -> Date & Time, turn off the Automatic Time Zone][10]
当你单击“时区”时,它将打开一个交互式地图,你可以单击你选择的地理位置,然后关闭窗口。
![Select a timezone][11]
选择新的时区后,除了关闭这个地图之外,你不必再做任何事情。不需要注销或[关闭 Ubuntu][12]。
我希望这篇快速教程能帮助你在 Ubuntu 和其它 Linux 发行版中更改时区。如果你有问题或建议,请告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/change-timezone-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-ubuntu/
[2]: #change-timezone-gui
[3]: #change-timezone-command-line
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/Ubuntu_Change-_Time_Zone.png?ssl=1
[5]: https://ubuntu.com/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/timezones_in_ubuntu.jpg?ssl=1
[7]: https://linuxhandbook.com/date-command/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/applications_menu_settings.jpg?ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/settings_detail_ubuntu.jpg?ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/change_timezone_in_ubuntu.jpg?ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/change_timezone_in_ubuntu_2.jpg?ssl=1
[12]: https://itsfoss.com/schedule-shutdown-ubuntu/

View File

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11856-1.html)
[#]: subject: (One open source chat tool to rule them all)
[#]: via: (https://opensource.com/article/20/1/open-source-chat-tool)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
一个通过 IRC 管理所有聊天的开源聊天工具
======
> BitlBee 将多个聊天应用集合到一个界面中。在我们的 20 个使用开源提升生产力的系列的第九篇文章中了解如何设置和使用 BitlBee。
![](https://img.linux.net.cn/data/attachment/album/202002/05/123636dw8uw34mbkqzmw84.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 将所有聊天都放到 BitlBee 中
即时消息和聊天已经成为网络世界的主要内容。如果你像我一样,你可能打开五六个不同的应用与你的朋友、同事和其他人交谈。关注所有聊天真的很痛苦。谢天谢地,你可以使用一个应用(好吧,是两个)将这些聊天整合到一个地方。
![BitlBee on XChat][2]
[BitlBee][3] 是作为服务运行的应用,它可以将标准的 IRC 客户端与大量的消息服务进行桥接。而且,由于它本质上是 IRC 服务器,因此你可以选择很多客户端。
BitlBee 几乎包含在所有 Linux 发行版中。在 Ubuntu(我选择的 Linux 桌面)上,安装命令类似这样:
```
sudo apt install bitlbee-libpurple
```
在其他发行版上,包名可能略有不同,但搜索 “bitlbee” 应该就能看到。
你会注意到我用的 libpurple 版的 BitlBee。这个版本能让我使用 [libpurple][4] 即时消息库中提供的所有协议,该库最初是为 [Pidgin][5] 开发的。
安装完成后,服务应会自动启动。现在,使用一个 IRC 客户端(图片中为 [XChat][6]),我可以连接到端口 6667标准 IRC 端口)上的服务。
![Initial BitlBee connection][7]
你将自动连接到控制频道 &bitlbee。此频道对于你是独一无二的在多用户系统上每个人都有一个自己的。在这里你可以配置该服务。
在控制频道中输入 `help`,你可以随时获得完整的文档。浏览它,然后使用 `register` 命令在服务器上注册帐户。
```
register <mypassword>
```
现在你在服务器上所做的任何配置更改IM 帐户、设置等)都将在输入 `save` 时保存。每当你连接时,使用 `identify <mypassword>` 连接到你的帐户并加载这些设置。
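举例来说,之后每次连接时,在控制频道中的一次典型会话大致如下(`<mypassword>` 为占位符):
```
identify <mypassword>
save
```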
![purple settings][8]
命令 `help purple` 将显示 libpurple 提供的所有可用协议。例如,我安装了 [telegram-purple][9] 包,它增加了连接到 Telegram 的能力。我可以使用 `account add` 命令将我的电话号码作为帐户添加。
```
account add telegram +15555555
```
BitlBee 将显示它已添加帐户。你可以使用 `account list` 列出你的帐户。因为我只有一个帐户,我可以通过 `account 0 on` 登录,它会进行 Telegram 登录,列出我所有的朋友和聊天,接下来就能正常聊天了。
但是,对于 Slack 这个最常见的聊天系统之一呢?你可以安装 [slack-libpurple][10] 插件,并且对 Slack 执行同样的操作。如果你不愿意编译和安装这些,这可能不适合你。
按照插件页面上的说明操作,安装后重新启动 BitlBee 服务。现在,当你运行 `help purple` 时,应该会列出 Slack。像其他协议一样添加一个 Slack 帐户。
```
account add slack ksonney@myslack.slack.com
account 1 set password my_legacy_API_token
account 1 on
```
就这样,你已经连接到了 Slack,可以通过 `chat add` 命令添加你感兴趣的 Slack 频道。比如:
```
chat add 1 happyparty
```
将 Slack 频道 happyparty 添加为本地频道 #happyparty。现在可以使用标准 IRC `/join` 命令访问该频道。这很酷。
BitlBee 和 IRC 客户端把我的(大部分)聊天和即时消息集中在一个地方,减少了我的分心,因为我不再需要寻找并切换到某个刚刚有人找我的应用上。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-chat-tool
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://opensource.com/sites/default/files/uploads/productivity_9-1.png (BitlBee on XChat)
[3]: https://www.bitlbee.org/
[4]: https://developer.pidgin.im/wiki/WhatIsLibpurple
[5]: http://pidgin.im/
[6]: http://xchat.org/
[7]: https://opensource.com/sites/default/files/uploads/productivity_9-2.png (Initial BitlBee connection)
[8]: https://opensource.com/sites/default/files/uploads/productivity_9-3.png (purple settings)
[9]: https://github.com/majn/telegram-purple
[10]: https://github.com/dylex/slack-libpurple
[11]: mailto:ksonney@myslack.slack.com

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11858-1.html)
[#]: subject: (Use this Twitter client for Linux to tweet from the terminal)
[#]: via: (https://opensource.com/article/20/1/tweet-terminal-rainbow-stream)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用这个 Twitter 客户端在 Linux 终端中发推特
======
> 在我们的 20 个使用开源提升生产力的系列的第十篇文章中,使用 Rainbow Stream 跟上你的 Twitter 流而无需离开终端。
![](https://img.linux.net.cn/data/attachment/album/202002/06/113720bwi55j7xcccwwwi0.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 通过 Rainbow Stream 跟上 Twitter
我喜欢社交网络和微博。它快速、简单,我还可以与世界分享我的想法。当然,缺点是几乎所有非 Windows 的桌面客户端都只是网站的封装。[Twitter][2] 有很多客户端,但我真正想要的是轻量、易于使用,而且最重要的是好看的客户端。
![Rainbow Stream for Twitter][3]
[Rainbow Stream][4] 是好看的 Twitter 客户端之一。它简单易用,并且可以通过 `pip3 install rainbowstream` 快速安装。第一次运行时,它将打开浏览器窗口,并让你通过 Twitter 授权。完成后,你将回到命令行,你的 Twitter 时间线将开始滚动。
![Rainbow Stream first run][5]
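供参考,安装并启动大致是这样(第一次运行会打开浏览器完成授权):
```
# 安装并启动 Rainbow Stream
pip3 install rainbowstream
rainbowstream
```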
要了解的最重要的命令是:`p` 暂停推流、`r` 继续推流、`h` 获取帮助,以及 `t` 发布新的推文。例如,`h tweets` 将列出发送和回复推文的所有选项。另一个有用的帮助页面是 `h messages`,它提供了处理私信的命令,这是我和妻子经常使用的功能。还有很多其他命令,我也经常回头查阅帮助。
随着时间线的滚动,你可以看到它有完整的 UTF-8 支持,并以正确的字体显示推文被转推以及喜欢的次数,图标和 emoji 也能正确显示。
![Kill this love][6]
关于 Rainbow Stream 的*最好*功能之一就是你不必放弃照片和图像。默认情况下,此功能是关闭的,但是你可以使用 `config` 命令尝试它。
```
config IMAGE_ON_TERM = true
```
此命令会将所有图像渲染为 ASCII 艺术。如果你的信息流里有大量照片,它可能会显得有点过量,但是我喜欢。它有一种非常复古的 1990 年代 BBS 的感觉,而我也确实喜欢 1990 年代的 BBS 场景。
你还可以使用 Rainbow Stream 管理列表、屏蔽某人、拉黑某人、关注、取消关注以及 Twitter API 的所有其他功能。它还支持主题,因此你可以用喜欢的颜色方案自定义流。
当我正在工作并且不想在浏览器上打开另一个选项卡时Rainbow Stream 让我可以留在终端中。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/tweet-terminal-rainbow-stream
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: https://twitter.com/home
[3]: https://opensource.com/sites/default/files/uploads/productivity_10-1.png (Rainbow Stream for Twitter)
[4]: https://rainbowstream.readthedocs.io/en/latest/
[5]: https://opensource.com/sites/default/files/uploads/productivity_10-2.png (Rainbow Stream first run)
[6]: https://opensource.com/sites/default/files/uploads/day10-image3_1.png (Kill this love)

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11863-1.html)
[#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
COPR 仓库中 4 个很酷的新项目2020.01
======
![][1]
COPR 是个人软件仓库的[集合][2],其中的软件不在 Fedora 官方仓库中。这是因为某些软件不符合轻松打包的标准,或者尽管它是自由而开源的,却可能不符合其他的 Fedora 标准。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也没有经过项目自身的签名。但是,它是一种尝试新的或实验性的软件的巧妙方式。
本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR请参阅 [COPR 用户文档][3]。
### Contrast
[Contrast][4] 是一款小应用,用于检查两种颜色之间的对比度,并确定其是否满足 [WCAG][5] 规定的要求。颜色可以使用十六进制 RGB 代码或颜色选择器来选择。除了显示对比度之外,Contrast 还会在以选定颜色为背景的一段短文本上展示对比效果。
![][6]
#### 安装说明
[仓库][7]当前为 Fedora 31 和 Rawhide 提供了 Contrast。要安装 Contrast请使用以下命令
```
sudo dnf copr enable atim/contrast
sudo dnf install contrast
```
### Pamixer
[Pamixer][8] 是一个使用 PulseAudio 调整和监控声音设备音量的命令行工具。你可以显示设备的当前音量并直接增加/减小它,或静音/取消静音。Pamixer 可以列出所有源和接收器。
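下面是几个假设的用法示例(选项名来自 pamixer 的帮助信息,实际可用选项请以 `pamixer --help` 的输出为准):
```
# 显示默认输出设备的当前音量
pamixer --get-volume
# 将音量增加 5%
pamixer --increase 5
# 静音/取消静音
pamixer --toggle-mute
# 列出所有接收器(输出设备)
pamixer --list-sinks
```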
#### 安装说明
[仓库][9]当前为 Fedora 31 和 Rawhide 提供了 Pamixer。要安装 Pamixer,请使用以下命令:
```
sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer
```
### PhotoFlare
[PhotoFlare][10] 是一款图像编辑器。它有简单且布局合理的用户界面,其中的大多数功能都可在工具栏中使用。尽管它不支持使用图层,但 PhotoFlare 提供了诸如各种颜色调整、图像变换、滤镜、画笔和自动裁剪等功能。此外PhotoFlare 可以批量编辑图片,来对所有图片应用相同的滤镜和转换,并将结果保存在指定目录中。
![][11]
#### 安装说明
[仓库][12]当前为 Fedora 31 提供了 PhotoFlare。要安装 PhotoFlare,请使用以下命令:
```
sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare
```
### Tdiff
[Tdiff][13] 是用于比较两个文件树的命令行工具。除了显示某个文件或目录只存在于其中一棵树中之外,`tdiff` 还会显示文件大小、类型和内容、属主用户和组 ID、权限、修改时间等方面的差异。
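一个最小的假设用法如下(目录路径仅为示例):
```
# 递归比较两个目录树,报告内容与元数据差异
tdiff ~/backup/2020-01 ~/backup/2020-02
```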
#### 安装说明
[仓库][14]当前为 Fedora 29-31、Rawhide、EPEL 6-8 和其他发行版提供了 tdiff。要安装 tdiff,请使用以下命令:
```
sudo dnf copr enable fif/tdiff
sudo dnf install tdiff
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://gitlab.gnome.org/World/design/contrast
[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
[8]: https://github.com/cdemoulins/pamixer
[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
[10]: https://photoflare.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
[13]: https://github.com/F-i-f/tdiff
[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/

View File

@ -0,0 +1,189 @@
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11849-1.html)
[#]: subject: (Showing memory usage in Linux by process and user)
[#]: via: (https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
查看 Linux 系统中进程和用户的内存使用情况
======
> 有一些命令可以用来检查 Linux 系统中的内存使用情况,下面是一些更好的命令。
![Fancycrave][1]
有许多工具可以查看 Linux 系统中的内存使用情况。一些命令被广泛使用,比如 `free`、`ps`。而另一些命令允许通过多种方式展示系统的性能统计信息,比如 `top`。在这篇文章中,我们将介绍一些命令以帮助你确定当前占用着最多内存资源的用户或者进程。
下面是一些按照进程查看内存使用情况的命令:
### 按照进程查看内存使用情况
#### 使用 top
`top` 是最好的查看内存使用情况的命令之一。要查看哪个进程使用着最多的内存,一个简单的办法就是启动 `top`,然后按下 `shift+m`,这样便可以查看按照内存占用百分比从高到低排列的进程。当你按下 `shift+m` 后,你的 `top` 输出应该会类似于下面这样:
```
$ top
top - 09:39:34 up 5 days, 3 min, 3 users, load average: 4.77, 4.43, 3.72
Tasks: 251 total, 3 running, 247 sleeping, 1 stopped, 0 zombie
%Cpu(s): 50.6 us, 35.9 sy, 0.0 ni, 13.4 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 5944.4 total, 128.9 free, 2509.3 used, 3306.2 buff/cache
MiB Swap: 2048.0 total, 2045.7 free, 2.2 used. 3053.5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
400 nemo 20 0 3309580 550188 168372 S 0.3 9.0 1:33.27 Web Content
32469 nemo 20 0 3492840 447372 163296 S 7.3 7.3 3:55.60 firefox
32542 nemo 20 0 2845732 433388 140984 S 6.0 7.1 4:11.16 Web Content
342 nemo 20 0 2848520 352288 118972 S 10.3 5.8 4:04.89 Web Content
2389 nemo 20 0 1774412 236700 90044 S 39.7 3.9 9:32.64 vlc
29527 nemo 20 0 2735792 225980 84744 S 9.6 3.7 3:02.35 gnome-shell
30497 nemo 30 10 1088476 159636 88884 S 0.0 2.6 0:11.99 update-manager
30058 nemo 20 0 1089464 140952 33128 S 0.0 2.3 0:04.58 gnome-software
32533 nemo 20 0 2389088 104712 79544 S 0.0 1.7 0:01.43 WebExtensions
2256 nemo 20 0 1217884 103424 31304 T 0.0 1.7 0:00.28 vlc
1713 nemo 20 0 2374396 79588 61452 S 0.0 1.3 0:00.49 Web Content
29306 nemo 20 0 389668 74376 54340 S 2.3 1.2 0:57.25 Xorg
32739 nemo 20 0 289528 58900 34480 S 1.0 1.0 1:04.08 RDD Process
29732 nemo 20 0 789196 57724 42428 S 0.0 0.9 0:00.38 evolution-alarm
2373 root 20 0 150408 57000 9924 S 0.3 0.9 10:15.35 nessusd
```
注意 `%MEM` 排序。列表的大小取决于你的窗口大小,但是占据着最多的内存的进程将会显示在列表的顶端。
#### 使用 ps
`ps` 命令的输出中有一列用来展示每个进程的内存使用情况。要查看哪个进程使用着最多的内存,你可以将 `ps` 命令的结果传递给 `sort` 命令。下面是一个有用的示例:
```
$ ps aux | sort -rnk 4 | head -5
nemo 400 3.4 9.2 3309580 563336 ? Sl 08:59 1:36 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 32469 8.2 7.7 3492840 469516 ? Sl 08:54 4:15 /usr/lib/firefox/firefox -new-window
nemo 32542 8.9 7.6 2875428 462720 ? Sl 08:55 4:36 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
```
在上面的例子中(输出已截断),`sort` 命令使用了 `-r`(逆序)、`-n`(按数值排序)、`-k`(指定关键字段)选项,使 `sort` 对 `ps` 命令的结果按照第四列(内存使用百分比)的数值逆序排列后输出。如果我们先显示 `ps` 输出的标题行,会更便于查看。
```
$ ps aux | head -1; ps aux | sort -rnk 4 | head -5
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
nemo 400 3.4 9.2 3309580 563336 ? Sl 08:59 1:36 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 32469 8.2 7.7 3492840 469516 ? Sl 08:54 4:15 /usr/lib/firefox/firefox -new-window
nemo 32542 8.9 7.6 2875428 462720 ? Sl 08:55 4:36 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
```
如果你喜欢这个命令,可以用下面的命令为它定义一个别名;如果想长期使用,别忘了把它添加到你的 `~/.bashrc` 文件中。
```
$ alias mem-by-proc="ps aux | head -1; ps aux | sort -rnk 4"
```
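定义别名之后,就可以像这样使用它(输出因系统而异,此处从略):
```
# 查看内存占用最高的 5 个进程(第一行是标题行)
mem-by-proc | head -6
```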
下面是一些根据用户查看内存使用情况的命令:
### 按用户查看内存使用情况
#### 使用 top
按照用户检查内存使用情况会更复杂一些,因为你需要找到一种方法把用户所拥有的所有进程统计为单一的内存使用量。
如果你只想查看单个用户的进程使用情况,`top` 命令可以采用与上文相同的方法使用。只需添加 `-U` 选项并在其后指定你要查看的用户名,然后按下 `shift+m`,便可以按照内存使用量由多到少进行查看。
```
$ top -U nemo
top - 10:16:33 up 5 days, 40 min, 3 users, load average: 1.91, 1.82, 2.15
Tasks: 253 total, 2 running, 250 sleeping, 1 stopped, 0 zombie
%Cpu(s): 28.5 us, 36.8 sy, 0.0 ni, 34.4 id, 0.3 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 5944.4 total, 224.1 free, 2752.9 used, 2967.4 buff/cache
MiB Swap: 2048.0 total, 2042.7 free, 5.2 used. 2812.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
400 nemo 20 0 3315724 623748 165440 S 1.0 10.2 1:48.78 Web Content
32469 nemo 20 0 3629380 607492 161688 S 2.3 10.0 6:06.89 firefox
32542 nemo 20 0 2886700 404980 136648 S 5.6 6.7 6:50.01 Web Content
342 nemo 20 0 2922248 375784 116096 S 19.5 6.2 8:16.07 Web Content
2389 nemo 20 0 1762960 234644 87452 S 0.0 3.9 13:57.53 vlc
29527 nemo 20 0 2736924 227260 86092 S 0.0 3.7 4:09.11 gnome-shell
30497 nemo 30 10 1088476 156372 85620 S 0.0 2.6 0:11.99 update-manager
30058 nemo 20 0 1089464 138160 30336 S 0.0 2.3 0:04.62 gnome-software
32533 nemo 20 0 2389088 102532 76808 S 0.0 1.7 0:01.79 WebExtensions
```
#### 使用 ps
你依旧可以使用 `ps` 命令通过内存使用情况来排列某个用户的进程。在这个例子中,我们将使用 `grep` 命令来筛选得到某个用户的所有进程。
```
$ ps aux | head -1; ps aux | grep ^nemo| sort -rnk 4 | more
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
nemo 32469 7.1 11.5 3724364 701388 ? Sl 08:54 7:21 /usr/lib/firefox/firefox -new-window
nemo 400 2.0 8.9 3308556 543232 ? Sl 08:59 2:01 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni/usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 32542 7.9 7.1 2903084 436196 ? Sl 08:55 8:07 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 342 10.8 7.0 2941056 426484 ? Rl 08:59 10:45 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
nemo 2389 16.9 3.8 1762960 234644 pts/1 Sl+ 09:15 13:57 vlc videos/edge_computing.mp4
nemo 29527 3.9 3.7 2736924 227448 ? Ssl 08:50 4:11 /usr/bin/gnome-shell
```
### 使用 ps 和其他命令的搭配
如果你想比较某个用户与其他用户的内存使用情况,就会比较复杂。在这种情况下,按用户汇总内存使用量并排序是一个不错的方法,但它需要更多的工作,并会用到许多命令。在下面的脚本中,我们使用 `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u` 命令得到用户列表,其中包含了系统用户(比如 `syslog`)。然后对列表中的每个用户使用 `awk` 命令,累加其所有进程的内存占用。最后一步,我们按从大到小的顺序展示每个用户总的内存使用量。
```
#!/bin/bash
stats=""
echo "% user"
echo "============"
# collect the data
for user in `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u`
do
stats="$stats\n`ps aux | egrep ^$user | awk 'BEGIN{total=0}; \
{total += $4};END{print total,$1}'`"
done
# sort data numerically (largest first)
echo -e $stats | grep -v ^$ | sort -rn | head
```
这个脚本的输出可能如下:
```
$ ./show_user_mem_usage
% user
============
69.6 nemo
5.8 root
0.5 www-data
0.3 shs
0.2 whoopsie
0.2 systemd+
0.2 colord
0.2 clamav
0 syslog
0 rtkit
```
在 Linux 中有许多方法可以报告内存使用情况。借助一些精心设计的工具和命令,你可以清楚地看到是哪个进程或者哪个用户占用着最多的内存。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/chips_processors_memory_cards_by_fancycrave_cc0_via_unsplash_1200x800-100760955-large.jpg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11864-1.html)
[#]: subject: (Intro to the Linux command line)
[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux 命令行简介
======
> 下面是一些针对刚开始使用 Linux 命令行的人的热身练习。警告:它可能会上瘾。
![](https://images.idgesg.net/images/article/2020/01/cmd_linux-control_linux-logo_-100828420-large.jpg)
如果你是 Linux 新手,或者从来没有花时间研究过命令行,你可能不会理解为什么这么多 Linux 爱好者坐在舒适的桌面前兴奋地输入命令来使用大量工具和应用。在这篇文章中,我们将快速浏览一下命令行的奇妙之处,看看能否让你着迷。
首先,要使用命令行,你必须打开一个命令工具(也称为“命令提示符”)。具体怎么做取决于你运行的 Linux 版本。例如,在 RedHat 上,你可能会在屏幕顶部看到一个 “Activities” 选项卡,点击后会打开一个选项列表和一个用于输入命令的小窗口(类似于 Windows 上 “cmd” 打开的窗口)。在 Ubuntu 和其他一些版本中,你可能会在屏幕左侧看到一个小的终端图标。在许多系统上,你可以同时按 `Ctrl+Alt+t` 键打开命令窗口。
如果你使用 PuTTY 之类的工具登录 Linux 系统,你会发现自己已经处于命令行界面。
一旦你得到你的命令行窗口,你会发现自己坐在一个提示符面前。它可能只是一个 `$` 或者像 `user@system:~$` 这样的东西,但它意味着系统已经准备好为你运行命令了。
一旦你走到这一步,就可以开始输入命令了。下面是一些可以首先尝试的命令;这里还有一份收录了一些特别有用的命令的 [PDF][4],以及适合打印并做成参考卡的双面命令手册。表格之后附有一个简短的示例会话。
| 命令 | 用途 |
|---|---|
| `pwd` | 显示我在文件系统中的位置(在最初进入系统时运行将显示主目录) |
| `ls` | 列出我的文件 |
| `ls -a` | 列出我更多的文件(包括隐藏文件) |
| `ls -al` | 列出我的文件,并且包含很多详细信息(包括日期、文件大小和权限) |
| `who` | 告诉我谁登录了(如果只有你,不要失望) |
| `date` | 显示今天的日期,提醒我今天是星期几(也显示时间) |
| `ps` | 列出我正在运行的进程(可能只是你的 shell 和 `ps` 命令) |
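下面是一个假设的示例会话(用户名与输出仅作演示):
```
$ pwd
/home/user
$ ls
Desktop  Documents  Downloads
$ date
Sat Feb  8 10:15:00 UTC 2020
```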
一旦你习惯了从命令行查看 Linux 主目录,就可以开始探索了。也许你会准备好使用以下命令在文件系统中闲逛(表格后同样附有示例会话):
| 命令 | 用途 |
|---|---|
| `cd /tmp` | 移动到其他文件夹(本例中,打开 `/tmp` 文件夹) |
| `ls` | 列出当前位置的文件 |
| `cd` | 回到主目录(不带参数的 `cd` 总是能将你带回到主目录) |
| `cat .bashrc` | 显示文件的内容(本例中显示 `.bashrc` 文件的内容) |
| `history` | 显示最近执行的命令 |
| `echo hello` | 跟自己说 “hello” |
| `cal` | 显示当前月份的日历 |
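一个简短的示例会话(输出仅作演示):
```
$ cd /tmp
$ pwd
/tmp
$ cd
$ pwd
/home/user
$ echo hello
hello
```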
要了解为什么高级 Linux 用户如此喜欢命令行,你需要尝试其他一些功能,例如重定向和管道。“重定向”是指把命令的输出保存到文件中,而不是显示在屏幕上。“管道”是指把一个命令的输出发送给另一个命令,由后者对其做进一步处理。下面是可以尝试的命令(表格后附有示例会话):
| 命令 | 用途 |
|---|---|
| `echo "echo hello" > tryme` | 创建一个新的文件并将 “echo hello” 写入该文件 |
| `chmod 700 tryme` | 使新建的文件可执行 |
| `./tryme` | 运行新文件(它应当运行文件中包含的命令并且显示 “hello”) |
| `ps aux` | 显示所有运行中的程序 |
| `ps aux \| grep $USER` | 显示所有运行中的程序,但将输出限制为包含你的用户名的行 |
| `echo $USER` | 使用环境变量显示你的用户名 |
| `whoami` | 使用命令显示你的用户名 |
| `who \| wc -l` | 统计当前登录的用户数目 |
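把它们串起来的一个假设示例(输出内容仅作演示):
```
$ echo "echo hello" > tryme
$ chmod 700 tryme
$ ./tryme
hello
$ who | wc -l
3
```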
### 总结
一旦你习惯了基本命令,就可以探索其他命令并尝试编写脚本。你可能会发现 Linux 比你想象的要强大并且好用得多。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[qianmingtian][c]
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[c]: https://github.com/qianmingtian
[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11853-1.html)
[#]: subject: (4 Key Changes to Look Out for in Linux Kernel 5.6)
[#]: via: (https://itsfoss.com/linux-kernel-5-6/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
四大亮点带你看 Linux 内核 5.6
======
当我们还在体验 Linux 5.5 稳定发行版带来更好的硬件支持时Linux 5.6 已经来了。
说实话Linux 5.6 比 5.5 更令人兴奋。即使即将发布的 Ubuntu 20.04 LTS 发行版将自带 Linux 5.5,你也需要切实了解一下 Linux 5.6 内核为我们提供了什么。
我将在本文中重点介绍 Linux 5.6 发布版中值得期待的关键更改和功能:
### Linux 5.6 功能亮点
![][1]
当 Linux 5.6 有新消息时,我会努力更新这份功能列表。但现在让我们先看一下当前已知的内容:
#### 1、支持 WireGuard
WireGuard 将被添加到 Linux 5.6,出于各种原因的考虑它可能将取代 [OpenVPN][2]。
你可以在官网上进一步了解 [WireGuard][3] 的优点。当然,如果你使用过它,那你可能已经知道它比 OpenVPN 更好的原因。
同样,[Ubuntu 20.04 LTS 将支持 WireGuard][4]。
#### 2、支持 USB4
Linux 5.6 也将支持 **USB4**
如果你不了解 USB 4.0 (USB4),你可以阅读这份[文档][5]。
根据文档“USB4 将使 USB 的最大带宽增大一倍并支持<ruby>多并发数据和显示协议<rt>multiple simultaneous data and display protocols</rt></ruby>。”
另外,虽然我们都知道 USB4 基于 Thunderbolt 接口协议,但它将向后兼容 USB 2.0、USB 3.0 以及 Thunderbolt 3这将是一个好消息。
#### 3、使用 LZO/LZ4 压缩 F2FS 数据
Linux 5.6 也将支持使用 LZO/LZ4 算法压缩 F2FS 数据。
换句话说,这只是 Linux 文件系统的一种新压缩技术,你可以针对特定的文件扩展名选择启用。
#### 4、解决 32 位系统的 2038 年问题
Unix 和 Linux 将时间值以 32 位有符号整数格式存储,其最大值为 2147483647。时间值如果超过这个数值则将由于整数溢出而存储为负数。
这意味着对于 32 位系统,时间值不能超过 1970 年 1 月 1 日后的 2147483647 秒。也就是说,在 UTC 时间 2038 年 1 月 19 日 03:14:07 时,由于整数溢出,时间将显示为 1901 年 12 月 13 日而不是 2038 年 1 月 19 日。
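在装有 GNU `date` 的系统上,你可以直观地验证这个临界时刻(下面的输出仅为示例,格式因区域设置而异):
```
# 2^31 - 1 = 2147483647,是 32 位有符号时间值能表示的最大秒数
$ date -u -d @2147483647
Tue Jan 19 03:14:07 UTC 2038
```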
Linux kernel 5.6 解决了这个问题,因此 32 位系统也可以运行到 2038 年以后。
#### 5、改进硬件支持
很显然,在下一个发布版中,硬件支持也将继续提升,而对新式无线外设的支持同样被列为优先事项。
新内核中将增加对 MX Master 3 鼠标以及罗技其他无线产品的支持。
除了罗技的产品外,你还可以期待获得许多不同硬件的支持(包括对 AMD GPU、NVIDIA GPU 和 Intel Tiger Lake 芯片组的支持)。
#### 6、其他更新
此外Linux 5.6 中除了上述主要的新增功能或支持外,下一个内核版本也将进行其他一些改进:
* 改进 AMD Zen 的温度/功率报告
* 修复华硕飞行堡垒系列笔记本中 AMD CPU 过热
* 开源支持 NVIDIA RTX 2000 图灵系列显卡
* 内建 FSCRYPT 加密
[Phoronix][6] 跟踪了 Linux 5.6 带来的许多技术性更改。因此,如果你好奇 Linux 5.6 所涉及的全部更改,则可以亲自了解一下。
现在你已经了解了 Linux 5.6 发布版带来的新功能,对此有什么看法呢?在下方评论中留下你的看法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-kernel-5-6/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/linux-kernel-5.6.jpg?ssl=1
[2]: https://openvpn.net/
[3]: https://www.wireguard.com/
[4]: https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-20.04-Adds-WireGuard
[5]: https://www.usb.org/sites/default/files/2019-09/USB-IF_USB4%20spec%20announcement_FINAL.pdf
[6]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Spectacular

1
sources/README.md Normal file
View File

@ -0,0 +1 @@
这里放待翻译的文件。

View File

@ -1,108 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora CoreOS out of preview)
[#]: via: (https://fedoramagazine.org/fedora-coreos-out-of-preview/)
[#]: author: (bgilbert https://fedoramagazine.org/author/bgilbert/)
Fedora CoreOS out of preview
======
![The Fedora CoreOS logo on a gray background.][1]
The Fedora CoreOS team is pleased to announce that Fedora CoreOS is now [available for general use][2].
Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. Its the successor to both [Fedora Atomic Host][3] and [CoreOS Container Linux][4] and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.  For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][5].
Some highlights of the current Fedora CoreOS release:
* [Automatic updates][6], with staged deployments and phased rollouts
* Built from Fedora 31, featuring:
* Linux 5.4
* systemd 243
* Ignition 2.1
* OCI and Docker Container support via Podman 1.7 and Moby 18.09
* cgroups v1 enabled by default for broader compatibility; cgroups v2 available via configuration
Fedora CoreOS is available on a variety of platforms:
* Bare metal, QEMU, OpenStack, and VMware
* Images available in all public AWS regions
* Downloadable cloud images for Alibaba, AWS, Azure, and GCP
* Can run live from RAM via ISO and PXE (netboot) images
Fedora CoreOS is under active development.  Planned future enhancements include:
* Addition of the _next_ release stream for extended testing of upcoming Fedora releases.
* Support for additional cloud and virtualization platforms, and processor architectures other than _x86_64_.
* Closer integration with Kubernetes distributions, including [OKD][7].
* [Aggregate statistics collection][8].
* Additional [documentation][9].
### Where do I get it?
To try out the new release, head over to the [download page][10] to get OS images or cloud image IDs.  Then use the [quick start guide][11] to get a machine running quickly.
### How do I get involved?
Its easy!  You can report bugs and missing features to the [issue tracker][12]. You can also discuss Fedora CoreOS in [Fedora Discourse][13], the [development mailing list][14], in _#fedora-coreos_ on Freenode, or at our [weekly IRC meetings][15].
### Are there stability guarantees?
In general, the Fedora Project does not make any guarantees around stability.  While Fedora CoreOS strives for a high level of stability, this can be challenging to achieve in the rapidly evolving Linux and container ecosystems.  Weve found that the incremental, exploratory, forward-looking development required for Fedora CoreOS — which is also a cornerstone of the Fedora Project as a whole — is difficult to reconcile with the iron-clad stability guarantee that ideally exists when automatically updating systems.
Well continue to do our best not to break existing systems over time, and to give users the tools to manage the impact of any regressions.  Nevertheless, automatic updates may produce regressions or breaking changes for some use cases. You should make your own decisions about where and how to run Fedora CoreOS based on your risk tolerance, operational needs, and experience with the OS.  We will continue to announce any major planned or unplanned breakage to the [coreos-status mailing list][16], along with recommended mitigations.
### How do I migrate from CoreOS Container Linux?
Container Linux machines cannot be migrated in place to Fedora CoreOS.  We recommend [writing a new Fedora CoreOS Config][11] to provision Fedora CoreOS machines.  Fedora CoreOS Configs are similar to Container Linux Configs, and must be passed through the Fedora CoreOS Config Transpiler to produce an Ignition config for provisioning a Fedora CoreOS machine.
Whether youre currently provisioning your Container Linux machines using a Container Linux Config, handwritten Ignition config, or cloud-config, youll need to adjust your configs for differences between Container Linux and Fedora CoreOS.  For example, on Fedora CoreOS network configuration is performed with [NetworkManager key files][17] instead of _systemd-networkd_, and time synchronization is performed by _chrony_ rather than _systemd-timesyncd_.  Initial migration documentation will be [available soon][9] and a skeleton list of differences between the two OSes is available in [this issue][18].
CoreOS Container Linux will be maintained for a few more months, and then will be declared end-of-life.  Well announce the exact end-of-life date later this month.
### How do I migrate from Fedora Atomic Host?
Fedora Atomic Host has already reached end-of-life, and you should migrate to Fedora CoreOS as soon as possible.  We do not recommend in-place migration of Atomic Host machines to Fedora CoreOS. Instead, we recommend [writing a Fedora CoreOS Config][11] and using it to provision new Fedora CoreOS machines.  As with CoreOS Container Linux, youll need to adjust your existing cloud-configs for differences between Fedora Atomic Host and Fedora CoreOS.
Welcome to Fedora CoreOS.  Deploy it, launch your apps, and let us know what you think!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-coreos-out-of-preview/
作者:[bgilbert][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bgilbert/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/introducing-fedora-coreos-816x345.png
[2]: https://getfedora.org/coreos/
[3]: https://www.projectatomic.io/
[4]: https://coreos.com/os/docs/latest/
[5]: https://fedoramagazine.org/introducing-fedora-coreos/
[6]: https://docs.fedoraproject.org/en-US/fedora-coreos/auto-updates/
[7]: https://www.okd.io/
[8]: https://github.com/coreos/fedora-coreos-pinger/
[9]: https://docs.fedoraproject.org/en-US/fedora-coreos/
[10]: https://getfedora.org/coreos/download/
[11]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/
[12]: https://github.com/coreos/fedora-coreos-tracker/issues
[13]: https://discussion.fedoraproject.org/c/server/coreos
[14]: https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/
[15]: https://github.com/coreos/fedora-coreos-tracker#meetings
[16]: https://lists.fedoraproject.org/archives/list/coreos-status@lists.fedoraproject.org/
[17]: https://developer.gnome.org/NetworkManager/stable/nm-settings-keyfile.html
[18]: https://github.com/coreos/fedora-coreos-tracker/issues/159

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NSA cloud advice, Facebook open source year in review, and more industry trends)
[#]: via: (https://opensource.com/article/20/1/nsa-facebook-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
NSA cloud advice, Facebook open source year in review, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Facebook open source year in review][2]
> Last year was a busy one for our [open source][3] engineers. In 2019 we released 170 new open source projects, bringing our portfolio to a total of 579 [active repositories][3]. While its important for our internal engineers to contribute to these projects (and they certainly do — with more than 82,000 commits this year), we are also incredibly grateful for the massive support from external contributors. Approximately 2,500 external contributors committed more than 32,000 changes. In addition to these contributions, nearly 93,000 new people starred our projects this year, growing the most important component of any open source project — the community! Facebook Open Source would not be here without your contributions, so we want to thank you for your participation in 2019.
**The impact**: Facebook got ~33% more changes than they would have had they decided to develop these as closed projects. Organizations addressing similar challenges got an 82,000-commit boost in exchange. What a clear illustration of the business impact of open source development.
## [Cloud advice from the NSA][4]
> This document divides cloud vulnerabilities into four classes (misconfiguration, poor access control, shared tenancy vulnerabilities, and supply chain vulnerabilities) that encompass the vast majority of known vulnerabilities. Cloud customers have a critical role in mitigating misconfiguration and poor access control, but can also take actions to protect cloud resources from the exploitation of shared tenancy and supply chain vulnerabilities. Descriptions of each vulnerability class along with the most effective mitigations are provided to help organizations lock down their cloud resources. By taking a risk-based approach to cloud adoption, organizations can securely benefit from the clouds extensive capabilities.
**The impact**: The Fear, Uncertainty, and Doubt (FUD) that has been associated with cloud adoption is being debunked more thoroughly all the time. None other than the US Department of Defense has done a lot of the thinking so you don't have to, and there is a good chance that their concerns are at least as dire as yours are.
## [With Kubernetes, China Minsheng Bank transformed its legacy applications][5]
> But all of CMBCs legacy applications—for example, the core banking system, payment systems, and channel systems—were written in C and Java, using traditional architecture. “We wanted to do distributed applications because in the past we used VMs in our own data center, and that was quite expensive and with low resource utilization rate,” says Zhang. “Our biggest challenge is how to make our traditional legacy applications adaptable to the cloud native environment.” So far, around 20 applications are running in production on the Kubernetes platform, and 30 new applications are in active development to adopt the Kubernetes platform.
**The impact**: This illustrates nicely the challenges and opportunities facing businesses in a competitive environment, and suggests a common adoption pattern. Do new stuff the new way, and move the old stuff as it makes sense.
## [The '5 Rs' of the move to cloud native: Re-platform, re-host, re-factor, replace, retire][6]
> The bottom line is that telcos and service providers will go cloud native when it is cheaper for them to migrate to the cloud and pay cloud costs than it is to remain in the data centre. That time is now and by adhering to the "5 Rs" of the move to cloud native, Re-platform, Re-host, Re-factor, Replace and/or Retire, the path is open, clearly marked and the goal eminently achievable.
**The impact**: Cloud-native is basically used as a synonym for open source in this interview; there is no other type of technology that will deliver the same lift.
## [Fedora CoreOS out of preview][7]
> Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. Its the successor to both [Fedora Atomic Host][8] and [CoreOS Container Linux][9] and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.  For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][10].
**The impact**: Collapsing these two branches of the Linux family tree into one another moves the state of the art forward for everyone (once you get through the migration).
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/nsa-facebook-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://opensource.com/article/20/1/hybrid-developer-future-industry-trends
[3]: https://opensource.facebook.com/
[4]: https://media.defense.gov/2020/Jan/22/2002237484/-1/-1/0/CSI-MITIGATING-CLOUD-VULNERABILITIES_20200121.PDF
[5]: https://www.cncf.io/blog/2020/01/23/with-kubernetes-china-minsheng-bank-transformed-its-legacy-applications-and-moved-into-ai-blockchain-and-big-data/
[6]: https://www.telecomtv.com/content/cloud-native/the-5-rs-of-the-move-to-cloud-native-re-platform-re-host-re-factor-replace-retire-37473/
[7]: https://fedoramagazine.org/fedora-coreos-out-of-preview/
[8]: https://www.projectatomic.io/
[9]: https://coreos.com/os/docs/latest/
[10]: https://fedoramagazine.org/introducing-fedora-coreos/

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news)
[#]: via: (https://opensource.com/article/20/2/linux-java-and-other-industry-news)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Need 32-bit Linux to run past 2038? When version 5.6 of the kernel pops, you're in for a treat][2]
> Arnd Bergmann, an engineer working on the thorny Y2038 problem in the Linux kernel, posted to the [mailing list][3] that, yup, Linux 5.6 "should be the first release that can serve as a base for a 32-bit system designed to run beyond year 2038."
**The impact:** Y2K didn't get fixed; it just got bigger and delayed. There is no magic in software or computers; just people trying to solve complicated problems as best they can, and some times introducing more complicated problems for different people to solve at some point in the future.
## [What the dev? Celebrating Java's 25th anniversary][4]
> Java is coming up on a big milestone: Its 25th anniversary! To celebrate, we take a look back over the last 25 years to see how Java has evolved over time. In this episode, Social Media and Online Editor Jenna Sargent talks to Rich Sharples, senior director of product management for middleware at Red Hat, to learn more.
**The impact:** There is something comforting about immersing yourself in a deep well of lived experience. Rich clearly lived through what he is talking about and shares insider knowledge with you (and his dog).
## [Do I need an API Gateway if I use a service mesh?][5]
> This post may not be able to break through the noise around API Gateways and Service Mesh. However, its 2020 and there is still abundant confusion around these topics. I have chosen to write this to help bring real concrete explanation to help clarify differences, overlap, and when to use which. Feel free to [@ me on twitter (@christianposta)][6] if you feel Im adding to the confusion, disagree, or wish to buy me a beer (and these are not mutually exclusive reasons).
**The impact:** Yes, though they use similar terms and concepts they have different concerns and scopes.
## [What Australia's AGL Energy learned about Cloud Native compliance][7]
> This is really at the heart of what open source is, enabling everybody to contribute equally. Within large enterprises, there are controls that are needed, but if we can automate the management of the majority of these controls, we can enable an amazing culture and development experience.
**The impact:** They say "software is eating the world" and "developers are the new kingmakers." The fact that compliance in an energy utility is subject to developer experience improvement basically proves both statements.
## [Monoliths are the future][8]
> And then what they end up doing is creating 50 deployables, but its really a _distributed_ monolith. So its actually the same thing, but instead of function calls and class instantiation, theyre initiating things and throwing it over a network and hoping that it comes back. And since they cant reliably _make it_ come back, they introduce things like [Prometheus][9], [OpenTracing][10], all of this stuff. Im like, **“What are you doing?!”**
**The impact:** Do things for real reasons with a clear-eyed understanding of what those reasons are and how they'll make your business or your organization better.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/linux-java-and-other-industry-news
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.theregister.co.uk/2020/01/30/linux_5_6_2038/
[3]: https://lkml.org/lkml/2020/1/29/355
[4]: https://whatthedev.buzzsprout.com/673192/2543290-celebrating-java-s-25th-anniversary-episode-16
[5]: https://blog.christianposta.com/microservices/do-i-need-an-api-gateway-if-i-have-a-service-mesh/ (Do I Need an API Gateway if I Use a Service Mesh?)
[6]: http://twitter.com/christianposta?lang=en
[7]: https://thenewstack.io/what-australias-agl-energy-learned-about-cloud-native-compliance/
[8]: https://changelog.com/posts/monoliths-are-the-future
[9]: https://prometheus.io/
[10]: https://opentracing.io

1
sources/news/README.md Normal file
View File

@ -0,0 +1 @@
这里放新闻类文章,要求时效性

View File

@ -1,221 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Ultimate Guide to JavaScript Fatigue: Realities of our industry)
[#]: via: (https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html)
[#]: author: (Lucas Fernandes Da Costa https://lucasfcosta.com)
The Ultimate Guide to JavaScript Fatigue: Realities of our industry
======
**Complaining about JS Fatigue is just like complaining about the fact that humanity has created too many tools to solve the problems we have** , from email to airplanes and spaceships.
Last week I gave a talk about this very same subject at the NebraskaJS 2017 Conference and I got so much positive feedback that I thought this talk should also become a blog post in order to reach more people and help them deal with JS Fatigue and understand the realities of our industry. **My goal with this post is to change the way you think about software engineering in general and help you in any areas you might work on**.
One of the things that has inspired me to write this blog post and that totally changed my life is [this great post by Patrick McKenzie, called “Dont Call Yourself a Programmer and other Career Advice”][1]. **I highly recommend you read that**. Most of this blog post is advice based on what Patrick has written in that post applied to the JavaScript ecosystem and with a few more thoughts Ive developed during these last years working in the tech industry.
This first section is gonna be a bit philosophical, but I swear it will be worth reading.
### Realities of Our Industry 101
Just like Patrick has done in [his post][1], lets start with the most basic and essential truth about our industry:
Software solves business problems
This is it. **Software does not exist to please us as programmers** and let us write beautiful code. Neither it exists to create jobs for people in the tech industry. **Actually, it exists to kill as many jobs as possible, including ours** , and this is why basic income will become much more important in the next few years, but thats a whole other subject.
Im sorry to say that, but the reason things are that way is that there are only two things that matter in the software engineering (and any other industries):
**Cost versus Revenue**
**The more you decrease cost and increase revenue, the more valuable you are** , and one of the most common ways of decreasing cost and increasing revenue is replacing human beings by machines, which are more effective and usually cost less in the long run.
You are not paid to write code
**Technology is not a goal.** Nobody cares about which programming language you are using, nobody cares about which frameworks your team has chosen, nobody cares about how elegant your data structures are and nobody cares about how good is your code. **The only thing that somebody cares about is how much does your software cost and how much revenue it generates**.
Writing beautiful code does not matter to your clients. We write beautiful code because it makes us more productive in the long run and this decreases cost and increases revenue.
The whole reason why we try not to write bugs is not that we value correctness, but that **our clients** value correctness. If you have ever seen a bug becoming a feature you know what Im talking about. That bug exists but it should not be fixed. That happens because our goal is not to fix bugs, our goal is to generate revenue. If our bugs make clients happy then they increase revenue and therefore we are accomplishing our goals.
Reusable space rockets, self-driving cars, robots, artificial intelligence: these things do not exist just because someone thought it would be cool to create them. They exist because there are business interests behind them. And Im not saying the people behind them just want money, Im sure they think that stuff is also cool, but the truth is that if they were not economically viable or had any potential to become so, they would not exist.
Probably I should not even call this section “Realities of Our Industry 101”, maybe I should just call it “Realities of Capitalism 101”.
And given that our only goal is to increase revenue and decrease cost, I think we as programmers should be paying more attention to requirements and design and start thinking with our minds and participating more actively in business decisions, which is why it is extremely important to know the problem domain we are working on. How many times before have you found yourself trying to think about what should happen in certain edge cases that have not been thought before by your managers or business people?
In 1975, Boehm conducted research in which he found that about 64% of all errors in the software he was studying were caused by design, while only 36% of all errors were coding errors. Another study, called [“Higher Order Software—A Methodology for Defining Software”][2], also states that **in the NASA Apollo project, about 73% of all errors were design errors**.
The whole reason why Design and Requirements exist is that they define what problems were going to solve and solving problems is what generates revenue.
> Without requirements or design, programming is the art of adding bugs to an empty text file.
>
> * Louis Srygley
>
This same principle also applies to the tools weve got available in the JavaScript ecosystem. Babel, webpack, react, Redux, Mocha, Chai, Typescript, all of them exist to solve a problem and we gotta understand which problem they are trying to solve, we need to think carefully about when most of them are needed, otherwise, we will end up having JS Fatigue because:
JS Fatigue happens when people use tools they don't need to solve problems they don't have.
As Donald Knuth once said: “Premature optimization is the root of all evil”. Remember that software only exists to solve business problems and most software out there is just boring, it does not have any high scalability or high-performance constraints. Focus on solving business problems, focus on decreasing cost and generating revenue because this is all that matters. Optimize when you need, otherwise you will probably be adding unnecessary complexity to your software, which increases cost, and not generating enough revenue to justify that.
This is why I think we should apply [Test Driven Development][3] principles to everything we do in our job. And by saying this Im not just talking about testing. **Im talking about waiting for problems to appear before solving them. This is what TDD is all about**. As Kent Beck himself says: “TDD reduces fear” because it guides your steps and allows you take small steps towards solving your problems. One problem at a time. By doing the same thing when it comes to deciding when to adopt new technologies then we will also reduce fear.
Solving one problem at a time also decreases [Analysis Paralysis][4], which is basically what happens when you open Netflix and spend three hours concerned about making the optimal choice instead of actually watching something. By solving one problem at a time we reduce the scope of our decisions and by reducing the scope of our decisions we have fewer choices to make and by having fewer choices to make we decrease Analysis Paralysis.
Have you ever thought about how easier it was to decide what you were going to watch when there were only a few TV channels available? Or how easier it was to decide which game you were going to play when you had only a few cartridges at home?
### But what about JavaScript?
At the time Im writing this post, NPM has 489,989 packages, and tomorrow approximately 515 new ones will be published.
And the packages we use and complain about have a history behind them we must comprehend in order to understand why we need them. **They are all trying to solve problems.**
Babel, Dart, CoffeeScript and other transpilers come from our necessity of writing code other than JavaScript but making it runnable in our browsers. Babel even lets us write new generation JavaScript and make sure it will work even on older browsers, which has always been a great problem given the inconsistencies and different amount of compliance to the ECMA Specification between browsers. Even though the ECMA spec is becoming more and more solid these days, we still need Babel. And if you want to read more about Babels history I highly recommend that you read [this excellent post by Henry Zhu][5].
Module bundlers such as Webpack and Browserify also have their reason to exist. If you remember well, not so long ago we used to suffer a lot with lots of `script` tags and making them work together. They used to pollute the global namespace and it was reasonably hard to make them work together when one depended on the other. In order to solve this [`Require.js`][6] was created, but it still had its problems, it was not that straightforward and its syntax also made it prone to other problems, as you can see [in this blog post][7]. Then Node.js came with `CommonJS` imports, which were synchronous, simple and clean, but we still needed a way to make that work on our browsers and this is why we needed Webpack and Browserify.
And Webpack itself actually solves more problems than that by allowing us to deal with CSS, images and many other resources as if they were JavaScript dependencies.
Front-end frameworks are a bit more complicated, but the reason why they exist is to reduce the cognitive load when we write code so that we dont need to worry about manipulating the DOM ourselves or even dealing with messy browser APIs (another problem JQuery came to solve), which is not only error prone but also not productive.
This is what we have been doing this whole time in computer science. We use low-level abstractions and build even more abstractions on top of it. The more we worry about describing how our software should work instead of making it work, the more productive we are.
But all those tools have something in common: **they exist because the web platform moves too fast**. Nowadays were using web technology everywhere: in web browsers, in desktop applications, in phone applications or even in watch applications.
This evolution also creates problems we need to solve. PWAs, for example, do not exist only because theyre cool and we programmers have fun writing them. Remember the first section of this post: **PWAs exist because they create business value**.
And usually standards are not fast enough to be created and therefore we need to create our own solutions to these things, which is why it is great to have such a vibrant and creative community with us. Were solving problems all the time and **we are allowing natural selection to do its job**.
The tools that suit us better thrive, get more contributors and develop themselves more quickly and sometimes other tools end up incorporating the good ideas from the ones that thrive and becoming even more popular than them. This is how we evolve.
By having more tools we also have more choices. If you remember the UNIX philosophy well, it states that we should aim at creating programs that do one thing and do it well.
We can clearly see this happening in the JS testing environment, for example, where we have Mocha for running tests and Chai for doing assertions, while in Java JUnit tries to do all these things. This means that if we have a problem with one of them, or if we find another one that suits us better, we can simply replace that small part and still keep the advantages of the others.
The UNIX philosophy also states that we should write programs that work together. And this is exactly what we are doing! Take a look at Babel, Webpack and React, for example. They work very well together, but we still do not need one to use the other. In the testing environment, for example, if we're using Mocha and Chai, we can simply install Karma and run those same tests in multiple environments.
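As a small illustration of that composability, here is what a test might look like with Mocha as the runner and Chai as the assertion library (assuming both are installed from NPM):
```
// Mocha provides the runner (describe/it); Chai provides the assertions
// (expect). Each tool does one thing, UNIX-style.
const { expect } = require('chai');

describe('Array#indexOf', () => {
  it('returns -1 when the value is not present', () => {
    expect([1, 2, 3].indexOf(4)).to.equal(-1);
  });
});
```
Swap Chai for another assertion library and Mocha will not care, which is exactly the point.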
### How to Deal With It
My first advice for anyone suffering from JS Fatigue would definitely be to stay aware that **you don't need to know everything**. Trying to learn it all at once, even when we don't have to, only increases the feeling of fatigue. Go deep in areas that you love and for which you feel an inner motivation to study, and adopt a lazy approach when it comes to the others. I'm not saying that you should be lazy; I'm just saying that you can learn those only when needed. Whenever you face a problem that requires you to use a certain technology to solve it, go learn.
Another important thing to say is that **you should start from the beginning**. Make sure you have learned enough about JavaScript itself before using any JavaScript frameworks. This is the only way you will be able to understand them and bend them to your will; otherwise, whenever you face an error you have never seen before, you won't know which steps to take in order to solve it. Learning core web technologies such as CSS, HTML5 and JavaScript, as well as computer science fundamentals or even how the HTTP protocol works, will help you master any other technology a lot more quickly.
But please, don't get too attached to that. Sometimes you gotta risk yourself and start doing things on your own. As Sacha Greif has written in [this blog post][8], spending too much time learning the fundamentals is just like trying to learn how to swim by studying fluid dynamics. Sometimes you just gotta jump into the pool and try to swim by yourself.
And please, don't get too attached to a single technology. All of the things we have available nowadays have already been invented in the past. Of course, they have different features and a brand new name, but, in their essence, they are all the same.
If you look at NPM, it is nothing new: we already had Maven Central and Ruby Gems quite a long time ago.
In order to transpile your code, Babel applies the very same principles and theory as some of the oldest and most well-known compilers, such as GCC.
Even JSX is not a new idea: E4X (ECMAScript for XML) already existed more than 10 years ago.
Now you might ask: “what about Gulp, Grunt and NPM Scripts?” Well, I'm sorry, but we could already solve all those problems with GNU Make back in 1976. And actually, there is a reasonable number of JavaScript projects that still use it, such as Chai.js, for example. But we do not do that because we are hipsters that like vintage stuff. We use `make` because it solves our problems, and this is what you should aim at doing, as we've discussed before.
If you really want to understand a certain technology and be able to solve any problems you might face, please, dig deep. One of the most decisive factors to success is curiosity, so **dig deep into the technologies you like**. Try to understand them from bottom-up and whenever you think something is just “magic”, debunk that myth by exploring the codebase by yourself.
In my opinion, there is no better quote than this one by Richard Feynman when it comes to really learning something:
> What I cannot create, I do not understand
And just below this phrase, [on the same blackboard, Richard also wrote][9]:
> Know how to solve every problem that has been solved
Isn't this just amazing?
When Richard said that, he was talking about being able to take any theoretical result and re-derive it, but I think the exact same principle can be applied to software engineering. The tools that solve our problems have already been invented, they already exist, so we should be able to get to them all by ourselves.
This is the very reason I love [some of the videos available in Egghead.io][10] in which Dan Abramov explains how to implement certain features that exist in Redux from scratch or [blog posts that teach you how to build your own JSX renderer][11].
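In that spirit, here is a minimal sketch of a Redux-like store, in the vein of those lessons. It is not the real library, just the core idea:
```
// A tiny re-implementation of the heart of Redux: a store holds state,
// a reducer computes the next state, and listeners are notified on change.
function createStore(reducer) {
  let state;
  let listeners = [];

  const getState = () => state;

  const dispatch = (action) => {
    state = reducer(state, action); // compute the next state
    listeners.forEach((listener) => listener()); // notify subscribers
  };

  const subscribe = (listener) => {
    listeners.push(listener);
    return () => { // return an unsubscribe function
      listeners = listeners.filter((l) => l !== listener);
    };
  };

  dispatch({ type: '@@INIT' }); // populate the initial state
  return { getState, dispatch, subscribe };
}

// Usage: a counter reducer.
const counter = (state = 0, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;

const store = createStore(counter);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' }); // logs 1
```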
So why not try to implement these things by yourself, or go to GitHub and read their codebases in order to understand how they work? I'm sure you will find a lot of useful knowledge out there. Comments and tutorials might sometimes lie or be incorrect; the code cannot.
Another thing we have talked about a lot in this post is that **you should not get ahead of yourself**. Follow a TDD approach and solve one problem at a time. You are paid to increase revenue and decrease cost, and you do this by solving problems; this is the reason why software exists.
And since we love comparing our role to the ones related to civil engineering, let's do a quick comparison between software development and civil engineering, just as [Sam Newman does in his brilliant book called “Building Microservices”][12].
We love calling ourselves “engineers” or “architects”, but is that term really correct? We have been developing software for what we know as computers for less than a hundred years, while the Colosseum, for example, has existed for about two thousand years.
When was the last time you saw a bridge fall, and when was the last time your telephone or your browser crashed?
In order to explain this, I'll use an example I love.
This is the beautiful and awesome city of Barcelona:
![The City of Barcelona][13]
When we look at it this way and from this distance, it just looks like any other city in the world, but when we look at it from above, this is how Barcelona looks:
![Barcelona from above][14]
As you can see, every block has the same size and all of them are very organized. If you've ever been to Barcelona, you will also know how good it is to move through the city and how well it works.
But the people that planned Barcelona could not predict what it was going to look like in the next two or three hundred years. In cities, people come in and move through them all the time, so what they had to do was make the city grow organically and adapt as time goes by. They had to be prepared for changes.
This very same thing happens to our software. It evolves quickly, refactors are often needed and requirements change more frequently than we would like them to.
So, instead of acting like a Software Engineer, act as a Town Planner. Let your software grow organically and adapt as needed. Solve problems as they come by but make sure everything still has its place.
Doing this when it comes to software is even easier than doing it in cities due to the fact that **software is flexible, civil engineering is not**. **In the software world, our build time is compile time**. In Barcelona we cannot simply destroy buildings to give space to new ones; in software we can do that a lot more easily. We can break things all the time, and we can make experiments, because we can build as many times as we want, and it usually takes seconds. We spend a lot more time thinking than building. Our job is purely intellectual.
So **act like a town planner, let your software grow and adapt as needed**.
By doing this you will also have better abstractions and know when it's the right time to adopt them.
As Sam Koblenski says:
> Abstractions only work well in the right context, and the right context develops as the system develops.
Nowadays, something I see very often is people looking for boilerplates when they're trying to learn a new technology, but, in my opinion, **you should avoid boilerplates when you're starting out**. Of course, boilerplates and generators are useful if you are already experienced, but they take a lot of control out of your hands, and therefore you won't learn how to set up a project and won't understand exactly where each piece of the software you are using fits.
When you feel like you are struggling more than necessary to get something simple done, it might be the right time for you to look for an easier way to do it. In our role, **you should strive to be lazy**: you should work to not work. By doing that you have more free time to do other things, and this decreases cost and increases revenue, so that's another way of accomplishing your goal. You should not only work harder, you should work smarter.
Probably someone has already had the same problem you're having right now, but if nobody has, it might be your time to shine and build your own solution and help other people.
But sometimes you will not be able to realize you could be more effective in your tasks until you see someone doing them better. This is why it is so important to **talk to people**.
By talking to people, we share experiences that help each other's careers, discover new tools to improve our workflow and, even more important than that, learn how they solve their problems. This is why I like reading blog posts in which companies explain how they solve their problems.
Especially in our area, we like to think that Google and StackOverflow can answer all our questions, but we still need to know which questions to ask. I'm sure you have already had a problem you could not find a solution for because you didn't know exactly what was happening, and therefore didn't know what the right question to ask was.
But if I needed to sum this whole post in a single advice, it would be:
Solve problems.
Software is not a magic box; software is not poetry (unfortunately). It exists to solve problems and improve people's lives. Software exists to push the world forward.
**Now it's your time to go out there and solve problems**.
--------------------------------------------------------------------------------
via: https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html
作者:[Lucas Fernandes Da Costa][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://lucasfcosta.com
[b]: https://github.com/lujun9972
[1]: http://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-programmer/
[2]: http://ieeexplore.ieee.org/document/1702333/
[3]: https://en.wikipedia.org/wiki/Test_Driven_Development
[4]: https://en.wikipedia.org/wiki/Analysis_paralysis
[5]: https://babeljs.io/blog/2016/12/07/the-state-of-babel
[6]: http://requirejs.org
[7]: https://benmccormick.org/2015/05/28/moving-past-requirejs/
[8]: https://medium.freecodecamp.org/a-study-plan-to-cure-javascript-fatigue-8ad3a54f2eb1
[9]: https://www.quora.com/What-did-Richard-Feynman-mean-when-he-said-What-I-cannot-create-I-do-not-understand
[10]: https://egghead.io/lessons/javascript-redux-implementing-store-from-scratch
[11]: https://jasonformat.com/wtf-is-jsx/
[12]: https://www.barnesandnoble.com/p/building-microservices-sam-newman/1119741399/2677517060476?st=PLA&sid=BNB_DRS_Marketplace+Shopping+Books_00000000&2sid=Google_&sourceId=PLGoP4760&k_clickid=3x4760
[13]: /assets/barcelona-city.jpeg
[14]: /assets/barcelona-above.jpeg
[15]: https://twitter.com/thewizardlucas

View File

@ -1,513 +0,0 @@
What every software engineer should know about search
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*5AlsVRQrewLw74uHYTZ36w.jpeg)
### Want to build or improve a search experience? Start here.
Ask a software engineer: “[How would you add search functionality to your product?][78]” or “[How do I build a search engine?][79]” You'll probably immediately hear back something like: “Oh, we'd just launch an ElasticSearch cluster. Search is easy these days.”
But is it? Numerous current products [still][80] [have][81] [suboptimal][82] [search][83] [experiences][84]. Any true search expert will tell you that few engineers have a very deep understanding of how search engines work, knowledge that's often needed to improve search quality.
Even though many open source software packages exist, and the research is vast, the knowledge around building solid search experiences is limited to a select few. Ironically, [searching online][85] for search-related expertise doesn't yield any recent, thoughtful overviews.
#### Emoji Legend
```
❗ “Serious” gotcha: consequences of ignorance can be deadly
🔷 Especially notable idea or piece of technology
☁️ Cloud/SaaS
🍺 Open source / free software
🦏 JavaScript
🐍 Python
☕ Java
🇨 C/C++
```
### Why read this?
Think of this post as a collection of insights and resources that could help you to build search experiences. It can't be a complete reference, of course, but hopefully we can improve it based on feedback (please comment or reach out!).
I'll point at some of the most popular approaches, algorithms, techniques, and tools, based on my work on general-purpose and niche search experiences of varying sizes at Google, Airbnb and several startups.
Not appreciating or understanding the scope and complexity of search problems can lead to bad user experiences, wasted engineering effort, and product failure.
If you're impatient or already know a lot of this, you might find it useful to jump ahead to the tools and services sections.
### Some philosophy
This is a long read. But most of what we cover has four underlying principles:
#### 🔷 Search is an inherently messy problem:
* Queries are highly variable, and the search problems themselves are highly variable based on product needs.
* Think about how different Facebook search is (searching a graph of people),
* from YouTube search (searching individual videos),
* or how different both of those are from Kayak ([air travel planning is a really hairy problem][2]),
* or Google Maps (making sense of geo-spatial data),
* or Pinterest (pictures of a brunch you might cook one day).
#### Quality, metrics, and processes matter a lot:
* There is no magic bullet (like PageRank) nor a magic ranking formula that makes for a good approach. What works is an always-evolving collection of techniques and processes that solve aspects of the problem and improve the overall experience, usually gradually and continuously.
* ❗In other words, search is not just about building software that does ranking or retrieval (which we will discuss below) for a specific domain. Search systems are usually an evolving pipeline of components that are tuned and evolve over time and that build up to a cohesive experience.
* In particular, the key to success in search is building processes for evaluation and tuning into the product and development cycles. A search system architect should think about processes and metrics, not just technologies.
#### Use existing technologies first:
* As in most engineering problems, don't reinvent the wheel yourself. When possible, use existing services or open source tools. If an existing SaaS (such as [Algolia][3] or managed Elasticsearch) fits your constraints and you can afford to pay for it, use it. This solution will likely be the best choice for your product at first, even if down the road you need to customize, enhance, or replace it.
#### ❗Even if you buy, know the details:
* Even if you are using an existing open source or commercial solution, you should have some sense of the complexity of the search problem and where there are likely to be pitfalls.
### Theory: the search problem
Search is different for every product, and choices depend on many technical details of the requirements. It helps to identify the key parameters of your search problem:
1. Size: How big is the corpus (a complete set of documents that need to be searched)? Is it thousands or billions of documents?
2. Media: Are you searching through text, images, graphical relationships, or geospatial data?
3. 🔷 Corpus control and quality: Are the sources for the documents under your control, or coming from a (potentially adversarial) third party? Are all the documents ready to be indexed or need to be cleaned up and selected?
4. Indexing speed: Do you need real-time indexing, or is building indices in batch fine?
5. Query language: Are the queries structured, or do you need to support unstructured ones?
6. Query structure: Are your queries textual, images, sounds? Street addresses, record ids, peoples faces?
7. Context-dependence: Do the results depend on who the user is, their history with the product, their geographical location, the time of day, etc.?
8. Suggest support: Do you need to support incomplete queries?
9. Latency: What are the serving latency requirements? 100 milliseconds or 100 seconds?
10. Access control: Is it entirely public or should users only see a restricted subset of the documents?
11. Compliance: Are there compliance or organizational limitations?
12. Internationalization: Do you need to support documents with multilingual character sets or Unicode? (Hint: Always use UTF-8 unless you really know what you're doing.) Do you need to support a multilingual corpus? Multilingual queries?
Thinking through these points up front can help you make significant choices designing and building individual search system components.
** There is a Canvas element here; please handle it manually **
![](https://cdn-images-1.medium.com/max/1600/1*qTK1iCtyJUr4zOyw4IFD7A.jpeg)
A production indexing pipeline.
### Theory: the search pipeline
Now let's go through a list of search sub-problems. These are usually solved by separate subsystems that form a pipeline. What that means is that a given subsystem consumes the output of previous subsystems, and produces input for the following subsystems.
This leads to an important property of the ecosystem: once you change how an upstream subsystem works, you need to evaluate the effect of the change and possibly change the behavior downstream.
Here are the most important problems you need to solve:
#### Index selection:
given a set of documents (e.g. the entirety of the Internet, all the Twitter posts, all the pictures on Instagram), select a potentially smaller subset of documents that may be worthy of consideration as search results and only include those in the index, discarding the rest. This is done to keep your indexes compact, and is almost orthogonal to selecting the documents to show to the user. Examples of particular classes of documents that don't make the cut may include:
#### Spam:
oh, all the different shapes and sizes of search spam! A giant topic in itself, worthy of a separate guide. [A good web spam taxonomy overview][86].
#### Undesirable documents:
domain constraints might require filtering: [porn][87], illegal content, etc. The techniques are similar to spam filtering, probably with extra heuristics.
#### Duplicates:
Or near-duplicates and redundant documents. Can be done with [Locality-sensitive hashing][88], [similarity measures][89], clustering techniques or even [clickthrough data][90]. A [good overview][91] of techniques.
#### Low-utility documents:
The definition of utility depends highly on the problem domain, so it's hard to recommend approaches here. Some ideas: it might be possible to build a utility function for your documents; heuristics might work, for example, an image that contains only black pixels is not a useful document; utility might be learned from user behavior.
#### Index construction:
For most search systems, document retrieval is performed using an [inverted index][92], often just called “the index”.
* The index is a mapping of search terms to documents. A search term could be a word, an image feature or any other document derivative useful for query-to-document matching. The list of the documents for a given term is called a [posting list][1]. It can be sorted by some metric, like document quality. (A toy inverted index is sketched just after this list.)
* Figure out whether you need to index the data in real time. ❗Many companies with large corpora of documents use a batch-oriented indexing approach, but then find this is unsuited to a product where users expect results to be current.
* With text documents, term extraction usually involves using NLP techniques, such as stop lists, [stemming][4] and [entity extraction][5]; for images or videos, computer vision methods are used, etc.
* In addition, documents are mined for statistical and meta information, such as references to other documents (used in the famous [PageRank][6] ranking signal), [topics][7], counts of term occurrences, document size, entities mentioned, etc. That information can later be used in ranking signal construction or document clustering. Some larger systems might contain several indexes, e.g. for documents of different types.
* Index formats. The actual structure and layout of the index is a complex topic, since it can be optimized in many ways. For instance, there are [posting list compression methods][8]; one could target an [mmap()able data representation][9] or use an [LSM-tree][10] for a continuously updated index.
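To make the core structure concrete, here is a toy inverted index in JavaScript, assuming naive whitespace tokenization and AND-only queries (real systems add stemming, stop lists, compression, and so on):
```
// Build a toy inverted index: term -> posting list (sorted array of doc ids).
function buildIndex(docs) {
  const index = new Map();
  docs.forEach((text, docId) => {
    const terms = new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
    for (const term of terms) {
      if (!index.has(term)) index.set(term, []);
      index.get(term).push(docId); // ids arrive in order, so lists stay sorted
    }
  });
  return index;
}

// Retrieve documents containing ALL query terms (posting list intersection).
function search(index, query) {
  const lists = query.toLowerCase().split(/\W+/).filter(Boolean)
    .map((term) => index.get(term) || []);
  if (lists.length === 0) return [];
  return lists.reduce((acc, list) => acc.filter((id) => list.includes(id)));
}

const index = buildIndex(['the quick brown fox', 'the lazy dog', 'quick lazy fox']);
console.log(search(index, 'quick fox')); // [0, 2]
```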
#### Query analysis and document retrieval:
Most popular search systems allow non-structured queries. That means the system has to extract structure out of the query itself. In the case of an inverted index, you need to extract search terms using [NLP][93] techniques.
The extracted terms can be used to retrieve relevant documents. Unfortunately, most queries are not very well formulated, so it pays to do additional query expansion and rewriting, like:
* [Term re-weighting][11].
* [Spell checking][12]. Historical query logs are very useful as a dictionary.
* [Synonym matching][13]. [Another survey][14].
* [Named entity recognition][15]. A good approach is to use [HMM-based language modeling][16].
* Query classification. Detect queries of a particular type. For example, Google Search detects queries that contain a geographical entity, pornographic queries, or queries about something in the news. The retrieval algorithm can then make a decision about which corpora or indexes to look at.
* Expansion through [personalization][17] or [local context][18]. Useful for queries like “gas stations around me”.
#### Ranking:
Given a list of documents (retrieved in the previous step), their signals, and a processed query, create an optimal ordering (ranking) for those documents.
Originally, most ranking models in use were hand-tuned weighted combinations of all the document signals. Signal sets might include PageRank, clickthrough data, topicality information and [others][94].
To further complicate things, many of those signals, such as PageRank, or ones generated by [statistical language models][95] contain parameters that greatly affect the performance of a signal. Those have to be hand-tuned too.
Lately, 🔷 [learning to rank][96] (LtR), a family of signal-based discriminative supervised approaches, has become more and more popular. Some popular examples of LtR are [McRank][97] and [LambdaRank][98] from Microsoft, and [MatrixNet][99] from Yandex.
A new, [vector space based approach][100] for semantic retrieval and ranking is gaining popularity lately. The idea is to learn individual low-dimensional vector document representations, then build a model which maps queries into the same vector space.
Then, retrieval is just finding the several documents that are closest by some metric (e.g. Euclidean distance) to the query vector. Ranking is the distance itself. If the mapping of both the documents and queries is built well, the documents are chosen not by the mere presence of some simple pattern (like a word), but by how close the documents are to the query in  _meaning_ .
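A minimal sketch of that idea, using made-up 3-dimensional embeddings in place of a learned model, and cosine similarity (instead of Euclidean distance) as the closeness metric:
```
// Vector-space retrieval: documents and queries live in the same space;
// ranking is just similarity to the query vector.
function dot(a, b) { return a.reduce((sum, x, i) => sum + x * b[i], 0); }
function norm(a) { return Math.sqrt(dot(a, a)); }
function cosine(a, b) { return dot(a, b) / (norm(a) * norm(b)); }

function rank(queryVec, docVecs) {
  return docVecs
    .map((vec, id) => ({ id, score: cosine(queryVec, vec) }))
    .sort((a, b) => b.score - a.score); // most similar first
}

// Hypothetical embeddings; a real system would learn these.
const docs = [[0.9, 0.1, 0.0], [0.2, 0.8, 0.1], [0.4, 0.4, 0.3]];
console.log(rank([1.0, 0.0, 0.1], docs)); // doc 0 ranks first
```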
### Indexing pipeline operation
Usually, each of the above pieces of the pipeline must be operated on a regular basis to keep the search index and search experience current.
Operating a search pipeline can be complex and involve a lot of moving pieces. Not only is the data moving through the pipeline, but the code for each module and the formats and assumptions embedded in the data will change over time.
A pipeline can be run in “batch” mode on a regular or occasional basis (if indexing speed does not need to be real time), in a streamed way (if real-time indexing is needed), or based on certain triggers.
Some complex search engines (like Google) have several layers of pipelines operating on different time scales; for example, a page that changes often (like [cnn.com][101]) is indexed with a higher frequency than a static page that hasn't changed in years.
### Serving systems
Ultimately, the goal of a search system is to accept queries, and use the index to return appropriately ranked results. While this subject can be incredibly complex and technical, we mention a few of the key aspects to this part of the system.
* Performance: users notice when the system they interact with is laggy. ❗Google has done [extensive research][19], and they have noticed that the number of searches falls by 0.6% when serving is slowed by 300 ms. They recommend serving results in under 200 ms for most of your queries. A good article [on the topic][20]. This is the hard part: the system needs to collect documents from, possibly, many computers, then merge them into a possibly very long list, and then sort that list in ranking order. To complicate things further, ranking might be query-dependent, so, while sorting, the system is not just comparing 2 numbers but performing a more complex computation.
* 🔷 Caching results: is often necessary to achieve decent performance. ❗️ But caches are just one large gotcha. They might show stale results when indices are updated or some results are blacklisted. Purging caches is a can of worms in itself: a search system might not have the capacity to serve the entire query stream with an empty (cold) cache, so the [cache needs to be pre-warmed][21] before the queries start arriving. Overall, caches complicate a system's performance profile. Choosing a cache size and a replacement algorithm is also a [challenge][22].
* Availability: is often defined by an uptime/(uptime + downtime) metric. When the index is distributed, in order to serve any search results, the system often needs to query all the shards for their share of results. ❗That means that if one shard is unavailable, the entire search system is compromised. The more machines are involved in serving the index, the higher the probability of one of them becoming defunct and bringing the whole system down.
* Managing multiple indices: Indices for large systems may be separated into shards (pieces) or divided by media type or indexing cadence (fresh versus long-term indices). Results can then be merged, as sketched just after this list.
* Merging results of different kinds: e.g. Google showing results from Maps, News etc.
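A sketch of the shard-merge step mentioned in the list above: each shard returns its own ranked list, and the serving layer merges them into one global ranking:
```
// Each shard returns its top results, already sorted by score; the serving
// layer flattens and re-sorts them to produce the global top-k.
function mergeShardResults(shardLists, k) {
  return shardLists
    .flat()
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const shard0 = [{ id: 'a', score: 0.9 }, { id: 'b', score: 0.4 }];
const shard1 = [{ id: 'c', score: 0.7 }];
console.log(mergeShardResults([shard0, shard1], 2)); // a (0.9), then c (0.7)
```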
** There is a Canvas element here; please handle it manually **
![](https://cdn-images-1.medium.com/max/1600/1*M8WQu17E7SDziV0rVwUKbw.jpeg)
A human rater. Yeah, you should still have those.
### Quality, evaluation, and improvement
So you've launched your indexing pipeline and search servers, and it's all running nicely. Unfortunately, the road to a solid search experience only begins with running infrastructure.
Next, you'll need to build a set of processes around continuous search quality evaluation and improvement. In fact, this is actually most of the work and the hardest problem you'll have to solve.
🔷 What is quality? First, you'll need to determine (and get your boss or the product lead to agree on) what quality means in your case:
* Self-reported user satisfaction (includes UX)
* Perceived relevance of the returned results (not including UX)
* Satisfaction relative to competitors
* Satisfaction relative to the performance of the previous version of the search engine (e.g. last week)
* [User engagement][23]
Metrics: Some of these concepts can be quite hard to quantify. On the other hand, it's incredibly useful to be able to express how well a search engine is performing in a single number, a quality metric.
By continuously computing such a metric for your (and your competitors') systems, you can both track your progress and explain to your boss how well you are doing. Here are some classical ways to quantify quality that can help you construct your magic quality metric formula:
* [Precision][24] and [recall][25] measure how well the retrieved set of documents corresponds to the set you expected to see (a minimal computation is sketched just after this list).
* [F score][26] (specifically F1 score) is a single number that represents both precision and recall well.
* [Mean Average Precision][27] (MAP) allows you to quantify the relevance of the top returned results.
* 🔷 [Normalized Discounted Cumulative Gain][28] (nDCG) is like MAP, but weights the relevance of the result by its position.
* [Long and short clicks][29] allow you to quantify how useful the results are to real users.
* [A good detailed overview][30].
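To make these less abstract, here is a minimal computation of precision, recall, and nDCG, assuming binary relevance labels and a ranked list of result ids:
```
// Precision: fraction of retrieved results that are relevant.
// Recall: fraction of relevant documents that were retrieved.
function precisionRecall(retrieved, relevant) {
  const rel = new Set(relevant);
  const hits = retrieved.filter((id) => rel.has(id)).length;
  return { precision: hits / retrieved.length, recall: hits / relevant.length };
}

// nDCG: discount each relevant result by its rank, then normalize by the
// ideal (best possible) ordering.
function ndcg(retrieved, relevant) {
  const rel = new Set(relevant);
  const dcg = retrieved.reduce(
    (sum, id, i) => sum + (rel.has(id) ? 1 / Math.log2(i + 2) : 0), 0);
  let idcg = 0;
  const idealHits = Math.min(relevant.length, retrieved.length);
  for (let i = 0; i < idealHits; i++) idcg += 1 / Math.log2(i + 2);
  return dcg / idcg;
}

console.log(precisionRecall([1, 7, 3, 9], [1, 3, 5])); // precision 0.5, recall 0.666...
console.log(ndcg([1, 7, 3, 9], [1, 3, 5])); // ~0.70
```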
🔷 Human evaluations: Quality metrics might seem like statistical calculations, but they can't all be done by automated calculations. Ultimately, metrics need to represent subjective human evaluation, and this is where a “human in the loop” comes into play.
Skipping human evaluation is probably the most widespread cause of sub-par search experiences.
Usually, at early stages the developers themselves evaluate the results manually. At a later point, [human raters][102] (or assessors) may get involved. Raters typically use custom tools to look at returned search results and provide feedback on the quality of the results.
Subsequently, you can use the feedback signals to guide development, help make launch decisions or even feed them back into the index selection, retrieval or ranking systems.
Here is a list of some other types of human-driven evaluation that can be done on a search system:
* Basic user evaluation: The user ranks their satisfaction with the whole experience
* Comparative evaluation: Compare with other search results (compare with search results from earlier versions of the system or competitors)
* Retrieval evaluation: The query analysis and retrieval quality is often evaluated using manually constructed query-document sets. A user is shown a query and the list of the retrieved documents. She can then mark all the documents that are relevant to the query, and the ones that are not. The resulting pairs of (query, [relevant docs]) are called a “golden set”. Golden sets are remarkably useful. For one, an engineer can set up automatic retrieval regression tests using those sets. The selection signal from golden sets can also be fed back as ground truth to term re-weighting and other query re-writing models.
* Ranking evaluation: Raters are presented with a query and two documents side-by-side. The rater must choose the document that fits the query better. This creates a partial ordering on the documents for a given query. That ordering can later be compared to the output of the ranking system. The usual ranking quality measures used are MAP and nDCG.
#### Evaluation datasets:
One should start thinking about the datasets used for evaluation (like the “golden sets” mentioned above) early in the search experience design process. How do you collect and update them? How do you push them to the production eval pipeline? Is there a built-in bias?
Live experiments: After your search engine catches on and gains enough users, you might want to start conducting [live search experiments][103] on a portion of your traffic. The basic idea is to turn some optimization on for a group of people, and then compare the outcome with that of a “control” group, a similar sample of your users that did not have the experimental feature turned on. How you measure the outcome is, once again, very product-specific: it could be clicks on results, clicks on ads, etc.
Evaluation cycle time: How fast you improve your search quality is directly related to how fast you can complete the above cycle of measurement and improvement. It is essential from the beginning to ask yourself, “how fast can we measure and improve our performance?”
Will it take days, hours, minutes or seconds to make changes and see if they improve quality? ❗Running evaluation should also be as easy as possible for the engineers and should not take too much hands-on time.
### 🔷 So… How do I PRACTICALLY build it?
This blog post is not meant as a tutorial, but here is a brief outline of how I'd approach building a search experience right now:
1. As was said above, if you can afford it, just buy an existing SaaS (some good ones are listed below). An existing service fits if:
* Your experience is a “connected” one (your service or app has an internet connection).
* Does it support all the functionality you need out of the box? This post gives a pretty good idea of what functions you would want. To name a few, I'd at least consider: support for the media you are searching; real-time indexing support; query flexibility, including context-dependent queries.
* Given the size of the corpus and the expected [QpS][31], can you afford to pay for it for the next 12 months?
* Can the service support your expected traffic within the required latency limits? If you are querying the service from an app, make sure that the given service is accessible quickly enough from where your users are.
2. If a hosted solution does not fit your needs or resources, you probably want to use one of the open source libraries or tools. In the case of connected apps or websites, I'd choose ElasticSearch right now. For embedded experiences, there are multiple tools below.
3. You most likely want to do index selection and clean up your documents (say, extract relevant text from HTML pages) before uploading them to the search index. This will decrease the index size and make getting to good results easier. If your corpus fits on a single machine, just write a script (or several) to do that. If not, I'd use [Spark][104].
** There is a Canvas element here; please handle it manually **
![](https://cdn-images-1.medium.com/max/1600/1*lGw4kVVQyj8E5by2GWVoQg.jpeg)
You can never have too many tools.
### ☁️ SaaS
☁️ 🔷[Algolia][105], a proprietary SaaS that indexes a client's website and provides an API to search the website's pages. They also have an API to submit your own documents, support context-dependent searches, and serve results really fast. If I were building a web search experience right now and could afford it, I'd probably use Algolia first, and buy myself time to build a comparable search experience.
* Various ElasticSearch providers: AWS (☁️ [ElasticSearch Cloud][32]), ☁️ [elastic.co][33] and ☁️ [Qbox][34].
* ☁️ [Azure Search][35], a SaaS solution from Microsoft. Accessible through a REST API, it can scale to billions of documents. Has a Lucene query interface to simplify migrations from Lucene-based solutions.
* ☁️ [Swiftype][36], an enterprise SaaS that indexes your company's internal services, like Salesforce, G Suite, Dropbox and the intranet site.
### Tools and libraries
🍺☕🔷 [Lucene][106] is the most popular IR library. It implements query analysis, index retrieval and ranking. Either of the components can be replaced by an alternative implementation. There is also a C port, 🍺[Lucy][107].
* 🍺☕🔷 [Solr][37] is a complete search server based on Lucene. It's a part of the [Hadoop][38] ecosystem of tools.
* 🍺☕🔷 [Hadoop][39] is the most widely used open source MapReduce system, originally designed as an indexing pipeline framework for Solr. It has been gradually losing ground to 🍺[Spark][40] as the batch data processing framework used for indexing. ☁️[EMR][41] is a proprietary implementation of MapReduce on AWS.
* 🍺☕🔷 [ElasticSearch][42] is also based on Lucene ([feature comparison with Solr][43]). It has been getting more attention lately, so much so that a lot of people think of ES when they hear “search”, and for good reasons: it's well supported, has an [extensive API][44], [integrates with Hadoop][45] and [scales well][46]. There are open source and [Enterprise][47] versions. ES is also available as a SaaS from the providers above. It can scale to billions of documents, but scaling to that point can be very challenging, so a typical scenario involves an orders-of-magnitude smaller corpus.
* 🍺🇨 [Xapian][48], a C++-based IR library. Relatively compact, so it's good for embedding into desktop or mobile applications.
* 🍺🇨 [Sphinx][49], a full-text search server. It has a SQL-like query language and can also act as a [storage engine for MySQL][50] or be used as a library.
* 🍺☕ [Nutch][51], a web crawler. It can be used in conjunction with Solr. It's also the tool behind [🍺Common Crawl][52].
* 🍺🦏 [Lunr][53], a compact embedded search library for web apps on the client side.
* 🍺🦏 [searchkit][54], a library of web UI components to use with ElasticSearch.
* 🍺🦏 [Norch][55], a [LevelDB][56]-based search engine library for Node.js.
* 🍺🐍 [Whoosh][57], a fast, full-featured search library implemented in pure Python.
* OpenStreetMaps has its own 🍺[deck of search software][58].
### Datasets
A few fun or useful data sets to try building a search engine or evaluating search engine quality:
* 🍺🔷 [Commoncrawl][59], a regularly updated open web crawl dataset. There is a [mirror on AWS][60], accessible for free within the service.
* 🍺🔷 [Openstreetmap data dump][61] is a very rich source of data for someone building a geospatial search engine.
* 🍺 [Google Books N-grams][62] can be very useful for building language models.
* 🍺 [Wikipedia dumps][63] are a classic source to build, among other things, an entity graph out of. There is a [wide range of helper tools][64] available.
* [IMDb dumps][65] are a fun dataset to build a small toy search engine for.
### References
* [Modern Information Retrieval][66] by R. Baeza-Yates and B. Ribeiro-Neto is a good, deep academic treatment of the subject. This is a good overview for someone completely new to the topic.
* [Information Retrieval][67] by S. Büttcher, C. Clarke and G. Cormack is another academic textbook with wide coverage and is more up-to-date. It covers learning to rank and does a pretty good job of discussing the theory of search system evaluation. It is also a good overview.
* [Learning to Rank][68] by T-Y Liu is the best theoretical treatment of LtR, though pretty thin on practical aspects. Someone considering building an LtR system should probably check this out.
* [Managing Gigabytes][69], published in 1999, is still a definitive reference for anyone embarking on building an efficient index of a significant size.
* [Text Retrieval and Search Engines][70], a MOOC from Coursera. A decent overview of the basics.
* [Indexing the World Wide Web: The Journey So Far][71] ([PDF][72]), an overview of web search from 2012, by Ankit Jain and Abhishek Das of Google.
* [Why Writing Your Own Search Engine is Hard][73], a classic 2004 article by Anna Patterson.
* [https://github.com/harpribot/awesome-information-retrieval][74], a curated list of search-related resources.
* A [great blog][75] on everything search by [Daniel Tunkelang][76].
* Some good slides on [search engine evaluation][77].
This concludes my humble attempt to make a somewhat useful “map” for an aspiring search engine engineer. Did I miss something important? I'm pretty sure I did; you know, [the margin is too narrow][108] to contain this enormous topic. Let me know if you think that something should be here and is not; you can reach [me][109] at [forwidur@gmail.com][110] or at [@forwidur][111].
> P.S. This post is part of an open, collaborative effort to build an online reference, the Open Guide to Practical AI, which we'll release in draft form soon. See [this popular guide][112] for an example of what's coming. If you'd like to get updates on or help with this effort, sign up [here][113].
> Special thanks to [Joshua Levy][114], [Leo Polovets][115] and [Abhishek Das][116] for reading drafts of this and their invaluable feedback!
> Header image courtesy of [Mickaël Forrett][117]. The beautiful toolbox is called [The Studley Tool Chest][118].
--------------------------------------------------------------------------------
作者简介:
Max Grigorev
distributed systems, data, AI
-------------
via: https://medium.com/startup-grind/what-every-software-engineer-should-know-about-search-27d1df99f80d
作者:[Max Grigorev][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@forwidur?source=post_header_lockup
[1]:https://en.wikipedia.org/wiki/Inverted_index
[2]:http://www.demarcken.org/carl/papers/ITA-software-travel-complexity/ITA-software-travel-complexity.pdf
[3]:https://www.algolia.com/
[4]:https://en.wikipedia.org/wiki/Stemming
[5]:https://en.wikipedia.org/wiki/Named-entity_recognition
[6]:http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf
[7]:https://gofishdigital.com/semantic-topic-modeling/
[8]:https://nlp.stanford.edu/IR-book/html/htmledition/postings-file-compression-1.html
[9]:https://deplinenoise.wordpress.com/2013/03/31/fast-mmapable-data-structures/
[10]:https://en.wikipedia.org/wiki/Log-structured_merge-tree
[11]:http://orion.lcg.ufrj.br/Dr.Dobbs/books/book5/chap11.htm
[12]:http://norvig.com/spell-correct.html
[13]:http://nlp.stanford.edu/IR-book/html/htmledition/query-expansion-1.html
[14]:https://www.iro.umontreal.ca/~nie/IFT6255/carpineto-Survey-QE.pdf
[15]:https://en.wikipedia.org/wiki/Named-entity_recognition
[16]:http://www.aclweb.org/anthology/P02-1060
[17]:https://en.wikipedia.org/wiki/Personalized_search
[18]:http://searchengineland.com/future-search-engines-context-217550
[19]:http://services.google.com/fh/files/blogs/google_delayexp.pdf
[20]:http://highscalability.com/latency-everywhere-and-it-costs-you-sales-how-crush-it
[21]:https://stackoverflow.com/questions/22756092/what-does-it-mean-by-cold-cache-and-warm-cache-concept
[22]:https://en.wikipedia.org/wiki/Cache_performance_measurement_and_metric
[23]:http://blog.popcornmetrics.com/5-user-engagement-metrics-for-growth/
[24]:https://en.wikipedia.org/wiki/Information_retrieval#Precision
[25]:https://en.wikipedia.org/wiki/Information_retrieval#Recall
[26]:https://en.wikipedia.org/wiki/F1_score
[27]:http://fastml.com/what-you-wanted-to-know-about-mean-average-precision/
[28]:https://en.wikipedia.org/wiki/Discounted_cumulative_gain
[29]:http://www.blindfiveyearold.com/short-clicks-versus-long-clicks
[30]:https://arxiv.org/pdf/1302.2318.pdf
[31]:https://en.wikipedia.org/wiki/Queries_per_second
[32]:https://aws.amazon.com/elasticsearch-service/
[33]:https://www.elastic.co/
[34]:https://qbox.io/
[35]:https://azure.microsoft.com/en-us/services/search/
[36]:https://swiftype.com/
[37]:http://lucene.apache.org/solr/
[38]:http://hadoop.apache.org/
[39]:http://hadoop.apache.org/
[40]:http://spark.apache.org/
[41]:https://aws.amazon.com/emr/
[42]:https://www.elastic.co/products/elasticsearch
[43]:http://solr-vs-elasticsearch.com/
[44]:https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html
[45]:https://github.com/elastic/elasticsearch-hadoop
[46]:https://www.elastic.co/guide/en/elasticsearch/guide/current/distributed-cluster.html
[47]:https://www.elastic.co/cloud/enterprise
[48]:https://xapian.org/
[49]:http://sphinxsearch.com/
[50]:https://mariadb.com/kb/en/mariadb/sphinx-storage-engine/
[51]:https://nutch.apache.org/
[52]:http://commoncrawl.org/
[53]:https://lunrjs.com/
[54]:https://github.com/searchkit/searchkit
[55]:https://github.com/fergiemcdowall/norch
[56]:https://github.com/google/leveldb
[57]:https://bitbucket.org/mchaput/whoosh/wiki/Home
[58]:http://wiki.openstreetmap.org/wiki/Search_engines
[59]:http://commoncrawl.org/
[60]:https://aws.amazon.com/public-datasets/common-crawl/
[61]:http://wiki.openstreetmap.org/wiki/Downloading_data
[62]:http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html
[63]:https://dumps.wikimedia.org/
[64]:https://www.mediawiki.org/wiki/Alternative_parsers
[65]:http://www.imdb.com/interfaces
[66]:https://www.amazon.com/dp/0321416910
[67]:https://www.amazon.com/dp/0262528878/
[68]:https://www.amazon.com/dp/3642142664/
[69]:https://www.amazon.com/dp/1558605703
[70]:https://www.coursera.org/learn/text-retrieval
[71]:https://research.google.com/pubs/pub37043.html
[72]:https://pdfs.semanticscholar.org/28d8/288bff1b1fc693e6d80c238de9fe8b5e8160.pdf
[73]:http://queue.acm.org/detail.cfm?id=988407
[74]:https://github.com/harpribot/awesome-information-retrieval
[75]:https://medium.com/@dtunkelang
[76]:https://www.cs.cmu.edu/~quixote/
[77]:https://web.stanford.edu/class/cs276/handouts/lecture8-evaluation_2014-one-per-page.pdf
[78]:https://stackoverflow.com/questions/34314/how-do-i-implement-search-functionality-in-a-website
[79]:https://www.quora.com/How-to-build-a-search-engine-from-scratch
[80]:https://github.com/isaacs/github/issues/908
[81]:https://www.reddit.com/r/Windows10/comments/4jbxgo/can_we_talk_about_how_bad_windows_10_search_sucks/d365mce/
[82]:https://www.reddit.com/r/spotify/comments/2apwpd/the_search_function_sucks_let_me_explain/
[83]:https://medium.com/@RohitPaulK/github-issues-suck-723a5b80a1a3#.yp8ui3g9i
[84]:https://thenextweb.com/opinion/2016/01/11/netflix-search-sucks-flixed-fixes-it/
[85]:https://www.google.com/search?q=building+a+search+engine
[86]:http://airweb.cse.lehigh.edu/2005/gyongyi.pdf
[87]:https://www.researchgate.net/profile/Gabriel_Sanchez-Perez/publication/262371199_Explicit_image_detection_using_YCbCr_space_color_model_as_skin_detection/links/549839cf0cf2519f5a1dd966.pdf
[88]:https://en.wikipedia.org/wiki/Locality-sensitive_hashing
[89]:https://en.wikipedia.org/wiki/Similarity_measure
[90]:https://www.microsoft.com/en-us/research/wp-content/uploads/2011/02/RadlinskiBennettYilmaz_WSDM2011.pdf
[91]:http://infolab.stanford.edu/~ullman/mmds/ch3.pdf
[92]:https://en.wikipedia.org/wiki/Inverted_index
[93]:https://en.wikipedia.org/wiki/Natural_language_processing
[94]:http://backlinko.com/google-ranking-factors
[95]:http://times.cs.uiuc.edu/czhai/pub/slmir-now.pdf
[96]:https://en.wikipedia.org/wiki/Learning_to_rank
[97]:https://papers.nips.cc/paper/3270-mcrank-learning-to-rank-using-multiple-classification-and-gradient-boosting.pdf
[98]:https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/lambdarank.pdf
[99]:https://yandex.com/company/technologies/matrixnet/
[100]:https://arxiv.org/abs/1708.02702
[101]:http://cnn.com/
[102]:http://static.googleusercontent.com/media/www.google.com/en//insidesearch/howsearchworks/assets/searchqualityevaluatorguidelines.pdf
[103]:https://googleblog.blogspot.co.uk/2008/08/search-experiments-large-and-small.html
[104]:https://spark.apache.org/
[105]:https://www.algolia.com/
[106]:https://lucene.apache.org/
[107]:https://lucy.apache.org/
[108]:https://www.brainyquote.com/quotes/quotes/p/pierredefe204944.html
[109]:https://www.linkedin.com/in/grigorev/
[110]:mailto:forwidur@gmail.com
[111]:https://twitter.com/forwidur
[112]:https://github.com/open-guides/og-aws
[113]:https://upscri.be/d29cfe/
[114]:https://twitter.com/ojoshe
[115]:https://twitter.com/lpolovets
[116]:https://www.linkedin.com/in/abhishek-das-3280053/
[117]:https://www.behance.net/gallery/3530289/-HORIZON-
[118]:https://en.wikipedia.org/wiki/Henry_O._Studley

View File

@ -1,69 +0,0 @@
Why I love technical debt
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory1.png?itok=nbSRovsj)
This is not necessarily the title you'd expect for an article, I guess,* but I'm a fan of [technical debt][1]. There are two reasons for this: a Bad Reason and a Good Reason. I'll be upfront about the Bad Reason first, then explain why even that isn't really a reason to love it. I'll then tackle the Good Reason, and you'll nod along in agreement.
### The Bad Reason I love technical debt
We'll get this out of the way, then, shall we? The Bad Reason is that, well, there's just lots of it, it's interesting, it keeps me in a job, and it always provides a reason, as a security architect, for me to get involved in** projects that might give me something new to look at. I suppose those aren't all bad things. It can also be a bit depressing, because there's always so much of it, it's not always interesting, and sometimes I need to get involved even when I might have better things to do.
And what's worse is that it almost always seems to be security-related, and it's always there. That's the bad part.
Security, we all know, is the piece that so often gets left out, or tacked on at the end, or done in half the time it deserves, or done by people who have half an idea, but don't quite fully grasp it. I should be clear at this point: I'm not saying that this last reason is those people's fault. That people know they need security is fantastic. If we (the security folks) or we (the organization) haven't done a good enough job in making sufficient security resources--whether people, training, or visibility--available to those people who need it, the fact that they're trying is great and something we can work on. Let's call that a positive. Or at least a reason for hope.***
### The Good Reason I love technical debt
Let's get on to the other reason: the legitimate reason. I love technical debt when it's named.
What does that mean?
We all get that technical debt is a bad thing. It's what happens when you make decisions for pragmatic reasons that are likely to come back and bite you later in a project's lifecycle. Here are a few classic examples that relate to security:
* Not getting around to applying authentication or authorization controls on APIs that might, at some point, be public.
* Lumping capabilities together so it's difficult to separate out appropriate roles later on.
* Hard-coding roles in ways that don't allow for customisation by people who may use your application in different ways from those you initially considered.
* Hard-coding cipher suites for cryptographic protocols, rather than putting them in a config file where they can be changed or selected later.
There are lots more, of course, but those are just a few that jump out at me and that I've seen over the years. Technical debt means making decisions that will mean more work later on to fix them. And that can't be good, can it?
There are two words in the preceding paragraphs that should make us happy: they are "decisions" and "pragmatic." Because, in order for something to be named technical debt, I'd argue, it has to have been subject to conscious decision-making, and trade-offs must have been made--hopefully for rational reasons. Those reasons may be many and various--lack of qualified resources; project deadlines; lack of sufficient requirement definition--but if they've been made consciously, then the technical debt can be named, and if technical debt can be named, it can be documented.
And if it's documented, we're halfway there. As a security guy, I know that I can't force everything that goes out of the door to meet all the requirements I'd like--but the same goes for the high availability gal, the UX team, the performance folks, etc.
What we need--what we all need--is for documentation to exist about why decisions were made, because when we return to the problem we'll know it was thought about. And, what's more, the recording of that information might even make it into product documentation. "This API is designed to be used in a protected environment and should not be exposed on the public Internet" is a great piece of documentation. It may not be what a customer is looking for, but at least they know how to deploy the product, and, crucially, it's an opportunity for them to come back to the product manager and say, "We'd really like to deploy that particular API in this way. Could you please add this as a feature request?" Product managers like that. Very much.****
The best thing, though, is not just that named technical debt is visible technical debt, but that if you encourage your developers to document the decisions in code,***** then there's a decent chance that they'll record some ideas about how this should be done in the future. If you're really lucky, they might even add some hooks in the code to make it easier (an "auth" parameter on the API, which is unused in the current version, but will make API compatibility so much simpler in new releases; or cipher entry in the config file that currently only accepts one option, but is at least checked by the code).
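As a tiny illustration of that last idea, here is a hypothetical sketch (the names are made up) of a cipher entry that currently accepts only one option but is already validated by the code, so widening the choice later is cheap:
```
// Named technical debt: only one suite is supported today, but the config
// lookup and validation hook already exist, so extending this later is easy.
const SUPPORTED_SUITES = ['TLS_AES_128_GCM_SHA256']; // TODO: support more

function cipherSuiteFromConfig(config) {
  const suite = config.cipherSuite || SUPPORTED_SUITES[0];
  if (!SUPPORTED_SUITES.includes(suite)) {
    throw new Error('Unsupported cipher suite: ' + suite);
  }
  return suite;
}
```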
I've been a bit disingenuous, I know, by defining technical debt as named technical debt. But honestly, if it's not named, then you can't know what it is, and until you know what it is, you can't fix it.******* My advice is this: when you're doing a release close-down (or in your weekly standup--EVERY weekly standup), have an agenda item to record technical debt. Name it, document it, be proud, sleep at night.
* Well, apart from the obvious clickbait reason--for which I'm (a little) sorry.
** I nearly wrote "poke my nose into."
*** Work with me here.
**** If you're software engineer/coder/hacker, here's a piece of advice: Learn to talk to product managers like real people, and treat them nicely. They (the better ones, at least) are invaluable allies when you need to prioritize features or have tricky trade-offs to make.
***** Do this. Just do it. Documentation that isn't at least mirrored in code isn't real documentation.******
****** Don't believe me? Talk to developers. "Who reads product documentation?" "Oh, the spec? I skimmed it. A few releases back. I think." "I looked in the header file; couldn't see it there."
******* Or decide not to fix it, which may also be an entirely appropriate decision.
This article originally appeared on [Alice, Eve, and Bob - a security blog][2] and is republished with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/10/why-i-love-technical-debt
作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Technical_debt
[2]:https://aliceevebob.wordpress.com/2017/08/29/why-i-love-technical-debt/

View File

@ -1,86 +0,0 @@
How to Monetize an Open Source Project
======
![](http://www.itprotoday.com/sites/itprotoday.com/files/styles/article_featured_standard/public/ThinkstockPhotos-629994230_0.jpg?itok=5dZ68OTn)
The problem for any small group of developers putting the finishing touches on a commercial open source application is figuring out how to monetize the software in order to keep the bills paid and food on the table. Often these small pre-startups will start by deciding which of the recognized open source business models they're going to adapt, whether that be following Red Hat's lead and offering professional services, going the SaaS route, releasing as open core or something else.
Steven Grandchamp, general manager for MariaDB's North America operations and CEO for Denver-based startup [Drud Tech][1], thinks that might be putting the cart before the horse. With an open source project, the best first move is to get people downloading and using your product for free.
**Related:** [Demand for Open Source Skills Continues to Grow][2]
"The number one tangent to monetization in any open source product is adoption, because the key to monetizing an open source product is you flip what I would call the sales funnel upside down," he told ITPro at the recent All Things Open conference in Raleigh, North Carolina.
In many ways, he said, selling open source solutions is the opposite of marketing traditional proprietary products, where adoption doesn't happen until after a contract is signed.
**Related:** [Is Raleigh the East Coast's Silicon Valley?][3]
"In a proprietary software company, you advertise, you market, you make claims about what the product can do, and then you have sales people talk to customers. Maybe you have a free trial or whatever. Maybe you have a small version. Maybe it's time bombed or something like that, but you don't really get to realize the benefit of the product until there's a contract and money changes hands."
Selling open source solutions is different because of the challenge of selling software that's freely available as a GitHub download.
"The whole idea is to put the product out there, let people use it, experiment with it, and jump on the chat channels," he said, pointing out that his company Drud has a public chat channel that's open to anybody using their product. "A subset of that group is going to raise their hand and go, 'Hey, we need more help. We'd like a tighter relationship with the company. We'd like to know where your road map's going. We'd like to know about customization. We'd like to know if maybe this thing might be on your road map.'"
Grandchamp knows more than a little about making software pay, from both the proprietary and open source sides of the fence. In the 1980s he served as VP of research and development at Formation Technologies, and became SVP of R&D at John H. Harland after it acquired Formation in the mid-90s. He joined MariaDB in 2016, after serving eight years as CEO at OpenLogic, which was providing commercial support for more than 600 open-source projects at the time it was acquired by Rogue Wave Software. Along the way, there was a two-year stint at Microsoft's Redmond campus.
OpenLogic was where he discovered open source, and his experiences there are key to his approach for monetizing open source projects.
"When I got to OpenLogic, I was told that we had 300 customers that were each paying $99 a year for access to our tool," he explained. "But the problem was that nobody was renewing the tool. So I called every single customer that I could find and said 'did you like the tool?'"
It turned out that nearly everyone he talked to was extremely happy with the company's software, which ironically was the reason they weren't renewing. The company's tool solved their problem so well there was no need to renew.
"What could we have offered that would have made you renew the tool?" he asked. "They said, 'If you had supported all of the open source products that your tool assembled for me, then I would have that ongoing relationship with you.'"
Grandchamp immediately grasped the situation, and when the CTO said such support would be impossible, Grandchamp didn't mince words: "Then we don't have a company."
"We figured out a way to support it," he said. "We created something called the Open Logic Expert Community. We developed relationships with committers and contributors to a couple of hundred open source packages, and we acted as sort of the hub of the SLA for our customers. We had some people on staff, too, who knew the big projects."
After that successful launch, Grandchamp and his team began hearing from customers that they were confused over exactly what open source code they were using in their projects. That led to the development of what he says was the first software-as-a-service compliance portal of open source, which could scan an application's code and produce a list of all of the open source code included in the project. When customers then expressed confusion over compliance issues, the SaaS service was expanded to flag potential licensing conflicts.
Although the product lines were completely different, the same approach was used to monetize MariaDB, then called SkySQL, after MySQL co-founders Michael "Monty" Widenius, David Axmark, and Allan Larsson created the project by forking MySQL, which Oracle had acquired from Sun Microsystems in 2010.
Again, users were approached and asked what things they would be willing to purchase.
"They wanted different functionality in the database, and you didn't really understand this if you didn't talk to your customers," Grandchamp explained. "Monty and his team, while they were being acquired at Sun and Oracle, were working on all kinds of new functionality, around cloud deployments, around different ways to do clustering, they were working on lots of different things. That work, Oracle and MySQL didn't really pick up."
Rolling in the new features customers wanted had to be handled gingerly, because it was important to the folks at MariaDB not to break compatibility with MySQL. This necessitated a strategy around when the code bases would come together and when they would separate. "That road map, knowledge, influence and technical information was worth paying for."
As with OpenLogic, MariaDB customers expressed a willingness to spend money on a variety of fronts. For example, a big driver in the early days was a project called Remote DBA, which helped customers make up for a shortage of qualified database administrators. The project could help with design issues, as well as monitor existing systems to take the workload off of a customer's DBA team. The service also offered access to MariaDB's own DBAs, many of whom had a history with the database going back to the early days of MySQL.
"That was a subscription offering that people were definitely willing to pay for," he said.
The company also learned, again by asking and listening to customers, that there were various types of support subscriptions that customers were willing to purchase, including subscriptions around capability and functionality, and a managed service component of Remote DBA.
These days Grandchamp is putting much of his focus on his latest project, Drud, a startup that offers a suite of integrated, automated, open source development tools for developing and managing multiple websites, which can be running on any combination of content management systems and deployment platforms. It is monetized partially through modules that add features like a centralized dashboard and an "intelligence engine."
As you might imagine, he got it off the ground by talking to customers and giving them what they indicated they'd be willing to purchase.
"Our number one customer target is the agency market," he said. "The enterprise market is a big target, but I believe it's our second target, not our first. And the reason it's number two is they don't make decisions very fast. There are technology refresh cycles that have to come up, there are lots of politics involved and lots of different vendors. It's lucrative once you're in, but in a startup you've got to figure out how to pay your bills. I want to pay my bills today. I don't want to pay them in three years."
Drud's focus on the agency market illustrates another consideration: the importance of understanding something about your customers' business. When talking with agencies, many said they were tired of being offered generic software that really didn't match their needs from proprietary vendors that didn't understand their business. In Drud's case, that understanding is built into the company DNA. The software was developed by an agency to fill its own needs.
"We are a platform designed by an agency for an agency," Grandchamp said. "Right there is a relationship that they're willing to pay for. We know their business."
Grandchamp noted that startups also need to be able to distinguish users from customers. Most of the people downloading and using commercial open source software aren't the people who have authorization to make purchasing decisions. These users, however, can point to the people who control the purse strings.
"It's our job to build a way to communicate with those users, provide them value so that they'll give us value," he explained. "It has to be an equal exchange. I give you value of a tool that works, some advice, really good documentation, access to experts who can sort of guide you along. Along the way I'm asking you for pieces of information. Who do you work for? How are the technology decisions happening in your company? Are there other people in your company that we should refer the product to? We have to create the dialog."
In the end, Grandchamp said, in the open source world the people who go out to find business probably shouldn't see themselves as salespeople, but rather, as problem solvers.
"I believe that you're not really going to need salespeople in this model. I think you're going to need customer success people. I think you're going to need people who can enable your customers to be successful in a business relationship that's more highly transactional."
"People don't like to be sold," he added, "especially in open source. The last person they want to see is the sales person, but they like to ply and try and consume and give you input and give you feedback. They love that."
--------------------------------------------------------------------------------
via: http://www.itprotoday.com/software-development/how-monetize-open-source-project
作者:[Christine Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itprotoday.com/author/christine-hall
[1]:https://www.drud.com/
[2]:http://www.itprotoday.com/open-source/demand-open-source-skills-continues-grow
[3]:http://www.itprotoday.com/software-development/raleigh-east-coasts-silicon-valley

View File

@ -1,87 +0,0 @@
Why pair writing helps improve documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)
Professional writers, at least in the Red Hat documentation team, nearly always work on docs alone. But have you tried writing as part of a pair? In this article, I'll explain a few benefits of pair writing.
### What is pair writing?
Pair writing is when two writers work in real time, on the same piece of text, in the same room. This approach improves document quality, speeds up writing, and allows writers to learn from each other. The idea of pair writing is borrowed from [pair programming][1].
When pair writing, you and your colleague work on the text together, making suggestions and asking questions as needed. Meanwhile, you're observing each other's work. For example, while one is writing, the other writer observes details such as structure or context. Often discussion around the document turns into sharing experiences and opinions, and brainstorming about writing in general.
At all times, the writing is done by only one person. Thus, you need only one computer, unless you want one writer to do online research while the other person does the writing. The text workflow is the same as if you are working alone: a text editor, the documentation source files, git, and so on.
### Pair writing in practice
My colleague Aneta Steflova and I have done more than 50 hours of pair writing working on the Red Hat Enterprise Linux System Administration docs and on the Red Hat Identity Management docs. I've found that, compared to writing alone, pair writing:
* is as productive or more productive;
* improves document quality;
* helps writers share technical expertise; and
* is more fun.
### Speed
Two writers writing one text? Sounds half as productive, right? Wrong. (Usually.)
Pair writing can help you work faster because two people have solutions to a bigger set of problems, which means getting blocked less often during the process. For example, one time we wrote urgent API docs for identity management. I know at least the basics of web APIs, the REST protocol, and so on, which helped us speed through those parts of the documentation. Working alone, Aneta would have needed to interrupt the writing process frequently to study these topics.
### Quality
Poor wording or sentence structure, inconsistencies in material, and so on have a harder time surviving under the scrutiny of four eyes. For example, one of our pair writing documents was reviewed by an extremely critical developer, who was known for catching technical inaccuracies and bad structure. After this particular review, he said, "Perfect. Thanks a lot."
### Sharing expertise
Each of us lives in our own writing bubble, and we normally don't know how others approach writing. Pair writing can help you improve your own writing process. For example, Aneta showed me how to better handle assignments in which the developer has provided starting text (as opposed to the writer writing from scratch using their own knowledge of the subject), which I didn't have experience with. Also, she structures the docs thoroughly, which I began doing as well.
As another example, I'm good enough at Vim that XML editing (e.g., tags manipulation) is enjoyable instead of torturous. Aneta saw how I was using Vim, asked about it, suffered through the learning curve, and now takes advantage of the Vim features that help me.
Pair writing is especially good for helping and mentoring new writers, and it's a great way to get to know professionally (and have fun with) colleagues.
### When pair writing shines
In addition to benefits I've already listed, pair writing is especially good for:
* **Working with [Bugzilla][2]**: Bugzillas can be cumbersome and cause problems, especially for administration-clumsy people (like me).
* **Reviewing existing documents**: When documentation needs to be expanded or fixed, it is necessary to first examine the existing document.
* **Learning new technology**: A fellow writer can be a better teacher than an engineer.
* **Writing emails/requests for information to developers with well-chosen questions**: The difficulty of this task rises in proportion to the difficulty of the technology you are documenting.
Also, with pair writing, feedback is in real time, as-needed, and two-way.
On the downside, pair writing can be a faster pace, giving a writer less time to mull over a topic or wording. On the other hand, generally peer review is not necessary after pair writing.
### Words of caution
To get the most out of pair writing:
* Go into the project well prepared, otherwise you can waste your colleague's time.
* Talkative types need to stay focused on the task, otherwise they end up talking rather than writing.
* Be prepared for direct feedback. Pair writing is not for feedback-allergic writers.
* Beware of session hijackers. Dominant personalities can turn pair writing into writing solo with a spectator. (However, it _can_ be good if one person takes over at times, as long as the less-experienced partner learns from the hijacker, or the more-experienced writer is providing feedback to the hijacker.)
### Conclusion
Pair writing is a meeting, but one in which you actually get work done. It's an activity that lets writers focus on the one indispensable thing in our vocation--writing.
_This post was written with the help of pair writing with Aneta Steflova._
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/try-pair-writing
作者:[Maxim Svistunov][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/maxim-svistunov
[1]:https://developer.atlassian.com/blog/2015/05/try-pair-programming/
[2]:https://www.bugzilla.org/

View File

@ -1,120 +0,0 @@
Why and How to Set an Open Source Strategy
============================================================
![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/open-source-strategy-1024x576.jpg)
This article explains how to walk through, measure, and define strategies collaboratively in an open source community.
_“If you don't know where you are going, you'll end up someplace else.”_ — Yogi Berra
Open source projects are generally started as a way to scratch one's itch, and frankly that's one of their greatest attributes. Getting code down provides a tangible method to express an idea, showcase a need, and solve a problem. It avoids overthinking and getting a project stuck in analysis paralysis, letting the project pragmatically solve the problem at hand.
Next, a project starts to scale up and gets many varied users and contributions, with plenty of opinions along the way. That leads to the next big challenge: how does a project start to build a strategic vision? In this article, I'll describe how to walk through, measure, and define strategies collaboratively, in a community.
Strategy may seem like a buzzword of the corporate world rather than something an open source community would embrace, so I suggest stripping away the negative actions that are sometimes associated with this word (e.g., staff reductions, discontinuations, office closures). Strategy done right isn't a tool to justify unfortunate actions but a way to show focus and where each community member can contribute.
A good application of strategy answers the following:
* Why does the project exist?
* What does the project aim to achieve?
* What is the ideal end state for the project?
The key to success is answering these questions as simply as possible, with consensus from your community. Let's look at some ways to do this.
### Setting a mission and vision
_“Efforts and courage are not enough without purpose and direction.”_ — John F. Kennedy
All strategic planning starts off with setting a course for where the project wants to go. The two tools used here are _Mission_ and _Vision_. They are complementary terms, describing both the reason a project exists (mission) and the ideal end state for a project (vision).
A great way to start this exercise with the intent of driving consensus is by asking each key community member the following questions:
* What drove you to join and/or contribute to the project?
* How do you define success for your participation?
In a company, you'd usually ask your customers these questions. But in open source projects, the customers are the project participants, and their time investment is what makes the project a success.
Driving consensus means capturing the answers to these questions and looking for themes across them. At R Consortium, for example, I created a shared doc for the board to review each member's answers to the above questions, and followed up with a meeting to review the specific themes that came from those insights.
Building a mission flows really well from this exercise. The key thing is to keep the wording of your mission short and concise. Open Mainframe Project has done this really well. Here's their mission:
_Build community and adoption of Open Source on the mainframe by:_
* _Eliminating barriers to Open Source adoption on the mainframe_
* _Demonstrating value of the mainframe on technical and business levels_
* _Strengthening collaboration points and resources for the community to thrive_
At 40 words, it passes the key eye tests of a good mission statement; it's clear, concise, and demonstrates the useful value the project aims for.
The next stage is to reflect on the mission statement and ask yourself this question: What is the ideal outcome if the project accomplishes its mission? That can be a tough one to tackle. Open Mainframe Project put together its vision really well:
_Linux on the Mainframe as the standard for enterprise class systems and applications._
You could read that as a [BHAG][1], but it's really more of a vision, because it describes the future state that would be created by the mission being fully accomplished. It also hits the key pieces of an effective vision: it's only 13 words, inspirational, clear, memorable, and concise.
Mission and vision add clarity on the who, what, why, and how for your project. But, how do you set a course for getting there?
### Goals, Objectives, Actions, and Results
_“I don't focus on what I'm up against. I focus on my goals and I try to ignore the rest.”_ — Venus Williams
Looking at a mission and vision can get overwhelming, so breaking them down into smaller chunks can help the project determine how to get started. This also helps prioritize actions, either by importance or by opportunity. Most importantly, this step gives you guidance on what things to focus on for a period of time, and which to put off.
There are lots of methods of time-bound planning, but the method I think works best for projects is what I've dubbed the GOAR method. It's an acronym that stands for:
* Goals define what the project is striving for and likely align with and support the mission. Examples might be “Grow a diverse contributor base” or “Become the leading project for X.” Goals are aspirational and set direction.
* Objectives show how you measure a goal's completion, and should be clear and measurable. You might also have multiple objectives to measure the completion of a goal. For example, the goal “Grow a diverse contributor base” might have objectives such as “Have X total contributors monthly” and “Have contributors representing Y different organizations.” (One way a project might track such an objective is sketched after the table below.)
* Actions are what the project plans to do to complete an objective. This is where you get tactical on exactly what needs to be done. For example, the objective “Have contributors representing Y different organizations” would likely have actions of reaching out to interested organizations using the project, having existing contributors mentor new contributors, and providing incentives for first-time contributors.
* Results come along the way, showing progress, both positive and negative, from the actions.
You can put these into a table like this:
| Goals | Objectives | Actions | Results |
|:--|:--|:--|:--|
| Grow a diverse contributor base | Have X total contributors monthly | Existing contributors mentor new contributors; provide incentives for first-time contributors | |
| | Have contributors representing Y different organizations | Reach out to interested organizations using the project | |
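As a rough, hedged illustration (not part of the GOAR method itself), an objective like “Have X total contributors monthly” can be checked mechanically. The Python sketch below counts unique commit authors over the past 30 days using GitHub's public REST API; the repository name is a placeholder, and authentication and pagination are omitted for brevity:

```python
# Rough proxy for the "Have X total contributors monthly" objective:
# count unique commit authors over the past 30 days via GitHub's
# public REST API. The repository name is a placeholder; a real
# check would add authentication and pagination.
from datetime import datetime, timedelta, timezone

import requests

REPO = "example-org/example-project"  # hypothetical repository
since = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/commits",
    params={"since": since, "per_page": 100},
    timeout=30,
)
resp.raise_for_status()

authors = set()
for commit in resp.json():
    if commit.get("author"):  # GitHub account, when linked
        authors.add(commit["author"]["login"])
    else:  # fall back to the git author name
        authors.add(commit["commit"]["author"]["name"])

print(f"Unique contributors in the last 30 days: {len(authors)}")
```

Numbers like this can fill the Results column over time without anyone having to tally contributions by hand.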
In large organizations, monthly or quarterly goals and objectives often make sense; however, on open source projects, these time frames are unrealistic. Six- or even 12-month tracking allows the project leadership to focus on driving efforts at a high level by nurturing the community along.
The end result is a rubric that provides clear vision on where the project is going. It also lets community members more easily find ways to contribute. For example, your project may include someone who knows a few organizations using the project  this person could help introduce those developers to the codebase and guide them through their first commit.
### What happens if the project doesn't hit the goals?
_“I have not failed. I've just found 10,000 ways that won't work.”_ — Thomas A. Edison
Figuring out what is within the capability of an organization — whether a Fortune 500 company or a small open source project — is hard. And sometimes the expectations or market conditions change along the way. Does that make the strategic planning process a failure? Absolutely not!
Instead, you can use this experience as a way to better understand your project's velocity, its impact, and its community, and perhaps as a way to prioritize what is important and what's not.
--------------------------------------------------------------------------------
via: https://www.linuxfoundation.org/blog/set-open-source-strategy/
作者:[ John Mertic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxfoundation.org/author/jmertic/
[1]:https://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal
[2]:https://www.linuxfoundation.org/author/jmertic/
[3]:https://www.linuxfoundation.org/category/blog/
[4]:https://www.linuxfoundation.org/category/audience/c-level/
[5]:https://www.linuxfoundation.org/category/audience/developer-influencers/
[6]:https://www.linuxfoundation.org/category/audience/entrepreneurs/
[7]:https://www.linuxfoundation.org/category/campaigns/membership/how-to/
[8]:https://www.linuxfoundation.org/category/campaigns/events-campaigns/linux-foundation/
[9]:https://www.linuxfoundation.org/category/audience/open-source-developers/
[10]:https://www.linuxfoundation.org/category/audience/open-source-professionals/
[11]:https://www.linuxfoundation.org/category/audience/open-source-users/
[12]:https://www.linuxfoundation.org/category/blog/thought-leadership/

View File

@ -1,94 +0,0 @@
Why is collaboration so difficult?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_block_collaboration.png?itok=pKbXpr1e)
Many contemporary definitions of "collaboration" define it simply as "working together"--and, in part, it is working together. But too often, we tend to use the term "collaboration" interchangeably with cognate terms like "cooperation" and "coordination." These terms also refer to some manner of "working together," yet there are subtle but important differences between them all.
How does collaboration differ from coordination or cooperation? What is so important about collaboration specifically? Does it have or do something that coordination and cooperation don't? The short answer is a resounding "yes!"
[This unit explores collaboration][1], a problematic term because it has become a simple buzzword for "working together." By the time you've studied the cases and practiced the exercises contained in this section, you will understand that it's so much more than that.
### Not like the others
"Coordination" can be defined as the ordering of a variety of people acting in an effective, unified manner toward an end goal or state
In traditional organizations and businesses, people contributed according to their role definitions, such as in manufacturing, where each employee was responsible for adding specific components to the widget on an assembly line until the widget was complete. In contexts like these, employees weren't expected to contribute beyond their pre-defined roles (they were probably discouraged from doing so), and they didn't necessarily have a voice in the work or in what was being created. Often, a manager oversaw the unification of effort (hence the role "project coordinator"). Coordination is meant to connote a sense of harmony and unity, as if elements are meant to go together, resulting in efficiency among the ordering of the elements.
One common assumption is that coordinated efforts are aimed at the same, single goal. So some end result is "successful" when people and parts work together seamlessly; when one of the parts breaks down and fails, then the whole goal fails. Many traditional businesses (for instance, those with command-and-control hierarchies) manage work through coordination.
Cooperation is another term whose surface meaning is "working together." Rather than the sense of compliance that is part of "coordination," it carries a sense of agreement and helpfulness on the path toward completing a shared activity or goal.
"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating.
People tend to use the term "cooperation" when joining two semi-related entities where one or more entity could decide not to cooperate. The people and pieces that are part of a cooperative effort make the shared activity easier to perform or the shared goal easier to reach. "Cooperation" implies a shared goal or activity we agree to pursue jointly. One example is how police and witnesses cooperate to solve crimes.
"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating.
Sometimes collaboration involves two or more groups that do not normally work together; they are disparate groups or not usually connected. For instance, a traitor collaborates with the enemy, or rival businesses collaborate with each other. The subtlety of collaboration is that the two groups may have oppositional initial goals but work together to create a shared goal. Collaboration can be more contentious than coordination or cooperation, but like cooperation, any one of the entities could choose not to collaborate. Despite the contention and conflict, however, there is discourse--whether in the form of multi-way discussion or one-way feedback--because without discourse, there is no way for people to express a point of dissent that is ripe for negotiation.
The success of any collaboration rests on how well the collaborators negotiate their needs to create the shared objective, and then how well they cooperate and coordinate their resources to execute a plan to reach their goals.
### For example
One way to think about these things is through a real-life example--like the writing of [this book][1].
The editor, [Bryan][2], coordinates the authors' work through the call for proposals, setting dates and deadlines, collecting the writing, and meeting editing dates and deadlines for feedback about our work. He coordinates the authors, the writing, the communications. In this example, I'm not coordinating anything except myself (still a challenge most days!).
I cooperate with Bryan's dates and deadlines, and with the ways he has decided to coordinate the work. I propose the introduction on GitHub; I wait for approval. I comply with instructions, write some stuff, and send it to him by the deadlines. He cooperates by accepting a variety of document formats. I get his edits, incorporate them, send them back to him, and so forth. If I don't cooperate (or something comes up and I can't cooperate), then maybe someone else writes this introduction instead.
Bryan and I collaborate when either one of us challenges something, including pieces of the work or process that aren't clear, things that we thought we agreed to, or things on which we have differing opinions. These intersections are ripe for negotiation and therefore indicative of collaboration. They are the opening for us to negotiate some creative work.
Once the collaboration is negotiated and settled, writing and editing the book returns to cooperation/coordination; that is why collaboration relies on the other two terms of joint work.
One of the most interesting parts of this example (and of work and shared activity in general) is the moment-by-moment pivot from any of these terms to the other. The writing of this book is not completely collaborative, coordinated, or cooperative. It's a messy mix of all three.
### Why is collaboration important?
Collaboration is an important facet of contemporary organizations--specifically those oriented toward knowledge work--because it allows for productive disagreement between actors. That kind of disagreement then helps increase the level of engagement and provide meaning to the group's work.
In his book, _The Age of Discontinuity: Guidelines to Our Changing Society_, [Peter Drucker discusses][3] the "knowledge worker" and the pivot from work based on experience (e.g., apprenticeships) to work based on knowledge and the application of knowledge. This change in work and workers, he writes:
> ...will make the management of knowledge workers increasingly crucial to the performance and achievement of the knowledge society. We will have to learn to manage the knowledge worker both for productivity and for satisfaction, both for achievement and for status. We will have to learn to give the knowledge worker a job big enough to challenge him, and to permit performance as a "professional."
In other words, knowledge workers aren't satisfied with being subordinate--told what to do by managers, as if there is one right way to do a task. And, unlike past workers, they expect more from their work lives, including some level of emotional fulfillment or meaning-making from their work. The knowledge worker, according to Drucker, is educated toward continual learning, "paid for applying his knowledge, exercising his judgment, and taking responsible leadership." So it then follows that knowledge workers expect from work the chance to apply and share their knowledge, develop themselves professionally, and continuously augment their knowledge.
Interesting to note is the fact that Peter Drucker wrote about those concepts in 1969, nearly 50 years ago--virtually predicting the societal and organizational changes that would reveal themselves, in part, through the development of knowledge sharing tools such as forums, bulletin boards, online communities, and cloud knowledge sharing like Dropbox and Google Drive, as well as the creation of social media tools such as MySpace, Facebook, Twitter, YouTube, and countless others. All of these have some basis in the idea that knowledge is something to liberate and share.
In this light, one might view the open organization as one successful manifestation of a system of management for knowledge workers. In other words, open organizations are a way to manage knowledge workers by meeting the needs of the organization and knowledge workers (whether employees, customers, or the public) simultaneously. The foundational values this book explores are the scaffolding for the management of knowledge, and they apply to ways we can:
* make sure there's a lot of varied knowledge around (inclusivity)
* help people come together and participate (community)
* circulate information, knowledge, and decision making (transparency)
* innovate and not become entrenched in old ways of thinking and being (adaptability)
* develop a shared goal and work together to use knowledge (collaboration)
Collaboration is an important process because of the participatory effect it has on knowledge work and how it aids negotiations between people and groups. As we've discovered, collaboration is more than working together with some degree of compliance; in fact, it describes a type of working together that overcomes compliance because people can disagree, question, and express their needs in a negotiation and in collaboration. And, collaboration is more than "working toward a shared goal"; collaboration is a process which defines the shared goals via negotiation and, when successful, leads to cooperation and coordination to focus activity on the negotiated outcome.
Collaboration works best when the other four open organization values are present. For instance, when people are transparent, there is no guessing about what is needed, why, by whom, or when. Also, because collaboration involves negotiation, it also needs diversity (a product of inclusivity); after all, if we aren't negotiating among differing views, needs, or goals, then what are we negotiating? During a negotiation, the parties are often asked to give something up so that all may gain, so we have to be adaptable and flexible to the different outcomes that negotiation can provide. Lastly, collaboration is often an ongoing process rather than one which is quickly done and over, so it's best to enter collaboration as if you are part of the same community, desiring everyone to benefit from the negotiation. In this way, acts of authentic and purposeful collaboration directly necessitate the emergence of the other four values--transparency, inclusivity, adaptability, and community--as they assemble part of the organization's collective purpose spontaneously.
### Collaboration in open organizations
Traditional organizations advance an agreed-upon set of goals that people are welcome to support or not. In these organizations, there is some amount of discourse and negotiation, but often a higher-ranking or more powerful member of the organization intervenes to make a decision, which the membership must accept (and sometimes ignores). In open organizations, however, the focus is for members to perform their activity and to work out their differences; only if necessary would someone get involved (and even then, they would try to do it in the most minimal way that supports the shared values of community, transparency, adaptability, collaboration, and inclusivity). This makes the collaborative processes in open organizations "messier" (or "chaotic," to use Jim Whitehurst's term) but more participatory and, hopefully, innovative.
This article is part of the [Open Organization Workbook project][1].
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/17/11/what-is-collaboration
作者:[Heidi Hess Von Ludewig][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/heidi-hess-von-ludewig
[1]:https://opensource.com/open-organization/17/8/workbook-project-announcement
[2]:http://opensource.com/users/bbehrens
[3]:https://www.elsevier.com/books/the-age-of-discontinuity/drucker/978-0-434-90395-5

View File

@ -1,95 +0,0 @@
Changing how we use Slack solved our transparency and silo problems
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_abstract_pieces.jpg?itok=tGR1d2MU)
Collaboration and information silos are a reality in most organizations today. People tend to regard them as huge barriers to innovation and organizational efficiency. They're also a favorite target for solutions from software tool vendors of all types.
Tools by themselves, however, are seldom (if ever), the answer to a problem like organizational silos. The reason for this is simple: Silos are made of people, and human dynamics are key drivers for the existence of silos in the first place.
So what is the answer?
Successful communities are the key to breaking down silos. Tools play an important role in the process, but if you don't build successful communities around those tools, then you'll face an uphill battle with limited chances for success. Tools enable communities; they do not build them. This takes a thoughtful approach--one that looks at culture first, process second, and tools last.
However, this is a challenge because, in most cases, this is not the way the process works in most businesses. Too many companies begin their journey to fix silos by thinking about tools first and considering metrics that don't evaluate the right factors for success. Too often, people choose tools for purely cost-based, compliance-based, or effort-based reasons--instead of factoring in the needs and desires of the user base. But subjective measures like "customer/user delight" are a real factor for these internal tools, and can make or break the success of both the tool adoption and the goal of increased collaboration.
It's critical to understand the best technical tool (or what the business may consider the most cost-effective) is not always the solution that drives community, transparency, and collaboration forward. There is a reason that "Shadow IT"--users choosing their own tool solution, building community and critical mass around them--exists and is so effective: People who choose their own tools are more likely to stay engaged and bring others with them, breaking down silos organically.
This is a story of how Autodesk ended up adopting Slack at enterprise scale to help solve our transparency and silo problems. Interestingly, Slack wasn't (and isn't) an IT-supported application at Autodesk. It's an enterprise solution that was adopted, built, and is still run by a group of passionate volunteers who are committed to a "default to open" paradigm.
Utilizing Slack makes transparency happen for us.
### Chat-tastrophe
First, some perspective: My job at Autodesk is running our [Open@ADSK][1] initiative. I was originally hired to drive our open source strategy, but we quickly expanded my role to include driving open source best practices for internal development (inner source), and transforming how we collaborate internally as an organization. This last piece is where we pick up our story of Slack adoption in the company.
But before we even begin to talk about our journey with Slack, let's address why lack of transparency and openness was a challenge for us. What is it that makes transparency such a desirable quality in organizations, and what was I facing when I started at Autodesk?
Every company says they want "better collaboration." In our case, we are a 35-year-old software company that has been immensely successful at selling desktop "shrink-wrapped" software to several industries, including architecture, engineering, construction, manufacturing, and entertainment. But no successful company rests on its laurels, and Autodesk leadership recognized that a move to Cloud-based solutions for our products was key to the future growth of the company, including opening up new markets through product combinations that required Cloud computing and deep product integrations.
The challenge in making this move was far more than just technical or architectural--it was rooted in the DNA of the company, in everything from how we were organized to how we integrated our products. The basic format of integration in our desktop products was file import/export. While this is undoubtedly important, it led to a culture of highly-specialized teams working in an environment that's more siloed than we'd like and not sharing information (or code). Prior to the move to a cloud-based approach, this wasn't as a much of a problem--but, in an environment that requires organizations to behave more like open source projects do, transparency, openness, and collaboration go from "nice-to-have" to "business critical."
Like many companies our size, Autodesk has had many different collaboration solutions through the years, some of them commercial, and many of them home-grown. However, none of them effectively solved the many-to-many real-time collaboration challenge. Some reasons for this were technical, but many of them were cultural.
When someone first tasked me with trying to find a solution for this, I relied on a philosophy I'd formed through challenging experiences in my career: "Culture first, tools last." This is still a challenge for engineering folks like myself. We want to jump immediately to tools as the solution to any problem. However, it's critical to evaluate a company's ethos (culture), as well as existing processes to determine what kinds of tools might be a good fit. Unfortunately, I've seen too many cases where leaders have dictated a tool choice from above, based on the factors discussed earlier. I needed a different approach that relied more on fitting a tool into the culture we wanted to become, not the other way around.
What I found at Autodesk were several small camps of people using tools like HipChat, IRC, Microsoft Lync, and others, to try to meet their needs. However, the most interesting thing I found was 85 separate instances of Slack in the company!
Eureka! I'd stumbled onto a viral success (one enabled by Slack's ability to easily spin up "free" instances). I'd also landed squarely in what I like to call "silo-land."
All of those instances were not talking to each other--so, effectively, we'd created isolated islands of information that, while useful to those in them, couldn't transform the way we operated as an enterprise. Essentially, our existing organizational culture was recreated in digital format in these separate Slack systems. Our organization housed a mix of these small, free instances, as well as multiple paid instances, which also meant we were not taking advantage of a common billing arrangement.
My first (open source) thought was: "Hey, why aren't we using IRC, or some other open source tool, for this?" I quickly realized that didn't matter, as our open source engineers weren't the only people using Slack. People from all areas of the company--even senior leadership--were adopting Slack in droves, and, in some cases, convincing their management to pay for it!
My second (engineering) thought was: "Oh, this is simple. We just collapse all 85 of those instances into a single cohesive Slack instance." What soon became obvious was that was the easy part of the solution. Much harder was the work of cajoling, convincing, and moving people to a single, transparent instance. Building in the "guard rails" to enable a closed source tool to provide this transparency was key. These guard rails came in the form of processes, guidelines, and community norms that were the hardest part of this transformation.
### The real work begins
As I began to slowly help users migrate to the common instance (paying for it was also a challenge, but a topic for another day), I discovered a dedicated group of power users who were helping each other in the #adsk-slack-help channel on our new common instance of Slack. These power users were, in effect, building the roots of our transparency and community through their efforts.
The open source community manager in me quickly realized these users were the path to successfully scaling Slack at Autodesk. I enlisted five of them to help me, and, together we set about fabricating the community structure for the tool's rollout.
Here I should note the distinction between a community structure/governance model and traditional IT policies: With the exception of security and data privacy/legal policies, volunteer admins and user community members completely define and govern our Slack instance. One of the keys to our success with Slack (currently approximately 9,100 users and roughly 4,300 public channels) was how we engaged and involved our users in building these governance structures. Things like channel naming conventions and our growing list of frequently asked questions were organic and have continued in that same vein. Our community members feel like their voices are heard (even if some disagree), and that they have been a part of the success of our deployment of Slack.
We did, however, learn an important lesson about transparency and company culture along the way.
### It's not the tool
When we first launched our main Slack instance, we left the ability for anyone to make a channel private turned on. After about three months of usage, we saw a clear trend: More people were creating private channels (and messages) than they were public channels (the ratio was about two to one, private versus public). Since our effort to merge 85 Slack instances was intended to increase participation and transparency, we quickly adjusted our policy and turned off this feature for regular users. We instead implemented a policy of review by the admin team, with clear criteria (finance, legal, personnel discussions among the reasons) defined for private channels.
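A trend like that two-to-one ratio is straightforward to measure through Slack's Web API. As a hedged sketch using the official `slack_sdk` Python client (assuming a token with the relevant read scopes; pagination is omitted for brevity), an admin could tally channel types like this:

```python
# Tally public vs. private channels via Slack's Web API -- a minimal
# audit sketch. Assumes a token with channels:read and groups:read
# scopes; pagination is omitted for brevity.
import os

from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

counts = {"public_channel": 0, "private_channel": 0}
response = client.conversations_list(
    types="public_channel,private_channel",
    limit=1000,
)
for channel in response["channels"]:
    key = "private_channel" if channel["is_private"] else "public_channel"
    counts[key] += 1

print(f"Public: {counts['public_channel']}, Private: {counts['private_channel']}")
```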
That decision was probably the only time in this entire process that I regretted something.
We took an amazing amount of flak for this decision because we were dealing with a corporate culture that was used to working in independent units that had minimal interaction with each other. Our defining moment of clarity (and the tipping point where things started to get better) occurred in an all-hands meeting when one of our senior executives asked me to address a question about Slack. I stood up to answer the question, and said (paraphrased from memory): "It's not about the tool. I could give you all the best, gold-plated collaboration platform in existence, but we aren't going to be successful if we don't change our approach to collaboration and learn to default to open."
I didn't think anything more about that statement--until that senior executive started using the phrase "default to open" in his slide decks, in his staff meetings, and with everyone he met. That one moment has defined what we have been trying to do with Slack: The tool isn't the sole reason we've been successful; it's the approach that we've taken around building a self-sustaining community that not only wants to use this tool, but craves the ability it gives them to work easily across the enterprise.
### What we learned
I say all the time that this could have happened with other, similar tools (Hipchat, IRC, etc), but it works in this case specifically because we chose an approach of supporting a solution that the user community adopted for their needs, not strictly what the company may have chosen if the decision was coming from the top of the organizational chart. We put a lot of work into making it an acceptable solution (from the perspectives of security, legal, finance, etc.) for the company, but, ultimately, our success has come from the fact that we built this rollout (and continue to run the tool) as a community, not as a traditional corporate IT system.
The most important lesson I learned through all of this is that transparency and community are evolutionary, not revolutionary. You have to understand where your culture is, where you want it to go, and use the lever points the community itself is adopting to make sustained and significant progress. There is a fine balance between anarchy and a thriving community, and we've tried to model our approach on the successful practices of today's thriving open source communities.
Communities are personal. Tools come and go, but keeping your community at the forefront of your push to transparency is the key to success.
This article is part of the [Open Organization Workbook project][2].
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/17/12/chat-platform-default-to-open
作者:[Guy Martin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/guyma
[1]:mailto:Open@ADSK
[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement

View File

@ -1,116 +0,0 @@
How Mycroft used WordPress and GitHub to improve its documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)
Image credits: Photo by Unsplash; modified by Rikki Endsley. CC BY-SA 4.0
Imagine you've just joined a new technology company, and one of the first tasks you're assigned is to improve and centralize the organization's developer-facing documentation. There's just one catch: That documentation exists in many different places, across several platforms, and differs markedly in accuracy, currency, and style.
So how did we tackle this challenge?
### Understanding the scope
As with any project, we first needed to understand the scope and bounds of the problem we were trying to solve. What documentation was good? What was working? What wasn't? How much documentation was there? What format was it in? We needed to do a **documentation audit**. Luckily, [Aneta Šteflova][1] had recently [published an article on OpenSource.com][2] about this, and it provided excellent guidance.
![mycroft doc audit][4]
Mycroft documentation audit, showing source, topic, medium, currency, quality and audience
Next, every piece of publicly facing documentation was assessed for the topic it covered, the medium it used, currency, and quality. A pattern quickly emerged that different platforms had major deficiencies, allowing us to take a data-driven approach to decommissioning our existing Jekyll-based sites. The audit also highlighted just how fragmented our documentation sources were--we had developer-facing documentation across no fewer than seven sites. Although search engines were finding this content just fine, the fragmentation made it difficult for developers and users of Mycroft--our primary audiences--to navigate the information they needed. Again, this data helped us make the decision to centralize our documentation onto one platform.
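If the audit lives in a spreadsheet, summaries such as "documents per source" or "documents per quality rating" fall out of a few lines of code. A minimal sketch, assuming a CSV export with the column names shown in the audit above (the file name is a placeholder):

```python
# Summarize a documentation audit spreadsheet by source and quality.
# Assumes a CSV export with columns matching the audit above
# (source, topic, medium, currency, quality, audience); the file
# name is a placeholder.
import csv
from collections import Counter

by_source = Counter()
by_quality = Counter()

with open("doc_audit.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        by_source[row["source"]] += 1
        by_quality[row["quality"]] += 1

print("Documents per source:", dict(by_source))
print("Documents per quality rating:", dict(by_quality))
```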
### Choosing a central platform
As an organization, we wanted to constrain the number of standalone platforms in use. Over time, maintenance and upkeep of multiple platforms and integration touchpoints becomes cumbersome for any organization, but this is exacerbated for a small startup.
One of the other business drivers in platform choice was that we had two primary but very different audiences. On one hand, we had highly technical developers who we were expecting would push documentation to its limits--and who would want to contribute to technical documentation using their tools of choice--[Git][5], [GitHub][6], and [Markdown][7]. Our second audience--end users--would primarily consume technical documentation and would want to do so in an inviting, welcoming platform that was visually appealing and provided additional features such as the ability to identify reading time and to provide feedback. The ability to capture feedback was also a key requirement from our side as without feedback on the quality of the documentation, we would not have a solid basis to undertake continuous quality improvement.
Would we be able to identify one platform that met all of these competing needs?
We realized that two platforms covered all of our needs:
* [WordPress][8]: Our existing website is built on WordPress, and we have some reasonably robust WordPress skills in-house. The flexibility of WordPress also fulfilled our requirements for functionality like reading time and the ability to capture user feedback.
* [GitHub][9]: Almost [all of Mycroft.AI's source code is available on GitHub][10], and our development team uses this platform daily.
But how could we marry the two?
![](https://opensource.com/sites/default/files/images/life-uploads/wordpress-github-sync.png)
### Integrating WordPress and GitHub with WordPress GitHub Sync
Luckily, our COO, [Nate Tomasi][11], spotted a WordPress plugin that promised to integrate the two.
This was put through its paces on our test website, and it passed with flying colors. It was easy to install, had a straightforward configuration that required just an OAuth token and a webhook with GitHub, and provided two-way integration between WordPress and GitHub.
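Under the hood, the webhook side of that configuration amounts to pointing GitHub at the WordPress site. As a hedged sketch of that wiring (not the plugin's own setup flow), the snippet below registers a push webhook through GitHub's REST API; the repository, endpoint URL, and environment variables are placeholders:

```python
# Register a push webhook on a GitHub repository -- a sketch of the
# kind of wiring WordPress GitHub Sync relies on, not the plugin's
# actual setup flow. Repo, URL, and env vars below are placeholders.
import os

import requests

REPO = "example-org/docs"                      # hypothetical repository
ENDPOINT = "https://example.com/wp-json/sync"  # hypothetical WordPress endpoint

resp = requests.post(
    f"https://api.github.com/repos/{REPO}/hooks",
    headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
    json={
        "name": "web",
        "active": True,
        "events": ["push"],
        "config": {
            "url": ENDPOINT,
            "content_type": "json",
            "secret": os.environ["WEBHOOK_SECRET"],
        },
    },
    timeout=30,
)
resp.raise_for_status()
print("Created hook:", resp.json()["id"])
```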
The plugin did, however, have a dependency--on Markdown--which proved a little harder to implement. We trialed several Markdown plugins, but each had several quirks that interfered with the rendering of non-Markdown-based content. After several days of frustration, and even an attempt to custom-write a plugin for our needs, we stumbled across [Parsedown Party][12]. There was much partying! With WordPress GitHub Sync and Parsedown Party, we had integrated our two key platforms.
Now it was time to make our content visually appealing and usable for our user audience.
### Reading time and feedback
To implement the reading time and feedback functionality, we built a new [page template for WordPress][13], and leveraged plugins within the page template.
Knowing the estimated reading time of an article in advance has been [proven to increase engagement with content][14] and provides developers and users with the ability to decide whether to read the content now or bookmark it for later. We tested several WordPress plugins for reading time, but settled on [Reading Time WP][15] because it was highly configurable and could be easily embedded into WordPress page templates. Our decision to place Reading Time at the top of the content was designed to give the user the choice of whether to read now or save for later. With Reading Time in place, we then turned our attention to gathering user feedback and ratings for our documentation.
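Reading-time estimates like these are generally just word count divided by an average reading speed. A minimal sketch, assuming roughly 200 words per minute as a common default (the plugin's own internals may differ):

```python
# Estimate reading time as word count / average reading speed.
# 200 words per minute is a common default; Reading Time WP's own
# internals may differ.
import math

def reading_time_minutes(text: str, wpm: int = 200) -> int:
    words = len(text.split())
    return max(1, math.ceil(words / wpm))

article = "word " * 950  # stand-in for real article text
print(f"{reading_time_minutes(article)} min read")  # -> 5 min read
```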
![](https://opensource.com/sites/default/files/images/life-uploads/screenshot-from-2017-12-08-00-55-31.png)
There are several rating and feedback plugins available for WordPress. We needed one that could be easily customized for several use cases, and that could aggregate or summarize ratings. After some experimentation, we settled on [Multi Rating Pro][16] because of its wide feature set, especially the ability to create a Review Ratings page in WordPress--i.e., a central page where staff can review ratings without having to be logged in to the WordPress backend. The only gap we ran into here was the ability to set the display order of rating options--but it will likely be added in a future release.
The WordPress GitHub Integration plugin also gave us the ability to link back to the GitHub repository where the original Markdown content was held, inviting technical developers to contribute to improving our documentation.
### Updating the existing documentation
Now that the "container" for our new documentation had been developed, it was time to update the existing content. Because much of our documentation had grown organically over time, there were no style guidelines to shape how keywords and code were styled. This was tackled first, so that it could be applied to all content. [You can see our content style guidelines on GitHub.][17]
As part of the update, we also ran several checks to ensure that the content was technically accurate, augmenting the existing documentation with several images for better readability.
There were also a couple of additional tools that made creating internal links for documentation pieces easier. First, we installed the [WP Anchor Header][18] plugin. This plugin provided a small but important function: adding `id` attributes to each `<h1>`, `<h2>` (and so on) element. This meant that internal anchors could be automatically generated on the command line from the Markdown content in GitHub using the [markdown-toc][19] library, then simply copied in to the WordPress content, where they would automatically link to the `id` attributes generated by WP Anchor Header.
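The anchors themselves follow the familiar heading-slug convention: lowercase the heading, strip punctuation, and replace spaces with hyphens. A rough sketch approximating what a tool like markdown-toc produces (its exact slug rules may differ in edge cases):

```python
# Generate a table of contents from Markdown headings, approximating
# the common heading-slug convention (lowercase, punctuation stripped,
# spaces replaced with hyphens) used by tools like markdown-toc.
import re

def slugify(heading: str) -> str:
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\s-]", "", slug)   # drop punctuation
    return re.sub(r"\s+", "-", slug)       # spaces -> hyphens

def toc(markdown: str) -> str:
    lines = []
    for match in re.finditer(r"^(#{1,6})\s+(.+)$", markdown, re.MULTILINE):
        level, title = len(match.group(1)), match.group(2).strip()
        lines.append(f"{'  ' * (level - 1)}- [{title}](#{slugify(title)})")
    return "\n".join(lines)

doc = "# Getting Started\n## Install Mycroft\n## First Skill\n"
print(toc(doc))
```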
Next, we imported the updated documentation into WordPress from GitHub, and made sure we had meaningful, easy-to-search slugs, descriptions, and keywords--because what good is excellent documentation if no one can find it?! A final activity was implementing redirects so that people hitting the old documentation would be taken to the new version.
### What next?
[Please do take a moment and have a read through our new documentation][20]. We know it isn't perfect--far from it--but we're confident that the mechanisms we've baked into our new documentation infrastructure will make it easier to identify gaps--and resolve them quickly. If you'd like to know more, or have suggestions for our documentation, please reach out to Kathy Reid on [Chat][21] (@kathy-mycroft) or via [email][22].
_Reprinted with permission from [Mycroft.ai][23]._
### About the author
Kathy Reid - Director of Developer Relations @MycroftAI, President of @linuxaustralia. Kathy Reid has expertise in open source technology management, web development, video conferencing, digital signage, technical communities and documentation. She has worked in a number of technical and leadership roles over the last 20 years, and holds Arts and Science undergraduate degrees.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/rocking-docs-mycroft
作者:[Kathy Reid][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/kathyreid
[1]:https://opensource.com/users/aneta
[2]:https://opensource.com/article/17/10/doc-audits
[3]:/file/382466
[4]:https://opensource.com/sites/default/files/images/life-uploads/mycroft-documentation-audit.png (mycroft documentation audit)
[5]:https://git-scm.com/
[6]:https://github.com/MycroftAI
[7]:https://en.wikipedia.org/wiki/Markdown
[8]:https://www.wordpress.org/
[9]:https://github.com/
[10]:https://github.com/mycroftai
[11]:http://mycroft.ai/team/
[12]:https://wordpress.org/plugins/parsedown-party/
[13]:https://developer.wordpress.org/themes/template-files-section/page-template-files/
[14]:https://marketingland.com/estimated-reading-times-increase-engagement-79830
[15]:https://jasonyingling.me/reading-time-wp/
[16]:https://multiratingpro.com/
[17]:https://github.com/MycroftAI/docs-rewrite/blob/master/README.md
[18]:https://wordpress.org/plugins/wp-anchor-header/
[19]:https://github.com/jonschlinkert/markdown-toc
[20]:https://mycroft.ai/documentation
[21]:https://chat.mycroft.ai/
[22]:mailto:kathy.reid@mycroft.ai
[23]:https://mycroft.ai/blog/improving-mycrofts-documentation/

View File

@ -1,121 +0,0 @@
The open organization and inner sourcing movements can share knowledge
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gov_collaborative_risk.png?itok=we8DKHuL)
Image by : opensource.com
Red Hat is a company with roughly 11,000 employees. The IT department consists of roughly 500 members. Though it makes up just a fraction of the entire organization, the IT department is still sufficiently staffed to have many application service, infrastructure, and operational teams within it. Our purpose is "to enable Red Hatters in all functions to be effective, productive, innovative, and collaborative, so that they feel they can make a difference,"--and, more specifically, to do that by providing technologies and related services in a fashion that is as open as possible.
Being open like this takes time, attention, and effort. While we always strive to be as open as possible, it can be difficult. For a variety of reasons, we don't always succeed.
In this story, I'll explain a time when, in the rush to innovate, the Red Hat IT organization lost sight of its open ideals. But I'll also explore how returning to those ideals--and using the collaborative tactics of "inner source"--helped us to recover and greatly improve the way we deliver services.
### About inner source
Before I explain how inner source helped our team, let me offer some background on the concept.
Inner source is the adoption of open source development practices between teams within an organization to promote better and faster delivery without requiring project resources be exposed to the world or openly licensed. It allows an organization to receive many of the benefits of open source development methods within its own walls.
In this way, inner source aligns well with open organization strategies and principles; it provides a path for open, collaborative development. While the open organization defines its principles of openness broadly as transparency, inclusivity, adaptability, collaboration, and community--and covers how to use these open principles for communication, decision making, and many other topics--inner source is about the adoption of specific and tactical practices, processes, and patterns from open source communities to improve delivery.
For instance, [the Open Organization Maturity Model][1] suggests that in order to be transparent, teams should, at minimum, share all project resources with the project team (though it suggests that it's generally better to share these resources with the entire organization). The common pattern in both inner source and open source development is to host all resources in a publicly available version control system, for source control management, which achieves the open organization goal of high transparency.
Another example of value alignment appears in the way open source communities accept contributions. In open source communities, source code is transparently available. Community contributions in the form of patches or merge requests are commonly accepted practices (even expected ones). This provides one example of how to meet the open organization's goal of promoting inclusivity and collaboration.
### The challenge
Early in 2014, Red Hat IT began its first steps toward making Amazon Web Services (AWS) a standard hosting offering for business critical systems. While teams within Red Hat IT had built several systems and services in AWS by this time, these were bespoke creations, and we desired to make deploying services to IT standards in AWS both simple and standardized.
In order to make AWS cloud hosting meet our operational standards (while being scalable), the Cloud Enablement team within Red Hat IT decided that all infrastructure in AWS would be configured through code, rather than manually, and that everyone would use a standard set of tools. The Cloud Enablement team designed and built these standard tools; a separate group, the Platform Operations team, was responsible for provisioning and hosting systems and services in AWS using the tools.
The Cloud Enablement team built a toolset, obtusely named "Template Util," based on AWS Cloud Formations configurations wrapped in a management layer to enforce certain configuration requirements and make stamping out multiple copies of services across environments easier. While the Template Util toolset technically met all our initial requirements, and we eventually provisioned the infrastructure for more than a dozen services with it, engineers in every team working with the tool found using it to be painful. Michael Johnson, one engineer using the tool, said "It made doing something relatively straightforward really complicated."
Among the issues Template Util exhibited were:
* Underlying Cloud Formation technologies implied constraints on application stack management at odds with how we managed our application systems.
* The tooling was needlessly complex and brittle in places, using multiple layered templating technologies and languages, making syntax issues hard to debug.
* The code for the tool--and some of the data users needed to manipulate the tool--were kept in a repository that was difficult for most users to access.
* There was no standard process for contributing or accepting changes.
* The documentation was poor.
As more engineers attempted to use the Template Util toolset, they found even more issues and limitations with the tools. Unhappiness continued to grow. To make matters worse, the Cloud Enablement team then shifted priorities to other deliverables without relinquishing ownership of the tool, so bug fixes and improvements to the tools were further delayed.
The real, core issue here was our inability to build an inclusive community that could collaboratively build shared tooling to meet everyone's needs. Fear of losing "ownership," fear of changing requirements, and fear of seeing hard work abandoned all contributed to chronic conflict, which in turn led to poorer outcomes.
### Crisis point
By September 2015, more than a year after launching our first major service in AWS with the Template Util tool, we hit a crisis point.
Many engineers refused to use the tools. That forced all of the related service provisioning work on a small set of engineers, further fracturing the community and disrupting service delivery roadmaps as these engineers struggled to deal with unexpected work. We called an emergency meeting and invited all the teams involved to find a solution.
During the emergency meeting, we found that people generally thought we needed immediate change and should start the tooling effort over, but even the decision to start over wasn't unanimous. Many solutions emerged--sometimes multiple solutions from within a single team--all of which would require significant work to implement. While we couldn't reach a consensus on which solution to use during this meeting, we did reach an agreement to give proponents of different technologies two weeks to work together, across teams, to build their case with a prototype, which the community could then review.
While we didn't reach a final and definitive decision, this agreement was the first point at which we started to return to the open source ideals that guide our mission. By inviting all involved parties, we were able to be transparent and inclusive, and we could begin rebuilding our internal community. By making clear that we wanted to improve things and were open to new options, we showed our commitment to adaptability and meritocracy. Most importantly, the plan for building prototypes gave people a clear return path to collaboration.
When the community reviewed the prototypes, it determined that the clear leader was an Ansible-based toolset that would eventually become known, internally, as Ansicloud. (At the time, no one involved with this work had any idea that Red Hat would acquire Ansible the following month. It should also be noted that other teams within Red Hat have found tools based on Cloud Formation extremely useful, even when our specific Template Util tool did not find success.)
This prototyping and testing phase didn't fix things overnight, though. While we had consensus on the general direction we needed to head, we still needed to improve the new prototype to the point at which engineers could use it reliably for production services.
So over the next several months, a handful of engineers worked to further build and extend the Ansicloud toolset. We built three new production services. While we were sharing code, that sharing activity occurred at a low level of maturity. Some engineers had trouble getting access due to older processes. Other engineers headed in slightly different directions, with each engineer having to rediscover some of the core design issues themselves.
### Returning to openness
This led to a turning point: Building on top of the previous agreement, we focused on developing a unified vision and providing easier access. To do this, we:
1. created a list of specific goals for the project (both "must-haves" and "nice-to-haves"),
2. created an open issue log for the project to avoid solving the same problem repeatedly,
3. opened our code base so anyone in Red Hat could read or clone it, and
4. made it easy for engineers to get trusted committer access.
Our agreement to collaborate, our finally unified vision, and our improved tool development methods spurred the growth of our community. Ansicloud adoption spread throughout the involved organizations, but this led to a new problem: The tool started changing more quickly than users could adapt to it, and improvements that different groups submitted were beginning to affect other groups in unanticipated ways.
These issues resulted in our recent turn to inner source practices. While every open source project operates differently, we focused on adopting some best practices that seemed common to many of them. In particular:
* We identified the business owner of the project and the core-contributor group of developers who would govern the development of the tools and decide what contributions to accept. While we want to keep things open, we can't have people working against each other or breaking each other's functionality.
* We developed a project README clarifying the purpose of the tool and specifying how to use it. We also created a CONTRIBUTING document explaining how to contribute, what sort of contributions would be useful, and what sort of tests a contribution would need to pass to be accepted.
* We began building continuous integration and testing services for the Ansicloud tool itself. This helped us ensure we could quickly and efficiently validate contributions technically, before the project accepted and merged them.
With these basic agreements, documents, and tools available, we were back onto the path of open collaboration and successful inner sourcing.
### Why it matters
Why does inner source matter?
From a developer community point of view, shifting from a traditional siloed development model to the inner source model has produced significant, quantifiable improvements:
* Contributions to our tooling have grown 72% per week (by number of commits).
* The percentage of contributions from non-core committers has grown from 27% to 78%; the users of the toolset are driving its development.
* The contributor list has grown by 15%, primarily from new users of the tool set, rather than core committers, increasing our internal community.
And the tools we've delivered through this project have allowed us to see dramatic improvements in our business outcomes. Using the Ansicloud tools, 54 new multi-environment application service deployments were created in 385 days (compared to 20 services in 1,013 days with the Template Util tools). We've gone from one new service deployment in a 50-day period to one every week--a seven-fold increase in the velocity of our delivery.
What really matters here is that the improvements we saw were not aberrations. Inner source provides common, easily understood patterns that organizations can adopt to effectively promote collaboration (not to mention other open organization principles). By mirroring open source production practices, inner source can also mirror the benefits of open source code, which have been seen time and time again: higher quality code, faster development, and more engaged communities.
This article is part of the [Open Organization Workbook project][2].
### About the author
Tom Benninger - Tom Benninger is a Solutions Architect, Systems Engineer, and continual tinkerer at Red Hat, Inc. Having worked with startups, small businesses, and larger enterprises, he has experience within a broad set of IT disciplines. His current area of focus is improving Application Lifecycle Management in the enterprise. He has a particular interest in how open source, inner source, and collaboration can help support modern application development practices and the adoption of DevOps, CI/CD, Agile,...
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/1/open-orgs-and-inner-source-it
作者:[Tom Benninger][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tomben
[1]:https://opensource.com/open-organization/resources/open-org-maturity-model
[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement

View File

@ -1,181 +0,0 @@
in which the cost of structured data is reduced
======
Last year I got the wonderful opportunity to attend [RacketCon][1] as it was hosted only 30 minutes away from my home. The two-day conference had a number of great talks on the first day, but what really impressed me was the fact that the entire second day was spent focusing on contribution. The day started out with a few 15- to 20-minute talks about how to contribute to a specific codebase (including that of Racket itself), and after that people just split off into groups focused around specific codebases. Each table had maintainers helping guide other folks towards how to work with the codebase and construct effective patch submissions.
![lensmen chronicles][2]
I came away from the conference with a great sense of appreciation for how friendly and welcoming the Racket community is, and how great Racket is as a swiss-army-knife type tool for quick tasks. (Not that it's unsuitable for large projects, but I don't have the opportunity to start any new large projects very frequently.)
The other day I wanted to generate colored maps of the world by categorizing countries interactively, and Racket seemed like it would fit the bill nicely. The job is simple: show an image of the world with one country selected; when a key is pressed, categorize that country, then show the map again with all categorized countries colored, and continue with the next country selected.
### GUIs and XML
I have yet to see a language/framework more accessible and straightforward out of the box for drawing. Here's the entry point, which sets up state and then constructs a canvas that handles key input and display:
```
(define (main path)
  (let ([frame (new frame% [label "World color"])]
        [categorizations (box '())]
        [doc (call-with-input-file path read-xml/document)])
    (new (class canvas%
           (define/override (on-char event)
             (handle-key this categorizations (send event get-key-code)))
           (super-new))
         [parent frame]
         [paint-callback (draw doc categorizations)])
    (send frame show #t)))
```
While the class system is not one of my favorite things about Racket (most newer code seems to avoid it in favor of [generic interfaces][3] in the rare case that polymorphism is truly called for), the fact that classes can be constructed in a light-weight, anonymous way makes it much less onerous than it could be. This code sets up all mutable state in a [`box`][4] which you use in the way you'd use a `ref` in ML or Clojure: a mutable wrapper around an immutable data structure.
The world map I'm using is [an SVG of the Robinson projection][5] from Wikipedia. If you look closely, there's a binding for `doc` that calls [`call-with-input-file`][6] with [`read-xml/document`][7], which loads up the whole map file's SVG; just about as easily as you could ask for.
The data you get back from `read-xml/document` is in fact a [document][8] struct, which contains an `element` struct containing `attribute` structs and lists of more `element` structs. All very sensible, but maybe not what you would expect in other dynamic languages like Clojure or Lua where free-form maps reign supreme. Racket really wants structure to be known up-front when possible, which is one of the things that help it produce helpful error messages when things go wrong.
Here's how we handle keyboard input; we're displaying a map with one country highlighted, and `key` here tells us what the user pressed to categorize the highlighted country. If that key is in the `categories` hash then we put it into `categorizations`.
```
(define categories #hash((select . "eeeeff")
                         (#\1 . "993322")
                         (#\2 . "229911")
                         (#\3 . "ABCD31")
                         (#\4 . "91FF55")
                         (#\5 . "2439DF")))

(define (handle-key canvas categorizations key)
  (cond [(equal? #\backspace key) (swap! categorizations cdr)]
        [(member key (dict-keys categories)) (swap! categorizations (curry cons key))]
        [(equal? #\space key) (display (unbox categorizations))])
  (send canvas refresh))
```
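A quick aside: `swap!` isn't part of Racket's standard library; it's a small helper from the full program. A minimal sketch consistent with how it's used above:

```
;; apply f to the current contents of box b and store the result back
(define (swap! b f)
  (set-box! b (f (unbox b))))
```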
### Nested updates: the bad parts
Finally, once we have a list of categorizations, we need to apply it to the map document and display it. We apply a [`fold`][9] reduction over the XML document struct and the list of country categorizations (plus `'select` for the country that's selected to be categorized next) to get back a "modified" document struct where the proper elements have the style attributes applied for the given categorization, then we turn it into an image and hand it to [`draw-pict`][10]:
```
(define (update original-doc categorizations)
  (for/fold ([doc original-doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (set-style doc n (style-for category))))

(define ((draw doc categorizations) _ context)
  (let* ([newdoc (update doc categorizations)]
         [xml (call-with-output-string (curry write-xml newdoc))])
    (draw-pict (call-with-input-string xml svg-port->pict) context 0 0)))
```
The problem is in that pesky `set-style` function. All it has to do is reach deep down into the `document` struct to find the `n`th `path` element (the one associated with a given country), and change its `'style` attribute. It ought to be a simple task. Unfortunately this function ends up being anything but simple:
```
(define (set-style doc n new-style)
  (let* ([root (document-element doc)]
         [g (list-ref (element-content root) 8)]
         [paths (element-content g)]
         [path (first (drop (filter element? paths) n))]
         [path-num (list-index (curry eq? path) paths)]
         [style-index (list-index (lambda (x) (eq? 'style (attribute-name x)))
                                  (element-attributes path))]
         [attr (list-ref (element-attributes path) style-index)]
         [new-attr (make-attribute (source-start attr)
                                   (source-stop attr)
                                   (attribute-name attr)
                                   new-style)]
         [new-path (make-element (source-start path)
                                 (source-stop path)
                                 (element-name path)
                                 (list-set (element-attributes path)
                                           style-index new-attr)
                                 (element-content path))]
         [new-g (make-element (source-start g)
                              (source-stop g)
                              (element-name g)
                              (element-attributes g)
                              (list-set paths path-num new-path))]
         [root-contents (list-set (element-content root) 8 new-g)])
    (make-document (document-prolog doc)
                   (make-element (source-start root)
                                 (source-stop root)
                                 (element-name root)
                                 (element-attributes root)
                                 root-contents)
                   (document-misc doc))))
```
The reason for this is that while structs are immutable, they don't support functional updates. Whenever you're working with immutable data structures, you want to be able to say "give me a new version of this data, but with field `x` replaced by the value of `(f (lookup x))`". Racket can [do this with dictionaries][11] but not with structs. If you want a modified version, you have to create a fresh one.
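For contrast, here's the kind of single-step functional update that dictionaries support out of the box (a small sketch; `hash-update` on an immutable hash returns a new hash and leaves the original untouched):

```
(define h (hash 'style "fill:#eeeeff"))
(hash-update h 'style (lambda (s) (string-append s ";stroke:#000")))
;; => '#hash((style . "fill:#eeeeff;stroke:#000")) while h itself is unchanged
```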
### Lenses to the rescue?
![first lensman][12]
When I brought this up in the `#racket` channel on Freenode, I was helpfully pointed to the 3rd-party [Lens][13] library. Lenses are a general-purpose way of composing arbitrarily nested lookups and updates. Unfortunately at this time there's [a flaw][14] preventing them from working with `xml` structs, so it seemed I was out of luck.
But then I was pointed to [X-expressions][15] as an alternative to structs. The [`xml->xexpr`][16] function turns the structs into a deeply-nested list tree with symbols and strings in it. The tag is the first item in the list, followed by an associative list of attributes, then the element's children. While this gives you fewer up-front guarantees about the structure of the data, it does work around the lens issue.
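To make that shape concrete, a stripped-down fragment of the map would come back from `xml->xexpr` looking something like this (a hand-written sketch, not the actual Wikipedia data):

```
;; tag first, then an association list of attributes, then the children;
;; whitespace from the source file shows up as plain strings
'(svg ((width "750") (height "400"))
      "\n  "
      (g ()
         (path ((d "M 0 0 L 10 10")
                (style "fill:#eeeeff")))))
```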
For this to work, we need to compose a new lens based on the "path" we want to use to drill down into the `n`th country and its `style` attribute. The [`lens-compose`][17] function lets us do that. Note that the order here might be backwards from what you'd expect; it works deepest-first (the way [`compose`][18] works for functions). Also note that defining one lens gives us the ability to both get nested values (with [`lens-view`][19]) and update them.
```
(define (style-lens n)
  (lens-compose (dict-ref-lens 'style)
                second-lens
                (list-ref-lens (add1 (* n 2)))
                (list-ref-lens 10)))
```
Our `<path>` XML elements are under the 10th item of the root xexpr (hence the [`list-ref-lens`][20] with 10), and they are interspersed with whitespace, so we have to double `n` to find the `<path>` we want. The [`second-lens`][21] call gets us to that element's attribute alist, and [`dict-ref-lens`][22] lets us zoom in on the `'style` key out of that alist.
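With the lens defined, reads and functional updates use the same object. A sketch of both directions, where `doc-xexpr` stands in for the converted document (note the value is a list, because that's how xexpr attribute alists store it):

```
(define doc-xexpr (xml->xexpr (document-element doc)))

(lens-view (style-lens 0) doc-xexpr)
;; => '("fill:#eeeeff"), the selected country's current style

(lens-set (style-lens 0) doc-xexpr (list "fill:#993322"))
;; => a new xexpr tree with only that one style changed
```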
Once we have our lens, it's just a matter of replacing `set-style` with a call to [`lens-set`][23] in our `update` function we had above, and then we're off:
```
(define (update doc categorizations)
  (for/fold ([d doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (lens-set (style-lens n) d (list (style-for category)))))
```
![second stage lensman][24]
Oftentimes the trade-off between freeform maps/hashes vs. structured data feels like one of convenience vs. long-term maintainability. While it's unfortunate that they can't be used with the `xml` structs, lenses provide a way to get the best of both worlds, at least in some situations.
The final version of the code clocks in at 51 lines and is available [on GitLab][25].
--------------------------------------------------------------------------------
via: https://technomancy.us/185
作者:[Phil Hagelberg][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://technomancy.us/
[1]:https://con.racket-lang.org/
[2]:https://technomancy.us/i/chronicles-of-lensmen.jpg
[3]:https://docs.racket-lang.org/reference/struct-generics.html
[4]:https://docs.racket-lang.org/reference/boxes.html?q=box#%28def._%28%28quote._~23~25kernel%29._box%29%29
[5]:https://commons.wikimedia.org/wiki/File:BlankMap-World_gray.svg
[6]:https://docs.racket-lang.org/reference/port-lib.html#(def._((lib._racket%2Fport..rkt)._call-with-input-string))
[7]:https://docs.racket-lang.org/xml/index.html?q=read-xml#%28def._%28%28lib._xml%2Fmain..rkt%29._read-xml%2Fdocument%29%29
[8]:https://docs.racket-lang.org/xml/#%28def._%28%28lib._xml%2Fmain..rkt%29._document%29%29
[9]:https://docs.racket-lang.org/reference/for.html?q=for%2Ffold#%28form._%28%28lib._racket%2Fprivate%2Fbase..rkt%29._for%2Ffold%29%29
[10]:https://docs.racket-lang.org/pict/Rendering.html?q=draw-pict#%28def._%28%28lib._pict%2Fmain..rkt%29._draw-pict%29%29
[11]:https://docs.racket-lang.org/reference/dicts.html?q=dict-update#%28def._%28%28lib._racket%2Fdict..rkt%29._dict-update%29%29
[12]:https://technomancy.us/i/first-lensman.jpg
[13]:https://docs.racket-lang.org/lens/lens-guide.html
[14]:https://github.com/jackfirth/lens/issues/290
[15]:https://docs.racket-lang.org/pollen/second-tutorial.html?q=xexpr#%28part._.X-expressions%29
[16]:https://docs.racket-lang.org/xml/index.html?q=xexpr#%28def._%28%28lib._xml%2Fmain..rkt%29._xml-~3exexpr%29%29
[17]:https://docs.racket-lang.org/lens/lens-reference.html#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-compose%29%29
[18]:https://docs.racket-lang.org/reference/procedures.html#%28def._%28%28lib._racket%2Fprivate%2Flist..rkt%29._compose%29%29
[19]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-view%29%29
[20]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._list-ref-lens%29%29
[21]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._second-lens%29%29
[22]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Fdict..rkt%29._dict-ref-lens%29%29
[23]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-set%29%29
[24]:https://technomancy.us/i/second-stage-lensman.jpg
[25]:https://gitlab.com/technomancy/world-color/blob/master/world-color.rkt

View File

@ -1,87 +0,0 @@
Security Chaos Engineering: A new paradigm for cybersecurity
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_bank_vault_secure_safe.png?itok=YoW93h7C)
Security is always changing and failure always exists.
This toxic scenario requires a fresh perspective on how we think about operational security. We must understand that we are often the primary cause of our own security flaws. The industry typically looks at cybersecurity and failure in isolation or as separate matters. We believe that our lack of insight and operational intelligence into our own security control failures is one of the most common causes of security incidents and, subsequently, data breaches.
> "Fall seven times, stand up eight." --Japanese proverb
The simple fact is that "to err is human," and humans derive their success as a direct result of the failures they encounter. Their rate of failure, how they fail, and their ability to understand that they failed in the first place are important building blocks to success. Our ability to learn through failure is inherent in the systems we build, the way we operate them, and the security we use to protect them. Yet there has been a lack of focus when it comes to how we approach preventative security measures, and the spotlight has trended toward the evolving attack landscape and the need to buy or build new solutions.
### Security spending is continually rising and so are security incidents
We spend billions on new information security technologies; however, we rarely take a proactive look at whether those security investments perform as expected. This has resulted in a continual increase in security spending on new solutions to keep up with the evolving attacks.
Despite spending more on security, data breaches are continuously getting bigger and more frequent across all industries. We have marched so fast down this path of the "get-ahead-of-the-attacker" strategy that we haven't considered that we may be a primary cause of our own demise. How is it that we are building more and more security measures, but the problem seems to be getting worse? Furthermore, many of the notable data breaches over the past year were not the result of an advanced nation-state or spy-vs.-spy malicious advanced persistent threats (APTs); rather the principal causes of those events were incomplete implementation, misconfiguration, design flaws, and lack of oversight.
The 2017 Ponemon Cost of a Data Breach Study breaks down the [root causes of data breaches][1] into three areas: malicious or criminal attacks, human factors or errors, and system glitches, including both IT and business-process failure. Of the three categories, malicious or criminal attacks comprises the largest distribution (47%), followed by human error (28%), and system glitches (25%). Cybersecurity vendors have historically focused on malicious root causes of data breaches, as it is the largest sole cause, but together human error and system glitches total 53%, a larger share of the overall problem.
What is not often understood, whether due to lack of insight, reporting, or analysis, is that malicious or criminal attacks are often successful due to human error and system glitches. Both human error and system glitches are, at their root, primary markers of the existence of failure. Whether it's IT system failures, failures in process, or failures resulting from humans, it begs the question: "Should we be focusing on finding a method to identify, understand, and address our failures?" After all, it can be an arduous task to predict the next malicious attack, which often requires investment of time to sift threat intelligence, dig through forensic data, or churn threat feeds full of unknown factors and undetermined motives. Failure instrumentation, identification, and remediation are mostly comprised of things that we know, have the ability to test, and can measure.
Failures we can analyze consist not only of IT, business, and general human factors but also the way we design, build, implement, configure, operate, observe, and manage security controls. People are the ones designing, building, monitoring, and managing the security controls we put in place to defend against malicious attackers. How often do we proactively instrument what we designed, built, and are operationally managing to determine if the controls are failing? Most organizations do not discover that their security controls were failing until a security incident results from that failure. The worst time to find out your security investment failed is during a security incident at 3 a.m.
> Security incidents are not detective measures and hope is not a strategy when it comes to operating effective security controls.
We hypothesize that a large portion of data breaches are caused not by sophisticated nation-state actors or hacktivists, but rather simple things rooted in human error and system glitches. Failure in security controls can arise from poor control placement, technical misconfiguration, gaps in coverage, inadequate testing practices, human error, and numerous other things.
### The journey into Security Chaos Testing
Our venture into this new territory of Security Chaos Testing has shifted our thinking about the root cause of many of our notable security incidents and data breaches.
We were brought together by [Bruce Wong][2], who now works at Stitch Fix with Charles, one of the authors of this article. Prior to Stitch Fix, Bruce was a founder of the Chaos Engineering and System Reliability Engineering (SRE) practices at Netflix, the company commonly credited with establishing the field. Bruce learned about this article's other author, Aaron, through the open source [ChaoSlingr][3] Security Chaos Testing tool project, on which Aaron was a contributor. Aaron was interested in Bruce's perspective on the idea of applying Chaos Engineering to cybersecurity, which led Bruce to connect us to share what we had been working on. As security practitioners, we were both intrigued by the idea of Chaos Engineering and had each begun thinking about how this new method of instrumentation might have a role in cybersecurity.
Within a short timeframe, we began finishing each other's thoughts around testing and validating security capabilities, which we collectively call "Security Chaos Engineering." We directly challenged many of the concepts we had come to depend on in our careers, such as compensating security controls, defense-in-depth, and how to design preventative security. Quickly we realized that we needed to challenge the status quo "set-it-and-forget-it" model and instead execute on continuous instrumentation and validation of security capabilities.
Businesses often don't fully understand whether their security capabilities and controls are operating as expected until they are not. We had both struggled throughout our careers to provide measurements on security controls that go beyond simple uptime metrics. Our journey has shown us there is a need for a more pragmatic approach that emphasizes proactive instrumentation and experimentation over blind faith.
### Defining new terms
In the security industry, we have a habit of not explaining terms and assuming we are speaking the same language. To correct that, here are a few key terms in this new approach:
* **(Security) Chaos Experiments** are foundationally rooted in the scientific method, in that they seek not to validate what is already known to be true or false; rather, they are focused on deriving new insights about the current state.
* **Security Chaos Engineering** is the discipline of instrumentation, identification, and remediation of failure within security controls through proactive experimentation to build confidence in the system's ability to defend against malicious conditions in production.
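To make that definition a bit more concrete, here is a minimal sketch of the loop a security chaos experiment runs through, written in Racket purely for illustration. Everything named here is hypothetical scaffolding: `inject!`, `detected?`, and `cleanup!` stand in for whatever introduces a known failure condition (say, an overly permissive firewall rule in a test account), queries your monitoring, and reverts the change.

```
;; Hypothesis: "if this control fails, our monitoring will detect it."
;; inject!, detected?, and cleanup! are hypothetical hooks supplied by the caller.
(define (run-security-experiment name inject! detected? cleanup!)
  (printf "running experiment: ~a\n" name)
  (inject!)                        ; introduce the known failure condition
  (sleep 30)                       ; give the detection pipeline time to fire
  (define outcome (if (detected?) 'detected 'missed))
  (cleanup!)                       ; always revert the injected condition
  (printf "result: ~a\n" outcome)
  outcome)
```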
### Security and distributed systems
Consider the evolving nature of modern application design where systems are becoming more and more distributed, ephemeral, and immutable in how they operate. In this shifting paradigm, it is becoming difficult to comprehend the operational state and health of our systems' security. Moreover, how are we ensuring that it remains effective and vigilant as the surrounding environment is changing its parameters, components, and methodologies?
What does it mean to be effective in terms of security controls? After all, a single security capability could easily be implemented in a wide variety of diverse scenarios in which failure may arise from many possible sources. For example, a standard firewall technology may be implemented, placed, managed, and configured differently depending on complexities in the business, web, and data logic.
It is imperative that we not operate our business products and services on the assumption that something works. We must constantly, consistently, and proactively instrument our security controls to ensure they cut the mustard when it matters. This is why Security Chaos Testing is so important. Security Chaos Engineering provides a methodology for experimenting on the security of distributed systems in order to build confidence in their ability to withstand malicious conditions.
In Security Chaos Engineering:
* Security capabilities must be end-to-end instrumented.
* Security must be continuously instrumented to build confidence in the system's ability to withstand malicious conditions.
* Readiness of a system's security defenses must be proactively assessed to ensure they are battle-ready and operating as intended.
* The security capability toolchain must be instrumented from end to end to drive new insights into not only the effectiveness of the functionality within the toolchain but also to discover where added value and improvement can be injected.
* Practiced instrumentation seeks to identify, detect, and remediate failures in security controls.
* The focus is on vulnerability and failure identification, not failure management.
* The operational effectiveness of incident management is sharpened.
As Henry Ford said, "Failure is only the opportunity to begin again, this time more intelligently." Security Chaos Engineering and Security Chaos Testing give us that opportunity.
Would you like to learn more? Join the discussion by following [@aaronrinehart][4] and [@charles_nwatu][5] on Twitter.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/new-paradigm-cybersecurity
作者:[Aaron Rinehart][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/aaronrinehart
[1]:https://www.ibm.com/security/data-breach
[2]:https://twitter.com/bruce_m_wong?lang=en
[3]:https://github.com/Optum/ChaoSlingr
[4]:https://twitter.com/aaronrinehart
[5]:https://twitter.com/charles_nwatu

View File

@ -1,395 +0,0 @@
How to write a really great resume that actually gets you hired
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*k7HRLZAsuINP9vIs2BIh1g.png)
This is a data-driven guide to writing a resume that actually gets you hired. I've spent the past four years analyzing which resume advice works regardless of experience, role, or industry. The tactics laid out below are the result of what I've learned. They helped me land offers at Google, Microsoft, and Twitter and have helped my students systematically land jobs at Amazon, Apple, Google, Microsoft, Facebook, and more.
### Writing Resumes Sucks.
It's a vicious cycle.
We start by sifting through dozens of articles by career “gurus,” forced to compare conflicting advice and make our own decisions on what to follow.
The first article says “one page MAX” while the second says “take two or three and include all of your experience.”
The next says “write a quick summary highlighting your personality and experience” while another says “summaries are a waste of space.”
You scrape together your best effort and hit "Submit," sending your resume into the ether. When you don't hear back, you wonder what went wrong:
_"Was it the single page or the lack of a summary? Honestly, who gives a s**t at this point. I'm sick of sending out 10 resumes every day and hearing nothing but crickets."_
![](https://cdn-images-1.medium.com/max/1000/1*_zQqAjBhB1R4fz55InrrIw.jpeg)
How it feels to try and get your resume read in today's world.
Writing resumes sucks, but it's not your fault.
The real reason it's so tough to write a resume is that most of the advice out there hasn't been proven against the actual end goal of getting a job. If you don't know what consistently works, you can't lay out a system to get there.
It's easy to say "one page works best" when you've seen it happen a few times. But how does it hold up when we look at 100 resumes across different industries, experience levels, and job titles?
That's what this article aims to answer.
Over the past four years, I've personally applied to hundreds of companies and coached hundreds of people through the job search process. This has given me a huge opportunity to measure, analyze, and test the effectiveness of different resume strategies at scale.
This article is going to walk through everything I've learned about resumes over the past four years, including:
* Mistakes that more than 95% of people make, causing their resumes to get tossed immediately
* Three things that consistently appear in the resumes of highly effective job searchers (who go on to land jobs at the world's best companies)
* A quick hack that will help you stand out from the competition and instantly build relationships with whoever is reading your resume (increasing your chances of hearing back and getting hired)
* The exact resume template that got me interviews and offers at Google, Microsoft, Twitter, Uber, and more
Before we get to the unconventional strategies that will help set you apart, we need to make sure our foundational bases are covered. That starts with understanding the mistakes most job seekers make so we can make our resume bulletproof.
### Resume Mistakes That 95% Of People Make
Most resumes that come through an online portal or across a recruiter's desk are tossed out because they violate a simple rule.
When recruiters scan a resume, the first thing they look for is mistakes. Your resume could be fantastic, but if you violate a rule like using an unprofessional email address or improper grammar, it's going to get tossed out.
Our goal is to fully understand the triggers that cause recruiters/ATS systems to make the snap decisions on who stays and who goes.
In order to get inside the heads of these decision makers, I collected data from dozens of recruiters and hiring managers across industries. These people have several hundred years of hiring experience under their belts and they've reviewed 100,000+ resumes across industries.
They broke down the five most common mistakes that cause them to cut resumes from the pile:
![](https://cdn-images-1.medium.com/max/1000/1*5Zbr3HFeKSjvPGZdq_LCKA.png)
### The Five Most Common Resume Mistakes (According To Recruiters & Hiring Managers)
Issue #1: Sloppiness (typos, spelling errors, & grammatical mistakes). Close to 60% of resumes have some sort of typo or grammatical issue.
Solution: Have your resume reviewed by three separate sources--spell checking software, a friend, and a professional. Spell check should be covered if you're using Microsoft Word or Google Docs to create your resume.
A friend or family member can cover the second base, but make sure you trust them with reviewing the whole thing. You can always include an obvious mistake to see if they catch it.
Finally, you can hire a professional editor on [Upwork][1]. It shouldn't take them more than 15-20 minutes to review, so it's worth paying a bit more for someone with high ratings and lots of hours logged.
Issue #2: Summaries are too long and formal. Many resumes include summaries that consist of paragraphs explaining why they are a "driven, results oriented team player." When hiring managers see a block of text at the top of the resume, you can bet they aren't going to read the whole thing. If they do give it a shot and read something similar to the sentence above, they're going to give up on the spot.
Solution: Summaries are highly effective, but they should be in bullet form and showcase your most relevant experience for the role. For example, if I'm applying for a new business sales role, my first bullet might read "Responsible for driving $11M of new business in 2018, achieved 168% attainment (#1 on my team)."
Issue #3: Too many buzzwords. Remember our driven team player from the last paragraph? Phrasing like that makes hiring managers cringe because your attempt to stand out actually makes you sound like everyone else.
Solution: Instead of using buzzwords, write naturally, use bullets, and include quantitative results whenever possible. Would you rather hire a salesperson who "is responsible for driving new business across the healthcare vertical to help companies achieve their goals" or one who "drove $15M of new business last quarter, including the largest deal in company history"? Skip the buzzwords and focus on results.
Issue #4: Having a resume that is more than one page. The average employer spends six seconds reviewing your resume--if it's more than one page, it probably isn't going to be read. When asked, recruiters from Google and Barclays both said multiple-page resumes "are the bane of their existence."
Solution: Increase your margins, decrease your font, and cut down your experience to highlight the most relevant pieces for the role. It may seem impossible, but it's worth the effort. When you're dealing with recruiters who see hundreds of resumes every day, you want to make their lives as easy as possible.
### More Common Mistakes & Facts (Backed By Industry Research)
In addition to personal feedback, I combed through dozens of recruitment survey results to fill any gaps my contacts might have missed. Here are a few more items you may want to consider when writing your resume:
* The average interviewer spends 6 seconds scanning your resume
* The majority of interviewers have not looked at your resume until you walk into the room
* 76% of resumes are discarded for an unprofessional email address
* Resumes with a photo have an 88% rejection rate
* 58% of resumes have typos
* Applicant tracking software typically eliminates 75% of resumes due to a lack of keywords and phrases being present
Now that you know every mistake you need to avoid, the first item on your to-do list is to comb through your current resume and make sure it doesn't violate anything mentioned above.
Once you have a clean resume, you can start to focus on more advanced tactics that will really make you stand out. There are a few unique elements you can use to push your application over the edge and finally get your dream company to notice you.
![](https://cdn-images-1.medium.com/max/1000/1*KthhefFO33-8tm0kBEPbig.jpeg)
### The 3 Elements Of A Resume That Will Get You Hired
My analysis showed that highly effective resumes typically include three specific elements: quantitative results, a simple design, and a quirky interests section. This section breaks down all three elements and shows you how to maximize their impact.
### Quantitative Results
Most resumes lack them.
Which is a shame because my data shows that they make the biggest difference between resumes that land interviews and resumes that end up in the trash.
Here's an example from a recent resume that was emailed to me:
> Experience
> + Identified gaps in policies and processes and made recommendations for solutions at the department and institution level
> + Streamlined processes to increase efficiency and enhance quality
> + Directly supervised three managers and indirectly managed up to 15 staff on multiple projects
> + Oversaw execution of in-house advertising strategy
> + Implemented comprehensive social media plan
As an employer, that tells me absolutely nothing about what to expect if I hire this person.
They executed an in-house marketing strategy. Did it work? How did they measure it? What was the ROI?
They also identified gaps in processes and recommended solutions. What was the result? Did they save time and operating expenses? Did it streamline a process, resulting in more output?
Finally, they managed a team of three supervisors and 15 staffers. How did that team do? Was it better than the other teams at the company? What results did they get, and how did those improve under this person's management?
See what I'm getting at here?
These types of bullets talk about daily activities, but companies don't care about what you do every day. They care about results. By including measurable metrics and achievements in your resume, you're showcasing the value that the employer can expect to get if they hire you.
Let's take a look at revised versions of those same bullets:
> Experience
> + Managed a team of 20 that consistently outperformed other departments in lead generation, deal size, and overall satisfaction (based on our culture survey)
> + Executed in-house marketing strategy that resulted in a 15% increase in monthly leads along with a 5% drop in the cost per lead
> + Implemented targeted social media campaign across Instagram & Pinterest, which drove an additional 50,000 monthly website visits and generated 750 qualified leads in 3 months
If you were in the hiring manager's shoes, which resume would you choose?
That's the power of including quantitative results.
### Simple, Aesthetic Design That Hooks The Reader
These days, it's easy to get carried away with our mission to "stand out." I've seen resume overhauls from graphic designers, video resumes, and even resumes [hidden in a box of donuts.][2]
While those can work in very specific situations, we want to aim for a strategy that consistently gets results. The format I saw the most success with was a black and white Word template with sections in this order:
* Summary
* Interests
* Experience
* Education
* Volunteer Work (if you have it)
This template is effective because its familiar and easy for the reader to digest.
As I mentioned earlier, hiring managers scan resumes for an average of 6 seconds. If your resume is in an unfamiliar format, those 6 seconds won't be very comfortable for the hiring manager. Our brains prefer things we can easily recognize. You want to make sure that a hiring manager can actually catch a glimpse of who you are during their quick scan of your resume.
If we're not relying on design, this hook needs to come from the _Summary_ section at the top of your resume.
This section should be done in bullets (not paragraph form) and it should contain 3-4 highlights of the most relevant experience you have for the role. For example, if I was applying for a New Business Sales position, my summary could look like this:
> Summary
> Drove quarterly average of $11M in new business with a quota attainment of 128% (#1 on my team)
> Received award for largest sales deal of the year
> Developed and trained sales team on new lead generation process that increased total leads by 17% in 3 months, resulting in 4 new deals worth $7M
Those bullets speak directly to the value I can add to the company if I was hired for the role.
### An "Interests" Section That's Quirky, Unique, & Relatable
This is a little "hack" you can use to instantly build personal connections and positive associations with whoever is reading your resume.
Most resumes have a skills/interests section, but it's usually parked at the bottom and offers little to no value. It's time to change things up.
[Research shows][3] that people rely on emotions, not information, to make decisions. Big brands use this principle all the time--emotional responses to advertisements are more influential on a person's intent to buy than the content of an ad.
You probably remember Apple's famous "Get A Mac" campaign:
When it came to specs and performance, Macs didn't blow every single PC out of the water. But these ads solidified who was "cool" and who wasn't, which was worth a few extra bucks to a few million people.
By tugging at our need to feel "cool," Apple's campaign led to a [42% increase in market share][4] and a record sales year for Macbooks.
Now we're going to take that same tactic and apply it to your resume.
If you can invoke an emotional response from your recruiter, you can influence the mental association they assign to you. This gives you a major competitive advantage.
Let's start with a question--what could you talk about for hours?
It could be cryptocurrency, cooking, World War 2, World of Warcraft, or how Google's bet on segmenting their company under Alphabet is going to impact the technology sector over the next 5 years.
Did a topic (or two) pop into your head? Great.
Now think about what it would be like to have a conversation with someone who was just as passionate and knew just as much as you did on the topic. It'd be pretty awesome, right? _Finally,_ someone who gets it!
That's exactly the kind of emotional response we're aiming to get from a hiring manager.
There are five “neutral” topics out there that people enjoy talking about:
1. Food/Drink
2. Sports
3. College
4. Hobbies
5. Geography (travel, where people are from, etc.)
These topics are present in plenty of interest sections but we want to take them one step further.
Let's say you had the best night of your life at the Full Moon Party in Thailand. Which of the following two options would you be more excited to read:
* Traveling
* Ko Pha Ngan beaches (where the full moon party is held)
Or, let's say that you went to Duke (an ACC school) and still follow their basketball team. Which would you be more pumped about:
* College Sports
* ACC Basketball (Go Blue Devils!)
In both cases, the second answer would probably invoke a larger emotional response because it is tied directly to your experience.
I want you to think about your interests that fit into the five categories I mentioned above.
Now I want you to write a specific favorite associated with each category in parentheses next to your original list. For example, if you wrote travel you can add (ask me about the time I was chased by an elephant in India) or (specifically meditation in a Tibetan monastery).
Here is the [exact set of interests][5] I used on my resume when I interviewed at Google, Microsoft, and Twitter:
_ABC Kitchen's Atmosphere, Stumptown Coffee (primarily cold brew), Michael Lewis (Liar's Poker), Fishing (especially fly), Foods That Are Vehicles For Hot Sauce, ACC Sports (Go Deacs!) & The New York Giants_
![](https://cdn-images-1.medium.com/max/1000/1*ONxtGr_xUYmz4_Xe66aeng.jpeg)
If you want to cheat here, my experience shows that anything about hot sauce is an instant conversation starter.
### The Proven Plug & Play Resume Template
Now that we have our strategies down, it's time to apply these tactics to a real resume. Our goal is to write something that increases your chances of hearing back from companies, enhances your relationships with hiring managers, and ultimately helps you score the job offer.
The example below is the exact resume that I used to land interviews and offers at Microsoft, Google, and Twitter. I was targeting roles in Account Management and Sales, so this sample is tailored towards those positions. We'll break down each section below:
![](https://cdn-images-1.medium.com/max/1000/1*B2RQ89ue2dGymRdwMY2lBA.png)
First, I want you to notice how clean this is. Each section is clearly labeled and separated and flows nicely from top to bottom.
My summary speaks directly to the value I've created in the past around company culture and its bottom line:
* I consistently exceeded expectations
* I started my own business in the space (and saw real results)
* I'm a team player who prioritizes culture
I purposefully include my Interests section right below my Summary. If my hiring manager's six-second scan focused on the summary, I know they'll be interested. Those bullets cover all the subconscious criteria for qualification in sales. They're going to be curious to read more in my Experience section.
By sandwiching my Interests in the middle, I'm upping their visibility and increasing the chance of creating that personal connection.
You never know--the person reading my resume may also be a hot sauce connoisseur, and I don't want that to be overlooked because my interests were sitting at the bottom.
Next, my Experience section aims to flesh out the points made in my Summary. I mentioned exceeding my quota up top, so I included two specific initiatives that led to that attainment, including measurable results:
* A partnership leveraging display advertising to drive users to a gamified experience. The campaign resulted in over 3000 acquisitions and laid the groundwork for the 2nd largest deal in company history.
* A partnership with a top tier agency aimed at increasing conversions for a client by improving user experience and upgrading tracking during a company-wide website overhaul (the client has ~20 brand sites). Our efforts over 6 months resulted in a contract extension worth 316% more than their original deal.
Finally, I included my education at the very bottom, starting with the most relevant coursework.
### Download My Resume Templates For Free
You can download a copy of the resume sample above as well as a plug and play template here:
Austins Resume: [Click To Download][6]
Plug & Play Resume Template: [Click To Download][7]
### Bonus Tip: An Unconventional Resume “Hack” To Help You Beat Applicant Tracking Software
If you're not already familiar, Applicant Tracking Systems are pieces of software that companies use to help “automate” the hiring process.
After you hit submit on your online application, the ATS software scans your resume looking for specific keywords and phrases (if you want more details, [this article][8] does a good job of explaining ATS).
If the language in your resume matches up, the software sees it as a good fit for the role and will pass it on to the recruiter. However, if you're highly qualified for the role but don't use the right wording, your resume can end up sitting in a black hole.
I'm going to teach you a little hack to help improve your chances of beating the system and getting your resume in the hands of a human:
Step 1: Highlight and select the entire job description page and copy it to your clipboard.
Step 2: Head over to [WordClouds.com][9] and click on the “Word List” button at the top. Towards the top of the pop-up box, you should see a link for Paste/Type Text. Go ahead and click that.
Step 3: Now paste the entire job description into the box, then hit “Apply.”
WordClouds is going to spit out an image that showcases every word in the job description. The larger words are the ones that appear most frequently (and the ones you want to make sure to include when writing your resume). Here's an example for a data science role:
![](https://cdn-images-1.medium.com/max/1000/1*O7VO1C9nhC9LZct7vexTbA.png)
You can also get a quantitative view by clicking “Word List” again after creating your cloud. That will show you the number of times each word appeared in the job description:
9 data
6 models
4 experience
4 learning
3 Experience
3 develop
3 team
2 Qualifications
2 statistics
2 techniques
2 libraries
2 preferred
2 research
2 business
When writing your resume, your goal is to include those words in the same proportions as the job description.
It's not a guaranteed way to beat the online application process, but it will definitely help improve your chances of getting your foot in the door!
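If you'd rather script this step than use the WordClouds site, a few lines of Python produce a similar word list. This is only a rough sketch: the stopword list and the `job_description.txt` file name are placeholders, and unlike the WordClouds output above it lowercases everything, so "Experience" and "experience" are counted together:

```python
import re
from collections import Counter

# A tiny stopword list; a real run would want a longer one.
STOPWORDS = {"the", "and", "a", "an", "to", "of", "in", "with",
             "for", "or", "on", "is", "are", "you", "our"}

def keyword_counts(text, top_n=15):
    """Count the most frequent words in a job description,
    roughly what the WordClouds "Word List" view shows."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

if __name__ == "__main__":
    # "job_description.txt" is a placeholder for the pasted posting.
    with open("job_description.txt") as f:
        for word, count in keyword_counts(f.read()):
            print(count, word)
```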
* * *
### Want The Inside Info On Landing A Dream Job Without Connections, Without “Experience,” & Without Applying Online?
[Click here to get the 5 free strategies that my students have used to land jobs at Google, Microsoft, Amazon, and more without applying online.][10]
_Originally published at _ [_cultivatedculture.com_][11] _._
--------------------------------------------------------------------------------
作者简介:
I help people land jobs they love and salaries they deserve at CultivatedCulture.com
----------
via: https://medium.freecodecamp.org/how-to-write-a-really-great-resume-that-actually-gets-you-hired-e18533cd8d17
作者:[Austin Belcak ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@austin.belcak
[1]:http://www.upwork.com/
[2]:https://www.thrillist.com/news/nation/this-guy-hides-his-resume-in-boxes-of-donuts-to-score-job-interviews
[3]:https://www.psychologytoday.com/blog/inside-the-consumer-mind/201302/how-emotions-influence-what-we-buy
[4]:https://www.businesswire.com/news/home/20070608005253/en/Apple-Mac-Named-Successful-Marketing-Campaign-2007
[5]:http://cultivatedculture.com/resume-skills-section/
[6]:https://drive.google.com/file/d/182gN6Kt1kBCo1LgMjtsGHOQW2lzATpZr/view?usp=sharing
[7]:https://drive.google.com/open?id=0B3WIcEDrxeYYdXFPVlcyQlJIbWc
[8]:https://www.jobscan.co/blog/8-things-you-need-to-know-about-applicant-tracking-systems/
[9]:https://www.wordclouds.com/
[10]:https://cultivatedculture.com/dreamjob/
[11]:https://cultivatedculture.com/write-a-resume/

UQDS: A software-development process that puts quality first
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag)
The Ultimate Quality Development System (UQDS) is a software development process that provides clear guidelines for how to use branches, tickets, and code reviews. It was invented more than a decade ago by Divmod and adopted by [Twisted][1], an event-driven framework for Python that underlies popular commercial platforms like HipChat as well as open source projects like Scrapy (a web scraper).
Divmod, sadly, is no longer around—it has gone the way of many startups. Luckily, since many of its products were open source, its legacy lives on.
When Twisted was a young project, there was no clear process for when code was "good enough" to go in. As a result, while some parts were highly polished and reliable, others were alpha-quality software—with no way to tell which was which. UQDS was designed as a process to help an existing project with definite quality challenges ramp up its quality while continuing to add features and become more useful.
UQDS has helped the Twisted project evolve from having frequent regressions and needing multiple release candidates to get a working version, to achieving its current reputation of stability and reliability.
### UQDS's building blocks
UQDS was invented by Divmod back in 2006. At that time, Continuous Integration (CI) was in its infancy and modern version control systems, which allow easy branch merging, were barely proofs of concept. Although Divmod did not have today's modern tooling, it combined CI, some ad-hoc tooling to make [Subversion branches][2] work, and a lot of thought into a working process. Thus the UQDS methodology was born.
UQDS is based upon fundamental building blocks, each with their own carefully considered best practices:
1. Tickets
2. Branches
3. Tests
4. Reviews
5. No exceptions
Let's go into each of those in a little more detail.
#### Tickets
In a project using the UQDS methodology, no change is allowed to happen if it's not accompanied by a ticket. This creates a written record of what change is needed and—more importantly—why.
* Tickets should define clear, measurable goals.
* Work on a ticket does not begin until the ticket contains goals that are clearly defined.
#### Branches
Branches in UQDS are tightly coupled with tickets. Each branch must solve one complete ticket, no more and no less. If a branch addresses either more or less than a single ticket, it means there was a problem with the ticket definition—or with the branch. Tickets might be split or merged, or a branch split and merged, until congruence is achieved.
Enforcing that each branch addresses neither more nor less than a single ticket—which corresponds to one logical, measurable change—allows a project using UQDS to have fine-grained control over the commits: a single change can be reverted, or changes may even be applied in a different order than they were committed. This helps the project maintain a stable and clean codebase.
#### Tests
UQDS relies upon automated testing of all sorts, including unit, integration, regression, and static tests. In order for this to work, all relevant tests must pass at all times. Tests that don't pass must either be fixed or, if no longer relevant, be removed entirely.
Tests are also coupled with tickets. All new work must include tests that demonstrate that the ticket goals are fully met. Without this, the work won't be merged no matter how good it may seem to be.
A side effect of the focus on tests is that the only platforms that a UQDS-using project can say it supports are those on which the tests run with a CI framework—and where passing the test on the platform is a condition for merging a branch. Without this restriction on supported platforms, the quality of the project is not Ultimate.
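As an illustration of the ticket-test coupling described above, here is a minimal sketch using Python's standard `unittest` module. The ticket number, the helper function, and its goals are all invented for the example; they are not taken from Twisted's tracker:

```python
import unittest

def parse_port(value):
    """Hypothetical helper under test. The (invented) ticket's goal:
    reject port numbers outside 1-65535 with ValueError."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError("port out of range: %d" % port)
    return port

class ParsePortTests(unittest.TestCase):
    """Tests for hypothetical ticket #1234: parse_port() must accept
    valid ports and reject out-of-range values, per the ticket goals."""

    def test_valid_port_accepted(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range_port_rejected(self):
        self.assertRaises(ValueError, parse_port, "70000")

if __name__ == "__main__":
    unittest.main()
```

With tests named after the ticket's stated goals, a reviewer can check the branch against the ticket mechanically: if a goal has no corresponding test, the branch is not done.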
#### Reviews
While automated tests are important to the quality ensured by UQDS, the methodology never loses sight of the human factor. Every branch commit requires code review, and each review must follow very strict rules:
1. Each commit must be reviewed by a different person than the author.
2. Start with a comment thanking the contributor for their work.
3. Make a note of something that the contributor did especially well (e.g., "that's the perfect name for that variable!").
4. Make a note of something that could be done better (e.g., "this line could use a comment explaining the choices.").
5. Finish with directions for an explicit next step, typically either merge as-is, fix and merge, or fix and submit for re-review.
These rules respect the time and effort of the contributor while also increasing the sharing of knowledge and ideas. The explicit next step allows the contributor to have a clear idea on how to make progress.
#### No exceptions
In any process, it's easy to come up with reasons why you might need to flex the rules just a little bit to let this thing or that thing slide through the system. The most important fundamental building block of UQDS is that there are no exceptions. The entire community works together to make sure that the rules do not flex, not for any reason whatsoever.
Knowing that all code has been approved by a different person than the author, that the code has complete test coverage, that each branch corresponds to a single ticket, and that this ticket is well considered and complete brings peace of mind that is too valuable to risk losing, even for a single small exception. The goal is quality, and quality does not come from compromise.
### A downside to UQDS
While UQDS has helped Twisted become a highly stable and reliable project, this reliability hasn't come without cost. We quickly found that the review requirements caused a slowdown and backlog of commits to review, leading to slower development. The answer to this wasn't to compromise on quality by getting rid of UQDS; it was to refocus the community priorities such that reviewing commits became one of the most important ways to contribute to the project.
To help with this, the community developed a bot in the [Twisted IRC channel][3] that will reply to the command `review tickets` with a list of tickets that still need review. The [Twisted review queue][4] website returns a prioritized list of tickets for review. Finally, the entire community keeps close tabs on the number of tickets that need review. It's become an important metric the community uses to gauge the health of the project.
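For a sense of how small such a bot can be, here is a stripped-down sketch using Twisted's own IRC support. This is not the actual community bot: the nickname, channel name, and hard-coded ticket list are stand-ins, and a real version would query the project's ticket tracker instead:

```python
from twisted.words.protocols import irc
from twisted.internet import protocol, reactor

class ReviewBot(irc.IRCClient):
    nickname = "reviewbot"  # illustrative nickname

    def signedOn(self):
        self.join("#twisted-dev")  # channel name is a stand-in

    def privmsg(self, user, channel, message):
        # Answer the "review tickets" command with tickets needing review.
        if message.strip() == "review tickets":
            # The real bot would query the ticket tracker here;
            # this hard-coded list is only a placeholder.
            tickets = ["#9001: fix resolver timeout", "#9002: doc cleanup"]
            self.msg(channel, "Needs review: " + "; ".join(tickets))

class ReviewBotFactory(protocol.ClientFactory):
    protocol = ReviewBot

if __name__ == "__main__":
    reactor.connectTCP("irc.freenode.net", 6667, ReviewBotFactory())
    reactor.run()
```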
### Learn more
The best way to learn about UQDS is to [join the Twisted Community][5] and see it in action. If you'd like more information about the methodology and how it might help your project reach a high level of reliability and stability, have a look at the [UQDS documentation][6] in the Twisted wiki.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/uqds
作者:[Moshe Zadka][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/moshez
[1]:https://twistedmatrix.com/trac/
[2]:http://structure.usc.edu/svn/svn.branchmerge.html
[3]:http://webchat.freenode.net/?channels=%23twisted
[4]:https://twisted.reviews
[5]:https://twistedmatrix.com/trac/wiki/TwistedCommunity
[6]:https://twistedmatrix.com/trac/wiki/UltimateQualityDevelopmentSystem

Why Mainframes Aren't Going Away Any Time Soon
======
![](http://www.datacenterknowledge.com/sites/datacenterknowledge.com/files/styles/article_featured_standard/public/ibm%20z13%20mainframe%202015%20getty.jpg?itok=uB8agshi)
IBM's last earnings report showed the [first uptick in revenue in more than five years.][1] Some of that growth was from an expected source, cloud revenue, which was up 24 percent year over year and now accounts for 21 percent of Big Blue's take. Another major boost, however, came from a spike in mainframe revenue. Z series mainframe sales were up 70 percent, the company said.
This may sound somewhat akin to a return to vacuum tube technology in a world where transistors are yesterday's news. In actuality, this is only a sign of the changing face of IT.
Modern mainframes definitely aren't your father's punch card-driven machines that filled entire rooms. These days, they most often run Linux and have found a renewed place in the data center, where they're being called upon to do a lot of heavy lifting. Want to know where the largest instance of Oracle's database runs? It's on a Linux mainframe. How about the largest implementation of SAP on the planet? Again, Linux on a mainframe.
"Before the advent of Linux on the mainframe, the people who bought mainframes primarily were people who already had them," Leonard Santalucia explained to Data Center Knowledge several months back at the All Things Open conference. "They would just wait for the new version to come out and upgrade to it, because it would run cheaper and faster.
"When Linux came out, it opened up the door to other customers that never would have paid attention to the mainframe. In fact, probably a good three to four hundred new clients that never had mainframes before got them. They don't have any old mainframes hanging around or ones that were upgraded. These are net new mainframes."
Although Santalucia is CTO at Vicom Infinity, primarily an IBM reseller, at the conference he was wearing his hat as chairperson of the Linux Foundation's Open Mainframe Project. He was joined in the conversation by John Mertic, the project's director of program management.
Santalucia knows IBM's mainframes from top to bottom, having spent 27 years at Big Blue, the last eight as CTO for the company's systems and technology group.
"Because of Linux getting started with it back in 1999, it opened up a lot of doors that were closed to the mainframe," he said. "Beforehand it was just z/OS, z/VM, z/VSE, z/TPF, the traditional operating systems. When Linux came along, it got the mainframe into other areas that it never was, or even thought to be in, because of how open it is, and because Linux on the mainframe is no different than Linux on any other platform."
The focus on Linux isn't the only motivator behind the upsurge in mainframe use in data centers. Increasingly, enterprises with heavy IT needs are finding many advantages to incorporating modern mainframes into their plans. For example, mainframes can greatly reduce power, cooling, and floor space costs. In markets like New York City, where real estate is at a premium, electricity rates are high, and electricity use is highly taxed to reduce demand, these are significant advantages.
"There was one customer where we were able to do a consolidation of 25 x86 cores to one core on a mainframe," Santalucia said. "They have several thousand machines that are ten and twenty cores each. So, as far as the eye could see in this data center, [x86 server workloads] could be picked up and moved onto this box that is about the size of a sub-zero refrigerator in your kitchen."
In addition to saving on physical data center resources, this customer by design would likely see better performance.
"When you look at the workload as it's running on an x86 system, the math, the application code, the I/O to manage the disk, and whatever else is attached to that system, is all run through the same chip," he explained. "On a Z, there are multiple chip architectures built into the system. There's one specifically just for the application code. If it senses the application needs an I/O or some mathematics, it sends it off to a separate processor to do math or I/O, all dynamically handled by the underlying firmware. Your Linux environment doesn't have to understand that. When it's running on a mainframe, it knows it's running on a mainframe and it will exploit that architecture."
The operating system knows it's running on a mainframe because, when IBM was readying its mainframe for Linux, it open-sourced something like 75,000 lines of code for Linux distributions to use to make sure their OSes were ready for IBM Z.
"A lot of times people will hear there's 170 processors on the Z14," Santalucia said. "Well, there's actually another 400 other processors that nobody counts in that count of application chips, because it is taken for granted."
Mainframes are also resilient when it comes to disaster recovery. Santalucia told the story of an insurance company located in lower Manhattan, within sight of the East River. The company operated a large data center in a basement that among other things housed a mainframe backed up to another mainframe located in Upstate New York. When Hurricane Sandy hit in 2012, the data center flooded, electrocuting two employees and destroying all of the servers, including the mainframe. But the mainframe's workload was restored within 24 hours from the remote backup.
The x86 machines were all destroyed, and the data was never recovered. But why weren't they also backed up?
"The reason they didn't do this disaster recovery the same way they did with the mainframe was because it was too expensive to have a mirror of all those distributed servers someplace else," he explained. "With the mainframe, you can have another mainframe as an insurance policy that's lower in price, called Capacity BackUp, and it just sits there idling until something like this happens."
Mainframes are also evidently tough as nails. Santalucia told another story in which a data center in Japan was struck by an earthquake strong enough to destroy all of its x86 machines. The center's one mainframe fell on its side but continued to work.
The mainframe also comes with built-in redundancy to guard against situations that would be disastrous with x86 machines.
"What if a hard disk fails on a node in x86?" the Open Mainframe Project's Mertic asked. "You're taking down a chunk of that cluster potentially. With a mainframe you're not. A mainframe just keeps on kicking like nothing's ever happened."
Mertic added that a motherboard can be pulled from a running mainframe, and again, "the thing keeps on running like nothing's ever happened."
So how do you figure out if a mainframe is right for your organization? Simple, says Santalucia. Do the math.
"The approach should be to look at it from a business, technical, and financial perspective, not just a financial, total-cost-of-acquisition perspective," he said, pointing out that often, costs associated with software, migration, networking, and people are not considered. The break-even point, he said, comes when at least 20 to 30 servers are being migrated to a mainframe. After that point the mainframe has a financial advantage.
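That back-of-the-envelope break-even can be sketched in a few lines of Python. Every figure below is a made-up placeholder, not real IBM or x86 pricing; the point is only the shape of the comparison, with all-in costs (hardware, software, power, cooling, people) on both sides:

```python
import math

def break_even_servers(mainframe_total_cost, per_server_total_cost):
    """Smallest number of x86 servers at which the consolidated
    mainframe becomes the cheaper option. Both inputs should bundle
    hardware, software, power, and staffing, per Santalucia's advice
    to look beyond acquisition cost. All values are hypothetical."""
    return math.ceil(mainframe_total_cost / per_server_total_cost)

# Purely illustrative numbers: if a mainframe's all-in cost equals
# roughly 25 fully loaded x86 servers, the break-even lands in the
# 20-to-30-server range quoted in the article.
print(break_even_servers(1_000_000, 40_000))  # -> 25
```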
"You can get a few people running the mainframe and managing hundreds or thousands of virtual servers," he added. "If you tried to do the same thing on other platforms, you'd find that you need significantly more resources to maintain an environment like that. Seven people at ADP handle the 8,000 virtual servers they have, and they need seven only in case somebody gets sick.
"If you had eight thousand servers on x86, even if they're virtualized, do you think you could get away with seven?"
--------------------------------------------------------------------------------
via: http://www.datacenterknowledge.com/hardware/why-mainframes-arent-going-away-any-time-soon
作者:[Christine Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.datacenterknowledge.com/archives/author/christine-hall
[1]:http://www.datacenterknowledge.com/ibm/mainframe-sales-fuel-growth-ibm

Arch Anywhere Is Dead, Long Live Anarchy Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_main.jpg?itok=fyBpTjQW)
Arch Anywhere was a distribution aimed at bringing Arch Linux to the masses. Due to a trademark infringement, Arch Anywhere has been completely rebranded to [Anarchy Linux][1]. And I'm here to say, if you're looking for a distribution that will enable you to enjoy Arch Linux, a little Anarchy will go a very long way. This distribution is seriously impressive in what it sets out to do and what it achieves. In fact, anyone who previously feared Arch Linux can set those fears aside… because Anarchy Linux makes Arch Linux easy.
Let's face it: Arch Linux isn't for the faint of heart. The installation alone will turn off many a new user (and even some seasoned users). That's where distributions like Anarchy make for an easy bridge to Arch. With a live ISO that can be tested and then installed, Arch becomes as user-friendly as any other distribution.
Anarchy Linux goes a little bit further than that, however. Let's fire it up and see what it does.
### The installation
The installation of Anarchy Linux isn't terribly challenging, but it's also not quite as simple as for, say, [Ubuntu][2], [Linux Mint][3], or [Elementary OS][4]. Although you can run the installer from within the default graphical desktop environment (Xfce4), it's still much in the same vein as Arch Linux. In other words, you're going to have to do a bit of work—all within a text-based installer.
To start, the very first step of the installer (Figure 1) requires you to update the mirror list, which will likely trip up new users.
![Updating the mirror][6]
Figure 1: Updating the mirror list is a necessity for the Anarchy Linux installation.
[Used with permission][7]
From the options, select Download & Rank New Mirrors. Tab down to OK and hit Enter on your keyboard. You can then select the nearest mirror (to your location) and be done with it. The next few installation screens are simple (keyboard layout, language, timezone, etc.). The next screen should surprise many an Arch fan. Anarchy Linux includes an auto partition tool. Select Auto Partition Drive (Figure 2), tab down to Ok, and hit Enter on your keyboard.
![partitioning][9]
Figure 2: Anarchy makes partitioning easy.
[Used with permission][7]
You will then have to select the drive to be used (if you only have one drive, this is only a matter of hitting Enter). Once you've selected the drive, choose the filesystem type to be used (ext2/3/4, btrfs, jfs, reiserfs, xfs), tab down to OK, and hit Enter. Next you must choose whether you want to create SWAP space. If you select Yes, you'll then have to define how much SWAP to use. The next window will stop many new users in their tracks. It asks if you want to use GPT (GUID Partition Table). This is different from the traditional MBR (Master Boot Record) partitioning. GPT is a newer standard and works better with UEFI. If you'll be working with UEFI, go with GPT; otherwise, stick with the old standby, MBR. Finally, select to write the changes to the disk, and your installation can continue.
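If you're unsure whether the machine booted in UEFI mode, and therefore whether GPT is the safer choice, one quick check from the live environment is whether the kernel exposes the EFI interface. Here's a small sketch; on Linux, `/sys/firmware/efi` is present only after a UEFI boot:

```python
import os

# The kernel creates /sys/firmware/efi only when the system booted
# in UEFI mode; its absence means a legacy BIOS boot.
if os.path.isdir("/sys/firmware/efi"):
    print("UEFI boot detected: GPT is the natural choice.")
else:
    print("Legacy BIOS boot detected: stick with MBR.")
```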
The next screen that could give new users pause requires the selection of the desired installation. There are five options:
* Anarchy-Desktop
* Anarchy-Desktop-LTS
* Anarchy-Server
* Anarchy-Server-LTS
* Anarchy-Advanced
If you want long term support, select Anarchy-Desktop-LTS; otherwise, select Anarchy-Desktop (the default) and tab down to Ok. Hit Enter on your keyboard. After you select the type of installation, you will get to select your desktop. You can select from five options: Budgie, Cinnamon, GNOME, Openbox, and Xfce4.
Once you've selected your desktop, give the machine a hostname, set the root password, create a user, and enable sudo for the new user (if applicable). The next section that will raise the eyebrows of new users is the software selection window (Figure 3). You must go through the various sections and select which software packages to install. Don't worry: if you miss something, you can always install it later.
![software][11]
Figure 3: Selecting the software you want on your system.
[Used with permission][7]
Once you've made your software selections, tab to Install (Figure 4), and hit Enter on your keyboard.
![ready to install][13]
Figure 4: Everything is ready to install.
[Used with permission][7]
Once the installation completes, reboot and enjoy Anarchy.
### Post install
I installed two versions of Anarchy—one with Budgie and one with GNOME. Both performed quite well; however, you might be surprised to see that the version of GNOME installed is decked out with a dock. In fact, compare the desktops side by side and they do a good job of resembling one another (Figure 5).
![GNOME and Budgie][15]
Figure 5: GNOME is on the right, Budgie is on the left.
[Used with permission][7]
My guess is that you'll find all desktop options for Anarchy configured in such a way as to offer a similar look and feel. Of course, the second you click on the bottom left “buttons”, you'll see those similarities immediately disappear (Figure 6).
![GNOME and Budgie][17]
Figure 6: The GNOME Dash and the Budgie menu are nothing alike.
[Used with permission][7]
Regardless of which desktop you select, you'll find everything you need to install new applications. Open up your desktop menu of choice and select Packages to search for and install whatever is necessary for you to get your work done.
### Why use Arch Linux without the “Arch”?
This is a valid question. The answer is simple, but revealing. Some users may opt for a distribution like [Arch Linux][18] because they want the feeling of “elitism” that comes with using, say, [Gentoo][19], without having to go through that much hassle. With regard to complexity, Arch rests below Gentoo, which means it's accessible to more users. However, along with that complexity in the platform comes a certain level of dependability that may not be found in others. So if you're looking for a Linux distribution with high stability that's not quite as challenging as Gentoo or Arch to install, Anarchy might be exactly what you want. In the end, you'll wind up with an outstanding desktop platform that's easy to work with (and maintain), based on a very highly regarded distribution of Linux.
That's why you might opt for Arch Linux without the Arch.
Anarchy Linux is one of the finest “user-friendly” takes on Arch Linux I've ever had the privilege of using. Without a doubt, if you're looking for a friendlier version of a rather challenging desktop operating system, you cannot go wrong with Anarchy.
Learn more about Linux through the free ["Introduction to Linux"][20] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/2/arch-anywhere-dead-long-live-anarchy-linux
作者:[Jack Wallen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://anarchy-linux.org/
[2]:https://www.ubuntu.com/
[3]:https://linuxmint.com/
[4]:https://elementary.io/
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_1.jpg?itok=WgHRqFTf (Updating the mirror)
[7]:https://www.linux.com/licenses/category/used-permission
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_2.jpg?itok=D7HkR97t (partitioning)
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_3.jpg?itok=5-9E2u0S (software)
[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_4.jpg?itok=fuSZqtZS (ready to install)
[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_5.jpg?itok=4y9kiC8I (GNOME and Budgie)
[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_6.jpg?itok=fJ7Lmdci (GNOME and Budgie)
[18]:https://www.archlinux.org/
[19]:https://www.gentoo.org/
[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

How writing can change your career for the better, even if you don't identify as a writer
======
Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed?
Early in the book, Kondo talks about keeping possessions that "spark joy." In this article, I'll examine ways writing about what we and other people are doing in the open source world can "spark joy," or at least how writing can improve your career in unexpected ways.
Because I'm a community manager and editor on Opensource.com, you might be thinking, "She just wants us to [write for Opensource.com][2]." And that is true. But everything I will tell you about why you should write is true, even if you never send a story in to Opensource.com. Writing can change your career for the better, even if you don't identify as a writer. Let me explain.
### How I started writing
Early in the first decade of my career, I transitioned from a customer service-related role at a tech publishing company into an editing role on Sys Admin Magazine. I was plugging along, happily lying low in my career, and then that all changed when I started writing about open source technologies and communities, and the people in them. But I did _not_ start writing voluntarily. The tl;dr of it is that my colleagues at Linux New Media eventually talked me into launching our first blog on the [Linux Pro Magazine][3] site. And as it turns out, it was one of the best career decisions I've ever made. I would not be working on Opensource.com today had I not started writing about what other people in open source were doing all those years ago.
When I first started writing, my goal was to raise awareness of the company I worked for and our publications, while also helping raise the visibility of women in tech. But soon after I started writing, I began seeing unexpected results.
#### My network started growing
When I wrote about a person, an organization, or a project, I got their attention. Suddenly the people I wrote about knew who I was. And because I was sharing knowledge—that is to say, I wasn't being a critic—I'd generally become an ally, and in many cases, a friend. I had a platform and an audience, and I was sharing them with other people in open source.
#### I was learning
In addition to promoting our website and magazine and growing my network, the research and fact-checking I did when writing articles helped me become more knowledgeable in my field and improve my tech chops.
#### I started meeting more people IRL
When I went to conferences, I found that my blog posts helped me meet people. I introduced myself to people I'd written about or learned about during my research, and I met new people to interview. People started knowing who I was because they'd read my articles. Sometimes people were even excited to meet me because I'd highlighted them, their projects, or someone or something they were interested in. I had no idea writing could be so exciting and interesting away from the keyboard.
#### My conference talks improved
I started speaking at events about a year after launching my blog. A few years later, I started writing articles based on my talks prior to speaking at events. The process of writing the articles helped me organize my talks and slides, and it was a great way to provide "notes" for conference attendees, while sharing the topic with a larger international audience that wasn't at the event in person.
### What should you write about?
Maybe you're interested in writing, but you struggle with what to write about. You should write about two things: what you know, and what you don't know.
#### Write about what you know
Writing about what you know can be relatively easy. For example, a script you wrote to help automate part of your daily tasks might be something you don't give any thought to, but it could make for a really exciting article for someone who hates doing that same task every day. That could be a relatively quick, short, and easy article for you to write, and you might not even think about writing it. But it could be a great contribution to the open source community.
#### Write about what you don't know
Writing about what you don't know can be much harder and more time consuming, but also much more fulfilling and help your career. I've found that writing about what I don't know helps me learn, because I have to research it and understand it well enough to explain it.
> "When I write about a technical topic, I usually learn a lot more about it. I want to make sure my article is as good as it can be. So even if I'm writing about something I know well, I'll research the topic a bit more so I can make sure to get everything right." ~Jim Hall, FreeDOS project leader
For example, I wanted to learn about machine learning, and I thought narrowing down the topic would help me get started. My teammate Jason Baker suggested that I write an article on the [Top 3 machine learning libraries for Python][4], which gave me a focus for research.
The process of researching that article inspired another article, [3 cool machine learning projects using TensorFlow and the Raspberry Pi][5]. That article was also one of our most popular last year. I'm not an _expert_ on machine learning now, but researching the topic with writing an article in mind allowed me to give myself a crash course in the topic.
### Why people in tech write
Now let's look at a few benefits of writing that other people in tech have found. I emailed the Opensource.com writers' list and asked, and here's what writers told me.
#### Grow your network or your project community
Xavier Ho wrote for us for the first time last year ("[A programmer's cleaning guide for messy sensor data][6]"). He says: "I've been getting Twitter mentions from all over the world, including Spain, US, Australia, Indonesia, the UK, and other European countries. It shows the article is making some impact... This is the kind of reach I normally don't have. Hope it's really helping someone doing similar work!"
#### Help people
Writing about what other people are working on is a great way to help your fellow community members. Antoine Thomas, who wrote "[Linux helped me grow as a musician][7]", says, "I began to use open source years ago, by reading tutorials and documentation. That's why now I share my tips and tricks, experience or knowledge. It helped me to get started, so I feel that it's my turn to help others to get started too."
#### Give back to the community
[Jim Hall][8], who started the [FreeDOS project][9], says, "I like to write ... because I like to support the open source community by sharing something neat. I don't have time to be a program maintainer anymore, but I still like to do interesting stuff. So when something cool comes along, I like to write about it and share it."
#### Highlight your community
Emilio Velis wrote an article, "[Open hardware groups spread across the globe][10]", about projects in Central and South America. He explains, "I like writing about specific aspects of the open culture that are usually enclosed in my region (Latin America). I feel as if smaller communities and their ideas are hidden from the mainstream, so I think that creating this sense of broadness in participation is what makes some other cultures as valuable."
#### Gain confidence
[Don Watkins][11] is one of our regular writers and a [community moderator][12]. He says, "When I first started writing I thought I was an impostor, later I realized that many people feel that way. Writing and contributing to Opensource.com has been therapeutic, too, as it contributed to my self esteem and helped me to overcome feelings of inadequacy. … Writing has given me a renewed sense of purpose and empowered me to help others to write and/or see the valuable contributions that they too can make if they're willing to look at themselves in a different light. Writing has kept me younger and more open to new ideas."
#### Get feedback
One of our writers described writing as a feedback loop. He said that he started writing as a way to give back to the community, but what he found was that community responses give back to him.
Another writer, [Stuart Keroff][13] says, "Writing for Opensource.com about the program I run at school gave me valuable feedback, encouragement, and support that I would not have had otherwise. Thousands upon thousands of people heard about the Asian Penguins because of the articles I wrote for the website."
#### Exhibit expertise
Writing can help you show that you've got expertise in a subject, and having writing samples on well-known websites can help you move toward better pay at your current job, get a new role at a different organization, or start bringing in writing income.
[Jeff Macharyas][14] explains, "There are several ways I've benefitted from writing for Opensource.com. One is the credibility I can add to my social media sites, resumes, bios, etc., just by saying 'I am a contributing writer to Opensource.com.' … I am hoping that I will be able to line up some freelance writing assignments, using my Opensource.com articles as examples, in the future."
### Where should you publish your articles?
That depends. Why are you writing?
You can always post on your personal blog, but if you don't already have a lot of readers, your article might get lost in the noise online.
Your project or company blog is a good option—again, you'll have to think about who will find it. How big is your company's reach? Or will you only get the attention of people who already give you their attention?
Are you trying to reach a new audience? A bigger audience? That's where sites like Opensource.com can help. We attract more than a million page views a month, and more than 700,000 unique visitors. Plus you'll work with editors who will polish and help promote your article.
We aren't the only site interested in your story. What are your favorite sites to read? They might want to help you share your story, and it's OK to pitch to multiple publications. Just be transparent with editors about whether your article has been shared on other sites. Occasionally, editors can even help you modify articles so that you can publish variations on multiple sites.
#### Do you want to get rich by writing? (Don't count on it.)
If your goal is to make money by writing, pitch your article to publications that have author budgets. There aren't many of them, the budgets don't tend to be huge, and you will be competing with experienced professional tech journalists who write seven days a week, 365 days a year, with large social media followings and networks. I'm not saying it can't be done—I've done it—but I am saying don't expect it to be easy or lucrative. It's not. (And frankly, I've found that nothing kills my desire to write quite like having to write if I want to eat...)
A couple of people have asked me whether Opensource.com pays for content, or whether I'm asking someone to write "for exposure." Opensource.com does not have an author budget, but I won't tell you to write "for exposure," either. You should write because it meets a need.
If you already have a platform that meets your needs, and you don't need editing or social media and syndication help: Congratulations! You are privileged.
### Spark joy!
Most people don't know they have a story to tell, so I'm here to tell you that you probably do, and my team can help, if you just submit a proposal.
Most people—myself included—could use help from other people. Sites like Opensource.com offer one way to get editing and social media services at no cost to the writer, which can be hugely valuable to someone starting out in their career, someone who isn't a native English speaker, someone who wants help with their project or organization, and so on.
If you don't already write, I hope this article helps encourage you to get started. Or, maybe you already write. In that case, I hope this article makes you think about friends, colleagues, or people in your network who have great stories and experiences to share. I'd love to help you help them get started.
I'll conclude with feedback I got from a recent writer, [Mario Corchero][15], a Senior Software Developer at Bloomberg. He says, "I wrote for Opensource because you told me to :)" (For the record, I "invited" him to write for our [PyCon speaker series][16] last year.) He added, "And I am extremely happy about it—not only did it help me at my workplace by gaining visibility, but I absolutely loved it! The article appeared in multiple email chains about Python and was really well received, so I am now looking to publish the second :)" Then he [wrote for us][17] again.
I hope you find writing to be as fulfilling as we do.
You can connect with Opensource.com editors, community moderators, and writers in our Freenode [IRC][18] channel #opensource.com, and you can reach me and the Opensource.com team by email at [open@opensource.com][19].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/career-changing-magic-writing
作者:[Rikki Endsley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rikki-endsley
[1]:http://tidyingup.com/books/the-life-changing-magic-of-tidying-up-hc
[2]:https://opensource.com/how-submit-article
[3]:http://linuxpromagazine.com/
[4]:https://opensource.com/article/17/2/3-top-machine-learning-libraries-python
[5]:https://opensource.com/article/17/2/machine-learning-projects-tensorflow-raspberry-pi
[6]:https://opensource.com/article/17/9/messy-sensor-data
[7]:https://opensource.com/life/16/9/my-linux-story-musician
[8]:https://opensource.com/users/jim-hall
[9]:http://www.freedos.org/
[10]:https://opensource.com/article/17/6/open-hardware-latin-america
[11]:https://opensource.com/users/don-watkins
[12]:https://opensource.com/community-moderator-program
[13]:https://opensource.com/education/15/3/asian-penguins-Linux-middle-school-club
[14]:https://opensource.com/users/jeffmacharyas
[15]:https://opensource.com/article/17/5/understanding-datetime-python-primer
[16]:https://opensource.com/tags/pycon
[17]:https://opensource.com/article/17/9/python-logging
[18]:https://opensource.com/article/16/6/getting-started-irc
[19]:mailto:open@opensource.com
