Merge pull request #1 from LCTT/master

主仓库同步
chenmu-kk 2020-02-11 18:08:47 +08:00 committed by GitHub
commit f3f0d56266
460 changed files with 8737 additions and 49536 deletions


@ -0,0 +1,149 @@
分布式跟踪系统的四大功能模块如何协同工作
======
> 了解分布式跟踪中的主要体系结构决策,以及各部分如何组合在一起。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/touch-tracing.jpg?itok=rOmsY-nU)
早在十年前,认真研究过分布式跟踪的基本上只有学者和一小部分大型互联网公司中的人。而对于任何采用微服务的组织来说,它如今已是一项基本要求。其理由很明确:微服务通常会发生让人意想不到的错误,而分布式跟踪则是描述和诊断那些错误的最好方法。
也就是说,一旦你准备将分布式跟踪集成到你自己的应用程序中,你将很快意识到,对于不同的人来说,“<ruby>分布式跟踪<rt>Distributed Tracing</rt></ruby>”一词意味着不同的事物。此外,跟踪生态系统里挤满了内容相似、彼此重叠的项目。本文介绍了分布式跟踪系统中四个(可能)独立的功能模块,并描述了它们将如何协同工作。
### 分布式跟踪:一种思维模型
大多数用于跟踪的思维模型来源于 [Google 的 Dapper 论文][1]。[OpenTracing][2] 使用相似的术语,因此,我们从该项目借用了以下术语:
![Tracing][3]
* <ruby>跟踪<rt>Trace</rt></ruby>:对事务在分布式系统中运行过程的描述。
* <ruby>跨度<rt>Span</rt></ruby>:一种命名的定时操作,表示工作流的一部分。跨度可接受键值对标签以及附加到特定跨度实例的细粒度的、带有时间戳的结构化日志。
* <ruby>跨度上下文<rt>Span context</rt></ruby>:伴随分布式事务传播的跟踪信息,包括当它通过网络或消息总线从一个服务传递到另一个服务时。跨度上下文包含跟踪标识符、跨度标识符,以及跟踪系统所需传播到下游服务的任何其他数据。
如果你想要深入研究这种思维模型的细节,请仔细阅读 [OpenTracing 技术规范][4]。
### 四大功能模块
从应用层分布式跟踪系统的观点来看,现代软件系统架构如下图所示:
![Tracing][5]
现代软件系统的组件可分为三类:
* **应用程序和业务逻辑**:你的代码。
* **广泛共享库**:他人的代码。
* **广泛共享服务**:他人的基础架构。
这三类组件有着不同的需求,这驱动着用于监控应用程序的分布式跟踪系统的设计。最终的设计分解为四个重要的部分:
* <ruby>跟踪检测 API<rt>A tracing instrumentation API</rt></ruby>:修饰应用程序代码
* <ruby>线路协议<rt>Wire protocol</rt></ruby>:在 RPC 请求中与应用程序数据一同发送的规定
* <ruby>数据协议<rt>Data protocol</rt></ruby>:将异步信息(带外)发送到你的分析系统的规定
* <ruby>分析系统<rt>Analysis system</rt></ruby>:用于处理跟踪数据的数据库和交互式用户界面
为了更深入地解释这个概念,我们将深入研究驱动该设计的细节。如果你只需要我的一些建议,请跳转至下方的“四个方面的解决方案”一节。
### 需求、细节和解释
应用程序代码、共享库以及共享式服务在操作上有显著的差别,这种差别严重影响了对它们进行检测的需求。
#### 检测应用程序代码和业务逻辑
在任何特定的微服务中,由微服务开发者编写的大部分代码是应用程序或者业务逻辑。这部分代码定义了特定领域的操作;通常,它包含任何特殊、独一无二的逻辑判断,这些逻辑判断首先证明了创建这个新微服务的合理性。基本上按照定义,**该代码通常不会在多个服务中共享或者以其他方式出现。**
尽管如此,你仍需了解它,这也意味着需要以某种方式对它进行检测。一些监控和跟踪分析系统使用<ruby>黑盒代理<rt>black-box agents</rt></ruby>自动检测代码,另一些系统更倾向于使用显式的白盒检测工具。对于后者,抽象的跟踪 API 为检测微服务的应用程序代码提供了许多实用的优势:
* 抽象 API 允许你在不重新编写检测代码的条件下换用新的监控工具。你可能想要变更云服务提供商、供应商和监控技术,而一大堆不可移植的检测代码将会为该过程增加显著的开销和麻烦。
* 事实证明,除了生产监控之外,该工具还有其他有趣的用途。现有的项目使用相同的跟踪工具来驱动测试工具、分布式调试器、“混沌工程”故障注入器和其他元应用程序。
* 但更重要的是,若将应用程序组件提取到共享库中要怎么办呢?这就引出了下面的内容:
#### 检测共享库
在大多数应用程序中出现的实用程序代码(处理网络请求、数据库调用、磁盘写操作、线程、并发管理等)通常情况下是通用的,而非专用于某个特定应用程序。这些代码会被打包成库和框架,而后就可以被装载到许多的微服务上,并且被部署到多种不同的环境中。
其真正的不同在于:对于共享代码,其他人才是使用者。大多数用户有不同的依赖关系和操作风格。如果尝试为该共享代码编写检测,你将会注意到几个常见的问题:
* 你需要一个 API 来编写检测。然而,你的库并不知道你正在使用哪个分析系统。会有多种选择,并且运行在同一应用中的所有库不能做出互不兼容的选择。
* 由于这些包封装了所有网络处理代码,因此从请求报头注入和提取跨度上下文的任务往往落在 RPC 库身上。然而,共享库必须了解到每个应用程序正在使用哪种跟踪协议。
* 最后,你不想强制用户使用相互冲突的依赖项。大多数用户有不同的依赖关系和操作风格。即使他们都使用 gRPC其绑定的 gRPC 版本是否相同?因此,你的库附带的任何用于跟踪的检测 API 必须是零依赖的。
**因此,一个a没有依赖关系、b与线路协议无关、c可以与流行的供应商和分析系统配合使用的抽象 API应该是对检测共享库代码的要求。**
#### 检测共享式服务
最后,有时整个服务(或微服务集合体)的通用性足以使许多独立的应用程序使用它们。这种共享式服务通常由第三方托管和管理,例如缓存服务器、消息队列以及数据库。
从应用程序开发者的角度来看,重要的是理解共享式服务本质上是个“黑盒子”。你不可能将你的应用程序的检测注入到共享式服务中。恰恰相反,托管服务通常会运行它自己的监控方案。
### 四个方面的解决方案
因此,抽象的跟踪应用程序接口将会帮助库发出数据并且注入/提取跨度上下文。标准的线路协议将会帮助黑盒服务相互连接,而标准的数据格式将会帮助分离的分析系统合并其中的数据。让我们来看一下一些有希望解决这些问题的方案。
#### 跟踪 APIOpenTracing 项目
如你所见,我们需要一个跟踪 API 来检测应用程序代码。为了将这种检测扩展到大多数进行跨度上下文注入和提取的共享库中,则必须以某种关键方式对 API 进行抽象。
[OpenTracing][2] 项目主要针对解决库开发者的问题。OpenTracing 是一个与供应商无关的跟踪 API它没有依赖关系并且迅速得到了许多监控系统的支持。这意味着如果库附带了内置的原生 OpenTracing 检测,当监控系统在应用程序启动时接入,跟踪就会自动启动。
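为了更具体一点,下面给出一个使用 OpenTracing Python API 进行检测的最小示意(假设:应用在启动时已经注册了某个具体的跟踪器实现,例如 Jaeger否则全局跟踪器默认是无操作实现函数名 `handle_request` 和报头字典只是为演示而虚构的):
```
import opentracing
from opentracing.propagation import Format

def handle_request(inbound_headers):
    # 若应用未注册具体实现,全局跟踪器默认是无操作no-op实现
    tracer = opentracing.global_tracer()
    # 从入站请求报头中提取上游传来的跨度上下文
    parent_ctx = tracer.extract(Format.HTTP_HEADERS, inbound_headers)
    # 以上游跨度为父节点,开启一个新的跨度
    with tracer.start_active_span("handle_request", child_of=parent_ctx) as scope:
        scope.span.set_tag("component", "example-service")
        outbound_headers = {}
        # 将本跨度的上下文注入出站报头,传播给下游服务
        tracer.inject(scope.span.context, Format.HTTP_HEADERS, outbound_headers)
        return outbound_headers
```
这段示意同时覆盖了库代码最关心的两件事:发出跨度数据,以及注入/提取跨度上下文。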
就个人而言,作为一个已经编写、发布和运维开源软件十多年的人,在 OpenTracing 项目上工作并最终解决这个可观测性的难题,令我十分满意。
除了 API 之外OpenTracing 项目还维护了一个不断增长的检测插件列表,其中一些可以在[这里][6]找到。如果你想参与进来,无论是贡献一个检测插件、为你自己的开源库添加原生检测,或者仅仅只想问个问题,都可以通过 [Gitter][7] 向我们打招呼。
#### 线路协议HTTP 报头 trace-context
为了监控系统能进行互操作,以及减轻从一个监控系统切换为另外一个时带来的迁移问题,需要标准的线路协议来传播跨度上下文。
[W3C 分布式跟踪上下文社区小组][8]正在努力制定此标准。目前的重点是制定一系列标准的 HTTP 报头。该规范的最新草案可以在[此处][9]找到。如果你对此小组有任何的疑问,[邮件列表][10]和 [Gitter 聊天室][11]是很好的解惑地点。
LCTT 译注:本文原文发表于 2018 年 5 月,可能现在社区已有不同进展)
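为了直观起见,下面用一小段 Python 演示这类报头的解析思路(注意:这里采用的是后来定稿的 `traceparent` 字段格式 `version-trace-id-parent-id-flags`,与本文撰写时的草案可能有细节差异;函数名为演示而设):
```
# 解析 W3C trace-context 的 traceparent 报头的示意代码
def parse_traceparent(header):
    version, trace_id, parent_id, flags = header.split("-")
    return {
        "version": version,      # 2 个十六进制字符
        "trace_id": trace_id,    # 16 字节32 个十六进制字符)的跟踪标识符
        "parent_id": parent_id,  # 8 字节16 个十六进制字符)的父跨度标识符
        "sampled": bool(int(flags, 16) & 0x01),  # 最低位为采样标志
    }

print(parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"))
```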
#### 数据协议(尚未出现!)
对于黑盒服务,在无法安装跟踪程序或无法与程序进行交互的情况下,需要使用数据协议从系统中导出数据。
目前,这种数据格式和协议的开发工作尚处于初级阶段,并且大多在 W3C 分布式跟踪上下文社区小组的范围内进行。需要特别关注的是在标准数据模式中定义更高级别的概念,例如 RPC 调用、数据库语句等。这将允许跟踪系统对可用的数据类型做出假设。OpenTracing 项目也通过定义一套[标准标签集][12]来解决这一问题。该计划是为了使这两项努力的结果相互配合。
注意,当前有一个中间地带。对于由应用程序开发者操作、但开发者不想重新编译或以其他方式修改代码的“网络设备”,动态链接可以帮助解决这种情况。主要的例子就是服务网格和代理,就像 Envoy 或者 NGINX。针对这种情况可将兼容 OpenTracing 的跟踪器编译为共享对象,然后在运行时动态链接到可执行文件中。目前 [C++ OpenTracing API][13] 提供了该选项,而 Java 的 OpenTracing [跟踪器解析器][14]也在开发中。
这些解决方案适用于支持动态链接、并由应用程序开发者部署的服务。但从长远来看,标准的数据协议可以更广泛地解决该问题。
#### 分析系统:从跟踪数据中提取有见解的服务
最后不得不提的是,现在有足够多的跟踪监控解决方案。可以在[此处][15]找到已知与 OpenTracing 兼容的监控系统列表,但除此之外仍有更多的选择。我鼓励你自行研究这些解决方案,同时希望你在比较它们时,发现本文提供的框架能派上用场。除了根据监控系统的操作特性对其进行评估外(更不用说你是否喜欢其 UI 和功能),确保你考虑到了上述三个重要方面、它们对你的相对重要性,以及你感兴趣的跟踪系统如何为它们提供解决方案。
### 结论
最后,每个部分的重要性在很大程度上取决于你是谁,以及你正在建立什么样的系统。举个例子,开源库的作者对 OpenTracing API 非常感兴趣,而服务开发者对 trace-context 规范更感兴趣。当有人说一部分比另一部分重要时,他们的意思通常是“这一部分对我来说比另一部分重要”。
然而,事实是:分布式跟踪已经成为监控现代系统所必不可少的事物。在为这些系统设计构建模块时,“尽可能解耦”的老方法仍然适用。在构建像分布式监控系统这样的跨系统的系统时,干净地解耦组件是维持灵活性和前向兼容性的最佳方式。
感谢你的阅读!现在,当你准备好在你自己的应用程序中实现跟踪服务时,你已有了一份指南,可以了解人们正在谈论的是哪个部分,以及它们之间如何相互协作。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/distributed-tracing
作者:[Ted Young][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[chenmu-kk](https://github.com/chenmu-kk)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tedsuo
[1]:https://research.google.com/pubs/pub36356.html
[2]:http://opentracing.io/
[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing1_0.png?itok=dvDTX0JJ (Tracing)
[4]:https://github.com/opentracing/specification/blob/master/specification.md
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing2_0.png?itok=yokjNLZk (Tracing)
[6]:https://github.com/opentracing-contrib/
[7]:https://gitter.im/opentracing/public
[8]:https://www.w3.org/community/trace-context/
[9]:https://w3c.github.io/distributed-tracing/report-trace-context.html
[10]:http://lists.w3.org/Archives/Public/public-trace-context/
[11]:https://gitter.im/TraceContext/Lobby
[12]:https://github.com/opentracing/specification/blob/master/semantic_conventions.md
[13]:https://github.com/opentracing/opentracing-cpp
[14]:https://github.com/opentracing-contrib/java-tracerresolver
[15]:http://opentracing.io/documentation/pages/supported-tracers
[16]:https://events.linuxfoundation.org/kubecon-eu-2018/
[17]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/


@ -0,0 +1,159 @@
MidnightBSD或许是你通往 FreeBSD 的大门
======
![](https://www.linux.com/wp-content/uploads/2019/08/midnight_4_0.jpg)
[FreeBSD][1] 是一个开源操作系统,衍生自著名的<ruby>[伯克利软件套件][2]<rt>Berkeley Software Distribution</rt></ruby>BSD。FreeBSD 的第一个版本发布于 1993 年,并且仍然在继续发展。2007 年左右Lucas Holt 想要利用 [GNUstep][3]OpenStep现在是 Cocoa的 Objective-C 框架、组件工具包和应用程序开发工具的实现)来创建一个 FreeBSD 的分支。为此,他开始开发 MidnightBSD 桌面发行版。
MidnightBSD以 Lucas 的猫 Midnight 命名仍然在积极地尽管缓慢开发。从 2017 年 8 月开始可以获得最新的稳定发布版本 0.8.6。LCTT 译注:截至本译文发布时,最新版本是 2019/10/31 发布的 1.2 版)尽管 BSD 发行版不是你所说的用户友好型发行版但上手安装是熟悉如何处理文本ncurses安装过程以及通过命令行完成安装的好方法。
这样,你最终会得到一个 FreeBSD 分支的、非常可靠的桌面发行版。这需要花费一点精力,但是如果你是一名正在寻求扩展技能的 Linux 用户……这是一个很好的起点。
我将带你走过安装 MidnightBSD 的流程,如何添加一个图形桌面环境,然后如何安装应用程序。
### 安装
正如我所提到的这是一个文本ncurses安装过程因此在这里找不到可以用鼠标点击的地方。相反你将使用你键盘的 `Tab` 键和箭头键。在你下载[最新的发布版本][4]后,将它刻录到一个 CD/DVD 或 USB 驱动器,并启动你的机器(或者在 [VirtualBox][5] 中创建一个虚拟机)。安装程序将打开并给你三个选项(图 1。使用你的键盘的箭头键选择 “Install”并敲击回车键。
![MidnightBSD installer][6]
*图 1: 启动 MidnightBSD 安装程序。*
在这里要经历相当多的屏幕。其中很多屏幕是一目了然的:
1. 设置非默认键盘映射(是/否)
2. 设置主机名称
3. 添加可选系统组件文档、游戏、32 位兼容性、系统源代码)
4. 对硬盘分区
5. 管理员密码
6. 配置网络接口
7. 选择地区(时区)
8. 启用服务(例如 ssh
9. 添加用户(图 2
![Adding a user][7]
*图 2: 向系统添加一个用户。*
在你向系统添加用户后,你将进入到一个窗口中(图 3在这里你可以处理任何你可能忘记配置或者想重新配置的东西。如果你不需要作出任何更改选择 “Exit”然后你的配置就会被应用。
![Applying your configurations][8]
*图 3: 应用你的配置。*
在接下来的窗口中,当出现提示时,选择 “No”接下来系统将重启。在 MidnightBSD 重启后,你已经为下一阶段的安装做好了准备。
### 后安装阶段
当你新安装的 MidnightBSD 启动时你会发现自己处于命令行提示符下。此刻还没有图形界面。要安装应用程序MidnightBSD 依赖于 `mport` 工具。比如说你想安装 Xfce 桌面环境。为此,登录到 MidnightBSD 中,并发出下面的命令:
```
sudo mport index
sudo mport install xorg
```
你现在已经安装好 Xorg 窗口服务器了,它允许你安装桌面环境。使用命令来安装 Xfce
```
sudo mport install xfce
```
现在 Xfce 已经安装好。不过,我们必须让它能通过 `startx` 命令启动。为此,让我们先安装 nano 编辑器。发出命令:
```
sudo mport install nano
```
随着 nano 安装好,发出命令:
```
nano ~/.xinitrc
```
这个文件仅包含一行内容:
```
exec startxfce4
```
保存并关闭这个文件。如果你现在发出命令 `startx`Xfce 桌面环境将会启动。你应该会感到有点熟悉了吧(图 4
![ Xfce][9]
*图 4: Xfce 桌面界面已准备好服务。*
因为你不会总是想要手动发出 `startx` 命令,所以你会希望启用登录守护进程。然而,它并没有默认安装。要安装这个子系统,发出命令:
```
sudo mport install mlogind
```
当完成安装后,通过在 `/etc/rc.conf` 文件中添加一个项目来在启动时启用 mlogind。在 `rc.conf` 文件的底部,添加以下内容:
```
mlogind_enable="YES"
```
保存并关闭该文件。现在,当你启动(或重启)机器时,你应该会看到图形登录屏幕。在写这篇文章的时候,在登录后我最后得到一个空白屏幕和讨厌的 X 光标。不幸的是,目前似乎并没有这个问题的解决方法。所以,要访问你的桌面环境,你必须使用 `startx` 命令。
### 安装应用
默认情况下,你找不到很多可用的应用程序。如果你尝试使用 `mport` 安装应用程序,你很快就会感到沮丧,因为只能找到很少的应用程序。为解决这个问题,我们需要使用 `svnlite` 命令来检出可用的 mport 软件列表。回到终端窗口,并发出命令:
```
svnlite co http://svn.midnightbsd.org/svn/mports/trunk mports
```
在你完成这些后,你应该会看到一个名为 `~/mports` 的新目录。使用命令 `cd ~/mports` 进入这个目录。发出 `ls` 命令,然后你应该会看到许多的类别(图 5
![applications][10]
*图 5: mport 现在可用的应用程序类别。*
你想安装 Firefox 吗?如果你查看 `www` 目录,你将看到一个 `linux-firefox` 列表。发出命令:
```
sudo mport install linux-firefox
```
现在你应该会在 Xfce 桌面菜单中看到一个 Firefox 项。翻找所有的类别,并使用 `mport` 命令来安装你需要的所有软件。
### 一个悲哀的警告
一个悲哀的小警告是,`mport`(通过 `svnlite`)仅能找到一个办公套件,其版本是 OpenOffice 3那是非常过时的。尽管在 `~/mports/editors` 目录中能找到 Abiword但是它看起来不能安装。甚至在安装 OpenOffice 3 后,它也会输出一个执行格式错误。换句话说,你不能使用 MidnightBSD 在办公生产效率方面做很多的事情。但是,嘿,如果你周围正好有一个旧的 Palm Pilot你可以安装 pilot-link。换句话说可用的软件不足以构成一个极其有用的桌面发行版……至少对普通用户不是。但是如果你想在 MidnightBSD 上开发,你将找到很多可用的工具可以安装(查看 `~/mports/devel` 目录)。你甚至可以使用命令安装 Drupal
```
sudo mport install drupal7
```
当然在此之后你将需要创建一个数据库MySQL 已经安装)、安装 Apache`sudo mport install apache24`),并配置必要的 Apache 配置。
显然地,已安装的和可以安装的是一个应用程序、系统和服务的大杂烩。但是随着足够多的工作,你最终可以得到一个能够服务于特殊目的的发行版。
### 享受 \*BSD 的优良之处
这就是安装 MidnightBSD、并使其运行成为一个有点用处的桌面发行版的方法。它不像很多其它的 Linux 发行版一样快速简便,但是如果你想要一个促使你思考的发行版,这可能正是你正在寻找的。尽管大多数竞争对手都准备了很多可以安装的应用软件,但 MidnightBSD 无疑是一个 Linux 爱好者或管理员应该尝试的有趣挑战。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/5/midnightbsd-could-be-your-gateway-freebsd
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.freebsd.org/
[2]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[3]:https://en.wikipedia.org/wiki/GNUstep
[4]:http://www.midnightbsd.org/download/
[5]:https://www.virtualbox.org/
[6]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_1.jpg (MidnightBSD installer)
[7]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_2.jpg (Adding a user)
[8]:https://lcom.static.linuxfound.org/sites/lcom/files/mightnight_3.jpg (Applying your configurations)
[9]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_4.jpg (Xfce)
[10]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_5.jpg (applications)
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


@ -1,49 +1,45 @@
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11870-1.html)
[#]: subject: (8 Commands to Check Memory Usage on Linux)
[#]: via: (https://www.2daygeek.com/linux-commands-check-memory-usage/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
检查 Linux 中内存使用情况的 8 条命令
======
![](https://img.linux.net.cn/data/attachment/album/202002/09/121112mg0jigxtcc5xr8or.jpg)
Linux 并不像 Windows你经常不会有图形界面可供使用特别是在服务器环境中。
作为一名 Linux 管理员知道如何获取当前可用的和已经使用的资源情况比如内存、CPU、磁盘等是相当重要的。如果某一应用在你的系统上占用了太多的资源导致你的系统无法达到最优状态那么你需要找到并修正它。
如果你想找到消耗内存前十名的进程,你需要去阅读这篇文章:[如何在 Linux 中找出内存消耗最大的进程][1]。
在 Linux 中,命令能做任何事,所以使用相关命令吧。在这篇教程中,我们将会给你展示 8 个有用的命令来查看 Linux 系统中内存的使用情况,包括 RAM 和交换分区。
创建交换分区在 Linux 系统中是非常重要的,如果你想了解如何创建,可以去阅读这篇文章:[在 Linux 系统上创建交换分区][2]。
下面的命令可以帮助你以不同的方式查看 Linux 内存使用情况。
* `free` 命令
* `/proc/meminfo` 文件
* `vmstat` 命令
* `ps_mem` 命令
* `smem` 命令
* `top` 命令
* `htop` 命令
* `glances` 命令
### 1) 如何使用 free 命令查看 Linux 内存使用情况
[free 命令][3] 是被 Linux 管理员广泛使用的主要命令。但是它提供的信息比 `/proc/meminfo` 文件少。
`free` 命令会分别展示物理内存和交换分区内存中已使用的和未使用的数量,以及内核使用的缓冲区和缓存。
这些信息都是从 `/proc/meminfo` 文件中获取的。
```
# free -m
@ -52,24 +48,18 @@ Mem: 15867 9199 1702 3315 4965 3039
Swap: 17454 666 16788
```
* `total`:总的内存量
* `used`:被当前运行中的进程使用的内存量(`used` = `total` - `free` - `buff/cache`
* `free`:未被使用的内存量(`free` = `total` - `used` - `buff/cache`
* `shared`:在两个或多个进程之间共享的内存量
* `buffers`:由内核保留、用于记录进程队列请求的内存量
* `cache`:在 RAM 中存储最近使用过的文件的页缓存大小
* `buff/cache`:缓冲区和缓存使用的内存总量
* `available`:可用于启动新应用的内存量(不含交换分区)
### 2) 如何使用 /proc/meminfo 文件查看 Linux 内存使用情况
`/proc/meminfo` 文件是一个包含了多种内存使用的实时信息的虚拟文件。它展示内存状态单位使用的是 kB其中大部分属性都难以理解。然而它也包含了内存使用情况的有用信息。
```
# cat /proc/meminfo
@ -124,13 +114,11 @@
DirectMap2M: 14493696 kB
DirectMap1G: 2097152 kB
```
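作为补充,下面是一个从该文件中读取字段并估算内存使用率的 Python 小脚本(假设运行在提供 `MemAvailable` 字段的较新内核上):
```
# 从 /proc/meminfo 读取字段并估算内存使用率的示意脚本
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])  # 大多数字段的数值单位为 kB
    return info

m = meminfo()
used = m["MemTotal"] - m["MemAvailable"]
print("内存使用率: {:.1f}%".format(100 * used / m["MemTotal"]))
```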
### 3) 如何使用 vmstat 命令查看 Linux 内存使用情况
[vmstat 命令][4] 是另一个报告虚拟内存统计信息的有用工具。
`vmstat` 报告的信息包括:进程、内存、页面映射、块 I/O、陷阱、磁盘和 CPU 特性信息。`vmstat` 不需要特殊的权限,并且它可以帮助诊断系统瓶颈。
```
# vmstat
@ -140,58 +128,35 @@ procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
1 0 682060 1769324 234188 4853500 0 3 25 91 31 16 34 13 52 0 0
```
如果你想详细了解每一项的含义,阅读下面的描述。
* `procs`:进程
* `r` 可以运行的进程数目(正在运行或等待运行)
* `b` 处于不可中断睡眠中的进程数目
* `memory`:内存
* `swpd` 使用的虚拟内存数量
* `free` 空闲的内存数量
* `buff` 用作缓冲区内存的数量
* `cache` 用作缓存内存的数量
* `inact` 不活动的内存数量(使用 `-a` 选项)
* `active` 活动的内存数量(使用 `-a` 选项)
* `Swap`:交换分区
* `si` 每秒从磁盘交换的内存数量
* `so` 每秒交换到磁盘的内存数量
* `IO`:输入输出
* `bi` 从一个块设备中收到的块(块/秒)
* `bo` 发送到一个块设备的块(块/秒)
* `System`:系统
* `in` 每秒的中断次数,包括时钟。
* `cs` 每秒的上下文切换次数。
* `CPU`:下面这些是占总 CPU 时间的百分比
* `us` 花费在非内核代码上的时间占比(包括用户时间和 nice 时间)
* `sy` 花费在内核代码上的时间占比 (系统时间)
* `id` 花费在闲置的时间占比。在 Linux 2.5.41 之前,包括 I/O 等待时间
* `wa` 花费在 I/O 等待上的时间占比。在 Linux 2.5.41 之前,包括在空闲时间中
* `st` 被虚拟机偷走的时间占比。在 Linux 2.6.11 之前,这部分称为 unknown
运行下面的命令查看详细的信息。
```
# vmstat -s
@ -223,16 +188,13 @@ Run the following command for detailed information.
1577163147 boot time
3318 forks
```
### 4) 如何使用 ps_mem 命令查看 Linux 内存使用情况
[ps_mem][5] 是一个用来查看当前内存使用情况的简单的 Python 脚本。该工具可以确定每个程序使用了多少内存(不是每个进程)。
该工具采用如下的方法计算每个程序使用的内存:总计 = 程序进程的私有内存之和 + 程序进程的共享内存之和。
共享内存的计算存在不足之处,该工具可以为运行中的内核自动选择最准确的可用方法。
```
# ps_mem
@ -285,15 +247,13 @@ The shared RAM is problematic to calculate, and the tool automatically selects t
==================================
```
### 5) 如何使用 smem 命令查看 Linux 内存使用情况
[smem][6] 是一个可以为 Linux 系统提供多种内存使用情况报告的工具。不同于现有的工具,`smem` 可以报告<ruby>比例集大小<rt>Proportional Set Size</rt></ruby>PSS、<ruby>唯一集大小<rt>Unique Set Size</rt></ruby>USS和<ruby>驻留集大小<rt>Resident Set Size</rt></ruby>RSS
- 比例集大小PSS库和应用在虚拟内存系统中使用的内存量。
- 唯一集大小USS其报告的是非共享内存。
- 驻留集大小RSS对物理内存使用情况的标准度量物理内存通常在多个应用程序之间共享它往往会明显高估实际的内存使用量。
```
# smem -tk
@ -336,13 +296,11 @@ Resident Set Size (RSS) : The standard measure of physical memory (it typically
90 1 0 4.8G 5.2G 8.0G
```
### 6) 如何使用 top 命令查看 Linux 内存使用情况
[top 命令][7] 是 Linux 系统管理员最常使用的、用于查看进程资源使用情况的命令之一。
该命令会展示系统总的内存量、当前内存使用量、空闲内存量和缓冲区使用的内存总量。此外,该命令还会展示总的交换空间内存量、当前交换空间的内存使用量、空闲的交换空间内存量和缓存使用的内存总量。
```
# top -b | head -10
@ -368,27 +326,25 @@ KiB Swap: 17873388 total, 17873388 free, 0 used. 9179772 avail Mem
2174 daygeek 20 2466680 122196 78604 S 0.8 0.8 0:17.75 WebExtensi+
```
### 7) 如何使用 htop 命令查看 Linux 内存使用情况
[htop 命令][8] 是一个可交互的 Linux/Unix 系统进程查看器。它是一个由 Hisham 开发的文本模式应用,需要 ncurses 库。
该命令的设计目的是用来代替 `top` 命令。它与 `top` 命令很相似,但是允许你垂直或水平滚动,以便查看系统中所有进程的情况。
`htop` 命令拥有不同的颜色,这个额外的优点在你追踪系统性能情况时十分有用。
此外你可以自由地执行与进程相关的任务比如杀死进程或者改变进程的优先级而不需要输入其进程号PID
![][10]
### 8) 如何使用 glances 命令查看 Linux 内存使用情况
[Glances][11] 是一个用 Python 编写的跨平台的系统监视工具。
你可以在一个地方查看所有信息比如CPU 使用情况、内存使用情况、正在运行的进程、网络接口、磁盘 I/O、RAID、传感器、文件系统信息、Docker、系统信息、运行时间等等。
![][12]
--------------------------------------------------------------------------------
@ -396,21 +352,22 @@ via: https://www.2daygeek.com/linux-commands-check-memory-usage/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-11542-1.html
[2]: https://linux.cn/article-9579-1.html
[3]: https://linux.cn/article-8314-1.html
[4]: https://linux.cn/article-8157-1.html
[5]: https://linux.cn/article-8639-1.html
[6]: https://linux.cn/article-7681-1.html
[7]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
[8]: https://www.2daygeek.com/linux-htop-command-linux-system-performance-resource-monitoring-tool/
[10]: https://www.2daygeek.com/wp-content/uploads/2019/12/linux-commands-check-memory-usage-2.jpg
[11]: https://www.2daygeek.com/linux-glances-advanced-real-time-linux-system-performance-monitoring-tool/
[12]: https://www.2daygeek.com/wp-content/uploads/2019/12/linux-commands-check-memory-usage-3.jpg


@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11875-1.html)
[#]: subject: (Top CI/CD resources to set you up for success)
[#]: via: (https://opensource.com/article/19/12/cicd-resources)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
顶级 CI / CD 资源,助你成功
======
> 随着企业期望实现无缝、灵活和可扩展的部署,持续集成和持续部署成为 2019 年的关键主题。
![Plumbing tubes in many directions][1]
对于 CI/CD 和 DevOps 来说2019 年是非常棒的一年。Opensource.com 的作者分享了他们在专注于无缝、灵活和可扩展部署时,是如何朝着敏捷和 Scrum 方向发展的。以下是我们 2019 年发布的一些重要的 CI/CD 文章。
### 学习和提高你的 CI/CD 技能
我们最喜欢的一些文章集中在 CI/CD 的实操经验上,并涵盖了许多方面。通常以 [Jenkins][2] 管道开始Bryant Son 的文章《[用 Jenkins 构建 CI/CD 管道][3]》将为你提供足够的经验以开始构建你的第一个管道。Daniel Oh 在《[用 DevOps 管道进行自动验收测试][4]》一文中,提供了有关验收测试的重要信息,包括可用于自行测试的各种 CI/CD 应用程序。我写的《[安全扫描 DevOps 管道][5]》非常简短,其中简要介绍了如何使用 Jenkins 平台在管道中设置安全性。
### 交付工作流程
正如 Jithin Emmanuel 在《[Screwdriver一个用于持续交付的可扩展构建平台][6]》中分享的,在学习如何使用和提高你的 CI/CD 技能方面工作流程很重要特别是当涉及到管道时。Emily Burns 在《[为什么 Spinnaker 对 CI/CD 很重要][7]》中解释了灵活地使用 CI/CD 工作流程准确构建所需内容的原因。Willy-Peter Schaub 还盛赞了为所有产品创建统一管道的想法,以便《[在一个 CI/CD 管道中一致地构建每个产品][8]》。这些文章将让你很好地了解在团队成员加入工作流程后会发生什么情况。
### CI/CD 如何影响企业
2019 年也是认识到 CI/CD 的业务影响以及它如何影响日常运营的一年。Agnieszka Gancarczyk 分享了 Red Hat 《[小型 Scrum vs. 大型 Scrum][9]》的调查结果包括受访者对 Scrum、敏捷运动的不同看法及其对团队的影响。Will Kelly 的《[持续部署如何影响整个组织][10]》也提及了开放式沟通的重要性。Daniel Oh 也在《[DevOps 团队必备的 3 种指标仪表板][11]》中强调了指标和可观测性的重要性。最后是 Ann Marie Fred 的精彩文章《[不在生产环境中测试?要在生产环境中测试!][12]》,详细说明了在验收测试之前在生产环境中测试的重要性。
感谢许多贡献者在 2019 年与 Opensource.com 的读者分享他们的见解,我期望在 2020 年里从他们那里了解更多有关 CI/CD 发展的信息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/cicd-resources
作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions)
[2]: https://jenkins.io/
[3]: https://linux.cn/article-11546-1.html
[4]: https://opensource.com/article/19/4/devops-pipeline-acceptance-testing
[5]: https://opensource.com/article/19/7/security-scanning-your-devops-pipeline
[6]: https://opensource.com/article/19/3/screwdriver-cicd
[7]: https://opensource.com/article/19/8/why-spinnaker-matters-cicd
[8]: https://opensource.com/article/19/7/cicd-pipeline-rule-them-all
[9]: https://opensource.com/article/19/3/small-scale-scrum-vs-large-scale-scrum
[10]: https://opensource.com/article/19/7/organizational-impact-continuous-deployment
[11]: https://linux.cn/article-11183-1.html
[12]: https://opensource.com/article/19/5/dont-test-production


@ -0,0 +1,169 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11837-1.html)
[#]: subject: (Root User in Ubuntu: Important Things You Should Know)
[#]: via: (https://itsfoss.com/root-user-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 中的 root 用户:你应该知道的重要事情
======
![][5]
当你刚开始使用 Linux 时,你将发现与 Windows 的很多不同。其中一个“不同的东西”是 root 用户的概念。
在这个初学者系列中,我将解释几个关于 Ubuntu 的 root 用户的重要的东西。
**请记住,尽管我正在从 Ubuntu 用户的角度编写这篇文章,它应该对大多数的 Linux 发行版也是有效的。**
你将在这篇文章中学到下面的内容:
* 为什么在 Ubuntu 中禁用 root 用户
* 像 root 用户一样运行命令
* 切换为 root 用户
* 解锁 root 用户
### 什么是 root 用户?为什么它在 Ubuntu 中被锁定?
在 Linux 中,有一个称为 [root][6] 的超级用户。这是超级管理员账号,它可以做任何事以及使用系统的一切东西。它可以在你的 Linux 系统上访问任何文件和运行任何命令。
能力越大责任越大。root 用户给予你完全控制系统的能力因此它应该被谨慎地使用。root 用户可以访问系统文件,运行更改系统配置的命令。因此,一个错误的命令可能会破坏系统。
这就是为什么 [Ubuntu][7] 和其它基于 Ubuntu 的发行版默认锁定 root 用户,以从意外的灾难中挽救你的原因。
对于你的日常任务,像移动你家目录中的文件,从互联网下载文件,创建文档等等,你不需要拥有 root 权限。
**打个比方来更好地理解它。假设你想要切一个水果,你可以使用一把厨房用刀。假设你想要砍一棵树,你就得使用一把锯子。现在,你也可以使用锯子来切水果,但是那不明智,不是吗?**
这是否意味着,你不能在 Ubuntu 中成为 root 用户,或者不能以 root 权限来使用系统呢?不,你仍然可以借助 `sudo` 来获得 root 权限(在下一节中解释)。
> **要点:** 对于常规任务来说root 用户权限太过强大。这就是为什么不建议一直使用 root 用户。你仍然可以使用 root 权限来运行特定的命令。
### 如何在 Ubuntu 中像 root 用户一样运行命令?
![Image Credit: xkcd][8]
对于一些特定的系统任务来说,你将需要 root 权限。例如,如果你想[通过命令行更新 Ubuntu][9],你不能以一个常规用户身份运行该命令,它会给出权限被拒绝的错误。
```
apt update
Reading package lists... Done
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)
W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)
```
那么,你如何像 root 用户一样运行命令?简单的答案是,在命令前添加 `sudo`,来像 root 用户一样运行。
```
sudo apt update
```
Ubuntu 和很多其它的 Linux 发行版使用一个被称为 `sudo` 的特殊程序机制。`sudo` 是一个控制“以 root 用户(或其它用户)身份运行命令”的访问权限的程序。
实际上,`sudo` 是一个非常多用途的工具。它可以配置为允许一个用户像 root 用户一样来运行所有的命令,或者仅仅一些命令;你也可以配置为无需密码即可使用 sudo 运行命令,如下面的示例所示。这个主题内容比较丰富,也许我将在另一篇文章中详细讨论它。
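举一个假设的配置片段来说明(其中的用户名和命令仅为虚构示例;编辑 sudoers 必须通过 `visudo` 命令进行,以免语法错误把你挡在系统之外):
```
# /etc/sudoers 片段(务必通过 visudo 编辑)
# 允许用户 abhishek 以任意用户身份运行所有命令(需输入其自己的密码)
abhishek    ALL=(ALL:ALL) ALL
# 只允许用户 abhishek 无需密码运行 apt update
abhishek    ALL=(root) NOPASSWD: /usr/bin/apt update
```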
就目前而言,你应该知道[当你安装 Ubuntu 时][10],你必须创建一个用户账号。这个用户账号在你系统上以管理员身份来工作,并且按照 Ubuntu 中的默认 sudo 策略,它可以在你的系统上使用 root 用户权限来运行任何命令。
`sudo` 的特别之处在于,运行 **sudo 不需要 root 用户密码,而是需要用户自己的密码**
并且这就是为什么当你使用 `sudo` 运行一个命令,会要求输入正在运行 `sudo` 命令的用户的密码的原因:
```
abhishek@itsfoss:~$ sudo apt update
[sudo] password for abhishek:
```
正如你在上面示例中所见,用户 `abhishek` 尝试使用 `sudo` 来运行 `apt update` 命令时,系统要求输入 `abhishek` 的密码。
**如果你对 Linux 完全不熟悉,当你在终端中开始输入密码时,你可能会惊讶于屏幕上什么都没有发生。这是十分正常的,因为作为默认的安全功能,屏幕上什么都不会显示,甚至连星号(`*`)都没有。输入你的密码并按回车键即可。**
> **要点:**为在 Ubuntu 中像 root 用户一样运行命令,在命令前添加 `sudo`。 当被要求输入密码时,输入你的账户的密码。当你在屏幕上输入密码时,什么都看不到。请继续输入密码,并按回车键。
### 如何在 Ubuntu 中成为 root 用户?
你可以使用 `sudo` 来像 root 用户一样运行命令。但是,在某些情况下,你需要以 root 用户身份连续运行多条命令,而且总是忘了在命令前添加 `sudo`,这时你可以临时切换为 root 用户。
`sudo` 命令允许你模拟一个 root 用户登录的 shell使用这个命令
```
sudo -i
```
```
abhishek@itsfoss:~$ sudo -i
[sudo] password for abhishek:
root@itsfoss:~# whoami
root
root@itsfoss:~#
```
你将注意到,当你切换为 root 用户时shell 的命令提示符从 `$`(美元符号)变成了 `#`(英镑符号)。我开个(拙劣的)玩笑:英镑比美元强大。
**虽然我已经向你展示了如何成为 root 用户,但是我必须警告你,你应该避免以 root 用户身份使用系统。毕竟,锁定 root 用户是有原因的。**
另外一种临时切换为 root 用户的方法是使用 `su` 命令:
```
sudo su
```
如果你尝试使用不带 `sudo``su` 命令,你将遇到 “su authentication failure” 错误。
你可以使用 `exit` 命令来恢复为正常用户。
```
exit
```
### 如何在 Ubuntu 中启用 root 用户?
现在你知道root 用户在基于 Ubuntu 的发行版中是默认锁定的。
Linux 给予你在系统上想做什么就做什么的自由。解锁 root 用户就是这些自由之一。
如果出于某些原因,你决定启用 root 用户,你可以通过为其设置一个密码来做到:
```
sudo passwd root
```
再强调一次,不建议使用 root 用户,并且我也不鼓励你在桌面上这样做。如果你忘记了密码,你将不能再次[在 Ubuntu 中更改 root 用户密码][11]。LCTT 译注:可以通过单用户模式修改。)
你可以通过移除密码来再次锁定 root 用户:
```
sudo passwd -dl root
```
### 最后…
我希望你现在对 root 概念理解得更好一点。如果你仍然有些关于它的困惑和问题,请在评论中让我知道。我将尝试回答你的问题,并且也可能更新这篇文章。
--------------------------------------------------------------------------------
via: https://itsfoss.com/root-user-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/root_user_ubuntu.png?ssl=1
[6]: http://www.linfo.org/root.html
[7]: https://ubuntu.com/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/sudo_sandwich.png?ssl=1
[9]: https://itsfoss.com/update-ubuntu/
[10]: https://itsfoss.com/install-ubuntu/
[11]: https://itsfoss.com/how-to-hack-ubuntu-password/


@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11835-1.html)
[#]: subject: (Get started with this open source to-do list manager)
[#]: via: (https://opensource.com/article/20/1/open-source-to-do-list)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
开始使用开源待办事项清单管理器
======
> 待办事项清单是跟踪任务列表的强大方法。在我们的 20 个使用开源提升生产力的系列的第七篇文章中了解如何使用它。
![](https://img.linux.net.cn/data/attachment/album/202001/31/111103kmv55ploshuso4ot.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 todo 跟踪任务
任务管理和待办事项清单是我非常喜欢的东西。我是一位生产效率的狂热粉丝(以至于我为此做了一个[播客][2]我尝试了各种不同的应用。我甚至为此[做了演讲][3]并[写了些文章][4]。因此,当我谈到提高工作效率时,肯定会提到任务管理和待办事项清单工具。
![Getting fancy with Todo.txt][5]
说实话,由于简单、跨平台且易于同步,用 [todo.txt][6] 肯定不会错。它是我不断反复使用的两个待办事项清单以及任务管理应用之一(另一个是 [Org 模式][7])。让我反复使用它的原因是它简单、可移植、易于理解,并且有许多很好的附加组件,而且当一台机器有附加组件、另一台没有时,也不会出问题。由于它是一个 Bash shell 脚本,我还没发现过无法运行它的系统。
#### 设置 todo.txt
首先,你需要安装基本 shell 脚本并将默认配置文件复制到 `~/.todo` 目录:
```
git clone https://github.com/todotxt/todo.txt-cli.git
cd todo.txt-cli
make
sudo make install
mkdir ~/.todo
cp todo.cfg ~/.todo/config
```
接下来,设置配置文件。一般来说,我会取消对颜色设置的注释,但必须马上设置的是 `TODO_DIR` 变量:
```
export TODO_DIR="$HOME/.todo"
```
#### 添加待办事项
要添加第一个待办事项,只需输入 `todo.sh add <NewTodo>` 即可。这还将在 `$HOME/.todo/` 中创建三个文件:`todo.txt`、`done.txt` 和 `reports.txt`
添加几个项目后,运行 `todo.sh ls` 查看你的待办事项。
![Basic todo.txt list][8]
#### 管理任务
你可以通过给事项设置优先级来稍微改善它。要向事项添加优先级,运行 `todo.sh pri # A`。数字是任务在列表中的编号,而字母 `A` 是优先级。你可以将优先级设置为从 A 到 Z因为这是它的排序方式。
要完成任务,运行 `todo.sh do #` 来标记项目已完成并将它移动到 `done.txt`。运行 `todo.sh report` 会向 `report.txt` 写入已完成和未完成项的数量。
所有这三个文件的格式都有详细的说明,因此你可以使用你的文本编辑器修改。`todo.txt` 的基本格式是:
```
(Priority) YYYY-MM-DD Task
```
该日期表示任务的到期日期(如果已设置)。手动编辑文件时,只需在任务前面加一个 `x` 来标记其已完成。运行 `todo.sh archive` 会将这些已标记的项目移动到 `done.txt`,你可以编辑该文本文件,并在有时间时将已完成的项目归档。
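如果你想用程序处理这些文件,可以参考下面这个解析单行记录的 Python 小示意(假设输入符合上述基本格式;函数名与字段名均为演示而设):
```
# 解析 todo.txt 单行记录的示意代码
import re

def parse_todo(line):
    m = re.match(r"(x )?(?:\(([A-Z])\) )?(?:(\d{4}-\d{2}-\d{2}) )?(.*)", line)
    done, priority, date, task = m.groups()
    return {"done": bool(done), "priority": priority, "date": date, "task": task}

print(parse_todo("(A) 2020-02-11 给 LCTT 投稿"))
```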
#### 设置重复任务
我有很多重复的任务,我需要按每天/每周/每月来安排它们。
![Recurring tasks with the ice_recur add-on][9]
这就是 `todo.txt` 的灵活性所在。通过在 `~/.todo.actions.d/` 中使用[附加组件][10],你可以添加命令并扩展基本 `todo.sh` 的功能。附加组件基本上是实现特定命令的脚本。对于重复执行的任务,插件 [ice_recur][11] 应该符合要求。按照其页面上的说明操作,你可以设置任务以非常灵活的方式重复执行。
![Todour on MacOS][12]
在该[附加组件目录][10]中有很多附加组件,包括同步到某些云服务,也有链接到桌面或移动端应用的组件,这样你可以随时看到待办列表。
我只是简单介绍了这个待办事项清单工具的功能,请花点时间深入了解这个工具的强大之处!它确实可以帮助我每天完成任务。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-to-do-list
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://productivityalchemy.com/
[3]: https://www.slideshare.net/AllThingsOpen/getting-to-done-on-the-command-line
[4]: https://opensource.com/article/18/2/getting-to-done-agile-linux-command-line
[5]: https://opensource.com/sites/default/files/uploads/productivity_7-1.png
[6]: http://todotxt.org/
[7]: https://orgmode.org/
[8]: https://opensource.com/sites/default/files/uploads/productivity_7-2.png (Basic todo.txt list)
[9]: https://opensource.com/sites/default/files/uploads/productivity_7-3.png (Recurring tasks with the ice_recur add-on)
[10]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory
[11]: https://github.com/rlpowell/todo-text-stuff
[12]: https://opensource.com/sites/default/files/uploads/productivity_7-4.png (Todour on MacOS)


@ -1,22 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11857-1.html)
[#]: subject: (Data streaming and functional programming in Java)
[#]: via: (https://opensource.com/article/20/1/javastream)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
Java 中的数据流和函数式编程
======
> 学习如何使用 Java 8 中的流 API 和函数式编程结构。
![](https://img.linux.net.cn/data/attachment/album/202002/06/002505flazlb4cg4aavvb4.jpg)
当 Java SE 8又名核心 Java 8在 2014 年被推出时,它引入了一些更改,从根本上影响了用它进行的编程。这些更改中有两个紧密相连的部分:流 API 和函数式编程构造。本文使用代码示例,从基础到高级特性,介绍每个部分并说明它们之间的相互作用。
### 基础特性
流 API 是在数据序列中迭代元素的简洁而高级的方法。包 `java.util.stream``java.util.function` 包含用于流 API 和相关函数式编程构造的新库。当然,代码示例胜过千言万语。
下面的代码段用大约 2,000 个随机整数值填充了一个 `List`
@ -26,7 +28,9 @@ List<Integer> list = new ArrayList<Integer>(); // 空 list
for (int i = 0; i < 2048; i++) list.add(rand.nextInt()); // 填充它
```
另外用一个 `for` 循环可用于遍历填充好的列表,以将偶数值收集到另一个列表中。
流 API 提供了一种更简洁的方法来执行此操作:
```
List <Integer> evens = list
@ -37,43 +41,42 @@ List <Integer> evens = list
这个例子有三个来自流 API 的函数:
- `stream` 函数可以将**集合**转换为流,而流是一个每次可访问一个值的传送带。流化是惰性的(因此也是高效的),因为值是根据需要产生的,而不是一次性产生的。
- `filter` 函数确定哪些流的值(如果有的话)通过了处理管道中的下一个阶段,即 `collect` 阶段。`filter` 函数是<ruby>高阶的<rt>higher-order</rt></ruby>,因为它的参数是一个函数 —— 在这个例子中是一个 lambda 表达式,它是一个未命名的函数,并且是 Java 新的函数式编程结构的核心。
lambda 语法与传统的 Java 完全不同:
```
n -> (n & 0x1) == 0
```
箭头(一个减号后面紧跟着一个大于号)将左边的参数列表与右边的函数体分隔开。参数 `n` 虽未明确类型,但也可以明确。在任何情况下,编译器都会发现 `n` 是个 `Integer`。如果有多个参数,这些参数将被括在括号中,并用逗号分隔。
在本例中,函数体检查一个整数的最低位(最右)是否为零,这用来表示偶数。过滤器应返回一个布尔值。尽管可以,但函数的主体中没有显式的 `return`。如果主体没有显式的 `return`,则主体的最后一个表达式是返回值。在这个例子中,主体按照 lambda 编程的思想编写,由一个简单的布尔表达式 `(n & 0x1) == 0` 组成。
- `collect` 函数将偶数值收集到引用为 `evens` 的列表中。如下例所示,`collect` 函数是线程安全的,因此,即使在多个线程之间共享了过滤操作,该函数也可以正常工作。
### 方便的功能和轻松实现多线程
在生产环境中,数据流的源可能是文件或网络连接。为了学习流 APIJava 提供了诸如 `IntStream` 这样的类型,它可以用各种类型的元素生成流。这里有一个 `IntStream` 的例子:
```
IntStream // 整型流
.range(1, 2048) // 生成此范围内的整型流
.parallel() // 为多个线程分区数据
.filter(i -> ((i & 0x1) > 0)) // 奇偶校验 - 只允许奇数通过
.forEach(System.out::println); // 打印每个值
```
`IntStream` 类型包括一个 `range` 函数,该函数在指定的范围内生成一个整数值流,在本例中,以 1 为增量,从 1 递增到 2048。`parallel` 函数自动将该工作划分到多个线程中,在各个线程中进行过滤和打印。(线程数通常与主机系统上的 CPU 数量匹配。)函数 `forEach` 的参数是一个*方法引用*,在本例中是对封装在 `System.out` 中的 `println` 方法的引用,方法输出类型为 `PrintStream`。方法和构造器引用的语法将在稍后讨论。
由于具有多线程,因此整数值整体上以任意顺序打印,但在给定线程中按顺序打印。例如,如果线程 T1 打印 409 和 411那么 T1 将按照顺序 409-411 打印,但是其它某个线程可能会预先打印 2045。`parallel` 调用后面的线程是并发执行的,因此它们的输出顺序是不确定的。
### map/reduce 模式
*map/reduce* 模式在处理大型数据集方面变得很流行。一个 map/reduce 宏操作由两个微操作构成。首先,将数据分散(<ruby>映射<rt>mapped</rt></ruby>)到各个工作程序中,然后将单独的结果收集在一起 —— 也可能收集统计起来成为一个值,即<ruby>归约<rt>reduction</rt></ruby>。归约可以采用不同的形式,如以下示例所示。
下面 `Number` 类的实例用 `EVEN``ODD` 表示有奇偶校验的整数值:
```
public class Number {
@ -92,7 +95,7 @@ public class Number {
}
```
下面的代码演示了对 `Number` 流进行 map/reduce 的情形,从而表明流 API 不仅可以处理 `int``float` 等基本类型,还可以处理程序员自定义的类类型。
在下面的代码段中,使用了 `parallelStream` 而不是 `stream` 函数对随机整数值列表进行流化处理。与前面介绍的 `parallel` 函数一样,`parallelStream` 变体也可以自动执行多线程。
@ -115,14 +118,14 @@ System.out.println("The sum of the randomly generated values is: " + sum4All);
方法引用 `Number::getValue` 可以用下面的 lambda 表达式替换。参数 `n` 是流中的 `Number` 实例之一:
```
mapToInt(n -> n.getValue())
```
通常lambda 表达式和方法引用是可互换的:如果像 `mapToInt` 这样的高阶函数可以采用一种形式作为参数,那么这个函数也可以采用另一种形式。这两个函数式编程结构具有相同的目的 —— 对作为参数传入的数据执行一些自定义操作。在两者之间进行选择通常是为了方便。例如lambda 可以在没有封装类的情况下编写,而方法则不能。我的习惯是使用 lambda除非已经有了适当的封装方法。
当前示例末尾的 `sum` 函数通过结合来自 `parallelStream` 线程的部分和,以线程安全的方式进行归约。但是,程序员有责任确保在 `parallelStream` 调用引发的多线程过程中,程序员自己的函数调用(在本例中为 `getValue`)是线程安全的。
最后一点值得强调。lambda 语法鼓励编写<ruby>纯函数<rt>pure function</rt></ruby>,即函数的返回值仅取决于传入的参数(如果有);纯函数没有副作用,例如更新一个类中的 `static` 字段。因此,纯函数是线程安全的,并且如果传递给高阶函数的函数参数(例如 `filter``map`)是纯函数,则流 API 效果最佳。
对于更细粒度的控制,有另一个流 API 函数,名为 `reduce`,可用于对 `Number` 流中的值求和:
@ -135,11 +138,10 @@ Integer sum4AllHarder = listOfNums
此版本的 `reduce` 函数带有两个参数,第二个参数是一个函数:
- 第一个参数(在这种情况下为零)是*特征*值,该值用作求和操作的初始值,并且在求和过程中流结束时用作默认值。
- 第二个参数是*累加器*,在本例中,这个 lambda 表达式有两个参数:第一个参数(`sofar`)是正在运行的和,第二个参数(`next`)是来自流的下一个值。运行的和以及下一个值相加,然后更新累加器。请记住,由于开始时调用了 `parallelStream`,因此 `map``reduce` 函数现在都在多线程上下文中执行。
在到目前为止的示例中,流值被收集,然后被归约,但是,通常情况下,流 API 中的 `Collectors` 可以累积值,而不需要将它们归约到单个值。正如下一个代码段所示,收集活动可以生成任意丰富的数据结构。该示例使用与前面示例相同的 `listOfNums`
```
Map<Number.Parity, List<Number>> numMap = listOfNums
@ -150,7 +152,7 @@ List<Number> evens = numMap.get(Number.Parity.EVEN);
List<Number> odds = numMap.get(Number.Parity.ODD);
```
第一行中的 `numMap` 指的是一个 `Map`,它的键是一个 `Number` 奇偶校验位(`ODD` 或 `EVEN`),其值是一个具有指定奇偶校验位值的 `Number` 实例的 `List`。同样,通过 `parallelStream` 调用进行多线程处理,然后 `collect` 调用(以线程安全的方式)将部分结果组装到 `numMap` 引用的 `Map` 中。然后,在 `numMap` 上调用 `get` 方法两次,一次获取 `evens`,第二次获取 `odds`
实用函数 `dumpList` 再次使用来自流 API 的高阶 `forEach` 函数:
@ -184,7 +186,7 @@ Value: 41 (parity: odd)
### 用于代码简化的函数式结构
函数式结构(如方法引用和 lambda 表达式)非常适合在流 API 中使用。这些构造代表了 Java 中对高阶函数的主要简化。即使在糟糕的过去Java 也通过 `Method``Constructor` 类型在技术上支持高阶函数,这些类型的实例可以作为参数传递给其它函数。由于其复杂性,这些类型在生产级 Java 中很少使用。例如,调用 `Method` 需要对象引用(如果方法是非**静态**的)或至少一个类标识符(如果方法是**静态**的)。然后,被调用的 `Method` 的参数作为**对象**实例传递给它如果没有发生多态那会出现另一种复杂性则可能需要显式向下转换。相比之下lambda 和方法引用很容易作为参数传递给其它函数。
但是,新的函数式结构在流 API 之外具有其它用途。考虑一个 Java GUI 程序,该程序带有一个供用户按下的按钮,例如,按下以获取当前时间。按钮按下的事件处理程序可能编写如下:
@ -201,7 +203,7 @@ updateCurrentTime.addActionListener(new ActionListener() {
这个简短的代码段很难解释。关注第二行,其中方法 `addActionListener` 的参数开始如下:
```
new ActionListener() {
```
这似乎是错误的,因为 `ActionListener` 是一个**抽象**接口,而**抽象**类型不能通过调用 `new` 实例化。但是,事实证明,还有其它一些实例被实例化了:一个实现此接口的未命名内部类。如果上面的代码封装在名为 `OldJava` 的类中,则该未命名的内部类将被编译为 `OldJava$1.class`。`actionPerformed` 方法在这个未命名的内部类中被重写。
@ -209,7 +211,7 @@ updateCurrentTime.addActionListener(new ActionListener() {
现在考虑使用新的函数式结构进行这个令人耳目一新的更改:
```
updateCurrentTime.addActionListener(e -> currentTime.setText(new Date().toString()));
```
lambda 表达式中的参数 `e` 是一个 `ActionEvent` 实例,而 lambda 的主体是对按钮上的 `setText` 的简单调用。
@ -227,7 +229,7 @@ interface BinaryIntOp {
}
```
注释 `@FunctionalInterface` 适用于声明*唯一*抽象方法的任何接口;在本例中,这个抽象接口是 `compute`。一些标准接口(例如具有唯一声明方法 `run``Runnable` 接口)同样符合这个要求。在此示例中,`compute` 是已声明的方法。该接口可用作引用声明中的目标类型:
```
BinaryIntOp div = (arg1, arg2) -> arg1 / arg2;
@ -236,7 +238,7 @@ div.compute(12, 3); // 4
`java.util.function` 提供各种函数式接口。以下是一些示例。
下面的代码段介绍了参数化的 `Predicate` 函数式接口。在此示例中,带有参数 `String``Predicate<String>` 类型可以引用具有 `String` 参数的 lambda 表达式或诸如 `isEmpty` 之类的 `String` 方法。通常情况下Predicate 是一个返回布尔值的函数。
```
Predicate<String> pred = String::isEmpty; // String 方法的 predicate 声明
@ -247,7 +249,7 @@ Arrays.asList(strings)
.forEach(System.out::println); // 只打印空字符串
```
在字符串长度为零的情况下,`isEmpty` Predicate 判定结果为 `true`。因此,只有空字符串才能进入管道的 `forEach` 阶段。
下一段代码将演示如何将简单的 lambda 或方法引用组合成更丰富的 lambda 或方法引用。考虑这一系列对 `IntUnaryOperator` 类型的引用的赋值,它接受一个整型参数并返回一个整型值:
@ -308,16 +310,16 @@ Arrays.asList(arrayBR).stream().forEach(BedRocker::dump);
`Arrays.asList` 实用程序再次用于流化一个数组 `names`,然后将流的每一项传递给 `map` 函数,该函数的参数现在是构造器引用 `BedRocker::new`。这个构造器引用通过在每次调用时生成和初始化一个 `BedRocker` 实例来充当一个对象工厂。在第二行执行之后,名为 `bedrockers` 的流由五项 `BedRocker` 组成。
这个例子可以通过关注高阶 `map` 函数来进一步阐明。在通常情况下,一个映射将一个类型的值(例如,一个 `int`)转换为另一个*相同*类型的值(例如,一个整数的后继):
```
map(n -> n + 1) // 将 n 映射到其后继
```
然而,在 `BedRocker` 这个例子中,转换更加戏剧化,因为一个类型的值(代表一个名字的 `String`)被映射到一个*不同*类型的值,在这个例子中,就是一个 `BedRocker` 实例,这个字符串就是它的名字。转换是通过一个构造器调用来完成的,它是由构造器引用来实现的:
```
map(BedRocker::new) // 将 String 映射到 BedRocker
```
传递给构造器的值是 `names` 数组中的其中一项。
@ -325,13 +327,13 @@ map(n -> n + 1) // 将 n 映射到其后继
此代码示例的第二行还演示了一个你目前已经非常熟悉的转换:先将数组转换成 `List`,然后再转换成 `Stream`
```
Stream<BedRocker> bedrockers = Arrays.asList(names).stream().map(BedRocker::new);
```
第三行则是另一种方式 —— 流 `bedrockers` 通过使用*数组*构造器引用 `BedRocker[]::new` 调用 `toArray` 方法:
```
BedRocker[ ] arrayBR = bedrockers.toArray(BedRocker[]::new);
```
该构造器引用不会创建单个 `BedRocker` 实例,而是创建这些实例的整个数组:该构造器引用现在为 `BedRocker[]::new`,而不是 `BedRocker::new`。为了进行确认,将 `arrayBR` 转换为 `List`,再次对其进行流式处理,以便可以使用 `forEach` 来打印 `BedRocker` 的名字。
@ -348,7 +350,7 @@ Baby Puss
### <ruby>柯里化<rt>Currying</rt></ruby>
*柯里化*函数是指减少函数执行任何工作所需的显式参数的数量(通常减少到一个)。(该术语是为了纪念逻辑学家 Haskell Curry。一般来说函数的参数越少调用起来就越容易也更健壮。回想一下一些需要半打左右参数的噩梦般的函数因此应将柯里化视为简化函数调用的一种尝试。`java.util.function` 包中的接口类型适合于柯里化,如以下示例所示。
引用的 `IntBinaryOperator` 接口类型用于接受两个整型参数并返回一个整型值的函数:
@ -368,13 +370,12 @@ mult2.applyAsInt(10, 30); // 300
arg1 -> (arg2 -> arg1 * arg2) // 括号可以省略
```
完整的 lambda 以 `arg1` 开头,而该 lambda 的主体以及返回的值是另一个以 `arg2` 开头的 lambda。返回的 lambda 仅接受一个参数(`arg2`),但返回了两个数字的乘积(`arg1` 和 `arg2`)。下面的概述,再加上代码,应该可以更好地进行说明。
以下是如何柯里化 `mult2` 的概述:
- 类型为 `IntFunction<IntUnaryOperator>` 的 lambda 被写入并调用,其整型值为 10。返回的 `IntUnaryOperator` 缓存了值 10因此变成了已柯里化版本的 `mult2`,在本例中为 `curriedMult2`
- 然后使用单个显式参数例如20调用 `curriedMult2` 函数,该参数与缓存的参数(在本例中为 10相乘以生成返回的乘积。
这是代码的详细信息:
@ -391,7 +392,7 @@ IntFunction<IntUnaryOperator> curriedMult2Maker = n1 -> (n2 -> n1 * n2);
IntUnaryOperator curriedMult2 = curriedMult2Maker.apply(10);
```
`10` 现在缓存在 `curriedMult2` 函数中,以便 `curriedMult2` 调用中的显式整型参数乘以 10
```
curriedMult2.applyAsInt(20); // 200 = 10 * 20
@ -423,9 +424,9 @@ dataStream
.collect(...); // 或者,也可以进行归约:阶段 N
```
自动多线程,以 `parallel``parallelStream` 调用为例,建立在 Java 的 fork/join 框架上,该框架支持<ruby>任务窃取<rt>task stealing</rt></ruby>以提高效率。假设 `parallelStream` 调用后面的线程池由八个线程组成,并且 `dataStream` 被八种方式分区。某个线程例如T1可能比另一个线程例如T7工作更快这意味着应该将 T7 的某些任务移到 T1 的工作队列中。这会在运行时自动发生。
在这个简单的多线程世界中,程序员的主要职责是编写线程安全函数,这些函数作为参数传递给在流 API 中占主导地位的高阶函数。尤其是 lambda 鼓励编写纯函数(因此是线程安全的)函数。
--------------------------------------------------------------------------------
@ -434,7 +435,7 @@ via: https://opensource.com/article/20/1/javastream
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,626 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11839-1.html)
[#]: subject: (Add scorekeeping to your Python game)
[#]: via: (https://opensource.com/article/20/1/add-scorekeeping-your-python-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
添加计分到你的 Python 游戏
======
> 在本系列的第十一篇有关使用 Python Pygame 模块进行编程的文章中,显示玩家获得战利品或受到伤害时的得分。
![](https://img.linux.net.cn/data/attachment/album/202002/01/154838led0y08y2aqetz1q.jpg)
这是仍在进行中的、关于使用 [Pygame][3] 模块在 [Python 3][2] 中创建电脑游戏的系列文章的第十一部分。先前的文章是:
* [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][4]
* [使用 Python 和 Pygame 模块构建一个游戏框架][5]
* [如何在你的 Python 游戏中添加一个玩家][6]
* [用 Pygame 使你的游戏角色移动起来][7]
* [如何向你的 Python 游戏中添加一个敌人][8]
* [在 Pygame 游戏中放置平台][19]
* [在你的 Python 游戏中模拟引力][9]
* [为你的 Python 平台类游戏添加跳跃功能][10]
* [使你的 Python 游戏玩家能够向前和向后跑][11]
* [在你的 Python 平台类游戏中放一些奖励][12]
如果你已经跟随这一系列很久,那么你已经学习了使用 Python 创建一个视频游戏所需的所有基本语法和模式。然而,它仍然缺少一个至关重要的组成部分。这一组成部分不仅仅对用 Python 编写游戏重要;不管你探究哪个计算机分支,你都必须精通:作为一个程序员,通过阅读一种语言或库的文档来学习新的技巧。
幸运的是,你正在阅读本文的事实表明你熟悉文档。为了使你的平台类游戏更加完善,在这篇文章中,你将在游戏屏幕上添加得分和生命值显示。不过,教你如何找到一个库的功能以及如何使用这些新功能的这节课程,本身并没有多神秘。
### 在 Pygame 中显示得分
现在,既然你有了可以被玩家收集的奖励,那就有充分的理由来记录分数,以便你的玩家看到他们收集了多少奖励。你也可以跟踪玩家的生命值,以便当他们被敌人击中时会有相应结果。
你已经有了跟踪分数和生命值的变量,但是这一切都发生在后台。这篇文章教你在游戏期间在游戏屏幕上以你选择的一种字体来显示这些统计数字。
### 阅读文档
大多数 Python 模块都有文档,即使那些没有文档的模块,也能通过 Python 的帮助功能来进行最小的文档化。[Pygame 的主页面][13] 链接了它的文档。不过Pygame 是一个带有很多文档的大模块,并且它的文档不像在 Opensource.com 上的文章一样,以同样易理解的(和友好的、易解释的、有用的)叙述风格来撰写的。它们是技术文档,并且列出在模块中可用的每个类和函数,各自要求的输入类型等等。如果你不适应参考代码组件描述,这可能会令人不知所措。
在烦恼于库的文档前,第一件要做的事,就是来想想你正在尝试达到的目标。在这种情况下,你想在屏幕上显示玩家的得分和生命值。
在你确定你需要的结果后,想想它需要什么样的组件。你可以从变量和函数的方面考虑这一点,或者,如果你还没有自然地想到这一点,你可以进行一般性思考。你可能意识到需要一些文本来显示一个分数,你希望 Pygame 在屏幕上绘制这些文本。如果你仔细思考,你可能会意识到它与在屏幕上渲染一个玩家、奖励或一个平台并没有多么大的不同。
从技术上讲,你*可以*使用数字图形,并让 Pygame 显示这些数字图形。它不是达到你目标的最容易的方法,但是如果它是你唯一知道的方法,那么它是一个有效的方法。不过,如果你参考 Pygame 的文档,你看到列出的模块之一是 `font`,这是 Pygame 使得在屏幕上来使打印文本像输入文字一样容易的方法。
### 解密技术文档
`font` 文档页面以 `pygame.font.init()` 开始,它列出了用于初始化字体模块的函数。它由 `pygame.init()` 自动地调用,你已经在代码中调用了它。再强调一次,从技术上讲,你已经到达一个*足够好*的点。虽然你尚不知道*如何做*,你知道你*能够*使用 `pygame.font` 函数来在屏幕上打印文本。
然而,如果你阅读更多一些,你会找到这里还有一种更好的方法来打印字体。`pygame.freetype` 模块在文档中的描述方式如下:
> `pygame.freetype` 模块是 `pygame.font` 模块的一个替代品,用于加载和渲染字体。它有原模块的所有功能,外加很多新的功能。
`pygame.freetype` 文档页面的下方,有一些示例代码:
```
import pygame
import pygame.freetype
```
你的代码应该已经导入了 Pygame不过请修改你的 `import` 语句以包含 Freetype 模块:
```
import pygame
import sys
import os
import pygame.freetype
```
### 在 Pygame 中使用字体
`font` 模块的描述中可以看出,显然 Pygame 使用一种字体(不管它的你提供的或内置到 Pygame 的默认字体)在屏幕上渲染字体。滚动浏览 `pygame.freetype` 文档来找到 `pygame.freetype.Font` 函数:
```
pygame.freetype.Font
从支持的字体文件中创建一个新的字体实例。
Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font
pygame.freetype.Font.name
  符合规则的字体名称。
pygame.freetype.Font.path
  字体文件路径。
pygame.freetype.Font.size
  在渲染中使用的默认点大小
```
这描述了如何在 Pygame 中构建一个字体“对象”。把屏幕上的一个简单对象视为一些代码属性的组合对你来说可能不太自然,但是这与你构建英雄和敌人精灵的方式非常类似。你需要一个字体文件,而不是一个图像文件。在你有一个字体文件后,你可以在你的代码中使用 `pygame.freetype.Font` 函数来创建一个字体对象,然后使用该对象来在屏幕上渲染文本。
因为并不是世界上的每个人的电脑上都有完全一样的字体,因此将你选择的字体与你的游戏捆绑在一起是很重要的。要捆绑字体,首先在你的游戏文件夹中创建一个新的目录,放在你为图像而创建的文件目录旁边。称其为 `fonts`
即使你的计算机操作系统随附了几种字体,但是将这些字体给予其他人是非法的。这看起来很奇怪,但法律就是这样运作的。如果想与你的游戏一起随附一种字体,你必需找到一种开源或知识共享的字体,以允许你随游戏一起提供该字体。
专门提供自由和合法字体的网站包括:
* [Font Library][14]
* [Font Squirrel][15]
* [League of Moveable Type][16]
当你找到你喜欢的字体后,下载下来。解压缩 ZIP 或 [TAR][17] 文件,并移动 `.ttf``.otf` 文件到你的项目目录下的 `fonts` 文件夹中。
你没有安装字体到你的计算机上。你只是放置字体到你游戏的 `fonts` 文件夹中,以便 Pygame 可以使用它。如果你想,你*可以*在你的计算机上安装该字体,但是没有必要。重要的是将字体放在你的游戏目录中,这样 Pygame 可以“描绘”字体到屏幕上。
如果字体文件的名称复杂且带有空格或特殊字符,只需要重新命名它即可。文件名称是完全任意的,并且对你来说,文件名称越简单,越容易将其键入你的代码中。
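例如,可以像下面这样把下载的字体解压并移动到 `fonts` 目录中(压缩包和原始字体的文件名只是假设的示例,请换成你实际下载的文件):
```
# 解压下载的字体包(文件名为假设的示例)
unzip amazdoom-font.zip
# 在项目目录中创建 fonts 文件夹(如果还没有的话)
mkdir -p fonts
# 移动字体文件,并顺手改成一个简单的名字
mv "AmazDooM Regular.ttf" fonts/amazdoom.ttf
```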
现在告诉 Pygame 你的字体。从文档中你知道,当你至少提供了字体文件路径给 `pygame.freetype.Font` 时(文档明确指出所有其余属性都是可选的),你将在返回中获得一个字体对象:
```
Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font
```
创建一个称为 `myfont` 的新变量来充当你在游戏中使用的字体,并将 `Font` 函数的结果放到这个变量中。这个示例中使用 `amazdoom.ttf` 字体,但是你可以使用任何你想使用的字体。在你的设置部分放置这些代码:
```
font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),"fonts","amazdoom.ttf")
font_size = tx
myfont = pygame.freetype.Font(font_path, font_size)
```
### 在 Pygame 中显示文本
现在你已经创建一个字体对象,你需要一个函数来绘制你想绘制到屏幕上的文本。这和你在你的游戏中绘制背景和平台是相同的原理。
首先,创建一个函数,并使用 `myfont` 对象来创建一些文本,设置颜色为某些 RGB 值。这必须是一个全局函数;它不属于任何具体的类:
```
def stats(score,health):
    myfont.render_to(world, (4, 4), "Score:"+str(score), WHITE, None, size=64)
    myfont.render_to(world, (4, 72), "Health:"+str(health), WHITE, None, size=64)
```
当然,你此刻已经知道,如果它不在主循环中,你的游戏将不会发生任何事,所以在文件的底部添加一个对你的 `stats` 函数的调用:
```
    for e in enemy_list:
        e.move()
    stats(player.score,player.health) # draw text
    pygame.display.flip()
```
尝试你的游戏。
当玩家收集奖励品时,得分会上升。当玩家被敌人击中时,生命值下降。成功!
![Keeping score in Pygame][18]
不过,这里有一个问题。当一个玩家被敌人击中时,生命值会*一路*下降,这是不公平的。你刚刚发现了一个非致命的错误。非致命的错误是应用程序中的那些小问题,它们(通常)不会阻止应用程序启动,甚至不会导致其停止工作,但是它们要么没有意义,要么会惹恼用户。这里是解决这个问题的方法。
### 修复生命值计数
当前生命值系统的问题是敌人接触玩家时Pygame 时钟每滴答一次,生命值都会减少。这意味着一个缓慢移动的敌人可能在一次遭遇中就将玩家的生命值降低至 -200这不公平。当然你可以给你的玩家设定 10000 的初始生命值而不用担心它;这可以工作,并且可能没有人会注意。但是这里有一个更好的方法。
当前,你的代码侦查出一个玩家和一个敌人发生碰撞的时候。生命值问题的修复是检测*两个*独立的事件:什么时候玩家和敌人碰撞,并且,在它们碰撞后,什么时候它们*停止*碰撞。
首先,在你的玩家类中,创建一个变量来代表玩家和敌人碰撞在一起:
```
        self.frame = 0
        self.health = 10
        self.damage = 0
```
在你的 `Player` 类的 `update` 函数中,*移除*这块代码块:
```
        for enemy in enemy_hit_list:
            self.health -= 1
            #print(self.health)
```
并且在它的位置,只要玩家当前没有被击中,检查碰撞:
```
        if self.damage == 0:
            for enemy in enemy_hit_list:
                if not self.rect.contains(enemy):
                    self.damage = self.rect.colliderect(enemy)
```
你可能会在你删除的语句块和你刚刚添加的语句块之间看到相似之处。它们都在做相同的工作,但是新的代码更复杂。最重要的是,只有当玩家*当前*没有被击中时,新的代码才运行。这意味着,当一个玩家和敌人碰撞时,这些代码运行一次,而不是像以前那样一直发生碰撞。
新的代码使用了两个新的 Pygame 函数。`self.rect.contains` 函数检查一个敌人当前是否在玩家的边界框内;而当碰撞发生时,`self.rect.colliderect` 会将你的新变量 `self.damage` 设置为 1无论碰撞发生多少次。
现在,即使被一个敌人击中 3 秒,对 Pygame 来说仍然看作一次击中。
我是通过通读 Pygame 的文档发现这些函数的。你没有必要一次读完全部的文档,也没有必要阅读每个函数的每个单词。不过,花时间在你正在使用的新库或新模块的文档上是很重要的;否则,你极有可能在重新发明轮子。不要花费一个下午的时间,尝试把一个解决方案硬拼进某些东西里,而这些问题早已被你正在使用的框架所解决。阅读文档,知悉函数,并从别人的工作中获益!
最后,添加另一个代码语句块来侦测玩家和敌人什么时候不再接触,到那时,再从玩家身上扣除一点生命值。
```
        if self.damage == 1:
            idx = self.rect.collidelist(enemy_hit_list)
            if idx == -1:
                self.damage = 0   # set damage back to 0
                self.health -= 1  # subtract 1 hp
```
注意,*只有*当玩家被击中时,这个新的代码才会被触发。这意味着,在你的玩家在你的游戏世界正在探索或收集奖励时,这个代码不会运行。它仅当 `self.damage` 变量被激活时运行。
当代码运行时,它使用 `self.rect.collidelist` 来查看玩家是否*仍然*接触在你敌人列表中的敌人(当其未侦查到碰撞时,`collidelist` 返回 -1。在它没有接触敌人时是该处理 `self.damage` 的时机:通过设置 `self.damage` 变量回到 0 来使其无效,并减少一点生命值。
现在尝试你的游戏。
### 得分反应
现在,你有了一个让你的玩家知道他们的分数和生命值的方法,当你的玩家达到某些里程碑时,你就可以确保某些事件发生。例如,也许有一个特殊的奖励项目可以恢复一些生命值,也许到达 0 生命值的玩家不得不从一个关卡的起始位置重新开始。
你可以在你的代码中检查这些事件,并且相应地操纵你的游戏世界。你已经知道该怎么做,所以请浏览文档来寻找新的技巧,并且独立地尝试这些技巧。
这里是到目前为止所有的代码:
```
#!/usr/bin/env python3
# draw a world
# add a player and player control
# add player movement
# add enemy and basic collision
# add platform
# add gravity
# add jumping
# add scrolling
# add loot
# add score

# GNU All-Permissive License
# Copying and distribution of this file, with or without modification,
# are permitted in any medium without royalty provided the copyright
# notice and this notice are preserved. This file is offered as-is,
# without any warranty.

import pygame
import sys
import os
import pygame.freetype

'''
Objects
'''

class Platform(pygame.sprite.Sprite):
    # x location, y location, img width, img height, img file
    def __init__(self, xloc, yloc, imgw, imgh, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img)).convert()
        self.image.convert_alpha()
        self.rect = self.image.get_rect()
        self.rect.y = yloc
        self.rect.x = xloc

class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.movex = 0
        self.movey = 0
        self.frame = 0
        self.health = 10
        self.damage = 0
        self.collide_delta = 0
        self.jump_delta = 6
        self.score = 1
        self.images = []
        for i in range(1, 9):
            img = pygame.image.load(os.path.join('images', 'hero' + str(i) + '.png')).convert()
            img.convert_alpha()
            img.set_colorkey(ALPHA)
            self.images.append(img)
        self.image = self.images[0]
        self.rect = self.image.get_rect()

    def jump(self, platform_list):
        self.jump_delta = 0

    def gravity(self):
        self.movey += 3.2  # how fast player falls
        if self.rect.y > worldy and self.movey >= 0:
            self.movey = 0
            self.rect.y = worldy - ty

    def control(self, x, y):
        '''
        control player movement
        '''
        self.movex += x
        self.movey += y

    def update(self):
        '''
        Update sprite position
        '''
        self.rect.x = self.rect.x + self.movex
        self.rect.y = self.rect.y + self.movey

        # moving left
        if self.movex < 0:
            self.frame += 1
            if self.frame > ani * 3:
                self.frame = 0
            self.image = self.images[self.frame // ani]

        # moving right
        if self.movex > 0:
            self.frame += 1
            if self.frame > ani * 3:
                self.frame = 0
            self.image = self.images[(self.frame // ani) + 4]

        # collisions
        enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
        if self.damage == 0:
            for enemy in enemy_hit_list:
                if not self.rect.contains(enemy):
                    self.damage = self.rect.colliderect(enemy)
        if self.damage == 1:
            idx = self.rect.collidelist(enemy_hit_list)
            if idx == -1:
                self.damage = 0   # set damage back to 0
                self.health -= 1  # subtract 1 hp

        loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
        for loot in loot_hit_list:
            loot_list.remove(loot)
            self.score += 1
            print(self.score)

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.collide_delta = 0  # stop jumping
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y + ty
            else:
                self.rect.y = p.rect.y - ty

        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.movey = 0
            self.rect.y = worldy - ty - ty
            self.collide_delta = 0  # stop jumping
            if self.rect.y > g.rect.y:
                self.health -= 1
                print(self.health)

        if self.collide_delta < 6 and self.jump_delta < 6:
            self.jump_delta = 6 * 2
            self.movey -= 33  # how high to jump
            self.collide_delta += 6
            self.jump_delta += 6

class Enemy(pygame.sprite.Sprite):
    '''
    Spawn an enemy
    '''
    def __init__(self, x, y, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img))
        self.movey = 0
        #self.image.convert_alpha()
        #self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0

    def move(self):
        '''
        enemy movement
        '''
        distance = 80
        speed = 8

        self.movey += 3.2

        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance * 2:
            self.rect.x -= speed
        else:
            self.counter = 0

        self.counter += 1

        if not self.rect.y >= worldy - ty - ty:
            self.rect.y += self.movey

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y + ty
            else:
                self.rect.y = p.rect.y - ty

        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.rect.y = worldy - ty - ty

class Level():
    def bad(lvl, eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0], eloc[1], 'yeti.png')  # spawn enemy
            enemy_list = pygame.sprite.Group()  # create enemy group
            enemy_list.add(enemy)  # add enemy to group
        if lvl == 2:
            print("Level " + str(lvl))
        return enemy_list

    def loot(lvl, tx, ty):
        if lvl == 1:
            loot_list = pygame.sprite.Group()
            loot = Platform(200, ty * 7, tx, ty, 'loot_1.png')
            loot_list.add(loot)
        if lvl == 2:
            print(lvl)
        return loot_list

    def ground(lvl, gloc, tx, ty):
        ground_list = pygame.sprite.Group()
        i = 0
        if lvl == 1:
            while i < len(gloc):
                ground = Platform(gloc[i], worldy - ty, tx, ty, 'ground.png')
                ground_list.add(ground)
                i = i + 1
        if lvl == 2:
            print("Level " + str(lvl))
        return ground_list

    def platform(lvl, tx, ty):
        plat_list = pygame.sprite.Group()
        ploc = []
        i = 0
        if lvl == 1:
            ploc.append((20, worldy - ty - 128, 3))
            ploc.append((300, worldy - ty - 256, 3))
            ploc.append((500, worldy - ty - 128, 4))
            while i < len(ploc):
                j = 0
                while j <= ploc[i][2]:
                    plat = Platform((ploc[i][0] + (j * tx)), ploc[i][1], tx, ty, 'ground.png')
                    plat_list.add(plat)
                    j = j + 1
                print('run' + str(i) + str(ploc[i]))
                i = i + 1
        if lvl == 2:
            print("Level " + str(lvl))
        return plat_list

def stats(score, health):
    myfont.render_to(world, (4, 4), "Score:" + str(score), SNOWGRAY, None, size=64)
    myfont.render_to(world, (4, 72), "Health:" + str(health), SNOWGRAY, None, size=64)

'''
Setup
'''
worldx = 960
worldy = 720
fps = 40  # frame rate
ani = 4   # animation cycles
clock = pygame.time.Clock()
pygame.init()
main = True

BLUE = (25, 25, 200)
BLACK = (23, 23, 23)
WHITE = (254, 254, 254)
SNOWGRAY = (137, 164, 166)
ALPHA = (0, 255, 0)

world = pygame.display.set_mode([worldx, worldy])
backdrop = pygame.image.load(os.path.join('images', 'stage.png')).convert()
backdropbox = world.get_rect()
player = Player()  # spawn player
player.rect.x = 0
player.rect.y = 0
player_list = pygame.sprite.Group()
player_list.add(player)
steps = 10
forwardx = 600
backwardx = 230

eloc = []
eloc = [200, 20]
gloc = []
tx = 64  # tile size
ty = 64  # tile size

font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "fonts", "amazdoom.ttf")
font_size = tx
myfont = pygame.freetype.Font(font_path, font_size)

i = 0
while i <= (worldx / tx) + tx:
    gloc.append(i * tx)
    i = i + 1

enemy_list = Level.bad(1, eloc)
ground_list = Level.ground(1, gloc, tx, ty)
plat_list = Level.platform(1, tx, ty)
loot_list = Level.loot(1, tx, ty)

'''
Main loop
'''
while main == True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit(); sys.exit()
            main = False

        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                print("LEFT")
                player.control(-steps, 0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                print("RIGHT")
                player.control(steps, 0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                print('jump')

        if event.type == pygame.KEYUP:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                player.control(steps, 0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                player.control(-steps, 0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                player.jump(plat_list)
            if event.key == ord('q'):
                pygame.quit()
                sys.exit()
                main = False

    # scroll the world forward
    if player.rect.x >= forwardx:
        scroll = player.rect.x - forwardx
        player.rect.x = forwardx
        for p in plat_list:
            p.rect.x -= scroll
        for e in enemy_list:
            e.rect.x -= scroll
        for l in loot_list:
            l.rect.x -= scroll

    # scroll the world backward
    if player.rect.x <= backwardx:
        scroll = backwardx - player.rect.x
        player.rect.x = backwardx
        for p in plat_list:
            p.rect.x += scroll
        for e in enemy_list:
            e.rect.x += scroll
        for l in loot_list:
            l.rect.x += scroll

    world.blit(backdrop, backdropbox)
    player.gravity()  # check gravity
    player.update()
    player_list.draw(world)  # refresh player position
    enemy_list.draw(world)   # refresh enemies
    ground_list.draw(world)  # refresh ground
    plat_list.draw(world)    # refresh platforms
    loot_list.draw(world)    # refresh loot
    for e in enemy_list:
        e.move()
    stats(player.score, player.health)  # draw text
    pygame.display.flip()
    clock.tick(fps)
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/add-scorekeeping-your-python-game
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_maze.png?itok=mZ5LP4-X (connecting yellow dots in a maze)
[2]: https://www.python.org/
[3]: https://www.pygame.org/news
[4]: https://linux.cn/article-9071-1.html
[5]: https://linux.cn/article-10850-1.html
[6]: https://linux.cn/article-10858-1.html
[7]: https://linux.cn/article-10874-1.html
[8]: https://linux.cn/article-10883-1.html
[9]: https://linux.cn/article-11780-1.html
[10]: https://linux.cn/article-11790-1.html
[11]: https://linux.cn/article-11819-1.html
[12]: https://linux.cn/article-11828-1.html
[13]: http://pygame.org/news
[14]: https://fontlibrary.org/
[15]: https://www.fontsquirrel.com/
[16]: https://www.theleagueofmoveabletype.com/
[17]: https://opensource.com/article/17/7/how-unzip-targz-file
[18]: https://opensource.com/sites/default/files/uploads/pygame-score.jpg (Keeping score in Pygame)
[19]: https://linux.cn/article-10902-1.html
@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11841-1.html)
[#]: subject: (My favorite Bash hacks)
[#]: via: (https://opensource.com/article/20/1/bash-scripts-aliases)
[#]: author: (Katie McLaughlin https://opensource.com/users/glasnt)
我珍藏的 Bash 秘籍
======
> 通过别名和其他捷径来提高你经常忘记的那些事情的效率。
![bash logo on green background][1]
如果你整天都在使用计算机,能找到那些需要重复执行的命令并记下它们以便以后轻松使用,那就太棒了。它们全都呆在那里,藏在 `~/.bashrc` 中(或 [zsh 用户][2]的 `~/.zshrc` 中),等待着改善你的生活!
在本文中,我分享了我最喜欢的这些助手命令,对于我经常遗忘的事情,它们很有用,也希望这可以帮助到你,以及为你解决一些经常头疼的问题。
### 完事吱一声
当我执行一个需要长时间运行的命令时,我经常采用多任务的方式,然后就必须回头去检查该操作是否已完成。然而通过有用的 `say` 命令,现在就不用再这样了(这是在 MacOS 上;请根据你的本地环境更改为等效的方式):
```
function looooooooong {
    START=$(date +%s.%N)
    $*
    EXIT_CODE=$?
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    RES=$(python -c "diff = $DIFF; min = int(diff / 60); print('%s min' % min)")
    result="$1 completed in $RES, exit code $EXIT_CODE."
    echo -e "\n⏰ $result"
    ( say -r 250 $result 2>&1 > /dev/null & )
}
```
这个命令会记录命令的开始和结束时间,计算所需的分钟数,并“说”出调用的命令、花费的时间和退出码。当简单的控制台铃声无法使用时,我发现这个超级有用。
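例如,把这个函数放进 `~/.bashrc` 并重新加载后,可以像下面这样使用(这里用 `sleep` 模拟一个耗时命令,仅作演示;注释中的输出内容是依据上面函数的逻辑推断的):
```
source ~/.bashrc
looooooooong sleep 65
# 约一分钟后会打印(并“说”出):
# ⏰ sleep completed in 1 min, exit code 0.
```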
### 安装小助手
我早年间就开始使用 Ubuntu而我需要学习的第一件事就是如何安装软件包。我首先添加的别名之一就是它的安装助手根据当年的流行梗命名
```
alias canhas="sudo apt-get install -y"
```
### GPG 签名
有时候,我必须在没有 GPG 扩展程序或应用程序的情况下给电子邮件签署 [GPG][3] 签名,我会跳到命令行并使用以下令人讨厌的别名:
```
alias gibson="gpg --encrypt --sign --armor"
alias ungibson="gpg --decrypt"
```
### Docker
Docker 的子命令很多,而 Docker compose 的子命令更多。我以前老是忘记 `--rm` 标志,但有了这些有用的别名之后就不会再忘了:
```
alias dc="docker-compose"
alias dcr="docker-compose run --rm"
alias dcb="docker-compose run --rm --build"
```
### Google Cloud 的 gcurl 助手
对于我来说Google Cloud 是一个相对较新的东西,而它有[极多的文档][4]。`gcurl` 是一个别名,可确保在用带有身份验证标头的本地 `curl` 命令连接 Google Cloud API 时,可以获得所有正确的标头。
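原文没有给出这个别名的具体定义,下面是 Google Cloud 文档中常见的一种写法(仅作示意,假设你已经安装并登录了 `gcloud` 命令行工具):
```
# 带上 OAuth 访问令牌和 JSON 内容类型头的 curl 封装
alias gcurl='curl --header "Authorization: Bearer $(gcloud auth print-access-token)" --header "Content-Type: application/json"'
```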
### Git 和 ~/.gitignore
我工作中用 Git 很多,因此我有一个专门的部分来介绍 Git 助手。
我最有用的助手之一是我用来克隆 GitHub 存储库的。你不必运行:
```
git clone git@github.com:org/repo /Users/glasnt/git/org/repo
```
我设置了一个克隆函数:
```
clone(){
    echo Cloning $1 to ~/git/$1
    cd ~/git
    git clone git@github.com:$1 $1
    cd $1
}
```
我还有一个“刷新上游”命令,即使每次在 `~/.bashrc` 文件里看到它时我总会忍不住傻笑:
```
alias yoink="git checkout master && git fetch upstream master && git merge upstream/master"
```
给 Git 一族的另一个助手是全局忽略文件。在你的 `git config --global --list` 中,你应该看到一个 `core.excludesfile`。如果没有,请[创建一个][6],然后将你总是放到各个 `.gitignore` 文件中的内容填满它。作为 MacOS 上的 Python 开发人员,对我来说,这些内容是:
```
.DS_Store     # macOS clutter
venv/         # I never want to commit my virtualenv
*.egg-info/*  # ... nor any locally compiled packages
__pycache__   # ... or source
*.swp         # ... nor any files open in vim
```
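如果你还没有设置全局忽略文件,大致可以用下面两条命令创建并注册它(文件路径只是示例,可以换成你喜欢的位置):
```
# 创建一个空的全局忽略文件
touch ~/.gitignore_global
# 告诉 Git 使用它
git config --global core.excludesfile ~/.gitignore_global
```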
你可以在 [Gitignore.io][7] 或 GitHub 上的 [Gitignore 存储库][8]上找到其他建议。
### 轮到你了
你最喜欢的助手命令是什么?请在评论中分享。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/bash-scripts-aliases
作者:[Katie McLaughlin][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/glasnt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/19/9/getting-started-zsh
[3]: https://gnupg.org/
[4]: https://cloud.google.com/service-infrastructure/docs/service-control/getting-started
[5]: mailto:git@github.com
[6]: https://help.github.com/en/github/using-git/ignoring-files#create-a-global-gitignore
[7]: https://www.gitignore.io/
[8]: https://github.com/github/gitignore
@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11877-1.html)
[#]: subject: (What's HTTPS for secure computing?)
[#]: via: (https://opensource.com/article/20/1/confidential-computing)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
用于安全计算的 HTTPS 是什么?
======
> 在默认的情况下,网站的安全性还不足够。
![](https://img.linux.net.cn/data/attachment/album/202002/11/123552rqncn4c7474j44jq.jpg)
在过去的几年里,寻找一个只以 “http://...” 开头的网站变得越来越难,这是因为业界终于意识到,网络安全“是件事”,同时也是因为客户端和服务端之间建立和使用 https 连接变得更加容易了。类似的转变可能正以不同的方式发生在云计算、边缘计算、物联网、区块链、人工智能、机器学习等领域。长久以来,我们都知道应该对存储的静态数据和在网络中传输的数据进行加密,但是在使用和处理数据的时候对它进行加密是困难且昂贵的。可信计算(使用例如<ruby>受信任的执行环境<rt>Trusted Execution Environments</rt></ruby> TEEs 这样的硬件功能,来为数据和算法提供这种类型的保护)可以保护主机系统中的数据,或者易受攻击的环境中的数据。
关于 [TEEs][2],当然,还有我和 Nathaniel McCallum 共同创立的 [Enarx 项目][3],我已经写过几次文章(参见《[给每个人的 Enarx一个任务][4]》和《[Enarx 迈向多平台][5]》。Enarx 使用 TEEs 来提供独立于平台和语言的部署平台以此来让你能够安全地将敏感应用或者敏感组件例如微服务部署在你不信任的主机上。当然Enarx 是完全开源的(顺便提一下,我们使用的是 Apache 2.0 许可证)。能够在你不信任的主机上运行工作负载,这是可信计算的承诺,它扩展了对静态敏感数据和传输中数据加密的常规做法:
* **存储**:你要加密你的静态数据,因为你不完全信任你的基础存储架构。
* **网络**:你要加密你正在传输中的数据,因为你不完全信任你的基础网络架构。
* **计算**:你要加密你正在使用中的数据,因为你不完全信任你的基础计算架构。
关于信任,我有非常多的话想说,而且,上述说法里的单词“**完全**”是很重要的(在重新读我写的这篇文章的时候,我新加了这个单词)。不论哪种情况,你必须在一定程度上信任你的基础设施,无论是传递你的数据包还是存储你的数据块,例如,对于计算基础架构,你必须要去信任 CPU 和与之关联的固件,这是因为如果你不信任他们,你就无法真正地进行计算(现在有一些诸如<ruby>同态加密<rt>homomorphic encryption</rt></ruby>一类的技术,这些技术正在开始提供一些可能性,但是它们依然有限,这些技术还不够成熟)。
考虑到已经发现的一些 CPU 安全性问题,人们自然会时常质疑:是否应该完全信任 CPU以及它们在面对针对其所在主机的物理攻击时是否完全安全。
这两个问题的回答都是“不”,但是在考虑到大规模可用性和普遍推广的成本,这已经是我们当前拥有的最好的技术了。为了解决第二个问题,没有人去假装这项技术(或者任何的其他技术)是完全安全的:我们需要做的是思考我们的[威胁模型][6]并确定这个情况下的 TEEs 是否为我们的特殊需求提供了足够的安全防护。关于第一个问题Enarx 采用的模型是在部署时就对你是否信任一个特定的 CPU 组做出决定。举个例子,如果供应商 Q 的 R 代芯片被发现有漏洞,可以很简单地说“我拒绝将我的工作内容部署到 Q 的 R 代芯片上去,但是仍然可以部署到 Q 的 S 型号、T 型号和 U 型号的芯片以及任何 P、M 和 N 供应商的任何芯片上去。”
我认为这里发生了三处改变,这些改变引起了人们现在对<ruby>机密计算<rt>confidential computing</rt></ruby>的兴趣和采用。
1. **硬件可用**:只是在过去的 6 到 12 个月里,支持 TEEs 的硬件才开始变得广泛可用,这会儿市场上的主要例子是 Intel 的 SGX 和 AMD 的 SEV。我们期望在未来可以看到支持 TEE 的硬件的其他例子。
2. **行业就绪**就像上云越来越多地被接受作为应用程序部署的模型监管机构和立法机构也在提高各类组织保护其管理的数据的要求。组织开始寻求在不受信任的主机上运行敏感程序或者处理敏感数据的应用程序的方法更确切地说是在无法完全信任、而其上又带有敏感数据的主机上运行的方法。这不足为奇如果芯片制造商看不到这项技术的市场他们就不会投太多的钱在这项技术上。Linux 基金会的[机密计算联盟CCC][7]的成立,就是业界对如何寻找使用加密计算的通用模型、并且鼓励开源项目使用这些技术感兴趣的案例。(红帽发起的 Enarx 是一个 CCC 项目。)
3. **开放源码**:就像区块链一样,机密计算是使用开源绝对明智的技术之一。如果你要运行敏感程序,你需要去信任正在为你运行的程序。不仅仅是 CPU 和固件,同样还有在 TEE 内执行你的工作负载的框架。可以很好地说,“我不信任主机机器和它上面的软件栈,所以我打算使用 TEE”但是如果你不够了解 TEE 软件环境那你就是将一种软件不透明换成另外一种。TEEs 的开源支持将允许你或者社区(实际上是你与社区)以一种专有软件不可能实现的方式来检查和审计你所运行的程序。这就是为什么 CCC 位于 Linux 基金会旗下(这个基金会致力于开放式开发模型)并鼓励 TEE 相关的软件项目加入且成为开源项目(如果它们还没有成为开源)。
我认为,在过去的 15 到 20 年里,硬件可用、行业就绪和开放源码已成为推动技术改变的驱动力。区块链、人工智能、云计算、<ruby>大规模计算<rt>webscale computing</rt></ruby>、大数据和互联网商务都是这三个点同时发挥作用的例子,并且在业界带来了巨大的改变。
在一般情况下,安全是我们这数十年来一直听到的一种承诺,并且其仍未被实现。老实说,我不确定它未来会不会实现。但是随着新技术的到来,特定用例的安全变得越来越实用和无处不在,并且在业内受到越来越多的期待。这样看起来,机密计算似乎已准备好成为下一个重大变化 —— 而你,我亲爱的读者,可以一起来加入到这场革命(毕竟它是开源的)。
这篇文章最初是发布在 Alice, Eve, and Bob 上的,这是得到了作者许可的重发。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/confidential-computing
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
[3]: https://enarx.io/
[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
[7]: https://confidentialcomputing.io/
[8]: tmp.VEZpFGxsLv#1
[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/
@ -1,41 +1,42 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11846-1.html)
[#]: subject: (Keep a journal of your activities with this Python program)
[#]: via: (https://opensource.com/article/20/1/python-journal)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用这个 Python 程序记录你的活动
======
> jrnl 可以创建可搜索、带时间戳、可导出、(如果需要)加密的日常活动日志。在我们的 20 个使用开源提升生产力的系列的第八篇文章中了解更多。
![](https://img.linux.net.cn/data/attachment/album/202002/03/105455tx03zo2pu7woyusp.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 jrnl 记录日志
在我的公司,许多人会在下班之前在 Slack 上发送一个“一天结束”的状态。在有着许多项目和全球化的团队里,这是一个分享你已完成、未完成以及你需要哪些帮助的很好的方式。但有时候我太忙了,以至于我忘了做了什么。这时候就需要记录日志了。
![jrnl][2]
打开一个文本编辑器并在你做一些事的时候添加一行很容易。但是在需要找出你在什么时候做的笔记,或者要快速提取相关的行时会有挑战。幸运的是,[jrnl][3] 可以提供帮助。
jrnl 能让你在命令行中快速输入条目、搜索过去的条目并导出为 HTML 和 Markdown 等富文本格式。你可以有多个日志,这意味着你可以将工作条目与私有条目分开。它将条目存储为纯文本,因此即使 jrnl 停止工作,数据也不会丢失。
由于 jrnl 是一个 Python 程序,最简单的安装方法是使用 `pip3 install jrnl`。这将确保你获得最新和最好的版本。第一次运行它会询问一些问题,接下来就能正常使用。
![jrnl's first run][4]
现在,每当你需要做笔记或记录日志时,只需输入 `jrnl <some text>`,它将带有时间戳的记录保存到默认文件中。你可以使用 `jrnl -on YYYY-MM-DD` 搜索特定日期的条目,用 `jrnl -from YYYY-MM-DD` 搜索在那日期之后的条目,以及用 `jrnl -to YYYY-MM-DD` 搜索到那日期为止的条目。搜索词可以与 `-and` 参数结合使用,允许像 `jrnl -from 2019-01-01 -and -to 2019-12-31` 这类搜索。
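把上面这些用法串起来,一个典型的会话大致如下(条目内容是虚构的示例):
```
# 记录一条带时间戳的日志条目
jrnl 修复了构建流水线的问题
# 查看 2019 年全年的条目
jrnl -from 2019-01-01 -and -to 2019-12-31
```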
你还可以使用 `--edit` 标志编辑日志中的条目。开始之前,通过编辑文件 `~/.config/jrnl/jrnl.yaml` 来设置默认编辑器。你还可以指定日志使用什么文件、用于标签的特殊字符以及一些其他选项。现在,重要的是设置编辑器。我使用 Vimjrnl 的文档中有一些使用其他编辑器(如 VSCode 和 Sublime Text的[有用提示][5]。
![Example jrnl config file][6]
jrnl 还可以加密日志文件。通过设置全局 `encrypt` 变量,你将告诉 jrnl 加密你定义的所有日志。还可以在配置文件中针对单个日志设置 `encrypt: true` 来单独加密它。
```
journals:
@ -46,7 +47,7 @@ journals:
    encrypt: true
```
如果日志尚未加密,系统将提示你输入对它进行任何操作的密码。日志文件将加密保存在磁盘上,免受窥探。[jrnl 文档][7] 中包含其工作原理、使用哪些加密方式等的更多信息。
![Encrypted jrnl file][8]
@ -59,7 +60,7 @@ via: https://opensource.com/article/20/1/python-journal
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,35 +1,33 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11838-1.html)
[#]: subject: (How to Set or Change Timezone in Ubuntu Linux [Beginners Tip])
[#]: via: (https://itsfoss.com/change-timezone-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
如何在 Ubuntu Linux 中设置或更改时区
======
[你安装 Ubuntu 时][1],它要求你设置时区。如果你选择了一个错误的时区,或者你搬到了世界上的其它地方,你可以很容易地在以后更改它。
### 如何在 Ubuntu 和其它 Linux 发行版中更改时区
这里有两种方法来更改 Ubuntu 中的时区。你可以使用图形化设置,或在终端中使用 `timedatectl` 命令。你也可以直接更改 `/etc/timezone` 文件,但是我不建议这样做。
在这篇初学者教程中,我将向你展示图形化和终端两种方法:
* [通过 GUI 更改 Ubuntu 中的时区][2](适合桌面用户)
* [通过命令行更改 Ubuntu 中的时区][3](适用于桌面和服务器)
![][4]
#### 方法 1: 通过终端更改 Ubuntu 时区
[Ubuntu][5] 或一些使用 systemd 的其它发行版可以在 Linux 终端中使用 `timedatectl` 命令来设置时区。
你可以使用不带任何参数的 `timedatectl` 命令来检查当前的日期和时区设置:
```
[email protected]:~$ timedatectl
@ -44,9 +42,9 @@ systemd-timesyncd.service active: yes
正如你在上面的输出中所看到的,我的系统使用 Asia/Kolkata。它也告诉我现在比世界时早 5 小时 30 分钟。
为了在 Linux 中设置时区,你需要知道准确的时区。你必须使用正确格式的时区(时区格式是“洲/城市”)。
为获取时区列表,使用 `timedatectl` 命令的 `list-timezones` 参数:
```
timedatectl list-timezones
@ -56,9 +54,9 @@ timedatectl list-timezones
![Timezones List][6]
你可以使用向上箭头和向下箭头或 `PgUp` 和 `PgDown` 键来在页面之间移动。
你也可以对输出使用 `grep`,并搜索你的时区。例如,假如你正在寻找欧洲的时区,你可以使用:
```
timedatectl list-timezones | grep -i europe
@ -72,7 +70,7 @@ timedatectl set-timezone Europe/Paris
它虽然不显示任何成功信息,但是时区会立即更改。你不需要重新启动或注销。
记住,虽然你不需要成为 root 用户并对命令使用 `sudo`,但是你的账户仍然需要拥有管理员权限才能更改时区。
你可以使用 [date 命令][7] 来验证更改后的时间和时区:
@ -83,7 +81,7 @@ Sat Jan 18 13:56:26 CET 2020
#### 方法 2: 通过 GUI 更改 Ubuntu 时区
按下 `super` 键Windows 键),并搜索“设置”:
![Applications Menu Settings][8]
@ -91,7 +89,7 @@ Sat Jan 18 13:56:26 CET 2020
![Go to Settings -> Details][9]
在详细信息中,你将在左侧边栏中找到“日期和时间”。在这里,你应该关闭自动时区选项(如果它已经被启用),然后单击时区:
![In Details -> Date & Time, turn off the Automatic Time Zone][10]
@ -99,7 +97,7 @@ Sat Jan 18 13:56:26 CET 2020
![Select a timezone][11]
在选择新的时区后,除了关闭这个地图之外,你不必再做任何事情。不需要注销或[关闭 Ubuntu][12]。
我希望这篇快速教程能帮助你在 Ubuntu 和其它 Linux 发行版中更改时区。如果你有问题或建议,请告诉我。
@ -110,7 +108,7 @@ via: https://itsfoss.com/change-timezone-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11856-1.html)
[#]: subject: (One open source chat tool to rule them all)
[#]: via: (https://opensource.com/article/20/1/open-source-chat-tool)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
一个通过 IRC 管理所有聊天的开源聊天工具
======
> BitlBee 将多个聊天应用集合到一个界面中。在我们的 20 个使用开源提升生产力的系列的第九篇文章中了解如何设置和使用 BitlBee。
![](https://img.linux.net.cn/data/attachment/album/202002/05/123636dw8uw34mbkqzmw84.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 将所有聊天都放到 BitlBee 中
即时消息和聊天已经成为网络世界的主要内容。如果你像我一样,你可能打开了五六个不同的应用,与你的朋友、同事和其他人交谈。关注所有的聊天真的很痛苦。谢天谢地,你可以使用一个应用(好吧,是两个)将这些聊天整合到一个地方。
![BitlBee on XChat][2]
[BitlBee][3] 是作为服务运行的应用,它可以将标准的 IRC 客户端与大量的消息服务进行桥接。而且,由于它本质上是 IRC 服务器,因此你可以选择很多客户端。
BitlBee 几乎包含在所有 Linux 发行版中。在 Ubuntu 上安装(我选择的 Linux 桌面),类似这样:
```
sudo apt install bitlbee-libpurple
```
在其他发行版上,包名可能略有不同,但搜索 “bitlbee” 应该就能看到。
你会注意到我用的 libpurple 版的 BitlBee。这个版本能让我使用 [libpurple][4] 即时消息库中提供的所有协议,该库最初是为 [Pidgin][5] 开发的。
安装完成后,服务应会自动启动。现在,使用一个 IRC 客户端(图片中为 [XChat][6]),我可以连接到端口 6667标准 IRC 端口)上的服务。
![Initial BitlBee connection][7]
你将自动连接到控制频道 &bitlbee。此频道对于你是独一无二的在多用户系统上每个人都有一个自己的。在这里你可以配置该服务。
在控制频道中输入 `help`,你可以随时获得完整的文档。浏览它,然后使用 `register` 命令在服务器上注册帐户。
```
register <mypassword>
```
现在你在服务器上所做的任何配置更改IM 帐户、设置等)都将在输入 `save` 时保存。每当你连接时,使用 `identify <mypassword>` 连接到你的帐户并加载这些设置。
![purple settings][8]
命令 `help purple` 将显示 libpurple 提供的所有可用协议。例如,我安装了 [telegram-purple][9] 包,它增加了连接到 Telegram 的能力。我可以使用 `account add` 命令将我的电话号码作为帐户添加。
```
account add telegram +15555555
```
BitlBee 将显示它已添加帐户。你可以使用 `account list` 列出你的帐户。因为我只有一个帐户,我可以通过 `account 0 on` 登录,它会进行 Telegram 登录,列出我所有的朋友和聊天,接下来就能正常聊天了。
但是,对于 Slack 这个最常见的聊天系统之一呢?你可以安装 [slack-libpurple][10] 插件,并且对 Slack 执行同样的操作。如果你不愿意编译和安装这些,这可能不适合你。
按照插件页面上的说明操作,安装后重新启动 BitlBee 服务。现在,当你运行 `help purple` 时,应该会列出 Slack。像其他协议一样添加一个 Slack 帐户。
```
account add slack ksonney@myslack.slack.com
account 1 set password my_legacy_API_token
account 1 on
```
你知道么,你已经连接到 Slack 中,你可以通过 `chat add` 命令添加你感兴趣的 Slack 频道。比如:
```
chat add 1 happyparty
```
将 Slack 频道 happyparty 添加为本地频道 #happyparty。现在可以使用标准 IRC `/join` 命令访问该频道。这很酷。
BitlBee 和 IRC 客户端帮助我的(大部分)聊天和即时消息保存在一个地方,并减少了我的分心,因为我不再需要查找并切换到任何一个刚刚找我的应用上。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-chat-tool
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://opensource.com/sites/default/files/uploads/productivity_9-1.png (BitlBee on XChat)
[3]: https://www.bitlbee.org/
[4]: https://developer.pidgin.im/wiki/WhatIsLibpurple
[5]: http://pidgin.im/
[6]: http://xchat.org/
[7]: https://opensource.com/sites/default/files/uploads/productivity_9-2.png (Initial BitlBee connection)
[8]: https://opensource.com/sites/default/files/uploads/productivity_9-3.png (purple settings)
[9]: https://github.com/majn/telegram-purple
[10]: https://github.com/dylex/slack-libpurple
[11]: mailto:ksonney@myslack.slack.com
@ -1,40 +1,41 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11858-1.html)
[#]: subject: (Use this Twitter client for Linux to tweet from the terminal)
[#]: via: (https://opensource.com/article/20/1/tweet-terminal-rainbow-stream)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用这个 Twitter 客户端在 Linux 终端中发推特
======
> 在我们的 20 个使用开源提升生产力的系列的第十篇文章中,使用 Rainbow Stream 跟上你的 Twitter 流而无需离开终端。
![](https://img.linux.net.cn/data/attachment/album/202002/06/113720bwi55j7xcccwwwi0.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 通过 Rainbow Stream 跟上 Twitter
我喜欢社交网络和微博。它快速、简单,还有我可以与世界分享我的想法。当然,缺点是几乎所有非 Windows 的桌面客户端都是网站的封装。[Twitter][2] 有很多客户端,但我真正想要的是轻量、易于使用,最重要的是吸引人的客户端。
![Rainbow Stream for Twitter][3]
[Rainbow Stream][4] 是好看的 Twitter 客户端之一。它简单易用,并且可以通过 `pip3 install rainbowstream` 快速安装。第一次运行时,它将打开浏览器窗口,并让你通过 Twitter 授权。完成后,你将回到命令行,你的 Twitter 时间线将开始滚动。
![Rainbow Stream first run][5]
要了解的最重要的命令是 `p` 暂停推流、`r` 继续推流、`h` 得到帮助,以及 `t` 发布新的推文。例如,`h tweets` 将提供发送和回复推文的所有选项。另一个有用的帮助页面是 `h messages`,它提供了处理直接消息的命令,这是我妻子和我经常使用的东西。还有很多其他命令,我会经常回头查阅帮助。
随着时间线的滚动,你可以看到它有完整的 UTF-8 支持,并以正确的字体显示推文被转推以及喜欢的次数,图标和 emoji 也能正确显示。
![Kill this love][6]
关于 Rainbow Stream 的*最好*功能之一就是你不必放弃照片和图像。默认情况下,此功能是关闭的,但是你可以使用 `config` 命令尝试它。
```
config IMAGE_ON_TERM = true
```
此命令将任何图像渲染为 ASCII 艺术。如果你有大量照片流,它可能会有点多,但是我喜欢。它有非常复古的 1990 年代 BBS 感觉,我也确实喜欢 1990 年代的 BBS 场景。
@ -50,7 +51,7 @@ via: https://opensource.com/article/20/1/tweet-terminal-rainbow-stream
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11869-1.html)
[#]: subject: (Read Reddit from the Linux terminal)
[#]: via: (https://opensource.com/article/20/1/open-source-reddit-client)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
在 Linux 终端中阅读 Reddit
======
> 在我们的 20 个使用开源提升生产力的系列的第十一篇文章中使用 Reddit 客户端 Tuir 在工作中短暂休息一下。
![](https://img.linux.net.cn/data/attachment/album/202002/09/104113w1ytjmlv1jly0j1t.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 Tuir 阅读 Reddit
短暂休息对于保持生产力很重要。我休息时喜欢去的地方之一是 [Reddit][2],如果你愿意,这可能是一个很好的资源。我在那里发现了各种有关 DevOps、生产力、Emacs、鸡和 ChromeOS 项目的文章。这些讨论可能很有价值。我还关注了一些只有动物图片的子板,因为我喜欢动物(而不只是鸡)照片,有时经过长时间的工作后,我真正需要的是小猫照片。
![/r/emacs in Tuir][3]
当我阅读 Reddit不仅仅是看动物宝宝的图片我使用 [Tuir][4]Reddit 终端 UI。Tuir 是功能齐全的 Reddit 客户端,可以在运行 Python 的任何系统上运行。安装是通过 `pip` 完成的,非常简单。
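原文没有给出具体命令,安装过程大致如下(假设 PyPI 上的包名就是 `tuir`
```
pip3 install tuir
# 安装完成后直接运行
tuir
```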
首次运行时Tuir 会进入 Reddit 默认文章列表。屏幕的顶部和底部有列出不同命令的栏。顶部栏显示你在 Reddit 上的位置,第二行显示根据 Reddit “Hot/New/Controversial” 等类别筛选的命令。按下筛选器前面的数字触发筛选。
![Filtering by Reddit's "top" category][5]
你可以使用箭头键或 `j`、`k`、`h` 和 `l` 键浏览列表,这与 Vi/Vim 使用的键相同。底部栏有用于应用导航的命令。如果要跳转到另一个子板,只需按 `/` 键打开提示,然后输入你要进入的子板名称。
![Logging in][6]
某些东西除非你登录,否则无法访问。如果你尝试执行需要登录的操作,那么 Tuir 就会提示你,例如发布新文章 `c`)或赞成/反对 `a` 和 `z`)。要登录,请按 `u` 键。这将打开浏览器以通过 OAuth2 登录Tuir 将保存令牌。之后,你的用户名应出现在屏幕的右上方。
Tuir 还可以打开浏览器来查看图像、加载链接等。稍作调整,它甚至可以在终端中显示图像(尽管我没有让它可以正常工作)。
总的来说,我对 Tuir 在我需要休息时能快速跟上 Reddit 感到很满意。
Tuir 是现已淘汰的 [RTV][7] 的两个分叉之一。另一个是 [TTRV][8],它还无法通过 `pip` 安装,但功能相同。我期待看到它们随着时间的推移脱颖而出。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-reddit-client
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://www.reddit.com/
[3]: https://opensource.com/sites/default/files/uploads/productivity_11-1.png (/r/emacs in Tuir)
[4]: https://gitlab.com/ajak/tuir
[5]: https://opensource.com/sites/default/files/uploads/productivity_11-2.png (Filtering by Reddit's "top" category)
[6]: https://opensource.com/sites/default/files/uploads/productivity_11-3.png (Logging in)
[7]: https://github.com/michael-lazar/rtv
[8]: https://github.com/tildeclub/ttrv
@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11876-1.html)
[#]: subject: (Get your RSS feeds and podcasts in one place with this open source tool)
[#]: via: (https://opensource.com/article/20/1/open-source-rss-feed-reader)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用此开源工具在一处收取你的 RSS 订阅源和播客
======
> 在我们的 20 个使用开源提升生产力的系列的第十二篇文章中使用 Newsboat 收取你的新闻 RSS 源和播客。
![](https://img.linux.net.cn/data/attachment/album/202002/10/162526wv5jdl0m12sw10md.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 Newsboat 访问你的 RSS 源和播客
RSS 新闻源是了解各个网站最新消息的非常方便的方法。除了 Opensource.com我还会关注一年一度的系统管理员工具盘点 [SysAdvent][2]、一些我最喜欢的作者以及一些网络漫画。RSS 阅读器可以让我“批处理”阅读内容,因此,我每天不会在不同的网站上花费很多时间。
![Newsboat][3]
[Newsboat][4] 是一个基于终端的 RSS 订阅源阅读器,外观感觉很像电子邮件程序 [Mutt][5]。它使阅读新闻变得容易,并有许多不错的功能。
安装 Newsboat 非常容易,因为它包含在大多数发行版(以及 MacOS 上的 Homebrew中。安装后只需在 `~/.newsboat/urls` 中添加订阅源。如果你是从其他阅读器迁移而来,并有导出的 OPML 文件,那么可以使用以下方式导入:
```
newsboat -i </path/to/my/feeds.opml>
```
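如果是手动添加,`urls` 文件就是一个每行一个订阅源地址的纯文本文件,例如(这两个地址只是示例):
```
cat >> ~/.newsboat/urls << 'EOF'
https://opensource.com/feed
https://sysadvent.blogspot.com/feeds/posts/default
EOF
```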
添加订阅源后Newsboat 的界面非常熟悉,特别是如果你使用过 Mutt。你可以使用箭头键上下滚动使用 `r` 检查某个源中是否有新项目,使用 `R` 检查所有源中是否有新项目,按回车打开订阅源,并选择要阅读的文章。
![Newsboat article list][6]
但是,你不仅限于本地 URL 列表。Newsboat 还是 [Tiny Tiny RSS][7]、ownCloud 和 Nextcloud News 等新闻阅读服务以及一些 Google Reader 后续产品的客户端。[Newsboat 的文档][8]中涵盖了有关此的详细信息以及其他许多配置选项。
![Reading an article in Newsboat][9]
#### 播客
Newsboat 还通过 Podboat 提供了[播客支持][10]Podboat 是一个附带的应用,它可帮助下载和排队播客节目。在 Newsboat 中查看播客源时,按下 `e` 将节目添加到你的下载队列中。所有信息将保存在 `~/.newsboat` 目录中的队列文件中。Podboat 读取此队列并将节目下载到本地磁盘。你可以在 Podboat 的用户界面(外观和行为类似于 Newsboat执行此操作也可以使用 `podboat -a` 让 Podboat 下载所有内容。作为播客人和播客听众,我认为这*真的*很方便。
![Podboat][11]
总体而言Newsboat 有一些非常好的功能,并且是一些基于 Web 或桌面应用的不错的轻量级替代方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-rss-feed-reader
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://sysadvent.blogspot.com/
[3]: https://opensource.com/sites/default/files/uploads/productivity_12-1.png (Newsboat)
[4]: https://newsboat.org
[5]: http://mutt.org/
[6]: https://opensource.com/sites/default/files/uploads/productivity_12-2.png (Newsboat article list)
[7]: https://tt-rss.org/
[8]: https://newsboat.org/releases/2.18/docs/newsboat.html
[9]: https://opensource.com/sites/default/files/uploads/productivity_12-3.png (Reading an article in Newsboat)
[10]: https://newsboat.org/releases/2.18/docs/newsboat.html#_podcast_support
[11]: https://opensource.com/sites/default/files/uploads/productivity_12-4.png (Podboat)
@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11879-1.html)
[#]: subject: (Use this open source tool to get your local weather forecast)
[#]: via: (https://opensource.com/article/20/1/open-source-weather-forecast)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用这个开源工具获取本地天气预报
======
> 在我们的 20 个使用开源提升生产力的系列的第十三篇文章中,使用 wego 来了解出门前你是否需要外套、雨伞或者防晒霜。
![](https://img.linux.net.cn/data/attachment/album/202002/11/140842a8qwomfeg9mwegg8.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 wego 了解天气
过去十年我对我的职业最满意的地方之一是大多数时候是远程工作。尽管现实情况是我很多时候是在家里办公,但我可以在世界上任何地方工作。缺点是,离家时我会根据天气做出一些决定。在我居住的地方,“晴朗”可以表示从“酷热”、“低于零度”到“一小时内会小雨”。能够了解实际情况和快速预测非常有用。
![Wego][2]
[Wego][3] 是用 Go 编写的程序,可以获取并显示你的当地天气。如果你愿意,它甚至可以用闪亮的 ASCII 艺术效果进行渲染。
要安装 `wego`,你需要确保在系统上安装了 [Go][4]。之后,你可以使用 `go get` 命令获取最新版本。你可能还想将 `~/go/bin` 目录添加到路径中:
```
go get -u github.com/schachmat/wego
export PATH=~/go/bin:$PATH
wego
```
首次运行时,`wego` 会报告缺失 API 密钥。现在你需要决定一个后端。默认后端是 [Forecast.io][5],它是 [Dark Sky][6]的一部分。`wego` 还支持 [OpenWeatherMap][7] 和 [WorldWeatherOnline][8]。我更喜欢 OpenWeatherMap因此我将在此向你展示如何设置。
你需要在 OpenWeatherMap [注册一个 API 密钥][9]。注册是免费的,尽管免费的 API 密钥限制了一天内可以查询的数量,但这对于普通用户来说应该没问题。得到 API 密钥后,把它放到 `~/.wegorc` 文件中。现在可以填写你的位置、语言,以及使用公制、英制(英国/美国)还是国际单位制SI。OpenWeatherMap 可通过名称、邮政编码、坐标和 ID 确定位置,这是我喜欢它的原因之一。
```
# wego configuration for OEM
aat-coords=false
aat-monochrome=false
backend=openweathermap
days=3
forecast-lang=en
frontend=ascii-art-table
jsn-no-indent=false
location=Pittsboro
owm-api-key=XXXXXXXXXXXXXXXXXXXXX
owm-debug=false
owm-lang=en
units=imperial
```
现在,在命令行运行 `wego` 将显示接下来三天的当地天气。
`wego` 还可以输出 JSON 以便程序使用,还可显示 emoji。你可以使用 `-f` 参数或在 `.wegorc` 文件中指定前端。
![Wego at login][10]
如果你想在每次打开 shell 或登录主机时查看天气,只需将 wego 添加到 `~/.bashrc`(我这里是 `~/.zshrc`)即可。
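也就是在文件末尾追加一行(一个最简单的示意,假设 `~/go/bin` 已经在你的 PATH 中):
```
# 每次打开 shell 时显示天气预报
echo 'wego' >> ~/.bashrc
```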
[wttr.in][11] 项目是 wego 的一个基于 Web 的封装。它提供了一些其他显示选项,并且可以在同名网站上看到效果。关于 wttr.in 的一件很酷的事情是,你可以使用 `curl` 获取一行天气信息。我有一个名为 `get_wttr` 的 shell 函数,用于获取当前简化的预报信息。
```
get_wttr() {
  curl -s "wttr.in/Pittsboro?format=3"    
}
```
![weather tool for productivity][12]
现在,在我离开家之前,我就可以通过命令行快速简单地获取我是否需要外套、雨伞或者防晒霜了。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-weather-forecast
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
[2]: https://opensource.com/sites/default/files/uploads/productivity_13-1.png (Wego)
[3]: https://github.com/schachmat/wego
[4]: https://golang.org/doc/install
[5]: https://forecast.io
[6]: https://darksky.net
[7]: https://openweathermap.org/
[8]: https://www.worldweatheronline.com/
[9]: https://openweathermap.org/api
[10]: https://opensource.com/sites/default/files/uploads/productivity_13-2.png (Wego at login)
[11]: https://github.com/chubin/wttr.in
[12]: https://opensource.com/sites/default/files/uploads/day13-image3.png (weather tool for productivity)
@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11867-1.html)
[#]: subject: (What's your favorite Linux distribution?)
[#]: via: (https://opensource.com/article/20/1/favorite-linux-distribution)
[#]: author: (Opensource.com https://opensource.com/users/admin)
你最喜欢哪个 Linux 发行版?
======
![](https://img.linux.net.cn/data/attachment/album/202002/08/004438ei1y4pp44pw4xy3w.jpg)
你最喜欢哪个 Linux 发行版?虽然有所变化,但现在仍有数百种 [Linux 发行版][2]保持活跃且运作良好。发行版、包管理器和桌面的组合为 Linux 用户创建了无数客制化系统环境。
我们询问了社区的作者们哪个是他们的最爱以及原因。尽管回答中存在一些共性由于各种原因Fedora 和 Ubuntu 是最受欢迎的选择),但我们也听到一些惊奇的回答。以下是他们的一些回答:
> “我使用 Fedora 发行版我喜欢这样的社区成员们共同创建一个令人惊叹的操作系统展现了开源软件世界最伟大的造物。”——Matthew Miller
> “我在家中使用 Arch。作为一名游戏玩家我希望可以轻松使用最新版本的 Wine 和 GFX 驱动同时最大限度地掌控我的系统。所以我选择一个滚动升级并且每个包都保持领先的发行版。”——Aimi Hobson
> “NixOS在业余爱好者市场中没有比这更合适的。”——Alexander Sosedkin
> “我用过每个 Fedora 版本作为我的工作系统。这意味着我从第一个版本开始使用。从前我问自己是否会忘记我使用的是哪一个版本。而这一天已经到来了是从什么时候开始忘记了的呢”——Hugh Brock
> “通常,在我的家里和办公室里都有运行 Ubuntu、CentOS 和 Fedora 的机器。我依赖这些发行版来完成各种工作。Fedora 速度很快而且可以获取最新版本的应用和库。Ubuntu 有大型社区支持可以轻松使用。CentOS 则当我们需要稳如磐石的服务器平台时。”——Steve Morris
> “我最喜欢?对于社区以及如何为发行版构建软件包(从源码构建而非二进制文件),我选择 Fedora。对于可用包的范围和包的定义和开发我选择 Debian。对于文档我选择 Arch。对于新手的提问我以前会推荐 Ubuntu而现在会推荐 Fedora。”——Al Stone
* * *
自从 2014 以来,我们一直向社区提出这一问题。除了 2015 年 PCLinuxOS 出乎意料的领先Ubuntu 往往每年都获得粉丝们的青睐。其他受欢迎的竞争者还包括 Fedora、Debian、Mint 和 Arch。在新的十年里哪个发行版更吸引你如果我们的投票列表中没有你最喜欢的选择请在评论中告诉我们。
下面是过去七年来你最喜欢的 Linux 发行版投票的总览。你可以在我们去年的年刊《[Opensource.com 上的十年最佳][3]》中看到它。[点击这里][3]下载完整版电子书!
![Poll results for favorite Linux distribution through the years][4]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/favorite-linux-distribution
作者:[Opensource.com][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/admin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://distrowatch.com/
[3]: https://opensource.com/downloads/2019-yearbook-special-edition
[4]: https://opensource.com/sites/default/files/pictures/linux-distributions-through-the-years.jpg (favorite Linux distribution through the years)
[5]: https://opensource.com/article/20/1/favorite-linux-distribution
@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11863-1.html)
[#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
COPR 仓库中 4 个很酷的新项目2020.01
======
![][1]
COPR 是个人软件仓库的[集合][2],其中的软件并不包含在 Fedora 中。这是因为某些软件不符合轻松打包的标准或者它可能不符合其他 Fedora 标准尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也未经该项目签名背书。不过,这是尝试新软件或实验性软件的一种巧妙方式。
本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR请参阅 [COPR 用户文档][3]。
### Contrast
[Contrast][4] 是一款小应用,用于检查两种颜色之间的对比度,并确定其是否满足 [WCAG][5] 中指定的要求。可以使用十六进制 RGB 代码或使用颜色选择器来选择颜色。除了显示对比度之外Contrast 还会以选定的颜色为背景显示一段短文本,来直观地展示对比效果。
![][6]
#### 安装说明
[仓库][7]当前为 Fedora 31 和 Rawhide 提供了 Contrast。要安装 Contrast请使用以下命令
```
sudo dnf copr enable atim/contrast
sudo dnf install contrast
```
### Pamixer
[Pamixer][8] 是一个使用 PulseAudio 调整和监控声音设备音量的命令行工具。你可以显示设备的当前音量并直接增加/减小它,或静音/取消静音。Pamixer 可以列出所有源和接收器。
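下面是几个典型用法(选项名基于 pamixer 的常见版本,如有出入请以 `pamixer --help` 的输出为准):
```
pamixer --get-volume     # 显示当前音量
pamixer --increase 5     # 音量增加 5%
pamixer --toggle-mute    # 静音/取消静音
pamixer --list-sinks     # 列出所有接收器
```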
#### 安装说明
[仓库][9]当前为 Fedora 31 和 Rawhide 提供了 Pamixer。要安装 Pamixer请使用以下命令
```
sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer
```
### PhotoFlare
[PhotoFlare][10] 是一款图像编辑器。它有简单且布局合理的用户界面,其中的大多数功能都可在工具栏中使用。尽管它不支持使用图层,但 PhotoFlare 提供了诸如各种颜色调整、图像变换、滤镜、画笔和自动裁剪等功能。此外PhotoFlare 可以批量编辑图片,来对所有图片应用相同的滤镜和转换,并将结果保存在指定目录中。
![][11]
#### 安装说明
[仓库][12]当前为 Fedora 31 提供了 PhotoFlare。要安装 PhotoFlare请使用以下命令
```
sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare
```
### Tdiff
[Tdiff][13] 是用于比较两个文件树的命令行工具。除了显示某些文件或目录仅存在于一棵树中之外tdiff 还显示文件大小、类型和内容,所有者用户和组 ID、权限、修改时间等方面的差异。
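它的基本用法就是把两个目录树作为参数传入(目录路径为示例):
```
tdiff /path/to/tree1 /path/to/tree2
```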
#### 安装说明
[仓库][14]当前为 Fedora 29-31、Rawhide、EPEL 6-8 和其他发行版提供了 tdiff。要安装 tdiff请使用以下命令
```
sudo dnf copr enable fif/tdiff
sudo dnf install tdiff
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://gitlab.gnome.org/World/design/contrast
[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
[8]: https://github.com/cdemoulins/pamixer
[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
[10]: https://photoflare.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
[13]: https://github.com/F-i-f/tdiff
[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/
@ -1,32 +1,28 @@
[#]: collector: (lujun9972) [#]: collector: (lujun9972)
[#]: translator: ( ) [#]: translator: (mengxinayan)
[#]: reviewer: ( ) [#]: reviewer: (wxy)
[#]: publisher: ( ) [#]: publisher: (wxy)
[#]: url: ( ) [#]: url: (https://linux.cn/article-11849-1.html)
[#]: subject: (Showing memory usage in Linux by process and user) [#]: subject: (Showing memory usage in Linux by process and user)
[#]: via: (https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html) [#]: via: (https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Showing memory usage in Linux by process and user 查看 Linux 系统中进程和用户的内存使用情况
====== ======
There are several commands for checking up on memory usage in a Linux system, and here are some of the better ones.
[Fancycrave][1] [(CC0)][2]
There are a lot of tools for looking at memory usage on Linux systems. Some are commonly used commands like **free** and **ps** while others are tools like **top** that allow you to display system performance stats in various ways. In this post, well look at some commands that can be most helpful in identifying the users and processes that are using the most memory. > 有一些命令可以用来检查 Linux 系统中的内存使用情况,下面是一些更好的命令。
Here are some that address memory usage by process. ![Fancycrave][1]
### Using top 有许多工具可以查看 Linux 系统中的内存使用情况。一些命令被广泛使用,比如 `free`、`ps`。而另一些命令允许通过多种方式展示系统的性能统计信息,比如 `top`。在这篇文章中,我们将介绍一些命令以帮助你确定当前占用着最多内存资源的用户或者进程。
One of the best commands for looking at memory usage is **top**. One extremely easy way to see what processes are using the most memory is to start **top** and then press **shift+m** to switch the order of the processes shown to rank them by the percentage of memory each is using. Once youve entered **shift+m**, your top output should reorder the task entries to look something like this: 下面是一些按照进程查看内存使用情况的命令:
[][3] ### 按照进程查看内存使用情况
BrandPost Sponsored by HPE #### 使用 top
[Take the Intelligent Route with Consumption-Based Storage][3] `top` 是最好的查看内存使用情况的命令之一。为了查看哪个进程使用着最多的内存,一个简单的办法就是启动 `top`,然后按下 `shift+m`,这样便可以查看按照内存占用百分比从高到底排列的进程。当你按下了 `shift+m` ,你的 `top` 应该会得到类似于下面这样的输出结果:
Combine the agility and economics of HPE storage with HPE GreenLake and run your IT department with efficiency.
``` ```
$top $top
@ -54,11 +50,11 @@ MiB Swap: 2048.0 total, 2045.7 free, 2.2 used. 3053.5 avail Mem
2373 root 20 0 150408 57000 9924 S 0.3 0.9 10:15.35 nessusd 2373 root 20 0 150408 57000 9924 S 0.3 0.9 10:15.35 nessusd
``` ```
Notice the **%MEM** ranking. The list will be limited by your window size, but the most significant processes with respect to memory usage will show up at the top of the process list. 注意 `%MEM` 排序。列表的大小取决于你的窗口大小,但是占据着最多的内存的进程将会显示在列表的顶端。
### Using ps #### 使用 ps
The **ps** command includes a column that displays memory usage for each process. To get the most useful display for viewing the top memory users, however, you can pass the **ps** output from this command to the **sort** command. Heres an example that provides a very useful display: `ps` 命令中的一列用来展示每个进程的内存使用情况。为了展示和查看哪个进程使用着最多的内存,你可以将 `ps` 命令的结果传递给 `sort` 命令。下面是一个有用的示例:
``` ```
$ ps aux | sort -rnk 4 | head -5 $ ps aux | sort -rnk 4 | head -5
@ -69,7 +65,7 @@ nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefo
nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4 nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
``` ```
In the example above (truncated for this post), sort is being used with the **-r** (reverse), the **-n** (numeric) and the **-k** (key) options which are telling the command to sort the output in reverse numeric order based on the fourth column (memory usage) in the output from **ps**. If we first display the heading for the **ps** output, this is a little easier to see. 在上面的例子中(文中已截断),`sort` 命令使用了 `-r` 选项(反转)、`-n` 选项(数字值)、`-k` 选项(关键字),使 `sort` 命令对 `ps` 命令的结果按照第四列(内存使用情况)中的数字逆序进行排列并输出。如果我们首先显示 `ps` 命令的标题,那么将会便于查看。
``` ```
$ ps aux | head -1; ps aux | sort -rnk 4 | head -5 $ ps aux | head -1; ps aux | sort -rnk 4 | head -5
@ -81,19 +77,21 @@ nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefo
nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4 nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
``` ```
If you like this command, you can set it up as an alias with a command like the one below. Don't forget to add it to your ~/.bashrc file if you want to make it permanent. 如果你喜欢这个命令,你可以用下面的命令为他指定一个别名,如果你想一直使用它,不要忘记把该命令添加到你的 `~/.bashrc` 文件中。
``` ```
$ alias mem-by-proc="ps aux | head -1; ps aux | sort -rnk 4" $ alias mem-by-proc="ps aux | head -1; ps aux | sort -rnk 4"
``` ```
Here are some commands that reveal memory usage by user:

### Memory usage by user

#### Using top

Examining memory usage by user is somewhat more complicated, because you have to find a way to group all of a user's processes into a single memory-usage total.

If you want to home in on a single user, `top` can be used much in the same way as above. Just add a username with the `-U` option as shown below, and press `shift+m` to order the display by memory usage:

```
$ top -U nemo
...
MiB Swap:   2048.0 total,   2042.7 free,      5.2 used.   2812.0 avail Mem

32533 nemo      20   0 2389088 102532  76808 S   0.0   1.7   0:01.79 WebExtensions
```
#### Using ps

You can also use a `ps` command to rank an individual user's processes by memory usage. In this example, we select a single user's processes with a `grep` command:

```
$ ps aux | head -1; ps aux | grep ^nemo | sort -rnk 4 | more
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
...
nemo       342 10.8  7.0 2941056 426484 ?     Rl   08:59  10:45 /usr/lib/firefo
nemo      2389 16.9  3.8 1762960 234644 pts/1 Sl+  09:15  13:57 vlc videos/edge_computing.mp4
nemo     29527  3.9  3.7 2736924 227448 ?     Ssl  08:50   4:11 /usr/bin/gnome-shell
```
#### Using ps along with other commands

What gets more complicated is comparing users' memory usage with each other. In that case, building a by-user total and ranking those totals is a good technique, but it requires a little more work and involves a number of commands. In the script below (truncated here; a reconstructed sketch follows the listing), we get a list of users with the `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u` command. That list includes system users such as `syslog`. We then collect stats for each user and total the memory-usage stat for each task with `awk`. As a last step, we display each user's memory-usage sum in numerical (largest-first) order.

```
#!/bin/bash
...
done

echo -e $stats | grep -v ^$ | sort -rn | head
```
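The middle of the script is truncated above. Based on the description in the text, a minimal sketch of the full logic might look like the following (variable names are illustrative, not necessarily the author's):

```
#!/bin/bash

# Build a list of "total-%MEM username" lines, one per user
stats=""
for user in $(ps aux | grep -v COMMAND | awk '{print $1}' | sort -u)
do
    # Sum the %MEM column (field 4) over all of this user's processes
    total=$(ps aux | awk -v u="$user" '$1 == u {sum += $4} END {print sum}')
    stats="$stats\n$total $user"
done

# Drop empty lines and show the biggest memory users first
echo -e $stats | grep -v ^$ | sort -rn | head
```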
Output from this script might look like this:

```
$ ./show_user_mem_usage
...
0 rtkit
```
There are a lot of ways to report on memory usage in Linux. Focusing on which processes and users are consuming the most memory can benefit from a few carefully crafted tools and commands.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/chips_processors_memory_cards_by_fancycrave_cc0_via_unsplash_1200x800-100760955-large.jpg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
View File
@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11864-1.html)
[#]: subject: (Intro to the Linux command line)
[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Intro to the Linux command line
======
> Here are some warm-up exercises for those just getting started with the Linux command line. Warning: it can be addictive.
![](https://images.idgesg.net/images/article/2020/01/cmd_linux-control_linux-logo_-100828420-large.jpg)
If you're new to Linux, or have simply never spent time exploring the command line, you may not understand why so many Linux enthusiasts sit at comfortable desktops and still get excited about typing commands to drive a large collection of tools and applications. In this post, we'll take a quick tour of the wonders of the command line and see whether we can get you hooked.

First, to use the command line, you have to open a command tool (also referred to as a "command prompt"). How to do that depends on which version of Linux you're running. On Red Hat, for example, you may see an "Activities" tab at the top of your screen that opens a list of options and a small window for typing a command (like the window "cmd" opens for you on Windows). On Ubuntu and some other versions, you may see a small terminal icon down the left-hand side of your screen. On many systems, you can open a command window by pressing the `Ctrl+Alt+t` keys at the same time.

If you log into a Linux system using a tool like PuTTY, you'll find yourself already at the command line.

Once you get your command-line window, you'll find yourself sitting at a prompt. It could be as simple as a `$` or as elaborate as `user@system:~$`, but either way it means the system is ready to run commands for you.

Once you're this far, it's time to start typing commands. Here are some commands to try first, along with a [PDF][4] of some particularly useful commands and a two-sided command reference suitable for printing and laminating.

| Command | Purpose |
|---|---|
| `pwd` | show me where I am in the file system (run right after login, it shows your home directory) |
| `ls` | list my files |
| `ls -a` | list even more of my files (including the hidden ones) |
| `ls -al` | list my files with lots of detail (including dates, file sizes and permissions) |
| `who` | tell me who is logged in (don't be disappointed if it's only you) |
| `date` | remind me what day of the week it is (it shows the time, too) |
| `ps` | list my running processes (probably just your shell and the `ps` command itself) |
Once you're comfortable with your Linux home directory from the command-line point of view, you can start exploring. Maybe you'll be ready to wander around the file system with commands like these:

| Command | Purpose |
|---|---|
| `cd /tmp` | move to another directory (here, `/tmp`) |
| `ls` | list the files in the current location |
| `cd` | go back home (`cd` with no arguments always takes you back to your home directory) |
| `cat .bashrc` | display the contents of a file (here, `.bashrc`) |
| `history` | show recently run commands |
| `echo hello` | say "hello" to yourself |
| `cal` | show a calendar for the current month |
To see why advanced Linux users love the command line so much, you'll want to try some other features, such as redirection and pipes. "Redirection" is when you take the output of a command and put it into a file instead of displaying it on the screen. A "pipe" is when you send the output of one command to another command that manipulates it in some way. Here are some commands to try (a sample session follows the table):

| Command | Purpose |
|---|---|
| `echo "echo hello" > tryme` | create a new file and put "echo hello" into it |
| `chmod 700 tryme` | make the new file executable |
| `tryme` | run the new file (it should run the command it contains and display "hello") |
| `ps aux` | show all running processes |
| `ps aux | grep $USER` | show all running processes, but limit the output to lines containing your username |
| `echo $USER` | display your username using an environment variable |
| `whoami` | display your username with a command |
| `who | wc -l` | count how many users are currently logged in |
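Strung together, a session trying the redirection and pipe examples might look something like this (the prompt and output are illustrative; if your `PATH` doesn't include the current directory, run the new file as `./tryme`):

```
$ echo "echo hello" > tryme    # redirection: write text into a file
$ chmod 700 tryme              # make the file executable
$ ./tryme                      # run it
hello
$ who | wc -l                  # pipe: count the users currently logged in
1
```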
### Wrapping up

Once you get used to the basic commands, you can explore other commands and try your hand at writing scripts. You may find that Linux is far more powerful and pleasant to use than you ever imagined.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[qianmingtian][c]
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[c]: https://github.com/qianmingtian
[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
View File
@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11853-1.html)
[#]: subject: (4 Key Changes to Look Out for in Linux Kernel 5.6)
[#]: via: (https://itsfoss.com/linux-kernel-5-6/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
4 Key Changes to Look Out for in Linux Kernel 5.6
======
While we're still enjoying the improved hardware support of the Linux 5.5 stable release, Linux 5.6 is already on the horizon.

To be honest, Linux 5.6 is even more exciting than 5.5. Even though the upcoming Ubuntu 20.04 LTS release will ship with Linux 5.5, you really should know what Linux 5.6 has in store for us.

In this article, I'll highlight the key changes and features to look out for in the Linux 5.6 release:

### Linux 5.6 feature highlights
![][1]
I'll try to keep this feature list updated as more news about Linux 5.6 comes in; but for now, let's take a look at what we already know:
#### 1. WireGuard support

WireGuard will be added in Linux 5.6, and for a number of reasons it may well replace [OpenVPN][2].

You can learn more about the advantages of [WireGuard][3] on its official site. Of course, if you have used it, chances are you already know why it's better than OpenVPN.

Likewise, [Ubuntu 20.04 LTS will support WireGuard][4].
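With the module in the mainline kernel, setting up a WireGuard interface needs nothing beyond the standard `wireguard-tools` userspace package. A minimal sketch (the interface name and key path are illustrative):

```
# Create an in-kernel WireGuard interface and give it a private key
$ sudo ip link add dev wg0 type wireguard
$ wg genkey > private.key
$ sudo wg set wg0 private-key ./private.key

# Inspect the interface's WireGuard state
$ sudo wg show wg0
```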
#### 2. USB4 support

Linux 5.6 will also support **USB4**.

If you're not familiar with USB 4.0 (USB4), you can read this [document][5].

According to the document, "USB4 doubles the maximum aggregate bandwidth of USB and enables multiple simultaneous data and display protocols."

Also, while we all know that USB4 is based on the Thunderbolt protocol specification, it will be backward compatible with USB 2.0, USB 3.0, and Thunderbolt 3, which is good news.
#### 3. F2FS data compression using LZO/LZ4

Linux 5.6 will also support F2FS data compression using the LZO/LZ4 algorithms.

In other words, this is a new compression technique for a Linux file system, and you get to choose which file extensions it applies to.
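As a sketch of how this surfaces in practice, F2FS exposes the feature through mount options such as `compress_algorithm` and `compress_extension` (the device, mount point, and extension below are illustrative, and the exact option set may vary):

```
# Mount an F2FS volume with LZ4 compression enabled for *.log files
$ sudo mount -t f2fs -o compress_algorithm=lz4,compress_extension=log /dev/sdb1 /mnt/data
```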
#### 4. A fix for the Year 2038 problem on 32-bit systems

Unix and Linux store time values as a 32-bit signed integer, which has a maximum value of 2147483647. Beyond that number, time values wrap around due to integer overflow and are stored as negative numbers.

This means that on a 32-bit system, the time value cannot go beyond 2147483647 seconds after January 1, 1970. In other words, at 03:14:07 UTC on January 19, 2038, due to integer overflow, the time will read as December 13, 1901 instead of January 19, 2038.

Linux kernel 5.6 fixes this, so 32-bit systems can keep running beyond the year 2038.
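You can inspect that boundary yourself with GNU `date`, which converts a seconds-since-epoch value into a calendar date:

```
# The largest value a signed 32-bit time_t can hold
$ date -u -d @2147483647
Tue Jan 19 03:14:07 UTC 2038

# One second later: fine with a 64-bit time_t, but it wraps to 1901
# on an unpatched 32-bit system
$ date -u -d @2147483648
Tue Jan 19 03:14:08 UTC 2038
```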
#### 5. Improved hardware support

Naturally, hardware support will keep improving in the next release, and support for newer wireless peripherals is a priority as well.

The new kernel adds support for the MX Master 3 mouse and other Logitech wireless products.

Beyond Logitech, you can also expect support for plenty of other hardware (including AMD GPUs, NVIDIA GPUs, and Intel Tiger Lake chipsets).
#### 6. Other updates

In addition to the major additions and support mentioned above, the next kernel release brings a number of other improvements:

* Improved AMD Zen temperature/power reporting
* A fix for AMD CPUs overheating in ASUS TUF laptops
* Open-source support for NVIDIA RTX 2000 "Turing" series graphics cards
* Built-in FSCRYPT encryption
[Phoronix][6] has tracked many of the technical changes arriving with Linux 5.6. So if you're curious about the full set of changes, you can look them up for yourself.

Now that you know what's coming in the Linux 5.6 release, what do you think about it? Leave your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-kernel-5-6/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/linux-kernel-5.6.jpg?ssl=1
[2]: https://openvpn.net/
[3]: https://www.wireguard.com/
[4]: https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-20.04-Adds-WireGuard
[5]: https://www.usb.org/sites/default/files/2019-09/USB-IF_USB4%20spec%20announcement_FINAL.pdf
[6]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Spectacular
View File
@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11866-1.html)
[#]: subject: (Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10)
[#]: via: (https://itsfoss.com/ubuntu-19-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10
======
> Ubuntu 19.04 reached end of life on January 23, 2020. This means that systems running Ubuntu 19.04 will no longer receive security and maintenance updates, leaving them vulnerable.
![][1]
[Ubuntu 19.04][2] was released on April 18, 2019. Since it was not a long-term support (LTS) release, it was only supported for nine months. Having completed its release cycle, Ubuntu 19.04 reached end of life on January 23, 2020.

Ubuntu 19.04 brought several visual and performance improvements, paving the way for a sleek and polished Ubuntu look. Like every other regular Ubuntu release, it had a nine-month lifespan. That lifespan is now over.

### Ubuntu 19.04 has reached end of life? What does that mean?

End of life (EOL) refers to the date after which an operating system release no longer gets updates. You probably already know that Ubuntu (and other operating systems) provides security and maintenance upgrades to keep your system safe from cyberattacks. Once a release reaches end of life, the operating system stops receiving these important updates.

If you keep using a system after its operating system release reaches end of life, it becomes vulnerable to cyber and malware attacks. That's not all: in Ubuntu, the applications you downloaded from the Software Center using APT won't be updated either. In fact, you will no longer be able to [install new software using the apt-get command][3] (if not immediately, then gradually).

### All Ubuntu 19.04 users must upgrade to Ubuntu 19.10

As of January 23, 2020, Ubuntu 19.04 stopped receiving updates. You must upgrade to Ubuntu 19.10, which is supported until July 2020. This also applies to the [official Ubuntu flavors][4], such as Lubuntu, Xubuntu, Kubuntu, and so on.

You can [check your Ubuntu version][9] in "Settings -> Details" or with the following command:
```
lsb_release -a
```
#### How to upgrade to Ubuntu 19.10

Thankfully, Ubuntu provides easy ways to upgrade an existing system to a newer version. In fact, Ubuntu even prompts you that a new Ubuntu version is available and that you should upgrade to it.

![Existing Ubuntu 19.04 users should see a message to upgrade to Ubuntu 19.10][5]

If you have a good internet connection, you can use the [same Software Updater tool you use to update Ubuntu][6]. In the image above, you just need to click the "Upgrade" button and follow the instructions. I have written about using this method to [upgrade to Ubuntu 18.04][7].
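If you prefer working in a terminal, the stock release-upgrade path should work here as well; a sketch using Ubuntu's standard tools:

```
# Bring the current release fully up to date first
$ sudo apt update && sudo apt upgrade

# Then start the upgrade to the next available release
$ sudo do-release-upgrade
```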
If you don't have a good internet connection, there is a workaround. Back up your home directory or your important data to an external disk.

Then make a live USB of Ubuntu 19.10: download the Ubuntu 19.10 ISO and use the Startup Disk Creator tool already installed on your Ubuntu system to create a live USB from that ISO.

Boot from the live USB and proceed with "installing" Ubuntu 19.10. During the installation, you should see an option to erase Ubuntu 19.04 and replace it with Ubuntu 19.10. Choose that option and carry on as if you were [installing Ubuntu][8] afresh.

#### Are you still using Ubuntu 19.04, 18.10, 17.10, or some other unsupported version?

You should note that only Ubuntu 16.04, 18.04, and 19.10 (or newer) releases are supported at this point. If you're running an Ubuntu version other than these, you must upgrade to a newer release.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-19-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/End-of-Life-Ubuntu-19.04.png?ssl=1
[2]: https://itsfoss.com/ubuntu-19-04-release/
[3]: https://itsfoss.com/apt-get-linux-guide/
[4]: https://itsfoss.com/which-ubuntu-install/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/ubuntu_19_04_end_of_life.jpg?ssl=1
[6]: https://itsfoss.com/update-ubuntu/
[7]: https://itsfoss.com/upgrade-ubuntu-version/
[8]: https://itsfoss.com/install-ubuntu/
[9]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
1
sources/README.md Normal file
View File
@ -0,0 +1 @@
Files awaiting translation go here.
View File
@ -1,108 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora CoreOS out of preview)
[#]: via: (https://fedoramagazine.org/fedora-coreos-out-of-preview/)
[#]: author: (bgilbert https://fedoramagazine.org/author/bgilbert/)
Fedora CoreOS out of preview
======
![The Fedora CoreOS logo on a gray background.][1]
The Fedora CoreOS team is pleased to announce that Fedora CoreOS is now [available for general use][2].
Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It's the successor to both [Fedora Atomic Host][3] and [CoreOS Container Linux][4] and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host. For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][5].
Some highlights of the current Fedora CoreOS release:
* [Automatic updates][6], with staged deployments and phased rollouts
* Built from Fedora 31, featuring:
* Linux 5.4
* systemd 243
* Ignition 2.1
* OCI and Docker Container support via Podman 1.7 and Moby 18.09
* cgroups v1 enabled by default for broader compatibility; cgroups v2 available via configuration
Fedora CoreOS is available on a variety of platforms:
* Bare metal, QEMU, OpenStack, and VMware
* Images available in all public AWS regions
* Downloadable cloud images for Alibaba, AWS, Azure, and GCP
* Can run live from RAM via ISO and PXE (netboot) images
Fedora CoreOS is under active development.  Planned future enhancements include:
* Addition of the _next_ release stream for extended testing of upcoming Fedora releases.
* Support for additional cloud and virtualization platforms, and processor architectures other than _x86_64_.
* Closer integration with Kubernetes distributions, including [OKD][7].
* [Aggregate statistics collection][8].
* Additional [documentation][9].
### Where do I get it?
To try out the new release, head over to the [download page][10] to get OS images or cloud image IDs.  Then use the [quick start guide][11] to get a machine running quickly.
### How do I get involved?
It's easy! You can report bugs and missing features to the [issue tracker][12]. You can also discuss Fedora CoreOS in [Fedora Discourse][13], the [development mailing list][14], in _#fedora-coreos_ on Freenode, or at our [weekly IRC meetings][15].
### Are there stability guarantees?
In general, the Fedora Project does not make any guarantees around stability. While Fedora CoreOS strives for a high level of stability, this can be challenging to achieve in the rapidly evolving Linux and container ecosystems. We've found that the incremental, exploratory, forward-looking development required for Fedora CoreOS — which is also a cornerstone of the Fedora Project as a whole — is difficult to reconcile with the iron-clad stability guarantee that ideally exists when automatically updating systems.

We'll continue to do our best not to break existing systems over time, and to give users the tools to manage the impact of any regressions. Nevertheless, automatic updates may produce regressions or breaking changes for some use cases. You should make your own decisions about where and how to run Fedora CoreOS based on your risk tolerance, operational needs, and experience with the OS. We will continue to announce any major planned or unplanned breakage to the [coreos-status mailing list][16], along with recommended mitigations.
### How do I migrate from CoreOS Container Linux?
Container Linux machines cannot be migrated in place to Fedora CoreOS. We recommend [writing a new Fedora CoreOS Config][11] to provision Fedora CoreOS machines. Fedora CoreOS Configs are similar to Container Linux Configs, and must be passed through the Fedora CoreOS Config Transpiler to produce an Ignition config for provisioning a Fedora CoreOS machine.
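For reference, a typical transpiler invocation at the time ran FCCT from a container image (treat the exact image tag and flags as assumptions; the file names are illustrative):

```
# Transpile a Fedora CoreOS Config (FCC) into an Ignition config
$ podman run -i --rm quay.io/coreos/fcct:release --pretty --strict \
    < example.fcc > example.ign
```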
Whether you're currently provisioning your Container Linux machines using a Container Linux Config, a handwritten Ignition config, or cloud-config, you'll need to adjust your configs for the differences between Container Linux and Fedora CoreOS. For example, on Fedora CoreOS network configuration is performed with [NetworkManager key files][17] instead of _systemd-networkd_, and time synchronization is performed by _chrony_ rather than _systemd-timesyncd_. Initial migration documentation will be [available soon][9] and a skeleton list of differences between the two OSes is available in [this issue][18].

CoreOS Container Linux will be maintained for a few more months, and then will be declared end-of-life. We'll announce the exact end-of-life date later this month.
### How do I migrate from Fedora Atomic Host?
Fedora Atomic Host has already reached end-of-life, and you should migrate to Fedora CoreOS as soon as possible. We do not recommend in-place migration of Atomic Host machines to Fedora CoreOS. Instead, we recommend [writing a Fedora CoreOS Config][11] and using it to provision new Fedora CoreOS machines. As with CoreOS Container Linux, you'll need to adjust your existing cloud-configs for the differences between Fedora Atomic Host and Fedora CoreOS.
Welcome to Fedora CoreOS.  Deploy it, launch your apps, and let us know what you think!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-coreos-out-of-preview/
作者:[bgilbert][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bgilbert/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/introducing-fedora-coreos-816x345.png
[2]: https://getfedora.org/coreos/
[3]: https://www.projectatomic.io/
[4]: https://coreos.com/os/docs/latest/
[5]: https://fedoramagazine.org/introducing-fedora-coreos/
[6]: https://docs.fedoraproject.org/en-US/fedora-coreos/auto-updates/
[7]: https://www.okd.io/
[8]: https://github.com/coreos/fedora-coreos-pinger/
[9]: https://docs.fedoraproject.org/en-US/fedora-coreos/
[10]: https://getfedora.org/coreos/download/
[11]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/
[12]: https://github.com/coreos/fedora-coreos-tracker/issues
[13]: https://discussion.fedoraproject.org/c/server/coreos
[14]: https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/
[15]: https://github.com/coreos/fedora-coreos-tracker#meetings
[16]: https://lists.fedoraproject.org/archives/list/coreos-status@lists.fedoraproject.org/
[17]: https://developer.gnome.org/NetworkManager/stable/nm-settings-keyfile.html
[18]: https://github.com/coreos/fedora-coreos-tracker/issues/159
View File
@ -1,80 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning)
[#]: via: (https://opensource.com/article/20/1/news-january-19)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
Open source fights cancer, Tesla adopts Coreboot, Uber and Lyft release open source machine learning
======
Catch up on the biggest open source headlines from the past two weeks.
![Weekly news roundup with TV][1]
In this edition of our open source news roundup, we take a look at machine learning tools from Uber and Lyft, open source software to fight cancer, saving students money with open textbooks, and more!
### Uber and Lyft release machine learning tools
It's hard to find a growing company these days that doesn't take advantage of machine learning to streamline its business and make sense of the data it amasses. Ridesharing companies, which gather massive amounts of data, have enthusiastically embraced the promise of machine learning. Two of the biggest players in the ridesharing sector have made some of their machine learning code open source.
Uber recently [released the source code][2] for its Manifold tool for debugging machine learning models. According to Uber software engineer Lezhi Li, Manifold will "benefit the machine learning (ML) community by providing interpretability and debuggability for ML workflows." If you're interested, you can browse Manifold's source code [on GitHub][3].
Lyft has also upped its open source stakes by releasing Flyte. Flyte, whose source code is [available on GitHub][4], manages machine learning pipelines and "is an essential backbone to (Lyft's) operations." Lyft has been using it to train AI models and process data "across pricing, logistics, mapping, and autonomous projects."
### Software to detect cancer cells
In a study recently published in _Nature Biotechnology_, a team of medical researchers from around the world announced [new open source software][5] that "could make it easier to create personalised cancer treatment plans."
The software assesses "the proportion of cancerous cells in a tumour sample" and can help clinicians "judge the accuracy of computer predictions and establish benchmarks" across tumor samples. Maxime Tarabichi, one of the lead authors of [the study][6], said that the software "provides a foundation which will hopefully become a much-needed, unbiased, gold-standard benchmarking tool for assessing models that aim to characterise a tumour's genetic diversity."
### University of Regina saves students over $1 million with open textbooks
If rising tuition costs weren't enough to send university students spiralling into debt, the high prices of textbooks can deepen the crater in their bank accounts. To help ease that financial pain, many universities turn to open textbooks. One of those schools is the University of Regina. By offering open textbooks, the university [expects to save a huge amount for students][7] over the next five years.
The expected savings are in the region of $1.5 million (CAD), or around $1.1 million USD (at the time of writing). The textbooks, according to a report by radio station CKOM, are "provided free for (students) and they can be printed off or used as e-books." Students aren't getting inferior-quality textbooks, though. Nilgun Onder of the University of Regina said that the "textbooks and other open education resources the university published are all peer-reviewed resources. In other words, they are reliable and credible."
### Tesla adopts Coreboot
Much of the software driving (no pun intended) the electric vehicles made by Tesla Motors is open source. So it's not surprising to learn that the company has [adopted Coreboot][8] "as part of their electric vehicle computer systems."
Coreboot was developed as a replacement for proprietary BIOS and is used to boot hardware and the Linux kernel. The code, which is in [Tesla's GitHub repository][9], "is from Tesla Motors and Samsung," according to Phoronix. Samsung, in case you're wondering, makes the chip on which Tesla's self-driving software runs.
#### In other news
* [Arduino launches new modular platform for IoT development][10]
* [SUSE and Karunya Institute of Technology and Sciences collaborate to enhance cloud and open source learning][11]
* [How open-source code could help us survive natural disasters][12]
* [The hottest thing in robotics is an open source project you've never heard of][13]
_Thanks, as always, to Opensource.com staff members and moderators for their help this week. Make sure to check out [our event calendar][14], to see what's happening next week in open source._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/news-january-19
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
[2]: https://venturebeat.com/2020/01/07/uber-open-sources-manifold-a-visual-tool-for-debugging-ai-models/
[3]: https://github.com/uber/manifold
[4]: https://github.com/lyft/flyte
[5]: https://www.cbronline.com/industry/healthcare/open-source-cancer-cells/
[6]: https://www.nature.com/articles/s41587-019-0364-z
[7]: https://www.ckom.com/2020/01/07/open-source-program-to-save-u-of-r-students-1-5m/
[8]: https://www.phoronix.com/scan.php?page=news_item&px=Tesla-Uses-Coreboot
[9]: https://github.com/teslamotors/coreboot
[10]: https://techcrunch.com/2020/01/07/arduino-launches-a-new-modular-platform-for-iot-development/
[11]: https://www.crn.in/news/suse-and-karunya-institute-of-technology-and-sciences-collaborate-to-enhance-cloud-and-open-source-learning/
[12]: https://qz.com/1784867/open-source-data-could-help-save-lives-during-natural-disasters/
[13]: https://www.techrepublic.com/article/the-hottest-thing-in-robotics-is-an-open-source-project-youve-never-heard-of/
[14]: https://opensource.com/resources/conferences-and-events-monthly
View File
@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What 2020 brings for the developer, and more industry trends)
[#]: via: (https://opensource.com/article/20/1/hybrid-developer-future-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
What 2020 brings for the developer, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [How developers will work in 2020][2]
> Developers have been spending an enormous amount of time on everything *except* making software that solves problems. DevOps has transmogrified from developers releasing software into developers building ever more complex infrastructure atop Kubernetes and developers reinventing their software as distributed stateless functions. In 2020, serverless will mature. Handle state. Handle data storage without requiring devs to learn yet-another-proprietary-database-service. Learning new stuff is fun, but shipping is even better, and we'll finally see systems and services that support that.

**The impact:** A lot of forces are converging to give developers superpowers. There are ever more open source building blocks in place, thousands of geniuses are collaborating to make developer workflows more fun and efficient, and artificial intelligence is being brought to bear on solving the types of problems a developer might face. On the one hand, there is clear leverage in giving developers superpowers: if they can make magic with software, they'll be able to make even bigger magic with all this help. On the other hand, imagine if teachers had the same level of investment and support. Makes ya wonder, don't it?
## [2020 forecast: Cloud-y with a chance of hybrid][3]
> Behind this growth is an array of new themes and strategies that are pushing cloud further up business agendas the world over. With emerging technologies, such as AI and machine learning, containers and functions, and even more flexibility available with hybrid cloud solutions being provided by the major providers, it's no wonder cloud is set to take centre stage.

**The impact:** Hybrid cloud finally has the same level of flesh that public cloud and on-premises have. Over the course of 2019 especially, the competing visions of what it means to be hybrid formed a composite that drove home why someone would want it. At the same time, more and more of the technology pieces that make hybrid viable are in place and maturing. 2019 was the year that people truly “got” hybrid. 2020 will be the year that people start to take advantage of it.
## [The no-code delusion][4]
> Increasingly popular in the last couple of years, I think 2020 is going to be the year of “no code”: the movement that says you can write business logic and even entire applications without having the training of a software developer. I empathise with people doing this, and I think some of the “no code” tools are great. But I also think it's wrong at heart.
**The impact:** I've heard many devs say it over many years: "software development is hard." It would be a mistake to interpret that as "all software development is equally hard." What I've always found hard about learning to code is trying to think in a way that a computer will understand. With or without code, making computers do complex things will always require a different kind of thinking.
## [All things Java][5]
> The open, multi-vendor model has been a major strength—it's very hard for any single vendor to pioneer a market for a sustained period of time—and taking different perspectives from diverse industries has been a key strength of the [evolution of Java][6]. Choosing to open source Java in 2006 was also a decision that only worked to strengthen the Java ecosystem, as it allowed Sun Microsystems and later Oracle to share the responsibility of maintaining and evolving Java with many other organizations and individuals.
**The impact:** The things that move quickly in technology are the things that can be thrown away. When you know you're going to keep something for a long time, you're likely to make different choices about what to prioritize when building it. Disposable and long-lived both have their places, and the Java community made enough good decisions over the years that the language itself can have a foot in both camps.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/hybrid-developer-future-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://thenextweb.com/readme/2020/01/15/how-developers-will-work-in-2020/
[3]: https://www.itproportal.com/features/2020-forecast-cloud-y-with-a-chance-of-hybrid/
[4]: https://www.alexhudson.com/2020/01/13/the-no-code-delusion/
[5]: https://appdevelopermagazine.com/all-things-java/
[6]: https://appdevelopermagazine.com/top-10-developer-technologies-in-2019/
View File
@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NSA cloud advice, Facebook open source year in review, and more industry trends)
[#]: via: (https://opensource.com/article/20/1/nsa-facebook-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
NSA cloud advice, Facebook open source year in review, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Facebook open source year in review][2]
> Last year was a busy one for our [open source][3] engineers. In 2019 we released 170 new open source projects, bringing our portfolio to a total of 579 [active repositories][3]. While it's important for our internal engineers to contribute to these projects (and they certainly do — with more than 82,000 commits this year), we are also incredibly grateful for the massive support from external contributors. Approximately 2,500 external contributors committed more than 32,000 changes. In addition to these contributions, nearly 93,000 new people starred our projects this year, growing the most important component of any open source project — the community! Facebook Open Source would not be here without your contributions, so we want to thank you for your participation in 2019.
**The impact**: Facebook got ~33% more changes than they would have had they decided to develop these as closed projects. Organizations addressing similar challenges got an 82,000-commit boost in exchange. What a clear illustration of the business impact of open source development.
## [Cloud advice from the NSA][4]
> This document divides cloud vulnerabilities into four classes (misconfiguration, poor access control, shared tenancy vulnerabilities, and supply chain vulnerabilities) that encompass the vast majority of known vulnerabilities. Cloud customers have a critical role in mitigating misconfiguration and poor access control, but can also take actions to protect cloud resources from the exploitation of shared tenancy and supply chain vulnerabilities. Descriptions of each vulnerability class along with the most effective mitigations are provided to help organizations lock down their cloud resources. By taking a risk-based approach to cloud adoption, organizations can securely benefit from the cloud's extensive capabilities.

**The impact**: The Fear, Uncertainty, and Doubt (FUD) that has been associated with cloud adoption is being debunked more all the time. None other than the US Department of Defense has done a lot of the thinking so you don't have to, and there is a good chance that their concerns are at least as dire as yours.
## [With Kubernetes, China Minsheng Bank transformed its legacy applications][5]
> But all of CMBC's legacy applications—for example, the core banking system, payment systems, and channel systems—were written in C and Java, using traditional architecture. “We wanted to do distributed applications because in the past we used VMs in our own data center, and that was quite expensive and with low resource utilization rate,” says Zhang. “Our biggest challenge is how to make our traditional legacy applications adaptable to the cloud native environment.” So far, around 20 applications are running in production on the Kubernetes platform, and 30 new applications are in active development to adopt the Kubernetes platform.
**The impact**: This illustrates nicely the challenges and opportunities facing businesses in a competitive environment, and suggests a common adoption pattern. Do new stuff the new way, and move the old stuff as it makes sense.
## [The '5 Rs' of the move to cloud native: Re-platform, re-host, re-factor, replace, retire][6]
> The bottom line is that telcos and service providers will go cloud native when it is cheaper for them to migrate to the cloud and pay cloud costs than it is to remain in the data centre. That time is now and by adhering to the "5 Rs" of the move to cloud native, Re-platform, Re-host, Re-factor, Replace and/or Retire, the path is open, clearly marked and the goal eminently achievable.
**The impact**: Cloud-native is basically used as a synonym for open source in this interview; there is no other type of technology that will deliver the same lift.
## [Fedora CoreOS out of preview][7]
> Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It's the successor to both [Fedora Atomic Host][8] and [CoreOS Container Linux][9] and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host. For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][10].
**The impact**: Collapsing these two branches of the Linux family tree into one another moves the state of the art forward for everyone (once you get through the migration).
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/nsa-facebook-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://opensource.com/article/20/1/hybrid-developer-future-industry-trends
[3]: https://opensource.facebook.com/
[4]: https://media.defense.gov/2020/Jan/22/2002237484/-1/-1/0/CSI-MITIGATING-CLOUD-VULNERABILITIES_20200121.PDF
[5]: https://www.cncf.io/blog/2020/01/23/with-kubernetes-china-minsheng-bank-transformed-its-legacy-applications-and-moved-into-ai-blockchain-and-big-data/
[6]: https://www.telecomtv.com/content/cloud-native/the-5-rs-of-the-move-to-cloud-native-re-platform-re-host-re-factor-replace-retire-37473/
[7]: https://fedoramagazine.org/fedora-coreos-out-of-preview/
[8]: https://www.projectatomic.io/
[9]: https://coreos.com/os/docs/latest/
[10]: https://fedoramagazine.org/introducing-fedora-coreos/
View File
@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news)
[#]: via: (https://opensource.com/article/20/2/linux-java-and-other-industry-news)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Need 32-bit Linux to run past 2038? When version 5.6 of the kernel pops, you're in for a treat][2]
> Arnd Bergmann, an engineer working on the thorny Y2038 problem in the Linux kernel, posted to the [mailing list][3] that, yup, Linux 5.6 "should be the first release that can serve as a base for a 32-bit system designed to run beyond year 2038."
**The impact:** Y2K didn't get fixed; it just got bigger and delayed. There is no magic in software or computers, just people trying to solve complicated problems as best they can, and sometimes introducing more complicated problems for different people to solve at some point in the future.
## [What the dev? Celebrating Java's 25th anniversary][4]
> Java is coming up on a big milestone: its 25th anniversary! To celebrate, we take a look back over the last 25 years to see how Java has evolved over time. In this episode, Social Media and Online Editor Jenna Sargent talks to Rich Sharples, senior director of product management for middleware at Red Hat, to learn more.

**The impact:** There is something comforting about immersing yourself in a deep well of lived experience. Rich clearly lived through what he is talking about and shares insider knowledge with you (and his dog).
## [Do I need an API Gateway if I use a service mesh?][5]
> This post may not be able to break through the noise around API Gateways and Service Mesh. However, it's 2020 and there is still abundant confusion around these topics. I have chosen to write this to help bring real concrete explanation to help clarify differences, overlap, and when to use which. Feel free to [@ me on twitter (@christianposta)][6] if you feel I'm adding to the confusion, disagree, or wish to buy me a beer (and these are not mutually exclusive reasons).
**The impact:** Yes, though they use similar terms and concepts they have different concerns and scopes.
## [What Australia's AGL Energy learned about Cloud Native compliance][7]
> This is really at the heart of what open source is, enabling everybody to contribute equally. Within large enterprises, there are controls that are needed, but if we can automate the management of the majority of these controls, we can enable an amazing culture and development experience.
**The impact:** They say "software is eating the world" and "developers are the new kingmakers." The fact that compliance in an energy utility is subject to developer experience improvement basically proves both statements.
## [Monoliths are the future][8]
> And then what they end up doing is creating 50 deployables, but it's really a _distributed_ monolith. So it's actually the same thing, but instead of function calls and class instantiation, they're initiating things and throwing it over a network and hoping that it comes back. And since they can't reliably _make it_ come back, they introduce things like [Prometheus][9], [OpenTracing][10], all of this stuff. I'm like, **“What are you doing?!”**
**The impact:** Do things for real reasons with a clear-eyed understanding of what those reasons are and how they'll make your business or your organization better.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/linux-java-and-other-industry-news
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.theregister.co.uk/2020/01/30/linux_5_6_2038/
[3]: https://lkml.org/lkml/2020/1/29/355
[4]: https://whatthedev.buzzsprout.com/673192/2543290-celebrating-java-s-25th-anniversary-episode-16
[5]: https://blog.christianposta.com/microservices/do-i-need-an-api-gateway-if-i-have-a-service-mesh/ (Do I Need an API Gateway if I Use a Service Mesh?)
[6]: http://twitter.com/christianposta?lang=en
[7]: https://thenewstack.io/what-australias-agl-energy-learned-about-cloud-native-compliance/
[8]: https://changelog.com/posts/monoliths-are-the-future
[9]: https://prometheus.io/
[10]: https://opentracing.io
1
sources/news/README.md Normal file
View File
@ -0,0 +1 @@
News articles go here; they need to be timely.
View File
@ -1,221 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Ultimate Guide to JavaScript Fatigue: Realities of our industry)
[#]: via: (https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html)
[#]: author: (Lucas Fernandes Da Costa https://lucasfcosta.com)
The Ultimate Guide to JavaScript Fatigue: Realities of our industry
======
**Complaining about JS Fatigue is just like complaining about the fact that humanity has created too many tools to solve the problems we have**, from email to airplanes and spaceships.

Last week I gave a talk on this very subject at the NebraskaJS 2017 Conference, and I got so much positive feedback that I thought this talk should also become a blog post, in order to reach more people and help them deal with JS Fatigue and understand the realities of our industry. **My goal with this post is to change the way you think about software engineering in general and help you in any areas you might work on**.

One of the things that inspired me to write this blog post, and that totally changed my life, is [this great post by Patrick McKenzie, called “Don't Call Yourself a Programmer and other Career Advice”][1]. **I highly recommend you read that**. Most of this blog post is advice based on what Patrick wrote there, applied to the JavaScript ecosystem, along with a few more thoughts I've developed during my last years working in the tech industry.
This first section is gonna be a bit philosophical, but I swear it will be worth reading.
### Realities of Our Industry 101
Just like Patrick did in [his post][1], let's start with the most basic and essential truth about our industry:

Software solves business problems

This is it. **Software does not exist to please us as programmers** and let us write beautiful code. Nor does it exist to create jobs for people in the tech industry. **Actually, it exists to kill as many jobs as possible, including ours**, and this is why basic income will become much more important in the next few years, but that's a whole other subject.

I'm sorry to say it, but the reason things are this way is that there are only two things that matter in software engineering (and in any other industry):

**Cost versus Revenue**

**The more you decrease cost and increase revenue, the more valuable you are**, and one of the most common ways of decreasing cost and increasing revenue is replacing human beings with machines, which are more effective and usually cost less in the long run.
You are not paid to write code
**Technology is not a goal.** Nobody cares about which programming language you are using, nobody cares about which frameworks your team has chosen, nobody cares about how elegant your data structures are, and nobody cares about how good your code is. **The only thing anybody cares about is how much your software costs and how much revenue it generates**.

Writing beautiful code does not matter to your clients. We write beautiful code because it makes us more productive in the long run, and this decreases cost and increases revenue.

The whole reason we try not to write bugs is not that we value correctness, but that **our clients** value correctness. If you have ever seen a bug become a feature, you know what I'm talking about. That bug exists, but it should not be fixed. That happens because our goal is not to fix bugs; our goal is to generate revenue. If our bugs make clients happy, then they increase revenue, and therefore we are accomplishing our goals.

Reusable space rockets, self-driving cars, robots, artificial intelligence: these things do not exist just because someone thought it would be cool to create them. They exist because there are business interests behind them. And I'm not saying the people behind them just want money; I'm sure they also think that stuff is cool. But the truth is that if they were not economically viable, or had no potential to become so, they would not exist.

Probably I should not even call this section “Realities of Our Industry 101”; maybe I should just call it “Realities of Capitalism 101”.

And given that our only goal is to increase revenue and decrease cost, I think we as programmers should pay more attention to requirements and design, think with our own minds, and participate more actively in business decisions, which is why it is extremely important to know the problem domain we are working on. How many times have you found yourself thinking about what should happen in certain edge cases that had not been considered by your managers or business people?

In 1975, Boehm published research in which he found that about 64% of all errors in the software he was studying were caused by design, while only 36% of all errors were coding errors. Another study, [“Higher Order Software—A Methodology for Defining Software”][2], also states that **in the NASA Apollo project, about 73% of all errors were design errors**.

The whole reason design and requirements exist is that they define which problems we're going to solve, and solving problems is what generates revenue.
> Without requirements or design, programming is the art of adding bugs to an empty text file.
>
> * Louis Srygley
>
This same principle also applies to the tools we've got available in the JavaScript ecosystem. Babel, webpack, React, Redux, Mocha, Chai, TypeScript: all of them exist to solve a problem, and we've got to understand which problem they are trying to solve. We need to think carefully about when most of them are needed; otherwise, we will end up having JS Fatigue because:

JS Fatigue happens when people use tools they don't need to solve problems they don't have.

As Donald Knuth once said: “Premature optimization is the root of all evil”. Remember that software only exists to solve business problems, and most software out there is just boring; it does not have any high-scalability or high-performance constraints. Focus on solving business problems, focus on decreasing cost and generating revenue, because that is all that matters. Optimize when you need to; otherwise you will probably be adding unnecessary complexity to your software, which increases cost without generating enough revenue to justify it.

This is why I think we should apply [Test Driven Development][3] principles to everything we do in our job. And by saying this I'm not just talking about testing. **I'm talking about waiting for problems to appear before solving them. This is what TDD is all about**. As Kent Beck himself says: “TDD reduces fear”, because it guides your steps and allows you to take small steps towards solving your problems. One problem at a time. By doing the same thing when it comes to deciding when to adopt new technologies, we will also reduce fear.
Solving one problem at a time also decreases [Analysis Paralysis][4], which is basically what happens when you open Netflix and spend three hours concerned about making the optimal choice instead of actually watching something. By solving one problem at a time we reduce the scope of our decisions and by reducing the scope of our decisions we have fewer choices to make and by having fewer choices to make we decrease Analysis Paralysis.
Have you ever thought about how easier it was to decide what you were going to watch when there were only a few TV channels available? Or how easier it was to decide which game you were going to play when you had only a few cartridges at home?
### But what about JavaScript?
At the time I'm writing this post, NPM has 489,989 packages, and tomorrow approximately 515 new ones will be published.

And the packages we use and complain about have a history behind them that we must comprehend in order to understand why we need them. **They are all trying to solve problems.**

Babel, Dart, CoffeeScript and other transpilers come from our need to write code other than JavaScript that can still run in our browsers. Babel even lets us write new-generation JavaScript and makes sure it will work on older browsers, which has always been a great problem given the inconsistencies between browsers and their differing degrees of compliance with the ECMA Specification. Even though the ECMA spec is becoming more and more solid these days, we still need Babel. And if you want to read more about Babel's history, I highly recommend [this excellent post by Henry Zhu][5].

Module bundlers such as Webpack and Browserify also have their reason to exist. If you remember well, not so long ago we used to suffer a lot with lots of `script` tags and making them work together. They used to pollute the global namespace, and it was reasonably hard to make them work together when one depended on the other. In order to solve this, [`Require.js`][6] was created, but it still had its problems: it was not that straightforward, and its syntax also made it prone to other issues, as you can see [in this blog post][7]. Then Node.js came with `CommonJS` imports, which were synchronous, simple and clean, but we still needed a way to make that work in our browsers, and this is why we needed Webpack and Browserify.

And Webpack itself actually solves more problems than that by allowing us to deal with CSS, images and many other resources as if they were JavaScript dependencies.

Front-end frameworks are a bit more complicated, but the reason they exist is to reduce the cognitive load when we write code, so that we don't need to worry about manipulating the DOM ourselves or even dealing with messy browser APIs (another problem jQuery came to solve), which is not only error-prone but also unproductive.

This is what we have been doing this whole time in computer science. We use low-level abstractions and build even more abstractions on top of them. The more we can focus on describing how our software should work, instead of on the mechanics of making it work, the more productive we are.

But all those tools have something in common: **they exist because the web platform moves too fast**. Nowadays we're using web technology everywhere: in web browsers, in desktop applications, in phone applications and even in watch applications.

This evolution also creates problems we need to solve. PWAs, for example, do not exist only because they're cool and we programmers have fun writing them. Remember the first section of this post: **PWAs exist because they create business value**.

And usually standards are not created quickly enough, so we need to come up with our own solutions to these things, which is why it is great to have such a vibrant and creative community with us. We're solving problems all the time and **we are allowing natural selection to do its job**.

The tools that suit us best thrive, get more contributors and develop more quickly, and sometimes other tools end up incorporating the good ideas from the ones that thrive and becoming even more popular than them. This is how we evolve.
By having more tools we also have more choices. If you remember the UNIX philosophy well, it states that we should aim at creating programs that do one thing and do it well.
We can clearly see this happening in the JS testing environment, for example, where we have Mocha for running tests and Chai for doing assertions, while in Java, JUnit tries to do all these things. This means that if we have a problem with one of them, or if we find another one that suits us better, we can simply replace that small part and still keep the advantages of the others.
The UNIX philosophy also states that we should write programs that work together. And this is exactly what we are doing! Take a look at Babel, Webpack, and React, for example. They work very well together, but we still do not need one to use the other. In the testing environment, for example, if we're using Mocha and Chai, we can all of a sudden just install Karma and run those same tests in multiple environments.
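As a minimal sketch of that composition (the `add` function is just a stand-in), a Mocha test using Chai assertions looks like this:

```typescript
import { expect } from "chai"; // Chai provides the assertions

// `describe` and `it` are globals provided by Mocha, the runner.
// Swap either library out and the other keeps working.
function add(a: number, b: number): number {
  return a + b;
}

describe("add", () => {
  it("sums two numbers", () => {
    expect(add(2, 3)).to.equal(5);
  });
});
```

You could run this with `mocha` today and point Karma at the very same file tomorrow, without touching a single assertion.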
### How to Deal With It
My first piece of advice for anyone suffering from JS fatigue would definitely be to stay aware that **you don't need to know everything**. Trying to learn it all at once, even when we don't have to, only increases the feeling of fatigue. Go deep in the areas you love and for which you feel an inner motivation to study, and adopt a lazy approach when it comes to the others. I'm not saying that you should be lazy; I'm just saying that you can learn those only when needed. Whenever you face a problem that requires you to use a certain technology to solve it, go learn.
Another important thing to say is that **you should start from the beginning**. Make sure you have learned enough about JavaScript itself before using any JavaScript frameworks. This is the only way you will be able to understand them and bend them to your will; otherwise, whenever you face an error you have never seen before, you won't know which steps to take in order to solve it. Learning core web technologies such as CSS, HTML5, and JavaScript, as well as computer science fundamentals or even how the HTTP protocol works, will help you master any other technologies a lot more quickly.
But please, don't get too attached to that. Sometimes you gotta take a risk and start doing things on your own. As Sacha Greif has written in [this blog post][8], spending too much time learning the fundamentals is just like trying to learn how to swim by studying fluid dynamics. Sometimes you just gotta jump into the pool and try to swim by yourself.
And please, don't get too attached to a single technology. All of the things we have available nowadays have already been invented in the past. Of course, they have different features and a brand new name, but, in their essence, they are all the same.
If you look at NPM, it is nothing new; we already had Maven Central and RubyGems quite a long time ago.
In order to transpile your code, Babel applies the very same principles and theory as some of the oldest and most well-known compilers, such as GCC.
Even JSX is not a new idea: E4X (ECMAScript for XML) already existed more than 10 years ago.
Now you might ask: “What about Gulp, Grunt, and NPM scripts?” Well, I'm sorry, but we could already solve all those problems with GNU Make back in 1976. And actually, there are a reasonable number of JavaScript projects that still use it, such as Chai.js, for example. But we do not do that because we are hipsters who like vintage stuff. We use `make` because it solves our problems, and that is what you should aim at doing, as we've discussed before.
If you really want to understand a certain technology and be able to solve any problems you might face, please, dig deep. One of the most decisive factors for success is curiosity, so **dig deep into the technologies you like**. Try to understand them from the bottom up and, whenever you think something is just “magic”, debunk that myth by exploring the codebase yourself.
In my opinion, there is no better quote than this one by Richard Feynman when it comes to really learning something:
> What I cannot create, I do not understand
And just below this phrase, [on the same blackboard, Richard also wrote][9]:
> Know how to solve every problem that has been solved
Isn't this just amazing?
When Richard said that, he was talking about being able to take any theoretical result and re-derive it, but I think the exact same principle can be applied to software engineering. The tools that solve our problems have already been invented, they already exist, so we should be able to get to them all by ourselves.
This is the very reason I love [some of the videos available on Egghead.io][10] in which Dan Abramov explains how to implement certain features that exist in Redux from scratch, or [blog posts that teach you how to build your own JSX renderer][11].
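As a taste of that exercise, here is a sketch of a Redux-style store written from scratch. This is a simplification for learning purposes, not the real library's implementation:

```typescript
type Reducer<S, A> = (state: S | undefined, action: A) => S;

// The essence of a Redux-style store: state lives in a closure, a reducer
// computes the next state, and subscribers are notified on every dispatch.
function createStore<S, A extends { type: string }>(reducer: Reducer<S, A>) {
  let state = reducer(undefined, { type: "@@INIT" } as A); // initialize
  const listeners: Array<() => void> = [];
  return {
    getState: () => state,
    dispatch(action: A) {
      state = reducer(state, action);
      listeners.forEach((listener) => listener());
    },
    subscribe(listener: () => void) {
      listeners.push(listener);
      return () => listeners.splice(listeners.indexOf(listener), 1);
    },
  };
}

// Usage: a tiny counter.
const store = createStore((state: number = 0, action: { type: string }) =>
  action.type === "INCREMENT" ? state + 1 : state
);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: "INCREMENT" }); // logs 1
```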
So why not try to implement these things yourself, or go to GitHub and read the codebases to understand how they work? I'm sure you will find a lot of useful knowledge out there. Comments and tutorials might lie and be incorrect sometimes; the code cannot.
Another thing that we have been talking about a lot in this post is that **you should not get ahead of yourself**. Follow a TDD approach and solve one problem at a time. You are paid to increase revenue and decrease cost, and you do this by solving problems; this is the reason software exists.
And since we love comparing our role to those related to civil engineering, let's do a quick comparison between software development and civil engineering, just as [Sam Newman does in his brilliant book, “Building Microservices”][12].
We love calling ourselves “engineers” or “architects”, but is that term really correct? We have been developing software for what we know as computers for less than a hundred years, while the Colosseum, for example, has existed for about two thousand years.
When was the last time you saw a bridge fall, and when was the last time your telephone or your browser crashed?
In order to explain this, I'll use an example I love.
This is the beautiful and awesome city of Barcelona:
![The City of Barcelona][13]
When we look at it this way and from this distance, it just looks like any other city in the world, but when we look at it from above, this is how Barcelona looks:
![Barcelona from above][14]
As you can see, every block has the same size and all of them are very organized. If you've ever been to Barcelona, you will also know how good it is to move through the city and how well it works.
But the people who planned Barcelona could not predict what it was going to look like in the next two or three hundred years. In cities, people come in and move around all the time, so what the planners had to do was let it grow organically and adapt as time goes by. They had to be prepared for changes.
This very same thing happens to our software. It evolves quickly, refactors are often needed and requirements change more frequently than we would like them to.
So, instead of acting like a Software Engineer, act as a Town Planner. Let your software grow organically and adapt as needed. Solve problems as they come by but make sure everything still has its place.
Doing this with software is even easier than doing it with cities due to the fact that **software is flexible, civil engineering is not**. **In the software world, our build time is compile time**. In Barcelona we cannot simply destroy buildings to make space for new ones; in software we can do that a lot more easily. We can break things all the time, we can run experiments, because we can build as many times as we want, it usually takes seconds, and we spend a lot more time thinking than building. Our job is purely intellectual.
So **act like a town planner, let your software grow and adapt as needed**.
By doing this you will also have better abstractions and know when it's the right time to adopt them.
As Sam Koblenski says:
> Abstractions only work well in the right context, and the right context develops as the system develops.
Nowadays something I see very often is people looking for boilerplates when they're trying to learn a new technology, but, in my opinion, **you should avoid boilerplates when you're starting out**. Of course boilerplates and generators are useful if you are already experienced, but they take a lot of control out of your hands, and therefore you won't learn how to set up a project and won't understand exactly where each piece of the software you are using fits.
When you feel like you are struggling more than necessary to get something simple done, it might be the right time to look for an easier way to do it. In our role, **you should strive to be lazy**: you should work to not work. By doing that you have more free time to do other things, and this decreases cost and increases revenue, so that's another way of accomplishing your goal. You should not only work harder, you should work smarter.
Someone has probably already had the same problem you're having right now, but if nobody has, it might be your time to shine: build your own solution and help other people.
But sometimes you will not be able to realize you could be more effective in your tasks until you see someone doing them better. This is why it is so important to **talk to people**.
By talking to people, we share experiences that help each other's careers, discover new tools to improve our workflows, and, even more important than that, learn how they solve their problems. This is why I like reading blog posts in which companies explain how they solve their problems.
Especially in our area, we like to think that Google and StackOverflow can answer all our questions, but we still need to know which questions to ask. I'm sure you have already had a problem you could not find a solution for because you didn't know exactly what was happening and therefore didn't know what the right question to ask was.
But if I needed to sum this whole post in a single advice, it would be:
Solve problems.
Software is not a magic box; software is not poetry (unfortunately). It exists to solve problems and improve people's lives. Software exists to push the world forward.
**Now it's your time to go out there and solve problems**.
--------------------------------------------------------------------------------
via: https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html
作者:[Lucas Fernandes Da Costa][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://lucasfcosta.com
[b]: https://github.com/lujun9972
[1]: http://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-programmer/
[2]: http://ieeexplore.ieee.org/document/1702333/
[3]: https://en.wikipedia.org/wiki/Test_Driven_Development
[4]: https://en.wikipedia.org/wiki/Analysis_paralysis
[5]: https://babeljs.io/blog/2016/12/07/the-state-of-babel
[6]: http://requirejs.org
[7]: https://benmccormick.org/2015/05/28/moving-past-requirejs/
[8]: https://medium.freecodecamp.org/a-study-plan-to-cure-javascript-fatigue-8ad3a54f2eb1
[9]: https://www.quora.com/What-did-Richard-Feynman-mean-when-he-said-What-I-cannot-create-I-do-not-understand
[10]: https://egghead.io/lessons/javascript-redux-implementing-store-from-scratch
[11]: https://jasonformat.com/wtf-is-jsx/
[12]: https://www.barnesandnoble.com/p/building-microservices-sam-newman/1119741399/2677517060476?st=PLA&sid=BNB_DRS_Marketplace+Shopping+Books_00000000&2sid=Google_&sourceId=PLGoP4760&k_clickid=3x4760
[13]: /assets/barcelona-city.jpeg
[14]: /assets/barcelona-above.jpeg
[15]: https://twitter.com/thewizardlucas

What every software engineer should know about search
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*5AlsVRQrewLw74uHYTZ36w.jpeg)
### Want to build or improve a search experience? Start here.
Ask a software engineer: “[How would you add search functionality to your product?][78]” or “[How do I build a search engine?][79]” You'll probably immediately hear back something like: “Oh, we'd just launch an ElasticSearch cluster. Search is easy these days.”
But is it? Numerous current products [still][80] [have][81] [suboptimal][82] [search][83] [experiences][84]. Any true search expert will tell you that few engineers have a very deep understanding of how search engines work, knowledge that's often needed to improve search quality.
Even though many open source software packages exist, and the research is vast, the knowledge around building solid search experiences is limited to a select few. Ironically, [searching online][85] for search-related expertise doesn't yield any recent, thoughtful overviews.
#### Emoji Legend
```
❗ “Serious” gotcha: consequences of ignorance can be deadly
🔷 Especially notable idea or piece of technology
☁️ Cloud/SaaS
🍺 Open source / free software
🦏 JavaScript
🐍 Python
☕ Java
🇨 C/C++
```
### Why read this?
Think of this post as a collection of insights and resources that could help you to build search experiences. It can't be a complete reference, of course, but hopefully we can improve it based on feedback (please comment or reach out!).
I'll point to some of the most popular approaches, algorithms, techniques, and tools, based on my work on general-purpose and niche search experiences of varying sizes at Google, Airbnb, and several startups.
Not appreciating or understanding the scope and complexity of search problems can lead to bad user experiences, wasted engineering effort, and product failure.
If you're impatient or already know a lot of this, you might find it useful to jump ahead to the tools and services sections.
### Some philosophy
This is a long read. But most of what we cover has four underlying principles:
#### 🔷 Search is an inherently messy problem:
* Queries are highly variable, and the search problems themselves vary highly based on product needs.
* Think about how different Facebook search is (searching a graph of people),
* YouTube search (searching individual videos),
* or how different both of those are from Kayak ([air travel planning is a really hairy problem][2]),
* Google Maps (making sense of geo-spatial data),
* and Pinterest (pictures of a brunch you might cook one day).
#### Quality, metrics, and processes matter a lot:
* There is no magic bullet (like PageRank) nor a magic ranking formula that makes for a good approach. A good approach is an always-evolving collection of techniques and processes that solve aspects of the problem and improve the overall experience, usually gradually and continuously.
* ❗In other words, search is not just about building software that does ranking or retrieval (which we will discuss below) for a specific domain. Search systems are usually an evolving pipeline of components that are tuned and evolve over time and that build up to a cohesive experience.
* In particular, the key to success in search is building processes for evaluation and tuning into the product and development cycles. A search system architect should think about processes and metrics, not just technologies.
#### Use existing technologies first:
* As in most engineering problems, don't reinvent the wheel yourself. When possible, use existing services or open source tools. If an existing SaaS (such as [Algolia][3] or managed Elasticsearch) fits your constraints and you can afford to pay for it, use it. This solution will likely be the best choice for your product at first, even if down the road you need to customize, enhance, or replace it.
#### ❗Even if you buy, know the details:
* Even if you are using an existing open source or commercial solution, you should have some sense of the complexity of the search problem and where there are likely to be pitfalls.
### Theory: the search problem
Search is different for every product, and choices depend on many technical details of the requirements. It helps to identify the key parameters of your search problem:
1. Size: How big is the corpus (a complete set of documents that need to be searched)? Is it thousands or billions of documents?
2. Media: Are you searching through text, images, graphical relationships, or geospatial data?
3. 🔷 Corpus control and quality: Are the sources for the documents under your control, or coming from a (potentially adversarial) third party? Are all the documents ready to be indexed or need to be cleaned up and selected?
4. Indexing speed: Do you need real-time indexing, or is building indices in batch fine?
5. Query language: Are the queries structured, or do you need to support unstructured ones?
6. Query structure: Are your queries textual, images, sounds? Street addresses, record IDs, people's faces?
7. Context-dependence: Do the results depend on who the user is, their history with the product, their geographical location, the time of day, etc.?
8. Suggest support: Do you need to support incomplete queries?
9. Latency: What are the serving latency requirements? 100 milliseconds or 100 seconds?
10. Access control: Is it entirely public or should users only see a restricted subset of the documents?
11. Compliance: Are there compliance or organizational limitations?
12. Internationalization: Do you need to support documents with multilingual character sets or Unicode? (Hint: always use UTF-8 unless you really know what you're doing.) Do you need to support a multilingual corpus? Multilingual queries?
Thinking through these points up front can help you make significant choices designing and building individual search system components.
![](https://cdn-images-1.medium.com/max/1600/1*qTK1iCtyJUr4zOyw4IFD7A.jpeg)
A production indexing pipeline.
### Theory: the search pipeline
Now lets go through a list of search sub-problems. These are usually solved by separate subsystems that form a pipeline. What that means is that a given subsystem consumes the output of previous subsystems, and produces input for the following subsystems.
This leads to an important property of the ecosystem: once you change how an upstream subsystem works, you need to evaluate the effect of the change and possibly change the behavior downstream.
Here are the most important problems you need to solve:
#### Index selection:
given a set of documents (e.g. the entirety of the Internet, all the Twitter posts, all the pictures on Instagram), select a potentially smaller subset of documents that may be worthy of consideration as search results, and only include those in the index, discarding the rest. This is done to keep your indexes compact and is almost orthogonal to selecting the documents to show to the user. Examples of particular classes of documents that don't make the cut may include:
#### Spam:
oh, all the different shapes and sizes of search spam! A giant topic in itself, worthy of a separate guide. [A good web spam taxonomy overview][86].
#### Undesirable documents:
domain constraints might require filtering: [porn][87], illegal content, etc. The techniques are similar to spam filtering, probably with extra heuristics.
#### Duplicates:
Or near-duplicates and redundant documents. Can be done with [Locality-sensitive hashing][88], [similarity measures][89], clustering techniques or even [clickthrough data][90]. A [good overview][91] of techniques.
#### Low-utility documents:
The definition of utility depends highly on the problem domain, so it's hard to recommend approaches here. Some ideas: it might be possible to build a utility function for your documents; heuristics might work (for example, an image that contains only black pixels is not a useful document); or utility might be learned from user behavior.
#### Index construction:
For most search systems, document retrieval is performed using an [inverted index][92], often just called the index.
* The index is a mapping of search terms to documents. A search term could be a word, an image feature, or any other document derivative useful for query-to-document matching. The list of the documents for a given term is called a [posting list][1]. It can be sorted by some metric, like document quality. (A toy sketch of an inverted index follows this list.)
* Figure out whether you need to index the data in real time. ❗Many companies with large corpora of documents use a batch-oriented indexing approach, but then find this is unsuited to a product where users expect results to be current.
* With text documents, term extraction usually involves using NLP techniques, such as stop lists, [stemming][4], and [entity extraction][5]; for images or videos, computer vision methods are used, etc.
* In addition, documents are mined for statistical and meta information, such as references to other documents (used in the famous [PageRank][6] ranking signal), [topics][7], counts of term occurrences, document size, entities mentioned, etc. That information can later be used in ranking-signal construction or document clustering. Some larger systems might contain several indexes, e.g. for documents of different types.
* Index formats. The actual structure and layout of the index is a complex topic, since it can be optimized in many ways. For instance, there are [posting list compression methods][8], one could target an [mmap()able data representation][9], or use an [LSM-tree][10] for a continuously updated index.
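To make the core idea tangible, here is a toy inverted index with posting lists and intersection-based retrieval, as promised above. It is a sketch only: real indexes add positions, term frequencies, compression, and on-disk layouts, and the sample documents are made up.

```typescript
type DocId = number;

// Build a mapping from each term to the list of documents that
// contain it (the posting list).
function buildIndex(docs: string[]): Map<string, DocId[]> {
  const index = new Map<string, DocId[]>();
  docs.forEach((text, docId) => {
    for (const term of new Set(text.toLowerCase().split(/\W+/))) {
      if (!term) continue; // skip empty tokens
      const postings = index.get(term) ?? [];
      postings.push(docId); // docIds arrive in order, so lists stay sorted
      index.set(term, postings);
    }
  });
  return index;
}

// Retrieval for a multi-term query is an intersection of posting lists.
function search(index: Map<string, DocId[]>, query: string): DocId[] {
  const lists = query
    .toLowerCase()
    .split(/\W+/)
    .filter(Boolean)
    .map((term) => index.get(term) ?? []);
  if (lists.length === 0) return [];
  return lists.reduce((acc, list) => acc.filter((id) => list.includes(id)));
}

const idx = buildIndex(["a cat sat", "the cat ate", "dogs bark"]);
console.log(search(idx, "cat")); // [0, 1]
```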
#### Query analysis and document retrieval:
Most popular search systems allow non-structured queries. That means the system has to extract structure out of the query itself. In the case of an inverted index, you need to extract search terms using [NLP][93] techniques.
The extracted terms can be used to retrieve relevant documents. Unfortunately, most queries are not very well formulated, so it pays to do additional query expansion and rewriting, like:
* [Term re-weighting][11].
* [Spell checking][12]. Historical query logs are very useful as a dictionary.
* [Synonym matching][13]. [Another survey][14].
* [Named entity recognition][15]. A good approach is to use [HMM-based language modeling][16].
* Query classification. Detect queries of particular type. For example, Google Search detects queries that contain a geographical entity, a porny query, or a query about something in the news. The retrieval algorithm can then make a decision about which corpora or indexes to look at.
* Expansion through [personalization][17] or [local context][18]. Useful for queries like “gas stations around me”.
#### Ranking:
Given a list of documents (retrieved in the previous step), their signals, and a processed query, create an optimal ordering (ranking) for those documents.
Originally, most ranking models in use were hand-tuned weighted combinations of all the document signals. Signal sets might include PageRank, clickthrough data, topicality information and [others][94].
To further complicate things, many of those signals, such as PageRank, or ones generated by [statistical language models][95] contain parameters that greatly affect the performance of a signal. Those have to be hand-tuned too.
Lately, 🔷 [learning to rank][96] (LtR), a family of signal-based, discriminative, supervised approaches, has been becoming more and more popular. Some popular examples of LtR are [McRank][97] and [LambdaRank][98] from Microsoft, and [MatrixNet][99] from Yandex.
A new [vector-space-based approach][100] to semantic retrieval and ranking has been gaining popularity lately. The idea is to learn individual low-dimensional vector document representations, then build a model which maps queries into the same vector space.
Then, retrieval is just finding several documents that are closest by some metric (e.g. Euclidean distance) to the query vector. Ranking is the distance itself. If the mapping of both the documents and the queries is built well, the documents are chosen not by the mere presence of some simple pattern (like a word), but by how close they are to the query in _meaning_.
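A minimal sketch of that idea follows. The two-dimensional vectors and document names are made up; real systems use learned embeddings with hundreds of dimensions and approximate nearest-neighbor search instead of a brute-force scan.

```typescript
// Euclidean distance between two vectors of equal length.
function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
}

// Assume some model already mapped documents and the query into the
// same vector space (these numbers are invented for illustration).
const docVectors: Record<string, number[]> = {
  "doc-about-cats": [0.9, 0.1],
  "doc-about-dogs": [0.1, 0.9],
};
const queryVector = [0.8, 0.2]; // e.g. an embedding of the query "felines"

// Retrieval and ranking collapse into one step: sort by distance.
const ranked = Object.entries(docVectors)
  .map(([id, vector]) => ({ id, distance: euclidean(vector, queryVector) }))
  .sort((a, b) => a.distance - b.distance);

console.log(ranked[0].id); // "doc-about-cats"
```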
### Indexing pipeline operation
Usually, each of the above pieces of the pipeline must be operated on a regular basis to keep the search index and search experience current.
Operating a search pipeline can be complex and involve a lot of moving pieces. Not only is the data moving through the pipeline, but the code for each module and the formats and assumptions embedded in the data will change over time.
A pipeline can be run in “batch” mode on a regular or occasional basis (if indexing speed does not need to be real time), in a streamed way (if real-time indexing is needed), or based on certain triggers.
Some complex search engines (like Google) have several layers of pipelines operating on different time scales; for example, a page that changes often (like [cnn.com][101]) is indexed with a higher frequency than a static page that hasn't changed in years.
### Serving systems
Ultimately, the goal of a search system is to accept queries and use the index to return appropriately ranked results. While this subject can be incredibly complex and technical, we mention a few of the key aspects of this part of the system.
* Performance: users notice when the system they interact with is laggy. ❗Google has done [extensive research][19] and noticed that the number of searches falls by 0.6% when serving is slowed by 300 ms. They recommend serving results in under 200 ms for most of your queries. A good article [on the topic][20]. This is the hard part: the system needs to collect documents from possibly many computers, then merge them into a possibly very long list, and then sort that list in ranking order. To complicate things further, ranking might be query-dependent, so, while sorting, the system is not just comparing two numbers but performing computation.
* 🔷 Caching results: caching is often necessary to achieve decent performance. ❗️ But caches are just one large gotcha: they might show stale results when indices are updated or some results are blacklisted. Purging caches is a can of worms in itself: a search system might not have the capacity to serve the entire query stream with an empty (cold) cache, so the [cache needs to be pre-warmed][21] before the queries start arriving. Overall, caches complicate a system's performance profile. Choosing a cache size and a replacement algorithm is also a [challenge][22].
* Availability: often defined by the uptime/(uptime + downtime) metric. When the index is distributed, in order to serve any search results, the system often needs to query all the shards for their share of results. ❗That means that if one shard is unavailable, the entire search system is compromised. The more machines are involved in serving the index, the higher the probability of one of them becoming defunct and bringing the whole system down. For example, if each of 100 shards independently has 99.9% uptime, the naive availability of the whole index is 0.999^100 ≈ 90.5%.
* Managing multiple indices: indices for large systems may be separated into shards (pieces) or divided by media type or indexing cadence (fresh versus long-term indices). Results can then be merged.
* Merging results of different kinds: e.g. Google showing results from Maps, News etc.
![](https://cdn-images-1.medium.com/max/1600/1*M8WQu17E7SDziV0rVwUKbw.jpeg)
A human rater. Yeah, you should still have those.
### Quality, evaluation, and improvement
So youve launched your indexing pipeline and search servers, and its all running nicely. Unfortunately the road to a solid search experience only begins with running infrastructure.
Next, you'll need to build a set of processes around continuous search quality evaluation and improvement. In fact, this is actually most of the work and the hardest problem you'll have to solve.
🔷 What is quality? First, you'll need to determine (and get your boss or the product lead to agree) what quality means in your case:
* Self-reported user satisfaction (includes UX)
* Perceived relevance of the returned results (not including UX)
* Satisfaction relative to competitors
* Satisfaction relative to the performance of the previous version of the search engine (e.g. last week's)
* [User engagement][23]
#### Metrics:

Some of these concepts can be quite hard to quantify. On the other hand, it's incredibly useful to be able to express how well a search engine is performing in a single number, a quality metric.
By continuously computing such a metric for your (and your competitors') systems, you can both track your progress and explain how well you are doing to your boss. Here are some classical ways to quantify quality that can help you construct your magic quality-metric formula:
* [Precision][24] and [recall][25] measure how well the retrieved set of documents corresponds to the set you expected to see.
* [F score][26] (specifically F1 score) is a single number, that represents both precision and recall well.
* [Mean Average Precision][27] (MAP) quantifies the relevance of the top returned results.
* 🔷 [Normalized Discounted Cumulative Gain][28] (nDCG) is like MAP, but weights the relevance of a result by its position. (A toy sketch of several of these metrics follows this list.)
* [Long and short clicks][29] quantify how useful the results are to real users.
* [A good detailed overview][30].
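As a concrete illustration, here are toy implementations of a few of these metrics, as promised above. `retrieved` is what the engine returned, `relevant` is the judged set, and the nDCG input is a list of graded relevance gains in result order; all of it is a sketch, not production code.

```typescript
function precision(retrieved: string[], relevant: Set<string>): number {
  const hits = retrieved.filter((doc) => relevant.has(doc)).length;
  return retrieved.length ? hits / retrieved.length : 0;
}

function recall(retrieved: string[], relevant: Set<string>): number {
  const hits = retrieved.filter((doc) => relevant.has(doc)).length;
  return relevant.size ? hits / relevant.size : 0;
}

// F1 is the harmonic mean of precision and recall.
function f1(retrieved: string[], relevant: Set<string>): number {
  const p = precision(retrieved, relevant);
  const r = recall(retrieved, relevant);
  return p + r ? (2 * p * r) / (p + r) : 0;
}

// nDCG: gains discounted by log2 of the position, normalized by the
// best possible ordering of the same gains.
function ndcg(gains: number[]): number {
  const dcg = (g: number[]) =>
    g.reduce((sum, gain, i) => sum + gain / Math.log2(i + 2), 0);
  const ideal = dcg([...gains].sort((a, b) => b - a));
  return ideal ? dcg(gains) / ideal : 0;
}

const golden = new Set(["d1", "d2"]);
console.log(f1(["d1", "d3"], golden)); // 0.5
console.log(ndcg([3, 2, 0, 1])); // ~0.985
```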
#### 🔷 Human evaluations:

Quality metrics might seem like statistical calculations, but they can't all be done by automated calculations. Ultimately, metrics need to represent subjective human evaluation, and this is where a “human in the loop” comes into play.
Skipping human evaluation is probably the most widespread cause of sub-par search experiences.
Usually, at early stages, the developers themselves evaluate the results manually. At a later point, [human raters][102] (or assessors) may get involved. Raters typically use custom tools to look at returned search results and provide feedback on the quality of the results.
Subsequently, you can use the feedback signals to guide development, help make launch decisions or even feed them back into the index selection, retrieval or ranking systems.
Here is a list of some other types of human-driven evaluation that can be done on a search system:
* Basic user evaluation: The user ranks their satisfaction with the whole experience
* Comparative evaluation: Compare with other search results (compare with search results from earlier versions of the system or competitors)
* Retrieval evaluation: The query analysis and retrieval quality is often evaluated using manually constructed query-document sets. A user is shown a query and the list of the retrieved documents. She can then mark all the documents that are relevant to the query, and the ones that are not. The resulting pairs of (query, [relevant docs]) are called a “golden set”. Golden sets are remarkably useful. For one, an engineer can set up automatic retrieval regression tests using those sets (a sketch of such a test follows this list). The selection signal from golden sets can also be fed back as ground truth to term re-weighting and other query-rewriting models.
* Ranking evaluation: Raters are presented with a query and two documents side-by-side. The rater must choose the document that fits the query better. This creates a partial ordering on the documents for a given query. That ordering can later be compared to the output of the ranking system. The usual ranking quality measures used are MAP and nDCG.
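Here is a sketch of the golden-set regression test mentioned above; the `searchFn` argument stands in for whatever retrieval system you are testing, and all names are illustrative.

```typescript
type GoldenSet = Array<{ query: string; relevantDocs: string[] }>;

// Report every (query, doc) pair from the golden set that the current
// retrieval system no longer returns.
function checkGoldenSet(
  goldenSet: GoldenSet,
  searchFn: (query: string) => string[]
): string[] {
  const failures: string[] = [];
  for (const { query, relevantDocs } of goldenSet) {
    const results = new Set(searchFn(query));
    for (const doc of relevantDocs) {
      if (!results.has(doc)) {
        failures.push(`query "${query}" no longer retrieves ${doc}`);
      }
    }
  }
  return failures;
}
```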
#### Evaluation datasets:
One should start thinking about the datasets used for evaluation (like the “golden sets” mentioned above) early in the search experience design process. How will you collect and update them? How will you push them to the production eval pipeline? Is there a built-in bias?
#### Live experiments:
After your search engine catches on and gains enough users, you might want to start conducting [live search experiments][103] on a portion of your traffic. The basic idea is to turn some optimization on for a group of people and then compare the outcome with that of a “control” group, a similar sample of your users who did not have the experimental feature turned on. How you measure the outcome is, once again, very product-specific: it could be clicks on results, clicks on ads, etc.
#### Evaluation cycle time:

How fast you improve your search quality is directly related to how fast you can complete the above cycle of measurement and improvement. It is essential from the beginning to ask yourself, “How fast can we measure and improve our performance?”
Will it take days, hours, minutes or seconds to make changes and see if they improve quality? ❗Running evaluation should also be as easy as possible for the engineers and should not take too much hands-on time.
### 🔷 So… How do I PRACTICALLY build it?
This blog post is not meant as a tutorial, but here is a brief outline of how I'd approach building a search experience right now:
1. As was said above, if you can afford it, just buy an existing SaaS (some good ones are listed below). An existing service fits if:
* Your experience is a “connected” one (your service or app has internet connection).
* Does it support all the functionality you need out of the box? This post gives a pretty good idea of what functions you would want. To name a few, I'd at least consider: support for the media you are searching; real-time indexing support; and query flexibility, including context-dependent queries.
* Given the size of the corpus and the expected [QpS][31], can you afford to pay for it for the next 12 months?
* Can the service support your expected traffic within the required latency limits? If you are querying the service from an app, make sure the service is accessible quickly enough from where your users are.
2. If a hosted solution does not fit your needs or resources, you probably want to use one of the open source libraries or tools. For connected apps or websites, I'd choose ElasticSearch right now. For embedded experiences, there are multiple tools below.
3. You most likely want to do index selection and clean up your documents (say, extract relevant text from HTML pages) before uploading them to the search index. This will decrease the index size and make getting to good results easier. If your corpus fits on a single machine, just write a script (or several) to do that. If not, I'd use [Spark][104].
![](https://cdn-images-1.medium.com/max/1600/1*lGw4kVVQyj8E5by2GWVoQg.jpeg)
You can never have too many tools.
### ☁️ SaaS
☁️ 🔷[Algolia][105], a proprietary SaaS that indexes a client's website and provides an API to search the website's pages. They also have an API to submit your own documents, support context-dependent searches, and serve results really fast. If I were building a web search experience right now and could afford it, I'd probably use Algolia first, and buy myself time to build a comparable search experience.
* Various ElasticSearch providers: AWS (☁️ [ElasticSearch Cloud)][32], ☁️[elastic.co][33] and from ☁️ [Qbox][34].
* ☁️ [Azure Search][35], a SaaS solution from Microsoft. Accessible through a REST API, it can scale to billions of documents. It has a Lucene query interface to simplify migrations from Lucene-based solutions.
* ☁️ [Swiftype][36], an enterprise SaaS that indexes your company's internal services, like Salesforce, G Suite, Dropbox, and the intranet site.
### Tools and libraries
🍺☕🔷 [Lucene][106] is the most popular IR library. It implements query analysis, index retrieval, and ranking. Any of the components can be replaced by an alternative implementation. There is also a C port, 🍺[Lucy][107].
* 🍺☕🔷 [Solr][37] is a complete search server based on Lucene. It's part of the [Hadoop][38] ecosystem of tools.
* 🍺☕🔷 [Hadoop][39] is the most widely used open source MapReduce system, originally designed as an indexing pipeline framework for Solr. It has been gradually losing ground to 🍺[Spark][40] as the batch data processing framework used for indexing. ☁️[EMR][41] is a proprietary implementation of MapReduce on AWS.
* 🍺☕🔷 [ElasticSearch][42] is also based on Lucene ([feature comparison with Solr][43]). It has been getting more attention lately, so much so that a lot of people think of ES when they hear “search”, and for good reasons: it's well supported, has an [extensive API][44], [integrates with Hadoop][45], and [scales well][46]. There are open source and [Enterprise][47] versions. ES is also available as a SaaS. It can scale to billions of documents, but scaling to that point can be very challenging, so a typical scenario involves an orders-of-magnitude smaller corpus.
* 🍺🇨 [Xapian][48], a C++-based IR library. Relatively compact, so good for embedding into desktop or mobile applications.
* 🍺🇨 [Sphinx][49], a full-text search server. It has a SQL-like query language and can also act as a [storage engine for MySQL][50] or be used as a library.
* 🍺☕ [Nutch][51], a web crawler. It can be used in conjunction with Solr. It's also the tool behind [🍺Common Crawl][52].
* 🍺🦏 [Lunr][53], a compact embedded search library for web apps on the client side.
* 🍺🦏 [searchkit][54], a library of web UI components to use with ElasticSearch.
* 🍺🦏 [Norch][55], a [LevelDB][56]-based search engine library for Node.js.
* 🍺🐍 [Whoosh][57], a fast, full-featured search library implemented in pure Python.
* OpenStreetMap has its own 🍺[deck of search software][58].
### Datasets
A few fun or useful data sets to try building a search engine or evaluating search engine quality:
* 🍺🔷 [Commoncrawl][59], a regularly updated open web crawl dataset. There is a [mirror on AWS][60], accessible for free within the service.
* 🍺🔷 [Openstreetmap data dump][61] is a very rich source of data for someone building a geospatial search engine.
* 🍺 [Google Books N-grams][62] can be very useful for building language models.
* 🍺 [Wikipedia dumps][63] are a classic source to build, among other things, an entity graph out of. There is a [wide range of helper tools][64] available.
* [IMDb dumps][65] are a fun dataset to build a small toy search engine for.
### References
* [Modern Information Retrieval][66] by R. Baeza-Yates and B. Ribeiro-Neto is a good, deep academic treatment of the subject. This is a good overview for someone completely new to the topic.
* [Information Retrieval][67] by S. Büttcher, C. Clarke, and G. Cormack is another academic textbook with wide coverage that is more up-to-date. It covers learning-to-rank and does a pretty good job of discussing the theory of search-system evaluation. It also works as a good overview.
* [Learning to Rank][68] by T-Y Liu is the best theoretical treatment of LtR, though it is pretty thin on practical aspects. Someone considering building an LtR system should probably check it out.
* [Managing Gigabytes][69], published in 1999, is still a definitive reference for anyone embarking on building an efficient index of a significant size.
* [Text Retrieval and Search Engines][70], a MOOC from Coursera. A decent overview of the basics.
* [Indexing the World Wide Web: The Journey So Far][71] ([PDF][72]), an overview of web search from 2012, by Ankit Jain and Abhishek Das of Google.
* [Why Writing Your Own Search Engine is Hard][73], a classic article from 2004 by Anna Patterson.
* [https://github.com/harpribot/awesome-information-retrieval][74], a curated list of search-related resources.
* A [great blog][75] on everything search by [Daniel Tunkelang][76].
* Some good slides on [search engine evaluation][77].
This concludes my humble attempt to make a somewhat useful “map” for an aspiring search engine engineer. Did I miss something important? I'm pretty sure I did; you know, [the margin is too narrow][108] to contain this enormous topic. Let me know if you think that something should be here and is not. You can reach [me][109] at [forwidur@gmail.com][110] or at [@forwidur][111].
> P.S. This post is part of an open, collaborative effort to build an online reference, the Open Guide to Practical AI, which we'll release in draft form soon. See [this popular guide][112] for an example of what's coming. If you'd like to get updates on or help with this effort, sign up [here][113].
> Special thanks to [Joshua Levy][114], [Leo Polovets][115] and [Abhishek Das][116] for reading drafts of this and their invaluable feedback!
> Header image courtesy of [Mickaël Forrett][117]. The beautiful toolbox is called [The Studley Tool Chest][118].
--------------------------------------------------------------------------------
作者简介:
Max Grigorev
distributed systems, data, AI
-------------
via: https://medium.com/startup-grind/what-every-software-engineer-should-know-about-search-27d1df99f80d
作者:[Max Grigorev][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@forwidur?source=post_header_lockup
[1]:https://en.wikipedia.org/wiki/Inverted_index
[2]:http://www.demarcken.org/carl/papers/ITA-software-travel-complexity/ITA-software-travel-complexity.pdf
[3]:https://www.algolia.com/
[4]:https://en.wikipedia.org/wiki/Stemming
[5]:https://en.wikipedia.org/wiki/Named-entity_recognition
[6]:http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf
[7]:https://gofishdigital.com/semantic-topic-modeling/
[8]:https://nlp.stanford.edu/IR-book/html/htmledition/postings-file-compression-1.html
[9]:https://deplinenoise.wordpress.com/2013/03/31/fast-mmapable-data-structures/
[10]:https://en.wikipedia.org/wiki/Log-structured_merge-tree
[11]:http://orion.lcg.ufrj.br/Dr.Dobbs/books/book5/chap11.htm
[12]:http://norvig.com/spell-correct.html
[13]:http://nlp.stanford.edu/IR-book/html/htmledition/query-expansion-1.html
[14]:https://www.iro.umontreal.ca/~nie/IFT6255/carpineto-Survey-QE.pdf
[15]:https://en.wikipedia.org/wiki/Named-entity_recognition
[16]:http://www.aclweb.org/anthology/P02-1060
[17]:https://en.wikipedia.org/wiki/Personalized_search
[18]:http://searchengineland.com/future-search-engines-context-217550
[19]:http://services.google.com/fh/files/blogs/google_delayexp.pdf
[20]:http://highscalability.com/latency-everywhere-and-it-costs-you-sales-how-crush-it
[21]:https://stackoverflow.com/questions/22756092/what-does-it-mean-by-cold-cache-and-warm-cache-concept
[22]:https://en.wikipedia.org/wiki/Cache_performance_measurement_and_metric
[23]:http://blog.popcornmetrics.com/5-user-engagement-metrics-for-growth/
[24]:https://en.wikipedia.org/wiki/Information_retrieval#Precision
[25]:https://en.wikipedia.org/wiki/Information_retrieval#Recall
[26]:https://en.wikipedia.org/wiki/F1_score
[27]:http://fastml.com/what-you-wanted-to-know-about-mean-average-precision/
[28]:https://en.wikipedia.org/wiki/Discounted_cumulative_gain
[29]:http://www.blindfiveyearold.com/short-clicks-versus-long-clicks
[30]:https://arxiv.org/pdf/1302.2318.pdf
[31]:https://en.wikipedia.org/wiki/Queries_per_second
[32]:https://aws.amazon.com/elasticsearch-service/
[33]:https://www.elastic.co/
[34]:https://qbox.io/
[35]:https://azure.microsoft.com/en-us/services/search/
[36]:https://swiftype.com/
[37]:http://lucene.apache.org/solr/
[38]:http://hadoop.apache.org/
[39]:http://hadoop.apache.org/
[40]:http://spark.apache.org/
[41]:https://aws.amazon.com/emr/
[42]:https://www.elastic.co/products/elasticsearch
[43]:http://solr-vs-elasticsearch.com/
[44]:https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html
[45]:https://github.com/elastic/elasticsearch-hadoop
[46]:https://www.elastic.co/guide/en/elasticsearch/guide/current/distributed-cluster.html
[47]:https://www.elastic.co/cloud/enterprise
[48]:https://xapian.org/
[49]:http://sphinxsearch.com/
[50]:https://mariadb.com/kb/en/mariadb/sphinx-storage-engine/
[51]:https://nutch.apache.org/
[52]:http://commoncrawl.org/
[53]:https://lunrjs.com/
[54]:https://github.com/searchkit/searchkit
[55]:https://github.com/fergiemcdowall/norch
[56]:https://github.com/google/leveldb
[57]:https://bitbucket.org/mchaput/whoosh/wiki/Home
[58]:http://wiki.openstreetmap.org/wiki/Search_engines
[59]:http://commoncrawl.org/
[60]:https://aws.amazon.com/public-datasets/common-crawl/
[61]:http://wiki.openstreetmap.org/wiki/Downloading_data
[62]:http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html
[63]:https://dumps.wikimedia.org/
[64]:https://www.mediawiki.org/wiki/Alternative_parsers
[65]:http://www.imdb.com/interfaces
[66]:https://www.amazon.com/dp/0321416910
[67]:https://www.amazon.com/dp/0262528878/
[68]:https://www.amazon.com/dp/3642142664/
[69]:https://www.amazon.com/dp/1558605703
[70]:https://www.coursera.org/learn/text-retrieval
[71]:https://research.google.com/pubs/pub37043.html
[72]:https://pdfs.semanticscholar.org/28d8/288bff1b1fc693e6d80c238de9fe8b5e8160.pdf
[73]:http://queue.acm.org/detail.cfm?id=988407
[74]:https://github.com/harpribot/awesome-information-retrieval
[75]:https://medium.com/@dtunkelang
[76]:https://www.cs.cmu.edu/~quixote/
[77]:https://web.stanford.edu/class/cs276/handouts/lecture8-evaluation_2014-one-per-page.pdf
[78]:https://stackoverflow.com/questions/34314/how-do-i-implement-search-functionality-in-a-website
[79]:https://www.quora.com/How-to-build-a-search-engine-from-scratch
[80]:https://github.com/isaacs/github/issues/908
[81]:https://www.reddit.com/r/Windows10/comments/4jbxgo/can_we_talk_about_how_bad_windows_10_search_sucks/d365mce/
[82]:https://www.reddit.com/r/spotify/comments/2apwpd/the_search_function_sucks_let_me_explain/
[83]:https://medium.com/@RohitPaulK/github-issues-suck-723a5b80a1a3#.yp8ui3g9i
[84]:https://thenextweb.com/opinion/2016/01/11/netflix-search-sucks-flixed-fixes-it/
[85]:https://www.google.com/search?q=building+a+search+engine
[86]:http://airweb.cse.lehigh.edu/2005/gyongyi.pdf
[87]:https://www.researchgate.net/profile/Gabriel_Sanchez-Perez/publication/262371199_Explicit_image_detection_using_YCbCr_space_color_model_as_skin_detection/links/549839cf0cf2519f5a1dd966.pdf
[88]:https://en.wikipedia.org/wiki/Locality-sensitive_hashing
[89]:https://en.wikipedia.org/wiki/Similarity_measure
[90]:https://www.microsoft.com/en-us/research/wp-content/uploads/2011/02/RadlinskiBennettYilmaz_WSDM2011.pdf
[91]:http://infolab.stanford.edu/~ullman/mmds/ch3.pdf
[92]:https://en.wikipedia.org/wiki/Inverted_index
[93]:https://en.wikipedia.org/wiki/Natural_language_processing
[94]:http://backlinko.com/google-ranking-factors
[95]:http://times.cs.uiuc.edu/czhai/pub/slmir-now.pdf
[96]:https://en.wikipedia.org/wiki/Learning_to_rank
[97]:https://papers.nips.cc/paper/3270-mcrank-learning-to-rank-using-multiple-classification-and-gradient-boosting.pdf
[98]:https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/lambdarank.pdf
[99]:https://yandex.com/company/technologies/matrixnet/
[100]:https://arxiv.org/abs/1708.02702
[101]:http://cnn.com/
[102]:http://static.googleusercontent.com/media/www.google.com/en//insidesearch/howsearchworks/assets/searchqualityevaluatorguidelines.pdf
[103]:https://googleblog.blogspot.co.uk/2008/08/search-experiments-large-and-small.html
[104]:https://spark.apache.org/
[105]:https://www.algolia.com/
[106]:https://lucene.apache.org/
[107]:https://lucy.apache.org/
[108]:https://www.brainyquote.com/quotes/quotes/p/pierredefe204944.html
[109]:https://www.linkedin.com/in/grigorev/
[110]:mailto:forwidur@gmail.com
[111]:https://twitter.com/forwidur
[112]:https://github.com/open-guides/og-aws
[113]:https://upscri.be/d29cfe/
[114]:https://twitter.com/ojoshe
[115]:https://twitter.com/lpolovets
[116]:https://www.linkedin.com/in/abhishek-das-3280053/
[117]:https://www.behance.net/gallery/3530289/-HORIZON-
[118]:https://en.wikipedia.org/wiki/Henry_O._Studley

Why I love technical debt
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory1.png?itok=nbSRovsj)
This is not necessarily the title you'd expect for an article, I guess,* but I'm a fan of [technical debt][1]. There are two reasons for this: a Bad Reason and a Good Reason. I'll be upfront about the Bad Reason first, then explain why even that isn't really a reason to love it. I'll then tackle the Good Reason, and you'll nod along in agreement.
### The Bad Reason I love technical debt
We'll get this out of the way, then, shall we? The Bad Reason is that, well, there's just lots of it, it's interesting, it keeps me in a job, and it always provides a reason, as a security architect, for me to get involved in** projects that might give me something new to look at. I suppose those aren't all bad things. It can also be a bit depressing, because there's always so much of it, it's not always interesting, and sometimes I need to get involved even when I might have better things to do.
And what's worse is that it almost always seems to be security-related, and it's always there. That's the bad part.
Security, we all know, is the piece that so often gets left out, or tacked on at the end, or done in half the time it deserves, or done by people who have half an idea, but don't quite fully grasp it. I should be clear at this point: I'm not saying that this last reason is those people's fault. That people know they need security is fantastic. If we (the security folks) or we (the organization) haven't done a good enough job in making sufficient security resources--whether people, training, or visibility--available to those people who need it, the fact that they're trying is great and something we can work on. Let's call that a positive. Or at least a reason for hope.***
### The Good Reason I love technical debt
Let's get on to the other reason: the legitimate reason. I love technical debt when it's named.
What does that mean?
We all get that technical debt is a bad thing. It's what happens when you make decisions for pragmatic reasons that are likely to come back and bite you later in a project's lifecycle. Here are a few classic examples that relate to security:
* Not getting around to applying authentication or authorization controls on APIs that might, at some point, be public.
* Lumping capabilities together so it's difficult to separate out appropriate roles later on.
* Hard-coding roles in ways that don't allow for customisation by people who may use your application in different ways from those you initially considered.
* Hard-coding cipher suites for cryptographic protocols, rather than putting them in a config file where they can be changed or selected later. (A sketch of the config-file approach follows this list.)
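As a sketch of that last, config-file approach (the file name, schema, and suite string below are all illustrative, not from any particular product):

```typescript
import { readFileSync } from "fs";

interface TlsConfig {
  cipherSuites: string[]; // e.g. ["TLS_AES_256_GCM_SHA384"]
}

// Cipher suites come from configuration, not from the code, so changing
// or restricting them later is an operational change rather than a release.
function loadTlsConfig(path: string): TlsConfig {
  const config = JSON.parse(readFileSync(path, "utf-8")) as TlsConfig;
  if (!config.cipherSuites?.length) {
    throw new Error("at least one cipher suite must be configured");
  }
  return config;
}

const tls = loadTlsConfig("./tls-config.json");
console.log(tls.cipherSuites);
```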
There are lots more, of course, but those are just a few that jump out at me and that I've seen over the years. Technical debt means making decisions that will mean more work later on to fix them. And that can't be good, can it?
There are two words in the preceding paragraphs that should make us happy: they are "decisions" and "pragmatic." Because, in order for something to be named technical debt, I'd argue, it has to have been subject to conscious decision-making, and trade-offs must have been made--hopefully for rational reasons. Those reasons may be many and various--lack of qualified resources; project deadlines; lack of sufficient requirement definition--but if they've been made consciously, then the technical debt can be named, and if technical debt can be named, it can be documented.
And if it's documented, we're halfway there. As a security guy, I know that I can't force everything that goes out of the door to meet all the requirements I'd like--but the same goes for the high availability gal, the UX team, the performance folks, etc.
What we need--what we all need--is for documentation to exist about why decisions were made, because when we return to the problem we'll know it was thought about. And, what's more, the recording of that information might even make it into product documentation. "This API is designed to be used in a protected environment and should not be exposed on the public Internet" is a great piece of documentation. It may not be what a customer is looking for, but at least they know how to deploy the product, and, crucially, it's an opportunity for them to come back to the product manager and say, "We'd really like to deploy that particular API in this way. Could you please add this as a feature request?" Product managers like that. Very much.****
The best thing, though, is not just that named technical debt is visible technical debt, but that if you encourage your developers to document the decisions in code,***** then there's a decent chance that they'll record some ideas about how this should be done in the future. If you're really lucky, they might even add some hooks in the code to make it easier (an "auth" parameter on the API, which is unused in the current version, but will make API compatibility so much simpler in new releases; or a cipher entry in the config file that currently only accepts one option, but is at least checked by the code).
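To illustrate that kind of hook, here is a hypothetical sketch (every name in it is made up) of an API that reserves an `auth` parameter before authentication actually exists:

```typescript
interface FetchOptions {
  // Unused in v1; reserved for a future token-based scheme. Documenting
  // the intent here is exactly the "named technical debt" idea above.
  auth?: string;
}

function fetchRecord(id: string, options: FetchOptions = {}): string {
  // TODO(tech-debt): enforce `options.auth` once the auth service ships.
  return `record ${id}`;
}

// Callers can already pass credentials today; a later version can start
// checking them without breaking a single call site.
fetchRecord("42", { auth: "token-abc" });
```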
I've been a bit disingenuous, I know, by defining technical debt as named technical debt. But honestly, if it's not named, then you can't know what it is, and until you know what it is, you can't fix it.******* My advice is this: when you're doing a release close-down (or in your weekly standup--EVERY weekly standup), have an agenda item to record technical debt. Name it, document it, be proud, sleep at night.
* Well, apart from the obvious clickbait reason--for which I'm (a little) sorry.
** I nearly wrote "poke my nose into."
*** Work with me here.
**** If you're software engineer/coder/hacker, here's a piece of advice: Learn to talk to product managers like real people, and treat them nicely. They (the better ones, at least) are invaluable allies when you need to prioritize features or have tricky trade-offs to make.
***** Do this. Just do it. Documentation that isn't at least mirrored in code isn't real documentation.******
****** Don't believe me? Talk to developers. "Who reads product documentation?" "Oh, the spec? I skimmed it. A few releases back. I think." "I looked in the header file; couldn't see it there."
******* Or decide not to fix it, which may also be an entirely appropriate decision.
This article originally appeared on [Alice, Eve, and Bob - a security blog][2] and is republished with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/10/why-i-love-technical-debt
作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Technical_debt
[2]:https://aliceevebob.wordpress.com/2017/08/29/why-i-love-technical-debt/

How to Monetize an Open Source Project
======
![](http://www.itprotoday.com/sites/itprotoday.com/files/styles/article_featured_standard/public/ThinkstockPhotos-629994230_0.jpg?itok=5dZ68OTn)
The problem for any small group of developers putting the finishing touches on a commercial open source application is figuring out how to monetize the software in order to keep the bills paid and food on the table. Often these small pre-startups will start by deciding which of the recognized open source business models they're going to adapt, whether that be following Red Hat's lead and offering professional services, going the SaaS route, releasing as open core or something else.
Steven Grandchamp, general manager for MariaDB's North America operations and CEO for Denver-based startup [Drud Tech][1], thinks that might be putting the cart before the horse. With an open source project, the best first move is to get people downloading and using your product for free.
**Related:** [Demand for Open Source Skills Continues to Grow][2]
"The number one tangent to monetization in any open source product is adoption, because the key to monetizing an open source product is you flip what I would call the sales funnel upside down," he told ITPro at the recent All Things Open conference in Raleigh, North Carolina.
In many ways, he said, selling open source solutions is the opposite of marketing traditional proprietary products, where adoption doesn't happen until after a contract is signed.
**Related:** [Is Raleigh the East Coast's Silicon Valley?][3]
"In a proprietary software company, you advertise, you market, you make claims about what the product can do, and then you have sales people talk to customers. Maybe you have a free trial or whatever. Maybe you have a small version. Maybe it's time bombed or something like that, but you don't really get to realize the benefit of the product until there's a contract and money changes hands."
Selling open source solutions is different because of the challenge of selling software that's freely available as a GitHub download.
"The whole idea is to put the product out there, let people use it, experiment with it, and jump on the chat channels," he said, pointing out that his company Drud has a public chat channel that's open to anybody using their product. "A subset of that group is going to raise their hand and go, 'Hey, we need more help. We'd like a tighter relationship with the company. We'd like to know where your road map's going. We'd like to know about customization. We'd like to know if maybe this thing might be on your road map.'"
Grandchamp knows more than a little about making software pay, from both the proprietary and open source sides of the fence. In the 1980s he served as VP of research and development at Formation Technologies, and became SVP of R&D at John H. Harland after it acquired Formation in the mid-90s. He joined MariaDB in 2016, after serving eight years as CEO at OpenLogic, which was providing commercial support for more than 600 open-source projects at the time it was acquired by Rogue Wave Software. Along the way, there was a two-year stint at Microsoft's Redmond campus.
OpenLogic was where he discovered open source, and his experiences there are key to his approach for monetizing open source projects.
"When I got to OpenLogic, I was told that we had 300 customers that were each paying $99 a year for access to our tool," he explained. "But the problem was that nobody was renewing the tool. So I called every single customer that I could find and said 'did you like the tool?'"
It turned out that nearly everyone he talked to was extremely happy with the company's software, which ironically was the reason they weren't renewing. The company's tool solved their problem so well there was no need to renew.
"What could we have offered that would have made you renew the tool?" he asked. "They said, 'If you had supported all of the open source products that your tool assembled for me, then I would have that ongoing relationship with you.'"
Grandchamp immediately grasped the situation, and when the CTO said such support would be impossible, Grandchamp didn't mince words: "Then we don't have a company."
"We figured out a way to support it," he said. "We created something called the Open Logic Expert Community. We developed relationships with committers and contributors to a couple of hundred open source packages, and we acted as sort of the hub of the SLA for our customers. We had some people on staff, too, who knew the big projects."
After that successful launch, Grandchamp and his team began hearing from customers that they were confused over exactly what open source code they were using in their projects. That led to the development of what he says was the first software-as-a-service open source compliance portal, which could scan an application's code and produce a list of all of the open source code included in the project. When customers then expressed confusion over compliance issues, the SaaS service was expanded to flag potential licensing conflicts.
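The article doesn't say how that portal worked internally, but the general shape of such a scan is straightforward to sketch: walk a source tree, match known license texts, and flag combinations that merit human review. In the sketch below, the signature strings, the conflict pair, and the naive substring-matching strategy are all hypothetical stand-ins of my own, not the portal's actual logic and certainly not legal guidance.

```python
# A deliberately naive sketch of a license scan: find which known
# license texts appear under a source tree, then flag combinations
# that commonly prompt a compatibility review. The signatures and
# the conflict pair below are illustrative placeholders.
import os

LICENSE_SIGNATURES = {
    "GPL-2.0": "GNU General Public License, version 2",
    "Apache-2.0": "Apache License, Version 2.0",
    "MIT": "Permission is hereby granted, free of charge",
}

# Combinations a reviewer might want to look at (placeholder data).
POTENTIAL_CONFLICTS = {frozenset({"GPL-2.0", "Apache-2.0"})}

def scan_tree(root):
    """Return the license IDs whose signature text appears under root."""
    found = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                with open(os.path.join(dirpath, name),
                          encoding="utf-8", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file; skip it
            for license_id, signature in LICENSE_SIGNATURES.items():
                if signature in text:
                    found.add(license_id)
    return found

if __name__ == "__main__":
    licenses = scan_tree(".")
    print("Licenses detected:", sorted(licenses))
    for pair in POTENTIAL_CONFLICTS:
        if pair <= licenses:
            print("Flag for review:", " + ".join(sorted(pair)))
```

A production service would obviously need fuzzier matching and a real license database, but even this toy version suggests why customers valued a hosted inventory: the tedium scales with the size of the dependency tree.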
Although the product lines were completely different, the same approach was used to monetize MariaDB, then called SkySQL, after MySQL co-founders Michael "Monty" Widenius, David Axmark, and Allan Larsson created the project by forking MySQL, which Oracle had acquired from Sun Microsystems in 2010.
Again, users were approached and asked what things they would be willing to purchase.
"They wanted different functionality in the database, and you didn't really understand this if you didn't talk to your customers," Grandchamp explained. "Monty and his team, while they were being acquired at Sun and Oracle, were working on all kinds of new functionality, around cloud deployments, around different ways to do clustering, they were working on lots of different things. That work, Oracle and MySQL didn't really pick up."
Rolling in the new features customers wanted had to be handled gingerly, because it was important to the folks at MariaDB not to break compatibility with MySQL. This necessitated a strategy around when the code bases would come together and when they would separate. "That road map, knowledge, influence and technical information was worth paying for."
As with OpenLogic, MariaDB customers expressed a willingness to spend money on a variety of fronts. For example, a big driver in the early days was a project called Remote DBA, which helped customers make up for a shortage of qualified database administrators. The project could help with design issues, as well as monitor existing systems to take the workload off of a customer's DBA team. The service also offered access to MariaDB's own DBAs, many of whom had a history with the database going back to the early days of MySQL.
"That was a subscription offering that people were definitely willing to pay for," he said.
The company also learned, again by asking and listening to customers, that there were various types of support subscriptions that customers were willing to purchase, including subscriptions around capability and functionality, and a managed service component of Remote DBA.
These days Grandchamp is putting much of his focus on his latest project, Drud, a startup that offers a suite of integrated, automated, open source development tools for developing and managing multiple websites, which can be running on any combination of content management systems and deployment platforms. It is monetized partially through modules that add features like a centralized dashboard and an "intelligence engine."
As you might imagine, he got it off the ground by talking to customers and giving them what they indicated they'd be willing to purchase.
"Our number one customer target is the agency market," he said. "The enterprise market is a big target, but I believe it's our second target, not our first. And the reason it's number two is they don't make decisions very fast. There are technology refresh cycles that have to come up, there are lots of politics involved and lots of different vendors. It's lucrative once you're in, but in a startup you've got to figure out how to pay your bills. I want to pay my bills today. I don't want to pay them in three years."
Drud's focus on the agency market illustrates another consideration: the importance of understanding something about your customers' business. In conversations with agencies, many said they were tired of proprietary vendors that didn't understand their business offering them generic software that really didn't match their needs. In Drud's case, that understanding is built into the company DNA. The software was developed by an agency to fill its own needs.
"We are a platform designed by an agency for an agency," Grandchamp said. "Right there is a relationship that they're willing to pay for. We know their business."
Grandchamp noted that startups also need to be able to distinguish users from customers. Most of the people downloading and using commercial open source software aren't the people who have authorization to make purchasing decisions. These users, however, can point to the people who control the purse strings.
"It's our job to build a way to communicate with those users, provide them value so that they'll give us value," he explained. "It has to be an equal exchange. I give you value of a tool that works, some advice, really good documentation, access to experts who can sort of guide you along. Along the way I'm asking you for pieces of information. Who do you work for? How are the technology decisions happening in your company? Are there other people in your company that we should refer the product to? We have to create the dialog."
In the end, Grandchamp said, in the open source world the people who go out to find business probably shouldn't see themselves as salespeople, but rather, as problem solvers.
"I believe that you're not really going to need salespeople in this model. I think you're going to need customer success people. I think you're going to need people who can enable your customers to be successful in a business relationship that's more highly transactional."
"People don't like to be sold," he added, "especially in open source. The last person they want to see is the sales person, but they like to ply and try and consume and give you input and give you feedback. They love that."
--------------------------------------------------------------------------------
via: http://www.itprotoday.com/software-development/how-monetize-open-source-project
作者:[Christine Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itprotoday.com/author/christine-hall
[1]:https://www.drud.com/
[2]:http://www.itprotoday.com/open-source/demand-open-source-skills-continues-grow
[3]:http://www.itprotoday.com/software-development/raleigh-east-coasts-silicon-valley

View File

@ -1,87 +0,0 @@
Why pair writing helps improve documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)
Professional writers, at least in the Red Hat documentation team, nearly always work on docs alone. But have you tried writing as part of a pair? In this article, I'll explain a few benefits of pair writing.
### What is pair writing?
Pair writing is when two writers work in real time, on the same piece of text, in the same room. This approach improves document quality, speeds up writing, and allows writers to learn from each other. The idea of pair writing is borrowed from [pair programming][1].
When pair writing, you and your colleague work on the text together, making suggestions and asking questions as needed. Meanwhile, you're observing each other's work. For example, while one is writing, the other writer observes details such as structure or context. Often discussion around the document turns into sharing experiences and opinions, and brainstorming about writing in general.
At all times, the writing is done by only one person. Thus, you need only one computer, unless you want one writer to do online research while the other person does the writing. The text workflow is the same as if you are working alone: a text editor, the documentation source files, git, and so on.
### Pair writing in practice
My colleague Aneta Steflova and I have done more than 50 hours of pair writing working on the Red Hat Enterprise Linux System Administration docs and on the Red Hat Identity Management docs. I've found that, compared to writing alone, pair writing:
* is as productive or more productive;
* improves document quality;
* helps writers share technical expertise; and
* is more fun.
### Speed
Two writers writing one text? Sounds half as productive, right? Wrong. (Usually.)
Pair writing can help you work faster because two people have solutions to a bigger set of problems, which means getting blocked less often during the process. For example, one time we wrote urgent API docs for identity management. I know at least the basics of web APIs, the REST protocol, and so on, which helped us speed through those parts of the documentation. Working alone, Aneta would have needed to interrupt the writing process frequently to study these topics.
### Quality
Poor wording or sentence structure, inconsistencies in material, and so on have a harder time surviving under the scrutiny of four eyes. For example, one of our pair writing documents was reviewed by an extremely critical developer, who was known for catching technical inaccuracies and bad structure. After this particular review, he said, "Perfect. Thanks a lot."
### Sharing expertise
Each of us lives in our own writing bubble, and we normally don't know how others approach writing. Pair writing can help you improve your own writing process. For example, Aneta showed me how to better handle assignments in which the developer has provided starting text (as opposed to the writer writing from scratch using their own knowledge of the subject), which I didn't have experience with. Also, she structures the docs thoroughly, which I began doing as well.
As another example, I'm good enough at Vim that XML editing (e.g., tags manipulation) is enjoyable instead of torturous. Aneta saw how I was using Vim, asked about it, suffered through the learning curve, and now takes advantage of the Vim features that help me.
Pair writing is especially good for helping and mentoring new writers, and it's a great way to get to know professionally (and have fun with) colleagues.
### When pair writing shines
In addition to benefits I've already listed, pair writing is especially good for:
* **Working with [Bugzilla][2]**: Bugzillas can be cumbersome and cause problems, especially for administration-clumsy people (like me).
* **Reviewing existing documents**: When documentation needs to be expanded or fixed, it is necessary to first examine the existing document.
* **Learning new technology**: A fellow writer can be a better teacher than an engineer.
* **Writing emails/requests for information to developers with well-chosen questions**: The difficulty of this task rises in proportion to the difficulty of the technology you are documenting.
Also, with pair writing, feedback is in real time, as-needed, and two-way.
On the downside, pair writing can move at a faster pace, giving a writer less time to mull over a topic or wording. On the other hand, peer review is generally not necessary after pair writing.
### Words of caution
To get the most out of pair writing:
* Go into the project well prepared, otherwise you can waste your colleague's time.
* Talkative types need to stay focused on the task, otherwise they end up talking rather than writing.
* Be prepared for direct feedback. Pair writing is not for feedback-allergic writers.
* Beware of session hijackers. Dominant personalities can turn pair writing into writing solo with a spectator. (However, it _can_ be good if one person takes over at times, as long as the less-experienced partner learns from the hijacker, or the more-experienced writer is providing feedback to the hijacker.)
### Conclusion
Pair writing is a meeting, but one in which you actually get work done. It's an activity that lets writers focus on the one indispensable thing in our vocation--writing.
_This post was written with the help of pair writing with Aneta Steflova._
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/try-pair-writing
作者:[Maxim Svistunov][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/maxim-svistunov
[1]:https://developer.atlassian.com/blog/2015/05/try-pair-programming/
[2]:https://www.bugzilla.org/

View File

@ -1,120 +0,0 @@
Why and How to Set an Open Source Strategy
============================================================
![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/open-source-strategy-1024x576.jpg)
This article explains how to walk through, measure, and define strategies collaboratively in an open source community.
_“If you don't know where you are going, you'll end up someplace else.”_ — Yogi Berra
Open source projects are generally started as a way to scratch one's itch — and frankly that's one of their greatest attributes. Getting code down provides a tangible method to express an idea, showcase a need, and solve a problem. It avoids overthinking and getting a project stuck in analysis-paralysis, letting the project pragmatically solve the problem at hand.
Next, a project starts to scale up and gets many varied users and contributions, with plenty of opinions along the way. That leads to the next big challenge — how does a project start to build a strategic vision? In this article, I'll describe how to walk through, measure, and define strategies collaboratively, in a community.
Strategy may seem like a buzzword of the corporate world rather than something that an open source community would embrace, so I suggest stripping away the negative actions that are sometimes associated with this word (e.g., staff reductions, discontinuations, office closures). Strategy done right isn't a tool to justify unfortunate actions but to help show focus and where each community member can contribute.
A good application of strategy answers the following questions:
* Why does the project exist?
* What does the project look to achieve?
* What is the ideal end state for the project?
The key to success is answering these questions as simply as possible, with consensus from your community. Let's look at some ways to do this.
### Setting a mission and vision
_“Efforts and courage are not enough without purpose and direction.”_ — John F. Kennedy
All strategic planning starts off with setting a course for where the project wants to go. The two tools used here are _Mission_ and _Vision_. They are complementary terms, describing both the reason a project exists (mission) and the ideal end state for a project (vision).
A great way to start this exercise with the intent of driving consensus is by asking each key community member the following questions:
* What drove you to join and/or contribute to the project?
* How do you define success for your participation?
In a company, you'd usually ask your customers these questions. But in open source projects, the customers are the project participants — and their time investment is what makes the project a success.
Driving consensus means capturing the answers to these questions and looking for themes across them. At R Consortium, for example, I created a shared doc for the board to review each member's answers to the above questions, and followed up with a meeting to review for specific themes that came from those insights.
Building a mission flows really well from this exercise. The key thing is to keep the wording of your mission short and concise. Open Mainframe Project has done this really well. Here's their mission:
_Build community and adoption of Open Source on the mainframe by:_
* _Eliminating barriers to Open Source adoption on the mainframe_
* _Demonstrating value of the mainframe on technical and business levels_
* _Strengthening collaboration points and resources for the community to thrive_
At 40 words, it passes the key eye tests of a good mission statement; it's clear, concise, and demonstrates the useful value the project aims for.
The next stage is to reflect on the mission statement and ask yourself this question: What is the ideal outcome if the project accomplishes its mission? That can be a tough one to tackle. Open Mainframe Project put together its vision really well:
_Linux on the Mainframe as the standard for enterprise class systems and applications._
You could read that as a [BHAG][1], but it's really more of a vision, because it describes a future state that would be created by the mission being fully accomplished. It also hits the key pieces of an effective vision — it's only 13 words, inspirational, clear, memorable, and concise.
Mission and vision add clarity on the who, what, why, and how for your project. But, how do you set a course for getting there?
### Goals, Objectives, Actions, and Results
_“I don't focus on what I'm up against. I focus on my goals and I try to ignore the rest.”_ — Venus Williams
Looking at a mission and vision can get overwhelming, so breaking them down into smaller chunks can help the project determine how to get started. This also helps prioritize actions, either by importance or by opportunity. Most importantly, this step gives you guidance on what things to focus on for a period of time, and which to put off.
There are lots of methods of time-bound planning, but the method I think works best for projects is what I've dubbed the GOAR method. It's an acronym that stands for:
* Goals define what the project is striving for and likely would align with and support the mission. Examples might be “Grow a diverse contributor base” or “Become the leading project for X.” Goals are aspirational and set direction.
* Objectives show how you measure a goal's completion, and should be clear and measurable. You might also have multiple objectives to measure the completion of a goal. For example, the goal “Grow a diverse contributor base” might have objectives such as “Have X total contributors monthly” and “Have contributors representing Y different organizations.”
* Actions are what the project plans to do to complete an objective. This is where you get tactical on exactly what needs to be done. For example, the objective “Have contributors representing Y different organizations” would likely have actions such as reaching out to interested organizations using the project, having existing contributors mentor new contributors, and providing incentives for first-time contributors.
* Results come along the way, showing both positive and negative progress from the actions.
You can put these into a table like this:
| Goals | Objectives | Actions | Results |
|:--|:--|:--|:--|
| Grow a diverse contributor base | Have X total contributors monthly | Existing contributors mentor new contributors; provide incentives for first-time contributors | |
| | Have contributors representing Y different organizations | Reach out to interested organizations using the project | |
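To make the rubric concrete, here is a minimal sketch of how a project might keep its GOAR table as structured data under version control and append results at each review. The dataclass layout, field names, and printing helper are illustrative assumptions on my part, not part of the GOAR method itself; the example goal, objectives, and actions come from the table above.

```python
# Illustrative only: one possible encoding of a GOAR rubric as plain
# data, so a project can track it in its repository and append results
# over time. Requires Python 3.9+ for built-in generic annotations.
from dataclasses import dataclass, field

@dataclass
class Objective:
    description: str                # clear and measurable
    actions: list[str]              # tactical steps toward the objective
    results: list[str] = field(default_factory=list)  # filled in as work lands

@dataclass
class Goal:
    description: str                # aspirational; aligns with the mission
    objectives: list[Objective]

goals = [
    Goal(
        description="Grow a diverse contributor base",
        objectives=[
            Objective(
                description="Have X total contributors monthly",
                actions=[
                    "Existing contributors mentor new contributors",
                    "Provide incentives for first-time contributors",
                ],
            ),
            Objective(
                description="Have contributors representing Y different organizations",
                actions=["Reach out to interested organizations using the project"],
            ),
        ],
    ),
]

def print_rubric(goals):
    """Print the rubric in roughly the layout of the table above."""
    for goal in goals:
        print(f"Goal: {goal.description}")
        for objective in goal.objectives:
            print(f"  Objective: {objective.description}")
            for action in objective.actions:
                print(f"    Action: {action}")
            for result in objective.results:
                print(f"    Result: {result}")

print_rubric(goals)
```

Kept as data rather than prose, the rubric is easy to diff between reviews, and it shows at a glance which objectives have accumulated results and which have stalled.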
In large organizations, monthly or quarterly goals and objectives often make sense; however, on open source projects, these time frames are unrealistic. Six- or even 12-month tracking allows the project leadership to focus on driving efforts at a high level by nurturing the community along.
The end result is a rubric that provides clear vision on where the project is going. It also lets community members more easily find ways to contribute. For example, your project may include someone who knows a few organizations using the project — this person could help introduce those developers to the codebase and guide them through their first commit.
### What happens if the project doesn't hit the goals?
_“I have not failed. I've just found 10,000 ways that won't work.”_ — Thomas A. Edison
Figuring out what is within the capability of an organization — whether Fortune 500 or a small open source project — is hard. And, sometimes the expectations or market conditions change along the way. Does that make the strategy planning process a failure? Absolutely not!
Instead, you can use this experience as a way to better understand your project's velocity, its impact, and its community, and perhaps as a way to prioritize what is important and what's not.
--------------------------------------------------------------------------------
via: https://www.linuxfoundation.org/blog/set-open-source-strategy/
作者:[ John Mertic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxfoundation.org/author/jmertic/
[1]:https://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal
[2]:https://www.linuxfoundation.org/author/jmertic/
[3]:https://www.linuxfoundation.org/category/blog/
[4]:https://www.linuxfoundation.org/category/audience/c-level/
[5]:https://www.linuxfoundation.org/category/audience/developer-influencers/
[6]:https://www.linuxfoundation.org/category/audience/entrepreneurs/
[7]:https://www.linuxfoundation.org/category/campaigns/membership/how-to/
[8]:https://www.linuxfoundation.org/category/campaigns/events-campaigns/linux-foundation/
[9]:https://www.linuxfoundation.org/category/audience/open-source-developers/
[10]:https://www.linuxfoundation.org/category/audience/open-source-professionals/
[11]:https://www.linuxfoundation.org/category/audience/open-source-users/
[12]:https://www.linuxfoundation.org/category/blog/thought-leadership/

View File

@ -1,94 +0,0 @@
Why is collaboration so difficult?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_block_collaboration.png?itok=pKbXpr1e)
Many contemporary definitions of "collaboration" define it simply as "working together"--and, in part, it is working together. But too often, we tend to use the term "collaboration" interchangeably with cognate terms like "cooperation" and "coordination." These terms also refer to some manner of "working together," yet there are subtle but important differences between them all.
How does collaboration differ from coordination or cooperation? What is so important about collaboration specifically? Does it have or do something that coordination and cooperation don't? The short answer is a resounding "yes!"
[This unit explores collaboration][1], a problematic term because it has become a simple buzzword for "working together." By the time you've studied the cases and practiced the exercises contained in this section, you will understand that it's so much more than that.
### Not like the others
"Coordination" can be defined as the ordering of a variety of people acting in an effective, unified manner toward an end goal or state
In traditional organizations and businesses, people contributed according to their role definitions, such as in manufacturing, where each employee was responsible for adding specific components to the widget on an assembly line until the widget was complete. In contexts like these, employees weren't expected to contribute beyond their pre-defined roles (they were probably discouraged from doing so), and they didn't necessarily have a voice in the work or in what was being created. Often, a manager oversaw the unification of effort (hence the role "project coordinator"). Coordination is meant to connote a sense of harmony and unity, as if elements are meant to go together, resulting in efficiency among the ordering of the elements.
One common assumption is that coordinated efforts are aimed at the same, single goal. So some end result is "successful" when people and parts work together seamlessly; when one of the parts breaks down and fails, then the whole goal fails. Many traditional businesses (for instance, those with command-and-control hierarchies) manage work through coordination.
Cooperation is another term whose surface meaning is "working together." Rather than the sense of compliance that is part of "coordination," it carries a sense of agreement and helpfulness on the path toward completing a shared activity or goal.
"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating.
People tend to use the term "cooperation" when joining two semi-related entities where one or more entity could decide not to cooperate. The people and pieces that are part of a cooperative effort make the shared activity easier to perform or the shared goal easier to reach. "Cooperation" implies a shared goal or activity we agree to pursue jointly. One example is how police and witnesses cooperate to solve crimes.
"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating.
Sometimes collaboration involves two or more groups that do not normally work together; they are disparate groups or not usually connected. For instance, a traitor collaborates with the enemy, or rival businesses collaborate with each other. The subtlety of collaboration is that the two groups may have oppositional initial goals but work together to create a shared goal. Collaboration can be more contentious than coordination or cooperation, but like cooperation, any one of the entities could choose not to collaborate. Despite the contention and conflict, however, there is discourse--whether in the form of multi-way discussion or one-way feedback--because without discourse, there is no way for people to express a point of dissent that is ripe for negotiation.
The success of any collaboration rests on how well the collaborators negotiate their needs to create the shared objective, and then how well they cooperate and coordinate their resources to execute a plan to reach their goals.
### For example
One way to think about these things is through a real-life example--like the writing of [this book][1].
The editor, [Bryan][2], coordinates the authors' work through the call for proposals, setting dates and deadlines, collecting the writing, and meeting editing dates and deadlines for feedback about our work. He coordinates the authors, the writing, the communications. In this example, I'm not coordinating anything except myself (still a challenge most days!).
I cooperate with Bryan's dates and deadlines, and with the ways he has decided to coordinate the work. I propose the introduction on GitHub; I wait for approval. I comply with instructions, write some stuff, and send it to him by the deadlines. He cooperates by accepting a variety of document formats. I get his edits, incorporate them, send them back to him, and so forth. If I don't cooperate (or something comes up and I can't cooperate), then maybe someone else writes this introduction instead.
Bryan and I collaborate when either one of us challenges something, including pieces of the work or process that aren't clear, things that we thought we agreed to, or things on which we have differing opinions. These intersections are ripe for negotiation and therefore indicative of collaboration. They are the opening for us to negotiate some creative work.
Once the collaboration is negotiated and settled, writing and editing the book returns to cooperation/coordination; that is why collaboration relies on the other two terms of joint work.
One of the most interesting parts of this example (and of work and shared activity in general) is the moment-by-moment pivot from any of these terms to the other. The writing of this book is not completely collaborative, coordinated, or cooperative. It's a messy mix of all three.
### Why is collaboration important?
Collaboration is an important facet of contemporary organizations--specifically those oriented toward knowledge work--because it allows for productive disagreement between actors. That kind of disagreement then helps increase the level of engagement and provide meaning to the group's work.
In his book, The Age of Discontinuity: Guidelines to our Changing Society, [Peter Drucker discusses][3] the "knowledge worker" and the pivot from work based on experience (e.g. apprenticeships) to work based on knowledge and the application of knowledge. This change in work and workers, he writes:
> ...will make the management of knowledge workers increasingly crucial to the performance and achievement of the knowledge society. We will have to learn to manage the knowledge worker both for productivity and for satisfaction, both for achievement and for status. We will have to learn to give the knowledge worker a job big enough to challenge him, and to permit performance as a "professional."
In other words, knowledge workers aren't satisfied with being subordinate--told what to do by managers, as if there is one right way to do a task. And, unlike past workers, they expect more from their work lives, including some level of emotional fulfillment or meaning-making from their work. The knowledge worker, according to Drucker, is educated toward continual learning, "paid for applying his knowledge, exercising his judgment, and taking responsible leadership." So it then follows that knowledge workers expect from work the chance to apply and share their knowledge, develop themselves professionally, and continuously augment their knowledge.
Interesting to note is the fact that Peter Drucker wrote about those concepts in 1969, nearly 50 years ago--virtually predicting the societal and organizational changes that would reveal themselves, in part, through the development of knowledge sharing tools such as forums, bulletin boards, online communities, and cloud knowledge sharing like DropBox and GoogleDrive as well as the creation of social media tools such as MySpace, Facebook, Twitter, YouTube and countless others. All of these have some basis in the idea that knowledge is something to liberate and share.
In this light, one might view the open organization as one successful manifestation of a system of management for knowledge workers. In other words, open organizations are a way to manage knowledge workers by meeting the needs of the organization and knowledge workers (whether employees, customers, or the public) simultaneously. The foundational values this book explores are the scaffolding for the management of knowledge, and they apply to ways we can:
* make sure there's a lot of varied knowledge around (inclusivity)
* help people come together and participate (community)
* circulate information, knowledge, and decision making (transparency)
* innovate and not become entrenched in old ways of thinking and being (adaptability)
* develop a shared goal and work together to use knowledge (collaboration)
Collaboration is an important process because of the participatory effect it has on knowledge work and how it aids negotiations between people and groups. As we've discovered, collaboration is more than working together with some degree of compliance; in fact, it describes a type of working together that overcomes compliance because people can disagree, question, and express their needs in a negotiation and in collaboration. And, collaboration is more than "working toward a shared goal"; collaboration is a process which defines the shared goals via negotiation and, when successful, leads to cooperation and coordination to focus activity on the negotiated outcome.
Collaboration works best when the other four open organization values are present. For instance, when people are transparent, there is no guessing about what is needed, why, by whom, or when. Also, because collaboration involves negotiation, it also needs diversity (a product of inclusivity); after all, if we aren't negotiating among differing views, needs, or goals, then what are we negotiating? During a negotiation, the parties are often asked to give something up so that all may gain, so we have to be adaptable and flexible to the different outcomes that negotiation can provide. Lastly, collaboration is often an ongoing process rather than one which is quickly done and over, so it's best to enter collaboration as if you are part of the same community, desiring everyone to benefit from the negotiation. In this way, acts of authentic and purposeful collaboration directly necessitate the emergence of the other four values--transparency, inclusivity, adaptability, and community--as they assemble part of the organization's collective purpose spontaneously.
### Collaboration in open organizations
Traditional organizations advance an agreed-upon set of goals that people are welcome to support or not. In these organizations, there is some amount of discourse and negotiation, but often a higher-ranking or more powerful member of the organization intervenes to make a decision, which the membership must accept (and sometimes ignores). In open organizations, however, the focus is for members to perform their activity and to work out their differences; only if necessary would someone get involved (and even then would try to do it in the most minimal way that supports the shared values of community, transparency, adaptability, collaboration, and inclusivity). This makes the collaborative processes in open organizations "messier" (or "chaotic," to use Jim Whitehurst's term) but more participatory and, hopefully, innovative.
This article is part of the [Open Organization Workbook project][1].
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/17/11/what-is-collaboration
作者:[Heidi Hess Von Ludewig][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/heidi-hess-von-ludewig
[1]:https://opensource.com/open-organization/17/8/workbook-project-announcement
[2]:http://opensource.com/users/bbehrens
[3]:https://www.elsevier.com/books/the-age-of-discontinuity/drucker/978-0-434-90395-5

Some files were not shown because too many files have changed in this diff