分布式跟踪系统的四大功能模块如何协同工作
======

> 了解分布式跟踪中的主要体系结构决策,以及各部分如何组合在一起。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/touch-tracing.jpg?itok=rOmsY-nU)

早在十年前,认真研究过分布式跟踪的基本上只有学者和一小部分大型互联网公司中的人。而如今,对于任何采用微服务的组织来说,它已成为一项基本要求。理由很充分:微服务通常会以让人意想不到的方式发生故障,而分布式跟踪则是描述和诊断这些故障的最好方法。

也就是说,一旦你准备将分布式跟踪集成到你自己的应用程序中,你将很快意识到对于不同的人来说“<ruby>分布式跟踪<rt>Distributed Tracing</rt></ruby>”一词意味着不同的事物。此外,跟踪生态系统里挤满了内容相似的重叠项目。本文介绍了分布式跟踪系统中四个(可能)独立的功能模块,并描述了它们将如何协同工作。
### 分布式跟踪:一种思维模型

大多数用于跟踪的思维模型来源于 [Google 的 Dapper 论文][1]。[OpenTracing][2] 使用相似的术语,因此,我们从该项目借用了以下术语:

![Tracing][3]

* <ruby>跟踪<rt>Trace</rt></ruby>:对事务在分布式系统中运行过程的描述。
* <ruby>跨度<rt>Span</rt></ruby>:一种命名的定时操作,表示工作流的一部分。跨度可接受键值对标签,以及附加到特定跨度实例的细粒度的、带有时间戳的结构化日志。
* <ruby>跨度上下文<rt>Span context</rt></ruby>:在分布式事务通过网络或消息总线从一个服务传递到另一个服务时,携带其跟踪信息。跨度上下文包含跟踪标识符、跨度标识符,以及跟踪系统需要传播到下游服务的任何其他数据。

如果你想要深入研究这种思维模型的细节,请仔细参照 [OpenTracing 技术规范][4]。
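为了把上面这几个术语串起来,下面用一小段纯 Python 玩具代码勾勒这种思维模型(仅为示意,并非 OpenTracing 的真实 API;其中的类名和字段均为假设):

```python
import time
import uuid

class SpanContext:
    """跨度上下文:随事务跨服务传播的跟踪标识信息(玩具实现)。"""
    def __init__(self, trace_id=None, span_id=None):
        self.trace_id = trace_id or uuid.uuid4().hex      # 整条跟踪共享
        self.span_id = span_id or uuid.uuid4().hex[:16]   # 每个跨度唯一

class Span:
    """跨度:一次命名的定时操作,可附加键值对标签。"""
    def __init__(self, name, parent_ctx=None):
        parent_trace = parent_ctx.trace_id if parent_ctx else None
        self.context = SpanContext(trace_id=parent_trace)
        self.name = name
        self.tags = {}
        self.start = time.time()

    def set_tag(self, key, value):
        self.tags[key] = value

    def finish(self):
        self.duration = time.time() - self.start

# 模拟一个请求经过两个服务:子跨度继承父跨度的 trace_id
root = Span("http.request")
root.set_tag("http.method", "GET")
child = Span("db.query", parent_ctx=root.context)
child.finish()
root.finish()

assert child.context.trace_id == root.context.trace_id  # 属于同一条跟踪
assert child.context.span_id != root.context.span_id    # 但是不同的跨度
```

可以看到,“跟踪”并不是一个单独的对象,而是由共享同一个跟踪标识符的一组跨度构成的。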
### 四大功能模块

从应用层分布式跟踪系统的观点来看,现代软件系统架构如下图所示:

![Tracing][5]

现代软件系统的组件可分为三类:

* **应用程序和业务逻辑**:你的代码。
* **广泛共享库**:他人的代码。
* **广泛共享服务**:他人的基础架构。

这三类组件有着不同的需求,驱动着监控应用程序的分布式跟踪系统的设计。最终的设计分解为四个重要的部分:

* <ruby>跟踪检测 API<rt>A tracing instrumentation API</rt></ruby>:修饰应用程序代码
* <ruby>线路协议<rt>Wire protocol</rt></ruby>:在 RPC 请求中与应用程序数据一同发送的规定
* <ruby>数据协议<rt>Data protocol</rt></ruby>:将异步信息(带外)发送到你的分析系统的规定
* <ruby>分析系统<rt>Analysis system</rt></ruby>:用于处理跟踪数据的数据库和交互式用户界面

为了更深入地解释这个概念,我们将深入研究驱动该设计的细节。如果你只需要我的一些建议,请跳转至下方的“四个方面的解决方案”。

### 需求、细节和解释

应用程序代码、共享库以及共享式服务在操作上有显著的差别,这种差别严重影响了对它们进行检测的需求。
#### 检测应用程序代码和业务逻辑

在任何特定的微服务中,由微服务开发者编写的大部分代码是应用程序或者业务逻辑,它规定了特定领域的操作。通常,它包含任何特殊、独一无二的逻辑判断,这些逻辑判断首先证明了创建这个新微服务的合理性。基本上按照定义,**该代码通常不会在多个服务中共享或者以其他方式出现。**

也就是说,你仍需了解它,这也意味着需要以某种方式对它进行检测。一些监控和跟踪分析系统使用<ruby>黑盒代理<rt>black-box agents</rt></ruby>自动检测代码,另一些系统更倾向于使用显式的白盒检测工具。对于后者,抽象的跟踪 API 为微服务的应用程序代码提供了许多实用的优势:

* 抽象 API 允许你在不重写检测代码的条件下换用新的监控工具。你可能想要变更云服务提供商、供应商和监控技术,而一大堆不可移植的检测代码将会为该过程增加可观的开销和麻烦。
* 事实证明,除了生产监控之外,该工具还有其他有趣的用途。现有的项目使用相同的跟踪工具来驱动测试工具、分布式调试器、“混沌工程”故障注入器和其他元应用程序。
* 但更重要的是,若将应用程序组件提取到共享库中要怎么办呢?由此可以得到下面的结论:

#### 检测共享库

在大多数应用程序中出现的实用程序代码(处理网络请求、数据库调用、磁盘写操作、线程、并发管理等)通常情况下是通用的,而非特别应用于某个特定应用程序。这些代码会被打包成库和框架,而后就可以被装载到许多的微服务上,并且被部署到多种不同的环境中。

其真正的不同是:对于共享代码,其他人才是使用者。大多数用户有不同的依赖关系和操作风格。如果尝试去检测该共享代码,你将会注意到几个常见的问题:

* 你需要一个 API 来编写检测代码。然而,你的库并不知道你正在使用哪个分析系统。会有多种选择,并且运行在同一应用中的所有库不能做出互不兼容的选择。
* 由于这些包封装了所有网络处理代码,因此从请求报头注入和提取跨度上下文的任务往往落在 RPC 库上。然而,共享库必须了解每个应用程序正在使用哪种跟踪协议。
* 最后,你不想强制用户使用相互冲突的依赖项。大多数用户有不同的依赖关系和操作风格。即使他们使用 gRPC,绑定的 gRPC 版本是否相同?因此,你的库附带的任何用于跟踪的监控 API 必须是零依赖的。

**因此,一个(a)没有依赖关系、(b)与线路协议无关、(c)可以配合流行的供应商和分析系统工作的抽象 API,应该是检测共享库代码的要求。**
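下面这段纯 Python 代码勾勒了这种“零依赖抽象 API”的思路(仅为示意,并非任何真实库的接口;`NoopTracer`、`RecordingTracer` 等名称均为假设):库代码只面向抽象接口编程,默认绑定一个无操作实现,应用启动时再换上具体监控系统的实现。

```python
class AbstractTracer:
    """抽象跟踪接口:共享库只依赖它,不依赖任何具体监控系统。"""
    def start_span(self, name):
        raise NotImplementedError

class NoopSpan:
    """无操作跨度:所有方法都是空操作。"""
    def set_tag(self, key, value):
        pass
    def finish(self):
        pass

class NoopTracer(AbstractTracer):
    """无操作实现:应用未接入监控系统时,检测代码几乎没有开销。"""
    def start_span(self, name):
        return NoopSpan()

class RecordingTracer(AbstractTracer):
    """模拟某个“厂商”的实现:这里只是把完成的跨度名记到内存里作演示。"""
    def __init__(self):
        self.finished = []
    def start_span(self, name):
        tracer = self
        class _Span(NoopSpan):
            def finish(self):
                tracer.finished.append(name)
        return _Span()

# 共享库中的检测代码:只面向抽象接口,对具体监控系统一无所知
def do_rpc(tracer):
    span = tracer.start_span("rpc.call")
    span.set_tag("peer.service", "backend")
    # ……这里执行真正的 RPC 工作……
    span.finish()

do_rpc(NoopTracer())     # 应用没接监控:静默、无副作用
vendor = RecordingTracer()
do_rpc(vendor)           # 应用接上了某个监控系统
assert vendor.finished == ["rpc.call"]
```

换掉监控系统时,只需要在应用启动处换一个 `tracer` 实现,库里的检测代码一行都不用改。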
#### 检测共享式服务

最后,有时整个服务(或微服务集合体)的通用性足以使许多独立的应用程序使用它们。这种共享式服务通常由第三方托管和管理,例如缓存服务器、消息队列以及数据库。

从应用程序开发者的角度来看,重要的是理解共享式服务本质上是个“黑盒子”:你不可能将自己的检测代码注入到共享式服务中。恰恰相反,托管服务通常会运行它自己的监控方案。

### 四个方面的解决方案

因此:抽象的跟踪 API 将会帮助库发出数据并且注入/提取跨度上下文;标准的线路协议将会帮助黑盒服务相互连接;而标准的数据格式将会帮助独立的分析系统合并其中的数据。让我们来看一下部分有希望解决这些问题的方案。

#### 跟踪 API:OpenTracing 项目

如你所见,我们需要一个跟踪 API 来检测应用程序代码。为了将这种检测扩展到大多数进行跨度上下文注入和提取的共享库中,则必须以某些关键方式对 API 进行抽象。

[OpenTracing][2] 项目主要针对解决库开发者的问题。OpenTracing 是一个与供应商无关的跟踪 API,它没有依赖关系,并且迅速得到了许多监控系统的支持。这意味着,如果库附带了内置的本地 OpenTracing 检测,当应用程序启动时接入了监控系统,跟踪就会自动开始。

就个人而言,作为一个已经编写、发布和运营开源软件十多年的人,在 OpenTracing 项目上工作并最终解决这个可观察性的难题令我十分满意。

除了 API 之外,OpenTracing 项目还维护了一个不断增长的检测工具列表,其中一些可以在[这里][6]找到。如果你想参与进来,无论是通过提供一个检测插件、对你自己的开源库进行本地检测,或者仅仅只想问个问题,都可以通过 [Gitter][7] 向我们打招呼。

#### 线路协议:trace-context HTTP 报头

为了让监控系统能够互操作,并减轻从一个监控系统切换为另外一个时的迁移问题,需要标准的线路协议来传播跨度上下文。

[W3C 分布式跟踪上下文社区小组][8]正在努力制定此标准。目前的重点是制定一系列标准的 HTTP 报头。该规范的最新草案可以在[此处][9]找到。如果你对此小组有任何的疑问,[邮件列表][10]和 [Gitter 聊天室][11]是很好的解惑地点。
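作为示意,下面的纯 Python 片段按该草案(即后来的 W3C Trace Context 规范)中 `traceparent` 报头的大致格式——`版本-跟踪标识-父跨度标识-标志`——演示跨度上下文的注入与提取(仅为演示,并非该规范的完整实现):

```python
import re

def inject(trace_id, span_id, headers):
    """把跨度上下文注入到即将发出的 HTTP 请求报头中。"""
    headers["traceparent"] = f"00-{trace_id}-{span_id}-01"

def extract(headers):
    """从收到的请求报头中提取跨度上下文;格式不符时返回 None。"""
    value = headers.get("traceparent", "")
    m = re.fullmatch(r"00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", value)
    return (m.group(1), m.group(2)) if m else None

# 服务 A 发出请求……
headers = {}
inject("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7", headers)
# ……服务 B 收到请求并还原上下文,继续同一条跟踪
ctx = extract(headers)
assert ctx == ("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7")
```

只要双方都遵循同一种报头格式,两个使用不同监控系统的服务也能把跨度接到同一条跟踪上——这正是标准线路协议的意义。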
(LCTT 译注:本文原文发表于 2018 年 5 月,现在社区可能已有不同的进展)

#### 数据协议(还未出现!)

对于黑盒服务,在无法安装跟踪程序或无法与程序进行交互的情况下,需要使用数据协议从系统中导出数据。

目前这种数据格式和协议的开发工作尚处在初级阶段,并且大多在 W3C 分布式跟踪上下文社区小组的背景下进行。需要特别关注的是在标准数据模式中定义更高级别的概念,例如 RPC 调用、数据库语句等。这将允许跟踪系统对可用的数据类型做出假设。OpenTracing 项目也通过定义一套[标准标签集][12]来解决这一问题。该计划的目的是让这两项工作的成果相互配合。

注意,当前有一个中间地带。对于由应用程序开发者操作、但不想编译或以其他方式进行代码修改的“网络设备”,动态链接可以帮助解决这个问题。主要的例子就是服务网格和代理,例如 Envoy 或者 NGINX。针对这种情况,可将兼容 OpenTracing 的跟踪器编译为共享对象,然后在运行时动态链接到可执行文件中。目前 [C++ OpenTracing API][13] 提供了该选项,而 Java 的 OpenTracing [跟踪器解析器][14]也在开发中。

这些解决方案适用于支持动态链接、并由应用程序开发者部署的服务。但从长远来看,标准的数据协议可以更广泛地解决该问题。

#### 分析系统:从跟踪数据中提取见解的服务

最后不得不提的是,现在有众多的跟踪和监控解决方案。可以在[此处][15]找到已知与 OpenTracing 兼容的监控系统列表,但除此之外仍有更多的选择。我鼓励你自行研究各种解决方案,同时希望你在比较它们时发现本文提供的框架能派上用场。除了根据监控系统的操作特性对其进行评价外(更不用说你是否喜欢其 UI 和功能),确保你考虑到了上述三个重要方面、它们对你的相对重要性,以及你感兴趣的跟踪系统如何为它们提供解决方案。

### 结论

最后,每个部分的重要性在很大程度上取决于你是谁以及正在建立什么样的系统。举个例子,开源库的作者对 OpenTracing API 非常感兴趣,而服务开发者对 trace-context 规范更感兴趣。当有人说一部分比另一部分重要时,他们的意思通常是“这部分对我来说比另一部分重要”。

然而,事实是:分布式跟踪已经成为监控现代系统所必不可少的事物。在为这些系统设计构建模块时,“尽可能解耦”的老方法仍然适用。在构建像分布式监控系统这样的跨系统的系统时,干净地解耦组件是保持灵活性和前向兼容性的最佳方式。

感谢你的阅读!现在,当你准备好在你自己的应用程序中实现跟踪时,你已有一份指南来了解人们正在谈论的是哪个部分,以及它们之间如何相互协作。
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/distributed-tracing

作者:[Ted Young][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[chenmu-kk](https://github.com/chenmu-kk)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/tedsuo
[1]:https://research.google.com/pubs/pub36356.html
[2]:http://opentracing.io/
[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing1_0.png?itok=dvDTX0JJ (Tracing)
[4]:https://github.com/opentracing/specification/blob/master/specification.md
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing2_0.png?itok=yokjNLZk (Tracing)
[6]:https://github.com/opentracing-contrib/
[7]:https://gitter.im/opentracing/public
[8]:https://www.w3.org/community/trace-context/
[9]:https://w3c.github.io/distributed-tracing/report-trace-context.html
[10]:http://lists.w3.org/Archives/Public/public-trace-context/
[11]:https://gitter.im/TraceContext/Lobby
[12]:https://github.com/opentracing/specification/blob/master/semantic_conventions.md
[13]:https://github.com/opentracing/opentracing-cpp
[14]:https://github.com/opentracing-contrib/java-tracerresolver
[15]:http://opentracing.io/documentation/pages/supported-tracers
[16]:https://events.linuxfoundation.org/kubecon-eu-2018/
[17]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11795-1.html)
[#]: subject: (Run a server with Git)
[#]: via: (https://opensource.com/article/19/4/server-administration-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/seth)
使用 Git 来管理 Git 服务器
======

> 借助 Gitolite,你可以使用 Git 来管理 Git 服务器。在我们的系列文章中了解这些鲜为人知的 Git 用途。

![](https://img.linux.net.cn/data/attachment/album/202001/18/132045yrr1pb9n497tfbiy.png)

正如我在系列文章中演示的那样,[Git][2] 除了跟踪源代码外,还可以做很多事情。信不信由你,Git 甚至可以管理你的 Git 服务器,因此你可以或多或少地使用 Git 本身来运行 Git 服务器。

当然,这涉及除日常使用 Git 之外的许多组件,其中最重要的是 [Gitolite][3],该后端应用程序可以管理你使用 Git 的每个细微的配置。Gitolite 的优点在于,由于它使用 Git 作为其前端接口,因此很容易将 Git 服务器管理集成到其他基于 Git 的工作流中。Gitolite 可以精确控制谁可以访问你服务器上的特定存储库以及他们具有哪些权限。你可以使用常规的 Linux 系统工具自行管理此类事务,但是如果有好几个用户和不止一两个仓库,则需要大量的工作。

Gitolite 的开发人员做了艰苦的工作,使你可以轻松地为许多用户提供对你的 Git 服务器的访问权,而又不让他们访问你的整个环境 —— 而这一切,你都可以使用 Git 来完成。

Gitolite 并**不是**图形化的管理员和用户面板。优秀的 [Gitea][4] 项目可提供这种体验,但是本文重点介绍 Gitolite 的简单优雅和令人舒适的熟悉感。

### 安装 Gitolite

假设你的 Git 服务器运行在 Linux 上,则可以使用包管理器安装 Gitolite(在 CentOS 和 RHEL 上为 `yum`,在 Debian 和 Ubuntu 上为 `apt`,在 OpenSUSE 上为 `zypper` 等)。例如,在 RHEL 上:
```
$ sudo yum install gitolite3
```

许多发行版的存储库提供的仍是旧版本的 Gitolite,但最新版本为版本 3。

你必须具有对服务器的无密码 SSH 访问权限。如果愿意,你可以使用密码登录服务器,但是 Gitolite 依赖于 SSH 密钥,因此必须配置使用密钥登录的选项。如果你不知道如何配置服务器以进行无密码 SSH 访问,请首先学习如何进行操作(Steve Ovens 的 Ansible 文章的[设置 SSH 密钥身份验证][5]部分对此进行了很好的说明)。这是加强服务器管理的安全性以及运行 Gitolite 的重要组成部分。

### 配置 Git 用户

如果没有 Gitolite,那么当某人请求访问你在服务器上托管的 Git 存储库时,你必须向该人提供用户帐户。Git 提供了一个特殊的 shell,即 `git-shell`,这是一个只能执行 Git 任务的特殊受限 shell。它可以让你的用户只能通过一个非常受限的 shell 环境来访问你的服务器。

这个解决方案是一个办法,但通常意味着用户可以访问服务器上的所有存储库,除非你具有用于组权限的良好模式,并在创建新存储库时严格遵循这些权限。这种方式还需要在系统级别进行大量手动配置,这通常是只有特定级别的系统管理员才能做的工作,而不一定是通常负责 Git 存储库的人员。

Gitolite 通过为需要访问任何存储库的每个人指定一个用户名来完全回避此问题。默认情况下,该用户名是 `git`,并且由于 Gitolite 的文档中假定使用的是它,因此在学习该工具时保留它是一个很好的默认设置。对于曾经使用过 GitLab 或 GitHub 或任何其他 Git 托管服务的人来说,这也是一个众所周知的约定。

Gitolite 将此用户称为**托管用户**。在服务器上创建一个帐户以充当托管用户(我习惯使用 `git`,因为这是惯例):

```
$ sudo adduser --create-home git
```

为了控制该 `git` 用户帐户,该帐户必须具有属于你的有效 SSH 公钥。你应该已经进行了设置,因此复制你的公钥(**而不是你的私钥**)添加到 `git` 用户的家目录中:

```
$ sudo cp ~/.ssh/id_ed25519.pub /home/git/
$ sudo su - git
$ gitolite setup --pubkey id_ed25519.pub
```

安装脚本运行后,`git` 用户的家目录中将有一个 `repositories` 目录,该目录(目前)包含存储库 `gitolite-admin.git` 和 `testing.git`。这就是该服务器所需的全部设置,现在请登出 `git` 用户。
### 使用 Gitolite

管理 Gitolite 就是编辑 Git 存储库中的文本文件,尤其是 `gitolite-admin.git` 中的。你不会通过 SSH 进入服务器来进行 Git 管理,并且 Gitolite 也建议你不要这样尝试。存储在 Gitolite 服务器上的你和你的用户的存储库是**裸**存储库,因此最好不要直接使用它们。

```
$ git clone git@example.com:gitolite-admin.git gitolite-admin.git
conf
keydir
```

该存储库中的 `conf` 目录包含一个名为 `gitolite.conf` 的文件。在文本编辑器中打开它,或使用 `cat` 查看其内容:

```
repo gitolite-admin
    RW+ = id_ed25519

repo testing
    RW+ = @all
```

你可能对该配置文件的功能有所了解:`gitolite-admin` 代表此存储库,并且 `id_ed25519` 密钥的所有者具有读取、写入和管理 Git 的权限。换句话说,不是将用户映射到普通的本地 Unix 用户(因为所有用户都使用 `git` 这个托管用户的身份),而是将用户映射到 `keydir` 目录中列出的 SSH 密钥。

`testing.git` 存储库使用特殊的组符号为访问服务器的每个人提供了全部权限。

#### 添加用户

如果要向 Git 服务器添加一个名为 `alice` 的用户,Alice 必须向你发送她的 SSH 公钥。Gitolite 使用文件名的 `.pub` 扩展名左边的任何内容作为该 Git 用户的标识符。不要使用默认的密钥名称,而是给密钥指定一个指示密钥所有者的名称。如果用户有多个密钥(例如,一个用于笔记本电脑,一个用于台式机),则可以使用子目录来避免文件名冲突。例如,Alice 在笔记本电脑上使用的密钥可能是默认的 `id_rsa.pub`,因此将其重命名为 `alice.pub` 或类似名称(或让用户根据其计算机上的本地用户帐户来命名密钥),然后将其放入 `gitolite-admin.git/keydir/work/laptop/` 目录中。如果她从她的桌面计算机发送了另一个也命名为 `alice.pub` 的密钥(与上一个同名),则将其添加到 `keydir/home/desktop/` 之类的子目录中,依此类推。Gitolite 递归地在 `keydir` 中搜索与存储库“用户”相匹配的 `.pub` 文件,并将所有匹配项视为相同的身份。
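按照上面的命名方式(目录名只是示例),`keydir` 的布局大致如下——两个 `alice.pub` 文件会被视为同一个用户 `alice` 的两把钥匙:

```
keydir/
├── work/
│   └── laptop/
│       └── alice.pub    # Alice 笔记本电脑上的密钥
└── home/
    └── desktop/
        └── alice.pub    # Alice 台式机上的密钥
```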
当你将密钥添加到 `keydir` 目录时,必须将它们提交回服务器。这是一件很容易忘记的事情,这也是使用自动化 Git 应用程序(例如 [Sparkleshare][7])的一个真正理由:这样任何更改都会立即提交并推送到你的 Gitolite 管理库。在第一次忘记提交和推送、浪费了你和你的用户三个小时的故障排除时间之后,你会发现 Gitolite 是使用 Sparkleshare 的完美理由。

```
$ git add keydir
$ git push origin HEAD
```

#### 设置权限

与用户一样,目录权限和组也是从你可能习惯的常规 Unix 工具中抽象出来的(或可从在线信息查找)。在 `gitolite-admin.git/conf` 目录中的 `gitolite.conf` 文件中授予对项目的权限。权限分为四个级别:

* `R` 允许只读。在存储库上具有 `R` 权限的用户可以克隆它,仅此而已。
* `RW` 允许用户执行分支的快进推送、创建新分支和创建新标签。对于大多数用户来说,这个基本上就像是一个“普通”的 Git 存储库。
* `RW+` 允许可能具有破坏性的 Git 动作。用户可以执行常规的快进推送、回滚推送、变基以及删除分支和标签。你可能想要或不希望将其授予项目中的所有贡献者。
* `-` 明确拒绝访问存储库。这与未在存储库的配置中列出的用户相同。

例如,创建一个只有 Alice 可以访问的存储库:

```
repo widgets
    RW+ = alice
```

现在,Alice(也仅有 Alice 一个人)可以克隆该存储库:
```
[alice]$ git clone git@example.com:widgets.git
```

Gitolite 还支持“野生”(通配符)仓库,例如:

```
@managers = alice bob

repo foo/CREATOR/[a-z]..*
    C = @managers
    RW+ = CREATOR
    R = READERS
```

第一行定义了一组用户:该组称为 `@managers`,其中包含用户 `alice` 和 `bob`。接下来的几行设置了一个通配符,允许创建尚不存在的存储库,它们位于名为 `foo` 的目录下、以创建该存储库的用户名命名的子目录中。例如:

```
[alice]$ git clone git@example.com:foo/alice/cool-app.git
Initialized empty Git repository in /home/git/repositories/foo/alice/cool-app.gi
warning: You appear to have cloned an empty repository.
```

野生仓库的创建者可以使用一些机制来定义谁可以读取和写入其存储库,但是它们是有范围限定的。在大多数情况下,Gitolite 假定由一组特定的用户来管理项目权限。一种解决方案是使用 Git 挂钩来授予所有用户对 `gitolite-admin` 的访问权限,以要求管理者批准将更改合并到 master 分支中。
### 了解更多

Gitolite 具有比这篇介绍性文章所涵盖的更多功能,因此请尝试一下。其[文档][8]非常出色,一旦你通读了它,就可以自定义 Gitolite 服务器,以向用户提供你喜欢的任何级别的控制。Gitolite 是一种维护成本低的简单系统,你可以安装、设置它,然后基本上就可以将其忘却。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/server-administration-git

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11810-1.html)
[#]: subject: (Getting started with OpenSSL: Cryptography basics)
[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-1)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen)
OpenSSL 入门:密码学基础知识
======

> 想要入门密码学的基础知识,尤其是有关 OpenSSL 的入门知识吗?继续阅读。

![](https://img.linux.net.cn/data/attachment/album/202001/23/142249fpnhyqz9y2cz1exe.jpg)

本文是介绍 [OpenSSL][2] 密码学基础知识的两篇文章中的第一篇。OpenSSL 是在 Linux 和其他系统上流行的生产级库和工具包。(要安装 OpenSSL 的最新版本,请参阅[这里][3]。)OpenSSL 实用程序可在命令行使用,程序也可以调用 OpenSSL 库中的函数。本文的示例程序使用的是 C 语言,即 OpenSSL 库的源语言。

本系列的两篇文章涵盖了加密哈希、数字签名、加密和解密以及数字证书。你可以从[我的网站][4]的 ZIP 文件中找到这些代码和命令行示例。

让我们首先回顾一下 OpenSSL 名称中的 SSL。

### OpenSSL 简史

<ruby>[安全套接字层][5]<rt>Secure Socket Layer</rt></ruby>(SSL)是 Netscape 在 1995 年发布的一种加密协议。该协议层可以位于 HTTP 之上,从而为 HTTPS 提供了 S:<ruby>安全<rt>secure</rt></ruby>。SSL 协议提供了各种安全服务,其中包括两项在 HTTPS 中至关重要的服务:

* <ruby>对等身份验证<rt>Peer authentication</rt></ruby>(也称为相互质询):连接的每一方都对另一方的身份进行验证。如果 Alice 和 Bob 要通过 SSL 交换消息,则每个人首先要验证彼此的身份。
* <ruby>机密性<rt>Confidentiality</rt></ruby>:发送者在通过通道发送消息之前先对其进行加密,然后接收者解密每条接收到的消息。此过程可保护网络对话。即使窃听者 Eve 截获了从 Alice 发送到 Bob 的加密消息(即*中间人*攻击),Eve 也会发现她无法在计算上解密此消息。

反过来,这两项关键的 SSL 服务又与其他不太受关注的服务相关联。例如,SSL 支持消息完整性,从而确保接收到的消息与发送的消息相同。此功能是通过哈希函数实现的,哈希函数也随 OpenSSL 工具包一起提供。

SSL 有多个版本(例如 SSLv2 和 SSLv3),并且在 1999 年出现了一个基于 SSLv3 的类似协议:<ruby>传输层安全性<rt>Transport Layer Security</rt></ruby>(TLS)。TLSv1 和 SSLv3 相似,但不足以相互配合工作。不过,通常将 SSL/TLS 称为同一协议。例如,即使正在使用的是 TLS(而非 SSL),OpenSSL 函数的名称中也经常包含 SSL。此外,OpenSSL 命令行实用程序的调用都以 `openssl` 开头。

除了 man 手册页之外,OpenSSL 的文档是零零散散的,而鉴于 OpenSSL 工具包之大,这些手册页很难查找和使用。命令行和代码示例可以把主要的主题集中起来。让我们从一个熟悉的示例开始(使用 HTTPS 访问网站),然后使用该示例来挑出我们感兴趣的加密部分进行讲述。
### 一个 HTTPS 客户端

此处显示的 `client` 程序通过 HTTPS 连接到 Google:

```
/* compilation: gcc -o client client.c -lssl -lcrypto */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>      /* memset */
#include <openssl/bio.h> /* Basic Input/Output 流 */
#include <openssl/err.h> /* 错误处理 */
#include <openssl/ssl.h> /* 核心库 */
#define BuffSize 1024

void report_and_exit(const char* msg) {
  perror(msg);
  ERR_print_errors_fp(stderr);
  exit(-1);
}

void init_ssl() {
  SSL_load_error_strings();
  SSL_library_init();
}

void cleanup(SSL_CTX* ctx, BIO* bio) {
  SSL_CTX_free(ctx);
  BIO_free_all(bio);
}

void secure_connect(const char* hostname) {
  char name[BuffSize];
  char request[BuffSize];
  char response[BuffSize];

  const SSL_METHOD* method = TLSv1_2_client_method();
  if (NULL == method) report_and_exit("TLSv1_2_client_method...");

  SSL_CTX* ctx = SSL_CTX_new(method);
  if (NULL == ctx) report_and_exit("SSL_CTX_new...");

  BIO* bio = BIO_new_ssl_connect(ctx);
  if (NULL == bio) report_and_exit("BIO_new_ssl_connect...");

  SSL* ssl = NULL;

  /* 链接 bio 通道、SSL 会话和服务器端点 */

  sprintf(name, "%s:%s", hostname, "https");
  BIO_get_ssl(bio, &ssl);                 /* 会话 */
  SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* 鲁棒性 */
  BIO_set_conn_hostname(bio, name);       /* 准备连接 */

  /* 尝试连接 */
  if (BIO_do_connect(bio) <= 0) {
    cleanup(ctx, bio);
    report_and_exit("BIO_do_connect...");
  }

  /* 验证信任库,检查证书 */
  if (!SSL_CTX_load_verify_locations(ctx,
                                     "/etc/ssl/certs/ca-certificates.crt", /* 信任库 */
                                     "/etc/ssl/certs/"))                   /* 其它信任库 */
    report_and_exit("SSL_CTX_load_verify_locations...");

  long verify_flag = SSL_get_verify_result(ssl);
  if (verify_flag != X509_V_OK)
    fprintf(stderr,
            "##### Certificate verification error (%i) but continuing...\n",
            (int) verify_flag);

  /* 获取主页作为示例数据 */
  sprintf(request,
          "GET / HTTP/1.1\x0D\x0AHost: %s\x0D\x0A\x43onnection: Close\x0D\x0A\x0D\x0A",
          hostname);
  BIO_puts(bio, request);

  /* 从服务器读取 HTTP 响应并打印到输出 */
  while (1) {
    memset(response, '\0', sizeof(response));
    int n = BIO_read(bio, response, BuffSize);
    if (n <= 0) break; /* 0 代表流结束,< 0 代表有错误 */
    puts(response);
  }

  cleanup(ctx, bio);
}

int main() {
  init_ssl();

  const char* hostname = "www.google.com:443";
  fprintf(stderr, "Trying an HTTPS connection to %s...\n", hostname);
  secure_connect(hostname);

  return 0;
}
```
可以从命令行编译和执行该程序(请注意 `-lssl` 和 `-lcrypto` 中的小写字母 `L`):

```
gcc -o client client.c -lssl -lcrypto
```

该程序尝试打开与网站 [www.google.com][13] 的安全连接。在与 Google Web 服务器的 TLS 握手过程中,`client` 程序会收到一个或多个数字证书,该程序会尝试对其进行验证(但在我的系统上验证失败了)。尽管如此,`client` 程序仍继续通过安全通道获取 Google 主页。该程序依赖于前面提到的安全工件,尽管在上述代码中只着重突出了数字证书,但其它工件仍在幕后发挥作用,稍后将对它们进行详细说明。

通常,打开 HTTP(非安全)通道的 C 或 C++ 客户端程序会使用诸如*文件描述符*或*网络套接字*之类的结构,它们是两个进程(例如,这个 `client` 程序和 Google Web 服务器)之间连接的端点。顺便说一下,文件描述符是一个非负整数值,用于在程序中标识该程序打开的任何类似文件的结构。这样的程序还会使用一种结构来指定有关 Web 服务器地址的详细信息。

这些相对较低级别的结构不会出现在客户端程序中,因为 OpenSSL 库会将套接字基础设施和地址规范等封装在更高层面的安全结构中。其结果是一个简单的 API。下面首先看一下 `client` 程序示例中的安全性详细信息。
* 该程序首先加载相关的 OpenSSL 库,我的函数 `init_ssl` 中对 OpenSSL 进行了两次调用:

```
SSL_load_error_strings();
SSL_library_init();
```

* 下一个初始化步骤尝试获取安全*上下文*,这是建立和维护通往 Web 服务器的安全通道所需的信息框架。如对 OpenSSL 库函数的调用所示,在示例中使用了 TLS 1.2:

```
const SSL_METHOD* method = TLSv1_2_client_method(); /* TLS 1.2 */
```

如果调用成功,则将 `method` 指针传递给库函数,该函数创建类型为 `SSL_CTX` 的上下文:

```
SSL_CTX* ctx = SSL_CTX_new(method);
```

`client` 程序会检查每个关键的库调用是否出错,如果其中一个调用失败,则程序终止。

* 现在,另外两个 OpenSSL 工件开始发挥作用:类型为 `SSL` 的安全会话,从头到尾管理安全连接;以及类型为 `BIO`(<ruby>基本输入/输出<rt>Basic Input/Output</rt></ruby>)的安全流,用于与 Web 服务器进行通信。BIO 流是通过以下调用生成的:

```
BIO* bio = BIO_new_ssl_connect(ctx);
```

请注意,至关重要的上下文正是该调用的参数。`BIO` 类型是 C 语言中 `FILE` 类型的 OpenSSL 封装器。此封装器可保护 `client` 程序与 Google Web 服务器之间的输入和输出流的安全。

* 有了 `SSL_CTX` 和 `BIO`,程序接着将它们组合进一个 SSL 会话中。三个库调用可以完成该工作:

```
BIO_get_ssl(bio, &ssl);                 /* 会话 */
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* 鲁棒性 */
BIO_set_conn_hostname(bio, name);       /* 准备连接 */
```

安全连接本身是通过以下调用建立的:

```
BIO_do_connect(bio);
```

如果最后一个调用不成功,则 `client` 程序终止;否则,该连接已准备就绪,可以支持 `client` 程序与 Google Web 服务器之间的机密对话。

在与 Web 服务器握手期间,`client` 程序会接收一个或多个数字证书,以认证服务器的身份。但是,`client` 程序不会发送自己的证书,这意味着这个身份验证是单向的。(Web 服务器通常配置为**不**需要客户端证书。)尽管对 Web 服务器证书的验证失败,但 `client` 程序仍通过连接到 Web 服务器的安全通道继续获取 Google 主页。
为什么验证 Google 证书的尝试会失败?典型的 OpenSSL 安装目录为 `/etc/ssl/certs`,其中包含 `ca-certificates.crt` 文件。该目录和文件包含着 OpenSSL 自带的数字证书,以此构成<ruby>信任库<rt>truststore</rt></ruby>。可以根据需要更新信任库,尤其是可以加入新信任的证书,并删除不再受信任的证书。

`client` 程序从 Google Web 服务器收到了三个证书,但是我的计算机上的 OpenSSL 信任库并不包含完全匹配的证书。如目前所写的那样,`client` 程序不会通过(例如)验证 Google 证书上的数字签名(一个为该证书作担保的签名)来解决此问题。如果该签名是受信任的,则包含该签名的证书也应受信任。尽管如此,`client` 程序仍继续获取页面,然后打印出 Google 的主页。下一节将更详细地介绍这些。

### 客户端程序中隐藏的安全性

让我们从客户端示例中可见的安全工件(数字证书)开始,然后考虑其他安全工件如何与之相关。数字证书的主要格式标准是 X509,生产级的证书由诸如 [Verisign][14] 的<ruby>证书颁发机构<rt>Certificate Authority</rt></ruby>(CA)颁发。

数字证书中包含各种信息(例如,激活日期和失效日期以及所有者的域名),也包括发行者的身份和*数字签名*(这是加密过的*加密哈希*值)。证书还具有未加密的哈希值,用作其标识*指纹*。

哈希值来自将任意数量的二进制位映射到固定长度的摘要。这些位代表什么(会计报告、小说或数字电影)无关紧要。例如,<ruby>消息摘要版本 5<rt>Message Digest version 5</rt></ruby>(MD5)哈希算法将任意长度的输入位映射到 128 位的哈希值,而 SHA1(<ruby>安全哈希算法版本 1<rt>Secure Hash Algorithm version 1</rt></ruby>)算法将输入位映射到 160 位的哈希值。不同的输入位会导致不同的(实际上在统计学上是唯一的)哈希值。下一篇文章将会进行更详细的介绍,并着重介绍是什么使哈希函数具有加密功能。
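可以用 Python 标准库的 `hashlib` 直观地验证这一点(只是辅助演示,并非本文的 C 示例代码):同一算法对任意长度的输入都产生固定长度的摘要,而输入的微小变化会产生完全不同的哈希值。

```python
import hashlib

msg1 = b"hello, world"
msg2 = b"hello, world!"  # 仅多了一个字符

# 摘要长度只取决于算法,与输入长度无关
assert len(hashlib.md5(msg1).digest()) * 8 == 128     # MD5:128 位
assert len(hashlib.sha1(msg1).digest()) * 8 == 160    # SHA1:160 位
assert len(hashlib.sha256(msg1).digest()) * 8 == 256  # SHA256:256 位

# 输入的细微差别导致完全不同的哈希值
print(hashlib.sha256(msg1).hexdigest())
print(hashlib.sha256(msg2).hexdigest())
assert hashlib.sha256(msg1).hexdigest() != hashlib.sha256(msg2).hexdigest()
```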
数字证书的类型有所不同(例如根证书、中间证书和最终实体证书),并形成了反映这些证书类型的层次结构。顾名思义,*根*证书位于层次结构的顶部,其下的证书继承了根证书所具有的信任。OpenSSL 库和大多数现代编程语言都具有 X509 数据类型以及处理此类证书的函数。来自 Google 的证书具有 X509 格式,`client` 程序会检查其验证结果是否为 `X509_V_OK`。

X509 证书基于<ruby>公钥基础设施<rt>public-key infrastructure</rt></ruby>(PKI),其中包括的算法(RSA 是占主导地位的算法)用于生成*密钥对*:公钥及其配对的私钥。公钥是一种身份:[Amazon][15] 的公钥标识它,而我的公钥标识我。私钥应由其所有者负责保密。

成对出现的密钥具有标准用途。可以使用公钥对消息进行加密,然后可以使用同一个密钥对中的私钥对消息进行解密。私钥也可以用于对文档或其他电子工件(例如程序或电子邮件)进行签名,然后可以使用该密钥对中的公钥来验证签名。以下两个示例补充了一些细节。

在第一个示例中,Alice 将她的公钥分发给全世界,包括 Bob。然后,Bob 用 Alice 的公钥加密邮件,并将加密后的邮件发送给 Alice。用 Alice 的公钥加密的邮件只能用她的私钥解密(前提是她妥善保管着自己的私钥),如下所示:

```
                  +------------------+  encrypted msg  +-------------------+
Bob's msg ------->|Alice's public key|---------------->|Alice's private key|---> Bob's msg
                  +------------------+                 +-------------------+
```

理论上可以在没有 Alice 的私钥的情况下解密消息,但在实际情况中,如果使用像 RSA 这样的加密密钥对系统,则在计算上做不到。
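下面用一个微型的教科书式 RSA 例子(纯 Python,使用刻意取小的素数,仅用于演示原理,毫无实际安全性)展示“公钥加密、私钥解密”的往返过程:

```python
# 教科书式 RSA 玩具演示:参数小到可以手算,切勿用于真实加密
p, q = 61, 53                 # 两个小素数(真实 RSA 使用上千位的素数)
n = p * q                     # 模数 n = 3233
phi = (p - 1) * (q - 1)       # φ(n) = 3120
e = 17                        # 公钥指数,与 φ(n) 互素
d = pow(e, -1, phi)           # 私钥指数:e 模 φ(n) 的乘法逆元(需要 Python 3.8+)

message = 65                  # 消息必须先编码为小于 n 的整数
ciphertext = pow(message, e, n)    # 用公钥 (e, n) 加密
plaintext = pow(ciphertext, d, n)  # 用私钥 (d, n) 解密

assert d == 2753
assert plaintext == message   # 解密还原出原始消息
```

公钥 `(e, n)` 可以公开分发;只有掌握 `d` 的一方才能还原消息,而从 `(e, n)` 推出 `d` 需要分解 `n`,对足够大的 `n` 来说这在计算上不可行。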
现在来看第二个示例:对文档进行签名以证明其真实性。签名算法使用密钥对中的私钥来处理要签名的文档的加密哈希:

```
                    +-------------------+
Hash of document--->|Alice's private key|--->Alice's digital signature of the document
                    +-------------------+
```

假设 Alice 以数字方式签署了发送给 Bob 的合同。然后,Bob 可以使用 Alice 密钥对中的公钥来验证签名:

```
                                             +------------------+
Alice's digital signature of the document--->|Alice's public key|--->verified or not
                                             +------------------+
```

假若没有 Alice 的私钥,就无法轻松伪造 Alice 的签名:因此,Alice 有必要保密她的私钥。
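沿用上面那组玩具 RSA 参数(同样仅为原理演示,毫无实际安全性),签名就是用私钥处理文档的哈希,验证则是用公钥还原并比对哈希:

```python
import hashlib

# 与前例相同的玩具 RSA 参数(n = 61 * 53)
n, e, d = 3233, 17, 2753

def toy_hash(doc: bytes) -> int:
    """把文档的 SHA256 哈希压缩为小于 n 的整数(玩具做法)。"""
    return int.from_bytes(hashlib.sha256(doc).digest(), "big") % n

contract = b"Alice agrees to pay Bob 10 coins."
signature = pow(toy_hash(contract), d, n)      # Alice:用私钥对哈希签名

# Bob:用公钥验证——还原出的哈希必须与自己重新计算的哈希一致
assert pow(signature, e, n) == toy_hash(contract)

# 文档被篡改后,重新计算的哈希(几乎总是)与签名还原出的哈希不符
tampered = b"Alice agrees to pay Bob 99 coins."
if toy_hash(tampered) != toy_hash(contract):
    assert pow(signature, e, n) != toy_hash(tampered)
```

注意加密和签名方向相反:加密是“公钥进、私钥出”,签名是“私钥进、公钥出”。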
在 `client` 程序中,除了数字证书以外,这些安全性工件都没有明确展示出来。下一篇文章将使用 OpenSSL 实用程序和库函数的示例来补充更多细节。

### 命令行的 OpenSSL

同时,让我们看一下 OpenSSL 命令行实用程序:特别是在 TLS 握手期间检查来自 Web 服务器的证书的实用程序。调用 OpenSSL 实用程序要使用 `openssl` 命令,然后添加参数和标志的组合以指定所需的操作。

看看以下命令:

```
openssl list-cipher-algorithms
```

该命令的输出是组成<ruby>加密算法套件<rt>cipher suite</rt></ruby>的相关算法的列表。下面是列表的开头,加了澄清首字母缩写词的注释:
```
AES-128-CBC             ## Advanced Encryption Standard, Cipher Block Chaining
AES-128-CBC-HMAC-SHA1   ## Hash-based Message Authentication Code with SHA1 hashes
AES-128-CBC-HMAC-SHA256 ## ditto, but SHA256 rather than SHA1
...
```

下一条命令使用参数 `s_client` 打开到 [www.google.com][13] 的安全连接,并在屏幕上显示有关此连接的所有信息:

```
openssl s_client -connect www.google.com:443 -showcerts
```

端口号 443 是 Web 服务器用于接收 HTTPS(而不是 HTTP)连接的标准端口号。(对于 HTTP,标准端口为 80。)Web 地址 www.google.com:443 也出现在 `client` 程序的代码中。如果尝试连接成功,则将显示来自 Google 的三个数字证书,以及有关安全会话、正在使用的加密算法套件以及相关项目的信息。例如,这是开头的部分输出,它声明*证书链*即将到来。证书的编码为 base64:
```
Certificate chain
 0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com
   i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
-----BEGIN CERTIFICATE-----
MIIEijCCA3KgAwIBAgIQdCea9tmy/T6rK/dDD1isujANBgkqhkiG9w0BAQsFADBU
MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMSUw
...
```

诸如 Google 之类的主要网站通常会发送多个证书进行身份验证。

输出以有关 TLS 会话的摘要信息结尾,包括加密算法套件的详细信息:

```
SSL-Session:
    Protocol   : TLSv1.2
    Cipher     : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID : A2BBF0E4991E6BBBC318774EEE37CFCB23095CC7640FFC752448D07C7F438573
...
```

`client` 程序中使用的协议是 TLS 1.2,`Session-ID` 唯一地标识了 `openssl` 实用程序和 Google Web 服务器之间的连接。`Cipher` 条目可以按以下方式进行解析:
* `ECDHE`(<ruby>椭圆曲线 Diffie-Hellman(临时)<rt>Elliptic Curve Diffie Hellman Ephemeral</rt></ruby>)是一种用于管理 TLS 握手的高效算法。尤其是,ECDHE 通过确保连接双方(例如,`client` 程序和 Google Web 服务器)使用相同的加密/解密密钥(称为*会话密钥*)来解决“密钥分发问题”。后续文章会深入探讨该细节。
* `RSA`(Rivest Shamir Adleman)是主要的公钥密码系统,以 1970 年代末首次描述该系统的三位学者的名字命名。这里使用的密钥对就是用 RSA 算法生成的。
* `AES128`(<ruby>高级加密标准<rt>Advanced Encryption Standard</rt></ruby>)是一种<ruby>块式加密算法<rt>block cipher</rt></ruby>,用于加密和解密<ruby>位块<rt>blocks of bits</rt></ruby>。(另一种算法是<ruby>流式加密算法<rt>stream cipher</rt></ruby>,它一次加密和解密一个位。)这种加密算法是对称加密算法,因为加密和解密使用同一个密钥,而这正是最初引起密钥分发问题的原因。AES 支持 128(此处使用)、192 和 256 位的密钥大小:密钥越大,安全性越好。

  通常,像 AES 这样的对称加密系统的密钥大小要小于像 RSA 这样的非对称(基于密钥对)系统的密钥大小。例如,1024 位的 RSA 密钥相对较小,而 256 位的密钥则是当前 AES 最大的密钥。
* `GCM`(<ruby>伽罗瓦计数器模式<rt>Galois Counter Mode</rt></ruby>)处理在安全对话期间重复应用的加密算法(在这里是 AES128)。AES128 块的大小仅为 128 位,而安全对话很可能包含从一方发往另一方的多个 AES128 块。GCM 非常高效,通常与 AES128 搭配使用。
* `SHA256`(<ruby>256 位安全哈希算法<rt>Secure Hash Algorithm 256 bits</rt></ruby>)是这里使用的加密哈希算法。生成的哈希值的大小为 256 位,尽管使用 SHA 甚至可以更大。

加密算法套件正在不断发展中。例如,不久前,Google 还在使用 RC4 流加密算法(即 Ron's Cipher 第 4 版,以 RSA 算法中的 Ron Rivest 命名,由他后来开发)。RC4 现在有已知的漏洞,这大概部分地导致了 Google 转换为 AES128。

### 总结

我们通过安全的 C Web 客户端和各种命令行示例对 OpenSSL 做了首次了解,使一些需要进一步阐明的主题浮出水面。[下一篇文章会详细介绍][17],从加密哈希开始,并以对数字证书如何应对密钥分发挑战的更全面讨论作为结束。
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://www.openssl.org/
[3]: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/
[4]: http://condor.depaul.edu/mkalin
[5]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[6]: https://en.wikipedia.org/wiki/Netscape
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/sprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/memset.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[13]: http://www.google.com
[14]: https://www.verisign.com
[15]: https://www.amazon.com
[16]: http://www.google.com:443
[17]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11822-1.html)
[#]: subject: (How the Linux screen tool can save your tasks – and your sanity – if SSH is interrupted)
[#]: via: (https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
如果 SSH 被中断,Linux screen 工具如何拯救你的任务以及理智
|
||||
======
|
||||
|
||||
> 当你需要确保长时间运行的任务不会在 SSH 会话中断时被杀死时,Linux screen 命令可以成为救生员。以下是使用方法。
|
||||
|
||||
![](https://images.idgesg.net/images/article/2019/09/working_w_screen-shs-100812448-large.jpg)
|
||||
|
||||
如果因 SSH 会话断开而不得不重启一个耗时的进程,那么你可能会很高兴了解一个有趣的工具,可以用来避免此问题:`screen` 工具。
|
||||
|
||||
`screen` 是一个终端多路复用器,它使你可以在单个 SSH 会话中运行多个终端会话,并随时从它们之中脱离或重新接驳。做到这一点的过程非常简单,仅涉及少数命令。
|
||||
|
||||
要启动 `screen` 会话,只需在 SSH 会话中键入 `screen`。 然后,你可以开始启动需要长时间运行的进程,并在适当的时候键入 `Ctrl + A Ctrl + D` 从会话中脱离,然后键入 `screen -r` 重新接驳。
|
||||
|
||||
如果你要运行多个 `screen` 会话,更好的选择是为每个会话指定一个有意义的名称,以帮助你记住正在处理的任务。使用这种方法,你可以在启动每个会话时使用如下命令命名:
|
||||
|
||||
```
|
||||
$ screen -S slow-build
|
||||
```
|
||||
|
||||
一旦运行了多个会话,要重新接驳到一个会话,需要从列表中选择它。在以下命令中,我们列出了当前正在运行的会话,然后再重新接驳其中一个。请注意,一开始这两个会话都被标记为已脱离。
|
||||
|
||||
```
|
||||
$ screen -ls
|
||||
There are screens on:
|
||||
6617.check-backups (09/26/2019 04:35:30 PM) (Detached)
|
||||
1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
|
||||
2 Sockets in /run/screen/S-shs
|
||||
```
|
||||
|
||||
然后,重新接驳到该会话要求你提供分配给会话的名称。例如:
|
||||
|
||||
```
|
||||
$ screen -r slow-build
|
||||
```
|
||||
|
||||
在脱离的会话中,保持运行状态的进程会继续进行处理,而你可以执行其他工作。如果你使用这些 `screen` 会话之一来查询 `screen` 会话情况,可以看到当前重新接驳的会话再次显示为 `Attached`。
|
||||
|
||||
```
|
||||
$ screen -ls
|
||||
There are screens on:
|
||||
6617.check-backups (09/26/2019 04:35:30 PM) (Attached)
|
||||
1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
|
||||
2 Sockets in /run/screen/S-shs.
|
||||
```
|
||||
|
||||
你可以使用 `-version` 选项查询正在运行的 `screen` 版本。
|
||||
|
||||
```
|
||||
$ screen -version
|
||||
Screen version 4.06.02 (GNU) 23-Oct-17
|
||||
```
|
||||
|
||||
### 安装 screen
|
||||
|
||||
如果 `which screen` 未在屏幕上提供信息,则可能你的系统上未安装该工具。
|
||||
|
||||
```
|
||||
$ which screen
|
||||
/usr/bin/screen
|
||||
```
|
||||
|
||||
如果你需要安装它,则以下命令之一可能适合你的系统:
|
||||
|
||||
```
|
||||
sudo apt install screen
|
||||
sudo yum install screen
|
||||
```
|
||||
|
||||
当你需要运行耗时的进程时,如果你的 SSH 会话由于某种原因断开连接,则可能会中断这个耗时的进程,那么 `screen` 工具就会派上用场。而且,如你所见,它非常易于使用和管理。
|
||||
|
||||
以下是上面使用的命令的摘要:
|
||||
|
||||
```
|
||||
screen -S <process description> 开始会话
|
||||
Ctrl+A Ctrl+D 从会话中脱离
|
||||
screen -ls 列出会话
|
||||
screen -r <process description> 重新接驳会话
|
||||
```
|
||||
|
||||
尽管还有更多关于 `screen` 的知识,包括可以在 `screen` 会话之间进行操作的其他方式,但这已经足够帮助你开始使用这个便捷的工具了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
|
||||
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[3]: https://www.facebook.com/NetworkWorld/
|
||||
[4]: https://www.linkedin.com/company/network-world
|
61
published/202001/20191015 How GNOME uses Git.md
Normal file
61
published/202001/20191015 How GNOME uses Git.md
Normal file
@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11806-1.html)
[#]: subject: (How GNOME uses Git)
[#]: via: (https://opensource.com/article/19/10/how-gnome-uses-git)
[#]: author: (Molly de Blanc https://opensource.com/users/mollydb)

一个非技术人员对 GNOME 项目使用 GitLab 的感受
======

> 将 GNOME 项目集中在 GitLab 上的决定为整个社区(不只是开发人员)带来了好处。

![red panda][1]

“您的 GitLab 是什么?”这是我在 [GNOME 基金会][2]工作的第一天被问到的第一个问题之一,该基金会是支持 GNOME 项目(包括[桌面环境][3]、[GTK][4] 和 [GStreamer][5])的非营利组织。此人问的是我在 [GNOME 的 GitLab 实例][6]上的用户名。我在 GNOME 期间,经常有人要求我提供我的 GitLab。

我们使用 GitLab 进行几乎所有操作。通常情况下,我会收到一些<ruby>提案<rt>issue</rt></ruby>和参考错误报告,有时还需要修改文件。我不是以开发人员或系统管理员的身份进行此操作的。我参与了“参与度、包容性和多样性(I&D)”团队。我为 GNOME 朋友们撰写新闻通讯,并采访该项目的贡献者。我为 GNOME 活动提供赞助。我不写代码,但我每天都使用 GitLab。

在过去的二十年中,GNOME 项目的管理采用了各种方式。该项目的不同部分使用不同的系统来跟踪代码更改、进行协作,以及作为项目空间和社交空间来共享信息。但是,该项目决定,它需要更加一体化,这从构思到完成大约花费了一年的时间。

GNOME 希望切换到整个社区都使用的单个工具的原因很多。外部项目与 GNOME 息息相关,为它们提供更简单的与资源交互的方式对项目至关重要,无论是支持社区还是发展生态系统。我们还希望更好地跟踪 GNOME 的指标,即贡献者的数量、贡献的类型和数量以及项目不同部分的开发进度。

当需要选择一种协作工具时,我们考虑了我们需要的东西。最重要的要求之一是它必须由 GNOME 社区托管。由第三方托管并不是一种选择,因此像 GitHub 和 Atlassian 这样的服务就不在考虑之中。而且,当然了,它必须是自由软件。很快,唯一真正的竞争者出现了,它就是 GitLab。我们希望确保进行贡献很容易。GitLab 具有诸如单点登录的功能,该功能允许人们使用 GitHub、Google、GitLab.com 和 GNOME 帐户登录。

我们认为 GitLab 是一条出路,我们开始从许多工具迁移到单个工具。GNOME 董事会成员 [Carlos Soriano][7] 领导这项改变。在 GitLab 和 GNOME 社区的大力支持下,我们于 2018 年 5 月完成了该过程。

人们非常希望迁移到 GitLab 有助于社区的发展,并使贡献更加容易。由于 GNOME 以前使用了许多不同的工具,包括 Bugzilla 和 CGit,因此很难定量地评估这次切换对贡献量的影响。但是,我们可以更清楚地跟踪一些统计数据,例如在 2018 年 6 月至 2018 年 11 月之间关闭了近 10,000 个提案,合并了 7,085 个合并请求。人们感到社区在发展壮大,越来越受欢迎,而且贡献实际上也更加容易。

人们因不同的原因而开始使用自由软件,重要的是,可以通过为需要软件的人提供更好的资源和更多的支持来公平竞争。Git 作为一种工具已被广泛使用,并且越来越多的人使用这些技能来参与到自由软件当中。自托管的 GitLab 提供了将 Git 的熟悉度与 GitLab 提供的功能丰富、用户友好的环境相结合的绝佳机会。

切换到 GitLab 已经一年多了,变化确实很明显。持续集成(CI)为开发带来了巨大的好处,并且已经完全集成到 GNOME 的几乎每个部分当中。不进行代码开发的团队也转而使用 GitLab 生态系统进行工作。无论是使用问题跟踪来管理分配的任务,还是使用版本控制来共享和管理资产,就连“参与度、包容性和多样性(I&D)”这样的团队都已经使用了 GitLab。

一个社区,即使是一个正在开发自由软件的社区,也很难适应新技术或新工具。对于类似 GNOME 的项目,这尤其困难,该项目[最近已经 22 岁了][8]。像 GNOME 这样经过了 20 多年建设的项目,有太多的人和组织在使用它的太多的部件,但迁移工作之所以能够实现,要归功于 GNOME 社区的辛勤工作和 GitLab 的慷慨帮助。

在为使用 Git 进行版本控制的项目工作时,我发现很方便。这是一个令人感觉舒适和熟悉的系统,是一个在工作场所和爱好项目之间保持一致的工具。作为 GNOME 社区的新成员,能够参与并使用 GitLab 真是太好了。作为社区建设者,看到这样的结果是令人鼓舞的:越来越多的相关项目加入并进入生态系统;新的贡献者和社区成员对该项目做出了首次贡献;以及增强了衡量我们正在做的工作、以了解其成效的能力。

如此多的做着完全不同的事情(例如他们正在从事的不同工作以及所使用的不同技能)的团队同意汇集在一个工具上(尤其是一个被认为是跨开源领域的标准工具),这一点很棒。作为 GNOME 的贡献者,我真的非常感谢我们使用了 GitLab。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/how-gnome-uses-git

作者:[Molly de Blanc][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mollydb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/redpanda_firefox_pet_animal.jpg?itok=aSpKsyna (red panda)
[2]: https://www.gnome.org/foundation/
[3]: https://gnome.org/
[4]: https://www.gtk.org/
[5]: https://gstreamer.freedesktop.org/
[6]: https://gitlab.gnome.org/
[7]: https://twitter.com/csoriano1618?lang=en
[8]: https://opensource.com/article/19/8/poll-favorite-gnome-version
@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: (alim0x)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11831-1.html)
[#]: subject: (My Linux story: Learning Linux in the 90s)
[#]: via: (https://opensource.com/article/19/11/learning-linux-90s)
[#]: author: (Mike Harris https://opensource.com/users/mharris)

我的 Linux 故事:在 90 年代学习 Linux
======

> 这是一个关于我如何在 WiFi 时代之前学习 Linux 的故事,那时的发行版还以 CD 的形式出现。

![](https://img.linux.net.cn/data/attachment/album/202001/29/213829t00wmwu2w0z502zg.jpg)

大部分人可能不记得 1996 年时计算产业或者日常生活世界的样子。但我很清楚地记得那一年。我那时候是堪萨斯中部一所高中的二年级学生,那是我的自由与开源软件(FOSS)旅程的开端。

我从这里开始进步。我在 1996 年之前就开始对计算机感兴趣。我在我家的第一台 Apple ][e 上启蒙成长,然后多年之后是 IBM Personal System/2。(是的,在这过程中有一些代际的跨越。)IBM PS/2 有一个非常激动人心的特性:一个 1200 波特的 Hayes 调制解调器。

我不记得是怎样了,但在那不久之前,我得到了一个本地 [BBS][2] 的电话号码。一旦我拨号进去,我就可以得到本地一些其他 BBS 的列表,我的网络探险就此开始了。

在 1995 年,[足够幸运][3]的人拥有了家庭互联网连接,每月可以使用不到 30 分钟。那时的互联网不像我们现代的服务那样,通过卫星、光纤、有线电视同轴电缆或任何版本的铜线提供。大多数家庭通过调制解调器拨号,它连接到他们的电话线上。(这时离移动电话无处不在的时代还早得很,大多数人只有一部家庭电话。)尽管这还要取决于你所在的位置,但我不认为那时有很多独立的互联网服务提供商(ISP),所以大多数人从仅有的几家大公司获得服务,包括 America Online、CompuServe 以及 Prodigy。

你能获取到的服务速率非常低,甚至在拨号上网革命性地达到了顶峰的 56K,你也只能期望得到最高 3.5Kbps 的速率。如果你想要尝试 Linux,下载一个 200MB 到 800MB 的 ISO 镜像或(更加切合实际的)一套软盘镜像,需要投入时间和决心,还得减少电话的使用。

我走了一条简单一点的路:在 1996 年,我从一家主要的 Linux 发行商订购了一套 “tri-Linux” CD 集。这些光盘提供了三个发行版,我的这套包含了 Debian 1.1(Debian 的第一个稳定版本)、Red Hat Linux 3.0.3 以及 Slackware 3.1(代号 Slackware '96)。据我回忆,这些光盘是从一家叫做 [Linux Systems Labs][4] 的在线商店购买的。这家在线商店如今已经不存在了,但在 90 年代和 00 年代早期,这样的发行商很常见。这些是多光盘 Linux 套件。这是 1998 年的一套光盘,你可以了解到它们都包含了什么:

![A tri-linux CD set][5]

![A tri-linux CD set][6]

在 1996 年夏天一个命中注定般的日子,那时我住在堪萨斯一个新的、相对较为乡村的城市,我做出了安装并使用 Linux 的第一次尝试。在 1996 年的整个夏天,我尝试了那套三张 Linux CD 套件里的全部三个发行版。它们都在我母亲的老 Pentium 75MHz 电脑上完美运行。

我最终选择了 [Slackware][7] 3.1 作为我的首选发行版,相比其它发行版,可能更多的是因为它的终端的外观,这是决定选择一个发行版前需要考虑的重要因素。

我将系统设置完毕并运行了起来。我连接到一家“不太知名的”ISP(一家这个区域的本地服务商),通过我家的第二条电话线拨号(为了满足我的所有互联网使用而订购)。那就像在天堂一样。我有一台完美运行的双系统(Microsoft Windows 95 和 Slackware 3.1)电脑。我依然拨号进入我所知道和喜爱的 BBS,游玩在线 BBS 游戏,比如 Trade Wars、Usurper 以及 Legend of the Red Dragon。

我还记得在 EFNet(IRC)的 #Linux 频道上度过的日子,帮助其他用户,回答他们的 Linux 问题,以及和版主们互动。

在我第一次在家尝试使用 Linux 系统的 20 多年后,已经是我进入作为 Red Hat 顾问的第五年,我仍然在使用 Linux(现在是 Fedora)作为我的日常系统,并且依然在 IRC 上帮助想要使用 Linux 的人们。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/11/learning-linux-90s

作者:[Mike Harris][a]
选题:[lujun9972][b]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mharris
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
[2]: https://en.wikipedia.org/wiki/Bulletin_board_system
[3]: https://en.wikipedia.org/wiki/Global_Internet_usage#Internet_users
[4]: https://web.archive.org/web/19961221003003/http://lsl.com/
[5]: https://opensource.com/sites/default/files/20191026_142009.jpg (A tri-linux CD set)
[6]: https://opensource.com/sites/default/files/20191026_142020.jpg (A tri-linux CD set)
[7]: http://slackware.com
@ -1,34 +1,33 @@
[#]: collector: (lujun9972)
[#]: translator: (cycoe)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11790-1.html)
[#]: subject: (Add jumping to your Python platformer game)
[#]: via: (https://opensource.com/article/19/12/jumping-python-platformer-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

为你的 Python 平台类游戏添加跳跃功能
======

> 在本期使用 Python Pygame 模块编写视频游戏中,学会如何使用跳跃来对抗重力。

![](https://img.linux.net.cn/data/attachment/album/202001/16/214917c8mxn82fot82fx88.jpg)

在本系列的 [前一篇文章][2] 中,你已经模拟了重力。但现在,你需要赋予你的角色跳跃的能力来对抗重力。

跳跃是对重力作用的暂时延缓。在这一小段时间里,你是向*上*跳,而不是被重力拉着向下落。但你一旦到达了跳跃的最高点,重力就会重新发挥作用,将你拉回地面。

在代码中,这种变化被表示为变量。首先,你需要为玩家精灵建立一个变量,使得 Python 能够跟踪该精灵是否正在跳跃中。一旦玩家精灵开始跳跃,他就会再次受到重力的作用,并被拉回最近的物体。

### 设置跳跃状态变量

你需要为你的 `Player` 类添加两个新变量:

* 一个是为了跟踪你的角色是否正在跳跃中,可通过你的玩家精灵是否站在坚实的地面来确定
* 一个是为了将玩家带回地面

将如下两个变量添加到你的 `Player` 类中。在下方的代码中,前面的部分用于提示上下文,因此只需要添加最后两行:

```
    self.movex = 0
    self.collide_delta = 0
    self.jump_delta = 6
```

第一个变量 `collide_delta` 被设为 0 是因为在正常状态下,玩家精灵没有处在跳跃中的状态。另一个变量 `jump_delta` 被设为 6,是为了防止精灵在第一次进入游戏世界时就发生反弹(实际上就是跳跃)。当你完成了本篇文章的示例,尝试把该变量设为 0 看看会发生什么。

### 跳跃中的碰撞

如果你是跳到一个蹦床上,那你的跳跃一定非常优美。但是如果你是跳向一面墙会发生什么呢?(千万不要去尝试!)不管你的起跳多么令人印象深刻,当你撞到比你更大更硬的物体时,你都会立马停下。(LCTT 译注:原理参考动量守恒定律)

为了在你的视频游戏中模拟这一点,你需要在你的玩家精灵与地面等东西发生碰撞时,将 `self.collide_delta` 变量设为 0。如果你的 `self.collide_delta` 不是 0 而是其它的什么值,那么你的玩家就会发生跳跃,并且当你的玩家与墙或者地面发生碰撞时无法跳跃。

在你的 `Player` 类的 `update` 方法中,将地面碰撞相关代码块修改为如下所示:

```
    ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
    for g in ground_hit_list:
        self.movey = 0
        self.rect.y = worldy-ty-ty
        self.collide_delta = 0 # 停止跳跃
        if self.rect.y > g.rect.y:
            self.health -=1
            print(self.health)
```

这段代码块检查了地面精灵和玩家精灵之间发生的碰撞。当发生碰撞时,它会将玩家 Y 方向的坐标值设置为游戏窗口的高度减去一个瓷砖的高度再减去另一个瓷砖的高度。以此保证了玩家精灵是站在地面*上*,而不是嵌在地面里。同时它也将 `self.collide_delta` 设为 0,使得程序能够知道玩家未处在跳跃中。除此之外,它将 `self.movey` 设为 0,使得程序能够知道玩家当前未受到重力的牵引作用(这是游戏物理引擎的奇怪之处,一旦玩家落地,也就没有必要继续将玩家拉向地面)。

此处 `if` 语句用来检测玩家是否已经落到地面之*下*,如果是,那就扣除一点生命值作为惩罚。此处假定了你希望当你的玩家落到地图之外时失去生命值。这个设定不是必需的,它只是平台类游戏的一种惯例。更有可能的是,你希望这个事件能够触发另一些事件,或者说是一种能够让你的现实世界玩家沉迷于让精灵掉到屏幕之外的东西。一种简单的恢复方式是在玩家精灵掉落到地图之外时,将 `self.rect.y` 重新设置为 0,这样它就会在地图上方重新生成,并落到坚实的地面上。

### 撞向地面

模拟的重力使你玩家的 Y 坐标不断增大(LCTT 译注:此处原文中为 0,但在 Pygame 中越靠下方 Y 坐标应越大)。要实现跳跃,完成如下代码使你的玩家精灵离开地面,飞向空中。

在你的 `Player` 类的 `update` 方法中,添加如下代码来暂时延缓重力的作用:

```
    if self.collide_delta < 6 and self.jump_delta < 6:
        self.jump_delta = 6*2
        self.movey -= 33 # 跳跃的高度
        self.collide_delta += 6
        self.jump_delta += 6
```

根据此代码所示,跳跃使玩家精灵向空中移动了 33 个像素。此处是*负* 33 是因为在 Pygame 中,越小的数代表距离屏幕顶端越近。

不过此事件视条件而定,只有当 `self.collide_delta` 小于 6(缺省值定义在你 `Player` 类的 `init` 方法中)并且 `self.jump_delta` 也小于 6 的时候才会发生。此条件能够保证直到玩家碰到一个平台,才能触发另一次跳跃。换言之,它能够阻止空中二段跳。

在某些特殊条件下,你可能不想阻止空中二段跳,或者说你允许玩家进行空中二段跳。举个栗子,如果玩家获得了某个战利品,那么在他被敌人攻击到之前,都能够拥有空中二段跳的能力。

当你完成本篇文章中的示例,尝试将 `self.collide_delta` 和 `self.jump_delta` 设置为 0,从而获得百分之百的几率触发空中二段跳。
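为了更直观地看到这两个变量如何配合,下面给出一个不依赖 Pygame 的最小示意。其中的 `MiniPlayer` 类是为演示而虚构的简化模型,并非本文游戏中的 `Player` 类;它只保留与跳跃有关的状态,演示上述守卫条件如何阻止空中二段跳:

```python
class MiniPlayer:
    """简化的玩家模型,只保留与跳跃相关的状态(假设性示例)。"""
    def __init__(self):
        self.movey = 0
        self.collide_delta = 0   # 落地时为 0
        self.jump_delta = 6      # 初始为 6,防止出生时就弹跳

    def jump(self):
        # 触发跳跃:把 jump_delta 设为小于 6 的值
        self.jump_delta = 0

    def update(self):
        # 与文中相同的守卫条件:两者都小于 6 才真正起跳
        if self.collide_delta < 6 and self.jump_delta < 6:
            self.jump_delta = 6 * 2
            self.movey -= 33          # 向上移动 33 像素
            self.collide_delta += 6
            self.jump_delta += 6

    def land(self):
        # 落地:重力停止,允许再次跳跃
        self.movey = 0
        self.collide_delta = 0

p = MiniPlayer()
p.jump(); p.update()
print(p.movey)   # -33:第一次跳跃生效
p.jump(); p.update()
print(p.movey)   # 仍为 -33:未落地,空中二段跳被阻止
p.land()
p.jump(); p.update()
print(p.movey)   # -33:落地后又能跳了
```

可以看到,只有在 `land()` 把 `collide_delta` 清零之后,下一次 `jump()` 才会再次生效,这正是文中防止二段跳的机制。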

### 在平台上着陆

目前你已经定义了在玩家精灵摔落地面时的抵抗重力条件,但此时你的游戏代码仍保持平台与地面置于不同的列表中(就像本文中做的很多其他选择一样,这个设定并不是必需的,你可以尝试将地面作为另一种平台)。为了允许玩家精灵站在平台之上,你必须像检测地面碰撞一样,检测玩家精灵与平台精灵之间的碰撞。将如下代码放于你的 `update` 方法中:

```
    plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
    for p in plat_hit_list:
        self.collide_delta = 0 # 停止跳跃
        self.movey = 0
```

但此处还有一点需要考虑:平台悬在空中,也就意味着玩家可以通过从上面或者从下面接触平台来与之互动。

确定平台如何与玩家互动取决于你,阻止玩家从下方到达平台也并不稀奇。将如下代码加到上方的代码块中,使得平台表现得像天花板或者说是藤架。只有在玩家精灵跳得比平台上沿更高时才能跳到平台上,但会阻止玩家从平台下方跳上来:

```
        if self.rect.y > p.rect.y:
            self.rect.y = p.rect.y+ty
        else:
            self.rect.y = p.rect.y-ty
```

此处 `if` 语句代码块的第一个子句阻止玩家精灵从平台正下方跳到平台上。如果它检测到玩家精灵的坐标比平台更大(在 Pygame 中,坐标更大意味着在屏幕的更下方),那么将玩家精灵新的 Y 坐标设置为当前平台的 Y 坐标加上一个瓷砖的高度。实际效果就是保证玩家精灵距离平台一个瓷砖的高度,防止其从下方穿过平台。

`else` 子句做了相反的事情。当程序运行到此处时,如果玩家精灵的 Y 坐标*不*比平台的更大,意味着玩家精灵是从空中落下(不论是由于玩家刚刚从此处生成,或者是玩家执行了跳跃)。在这种情况下,玩家精灵的 Y 坐标被设为平台的 Y 坐标减去一个瓷砖的高度(切记,在 Pygame 中更小的 Y 坐标代表在屏幕上的更高处)。这样就能保证玩家在平台*上*,除非他从平台上跳下来或者走下来。

你也可以尝试其他的方式来处理玩家与平台之间的互动。举个栗子,也许玩家精灵被设定为处在平台的“前面”,他能够无障碍地跳跃穿过平台并站在上面。或者你可以设计一种平台会减缓而又不完全阻止玩家的跳跃过程。甚至你可以通过将不同平台分到不同列表中来混合搭配使用。
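为了便于单独验证这段天花板/站立判断,下面把它抽成一个纯 Python 函数。`resolve_platform` 这个函数名以及用整数坐标代替 `rect` 的做法,都是为演示而假设的,并非 Pygame 的 API:

```python
ty = 64  # 瓷砖高度,与文中的设置一致

def resolve_platform(player_y, plat_y):
    """返回玩家与平台碰撞后应处的 Y 坐标(纯 Python 示意)。

    在 Pygame 中 Y 越大越靠屏幕下方:
    - 玩家在平台下方(player_y > plat_y):被顶回平台之下;
    - 玩家从上方落下:被托在平台之上。
    """
    if player_y > plat_y:
        return plat_y + ty   # 从下方撞到:停在平台下沿
    else:
        return plat_y - ty   # 从上方落下:站在平台上沿

print(resolve_platform(500, 400))  # 464:从下方被挡住
print(resolve_platform(300, 400))  # 336:落在平台上
```

这样就可以脱离游戏主循环,直接用几组坐标检验“天花板”行为是否符合预期。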

### 触发一次跳跃

目前为止,你的代码已经模拟了所有必需的跳跃条件,但仍缺少一个跳跃触发器。你的玩家精灵的 `self.jump_delta` 初始值被设置为 6,只有当它比 6 小的时候才会触发更新跳跃的代码。

为设置跳跃变量创建一个新的方法:在你的 `Player` 类中创建一个 `jump` 方法,并将 `self.jump_delta` 设为小于 6 的值,通过使玩家精灵向空中移动 33 个像素,来暂时减缓重力的作用。

```
    def jump(self,platform_list):
        self.jump_delta = 0
```

不管你相信与否,这就是 `jump` 方法的全部。剩余的部分在 `update` 方法中,你已经在前面实现了相关代码。

要使你游戏中的跳跃功能生效,还有最后一件事情要做。如果你想不起来是什么,运行游戏并观察跳跃是如何生效的。

问题就在于你的主循环中没有调用 `jump` 方法。先前你已经为该方法创建了一个按键占位符,现在,跳跃键所做的就是将 `jump` 打印到终端。

### 调用 jump 方法

在你的主循环中,将*上*方向键的效果从打印一条调试语句,改为调用 `jump` 方法。

注意此处,与 `update` 方法类似,`jump` 方法也需要检测碰撞,因此你需要告诉它使用哪个 `plat_list`。

```
    if event.key == pygame.K_UP or event.key == ord('w'):
        player.jump(plat_list)
```

如果你倾向于使用空格键作为跳跃键,使用 `pygame.K_SPACE` 替代 `pygame.K_UP` 作为按键。另一种选择,你可以同时使用两种方式(使用单独的 `if` 语句),给玩家多一种选择。

现在来尝试你的游戏吧!在下一篇文章中,你将让你的游戏卷动起来。

以下是目前为止的所有代码:
```
#!/usr/bin/env python3
# draw a world

# ...

class Player(pygame.sprite.Sprite):
    # ...
    def gravity(self):
        self.movey += 3.2 # how fast player falls

        if self.rect.y > worldy and self.movey >= 0:
            self.movey = 0
            self.rect.y = worldy-ty

    # ...

    def update(self):
        '''
        更新精灵位置
        '''
        self.rect.x = self.rect.x + self.movex
        self.rect.y = self.rect.y + self.movey

        # 向左移动
        if self.movex < 0:
            self.frame += 1
            if self.frame > ani*3:
                self.frame = 0
            self.image = self.images[self.frame//ani]

        # 向右移动
        if self.movex > 0:
            self.frame += 1
            if self.frame > ani*3:
                self.frame = 0
            self.image = self.images[(self.frame//ani)+4]

        # ...
        for p in plat_hit_list:
            self.collide_delta = 0 # stop jumping
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y+ty
            else:
                self.rect.y = p.rect.y-ty

        # ...
        for g in ground_hit_list:
            self.movey = 0
            self.rect.y = worldy-ty-ty
            self.collide_delta = 0 # stop jumping
            if self.rect.y > g.rect.y:
                self.health -=1
                print(self.health)

        if self.collide_delta < 6 and self.jump_delta < 6:
            self.jump_delta = 6*2
            self.movey -= 33 # how high to jump
            self.collide_delta += 6

class Enemy(pygame.sprite.Sprite):
    # ...
        self.movey += 3.2

        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance*2:
            self.rect.x -= speed
        else:
            self.counter = 0

        self.counter += 1

        if not self.rect.y >= worldy-ty-ty:
            self.rect.y += self.movey

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y+ty
            else:
                self.rect.y = p.rect.y-ty

class Level():
    # ...
        ground_list = pygame.sprite.Group()
        i=0
        if lvl == 1:
            while i < len(gloc):
                ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
                ground_list.add(ground)
                i=i+1

    # ...
        ploc.append((300,worldy-ty-256,3))
        ploc.append((500,worldy-ty-128,4))

        while i < len(ploc):
            j=0
            while j <= ploc[i][2]:
                plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
                plat_list.add(plat)
                j=j+1

# ...
eloc = []
eloc = [200,20]
gloc = []
#gloc = [0,630,64,630,128,630,192,630,256,630,320,630,384,630]
tx = 64 # 瓷砖尺寸
ty = 64 # 瓷砖尺寸

i=0
while i <= (worldx/tx)+tx:
    gloc.append(i*tx)
    i=i+1

# ...
while main == True:
    # ...
```

(以上代码为节选,`# ...` 处省略了与本篇无关的部分。)

* [如何在你的 Python 游戏中添加一个玩家][8]
* [用 Pygame 使你的游戏角色移动起来][9]
* [如何向你的 Python 游戏中添加一个敌人][10]
* [在 Pygame 游戏中放置平台][11]
* [在你的 Python 游戏中模拟引力][2]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/jumping-python-platformer-game

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[cycoe](https://github.com/cycoe)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arcade_game_gaming.jpg?itok=84Rjk_32 (Arcade games)
[2]: https://linux.cn/article-11780-1.html
[3]: https://opensource.com/sites/default/files/uploads/pygame-jump.jpg (Pygame platformer)
[4]: https://www.python.org/
[5]: https://www.pygame.org/
[6]: https://linux.cn/article-9071-1.html
[7]: https://linux.cn/article-10850-1.html
[8]: https://linux.cn/article-10858-1.html
[9]: https://linux.cn/article-10874-1.html
[10]: https://linux.cn/article-10883-1.html
[11]: https://linux.cn/article-10902-1.html
@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11814-1.html)
[#]: subject: (What's your favorite terminal emulator?)
[#]: via: (https://opensource.com/article/19/12/favorite-terminal-emulator)
[#]: author: (Opensource.com https://opensource.com/users/admin)

你最喜欢的终端模拟器是什么?
======

> 我们让社区讲述他们在终端模拟器方面的经验。以下是我们收到的一些回复。

![](https://img.linux.net.cn/data/attachment/album/202001/24/000846qsmpz7s7spig77qg.jpg)

对终端模拟器的偏好可以说明一个人的工作流程。是否必须具备无鼠标操作能力?你想要标签页还是多窗口?你选择某个终端模拟器还有什么原因?是否有酷的因素?欢迎参加调查或给我们留下评论,告诉我们你最喜欢的终端模拟器。你尝试过多少种终端模拟器呢?

我们让社区讲述他们在终端模拟器方面的经验。以下是我们收到的一些回复。

“我最喜欢的终端模拟器是用 Powerline 定制的 Tilix。我喜欢它支持在一个窗口中打开多个终端。” —Dan Arel

“[urxvt][2]。它可以通过文件简单配置,轻巧,并且在大多数软件包管理器的存储库中都很容易找到。” —Brian Tomlinson

“即使我不再使用 GNOME,gnome-terminal 仍然是我的首选。:)” —Justin W. Flory

“现在是 FC31 上的 Terminator。我刚刚开始使用它,我喜欢它的分屏功能,对我来说感觉很轻巧。我正在研究它的插件。” —Marc Maxwell

“不久前,我切换到了 Tilix,它完成了我需要终端执行的所有工作。:) 多个窗格、通知,很精简,用来运行我的 tmux 会话很棒。” —Kevin Fenzi

“alacritty。它针对速度进行了优化,是用 Rust 实现的,并且具有很多常规功能,但是老实说,我只关心一个功能:可配置的字形间距,使我可以进一步压缩字体。” —Alexander Sosedkin

“我是个老古板:KDE Konsole。如果是远程会话,就用 tmux。” —Marcin Juszkiewicz

“在 macOS 上用 iTerm2。是的,它是开源的。:-) 在 Linux 上是 Terminator。” —Patrick Mullins

“我现在已经使用 alacritty 一两年了,但是最近我在全屏模式下使用 cool-retro-term,因为我必须运行一个输出内容很多的脚本,而它看起来很酷,让我感觉很酷。这对我很重要。” —Nick Childers

“我喜欢 Tilix,部分是因为它擅长免打扰(我通常全屏运行它,里面是 tmux),而且还提供自定义热链接支持:在我的终端中,像 ‘rhbz#1234’ 之类的文本是将我带到 Bugzilla 的热链接。类似的还有 LaunchPad 提案,OpenStack 的 Gerrit 更改 ID 等。” —Lars Kellogg-Stedman

“Eterm。它在使用 Vintage 配置文件的 cool-retro-term 中演示效果也最好。” —Ivan Horvath

“Tilix +1。这是 GNOME 用户最好的选择,我是这么觉得的!” —Eric Rich

“urxvt。快速、小型、可配置、可通过 Perl 插件扩展,这使其可以无鼠标操作。” —Roman Dobosz

“Konsole 是最好的,也是 KDE 项目中我唯一使用的应用程序。所有搜索结果都高亮显示是一个杀手级功能,据我所知没有任何其它 Linux 终端有这个功能(如果能证明我错了,那我也很高兴)。最适合搜索编译错误和输出日志。” —Jan Horak

“我过去经常使用 Terminator。现在我在 Tilix 中克隆了它的主题(深色主题),而感受一样好。它可以在选项卡之间轻松移动。就是这样。” —Alberto Fanjul Alonso

“我开始使用的是 Terminator,在差不多过去这三年里,我已经完全切换到了 Tilix。” —Mike Harris

“我使用下拉式终端 X。这是 GNOME 3 的一个非常简单的扩展,使我始终可以通过一个按键(对于我来说是 `F12`)拉出一个终端。它还支持标签页,这正是我所需要的。” —Germán Pulido

“xfce4-terminal:支持 Wayland、缩放、无边框、无标题栏、无滚动条 —— 这就是我在 tmux 之外全部想要的终端模拟器的功能。我希望我的终端模拟器可以尽可能多地使用屏幕空间,我通常在 tmux 窗格中并排放着编辑器(Vim)和 REPL。” —Martin Kourim

“别问,问就是 Fish!;-)” —Eric Schabell

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/favorite-terminal-emulator

作者:[Opensource.com][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/admin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals_0.png?itok=XwIRERsn (Terminal window with green text)
[2]: https://opensource.com/article/19/10/why-use-rxvt-terminal
@ -0,0 +1,467 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11819-1.html)
|
||||
[#]: subject: (Enable your Python game player to run forward and backward)
|
||||
[#]: via: (https://opensource.com/article/19/12/python-platformer-game-run)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
使你的 Python 游戏玩家能够向前和向后跑
|
||||
======
|
||||
> 使用 Pygame 模块来使你的 Python 平台开启侧滚效果,来让你的玩家自由奔跑。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/25/220636x5mabbl47xvtsk55.jpg)
|
||||
|
||||
这是仍在进行中的关于使用 Pygame 模块来在 Python 3 中在创建电脑游戏的第九部分。先前的文章是:
|
||||
|
||||
* [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][2]
|
||||
* [使用 Python 和 Pygame 模块构建一个游戏框架][3]
|
||||
* [如何在你的 Python 游戏中添加一个玩家][4]
|
||||
* [用 Pygame 使你的游戏角色移动起来][5]
|
||||
* [如何向你的 Python 游戏中添加一个敌人][6]
|
||||
* [在 Pygame 游戏中放置平台][12]
|
||||
* [在你的 Python 游戏中模拟引力][7]
|
||||
* [为你的 Python 平台类游戏添加跳跃功能][8]
|
||||
|
||||
在这一系列关于使用 [Pygame][10] 模块来在 [Python 3][9] 中创建电脑游戏的先前文章中,你已经设计了你的关卡设计布局,但是你的关卡的一些部分可能已近超出你的屏幕的可视区域。在平台类游戏中,这个问题的普遍解决方案是,像术语“<ruby>侧滚<rt>side-scroller</rt></ruby>”表明的一样,滚动。
|
||||
|
||||
滚动的关键是当玩家精灵接近屏的幕边缘时,使在玩家精灵周围的平台移动。这样给予一种错觉,屏幕是一个在游戏世界中穿梭追拍的"摄像机"。
|
||||
|
||||
这个滚动技巧需要两个在屏幕边缘的绝对区域,在绝对区域内的点处,在世界滚动期间,你的化身静止不动。
|
||||
|
||||
### 在侧滚动条中放置卷轴
|
||||
|
||||
如果你希望你的玩家能够后退,你需要一个触发点来向前和向后。这两个点仅仅是两个变量。设置它们各个距各个屏幕边缘大约 100 或 200 像素。在你的设置部分中创建变量。在下面的代码中,前两行用于上下文说明,所以仅需要添加这行后的代码:
|
||||
|
||||
```
|
||||
player_list.add(player)
|
||||
steps = 10
|
||||
forwardX = 600
|
||||
backwardX = 230
|
||||
```
|
||||
|
||||
在主循环中,查看你的玩家精灵是否在 `forwardx` 或 `backwardx` 滚动点处。如果是这样,向左或向右移动使用的平台,取决于世界是向前或向后移动。在下面的代码中,代码的最后三行仅供你参考:
|
||||
|
||||
```
|
||||
# scroll the world forward
|
||||
if player.rect.x >= forwardx:
|
||||
scroll = player.rect.x - forwardx
|
||||
player.rect.x = forwardx
|
||||
for p in plat_list:
|
||||
p.rect.x -= scroll
|
||||
|
||||
# scroll the world backward
|
||||
if player.rect.x <= backwardx:
|
||||
scroll = backwardx - player.rect.x
|
||||
player.rect.x = backwardx
|
||||
for p in plat_list:
|
||||
p.rect.x += scroll
|
||||
|
||||
## scrolling code above
|
||||
world.blit(backdrop, backdropbox)
|
||||
player.gravity() # check gravity
|
||||
player.update()
|
||||
```
|
||||
|
||||
启动你的游戏,并尝试它。
|
||||
|
||||
![Scrolling the world in Pygame][11]
|
||||
|
||||
滚动像预期的一样工作,但是你可能会注意到一个小问题:当你滚动玩家和非玩家精灵周围的世界时,敌人精灵并不会随着世界一起滚动。除非你想让敌人精灵无休止地追逐你的玩家,否则你需要修改敌人代码,让敌人在你的玩家快速撤退时被留在后面。

### 滚动敌人

在你的主循环中,你必须把滚动平台的规则同样应用到敌人的位置上。因为你的游戏世界(很可能)会有不止一个敌人,所以该规则应该应用于敌人列表,而不是单个敌人精灵。这是把类似的元素分组到列表中的优点之一。

前两行仅用于展示上下文,所以只需将其后的代码添加到你的主循环中:
|
||||
|
||||
```
|
||||
# scroll the world forward
|
||||
if player.rect.x >= forwardx:
|
||||
scroll = player.rect.x - forwardx
|
||||
player.rect.x = forwardx
|
||||
for p in plat_list:
|
||||
p.rect.x -= scroll
|
||||
for e in enemy_list:
|
||||
e.rect.x -= scroll
|
||||
```
|
||||
|
||||
接下来是向另一个方向滚动的代码:
|
||||
|
||||
```
|
||||
# scroll the world backward
|
||||
if player.rect.x <= backwardx:
|
||||
scroll = backwardx - player.rect.x
|
||||
player.rect.x = backwardx
|
||||
for p in plat_list:
|
||||
p.rect.x += scroll
|
||||
for e in enemy_list:
|
||||
e.rect.x += scroll
|
||||
```
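顺便一提,像上面这样对多个精灵组重复同样的循环,也可以抽象成一个小函数。下面是一个纯 Python 的示意(`shift_world` 和 `Box` 都是假设的演示名称,不属于本游戏代码):

```python
from types import SimpleNamespace

class Box:
    """一个最小的精灵替身,只带一个记录 x 坐标的 rect。"""
    def __init__(self, x):
        self.rect = SimpleNamespace(x=x)

def shift_world(scroll, *groups):
    """把同一滚动偏移量应用到任意多个精灵组上。"""
    for group in groups:
        for sprite in group:
            sprite.rect.x -= scroll

plats = [Box(100), Box(200)]
enemies = [Box(300)]
shift_world(50, plats, enemies)            # 平台和敌人一起向左移动 50 像素
print(plats[0].rect.x, enemies[0].rect.x)  # 50 250
```

这样无论以后再加入多少种需要随世界滚动的精灵组,都只需把它们传给同一个函数即可。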
|
||||
|
||||
再次启动游戏,看看发生什么。
|
||||
|
||||
这里是到目前为止你为这个 Python 平台类游戏所写的全部代码:
|
||||
|
||||
```
|
||||
#!/usr/bin/env python3
|
||||
# draw a world
|
||||
# add a player and player control
|
||||
# add player movement
|
||||
# add enemy and basic collision
|
||||
# add platform
|
||||
# add gravity
|
||||
# add jumping
|
||||
# add scrolling
|
||||
|
||||
# GNU All-Permissive License
|
||||
# Copying and distribution of this file, with or without modification,
|
||||
# are permitted in any medium without royalty provided the copyright
|
||||
# notice and this notice are preserved. This file is offered as-is,
|
||||
# without any warranty.
|
||||
|
||||
import pygame
|
||||
import sys
|
||||
import os
|
||||
|
||||
'''
|
||||
Objects
|
||||
'''
|
||||
|
||||
class Platform(pygame.sprite.Sprite):
|
||||
# x location, y location, img width, img height, img file
|
||||
def __init__(self,xloc,yloc,imgw,imgh,img):
|
||||
pygame.sprite.Sprite.__init__(self)
|
||||
self.image = pygame.image.load(os.path.join('images',img)).convert()
|
||||
self.image.convert_alpha()
|
||||
self.rect = self.image.get_rect()
|
||||
self.rect.y = yloc
|
||||
self.rect.x = xloc
|
||||
|
||||
class Player(pygame.sprite.Sprite):
|
||||
'''
|
||||
Spawn a player
|
||||
'''
|
||||
def __init__(self):
|
||||
pygame.sprite.Sprite.__init__(self)
|
||||
self.movex = 0
|
||||
self.movey = 0
|
||||
self.frame = 0
|
||||
self.health = 10
|
||||
self.collide_delta = 0
|
||||
self.jump_delta = 6
|
||||
self.score = 1
|
||||
self.images = []
|
||||
for i in range(1,9):
|
||||
img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
|
||||
img.convert_alpha()
|
||||
img.set_colorkey(ALPHA)
|
||||
self.images.append(img)
|
||||
self.image = self.images[0]
|
||||
self.rect = self.image.get_rect()
|
||||
|
||||
def jump(self,platform_list):
|
||||
self.jump_delta = 0
|
||||
|
||||
def gravity(self):
|
||||
self.movey += 3.2 # how fast player falls
|
||||
|
||||
if self.rect.y > worldy and self.movey >= 0:
|
||||
self.movey = 0
|
||||
self.rect.y = worldy-ty
|
||||
|
||||
def control(self,x,y):
|
||||
'''
|
||||
control player movement
|
||||
'''
|
||||
self.movex += x
|
||||
self.movey += y
|
||||
|
||||
def update(self):
|
||||
'''
|
||||
Update sprite position
|
||||
'''
|
||||
|
||||
self.rect.x = self.rect.x + self.movex
|
||||
self.rect.y = self.rect.y + self.movey
|
||||
|
||||
# moving left
|
||||
if self.movex < 0:
|
||||
self.frame += 1
|
||||
if self.frame > ani*3:
|
||||
self.frame = 0
|
||||
self.image = self.images[self.frame//ani]
|
||||
|
||||
# moving right
|
||||
if self.movex > 0:
|
||||
self.frame += 1
|
||||
if self.frame > ani*3:
|
||||
self.frame = 0
|
||||
self.image = self.images[(self.frame//ani)+4]
|
||||
|
||||
# collisions
|
||||
enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
|
||||
for enemy in enemy_hit_list:
|
||||
self.health -= 1
|
||||
#print(self.health)
|
||||
|
||||
plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
|
||||
for p in plat_hit_list:
|
||||
self.collide_delta = 0 # stop jumping
|
||||
self.movey = 0
|
||||
if self.rect.y > p.rect.y:
|
||||
self.rect.y = p.rect.y+ty
|
||||
else:
|
||||
self.rect.y = p.rect.y-ty
|
||||
|
||||
ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
|
||||
for g in ground_hit_list:
|
||||
self.movey = 0
|
||||
self.rect.y = worldy-ty-ty
|
||||
self.collide_delta = 0 # stop jumping
|
||||
if self.rect.y > g.rect.y:
|
||||
self.health -=1
|
||||
print(self.health)
|
||||
|
||||
if self.collide_delta < 6 and self.jump_delta < 6:
|
||||
self.jump_delta = 6*2
|
||||
self.movey -= 33 # how high to jump
|
||||
self.collide_delta += 6
|
||||
self.jump_delta += 6
|
||||
|
||||
class Enemy(pygame.sprite.Sprite):
|
||||
'''
|
||||
Spawn an enemy
|
||||
'''
|
||||
def __init__(self,x,y,img):
|
||||
pygame.sprite.Sprite.__init__(self)
|
||||
self.image = pygame.image.load(os.path.join('images',img))
|
||||
self.movey = 0
|
||||
#self.image.convert_alpha()
|
||||
#self.image.set_colorkey(ALPHA)
|
||||
self.rect = self.image.get_rect()
|
||||
self.rect.x = x
|
||||
self.rect.y = y
|
||||
self.counter = 0
|
||||
|
||||
|
||||
def move(self):
|
||||
'''
|
||||
enemy movement
|
||||
'''
|
||||
distance = 80
|
||||
speed = 8
|
||||
|
||||
self.movey += 3.2
|
||||
|
||||
if self.counter >= 0 and self.counter <= distance:
|
||||
self.rect.x += speed
|
||||
elif self.counter >= distance and self.counter <= distance*2:
|
||||
self.rect.x -= speed
|
||||
else:
|
||||
self.counter = 0
|
||||
|
||||
self.counter += 1
|
||||
|
||||
if not self.rect.y >= worldy-ty-ty:
|
||||
self.rect.y += self.movey
|
||||
|
||||
plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
|
||||
for p in plat_hit_list:
|
||||
self.movey = 0
|
||||
if self.rect.y > p.rect.y:
|
||||
self.rect.y = p.rect.y+ty
|
||||
else:
|
||||
self.rect.y = p.rect.y-ty
|
||||
|
||||
ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
|
||||
for g in ground_hit_list:
|
||||
self.rect.y = worldy-ty-ty
|
||||
|
||||
|
||||
class Level():
|
||||
def bad(lvl,eloc):
|
||||
if lvl == 1:
|
||||
enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
|
||||
enemy_list = pygame.sprite.Group() # create enemy group
|
||||
enemy_list.add(enemy) # add enemy to group
|
||||
|
||||
if lvl == 2:
|
||||
print("Level " + str(lvl) )
|
||||
|
||||
return enemy_list
|
||||
|
||||
def loot(lvl,lloc):
|
||||
print(lvl)
|
||||
|
||||
def ground(lvl,gloc,tx,ty):
|
||||
ground_list = pygame.sprite.Group()
|
||||
i=0
|
||||
if lvl == 1:
|
||||
while i < len(gloc):
|
||||
ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
|
||||
ground_list.add(ground)
|
||||
i=i+1
|
||||
|
||||
if lvl == 2:
|
||||
print("Level " + str(lvl) )
|
||||
|
||||
return ground_list
|
||||
|
||||
def platform(lvl,tx,ty):
|
||||
plat_list = pygame.sprite.Group()
|
||||
ploc = []
|
||||
i=0
|
||||
if lvl == 1:
|
||||
ploc.append((0,worldy-ty-128,3))
|
||||
ploc.append((300,worldy-ty-256,3))
|
||||
ploc.append((500,worldy-ty-128,4))
|
||||
|
||||
while i < len(ploc):
|
||||
j=0
|
||||
while j <= ploc[i][2]:
|
||||
plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
|
||||
plat_list.add(plat)
|
||||
j=j+1
|
||||
print('run' + str(i) + str(ploc[i]))
|
||||
i=i+1
|
||||
|
||||
if lvl == 2:
|
||||
print("Level " + str(lvl) )
|
||||
|
||||
return plat_list
|
||||
|
||||
'''
|
||||
Setup
|
||||
'''
|
||||
worldx = 960
|
||||
worldy = 720
|
||||
|
||||
fps = 40 # frame rate
|
||||
ani = 4 # animation cycles
|
||||
clock = pygame.time.Clock()
|
||||
pygame.init()
|
||||
main = True
|
||||
|
||||
BLUE = (25,25,200)
|
||||
BLACK = (23,23,23 )
|
||||
WHITE = (254,254,254)
|
||||
ALPHA = (0,255,0)
|
||||
|
||||
world = pygame.display.set_mode([worldx,worldy])
|
||||
backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
|
||||
backdropbox = world.get_rect()
|
||||
player = Player() # spawn player
|
||||
player.rect.x = 0
|
||||
player.rect.y = 0
|
||||
player_list = pygame.sprite.Group()
|
||||
player_list.add(player)
|
||||
steps = 10
|
||||
forwardx = 600
|
||||
backwardx = 230
|
||||
|
||||
eloc = []
|
||||
eloc = [200,20]
|
||||
gloc = []
|
||||
#gloc = [0,630,64,630,128,630,192,630,256,630,320,630,384,630]
|
||||
tx = 64 #tile size
|
||||
ty = 64 #tile size
|
||||
|
||||
i=0
|
||||
while i <= (worldx/tx)+tx:
|
||||
gloc.append(i*tx)
|
||||
i=i+1
|
||||
|
||||
enemy_list = Level.bad( 1, eloc )
|
||||
ground_list = Level.ground( 1,gloc,tx,ty )
|
||||
plat_list = Level.platform( 1,tx,ty )
|
||||
|
||||
'''
|
||||
Main loop
|
||||
'''
|
||||
while main == True:
|
||||
for event in pygame.event.get():
|
||||
if event.type == pygame.QUIT:
|
||||
pygame.quit(); sys.exit()
|
||||
main = False
|
||||
|
||||
if event.type == pygame.KEYDOWN:
|
||||
if event.key == pygame.K_LEFT or event.key == ord('a'):
|
||||
print("LEFT")
|
||||
player.control(-steps,0)
|
||||
if event.key == pygame.K_RIGHT or event.key == ord('d'):
|
||||
print("RIGHT")
|
||||
player.control(steps,0)
|
||||
if event.key == pygame.K_UP or event.key == ord('w'):
|
||||
print('jump')
|
||||
|
||||
if event.type == pygame.KEYUP:
|
||||
if event.key == pygame.K_LEFT or event.key == ord('a'):
|
||||
player.control(steps,0)
|
||||
if event.key == pygame.K_RIGHT or event.key == ord('d'):
|
||||
player.control(-steps,0)
|
||||
if event.key == pygame.K_UP or event.key == ord('w'):
|
||||
player.jump(plat_list)
|
||||
|
||||
if event.key == ord('q'):
|
||||
pygame.quit()
|
||||
sys.exit()
|
||||
main = False
|
||||
|
||||
# scroll the world forward
|
||||
if player.rect.x >= forwardx:
|
||||
scroll = player.rect.x - forwardx
|
||||
player.rect.x = forwardx
|
||||
for p in plat_list:
|
||||
p.rect.x -= scroll
|
||||
for e in enemy_list:
|
||||
e.rect.x -= scroll
|
||||
|
||||
# scroll the world backward
|
||||
if player.rect.x <= backwardx:
|
||||
scroll = backwardx - player.rect.x
|
||||
player.rect.x = backwardx
|
||||
for p in plat_list:
|
||||
p.rect.x += scroll
|
||||
for e in enemy_list:
|
||||
e.rect.x += scroll
|
||||
|
||||
world.blit(backdrop, backdropbox)
|
||||
player.gravity() # check gravity
|
||||
player.update()
|
||||
player_list.draw(world) #refresh player position
|
||||
enemy_list.draw(world) # refresh enemies
|
||||
ground_list.draw(world) # refresh enemies
|
||||
plat_list.draw(world) # refresh platforms
|
||||
for e in enemy_list:
|
||||
e.move()
|
||||
pygame.display.flip()
|
||||
clock.tick(fps)
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/12/python-platformer-game-run
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_gaming_games_roundup_news.png?itok=KM0ViL0f (Gaming artifacts with joystick, GameBoy, paddle)
|
||||
[2]: https://linux.cn/article-9071-1.html
|
||||
[3]: https://linux.cn/article-10850-1.html
|
||||
[4]: https://linux.cn/article-10858-1.html
|
||||
[5]: https://linux.cn/article-10874-1.html
|
||||
[6]: https://linux.cn/article-10883-1.html
|
||||
[7]: https://linux.cn/article-11780-1.html
|
||||
[8]: https://linux.cn/article-11790-1.html
|
||||
[9]: https://www.python.org/
|
||||
[10]: https://www.pygame.org/news
|
||||
[11]: https://opensource.com/sites/default/files/uploads/pygame-scroll.jpg (Scrolling the world in Pygame)
|
||||
[12]:https://linux.cn/article-10902-1.html
|
@ -0,0 +1,139 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11799-1.html)
|
||||
[#]: subject: (How to Add Border Around Text in GIMP)
|
||||
[#]: via: (https://itsfoss.com/gimp-text-outline/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
在 GIMP 中如何在文本周围添加边框
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/19/230506fzkyktqglfcyzkuh.jpg)
|
||||
|
||||
这篇简单的教程介绍了在 [GIMP][1] 中为文本添加轮廓的步骤。文本轮廓可以帮助你用另一种颜色来突出显示文本。
|
||||
|
||||
![Outlined Text created in GIMP][2]
|
||||
|
||||
让我们看看如何在你的文本周围添加一个边框。
|
||||
|
||||
### 在 GIMP 中添加文本轮廓
|
||||
|
||||
整个过程可以用这些简单的步骤描述:
|
||||
|
||||
* 创建文本,并复制它的轮廓路径
|
||||
* 添加一层新的透明层,并添加轮廓路径到透明层中
|
||||
* 更改轮廓的大小,给它添加一种不同的颜色
|
||||
|
||||
这就是全部步骤。不用担心,我将使用恰当的截图详细地展示每个步骤。按照这个教程,即使你在此之前从未使用过 GIMP,也应该能够为文本添加轮廓。

只需要确保你已经[在 Linux 上安装了 GIMP][3],当然也可以在其它任何操作系统上使用它。

这篇教程使用 GIMP 2.10 版本演示。
|
||||
|
||||
#### 步骤 1: 创建你的主要文本,并复制它的轮廓
|
||||
|
||||
打开 GIMP,通过 “文件 -> 新建” 菜单创建一个新文件,也可以使用 `Ctrl+N` 键盘快捷键。

![Create New File][4]

你可以在这里选择画布的大小,还可以选择白色背景或透明背景,该选项位于 “高级选项” 的颜色配置下。

我选择了默认的白色背景,以后还可以更改。
|
||||
|
||||
现在从左边栏的工具箱中选择文本工具。
|
||||
|
||||
![Adding text in GIMP][5]
|
||||
|
||||
输入你想要的文本。你可以按照自己的喜好更改文本的字体、大小和对齐方式。这篇文章中我保持了默认的左对齐。

我故意为文本选择了一种不易阅读的浅色。在这篇教程中,我将为这个浅色文本添加一个深色轮廓。

![Text added in GIMP][6]

写完文本后,右键单击文本框并选择 “文本的路径(Path from Text)”。
|
||||
|
||||
![Right click on the text box and select ‘Path from Text’][7]
|
||||
|
||||
#### 步骤 2: 添加一个带有文本轮廓的透明层
|
||||
|
||||
现在转到顶部的 “图层” 菜单,添加一个新图层。
|
||||
|
||||
![Use Shift+Ctrl+N to add a new layer][8]
|
||||
|
||||
确保新添加的图层是透明的。你可以给它起一个合适的名称,比如 “文本轮廓”。单击 “确定” 来添加这个透明图层。
|
||||
|
||||
![Add a transparent layer][9]
|
||||
|
||||
再次转到菜单,这次是 “选择” 菜单,并单击 “来自路径”。你会看到你的文本被高亮显示了。
|
||||
|
||||
![Go to Select and choose From Path][10]
|
||||
|
||||
到目前为止,你创建了一个透明图层,上面有和原文本相同(但透明)的文本选区。接下来需要做的是在这个图层上增大该选区的大小。
|
||||
|
||||
#### 步骤 3: 通过增加它的大小和更改它的颜色来添加文本轮廓
|
||||
|
||||
为此,再次转到 “选择” 菜单,这次选择 “增大”。这样就可以增大透明图层上文本选区的大小。
|
||||
|
||||
![Grow the selection on the additional layer][11]
|
||||
|
||||
将选区增大 5 或 10 像素,或者任何你喜欢的数值。
|
||||
|
||||
![Grow it by 5 or 10 pixel][12]
|
||||
|
||||
现在你需要做的是用你选择的颜色填充这个扩大后的选区。因为我的原文本是浅色的,所以我将为轮廓使用深色。

如果尚未选中,先选中你的主图像图层。图层显示在右侧栏中。然后转到工具箱,选择油漆桶工具,并为你的轮廓选取想要的颜色。

使用该工具将所选颜色填充到你的选区。记住,你填充的是文本外部的轮廓,而不是文本本身。

![Fill the outline of the text with a different color][13]

到这里工作就基本完成了。使用 `Ctrl+Shift+A` 取消当前的选区。
|
||||
|
||||
![Outline added to the text][14]
|
||||
|
||||
这样,你就在 GIMP 中成功地为文本添加了轮廓。现在它是白色背景,如果你想要透明背景,只需在右侧栏的图层菜单中删除背景图层。
|
||||
|
||||
![Remove the white background layer if you want a transparent background][15]
|
||||
|
||||
如果你对结果感到满意,将文件保存为 PNG 文件(以保留透明背景),或任何你喜欢的文件格式。
|
||||
|
||||
### 你使它工作了吗?
|
||||
|
||||
就是这样。这就是在 GIMP 中为文本添加轮廓所需的全部工作。
|
||||
|
||||
我希望你发现这个 GIMP 教程有帮助。你可能想查看另一个 [关于在 GIMP 中添加一个水印的简单教程][16]。
|
||||
|
||||
如果你有问题或建议,请在下面自由留言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/gimp-text-outline/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.gimp.org/
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outlined_text_GIMP.png?ssl=1
|
||||
[3]: https://itsfoss.com/gimp-2-10-release/
|
||||
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/12/create_outline_text_gimp_1.jpeg?ssl=1
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_2.jpg?ssl=1
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp-3.jpg?ssl=1
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_4.jpg?ssl=1
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_5.jpg?ssl=1
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_6.jpg?ssl=1
|
||||
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_7.jpg?ssl=1
|
||||
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_8.jpg?ssl=1
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_9.jpg?ssl=1
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_10.jpg?ssl=1
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_11.jpg?ssl=1
|
||||
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_12.jpg?ssl=1
|
||||
[16]: https://itsfoss.com/add-watermark-gimp-linux/
|
@ -0,0 +1,69 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Morisun029)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11803-1.html)
|
||||
[#]: subject: (4 ways to volunteer this holiday season)
|
||||
[#]: via: (https://opensource.com/article/19/12/ways-volunteer)
|
||||
[#]: author: (John Jones https://opensource.com/users/johnjones4)
|
||||
|
||||
假期志愿服务的 4 种方式
|
||||
======
|
||||
|
||||
> 想要洒播些节日的快乐吗?为开源组织做贡献,帮助有需要的社区。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/20/223730f7983z8atxp1tf4l.jpg)
|
||||
|
||||
当领导者们配置人员和资源以做出积极改变时,就会产生社会影响。但是,许多社会努力都缺乏能够为这些改变者提供服务的技术资源。然而,有些组织通过将想要做出改变的开发人员与迫切需要更好技术的社区和非营利组织联系起来,来促进技术进步。这些组织通常为特定的受众提供服务,并招募特定种类的技术人员,它们有一个共同点:开源。
|
||||
|
||||
作为开发人员,我们出于各种原因试图加入开源社区。有些是为了专业发展,有些是为了能够与广阔的网络上令人印象深刻的技术人员合作,还有其他人则是因为他们清楚自己的贡献对于项目的成功的必要性。为什么不将你作为开发人员的才华投入到需要它的地方,而同时又为开源组织做贡献呢?以下组织是实现此目标的一些主要事例。
|
||||
|
||||
### Code for America
|
||||
|
||||
“Code for America” 是数字时代政府如何依靠人民、为人民服务的一个例子。通过其 Brigade Network,该组织在美国各个城市中组织了一个由志愿程序员、数据科学家、相关公民和设计师组成的全国联盟。这些本地分支机构定期举行向社区开放的聚会,这样既可以向小组推出新项目,又可以协调正在进行的工作。为了使志愿者与项目相匹配,该网站经常列出项目所需的特定技能,例如数据分析、内容创建和 JavaScript。同时,Brigade 网站也会关注当地问题,分享自然灾害等共同经验,这些都可以促进成员之间的合作。例如,新奥尔良、休斯敦和坦帕湾团队合作开发了一个飓风响应网站,当灾难发生时,该网站可以快速响应不同城市的灾难情况。
|
||||
|
||||
想要加入该组织,请访问 [该网站][2] 获取 70 多个 Brigade 的清单,以及个人加入组织的指南。
|
||||
|
||||
### Code for Change
|
||||
|
||||
“Code for Change” 显示了即使在高中时期,也可以为社会做贡献。印第安纳波利斯的一群高中开发爱好者成立了自己的俱乐部,他们通过创建针对社区问题的开源软件解决方案来回馈当地组织。“Code for Change” 鼓励当地组织提出项目构想,学生团体加入并开发完全自由和开源的解决方案。该小组已经开发了诸如“蓝宝石”之类的项目,该项目优化了当地难民组织的志愿者管理系统,并建立了民权委员会的投诉表格,方便公民就他们所关心的问题在网上发表意见。
|
||||
|
||||
有关如何在你自己的社区中创建 “Code for Change”,[访问他们的网站][3]。
|
||||
|
||||
### Python for Good/Ruby for Good
|
||||
|
||||
“Python for Good” 和 “Ruby for Good” 是分别在俄勒冈州波特兰市和弗吉尼亚州费尔法克斯市定期举办的活动,该活动将人们聚集在一起,为各自的社区开发和制定解决方案。
|
||||
|
||||
在周末,人们聚在一起聆听当地非营利组织的建议,并通过构建开源解决方案来解决他们的问题。 2017 年,“Ruby For Good” 参与者创建了 “Justice for Juniors”,该计划指导当前和以前被监禁的年轻人,并将他们重新融入社区。参与者还创建了 “Diaperbase”,这是一种库存管理系统,为美国各地的<ruby>尿布库<rt>diaper bank</rt></ruby>所使用。这些活动的主要目标之一是将看似不同的行业和思维方式的组织和个人聚集在一起,以谋求共同利益。公司可以赞助活动,非营利组织可以提交项目构想,各种技能的人都可以注册参加活动并做出贡献。通过两岸(美国大西洋和太平洋东西海岸)的努力,“Ruby for Good” 和 “Python for Good” 一直恪守“使世界变得更好”的座右铭。
|
||||
|
||||
“[Ruby for Good][4]” 在夏天举行,举办地点在弗吉尼亚州费尔法克斯的乔治•梅森大学。
|
||||
|
||||
### Social Coder
|
||||
|
||||
英国的 Ed Guiness 创建了 “Social Coder”,将志愿者和慈善机构召集在一起,为六大洲的非营利组织创建和使用开源项目。“Social Coder” 积极招募来自世界各地的熟练 IT 志愿者,并将其与通过 Social Coder 注册的慈善机构和非营利组织进行匹配。项目范围从简单的网站更新到整个移动应用程序的开发。
|
||||
|
||||
例如,PHASE Worldwide 是一个在尼泊尔支持工作的小型非政府组织,因为 “Social Coder”,它获得了利用开源技术的关键支持和专业知识。
|
||||
|
||||
有许多慈善机构已经与英国的 “Social Coder”进行了合作,也欢迎其它国家的组织加入。通过他们的网站,个人可以注册为社会软件项目工作,找到寻求帮助的组织和慈善机构。
|
||||
|
||||
对 “Social Coder” 志愿服务感兴趣的个人可以[在此][5]注册。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/12/ways-volunteer
|
||||
|
||||
作者:[John Jones][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Morisun029](https://github.com/Morisun029)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/johnjones4
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1 (Gift box opens with colors coming out)
|
||||
[2]: https://brigade.codeforamerica.org/
|
||||
[3]: http://codeforchange.herokuapp.com/
|
||||
[4]: https://rubyforgood.org/
|
||||
[5]: https://socialcoder.org/Home/Programmer
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (chen-ni)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11791-1.html)
|
||||
[#]: subject: (Explained! Why Your Distribution Still Using an ‘Outdated’ Linux Kernel?)
|
||||
[#]: via: (https://itsfoss.com/why-distros-use-old-kernel/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
@ -10,7 +10,9 @@
|
||||
为什么你的发行版仍然在使用“过时的”Linux 内核?
|
||||
======
|
||||
|
||||
[检查一下你的系统所使用的 Linux 内核版本][1],你十有八九会发现,按照 Linux 内核官网提供的信息,该内核版本已经达到使用寿命终期了。
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/16/225806jbqyacu3loolobae.png)
|
||||
|
||||
[检查一下你的系统所使用的 Linux 内核版本][1],你十有八九会发现,按照 Linux 内核官网提供的信息,该内核版本已经达到使用寿命终期(EOL)了。
|
||||
|
||||
一个软件一旦达到了使用寿命终期,那么就意味着它再也不会得到 bug 修复和维护了。
|
||||
|
||||
@ -18,11 +20,11 @@
|
||||
|
||||
下面将逐一解答这些问题。
|
||||
|
||||
总结
|
||||
|
||||
上游内核维护与你的发行版的内核维护是两个不同的概念。
|
||||
|
||||
例如,根据 Linux 内核官网,Linux 内核 4.15 版本可能已经达到使用寿命终期了,但是在 2023 年 4 月之前,Ubuntu 18.04 长期维护版本将会继续使用这个版本,并通过向后移植安全补丁和修复 bug 来提供维护。
|
||||
> **总结**
|
||||
>
|
||||
> 上游内核维护与你的发行版的内核维护是两个不同的概念。
|
||||
>
|
||||
> 例如,根据 Linux 内核官网,Linux 内核 4.15 版本可能已经达到使用寿命终期了,但是在 2023 年 4 月之前,Ubuntu 18.04 长期维护版本将会继续使用这个版本,并通过向后移植安全补丁和修复 bug 来提供维护。
|
||||
|
||||
### 检查 Linux 内核版本,以及是否达到使用寿命终期
|
||||
|
||||
@ -35,13 +37,11 @@ uname -r
|
||||
我使用的是 Ubuntu 18.04,输出的 Linux 内核版本如下:
|
||||
|
||||
```
|
||||
[email protected]:~$ uname -r
|
||||
abhishek@itsfoss:~$ uname -r
|
||||
5.0.0-37-generic
|
||||
```
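如果你更习惯用 Python,也可以通过标准库的 `platform` 模块得到同样的信息(一个小示例):

```python
import platform

# platform.release() 返回的字符串与在终端执行 `uname -r` 的输出一致
print(platform.release())
```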
|
||||
|
||||
接下来,可以到 Linux 内核官网上看看哪些 Linux 内核版本仍然在维护状态。在网站主页上就可以看到相关信息。
|
||||
|
||||
[Linux 内核官网][2]
|
||||
接下来,可以到 [Linux 内核官网][2]上看看哪些 Linux 内核版本仍然在维护状态。在网站主页上就可以看到相关信息。
|
||||
|
||||
你看到的内核版本状态应该类似于下图:
|
||||
|
||||
@ -53,7 +53,7 @@ uname -r
|
||||
|
||||
不幸的是,Linux 内核的生命周期没有任何规律可循。不是说常规的内核稳定发布版可以得到 X 月的维护、长期维护版本(LTS)可以得到 Y 年的维护。没有这回事。
|
||||
|
||||
根据实际需求,可能会存在内核的多个 LTS 版本,其使用寿命终期各不相同。在[这个页面][5]上可以查到这些 LTS 版本的相关信息,包括推定的使用寿命终期。
|
||||
根据实际需求,可能会存在内核的多个 LTS 版本,其使用寿命终期各不相同。在[这个页面][5]上可以查到这些 LTS 版本的相关信息,包括计划的使用寿命终期。
|
||||
|
||||
那么问题来了:既然 Linux 内核官网上明确表示 5.0 版本的内核已经达到了使用寿命终期,Ubuntu 为什么还在提供这个内核版本呢?
|
||||
|
||||
@ -63,11 +63,11 @@ uname -r
|
||||
|
||||
你是否想过,为什么 Ubuntu/Debian/Fedora 等发行版被称为 Linux “发行版”?这是因为,它们“发行” Linux 内核。
|
||||
|
||||
这些发行版会对 Linux 内核进行不同的修改,并添加各种 GUI 元素(包括桌面环境,显示服务器等)以及软件,然后再呈现给用户。
|
||||
这些发行版会对 Linux 内核进行不同的修改,并添加各种 GUI 元素(包括桌面环境、显示服务器等)以及软件,然后再呈现给用户。
|
||||
|
||||
按照通常的工作流,Linux 发行版会选择一个内核,提供给其用户,然后在接下来的几个月、几年中,甚至是达到内核的使用寿命终期之后,仍然会继续使用该内核。
|
||||
|
||||
这样能够保障安全吗?其实是可以的,因为 _**发行版会通过向后移植全部的重要修补来维护内核**_。
|
||||
这样能够保障安全吗?其实是可以的,因为 **发行版会通过向后移植全部的重要修补来维护内核**。
|
||||
|
||||
换句话说,你的 Linux 发行版会确保 Linux 内核没有漏洞和 bug,并且已经通过向后移植获得了重要的新特性。在“过时的旧版本 Linux 内核”上,其实有着数以千计的改动。
|
||||
|
||||
@ -83,13 +83,13 @@ uname -r
|
||||
|
||||
新的 Linux 内核稳定版本每隔 2 到 3 个月发布一次,有不少用户跃跃欲试。
|
||||
|
||||
实话说,除非有十分充分的理由,否则不应该使用最新版本的稳定内核。你使用的发行版并不会提供这个选项,你也不能指望通过在键盘上敲出“_sudo apt give-me-the-latest-stable-kernel_”解决问题。
|
||||
实话说,除非有十分充分的理由,否则不应该使用最新版本的稳定内核。你使用的发行版并不会提供这个选项,你也不能指望通过在键盘上敲出 `sudo apt give-me-the-latest-stable-kernel` 解决问题。
|
||||
|
||||
此外,手动[安装主流 Linux 内核版本][8]本身就是一个挑战。即使安装成功,之后每次发布 bug 修复的时候,负责更新内核的就会是你了。此外,当新内核达到使用寿命终期之后,你就有责任将它升级到更新的内核版本了。和常规的[Ubuntu 更新][9]不同,内核升级无法通过 apt upgrade 完成。
|
||||
此外,手动[安装主流 Linux 内核版本][8]本身就是一个挑战。即使安装成功,之后每次发布 bug 修复的时候,负责更新内核的就会是你了。此外,当新内核达到使用寿命终期之后,你就有责任将它升级到更新的内核版本了。和常规的 [Ubuntu 更新][9]不同,内核升级无法通过 `apt upgrade` 完成。
|
||||
|
||||
同样需要记住的是,切换到主流内核之后,可能就无法使用你的发行版提供的一些驱动程序和补丁了。
|
||||
|
||||
正如 [Greg Kroah-Hartman][10]所言,“_**你能使用的最好的内核,就是别人在维护的内核。**_”除了你的 Linux 发行版之外,又有谁更胜任这份工作呢!
|
||||
正如 [Greg Kroah-Hartman][10]所言,“**你能使用的最好的内核,就是别人在维护的内核。**”除了你的 Linux 发行版之外,又有谁更胜任这份工作呢!
|
||||
|
||||
希望你对这个主题已经有了更好的理解。下回发现你的系统正在使用的内核版本已经达到使用寿命终期的时候,希望你不会感到惊慌失措。
|
||||
|
||||
@ -102,7 +102,7 @@ via: https://itsfoss.com/why-distros-use-old-kernel/
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[chen-ni](https://github.com/chen-ni)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (algzjh)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11816-1.html)
|
||||
[#]: subject: (The best resources for agile software development)
|
||||
[#]: via: (https://opensource.com/article/19/12/agile-resources)
|
||||
[#]: author: (Leigh Griffin https://opensource.com/users/lgriffin)
|
||||
|
||||
敏捷软件开发的最佳资源
|
||||
======
|
||||
|
||||
> 请阅读我们的热门文章,这些文章着重讨论了敏捷的过去、现在和未来。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/25/121308jrs4speu2y09u09e.jpg)
|
||||
|
||||
对于 Opensource.com 上的敏捷主题来说,2019 年是非常棒的一年。随着 2020 年的到来,我们回顾了我们读者所读的与敏捷相关的热门文章。
|
||||
|
||||
### 小规模 Scrum 指南
|
||||
|
||||
Opensource.com 关于[小规模 Scrum][2] 的指南(我曾参与合著)由六部分组成,为小型团队提供了关于如何将敏捷引入到他们的工作中的建议。在官方的 [Scrum 指南][3]的概述中,传统的 Scrum 框架推荐至少三个人来实现,以充分发挥其潜力。但是,它并没有为一两个人的团队如何成功遵循 Scrum 提供指导。我们的六部分系列旨在规范化小规模的 Scrum,并检验我们在现实世界中使用它的经验。该系列受到了读者的热烈欢迎,以至于这六篇文章占据了前 10 名文章的 60%。因此,如果你还没有阅读的话,一定要从我们的[小规模 Scrum 介绍页面][2]下载。
|
||||
|
||||
### 全面的敏捷项目管理指南
|
||||
|
||||
遵循传统项目管理方法的团队最初对敏捷持怀疑态度,现在已经热衷于敏捷的工作方式。目前,敏捷已被接受,并且一种更加灵活的混合风格已经找到了归宿。Matt Shealy 撰写的[有关敏捷项目管理的综合指南][4]涵盖了敏捷项目管理的 12 条指导原则,对于希望为其项目带来敏捷性的传统项目经理而言,它是完美的选择。
|
||||
|
||||
### 成为出色的敏捷开发人员的 4 个步骤
|
||||
|
||||
DevOps 文化已经出现在许多现代软件团队中,这些团队采用了敏捷软件开发原则,利用了最先进的工具和自动化技术。但是,这种机械的敏捷方法并不能保证开发人员在日常工作中遵循敏捷实践。Daniel Oh 在[成为出色的敏捷开发人员的 4 个步骤][5]中给出了一些很棒的技巧,通过关注设计思维,使用可预测的方法,以质量为中心并不断学习和探索来提高你的敏捷性。用你的敏捷工具补充这些方法将形成非常灵活和强大的敏捷开发人员。
|
||||
|
||||
### Scrum 和 kanban:哪种敏捷框架更好?
|
||||
|
||||
对于以敏捷方式运行的团队来说,Scrum 和 kanban 是两种最流行的方法。在 “[Scrum 与 kanban:哪种敏捷框架更好?][6]” 中,Taz Brown 探索了两者的历史和目的。在阅读本文时,我想起一句名言:“如果你的工具箱里只有锤子,那么所有问题看起来都像钉子。”知道何时使用 kanban 以及何时使用 Scrum 非常重要,本文有助于说明两者都有一席之地,这取决于你的团队、挑战和目标。
|
||||
|
||||
### 开发人员对敏捷发表意见的 4 种方式
|
||||
|
||||
当采用敏捷的话题出现时,开发人员常常会担心自己会被强加上一种工作风格。在“[开发人员对敏捷发表意见的 4 种方式][7]”中,[Clément Verna][8] 着眼于开发人员通过帮助确定敏捷在其团队中的表现形式来颠覆这种说法的方法。检查敏捷的起源和基础是一个很好的起点,但是真正的价值在于拥有可帮助指导你的过程的指标。知道你将面临什么样的挑战会给你的前进提供坚实的基础。根据经验进行决策不仅可以增强团队的能力,还可以使他们对整个过程有一种主人翁意识。Verna 的文章还探讨了将人置于过程之上并作为一个团队来实现目标的重要性。
|
||||
|
||||
### 敏捷的现在和未来
|
||||
|
||||
今年,Opensource.com 的作者围绕敏捷的过去、现在以及未来可能会是什么样子进行了大量的讨论。感谢他们所有人,请一定于 2020 年在这里分享[你自己的敏捷故事][9]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/12/agile-resources
|
||||
|
||||
作者:[Leigh Griffin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[algzjh](https://github.com/algzjh)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/lgriffin
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G "Women programming"
|
||||
[2]: https://opensource.com/downloads/small-scale-scrum
|
||||
[3]: https://scrumguides.org/scrum-guide.html
|
||||
[4]: https://opensource.com/article/19/8/guide-agile-project-management
|
||||
[5]: https://opensource.com/article/19/2/steps-agile-developer
|
||||
[6]: https://opensource.com/article/19/8/scrum-vs-kanban
|
||||
[7]: https://opensource.com/article/19/10/ways-developers-what-agile
|
||||
[8]: https://twitter.com/clemsverna
|
||||
[9]: https://opensource.com/how-submit-article
|
@ -0,0 +1,525 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (heguangzhi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11828-1.html)
|
||||
[#]: subject: (Put some loot in your Python platformer game)
|
||||
[#]: via: (https://opensource.com/article/20/1/loot-python-platformer-game)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
在你的 Python 平台类游戏中放一些奖励
|
||||
======
|
||||
|
||||
> 这部分是关于如何在使用 Python 的 Pygame 模块开发的视频游戏中,为你的玩家提供可收集的宝物和经验值。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/29/131158jkwnhgd1nnawzn86.jpg)
|
||||
|
||||
这是正在进行的关于使用 [Python 3][2] 的 [Pygame][3] 模块创建视频游戏的系列文章的第十部分。以前的文章有:
|
||||
|
||||
* [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][4]
|
||||
* [使用 Python 和 Pygame 模块构建一个游戏框架][5]
|
||||
* [如何在你的 Python 游戏中添加一个玩家][6]
|
||||
* [用 Pygame 使你的游戏角色移动起来][7]
|
||||
* [如何向你的 Python 游戏中添加一个敌人][8]
|
||||
* [在 Pygame 游戏中放置平台][13]
|
||||
* [在你的 Python 游戏中模拟引力][9]
|
||||
* [为你的 Python 平台类游戏添加跳跃功能][10]
|
||||
* [使你的 Python 游戏玩家能够向前和向后跑][11]
|
||||
|
||||
如果你已经阅读了本系列的前几篇文章,那么你已经了解了编写游戏的所有基础知识。现在你可以在这些基础上,创造一个全功能的游戏。在你初学的时候,跟随本系列的代码示例是有帮助的,但这样的“用例”也会约束你。现在是时候运用你学到的知识,以新的方式应用它们了。

说起来容易做起来难,所以这篇文章展示了一个例子,教你如何把已经了解的内容用于新的目的。具体来说,就是如何运用以前课程中学到的知识来实现一个奖励系统。
|
||||
|
||||
在大多数电子游戏中,你有机会在游戏世界中获得“奖励”或收集到宝物和其他物品。奖励通常会增加你的分数或者你的生命值,或者为你的下一次任务提供信息。
|
||||
|
||||
游戏中包含的奖励类似于编程平台。像平台一样,奖励没有用户控制,随着游戏世界的滚动进行,并且必须检查与玩家的碰撞。
|
||||
|
||||
### 创建奖励函数
|
||||
|
||||
奖励和平台非常相似,你甚至不需要一个奖励的类。你可以重用 `Platform` 类,并将结果称为“奖励”。
|
||||
|
||||
由于奖励类型和位置可能因关卡不同而不同,如果你还没有这样做,请在你的 `Level` 类中创建一个名为 `loot` 的新函数。因为奖励物品不是平台,你还必须创建一个新的 `loot_list` 组,然后把奖励物品添加进去。与平台、地面和敌人一样,该组用于检查与玩家的碰撞:
|
||||
|
||||
```
|
||||
def loot(lvl,lloc):
|
||||
if lvl == 1:
|
||||
loot_list = pygame.sprite.Group()
|
||||
loot = Platform(300,ty*7,tx,ty, 'loot_1.png')
|
||||
loot_list.add(loot)
|
||||
|
||||
if lvl == 2:
|
||||
print(lvl)
|
||||
|
||||
return loot_list
|
||||
```
|
||||
|
||||
你可以随意添加任意数量的奖励对象,记住把每一个都加到你的奖励列表中。`Platform` 类的参数是奖励图标的 X 位置、Y 位置、宽度和高度(通常让你的奖励精灵保持和其他所有方块一样的大小最为简单),以及你想要用作奖励的图片。奖励的放置可以和平台贴图一样复杂,所以请参照你创建关卡时的关卡设计文档。
|
||||
|
||||
在脚本的设置部分调用新的奖励函数。在下面的代码中,前三行是上下文,所以只需添加第四行:
|
||||
|
||||
```
|
||||
enemy_list = Level.bad( 1, eloc )
|
||||
ground_list = Level.ground( 1,gloc,tx,ty )
|
||||
plat_list = Level.platform( 1,tx,ty )
|
||||
loot_list = Level.loot(1,tx,ty)
|
||||
```
|
||||
|
||||
正如你现在所知道的,除非你把它包含在你的主循环中,否则奖励不会被显示到屏幕上。将下面代码示例的最后一行添加到循环中:
|
||||
|
||||
```
|
||||
enemy_list.draw(world)
|
||||
ground_list.draw(world)
|
||||
plat_list.draw(world)
|
||||
loot_list.draw(world)
|
||||
```
|
||||
|
||||
启动你的游戏看看会发生什么。
|
||||
|
||||
![Loot in Python platformer][12]
|
||||
|
||||
你的奖励将会显示出来,但是当你的玩家碰到它们时,它们不会做任何事情,当你的玩家经过它们时,它们也不会滚动。接下来解决这些问题。
|
||||
|
||||
### 滚动奖励
|
||||
|
||||
像平台一样,当玩家在游戏世界中移动时,奖励必须滚动。逻辑与平台滚动相同。要向前滚动奖励物品,添加最后两行:
|
||||
|
||||
```
|
||||
for e in enemy_list:
|
||||
e.rect.x -= scroll
|
||||
for l in loot_list:
|
||||
l.rect.x -= scroll
|
||||
```
|
||||
|
||||
要向后滚动,请添加最后两行:
|
||||
|
||||
```
|
||||
for e in enemy_list:
|
||||
e.rect.x += scroll
|
||||
for l in loot_list:
|
||||
l.rect.x += scroll
|
||||
```
|
||||
|
||||
再次启动你的游戏,看看你的奖励物品现在表现得像在游戏世界里一样了,而不是仅仅画在上面。
|
||||
|
||||
### 检测碰撞
|
||||
|
||||
就像平台和敌人一样,你可以检查奖励物品和玩家之间的碰撞。逻辑与其他碰撞相同,不同之处在于,碰撞不会(必然)影响重力或生命值,而是会让奖励物品消失并增加玩家的分数。
|
||||
|
||||
当你的玩家触摸到一个奖励对象时,你可以从 `loot_list` 中移除该对象。这意味着当你的主循环在 `loot_list` 中重绘所有奖励物品时,它不会重绘那个特定的对象,所以看起来玩家已经获得了奖励物品。
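这一“遍历、检测、移除”的模式可以用一个不依赖 Pygame 的纯 Python 小示例来说明(`overlaps` 和 `collect` 都是假设的演示函数,矩形用 `(x, y, 宽, 高)` 元组表示):

```python
def overlaps(a, b):
    """轴对齐矩形(AABB)重叠检测,思路与 Pygame 的碰撞判断类似。"""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def collect(player, loot_items):
    """移除与玩家重叠的奖励并返回得分增量。"""
    score = 0
    for item in list(loot_items):   # 遍历副本,以便安全地从原列表移除
        if overlaps(player, item):
            loot_items.remove(item)
            score += 1
    return score

loot = [(0, 0, 10, 10), (50, 50, 10, 10)]
print(collect((5, 5, 10, 10), loot))  # 1:只碰到了第一个奖励
print(loot)                           # [(50, 50, 10, 10)]
```

在 Pygame 中,`spritecollide` 替你完成了检测部分,你只需要处理移除和加分。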
|
||||
|
||||
在 `Player` 类的 `update` 函数中的平台碰撞检测之上添加以下代码(最后一行仅用于上下文):
|
||||
|
||||
```
        loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
        for loot in loot_hit_list:
            loot_list.remove(loot)
            self.score += 1
            print(self.score)

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
```

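如果暂时撇开 pygame,`spritecollide` 在这里所做的事情可以用纯 Python 勾勒出来(一个假设性的示意:矩形用 `(x, y, w, h)` 元组表示,函数名均为说明而设,并非 pygame 的 API):

```python
def rects_overlap(a, b):
    # 经典的轴对齐矩形相交判断
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def collect_loot(player, loot_rects, score):
    # 与玩家相交的奖励被移出列表、分数加一,模拟“拾取”的效果
    remaining = []
    for r in loot_rects:
        if rects_overlap(player, r):
            score += 1
        else:
            remaining.append(r)
    return remaining, score
```

正式代码中,“移出列表”对应 `loot_list.remove(loot)`,于是下一帧重绘时该奖励就不会再出现。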
当碰撞发生时,你不仅要把奖励从它的组中移除,还要给你的玩家加分。你还没有创建分数变量,所以请把它添加到你的玩家属性中,也就是 `Player` 类的 `__init__` 函数里。在下面的代码中,前两行是上下文,所以只需添加分数变量:

```
        self.frame = 0
        self.health = 10
        self.score = 0
```

当在主循环中调用 `update` 函数时,需要包括 `loot_list`:

```
    player.gravity()
    player.update()
```

如你所见,你已经掌握了所有的基本知识。你现在要做的就是用新的方式使用你所知道的。

在下一篇文章中还有一些提示,但是与此同时,先用你学到的知识来制作一些简单的单关卡游戏。限制你试图创造的东西的范围是很重要的,这样你就不会把自己搞得不堪重负。这也使得最终的成品看起来和感觉上更容易完成。

以下是迄今为止你为这个 Python 平台游戏编写的所有代码:

```
#!/usr/bin/env python3
# draw a world
# add a player and player control
# add player movement
# add enemy and basic collision
# add platform
# add gravity
# add jumping
# add scrolling

# GNU All-Permissive License
# Copying and distribution of this file, with or without modification,
# are permitted in any medium without royalty provided the copyright
# notice and this notice are preserved.  This file is offered as-is,
# without any warranty.

import pygame
import sys
import os

'''
Objects
'''

class Platform(pygame.sprite.Sprite):
    # x location, y location, img width, img height, img file
    def __init__(self,xloc,yloc,imgw,imgh,img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images',img)).convert()
        self.image.convert_alpha()
        self.rect = self.image.get_rect()
        self.rect.y = yloc
        self.rect.x = xloc

class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.movex = 0
        self.movey = 0
        self.frame = 0
        self.health = 10
        self.collide_delta = 0
        self.jump_delta = 6
        self.score = 0
        self.images = []
        for i in range(1,9):
            img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
            img.convert_alpha()
            img.set_colorkey(ALPHA)
            self.images.append(img)
            self.image = self.images[0]
            self.rect = self.image.get_rect()

    def jump(self,platform_list):
        self.jump_delta = 0

    def gravity(self):
        self.movey += 3.2 # how fast player falls

        if self.rect.y > worldy and self.movey >= 0:
            self.movey = 0
            self.rect.y = worldy-ty

    def control(self,x,y):
        '''
        control player movement
        '''
        self.movex += x
        self.movey += y

    def update(self):
        '''
        Update sprite position
        '''

        self.rect.x = self.rect.x + self.movex
        self.rect.y = self.rect.y + self.movey

        # moving left
        if self.movex < 0:
            self.frame += 1
            if self.frame > ani*3:
                self.frame = 0
            self.image = self.images[self.frame//ani]

        # moving right
        if self.movex > 0:
            self.frame += 1
            if self.frame > ani*3:
                self.frame = 0
            self.image = self.images[(self.frame//ani)+4]

        # collisions
        enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
        for enemy in enemy_hit_list:
            self.health -= 1
            #print(self.health)

        loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
        for loot in loot_hit_list:
            loot_list.remove(loot)
            self.score += 1
            print(self.score)

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.collide_delta = 0 # stop jumping
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y+ty
            else:
                self.rect.y = p.rect.y-ty

        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.movey = 0
            self.rect.y = worldy-ty-ty
            self.collide_delta = 0 # stop jumping
            if self.rect.y > g.rect.y:
                self.health -=1
                print(self.health)

        if self.collide_delta < 6 and self.jump_delta < 6:
            self.jump_delta = 6*2
            self.movey -= 33 # how high to jump
            self.collide_delta += 6
            self.jump_delta += 6

class Enemy(pygame.sprite.Sprite):
    '''
    Spawn an enemy
    '''
    def __init__(self,x,y,img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images',img))
        self.movey = 0
        #self.image.convert_alpha()
        #self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0

    def move(self):
        '''
        enemy movement
        '''
        distance = 80
        speed = 8

        self.movey += 3.2

        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance*2:
            self.rect.x -= speed
        else:
            self.counter = 0

        self.counter += 1

        if not self.rect.y >= worldy-ty-ty:
            self.rect.y += self.movey

        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.movey = 0
            if self.rect.y > p.rect.y:
                self.rect.y = p.rect.y+ty
            else:
                self.rect.y = p.rect.y-ty

        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.rect.y = worldy-ty-ty

class Level():
    def bad(lvl,eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
            enemy_list = pygame.sprite.Group() # create enemy group
            enemy_list.add(enemy) # add enemy to group

        if lvl == 2:
            print("Level " + str(lvl) )

        return enemy_list

    def loot(lvl,tx,ty):
        if lvl == 1:
            loot_list = pygame.sprite.Group()
            loot = Platform(200,ty*7,tx,ty, 'loot_1.png')
            loot_list.add(loot)

        if lvl == 2:
            print(lvl)

        return loot_list

    def ground(lvl,gloc,tx,ty):
        ground_list = pygame.sprite.Group()
        i=0
        if lvl == 1:
            while i < len(gloc):
                ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
                ground_list.add(ground)
                i=i+1

        if lvl == 2:
            print("Level " + str(lvl) )

        return ground_list

    def platform(lvl,tx,ty):
        plat_list = pygame.sprite.Group()
        ploc = []
        i=0
        if lvl == 1:
            ploc.append((20,worldy-ty-128,3))
            ploc.append((300,worldy-ty-256,3))
            ploc.append((500,worldy-ty-128,4))

            while i < len(ploc):
                j=0
                while j <= ploc[i][2]:
                    plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
                    plat_list.add(plat)
                    j=j+1
                print('run' + str(i) + str(ploc[i]))
                i=i+1

        if lvl == 2:
            print("Level " + str(lvl) )

        return plat_list

'''
Setup
'''
worldx = 960
worldy = 720

fps = 40 # frame rate
ani = 4 # animation cycles
clock = pygame.time.Clock()
pygame.init()
main = True

BLUE = (25,25,200)
BLACK = (23,23,23 )
WHITE = (254,254,254)
ALPHA = (0,255,0)

world = pygame.display.set_mode([worldx,worldy])
backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
backdropbox = world.get_rect()
player = Player() # spawn player
player.rect.x = 0
player.rect.y = 0
player_list = pygame.sprite.Group()
player_list.add(player)
steps = 10
forwardx = 600
backwardx = 230

eloc = []
eloc = [200,20]
gloc = []
#gloc = [0,630,64,630,128,630,192,630,256,630,320,630,384,630]
tx = 64 #tile size
ty = 64 #tile size

i=0
while i <= (worldx/tx)+tx:
    gloc.append(i*tx)
    i=i+1

enemy_list = Level.bad( 1, eloc )
ground_list = Level.ground( 1,gloc,tx,ty )
plat_list = Level.platform( 1,tx,ty )
loot_list = Level.loot(1,tx,ty)

'''
Main loop
'''
while main == True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit(); sys.exit()
            main = False

        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                print("LEFT")
                player.control(-steps,0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                print("RIGHT")
                player.control(steps,0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                print('jump')

        if event.type == pygame.KEYUP:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                player.control(steps,0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                player.control(-steps,0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                player.jump(plat_list)

            if event.key == ord('q'):
                pygame.quit()
                sys.exit()
                main = False

    # scroll the world forward
    if player.rect.x >= forwardx:
        scroll = player.rect.x - forwardx
        player.rect.x = forwardx
        for p in plat_list:
            p.rect.x -= scroll
        for e in enemy_list:
            e.rect.x -= scroll
        for l in loot_list:
            l.rect.x -= scroll

    # scroll the world backward
    if player.rect.x <= backwardx:
        scroll = backwardx - player.rect.x
        player.rect.x = backwardx
        for p in plat_list:
            p.rect.x += scroll
        for e in enemy_list:
            e.rect.x += scroll
        for l in loot_list:
            l.rect.x += scroll

    world.blit(backdrop, backdropbox)
    player.gravity() # check gravity
    player.update()
    player_list.draw(world) #refresh player position
    enemy_list.draw(world) # refresh enemies
    ground_list.draw(world) # refresh ground
    plat_list.draw(world) # refresh platforms
    loot_list.draw(world) # refresh loot

    for e in enemy_list:
        e.move()
    pygame.display.flip()
    clock.tick(fps)
```

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/loot-python-platformer-game

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp (Hearts, stars, and dollar signs)
[2]: https://www.python.org/
[3]: https://www.pygame.org/news
[4]: https://linux.cn/article-9071-1.html
[5]: https://linux.cn/article-10850-1.html
[6]: https://linux.cn/article-10858-1.html
[7]: https://linux.cn/article-10874-1.html
[8]: https://linux.cn/article-10883-1.html
[9]: https://linux.cn/article-11780-1.html
[10]: https://linux.cn/article-11790-1.html
[11]: https://linux.cn/article-11819-1.html
[12]: https://opensource.com/sites/default/files/uploads/pygame-loot.jpg (Loot in Python platformer)
[13]: https://linux.cn/article-10902-1.html
@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11832-1.html)
[#]: subject: (Introducing the guide to inter-process communication in Linux)
[#]: via: (https://opensource.com/article/20/1/inter-process-communication-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

免费电子书《Linux 进程间通信指南》介绍
======

> 这本免费的电子书使经验丰富的程序员更深入了解 Linux 中进程间通信(IPC)的核心概念和机制。

![](https://img.linux.net.cn/data/attachment/album/202001/30/115631jthl0h61zhhmwpv1.jpeg)

让一个软件进程与另一个软件进程对话,是一种微妙的平衡。但它对应用程序而言可能是至关重要的功能,因此这是任何从事复杂项目的程序员都必须解决的问题。无论你的软件是需要启动一项由其它软件处理的工作、监视外设或网络上正在执行的操作,还是检测来自其它来源的信号,只要它需要依赖其自身代码之外的东西来决定下一步做什么或什么时候做,你就需要考虑<ruby>进程间通信<rt>inter-process communication</rt></ruby>(IPC)。

这在 Unix 操作系统上早有渊源,这可能是因为人们早就预期软件会来自各种各样的来源。按照相同的传统,Linux 提供了一些同样的 IPC 接口和一些新接口。Linux 内核具有多种 IPC 方法,而 [util-linux 包][2]包含了 `ipcmk`、`ipcrm`、`ipcs` 和 `lsipc` 命令,用于监视和管理 IPC 消息。

### 显示进程间通信信息

在尝试 IPC 之前,你应该知道系统上已经有哪些 IPC 设施。`lsipc` 命令提供了该信息。

```
RESOURCE DESCRIPTION LIMIT USED USE%
MSGMNI Number of message queues 32000 0 0.00%
MSGMAX Max size of message (byt.. 8192 - -
MSGMNB Default max size of queue 16384 - -
SHMMNI Shared memory segments 4096 79 1.93%
SHMALL Shared memory pages 184[...] 25452 0.00%
SHMMAX Max size of shared memory 18446744073692774399
SHMMIN Min size of shared memory 1 - -
SEMMNI Number of semaphore ident 32000 0 0.00%
SEMMNS Total number of semaphore 1024000.. 0 0.00%
SEMMSL Max semaphores per semap 32000 - -
SEMOPM Max number of operations p 500 - -
SEMVMX Semaphore max value 32767 - -
```

你可能注意到,这个示例清单包含三种不同类型的 IPC 机制,每种机制在 Linux 内核中都是可用的:消息(MSG)、共享内存(SHM)和信号量(SEM)。你可以用 `ipcs` 命令查看每个子系统的当前活动:

```
$ ipcs

------ Message Queues Creators/Owners ---
msqid perms cuid cgid [...]

------ Shared Memory Segment Creators/Owners
shmid perms cuid cgid [...]
557056 700 seth users [...]
3571713 700 seth users [...]
2654210 600 seth users [...]
2457603 700 seth users [...]

------ Semaphore Arrays Creators/Owners ---
semid perms cuid cgid [...]
```

这表明当前没有消息队列或信号量数组,但是使用了一些共享内存段。

你可以在系统上执行一个简单的示例,以便看到其中一种机制的实际工作情况。它涉及到一些 C 代码,所以你必须在系统上有构建工具。要从源代码构建软件必须安装相应的软件包,这些软件包的名称取决于发行版,因此请参考文档以获取详细信息。例如,在基于 Debian 的发行版上,你可以在 wiki 的[构建教程][3]部分了解构建需求,而在基于 Fedora 的发行版上,你可以参考该文档的[从源代码安装软件][4]部分。

### 创建一个消息队列

你的系统已经有一个默认的消息队列,但是你可以使用 `ipcmk` 命令创建你自己的消息队列:

```
$ ipcmk --queue
Message queue id: 32764
```

编写一个简单的 IPC 消息发送器;为了简单,这里把队列 ID 硬编码进去:

```
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdio.h>
#include <string.h>

struct msgbuffer {
    char text[24];
} message;

int main() {
    int msqid = 32764;
    strcpy(message.text,"opensource.com");
    msgsnd(msqid, &message, sizeof(message), 0);
    printf("Message: %s\n",message.text);
    printf("Queue: %d\n",msqid);
    return 0;
}
```

编译该应用程序并运行:

```
$ gcc msgsend.c -o msg.bin
$ ./msg.bin
Message: opensource.com
Queue: 32764
```

你刚刚向你的消息队列发送了一条消息。你可以使用 `ipcs` 命令验证这一点,并可以使用 `-q`(`--queues`)选项将输出限制为消息队列:

```
$ ipcs -q

------ Message Queues --------
key msqid owner perms used-bytes messages
0x7b341ab9 0 seth 666 0 0
0x72bd8410 32764 seth 644 24 1
```

你也可以检索这些消息:

```
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdio.h>

struct msgbuffer {
    char text[24];
} message;

int main() {
    int msqid = 32764;
    msgrcv(msqid, &message, sizeof(message),0,0);
    printf("\nQueue: %d\n",msqid);
    printf("Got this message: %s\n", message.text);
    msgctl(msqid,IPC_RMID,NULL);
    return 0;
}
```

编译并运行:

```
$ gcc get.c -o get.bin
$ ./get.bin

Queue: 32764
Got this message: opensource.com
```

### 下载这本电子书

这只是 Marty Kalin 的《[Linux 进程间通信指南][5]》中课程的一个例子,这本最新的免费(且以 CC 协议授权的)电子书可以从 Opensource.com 下载。在短短的几节课中,你将从消息队列、共享内存和信号量、套接字、信号等方面了解 IPC 的 POSIX 方法。认真阅读 Marty 的书,你将成为一个博识的程序员。这不仅适用于经验丰富的编码人员:如果你编写的只是 shell 脚本,你也将收获有关管道(命名和未命名)和共享文件的大量实践知识,以及使用共享文件或外部消息队列时需要了解的重要概念。

如果你对制作动态的、具有系统感知能力的优秀软件感兴趣,那么你需要了解 IPC。让[这本书][5]做你的向导。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/inter-process-communication-linux

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_inter-process_communication_linux_520x292.png?itok=hPoen7oI (Inter-process Communication in Linux)
[2]: https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/
[3]: https://wiki.debian.org/BuildingTutorial
[4]: https://docs.pagure.org/docs-fedora/installing-software-from-source.html
[5]: https://opensource.com/downloads/guide-inter-process-communication-linux
@ -1,20 +1,22 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11788-1.html)
[#]: subject: (How to write a Python web API with Pyramid and Cornice)
[#]: via: (https://opensource.com/article/20/1/python-web-api-pyramid-cornice)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

如何使用 Pyramid 和 Cornice 编写 Python Web API
======
使用 Pyramid 和 Cornice 构建可扩展的 RESTful Web 服务。
![Searching for code][1]

[Python][2] 是一种高级的,面向对象的编程语言,它以其简单的语法而闻名。它一直是构建 RESTful API 的顶级编程语言之一。
> 使用 Pyramid 和 Cornice 构建和描述可扩展的 RESTful Web 服务。

[Pyramid][3] 是一个 Python Web 框架,旨在随着应用的扩展而扩展:这对于简单的应用来说很简单,对于大型、复杂的应用也可以做到。Pyramid 为 PyPI (Python 软件包索引)提供了强大的支持。[Cornice][4] 提供了使用 Pyramid 构建 RESTful Web 服务的助手。
![](https://img.linux.net.cn/data/attachment/album/202001/16/120352fcgeeccvfgt8sfvc.jpg)

[Python][2] 是一种高级的、面向对象的编程语言,它以其简单的语法而闻名。它一直是构建 RESTful API 的顶级编程语言之一。

[Pyramid][3] 是一个 Python Web 框架,旨在随着应用的扩展而扩展:这可以让简单的应用很简单,也可以增长为大型、复杂的应用。此外,Pyramid 为 PyPI (Python 软件包索引)提供了强大的支持。[Cornice][4] 为使用 Pyramid 构建和描述 RESTful Web 服务提供了助力。

本文将使用 Web 服务的例子来获取名人名言,来展示如何使用这些工具。

@ -22,7 +24,6 @@

首先为你的应用创建一个虚拟环境,并创建一个文件来保存代码:

```
$ mkdir tutorial
$ cd tutorial
@ -36,7 +37,6 @@ $ source env/bin/activate

使用以下命令导入这些模块:

```
from pyramid.config import Configurator
from cornice import Service
@ -44,8 +44,7 @@ from cornice import Service

### 定义服务

将引用服务定义为 **Service** 对象:

将引用服务定义为 `Service` 对象:

```
QUOTES = Service(name='quotes',
@ -55,8 +54,7 @@ QUOTES = Service(name='quotes',

### 编写引用逻辑

到目前为止,这仅支持 **GET** 获取名言。用 **QUOTES.get** 装饰函数。这是将逻辑绑定到 REST 服务的方法:

到目前为止,这仅支持获取名言。用 `QUOTES.get` 装饰函数。这是将逻辑绑定到 REST 服务的方法:

```
@QUOTES.get()
@ -72,14 +70,13 @@ def get_quote(request):
}
```

请注意,与其他框架不同,装饰器_不能_更改 **get_quote** 函数。如果导入此模块,你仍然可以定期调用该函数并检查结果。
请注意,与其他框架不同,装饰器*不会*更改 `get_quote` 函数。如果导入此模块,你仍然可以正常调用该函数并检查结果。

在为 Pyramid RESTful 服务编写单元测试时,这很有用。

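例如,可以不经过任何 HTTP 层,直接在单元测试里调用被装饰的函数(下面是一个假设性的简化版 `get_quote`,用字典充当请求对象,名言数据也是虚构的;它只演示“装饰器不改变函数本身”这一点,并非文章里的完整实现):

```python
import random

QUOTES_DB = {"Moshe": ["Python 很棒"]}

def get_quote(request):
    # 简化的处理函数:真实版本会从 Pyramid 的 request 对象中取参数
    name = request["name"]
    return {"name": name, "quote": random.choice(QUOTES_DB[name])}

# 单元测试可以直接构造一个“假请求”并断言结果
result = get_quote({"name": "Moshe"})
print(result)
```

正因为函数仍然只是普通函数,测试时不需要模拟全局变量或线程局部状态。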
### 定义应用对象

最后,使用 **scan** 查找所有修饰的函数并将其添加到配置中:

最后,使用 `scan` 查找所有修饰的函数并将其添加到配置中:

```
with Configurator() as config:
@ -94,14 +91,12 @@ with Configurator() as config:

我使用 Twisted 的 WSGI 服务器运行该应用,但是如果需要,你可以使用任何其他 [WSGI][5] 服务器,例如 Gunicorn 或 uWSGI。

```
`(env)$ python -m twisted web --wsgi=main.application`
(env)$ python -m twisted web --wsgi=main.application
```

默认情况下,Twisted 的 WSGI 服务器运行在端口 8080 上。你可以使用 [HTTPie][6] 测试该服务:

```
(env) $ pip install httpie
...
@ -130,7 +125,7 @@ X-Content-Type-Options: nosniff

### 为什么要使用 Pyramid?

Pyramid 不是最受欢迎的框架,但它已在 [PyPI][7] 等一些引人注目的项目中使用。我喜欢 Pyramid,因为它是认真对待单元测试的框架之一:因为装饰器不会修改函数并且没有线程局部变量,所以可以直接从单元测试中调用函数。例如,需要访问数据库的函数将从通过 **request.config** 传递的 **request.config** 对象中获取它。这允许单元测试人员将模拟(或真实)数据库对象放入请求中,而不用仔细设置全局变量,线程局部变量或其他特定于框架的东西。
Pyramid 并不是最受欢迎的框架,但它已在 [PyPI][7] 等一些引人注目的项目中使用。我喜欢 Pyramid,因为它是认真对待单元测试的框架之一:因为装饰器不会修改函数并且没有线程局部变量,所以可以直接从单元测试中调用函数。例如,需要访问数据库的函数将从通过 `request.config` 传递的 `request.config` 对象中获取它。这允许单元测试人员将模拟(或真实)数据库对象放入请求中,而不用仔细设置全局变量、线程局部变量或其他特定于框架的东西。

如果你正在寻找一个经过测试的库来构建你接下来的 API,请尝试使用 Pyramid。你不会失望的。

@ -140,8 +135,8 @@ via: https://opensource.com/article/20/1/python-web-api-pyramid-cornice

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11809-1.html)
[#]: subject: (How to setup multiple monitors in sway)
[#]: via: (https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/)
[#]: author: (arte219 https://fedoramagazine.org/author/arte219/)

如何在 Sway 中设置多个显示器
======

![][1]

Sway 是一种平铺式 Wayland 合成器,具有与 [i3 X11 窗口管理器][2]相同的功能、外观和工作流程。由于 Sway 使用 Wayland 而不是 X11,因此不能继续使用人们熟知的那些 X11 设置工具。这包括 `xrandr` 之类的工具,这些工具在 X11 窗口管理器或桌面中用于设置显示器。这就是为什么必须通过编辑 Sway 配置文件来设置显示器的原因,也正是本文的主题。

### 获取你的显示器 ID

首先,你必须获得 Sway 用来指代显示器的名称。你可以通过运行以下命令进行操作:

```
$ swaymsg -t get_outputs
```

你将获得所有显示器的相关信息,每个显示器都用空行分隔。

你必须查看每个部分的第一行,即 `Output` 之后的内容。例如,当你看到 `Output DVI-D-1 'Philips Consumer Electronics Company'` 之类的行时,该输出 ID 就是 `DVI-D-1`。记下这些 ID 及其所属的物理显示器。

### 编辑配置文件

如果你之前没有编辑过 Sway 配置文件,则必须通过运行以下命令将其复制到主目录中:

```
cp -r /etc/sway/config ~/.config/sway/config
```

现在,默认配置文件位于 `~/.config/sway` 中,名为 `config`。你可以使用任何文本编辑器进行编辑。

现在你需要做一点数学。想象有一个网格,其原点在左上角。X 和 Y 坐标的单位是像素。Y 轴是反转的:例如,如果你从原点开始,向右移动 100 像素,向下移动 80 像素,则坐标为 `(100, 80)`。

你必须计算各显示器最终在此网格上的位置。显示器的位置由其左上角的像素指定。例如,如果我们要使用名称为 `HDMI1`、分辨率为 1920×1080 的显示器,并在其右侧放置名称为 `eDP1`、分辨率为 1600×900 的笔记本电脑显示器,则必须在配置文件中键入:

```
output HDMI1 pos 0 0
output eDP1 pos 1920 0
```

你还可以使用 `res` 选项手动指定分辨率:

```
output HDMI1 pos 0 0 res 1920x1080
output eDP1 pos 1920 0 res 1600x900
```

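上面的坐标可以机械地推算出来:从左到右排列时,每台显示器的 X 坐标就是它左边所有显示器宽度之和。下面用一小段 Python 示意这个算法(输出格式沿用 Sway 配置语法,函数名是为说明而假设的):

```python
def layout_left_to_right(outputs):
    # outputs:按从左到右的物理顺序排列的 (名称, 宽, 高) 列表
    lines, x = [], 0
    for name, w, h in outputs:
        lines.append(f"output {name} pos {x} 0 res {w}x{h}")
        x += w  # 下一台显示器从当前显示器的右边缘开始
    return lines

for line in layout_left_to_right([("HDMI1", 1920, 1080), ("eDP1", 1600, 900)]):
    print(line)
```

显示器多于两台时,这个累加规则可以省去不少手工加法。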
### 将工作空间绑定到显示器上

在使用多个显示器时,Sway 的工作区管理可能会有些棘手。幸运的是,你可以将工作区绑定到特定的显示器上,这样你就可以轻松地切换到该显示器并更有效地使用它。这只需通过配置文件中的 `workspace` 命令即可完成。例如,如果要把工作区 1 和 2 绑定到显示器 `DVI-D-1`,把工作区 8 和 9 绑定到显示器 `HDMI-A-1`,则可以使用以下方法:

```
workspace 1 output DVI-D-1
workspace 2 output DVI-D-1
```

```
workspace 8 output HDMI-A-1
workspace 9 output HDMI-A-1
```

就是这样。这就是在 Sway 中设置多显示器的基础知识。可以在 <https://github.com/swaywm/sway/wiki#Wiki#Multihead> 中找到更详细的指南。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/

作者:[arte219][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/arte219/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/01/sway-multiple-monitors-816x345.png
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
@ -1,61 +1,60 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11787-1.html)
[#]: subject: (Huawei’s Linux Distribution openEuler is Available Now!)
[#]: via: (https://itsfoss.com/openeuler/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

华为的linux发行版 openEuler 可以使用了!
外媒:华为的 Linux 发行版 openEuler 可以使用了!
======

华为提供了一个基于 CentOS 的企业 Linux 发行版 EulerOS 。最近,华为发布了一个名为 [openEuler][1] 的 EulerOS 社区版。
> 华为提供了一个基于 CentOS 的企业级 Linux 发行版 EulerOS。最近,华为发布了一个名为 [openEuler][1] 的 EulerOS 社区版。

openEuler 的源代码也被发布了。你在微软旗下的 GitHub 上找不到它——源代码可以在 [Gitee][2] 找到,这是一个中文的 [GitHub 的替代品][3] 。
openEuler 的源代码也一同发布了。你在微软旗下的 GitHub 上找不到它——源代码可以在 [Gitee][2] 找到,这是一个中文的 [GitHub 的替代品][3]。

它有两个独立的存储库,一个用于存储[源代码][2],另一个作为[包源][4] 存储有助于构建操作系统的软件包。
它有两个独立的存储库,一个用于存储[源代码][2];另一个作为[软件包的源代码][4],存储有助于构建该操作系统的软件包。

![][5]
![][5]

openuler 基础架构团队分享了他们使源代码可用的经验:
openEuler 基础架构团队分享了他们使源代码可用的经验:

>我们现在很兴奋。很难想象我们会管理成千上万的仓库。为了确保它们能被成功地编译,我们要感谢所有参与贡献的人。
> 我们现在很兴奋。很难想象我们会管理成千上万的仓库。为了确保它们能被成功地编译,我们要感谢所有参与贡献的人。

### openEuler 是基于 CentOS 的 Linux 发行版

与 EulerOS 一样,openEuler OS 也是基于 [CentOS][6],但华为技术有限公司为企业应用进一步开发了该操作系统。

它是为 ARM64 架构的服务器量身定做的,同时华为声称已经做了一些改变来提高其性能。你可以在[华为发展博客][7]上了解更多。
它是为 ARM64 架构的服务器量身定做的,同时华为声称已经做了一些改变来提高其性能。你可以在[华为开发博客][7]上了解更多。

![][8]

目前,根据 openEuler 的官方声明,有 50 多名贡献者为 openEuler 贡献了近 600 个提交。

贡献者使源代码对社区可用成为可能。
贡献者们使源代码对社区可用成为可能。

值得注意的是,存储库还包括两个与之相关的新项目(或子项目),[iSulad][9] 和 **A-Tune**。
值得注意的是,存储库还包括两个与之相关的新项目(或子项目),[iSulad][9] 和 A-Tune。

A-Tune 是一个基于 AI 的操作系统调优软件, iSulad 是一个轻量级的容器运行时守护进程,如[Gitee][2]中提到的那样,它是为物联网和云基础设施设计的。
A-Tune 是一个基于 AI 的操作系统调优软件,iSulad 是一个轻量级的容器运行时守护进程,如在 [Gitee][2] 中提到的那样,它是为物联网和云基础设施设计的。

另外,官方的[公告][10]提到,这些系统是在华为云上通过脚本自动化构建的。这确实十分有趣。
另外,官方的[公告][10]提到,这些系统是在华为云上通过脚本自动构建的。这确实十分有趣。

### 下载 openEuler

![][11]

到目前为止,你找不到它的英文文档,所以你必须等待或选择通过[文档][12]帮助他们。
到目前为止,你找不到它的英文文档,所以你必须等待或选择通过(贡献)[文档][12]来帮助他们。

你可以直接从它的[官方网站][13]下载 ISO 来测试它:

[下载 openEuler ][13]
- [下载 openEuler ][13]

### 你认为华为的 openEuler 怎么样?

据 cnTechPost 报道,华为曾宣布 EulerOS 将以新名字 openEuler 成为开源软件。

目前还不清楚 openEuler 是否会取代 EulerOS ,或者两者会像 CentOS (社区版)和 Red Hat (商业版)一样同时存在。
目前还不清楚 openEuler 是否会取代 EulerOS ,或者两者会像 CentOS(社区版)和 Red Hat(商业版)一样同时存在。

我还没有测试过它,所以我不能说 openEuler 是否适合英文用户。

@ -68,7 +67,7 @@ via: https://itsfoss.com/openeuler/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[qianmingtian][c]
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,22 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11793-1.html)
[#]: subject: (Sync files across multiple devices with Syncthing)
[#]: via: (https://opensource.com/article/20/1/sync-files-syncthing)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

使用 Syncthing 在多个设备间同步文件
======
2020 年,在我们的 20 个使用开源提升生产力的系列文章中,首先了解如何使用 Syncthing 同步文件。
![Files in a folder][1]

> 2020 年,在我们的 20 个使用开源提升生产力的系列文章中,首先了解如何使用 Syncthing 同步文件。

![](https://img.linux.net.cn/data/attachment/album/202001/18/123416rebvs7sjwm6c889y.jpg)

去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。

### 使用 Syncthing 同步文件

置新机器很麻烦。我们都有在机器之间复制的“标准设置”。多年来,我使用了很多方法来使它们在计算机之间同步。在过去(这会告诉你我年纪有多大了),曾经是软盘、然后是 Zip 磁盘、U 盘、SCP、Rsync、Dropbox、ownCloud,你想到的都试过。但这些似乎对我都不够好。
设置新机器很麻烦。我们都有在机器之间复制的“标准设置”。多年来,我使用了很多方法来使它们在计算机之间同步。在过去(这会告诉你我年纪有多大了),曾经是软盘、然后是 Zip 磁盘、U 盘、SCP、Rsync、Dropbox、ownCloud,你想到的都试过。但这些似乎对我都不够好。

然后我偶然发现了 [Syncthing][2]。

@ -28,19 +30,19 @@ Syncthing 可在 Linux、MacOS、Windows 和多种 BSD 中使用。还有一个

![Installing Syncthing on Ubuntu][4]

首次启动 Syncthing 时,它将启动 Web 浏览器以配置守护程序。第一台计算机上没有太多要做,但是这是一个很好的机会来介绍一下用户界面 (UI)。最重要的是在右上方的 **Actions** 菜单下的 “System ID”。
首次启动 Syncthing 时,它将启动 Web 浏览器以配置守护程序。第一台计算机上没有太多要做,但是这是一个很好的机会来介绍一下用户界面 (UI)。最重要的是在右上方的 “Actions” 菜单下的 “System ID”。

![Machine ID][5]

设置第一台计算机后,请在第二台计算机上重复安装。在 UI 中,右下方将显示一个按钮,名为 **Add Remote Device**。单击按钮,你将会看到一个要求输入**设备 ID 和设备名**的框。从第一台计算机上复制并粘贴**设备 ID**,然后单击 **Save**。
设置第一台计算机后,请在第二台计算机上重复安装。在 UI 中,右下方将显示一个按钮,名为 “Add Remote Device”。单击该按钮,你将会看到一个要求输入 “Device ID and a Name” 的框。从第一台计算机上复制并粘贴 “Device ID”,然后单击 “Save”。

你应该会在第一台上看到一个请求添加第二台的弹出窗口。接受后,新机器将显示在第一台机器的右下角。与第二台计算机共享默认目录。单击 **Default Folder**,然后单击 **Edit** 按钮。弹出窗口的顶部有四个链接。单击 **Sharing**,然后选择第二台计算机。单击 **Save**,然后查看第二台计算机。你会看到一个接受共享目录的提示。接受后,它将开始在两台计算机之间同步文件。
你应该会在第一台上看到一个请求添加第二台的弹出窗口。接受后,新机器将显示在第一台机器的右下角。与第二台计算机共享默认目录。单击 “Default Folder”,然后单击 “Edit” 按钮。弹出窗口的顶部有四个链接。单击 “Sharing”,然后选择第二台计算机。单击 “Save”,然后查看第二台计算机。你会看到一个接受共享目录的提示。接受后,它将开始在两台计算机之间同步文件。

![Sharing a directory in Syncthing][6]

测试从一台计算机上复制文件到默认目录(**/your/home/Share**)。它应该很快会在另一台上出现。
测试从一台计算机上复制文件到默认目录(“/你的家目录/Share”)。它应该很快会在另一台上出现。

你可以根据需要添加任意数量的目录,这非常方便。如你在第一张图中所看到的,我有一个用于保存配置的 **myconfigs** 文件夹。当我买了一台新机器时,我只需安装 Syncthing,如果我在一台机器上调整了配置,我不必更新所有,它会自动更新。
你可以根据需要添加任意数量的目录,这非常方便。如你在第一张图中所看到的,我有一个用于保存配置的 `myconfigs` 文件夹。当我买了一台新机器时,我只需安装 Syncthing,如果我在一台机器上调整了配置,我不必更新所有,它会自动更新。

--------------------------------------------------------------------------------

@ -49,7 +51,7 @@ via: https://opensource.com/article/20/1/sync-files-syncthing

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -0,0 +1,59 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11796-1.html)
|
||||
[#]: subject: (Use Stow for configuration management of multiple machines)
|
||||
[#]: via: (https://opensource.com/article/20/1/configuration-management-stow)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
|
||||
|
||||
使用 Stow 管理多台机器配置
|
||||
======
|
||||
> 2020 年,在我们的 20 个使用开源提升生产力的系列文章中,让我们了解如何使用 Stow 跨机器管理配置。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/18/141330jdcjalqzjal84a03.jpg)
|
||||
|
||||
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
|
||||
|
||||
### 使用 Stow 管理符号链接
|
||||
|
||||
昨天,我解释了如何使用 [Syncthing][2] 在多台计算机上保持文件同步。但是,这只是我用来保持配置一致性的工具之一。还有另一个表面上看起来更简单的工具:[Stow][3]。
|
||||
|
||||
![Stow help screen][4]
|
||||
|
||||
Stow 管理符号链接。默认情况下,它会链接目录到上一级目录。还有设置源和目标目录的选项,但我通常不使用它们。
|
||||
|
||||
正如我在 Syncthing 的[文章][5] 中提到的,我使用 Syncthing 来保持 `myconfigs` 目录在我所有的计算机上一致。`myconfigs` 目录下面有多个子目录。每个子目录包含我经常使用的应用之一的配置文件。
|
||||
|
||||
![myconfigs directory][6]
|
||||
|
||||
在每台计算机上,我进入 `myconfigs` 目录,并运行 `stow -S <目录名称>` 以将目录中的文件符号链接到我的家目录。例如,在 `vim` 目录下,我有 `.vimrc` 和 `.vim` 目录。在每台机器上,我运行 `stow -S vim` 来创建符号链接 `~/.vimrc` 和 `~/.vim`。当我在一台计算机上更改 Vim 配置时,它会应用到我的所有机器上。
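上面 `stow -S vim` 的效果可以用一个小脚本来演示(假设已安装 GNU Stow;目录与文件名沿用上文,文件内容为空,仅作示意):

```shell
#!/bin/bash
# 搭一个最小的 myconfigs/vim 目录(实际使用中它已由 Syncthing 同步好)
mkdir -p "$HOME/myconfigs/vim/.vim"
touch "$HOME/myconfigs/vim/.vimrc"

# stow 默认把目录内容符号链接到上一级目录(这里即家目录)
cd "$HOME/myconfigs"
if command -v stow >/dev/null 2>&1; then
    stow -S vim || true
    ls -l "$HOME/.vimrc" || true   # 成功后它是指向 myconfigs/vim/.vimrc 的符号链接
fi
```

之后在任何一台机器上改动 `~/myconfigs/vim` 里的文件,所有机器上的链接都会指向同一份最新配置。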
|
||||
|
||||
然而,有时候,我需要一些特定于机器的配置,这就是为什么我有如 `msmtp-personal` 和 `msmtp-elastic`(我的雇主)这样的目录。由于我的 `msmtp` SMTP 客户端需要知道要中继电子邮件服务器,并且每个服务器都有不同的设置和凭据,我会使用 `-D` 标志来取消链接,接着链接另外一个。
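切换两套配置时,就是先用 `-D` 取消旧链接,再用 `-S` 建立新链接。下面是一个示意脚本(目录名取自上文;脚本先临时创建这两个目录,实际环境中它们早已存在):

```shell
#!/bin/bash
# 准备两个示例配置目录(实际使用中无需这一步)
mkdir -p "$HOME/myconfigs/msmtp-personal" "$HOME/myconfigs/msmtp-elastic"
touch "$HOME/myconfigs/msmtp-personal/.msmtprc"

cd "$HOME/myconfigs"
if command -v stow >/dev/null 2>&1; then
    stow -D msmtp-elastic || true    # 取消雇主配置的链接(若已链接)
    stow -S msmtp-personal || true   # 链接个人配置
fi
```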
|
||||
|
||||
![Unstow one, stow the other][7]
|
||||
|
||||
有时我要给配置添加文件。为此,有一个 `-R` 选项来“重新链接”。例如,我喜欢在图形化 Vim 中使用一种与控制台不同的特定字体。除了标准 `.vimrc` 文件,`.gvimrc` 文件能让我设置特定于图形化版本的选项。当我第一次设置它时,我移动 `~/.gvimrc` 到 `~/myconfigs/vim` 中,然后运行 `stow -R vim`,它取消链接并重新链接该目录中的所有内容。
|
||||
|
||||
Stow 让我使用一个简单的命令行在多种配置之间切换,并且,结合 Syncthing,我可以确保无论我身在何处或在哪里进行更改,我都有我喜欢的工具的设置。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/configuration-management-stow
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
|
||||
[2]: https://syncthing.net/
|
||||
[3]: https://www.gnu.org/software/stow/
|
||||
[4]: https://opensource.com/sites/default/files/uploads/productivity_2-1.png (Stow help screen)
|
||||
[5]: https://linux.cn/article-11793-1.html
|
||||
[6]: https://opensource.com/sites/default/files/uploads/productivity_2-2.png (myconfigs directory)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/productivity_2-3.png (Unstow one, stow the other)
|
@ -0,0 +1,78 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11804-1.html)
|
||||
[#]: subject: (Keep your email in sync with OfflineIMAP)
|
||||
[#]: via: (https://opensource.com/article/20/1/sync-email-offlineimap)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
|
||||
|
||||
使用 OfflineIMAP 同步邮件
|
||||
======
|
||||
|
||||
> 将邮件镜像保存到本地是整理消息的第一步。在我们的 20 个使用开源提升生产力的系列的第三篇文章中了解该如何做。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/20/235324nbgfyuwl98syowta.jpg)
|
||||
|
||||
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
|
||||
|
||||
### 使用 OfflineIMAP 在本地同步你的邮件
|
||||
|
||||
我与邮件之间爱恨交织。我喜欢它让我能与世界各地的人交流,但是,像你们中的许多人一样,我收到过海量邮件,许多来自邮件列表,也有很多垃圾邮件、广告等,它们日积月累,越堆越多。
|
||||
|
||||
![The OfflineIMAP "blinkenlights" UI][2]
|
||||
|
||||
我尝试过的大多数工具(除了大型邮件服务商外)都可以很好地处理大量邮件,它们都有一个共同点:它们都依赖于以 [Maildir][3] 格式存储的本地邮件副本。这其中最有用的是 [OfflineIMAP][4]。OfflineIMAP 是将 IMAP 邮箱镜像到本地 Maildir 文件夹树的 Python 脚本。我用它来创建邮件的本地副本并使其保持同步。大多数 Linux 发行版都包含它,并且可以通过 Python 的 pip 包管理器获得。
|
||||
|
||||
OfflineIMAP 自带的最小示例配置文件是一个很好的起点。首先将其复制为 `~/.offlineimaprc`。我的配置看起来像这样:
|
||||
|
||||
```
|
||||
[general]
|
||||
accounts = LocalSync
|
||||
ui=Quiet
|
||||
autorefresh=30
|
||||
|
||||
[Account LocalSync]
|
||||
localrepository = LocalMail
|
||||
remoterepository = MirrorIMAP
|
||||
|
||||
[Repository MirrorIMAP]
|
||||
type = IMAP
|
||||
remotehost = my.mail.server
|
||||
remoteuser = myusername
|
||||
remotepass = mypassword
|
||||
auth_mechanisms = LOGIN
|
||||
createfolder = true
|
||||
ssl = yes
|
||||
sslcacertfile = OS-DEFAULT
|
||||
|
||||
[Repository LocalMail]
|
||||
type = Maildir
|
||||
localfolders = ~/Maildir
|
||||
sep = .
|
||||
createfolder = true
|
||||
```
|
||||
|
||||
我的配置要做的是定义两个仓库:远程 IMAP 服务器和本地 Maildir 文件夹。还有一个**帐户**,告诉 OfflineIMAP 运行时要同步什么。你可以定义链接到不同仓库的多个帐户。除了本地复制外,这还允许你从一台 IMAP 服务器复制到另一台作为备份。
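例如,要把一台 IMAP 服务器备份到另一台,可以在同一个 `~/.offlineimaprc` 里再定义一个帐户,把两端仓库都设为 IMAP 类型。下面是一个占位示意(主机名、凭据均为假设;新帐户还需加入 `[general]` 小节的 `accounts` 列表):

```
[Account ServerBackup]
localrepository = BackupIMAP
remoterepository = MirrorIMAP

[Repository BackupIMAP]
type = IMAP
remotehost = backup.mail.server
remoteuser = myusername
remotepass = mypassword
ssl = yes
```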
|
||||
|
||||
如果你有很多邮件,那么首次运行 OfflineIMAP 将花费一些时间。但是完成后,下次会花*少得多*的时间。你也可以将 OfflineIMAP 作为 cron 任务(我的偏好)或作为守护程序在仓库之间不断进行同步。其文档涵盖了所有这些内容以及 Gmail 等高级配置选项。
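若选择 cron 的方式,一条 crontab 条目就够了。下面是一个示意(30 分钟的间隔仅为示例;`-a` 指定要同步的帐户,`-u quiet` 让输出保持安静):

```
# 运行 crontab -e,加入:
*/30 * * * * offlineimap -a LocalSync -u quiet
```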
|
||||
|
||||
现在,我的邮件已在本地复制,并有多种工具用来加快搜索、归档和管理邮件的速度。这些我明天再说。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/sync-email-offlineimap
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/newsletter_email_mail_web_browser.jpg?itok=Lo91H9UH (email or newsletters via inbox and browser)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/productivity_3-1.png (The OfflineIMAP "blinkenlights" UI)
|
||||
[3]: https://en.wikipedia.org/wiki/Maildir
|
||||
[4]: http://www.offlineimap.org/
|
@ -0,0 +1,296 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11800-1.html)
|
||||
[#]: subject: (setV: A Bash function to maintain Python virtual environments)
|
||||
[#]: via: (https://opensource.com/article/20/1/setv-bash-function)
|
||||
[#]: author: (Sachin Patil https://opensource.com/users/psachin)
|
||||
|
||||
setV:一个管理 Python 虚拟环境的 Bash 函数
|
||||
======
|
||||
|
||||
> 了解一下 setV,它是一个轻量级的 Python 虚拟环境管理器,是 virtualenvwrapper 的替代产品。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/19/234306tvvg5ffwakrzr5vv.jpg)
|
||||
|
||||
这一年多来,[setV][2] 一直悄悄藏在我的 [bash_scripts][3] 项目中,但现在是时候公开它了。setV 是一个 Bash 函数,我用它代替了 [virtualenvwrapper][4]。它提供的基本功能使你能够执行以下操作:
|
||||
|
||||
* 默认使用 Python 3
|
||||
* 创建一个新的虚拟环境
|
||||
* 使用带有 `-p`(或 `--python`)的自定义 Python 路径来创建新的虚拟环境
|
||||
* 删除现有的虚拟环境
|
||||
* 列出所有现有的虚拟环境
|
||||
* 使用制表符补全(以防你忘记虚拟环境名称)
|
||||
|
||||
### 安装
|
||||
|
||||
要安装 setV,请下载该脚本:
|
||||
|
||||
```
|
||||
curl https://gitlab.com/psachin/setV/raw/master/install.sh -o install.sh
|
||||
```
|
||||
|
||||
审核一下脚本,然后运行它:
|
||||
|
||||
```
|
||||
sh ./install.sh
|
||||
```
|
||||
|
||||
安装 setV 时,安装脚本会询问你要把配置写进 `~/.bashrc` 还是 `~/.bash_profile`,根据你的喜好选择一个即可。
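也可以按 setV 脚本开头注释所说的方式手动安装:自己把 `source` 行追加到 shell 启动脚本(下面的 `virtual.sh` 路径仅为示例,请按实际位置修改):

```shell
#!/bin/bash
# 手动安装:让每个新 shell 都加载 setV 函数
echo 'source ~/bin/setV/virtual.sh' >> "$HOME/.bashrc"
tail -n 1 "$HOME/.bashrc"
```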
|
||||
|
||||
### 用法
|
||||
|
||||
基本的命令格式是 `setv`。
|
||||
|
||||
#### 创建虚拟环境
|
||||
|
||||
```
|
||||
setv --new rango # setv -n rango
|
||||
|
||||
# 或使用定制的 Python 路径
|
||||
setv --new --python /opt/python/python3 rango # setv -n -p /opt/python/python3 rango
|
||||
```
|
||||
|
||||
#### 激活已有的虚拟环境
|
||||
|
||||
```
|
||||
setv VIRTUAL_ENVIRONMENT_NAME
|
||||
```
|
||||
|
||||
```
|
||||
# 示例
|
||||
setv rango
|
||||
```
|
||||
|
||||
#### 列出所有的虚拟环境
|
||||
|
||||
```
|
||||
setv --list
|
||||
# 或
|
||||
setv [TAB] [TAB]
|
||||
```
|
||||
|
||||
#### 删除虚拟环境
|
||||
|
||||
```
|
||||
setv --delete rango
|
||||
```
|
||||
|
||||
#### 切换到另外一个虚拟环境
|
||||
|
||||
```
|
||||
# 假设你现在在 'rango',切换到 'tango'
|
||||
setv tango
|
||||
```
|
||||
|
||||
#### 制表符补完
|
||||
|
||||
如果你不完全记得虚拟环境的名称,则 Bash 式的制表符补全也可以适用于虚拟环境名称。
|
||||
|
||||
### 参与其中
|
||||
|
||||
setV 以 GNU [GPLv3][5] 许可证开源,欢迎贡献。要了解更多信息,请参阅其 GitLab 存储库中 [README][6] 的贡献部分。
|
||||
|
||||
### setV 脚本
|
||||
|
||||
```
|
||||
#!/usr/bin/env bash
|
||||
# setV - A Lightweight Python virtual environment manager.
|
||||
# Author: Sachin (psachin) <iclcoolster@gmail.com>
|
||||
# Author's URL: https://psachin.gitlab.io/about
|
||||
#
|
||||
# License: GNU GPL v3, See LICENSE file
|
||||
#
|
||||
# Configure(Optional):
|
||||
# Set `SETV_VIRTUAL_DIR_PATH` value to your virtual environments
|
||||
# directory-path. By default it is set to '~/virtualenvs/'
|
||||
#
|
||||
# Usage:
|
||||
# Manual install: Added below line to your .bashrc or any local rc script():
|
||||
# ---
|
||||
# source /path/to/virtual.sh
|
||||
# ---
|
||||
#
|
||||
# Now you can 'activate' the virtual environment by typing
|
||||
# $ setv <YOUR VIRTUAL ENVIRONMENT NAME>
|
||||
#
|
||||
# For example:
|
||||
# $ setv rango
|
||||
#
|
||||
# or type:
|
||||
# setv [TAB] [TAB] (to list all virtual envs)
|
||||
#
|
||||
# To list all your virtual environments:
|
||||
# $ setv --list
|
||||
#
|
||||
# To create new virtual environment:
|
||||
# $ setv --new new_virtualenv_name
|
||||
#
|
||||
# To delete existing virtual environment:
|
||||
# $ setv --delete existing_virtualenv_name
|
||||
#
|
||||
# To deactivate, type:
|
||||
# $ deactivate
|
||||
|
||||
# Path to virtual environment directory
|
||||
SETV_VIRTUAL_DIR_PATH="$HOME/virtualenvs/"
|
||||
# Default python version to use. This decides whether to use `virtualenv` or `python3 -m venv`
|
||||
SETV_PYTHON_VERSION=3 # Defaults to Python3
|
||||
SETV_PY_PATH=$(which python${SETV_PYTHON_VERSION})
|
||||
|
||||
function _setvcomplete_()
|
||||
{
|
||||
# Bash-autocompletion.
|
||||
# This ensures Tab-auto-completions work for virtual environment names.
|
||||
local cmd="${1##*/}" # to handle command(s).
|
||||
# Not necessary as such. 'setv' is the only command
|
||||
|
||||
local word=${COMP_WORDS[COMP_CWORD]} # Words thats being completed
|
||||
local xpat='${word}' # Filter pattern. Include
|
||||
# only words in variable '$names'
|
||||
local names=$(ls -l "${SETV_VIRTUAL_DIR_PATH}" | egrep '^d' | awk -F " " '{print $NF}') # Virtual environment names
|
||||
|
||||
COMPREPLY=($(compgen -W "$names" -X "$xpat" -- "$word")) # compgen generates the results
|
||||
}
|
||||
|
||||
function _setv_help_() {
|
||||
# Echo help/usage message
|
||||
echo "Usage: setv [OPTIONS] [NAME]"
|
||||
echo Positional argument:
|
||||
echo -e "NAME Activate virtual env."
|
||||
echo Optional arguments:
|
||||
echo -e "-l, --list List all Virtual Envs."
|
||||
echo -e "-n, --new NAME Create a new Python Virtual Env."
|
||||
echo -e "-d, --delete NAME Delete existing Python Virtual Env."
|
||||
echo -e "-p, --python PATH Python binary path."
|
||||
}
|
||||
|
||||
function _setv_custom_python_path()
|
||||
{
|
||||
if [ -f "${1}" ];
|
||||
then
|
||||
if [ "`expr $1 : '.*python\([2,3]\)'`" = "3" ];
|
||||
then
|
||||
SETV_PYTHON_VERSION=3
|
||||
else
|
||||
SETV_PYTHON_VERSION=2
|
||||
fi
|
||||
SETV_PY_PATH=${1}
|
||||
_setv_create $2
|
||||
else
|
||||
echo "Error: Path ${1} does not exist!"
|
||||
fi
|
||||
}
|
||||
|
||||
function _setv_create()
|
||||
{
|
||||
# Creates new virtual environment if ran with -n|--new flag
|
||||
if [ -z ${1} ];
|
||||
then
|
||||
echo "You need to pass virtual environment name"
|
||||
_setv_help_
|
||||
else
|
||||
echo "Creating new virtual environment with the name: $1"
|
||||
|
||||
if [ ${SETV_PYTHON_VERSION} -eq 3 ];
|
||||
then
|
||||
${SETV_PY_PATH} -m venv ${SETV_VIRTUAL_DIR_PATH}${1}
|
||||
else
|
||||
virtualenv -p ${SETV_PY_PATH} ${SETV_VIRTUAL_DIR_PATH}${1}
|
||||
fi
|
||||
|
||||
echo "You can now activate the Python virtual environment by typing: setv ${1}"
|
||||
fi
|
||||
}
|
||||
|
||||
function _setv_delete()
|
||||
{
|
||||
# Deletes virtual environment if ran with -d|--delete flag
|
||||
# TODO: Refactor
|
||||
if [ -z ${1} ];
|
||||
then
|
||||
echo "You need to pass virtual environment name"
|
||||
_setv_help_
|
||||
else
|
||||
if [ -d ${SETV_VIRTUAL_DIR_PATH}${1} ];
|
||||
then
|
||||
read -p "Really delete this virtual environment(Y/N)? " yes_no
|
||||
case $yes_no in
|
||||
Y|y) rm -rvf ${SETV_VIRTUAL_DIR_PATH}${1};;
|
||||
N|n) echo "Leaving the virtual environment as it is.";;
|
||||
*) echo "You need to enter either Y/y or N/n"
|
||||
esac
|
||||
else
|
||||
echo "Error: No virtual environment found by the name: ${1}"
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
function _setv_list()
|
||||
{
|
||||
# Lists all virtual environments if ran with -l|--list flag
|
||||
echo -e "List of virtual environments you have under ${SETV_VIRTUAL_DIR_PATH}:\n"
|
||||
for virt in $(ls -l "${SETV_VIRTUAL_DIR_PATH}" | egrep '^d' | awk -F " " '{print $NF}')
|
||||
do
|
||||
echo ${virt}
|
||||
done
|
||||
}
|
||||
|
||||
function setv() {
|
||||
# Main function
|
||||
if [ $# -eq 0 ];
|
||||
then
|
||||
_setv_help_
|
||||
elif [ $# -le 3 ];
|
||||
then
|
||||
case "${1}" in
|
||||
-n|--new) _setv_create ${2};;
|
||||
-d|--delete) _setv_delete ${2};;
|
||||
-l|--list) _setv_list;;
|
||||
*) if [ -d ${SETV_VIRTUAL_DIR_PATH}${1} ];
|
||||
then
|
||||
# Activate the virtual environment
|
||||
source ${SETV_VIRTUAL_DIR_PATH}${1}/bin/activate
|
||||
else
|
||||
# Else throw an error message
|
||||
echo "Sorry, you don't have any virtual environment with the name: ${1}"
|
||||
_setv_help_
|
||||
fi
|
||||
;;
|
||||
esac
|
||||
elif [ $# -le 5 ];
|
||||
then
|
||||
case "${2}" in
|
||||
-p|--python) _setv_custom_python_path ${3} ${4};;
|
||||
*) _setv_help_;;
|
||||
esac
|
||||
fi
|
||||
}
|
||||
|
||||
# Calls bash-complete. The compgen command accepts most of the same
|
||||
# options that complete does but it generates results rather than just
|
||||
# storing the rules for future use.
|
||||
complete -F _setvcomplete_ setv
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/setv-bash-function
|
||||
|
||||
作者:[Sachin Patil][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/psachin
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
|
||||
[2]: https://gitlab.com/psachin/setV
|
||||
[3]: https://github.com/psachin/bash_scripts
|
||||
[4]: https://virtualenvwrapper.readthedocs.org/
|
||||
[5]: https://gitlab.com/psachin/setV/blob/master/LICENSE
|
||||
[6]: https://gitlab.com/psachin/setV/blob/master/ReadMe.org
|
||||
[7]: mailto:iclcoolster@gmail.com
|
109
published/202001/20200114 Organize your email with Notmuch.md
Normal file
@ -0,0 +1,109 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11807-1.html)
|
||||
[#]: subject: (Organize your email with Notmuch)
|
||||
[#]: via: (https://opensource.com/article/20/1/organize-email-notmuch)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
|
||||
|
||||
使用 Notmuch 组织你的邮件
|
||||
======
|
||||
|
||||
> Notmuch 可以索引、标记和排序电子邮件。在我们的 20 个使用开源提升生产力的系列的第四篇文章中了解该如何使用它。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/22/112231xg5dgv6f6g5a1iv1.jpg)
|
||||
|
||||
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
|
||||
|
||||
### 用 Notmuch 为你的邮件建立索引
|
||||
|
||||
昨天,我谈到了如何使用 OfflineIMAP [将我的邮件同步][2]到本地计算机。今天,我将讨论如何在阅读之前预处理所有邮件。
|
||||
|
||||
![Notmuch][3]
|
||||
|
||||
[Maildir][4] 可能是最有用的邮件存储格式之一。有很多工具可以帮助你管理邮件。我经常使用一个名为 [Notmuch][5] 的小程序,它能索引、标记和搜索邮件。Notmuch 配合其他几个程序一起使用可以使处理大量邮件更加容易。
|
||||
|
||||
大多数 Linux 发行版都打包了 Notmuch,macOS 上也可以安装它。Windows 用户可以通过适用于 Linux 的 Windows 子系统([WSL][6])使用它,但可能需要进行一些额外的调整。
|
||||
|
||||
![Notmuch's first run][7]
|
||||
|
||||
Notmuch 首次运行时,它将询问你一些问题,并在家目录中创建 `.notmuch-config` 文件。接下来,运行 `notmuch new` 来索引并标记所有邮件。你可以使用 `notmuch search tag:new` 进行验证,它会找到所有带有 `new` 标签的消息。这可能会有很多邮件,因为 Notmuch 使用 `new` 标签来指示新邮件,因此你需要对其进行清理。
|
||||
|
||||
运行 `notmuch search tag:unread` 来查找未读消息,这会减少很多邮件。要从你已阅读的消息中删除 `new` 标签,请运行 `notmuch tag -new not tag:unread`,它将搜索所有没有 `unread` 标签的消息,并从其中删除 `new` 标签。现在,当你运行 `notmuch search tag:new` 时,它将仅显示未读邮件。
|
||||
|
||||
但是,批量标记消息可能更有用,因为在每次运行时手动更新标记可能非常繁琐。`--batch` 命令行选项告诉 Notmuch 读取多行命令并执行它们。还有一个 `--input=filename` 选项,该选项从文件中读取命令并应用它们。我有一个名为 `tagmail.notmuch` 的文件,用于给“新”邮件添加标签;它看起来像这样:
|
||||
|
||||
```
|
||||
# Manage sent, spam, and trash folders
|
||||
-unread -new folder:Trash
|
||||
-unread -new folder:Spam
|
||||
-unread -new folder:Sent
|
||||
|
||||
# Note mail sent specifically to me (excluding bug mail)
|
||||
+to-me to:kevin at sonney.com and tag:new and not tag:to-me
|
||||
|
||||
# And note all mail sent from me
|
||||
+sent from:kevin at sonney.com and tag:new and not tag:sent
|
||||
|
||||
# Remove the new tag from messages
|
||||
-new tag:new
|
||||
```
|
||||
|
||||
我可以在运行 `notmuch new` 后运行 `notmuch tag --input=tagmail.notmuch` 批量处理我的邮件,之后我也可以搜索这些标签。
|
||||
|
||||
Notmuch 还支持 `pre-new` 和 `post-new` 钩子。这些脚本存放在 `Maildir/.notmuch/hooks` 中,它们定义了在使用 `notmuch new` 索引新邮件之前(`pre-new`)和之后(`post-new`)要做的操作。在昨天的文章中,我谈到了使用 [OfflineIMAP][8] 同步来自 IMAP 服务器的邮件。从 `pre-new` 钩子运行它非常容易:
|
||||
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
# Remove the new tag from messages that are still tagged as new
|
||||
notmuch tag -new tag:new
|
||||
|
||||
# Sync mail messages
|
||||
offlineimap -a LocalSync -u quiet
|
||||
```
|
||||
|
||||
你还可以使用可以操作 Notmuch 数据库的 Python 应用 [afew][9],来为你标记*邮件列表*和*垃圾邮件*。你可以用类似的方法在 `post-new` 钩子中使用 `afew`:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
# tag with my custom tags
|
||||
notmuch tag --input=~/tagmail.notmuch
|
||||
|
||||
# Run afew to tag new mail
|
||||
afew -t -n
|
||||
```
|
||||
|
||||
我建议你在使用 `afew` 标记邮件时,不要使用 `[ListMailsFilter]`,因为某些邮件处理程序会在邮件中添加含混不清、甚至纯属垃圾的列表标头(我说的就是你,Google)。
|
||||
|
||||
![alot email client][10]
|
||||
|
||||
此时,任何支持 Notmuch 或 Maildir 的邮件阅读器都可以读取我的邮件。有时,我会使用 [alot][11](一个 Notmuch 特定的客户端)在控制台中阅读邮件,但是它不像其他邮件阅读器那么美观。
|
||||
|
||||
在接下来的几天,我将向你展示其他一些邮件客户端,它们可能会与你在使用的工具集成在一起。同时,请查看可与 Maildir 邮箱一起使用的其他工具。你可能会发现我没发现的好东西。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/organize-email-notmuch
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_organize_letter.png?itok=GTtiiabr (Filing cabinet for organization)
|
||||
[2]: https://linux.cn/article-11804-1.html
|
||||
[3]: https://opensource.com/sites/default/files/uploads/productivity_4-1.png (Notmuch)
|
||||
[4]: https://en.wikipedia.org/wiki/Maildir
|
||||
[5]: https://notmuchmail.org/
|
||||
[6]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
|
||||
[7]: https://opensource.com/sites/default/files/uploads/productivity_4-2.png (Notmuch's first run)
|
||||
[8]: http://www.offlineimap.org/
|
||||
[9]: https://afew.readthedocs.io/en/latest/index.html
|
||||
[10]: https://opensource.com/sites/default/files/uploads/productivity_4-3.png (alot email client)
|
||||
[11]: https://github.com/pazz/alot
|
510
published/202001/20200115 6 handy Bash scripts for Git.md
Normal file
@ -0,0 +1,510 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11797-1.html)
|
||||
[#]: subject: (6 handy Bash scripts for Git)
|
||||
[#]: via: (https://opensource.com/article/20/1/bash-scripts-git)
|
||||
[#]: author: (Bob Peterson https://opensource.com/users/bobpeterson)
|
||||
|
||||
6 个方便的 Git 脚本
|
||||
======
|
||||
|
||||
> 当使用 Git 存储库时,这六个 Bash 脚本将使你的生活更轻松。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/18/231713jegbk8fyek798gxb.jpg)
|
||||
|
||||
我编写了许多 Bash 脚本,这些脚本使我在使用 Git 存储库时工作更加轻松。我的许多同事说没有必要:我所做的一切都可以用 Git 命令完成。虽然这可能是正确的,但我发现脚本远比尝试找出适当的 Git 命令来执行我想要的操作更加方便。
|
||||
|
||||
### 1、gitlog
|
||||
|
||||
`gitlog` 打印针对 master 分支的当前补丁的简短列表。它从最旧到最新打印它们,并显示作者和描述,其中 `H` 代表 `HEAD`,`^` 代表 `HEAD^`,`2` 代表 `HEAD~2`,依此类推。例如:
|
||||
|
||||
```
|
||||
$ gitlog
|
||||
-----------------------[ recovery25 ]-----------------------
|
||||
(snip)
|
||||
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
|
||||
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
|
||||
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
|
||||
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
|
||||
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
|
||||
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
|
||||
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
|
||||
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
|
||||
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
|
||||
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
|
||||
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
|
||||
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
|
||||
```
|
||||
|
||||
如果我想查看其他分支上有哪些补丁,可以指定一个替代分支:
|
||||
|
||||
```
|
||||
$ gitlog recovery24
|
||||
```
|
||||
|
||||
### 2、gitlog.id
|
||||
|
||||
`gitlog.id` 只是打印出补丁的 SHA1 ID:
|
||||
|
||||
```
|
||||
$ gitlog.id
|
||||
-----------------------[ recovery25 ]-----------------------
|
||||
56908eeb6940 2ca4a6b628a1 fc64ad5d99fe 02031a00a251 f6f38da7dd18 d8546e8f0023 fc3cc1f98f6b 12c3e0cb3523 76cce178b134 6fc1dce3ab9c 1b681ab074ca 26fed8de719b 802ff51a5670 49f67a512d8c f04f20193bbb 5f6afe809d23 2030521dc70e dada79b3be94 9b19a1e08161 78a035041d3e f03da011cae2 0d2b2e068fcd 2449976aa133 57dfb5e12ccd 53abedfdcf72 6fbdda3474b3 49544a547188 187032f7a63c 6f75dae23d93 95fc2a261b00 ebfb14ded191 f653ee9e414a 0e2911cb8111 73968b76e2e3 8a3e4cb5e92c a5f2da803b5b 7c9ef68388ed 71ca19d0cba8 340d27a33895 9b3c4e6efb10 d2e8c22be39b 9563e31f8bfd ebac7a38036c f703a3c27874 a3e86d2ef30e da3c604755b0 4525c2f5b46f a06a5b7dea02 8ba93c796d5c e8b5ff851bb9
|
||||
```
|
||||
|
||||
同样,它假定是当前分支,但是如果需要,我可以指定其他分支。
|
||||
|
||||
### 3、gitlog.id2
|
||||
|
||||
`gitlog.id2` 与 `gitlog.id` 相同,但顶部没有显示分支的行。这对于从一个分支挑选所有补丁到当前分支很方便:
|
||||
|
||||
```
|
||||
$ # 创建一个新分支
|
||||
$ git branch --track origin/master
|
||||
$ # 检出刚刚创建的新分支
|
||||
$ git checkout recovery26
|
||||
$ # 从旧的分支挑选所有补丁到新分支
|
||||
$ for i in `gitlog.id2 recovery25` ; do git cherry-pick $i ;done
|
||||
```
|
||||
|
||||
### 4、gitlog.grep
|
||||
|
||||
`gitlog.grep` 会在该补丁集合中寻找一个字符串。例如,如果我发现一个错误并想修复引用了函数 `inode_go_sync` 的补丁,我可以简单地执行以下操作:
|
||||
|
||||
```
|
||||
$ gitlog.grep inode_go_sync
|
||||
-----------------------[ recovery25 - 50 patches ]-----------------------
|
||||
(snip)
|
||||
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
|
||||
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
|
||||
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
|
||||
152:-static void inode_go_sync(struct gfs2_glock *gl)
|
||||
153:+static int inode_go_sync(struct gfs2_glock *gl)
|
||||
163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
|
||||
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
|
||||
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
|
||||
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
|
||||
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
|
||||
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
|
||||
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
|
||||
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
|
||||
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
|
||||
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
|
||||
```
|
||||
|
||||
因此,现在我知道补丁 `HEAD~9` 就是需要修复的补丁。我用 `git rebase -i HEAD~10` 把补丁 9 标记为编辑,做完必要的修改后执行 `git commit -a --amend`,再用 `git rebase --continue` 完成剩余的变基。
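这套流程可以在一个临时仓库里完整演练。下面的脚本用 `GIT_SEQUENCE_EDITOR` 自动把交互式 rebase 待办列表中目标补丁的 `pick` 改为 `edit`(仅为演示;假设使用 GNU sed):

```shell
#!/bin/bash
set -e
# 建一个有三个补丁的临时仓库
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email you@example.com
git config user.name you
for i in 1 2 3; do echo "$i" > "f$i"; git add "f$i"; git commit -qm "patch $i"; done

# 非交互地把 HEAD~1(即 "patch 2")标记为 edit,rebase 会停在这个补丁上
GIT_SEQUENCE_EDITOR='sed -i "1s/^pick/edit/"' git rebase -i HEAD~2
echo fixed >> f2                    # ……在这里做需要的修复……
git commit -qa --amend --no-edit    # 把修复并入该补丁
git rebase --continue               # 重放其余补丁
git log --oneline
```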
|
||||
|
||||
### 5、gitbranchcmp3
|
||||
|
||||
`gitbranchcmp3` 使我可以将当前分支与另一个分支进行比较,因此我可以将较旧版本的补丁与我的较新版本进行比较,并快速查看已更改和未更改的内容。它生成一个比较脚本(使用了 KDE 工具 [Kompare][2],该工具也可在 GNOME3 上使用)以比较不太相同的补丁。如果除行号外没有其他差异,则打印 `[SAME]`。如果仅存在注释差异,则打印 `[same]`(小写)。例如:
|
||||
|
||||
```
|
||||
$ gitbranchcmp3 recovery24
|
||||
Branch recovery24 has 47 patches
|
||||
Branch recovery25 has 50 patches
|
||||
|
||||
(snip)
|
||||
38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
|
||||
39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
|
||||
40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of glops
|
||||
41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
|
||||
42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in gfs2_allocate_page_backing
|
||||
43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
|
||||
44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
|
||||
45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
|
||||
46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
|
||||
47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
|
||||
48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
|
||||
49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time
|
||||
|
||||
Missing from recovery25:
|
||||
The missing:
|
||||
Compare script generated at: /tmp/compare_mismatches.sh
|
||||
```
|
||||
|
||||
### 6、gitlog.find
|
||||
|
||||
最后,我有一个 `gitlog.find` 脚本,可以帮助我识别补丁程序的上游版本在哪里以及每个补丁的当前状态。它通过匹配补丁说明来实现。它还会生成一个比较脚本(再次使用了 Kompare),以将当前补丁与上游对应补丁进行比较:
|
||||
|
||||
```
|
||||
$ gitlog.find
|
||||
-----------------------[ recovery25 - 50 patches ]-----------------------
|
||||
(snip)
|
||||
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
|
||||
lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
|
||||
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
|
||||
fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
|
||||
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
|
||||
lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
|
||||
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
|
||||
ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
|
||||
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
|
||||
ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
|
||||
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
|
||||
fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
|
||||
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
|
||||
fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
|
||||
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
|
||||
fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
|
||||
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
|
||||
Not found upstream
|
||||
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
|
||||
Not found upstream
|
||||
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
|
||||
Not found upstream
|
||||
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
|
||||
Not found upstream
|
||||
Compare script generated: /tmp/compare_upstream.sh
|
||||
```
|
||||
|
||||
补丁显示为两行,第一行是你当前的修补程序,然后是相应的上游补丁,以及 2 个字符的缩写,以指示其上游状态:
|
||||
|
||||
* `lo` 表示补丁仅在本地(`local`)上游 Git 存储库中(即尚未推送到上游)。
|
||||
* `ms` 表示补丁位于 Linus Torvalds 的主(`master`)分支中。
|
||||
* `fn` 意味着补丁被推送到我的 “for-next” 开发分支,用于下一个上游合并窗口。
|
||||
|
||||
我的一些脚本根据我通常使用 Git 的方式做出了假设。例如,在搜索上游补丁时,它会使用我机器上固定的 Git 树位置。因此,你需要调整或改进它们以适应你的环境。`gitlog.find` 脚本旨在仅定位 [GFS2][3] 和 [DLM][4] 补丁,因此,除非你是 GFS2 开发人员,否则你需要针对你感兴趣的组件对其进行自定义。
|
||||
|
||||
### 源代码
|
||||
|
||||
以下是这些脚本的源代码。
|
||||
|
||||
#### 1、gitlog
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
branch=$1
|
||||
|
||||
if test "x$branch" = x; then
|
||||
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
|
||||
fi
|
||||
|
||||
patches=0
|
||||
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
|
||||
|
||||
LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
|
||||
for i in $LIST; do patches=$(echo $patches + 1 | bc);done
|
||||
|
||||
if [[ $branch =~ .*for-next.* ]]
|
||||
then
|
||||
start=HEAD
|
# start=origin/for-next
else
start=origin/master
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
if [ $patches -eq 1 ]; then
cnt=" ^"
elif [ $patches -eq 0 ]; then
cnt=" H"
else
if [ $patches -lt 10 ]; then
cnt=" $patches"
else
cnt="$patches"
fi
fi
/usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s %n" $i
patches=$(echo $patches - 1 | bc)
done
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" $tracking..$branch
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" ^origin/master ^linux-gfs2/for-next $branch
```

#### 2、gitlog.id

```
#!/bin/bash
branch=$1

if test "x$branch" = x; then
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
```

#### 3、gitlog.id2

```
#!/bin/bash
branch=$1

if test "x$branch" = x; then
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
```

#### 4、gitlog.grep

```
#!/bin/bash
param1=$1
param2=$2

if test "x$param2" = x; then
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
string=$param1
else
branch=$param1
string=$param2
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc);done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
if [ $patches -eq 1 ]; then
cnt=" ^"
elif [ $patches -eq 0 ]; then
cnt=" H"
else
if [ $patches -lt 10 ]; then
cnt=" $patches"
else
cnt="$patches"
fi
fi
/usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
/usr/bin/git show --pretty=email --patch-with-stat $i | grep -n "$string"
patches=$(echo $patches - 1 | bc)
done
```

#### 5、gitbranchcmp3

```
#!/bin/bash
#
# gitbranchcmp3 <old branch> [<new_branch>]
#
oldbranch=$1
newbranch=$2
script=/tmp/compare_mismatches.sh

/usr/bin/rm -f $script
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo " git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo " git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo " kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

if test "x$newbranch" = x; then
newbranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

declare -a oldsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$oldbranch | cut -d ' ' -f1 |paste -s -d ' '`)
declare -a newsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$newbranch | cut -d ' ' -f1 |paste -s -d ' '`)

#echo "old: " $oldsha1s
oldcount=${#oldsha1s[@]}
echo "Branch $oldbranch has $oldcount patches"
oldcount=$(echo $oldcount - 1 | bc)
#for o in `seq 0 ${#oldsha1s[@]}`; do
# echo -n ${oldsha1s[$o]} " "
# desc=`git show $i | head -5 | tail -1|cut -b5-`
#done

#echo "new: " $newsha1s
newcount=${#newsha1s[@]}
echo "Branch $newbranch has $newcount patches"
newcount=$(echo $newcount - 1 | bc)
#for o in `seq 0 ${#newsha1s[@]}`; do
# echo -n ${newsha1s[$o]} " "
# desc=`git show $i | head -5 | tail -1|cut -b5-`
#done
echo

for new in `seq 0 $newcount`; do
newsha=${newsha1s[$new]}
newdesc=`git show $newsha | head -5 | tail -1|cut -b5-`
oldsha=" "
same="[ ]"
for old in `seq 0 $oldcount`; do
if test "${oldsha1s[$old]}" = "match"; then
continue;
fi
olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1|cut -b5-`
if test "$olddesc" = "$newdesc" ; then
oldsha=${oldsha1s[$old]}
#echo $oldsha
git show $oldsha |tail -n +2 |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk1
git show $newsha |tail -n +2 |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk2
diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
if [ $? -eq 0 ] ;then
# No differences
same="[SAME]"
oldsha1s[$old]="match"
break
fi
git show $oldsha |sed -n '/diff/,$p' |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk1
git show $newsha |sed -n '/diff/,$p' |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk2
diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
if [ $? -eq 0 ] ;then
# Differences in comments only
same="[same]"
oldsha1s[$old]="match"
break
fi
oldsha1s[$old]="match"
echo "compare_them $oldsha $newsha" >> $script
fi
done
echo "$new $oldsha $newsha $same $newdesc"
done

echo
echo "Missing from $newbranch:"
the_missing=""
# Now run through the olds we haven't matched up
for old in `seq 0 $oldcount`; do
if test ${oldsha1s[$old]} != "match"; then
olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1|cut -b5-`
echo "${oldsha1s[$old]} $olddesc"
the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
fi
done

echo "The missing: " $the_missing
echo "Compare script generated at: $script"
#git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
```

#### 6、gitlog.find

```
#!/bin/bash
#
# Find the upstream equivalent patch
#
# gitlog.find
#
cwd=$PWD
param1=$1
ubranch=$2
patches=0
script=/tmp/compare_upstream.sh
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo " cwd=$PWD" >> $script
echo " git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo " cd ~/linux.git/fs/gfs2" >> $script
echo " git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo " cd $cwd" >> $script
echo " kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

#echo "Gathering upstream patch info. Please wait."
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

cd ~/linux.git
if test "X${ubranch}" = "X"; then
ubranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi
utracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
#
# gather a list of gfs2 patches from master just in case we can't find it
#
#git log --abbrev-commit --pretty=format:" %h %<|(32)%an %s" master |grep -i -e "gfs2" -e "dlm" > /tmp/gronk
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/gfs2/ > /tmp/gronk.gfs2
# ms = in Linus's master
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/dlm/ > /tmp/gronk.dlm

cd $cwd
LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc);done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
if [ $patches -eq 1 ]; then
cnt=" ^"
elif [ $patches -eq 0 ]; then
cnt=" H"
else
if [ $patches -lt 10 ]; then
cnt=" $patches"
else
cnt="$patches"
fi
fi
/usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
desc=`/usr/bin/git show --abbrev-commit -s --pretty=format:"%s" $i`
cd ~/linux.git
cmp=1
up_eq=`git log --reverse --abbrev-commit --pretty=format:"lo %h %<|(32)%an %s" $utracking..$ubranch | grep "$desc"`
# lo = in local for-next
if test "X$up_eq" = "X"; then
up_eq=`git log --reverse --abbrev-commit --pretty=format:"fn %h %<|(32)%an %s" master..$utracking | grep "$desc"`
# fn = in for-next for next merge window
if test "X$up_eq" = "X"; then
up_eq=`grep "$desc" /tmp/gronk.gfs2`
if test "X$up_eq" = "X"; then
up_eq=`grep "$desc" /tmp/gronk.dlm`
if test "X$up_eq" = "X"; then
up_eq=" Not found upstream"
cmp=0
fi
fi
fi
fi
echo "$up_eq"
if [ $cmp -eq 1 ] ; then
UP_SHA1=`echo $up_eq|cut -d' ' -f2`
echo "compare_them $UP_SHA1 $i" >> $script
fi
cd $cwd
patches=$(echo $patches - 1 | bc)
done
echo "Compare script generated: $script"
```

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/bash-scripts-git

作者:[Bob Peterson][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bobpeterson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://kde.org/applications/development/org.kde.kompare
[3]: https://en.wikipedia.org/wiki/GFS2
[4]: https://en.wikipedia.org/wiki/Distributed_lock_manager
@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11812-1.html)
[#]: subject: (Organize and sync your calendar with khal and vdirsyncer)
[#]: via: (https://opensource.com/article/20/1/open-source-calendar)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

使用 khal 和 vdirsyncer 组织和同步你的日历
======

> 保存和共享日历可能会有点麻烦。在我们的 20 个使用开源提升生产力的系列的第五篇文章中了解如何让它更简单。

![](https://img.linux.net.cn/data/attachment/album/202001/23/150009wsr3d5ovg4g1vzws.jpg)

去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你可能已经在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。

### 使用 khal 和 vdirsyncer 跟踪你的日程

处理日历很*麻烦*,要找到好的工具总是很困难的。但是自从我去年将日历列为[我的“失败”之一][2]以来,我已经取得了一些进步。

目前使用日历最困难的是一直需要以某种方式在线共享。两种最受欢迎的在线日历是 Google Calendar 和 Microsoft Outlook/Exchange。两者都在公司环境中大量使用,这意味着我的日历必须支持其中之一或者两个。

![khal calendar][3]

[Khal][4] 是基于控制台的日历,可以读取和写入 VCalendar 文件。它配置相当容易,但是不支持与其他应用同步。

幸运的是,khal 能与 [vdirsyncer][5] 一起使用,它是一个漂亮的命令行程序,可以将在线日历(和联系人,我将在另一篇文章中讨论)同步到本地磁盘。是的,它还可以上传新事件。

![vdirsyncer][6]

Vdirsyncer 是个 Python 3 程序,可以通过软件包管理器或 `pip` 安装。它可以同步 CalDAV、VCalendar/iCalendar、Google Calendar 和目录中的本地文件。由于我使用 Google Calendar,尽管这不是最简单的设置,我也将以它为例。

在 vdirsyncer 中设置 Google Calendar 是[有文档参考的][7],所以这里我不再赘述。重要的是确保设置你的同步对,将 Google Calendar 设置为冲突解决的“赢家”。也就是说,如果同一事件有两个更新,那么需要知道哪个更新优先。类似这样做:

```
[general]
status_path = "~/.calendars/status"

[pair personal_sync]
a = "personal"
b = "personallocal"
collections = ["from a", "from b"]
conflict_resolution = "a wins"
metadata = ["color"]

[storage personal]
type = "google_calendar"
token_file = "~/.vdirsyncer/google_calendar_token"
client_id = "google_client_id"
client_secret = "google_client_secret"

[storage personallocal]
type = "filesystem"
path = "~/.calendars/Personal"
fileext = ".ics"
```

在第一次 vdirsyncer 同步之后,你将在存储路径中看到一系列目录。每个文件夹都将包含多个文件,日历中的每个事件都是一个文件。下一步是导入 khal。首先运行 `khal configure` 进行初始设置。

![Configuring khal][8]

现在,运行 `khal interactive` 将显示本文开头的界面。输入 `n` 将打开“新事件”对话框。这里要注意的一件事:日历的名称与 vdirsyncer 创建的目录匹配,但是你可以更改 khal 配置文件来指定更清晰的名称。根据条目所在的日历,向条目添加颜色还可以帮助你确定日历内容:

```
[calendars]
[[personal]]
path = ~/.calendars/Personal/kevin@sonney.com/
color = light magenta
[[holidays]]
path = ~/.calendars/Personal/cln2stbjc4hmgrrcd5i62ua0ctp6utbg5pr2sor1dhimsp31e8n6errfctm6abj3dtmg@virtual/
color = light blue
[[birthdays]]
path = ~/.calendars/Personal/c5i68sj5edpm4rrfdchm6rreehgm6t3j81jn4rrle0n7cbj3c5m6arj4c5p2sprfdtjmop9ecdnmq@virtual/
color = brown
```

现在,当你运行 `khal interactive` 时,每个日历将被着色以区别于其他日历,并且当你添加新条目时,它将有更具描述性的名称。

![Adding a new calendar entry][9]

设置有些麻烦,但是完成后,khal 和 vdirsyncer 可以一起为你提供一种简便的方法来管理日历事件并使它们与你的在线服务保持同步。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/open-source-calendar

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
[2]: https://opensource.com/article/19/1/productivity-tool-wish-list
[3]: https://opensource.com/sites/default/files/uploads/productivity_5-1.png (khal calendar)
[4]: https://khal.readthedocs.io/en/v0.9.2/index.html
[5]: https://github.com/pimutils/vdirsyncer
[6]: https://opensource.com/sites/default/files/uploads/productivity_5-2.png (vdirsyncer)
[7]: https://vdirsyncer.pimutils.org/en/stable/config.html#google
[8]: https://opensource.com/sites/default/files/uploads/productivity_5-3.png (Configuring khal)
[9]: https://opensource.com/sites/default/files/uploads/productivity_5-4.png
@ -0,0 +1,169 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11837-1.html)
[#]: subject: (Root User in Ubuntu: Important Things You Should Know)
[#]: via: (https://itsfoss.com/root-user-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Ubuntu 中的 root 用户:你应该知道的重要事情
======

![][5]

当你刚开始使用 Linux 时,你将发现与 Windows 的很多不同。其中一个“不同的东西”是 root 用户的概念。

在这个初学者系列中,我将解释几个关于 Ubuntu 的 root 用户的重要的东西。

**请记住,尽管我正在从 Ubuntu 用户的角度编写这篇文章,它应该对大多数的 Linux 发行版也是有效的。**

你将在这篇文章中学到下面的内容:

* 为什么在 Ubuntu 中禁用 root 用户
* 像 root 用户一样运行命令
* 切换为 root 用户
* 解锁 root 用户

### 什么是 root 用户?为什么它在 Ubuntu 中被锁定?

在 Linux 中,有一个称为 [root][6] 的超级用户。这是超级管理员账号,它可以做任何事以及使用系统的一切东西。它可以在你的 Linux 系统上访问任何文件和运行任何命令。

能力越大,责任越大。root 用户给予你完全控制系统的能力,因此,它应该被谨慎地使用。root 用户可以访问系统文件,运行更改系统配置的命令。因此,一个错误的命令可能会破坏系统。

这就是为什么 [Ubuntu][7] 和其它基于 Ubuntu 的发行版默认锁定 root 用户,以从意外的灾难中挽救你的原因。

对于你的日常任务,像移动你家目录中的文件,从互联网下载文件,创建文档等等,你不需要拥有 root 权限。

**打个比方来更好地理解它。假设你想要切一个水果,你可以使用一把厨房用刀。假设你想要砍一颗树,你就得使用一把锯子。现在,你可以使用锯子来切水果,但是那不明智,不是吗?**

那么,这是否意味着你不能在 Ubuntu 中成为 root 用户,或者不能以 root 权限来使用系统?不,你仍然可以在 `sudo` 的帮助下获得 root 权限(在下一节中解释)。

> **要点:** 对于常规任务,root 用户权限太过强大。这就是为什么不建议一直使用 root 用户。你仍然可以使用 root 用户来运行特殊的命令。

### 如何在 Ubuntu 中像 root 用户一样运行命令?

![Image Credit: xkcd][8]

对于一些系统的特殊任务来说,你将需要 root 权限。例如,如果你想[通过命令行更新 Ubuntu][9],你不能作为一个常规用户运行该命令。它将给出权限被拒绝的错误。

```
apt update
Reading package lists... Done
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)
W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)
```

那么,你如何像 root 用户一样运行命令?简单的答案是,在命令前添加 `sudo`,来像 root 用户一样运行。

```
sudo apt update
```

Ubuntu 和很多其它的 Linux 发行版使用一个被称为 `sudo` 的特殊程序机制。`sudo` 是一个以 root 用户(或其它用户)来控制运行命令访问的程序。

实际上,`sudo` 是一个非常多用途的工具。它可以配置为允许一个用户像 root 用户一样来运行所有的命令,或者仅仅一些命令。你也可以配置为无需密码即可使用 sudo 运行命令。这个主题内容比较丰富,也许我将在另一篇文章中详细讨论它。
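举个例子,下面是一个假设的 sudoers 片段(示例配置,并非本文介绍的默认设置;其中的用户名沿用本文示例中的 `abhishek`),它允许某个用户免密码运行 sudo。这类修改务必通过 `visudo` 来编辑,以免语法错误把你锁在系统之外:

```
# /etc/sudoers 片段(示例):允许用户 abhishek 免密码以 root 身份运行所有命令
abhishek ALL=(ALL) NOPASSWD: ALL
```

保存之后,该用户运行 `sudo` 命令时将不再被要求输入密码。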

就目前而言,你应该知道[当你安装 Ubuntu 时][10],你必须创建一个用户账号。这个用户账号在你系统上以管理员身份来工作,并且按照 Ubuntu 中的默认 sudo 策略,它可以在你的系统上使用 root 用户权限来运行任何命令。

`sudo` 的问题是,运行 **sudo 不需要 root 用户密码,而是需要用户自己的密码**。

并且这就是为什么当你使用 `sudo` 运行一个命令,会要求输入正在运行 `sudo` 命令的用户的密码的原因:

```
[email protected]:~$ sudo apt update
[sudo] password for abhishek:
```

正如你在上面示例中所见,`abhishek` 在尝试使用 `sudo` 来运行 `apt update` 命令时,系统要求输入 `abhishek` 的密码。

**如果你对 Linux 完全不熟悉,当你在终端中开始输入密码时,你可能会惊讶,在屏幕上什么都没有发生。这是十分正常的,因为作为默认的安全功能,在屏幕上什么都不会显示。甚至星号(`*`)都没有。输入你的密码并按回车键。**

> **要点:**为在 Ubuntu 中像 root 用户一样运行命令,在命令前添加 `sudo`。 当被要求输入密码时,输入你的账户的密码。当你在屏幕上输入密码时,什么都看不到。请继续输入密码,并按回车键。

### 如何在 Ubuntu 中成为 root 用户?

你可以使用 `sudo` 来像 root 用户一样运行命令。但是,在某些情况下,你必须以 root 用户身份来运行一些命令,而你总是忘了在命令前添加 `sudo`,那么你可以临时切换为 root 用户。

`sudo` 命令允许你模拟一个 root 用户登录的 shell,使用这个命令:

```
sudo -i
```

```
[email protected]:~$ sudo -i
[sudo] password for abhishek:
[email protected]:~# whoami
root
[email protected]:~#
```

你将注意到,当你切换为 root 用户时,shell 命令提示符从 `$`(美元符号)更改为 `#`(英镑符号)。我开个(拙劣的)玩笑,英镑比美元强大。

**虽然我已经向你展示了如何成为 root 用户,但是我必须警告你,你应该避免以 root 用户身份使用系统。毕竟它有阻拦你使用 root 用户的原因。**

另外一种临时切换为 root 用户的方法是使用 `su` 命令:

```
sudo su
```

如果你尝试使用不带 `sudo` 的 `su` 命令,你将遇到 “su authentication failure” 错误。

你可以使用 `exit` 命令来恢复为正常用户。

```
exit
```

### 如何在 Ubuntu 中启用 root 用户?

现在你知道,root 用户在基于 Ubuntu 的发行版中是默认锁定的。

Linux 给予你在系统上想做什么就做什么的自由。解锁 root 用户就是这些自由之一。

如果出于某些原因,你决定启用 root 用户,你可以通过为其设置一个密码来做到:

```
sudo passwd root
```

再强调一次,不建议使用 root 用户,并且我也不鼓励你在桌面上这样做。如果你忘记了密码,你将不能再次[在 Ubuntu 中更改 root 用户密码][11]。(LCTT 译注:可以通过单用户模式修改。)

你可以通过移除密码来再次锁定 root 用户:

```
sudo passwd -dl root
```

### 最后…

我希望你现在对 root 概念理解得更好一点。如果你仍然有些关于它的困惑和问题,请在评论中让我知道。我将尝试回答你的问题,并且也可能更新这篇文章。

--------------------------------------------------------------------------------

via: https://itsfoss.com/root-user-ubuntu/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: tmp.IrHYJBAqVn#what-is-root
[2]: tmp.IrHYJBAqVn#run-command-as-root
[3]: tmp.IrHYJBAqVn#become-root
[4]: tmp.IrHYJBAqVn#enable-root
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/root_user_ubuntu.png?ssl=1
[6]: http://www.linfo.org/root.html
[7]: https://ubuntu.com/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/sudo_sandwich.png?ssl=1
[9]: https://itsfoss.com/update-ubuntu/
[10]: https://itsfoss.com/install-ubuntu/
[11]: https://itsfoss.com/how-to-hack-ubuntu-password/
@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11823-1.html)
[#]: subject: (Why everyone is talking about WebAssembly)
[#]: via: (https://opensource.com/article/20/1/webassembly)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)

为什么每个人都在谈论 WebAssembly
======

> 了解有关在 Web 浏览器中运行任何代码的最新方法的更多信息。

![](https://img.linux.net.cn/data/attachment/album/202001/27/125343ch0hxdfbzibrihfn.jpg)

如果你还没有听说过 [WebAssembly][2],那么你很快就会知道。这是业界最保密的秘密之一,但它无处不在。所有主流的浏览器都支持它,并且它也将在服务器端使用。它很快,它能用于游戏编程。这是主要的国际网络标准组织万维网联盟(W3C)的一个开放标准。

你可能会说:“哇,这听起来像是我应该学习编程的东西!”你可能是对的,但也是错的。你不需要用 WebAssembly 编程。让我们花一些时间来学习这种通常被缩写为“Wasm”的技术。

### 它从哪里来?

大约十年前,人们越来越认识到,广泛使用的 JavaScript 不够快速,无法满足许多目的。JavaScript 无疑是成功和方便的。它可以在任何浏览器中运行,并启用了今天我们认为理所当然的动态网页类型。但这是一种高级语言,在设计时并没有考虑到计算密集型工作负载。

然而,尽管负责主流 web 浏览器的工程师们对性能问题的看法大体一致,但他们对如何解决这个问题却意见不一。出现了两个阵营,谷歌开始了它的<ruby>原生客户端<rt>Native Client</rt></ruby>项目,后来又推出了<ruby>可移植原生客户端<rt>Portable Native Client</rt></ruby>变体,着重于允许用 C/C++ 编写的游戏和其它软件在 Chrome 的一个安全隔间中运行。与此同时,Mozilla 赢得了微软对 asm.js 的支持。该方法更新了浏览器,因此它可以非常快速地运行 JavaScript 指令的低级子集(有另一个项目可以将 C/C++ 代码转换为这些指令)。

由于这两个阵营都没有得到广泛采用,各方在 2015 年同意围绕一种称为 WebAssembly 的新标准,以 asm.js 所采用的基本方法为基础,联合起来。[如 CNET 的 Stephen Shankland 当时所写][3],“在当今的 Web 上,浏览器的 JavaScript 将这些指令转换为机器代码。但是,通过 WebAssembly,程序员可以在此过程的早期阶段完成很多工作,从而生成介于两种状态之间的程序。这使浏览器摆脱了创建机器代码的繁琐工作,但也实现了 Web 的承诺 —— 该软件将在具有浏览器的任何设备上运行,而无需考虑基础硬件的细节。”

在 2017 年,Mozilla 宣布了它的最小可行产品(MVP),并使其脱离预览版阶段。到该年年底,所有主流的浏览器都采用了它。[2019 年 12 月][4],WebAssembly 工作组发布了三个 W3C 推荐的 WebAssembly 规范。

WebAssembly 定义了一种可执行程序的可移植二进制代码格式、相应的文本汇编语言以及用于促进此类程序与其宿主环境之间的交互接口。WebAssembly 代码在低级虚拟机中运行,这个可运行于许多微处理器之上的虚拟机可模仿这些处理器的功能。通过即时(JIT)编译或解释,WebAssembly 引擎可以以近乎原生平台编译代码的速度执行。

### 为什么现在感兴趣?

当然,最近对 WebAssembly 感兴趣的部分原因是最初希望在浏览器中运行更多计算密集型代码。尤其是笔记本电脑用户,越来越多的时间都花在浏览器上(或者,对于 Chromebook 用户来说,基本上是所有时间)。这种趋势已经迫切需要消除在浏览器中运行各种应用程序的障碍。这些障碍之一通常是性能的某些方面,这正是 WebAssembly 及其前身最初旨在解决的问题。

但是,WebAssembly 并不仅仅适用于浏览器。在 2019 年,[Mozilla 宣布了一个名为 WASI][5](<ruby>WebAssembly 系统接口<rt>WebAssembly System Interface</rt></ruby>)的项目,以标准化 WebAssembly 代码如何与浏览器上下文之外的操作系统进行交互。通过将浏览器对 WebAssembly 和 WASI 的支持结合在一起,编译后的二进制文件将能够以接近原生的速度,跨不同的设备和操作系统在浏览器内外运行。

WebAssembly 的低开销立即使它可以在浏览器之外使用,但这无疑是赌注;显然,还有其它不会引入性能瓶颈的运行应用程序的方法。为什么要专门使用 WebAssembly?

一个重要的原因是它的可移植性。如今,像 C++ 和 Rust 这样的广泛使用的编译语言可能是与 WebAssembly 关联最紧密的语言。但是,[各种各样的其他语言][6]可以编译为 WebAssembly 或拥有它们的 WebAssembly 虚拟机。此外,尽管 WebAssembly 为其执行环境[假定了某些先决条件][7],但它被设计为在各种操作系统和指令集体系结构上有效执行。因此,WebAssembly 代码可以使用多种语言编写,并可以在多种操作系统和处理器类型上运行。

另一个 WebAssembly 优势源于这样一个事实:代码在虚拟机中运行。因此,每个 WebAssembly 模块都在沙箱环境中执行,并使用故障隔离技术将其与宿主机运行时环境分开。这意味着,应用程序独立于其宿主机环境的其余部分执行,如果不调用适当的 API,就无法逃出沙箱。

### WebAssembly 现状

这一切在实践中意味着什么?

如今在运作中的 WebAssembly 的一个例子是 [Enarx][8]。

Enarx 是一个提供硬件独立性的项目,可使用<ruby>受信任的执行环境<rt>Trusted Execution Environments</rt></ruby>(TEE)保护应用程序的安全。Enarx 使你可以安全地将编译为 WebAssembly 的应用程序始终交付到云服务商,并远程执行它。正如 Red Hat 安全工程师 [Nathaniel McCallum 指出的那样][9]:“我们这样做的方式是,我们将你的应用程序作为输入,并使用远程硬件执行认证过程。我们使用加密技术验证了远程硬件实际上是它声称的硬件。最终的结果不仅是我们对硬件的信任度提高了;它也是一个会话密钥,我们可以使用它将加密的代码和数据传递到我们刚刚要求加密验证的环境中。”

另一个例子是 OPA,<ruby>开放策略代理<rt>Open Policy Agent</rt></ruby>,它[发布][10]于 2019 年 11 月,你可以[编译][11]他们的策略定义语言 Rego 为 WebAssembly。Rego 允许你编写逻辑来搜索和组合来自不同来源的 JSON/YAML 数据,以询问诸如“是否允许使用此 API?”之类的问题。

OPA 已被用于支持策略的软件,包括但不限于 Kubernetes。使用 OPA 之类的工具来简化策略[被认为是在各种不同环境中正确保护 Kubernetes 部署的重要步骤][12]。WebAssembly 的可移植性和内置的安全功能非常适合这些工具。

我们的最后一个例子是 [Unity][13]。还记得我们在文章开头提到过 WebAssembly 可用于游戏吗?好吧,跨平台游戏引擎 Unity 是 WebAssembly 的较早采用者,它提供了在浏览器中运行的 Wasm 的首个演示品,并且自 2018 年 8 月以来,[已将 WebAssembly][14]用作 Unity WebGL 构建目标的输出目标。

这些只是 WebAssembly 已经开始产生影响的几种方式。你可以在 <https://webassembly.org/> 上查找更多信息并了解 Wasm 的所有最新信息。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/webassembly

作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://opensource.com/article/19/8/webassembly-speed-code-reuse
[3]: https://www.cnet.com/news/the-secret-alliance-that-could-give-the-web-a-massive-speed-boost/
[4]: https://www.w3.org/blog/news/archives/8123
[5]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/
[6]: https://github.com/appcypher/awesome-wasm-langs
[7]: https://webassembly.org/docs/portability/
[8]: https://enarx.io
[9]: https://enterprisersproject.com/article/2019/9/application-security-4-facts-confidential-computing-consortium
[10]: https://blog.openpolicyagent.org/tagged/webassembly
[11]: https://github.com/open-policy-agent/opa/tree/master/wasm
[12]: https://enterprisersproject.com/article/2019/11/kubernetes-reality-check-3-takeaways-kubecon
[13]: https://opensource.com/article/20/1/www.unity.com
[14]: https://blogs.unity3d.com/2018/08/15/webassembly-is-here/
@ -0,0 +1,141 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11834-1.html)
[#]: subject: (3 open source tools to manage your contacts)
[#]: via: (https://opensource.com/article/20/1/sync-contacts-locally)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

用于联系人管理的三个开源工具
======

> 通过将联系人同步到本地从而更快访问它。在我们的 20 个使用开源提升生产力的系列的第六篇文章中了解该如何做。

![](https://img.linux.net.cn/data/attachment/album/202001/30/194811bbtt449zfr9zppb3.jpg)

去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你可能已经在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。

### 用于联系人管理的开源工具

在本系列之前的文章中,我解释了如何在本地同步你的[邮件][2]和[日历][3]。希望这些加速了你访问邮件和日历。现在,我将讨论联系人同步,你可以给他们发送邮件和日历邀请。

![abook][4]

我目前收集了很多邮件地址。管理这些数据可能有点麻烦。有基于 Web 的服务,但它们不如本地副本快。

几天前,我谈到了用于管理日历的 [vdirsyncer][5]。vdirsyncer 还使用 CardDAV 协议处理联系人。除了用于存储日历的**文件系统**之外,vdirsyncer 还支持通过 **google_contacts** 和 **carddav** 进行联系人同步,但联系人的 `fileext` 设置有所不同,因此你无法将联系人和日历存储在同一组文件中。

我在配置文件中添加了一段配置,从 Google 镜像了我的联系人。设置它需要额外的步骤。从 Google 镜像完成后,配置非常简单:

```
[pair address_sync]
a = "googlecard"
b = "localcard"
collections = ["from a", "from b"]
conflict_resolution = "a wins"

[storage googlecard]
type = "google_contacts"
token_file = "~/.vdirsyncer/google_token"
client_id = "my_client_id"
client_secret = "my_client_secret"

[storage localcard]
type = "filesystem"
path = "~/.calendars/Addresses/"
fileext = ".vcf"
```

现在,当我运行 `vdirsyncer discover` 时,它会找到我的 Google 联系人,并且 `vdirsyncer sync` 将它们复制到我的本地计算机。但同样,这只进行到一半。现在我想查看和使用联系人。需要 [khard][6] 和 [abook][7]。

![khard search][8]

为什么选择两个应用?因为每个都有它自己的使用场景,在这里,越多越好。khard 用于管理地址,类似于 [khal][9] 用于管理日历条目。如果你的发行版附带的是旧版本,你可能需要通过 `pip` 安装最新版本。安装 khard 后,你需要创建 `~/.config/khard/khard.conf`,因为 khard 没有像 khal 那样漂亮的配置向导。我的配置看起来像这样:

```
[addressbooks]
[[addresses]]
path = ~/.calendars/Addresses/default/

[general]
debug = no
default_action = list
editor = vim, -i, NONE
merge_editor = vimdiff

[contact table]
display = first_name
group_by_addressbook = no
reverse = no
show_nicknames = yes
show_uids = no
sort = last_name
localize_dates = yes

[vcard]
preferred_version = 3.0
search_in_source_files = yes
skip_unparsable = no
```

这会定义源通讯簿(并给它一个友好的名称)、显示内容和联系人编辑程序。运行 `khard list` 将列出所有条目,`khard list <some@email.adr>` 可以搜索特定条目。如果要添加或编辑条目,`add` 和 `edit` 命令将使用相同的基本模板打开配置的编辑器,唯一的区别是 `add` 命令的模板将为空。

![editing in khard][11]

abook 需要你导入和导出 VCF 文件,但它为查找提供了一些不错的功能。要将文件转换为 abook 格式,请先安装 abook 并创建 `~/.abook` 默认目录。然后让 abook 解析所有文件,并将它们放入 `~/.abook/addresses` 文件中:

```
apt install abook
ls ~/.calendars/Addresses/default/* | xargs cat | abook --convert --informat vcard --outformat abook > ~/.abook/addresses
```

现在运行 `abook`,你将有一个非常漂亮的 UI 来浏览、搜索和编辑条目。将它们导出到单个文件有点痛苦,所以我用 khard 进行大部分编辑,并有一个 cron 任务将它们导入到 abook 中。
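作为参考,这样的 cron 任务大致可以长成下面这样(这只是一个假设的 crontab 条目,路径沿用上文示例,每小时的频率是随意选的):

```
# 每小时把 vdirsyncer 同步下来的 VCF 文件重新转换进 abook 的地址簿(示例 crontab 条目)
0 * * * * ls ~/.calendars/Addresses/default/* | xargs cat | abook --convert --informat vcard --outformat abook > ~/.abook/addresses
```

它只是定期重复上文的转换命令,让 abook 中的数据跟随 khard 这边的编辑保持更新。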

abook 还可在命令行中搜索,并有大量有关将其与邮件客户端集成的文档。例如,你可以在 `.config/alot/config` 文件中添加一些信息,从而在 [Notmuch][12] 的邮件客户端 [alot][13] 中使用 abook 查询联系人:

```
[accounts]
[[Personal]]
realname = Kevin Sonney
address = kevin@sonney.com
alias_regexp = kevin\+.+@sonney.com
gpg_key = 7BB612C9
sendmail_command = msmtp --account=Personal -t
# ~ expansion works
sent_box = maildir://~/Maildir/Sent
draft_box = maildir://~/Maildir/Drafts
[[[abook]]]
type = abook
```

这样你就可以在邮件和日历中快速查找联系人了!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/sync-contacts-locally

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
[2]: https://linux.cn/article-11804-1.html
[3]: https://linux.cn/article-11812-1.html
[4]: https://opensource.com/sites/default/files/uploads/productivity_6-1.png (abook)
[5]: https://github.com/pimutils/vdirsyncer
[6]: https://github.com/scheibler/khard
[7]: http://abook.sourceforge.net/
[8]: https://opensource.com/sites/default/files/uploads/productivity_6-2.png (khard search)
[9]: https://khal.readthedocs.io/en/v0.9.2/index.html
[10]: mailto:some@email.adr
[11]: https://opensource.com/sites/default/files/uploads/productivity_6-3.png (editing in khard)
[12]: https://opensource.com/article/20/1/organize-email-notmuch
[13]: https://github.com/pazz/alot
[14]: mailto:kevin@sonney.com
[15]: mailto:+.+@sonney.com
@ -0,0 +1,477 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11825-1.html)
|
||||
[#]: subject: (C vs. Rust: Which to choose for programming hardware abstractions)
|
||||
[#]: via: (https://opensource.com/article/20/1/c-vs-rust-abstractions)
|
||||
[#]: author: (Dan Pittman https://opensource.com/users/dan-pittman)
|
||||
|
||||
C 还是 Rust:选择哪个用于硬件抽象编程
|
||||
======
|
||||
|
||||
> 在 Rust 中使用类型级编程可以使硬件抽象更加安全。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/28/123350k2w4mr3tp7crd4m2.jpg)
|
||||
|
||||
Rust 是一种日益流行的编程语言,被视为硬件接口的最佳选择。通常会将其与 C 的抽象级别相比较。本文介绍了 Rust 如何通过多种方式处理按位运算,并提供了既安全又易于使用的解决方案。
|
||||
|
||||
语言 | 诞生于 | 官方描述 | 总览
|
||||
---|---|---|---
|
||||
C | 1972 年 | C 是一种通用编程语言,具有表达式简约、现代的控制流和数据结构,以及丰富的运算符集等特点。(来源:[CS 基础知识] [2])| C 是(一种)命令式语言,旨在以相对简单的方式进行编译,从而提供对内存的低级访问。(来源:[W3schools.in] [3])
|
||||
Rust | 2010 年 | 一种赋予所有人构建可靠、高效的软件的能力的语言。(来源:[Rust 网站][4])| Rust 是一种专注于安全性(尤其是安全并发性)的多范式系统编程语言。(来源:[维基百科][5])
|
||||
|
||||
### 在 C 语言中对寄存器值进行按位运算
|
||||
|
||||
在系统编程领域,你可能经常需要编写硬件驱动程序或直接与内存映射设备进行交互,而这些交互几乎总是通过硬件提供的内存映射寄存器来完成的。通常,你通过对某些固定宽度的数字类型进行按位运算来与这些寄存器进行交互。
|
||||
|
||||
例如,假设一个 8 位寄存器具有三个字段:
|
||||
|
||||
```
|
||||
+----------+------+-----------+---------+
|
||||
| (unused) | Kind | Interrupt | Enabled |
|
||||
+----------+------+-----------+---------+
|
||||
5-7 2-4 1 0
|
||||
```
|
||||
|
||||
字段名称下方的数字规定了该字段在寄存器中使用的位。要启用该寄存器,你将写入值 `1`(以二进制表示为 `0000_0001`)来设置 `Enabled` 字段的位。但是,通常情况下,你也不想干扰寄存器中的现有配置。假设你要在设备上启用中断功能,但也要确保设备保持启用状态。为此,必须将 `Interrupt` 字段的值与 `Enabled` 字段的值结合起来。你可以通过按位操作来做到这一点:
|
||||
|
||||
```
|
||||
1 | (1 << 1)
|
||||
```
|
||||
|
||||
通过将 1 和 2(`1` 左移一位得到)进行“或”(`|`)运算得到二进制值 `0000_0011` 。你可以将其写入寄存器,使其保持启用状态,但也启用中断功能。
|
||||
|
||||
你的头脑中要记住很多事情,特别是当你要在一个完整的系统上和可能多达数百个的寄存器打交道时。在实践中,你可以使用助记符来执行此操作,助记符可跟踪字段在寄存器中的位置以及字段的宽度(即它的上边界)。
|
||||
|
||||
下面是这些助记符之一的示例。它们是 C 语言的宏,用右侧的代码替换它们的出现的地方。这是上面列出的寄存器的简写。`&` 的左侧是该字段的起始位置,而右侧则限制该字段所占的位:
|
||||
|
||||
```
|
||||
#define REG_ENABLED_FIELD(x) (x << 0) & 1
|
||||
#define REG_INTERRUPT_FIELD(x) (x << 1) & 2
|
||||
#define REG_KIND_FIELD(x) (x << 2) & (7 << 2)
|
||||
```
|
||||
|
||||
然后,你可以使用这些来抽象化寄存器值的操作,如下所示:
|
||||
|
||||
```
|
||||
void set_reg_val(uint8_t* reg, uint8_t val);

void enable_reg_with_interrupt(uint8_t* reg) {
    set_reg_val(reg, REG_ENABLED_FIELD(1) | REG_INTERRUPT_FIELD(1));
}
|
||||
```
|
||||
|
||||
这就是现在的做法。实际上,这就是大多数驱动程序在 Linux 内核中的使用方式。
|
||||
|
||||
有没有更好的办法?如果能够利用现代编程语言研究中发展出的类型系统,就有可能同时获得安全性和可表达性的好处。也就是说,如何使用更丰富、更具表现力的类型系统来使此过程更安全、更持久?
|
||||
|
||||
### 在 Rust 语言中对寄存器值进行按位运算
|
||||
|
||||
继续用上面的寄存器作为例子:
|
||||
|
||||
```
|
||||
+----------+------+-----------+---------+
|
||||
| (unused) | Kind | Interrupt | Enabled |
|
||||
+----------+------+-----------+---------+
|
||||
5-7 2-4 1 0
|
||||
```
|
||||
|
||||
你想如何用 Rust 类型来表示它呢?
|
||||
|
||||
你将以类似的方式开始,为每个字段的*偏移*定义常量(即,距最低有效位有多远)及其掩码。*掩码*是一个值,其二进制表示形式可用于更新或读取寄存器内部的字段:
|
||||
|
||||
```
|
||||
const ENABLED_MASK: u8 = 1;
|
||||
const ENABLED_OFFSET: u8 = 0;
|
||||
|
||||
const INTERRUPT_MASK: u8 = 2;
|
||||
const INTERRUPT_OFFSET: u8 = 1;
|
||||
|
||||
const KIND_MASK: u8 = 7 << 2;
|
||||
const KIND_OFFSET: u8 = 2;
|
||||
```
|
||||
|
||||
接下来,你将声明一个 `Field` 类型并进行操作,将给定值转换为与其位置相关的值,以供在寄存器内使用:
|
||||
|
||||
```
|
||||
struct Field {
|
||||
value: u8,
|
||||
}
|
||||
|
||||
impl Field {
|
||||
fn new(mask: u8, offset: u8, val: u8) -> Self {
|
||||
Field {
|
||||
value: (val << offset) & mask,
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
最后,你将使用一个 `Register` 类型,该类型会封装一个与你的寄存器宽度匹配的数字类型。 `Register` 具有 `update` 函数,可使用给定字段来更新寄存器:
|
||||
|
||||
```
|
||||
struct Register(u8);
|
||||
|
||||
impl Register {
|
||||
    fn update(&mut self, field: Field) {
        self.0 |= field.value;
|
||||
}
|
||||
}
|
||||
|
||||
fn enable_register(reg: &mut Register) {
|
||||
reg.update(Field::new(ENABLED_MASK, ENABLED_OFFSET, 1));
|
||||
}
|
||||
```
|
||||
|
||||
使用 Rust,你可以使用数据结构来表示字段,将它们与特定的寄存器联系起来,并在与硬件交互时提供简洁明了的工效。这个例子使用了 Rust 提供的最基本的功能。无论如何,添加的结构都会减轻上述 C 示例中的某些晦涩的地方。现在,字段是个带有名字的事物,而不是从模糊的按位运算符派生而来的数字,并且寄存器是具有状态的类型 —— 这在硬件上多了一层抽象。
|
||||
|
||||
### 一个易用的 Rust 实现
|
||||
|
||||
用 Rust 重写的第一个版本很好,但是并不理想。你必须记住要带上掩码和偏移量,并且要手工进行临时计算,这容易出错。人类不擅长精确且重复的任务 —— 我们往往会感到疲劳或失去专注力,这会导致错误。一次一个寄存器地手动记录掩码和偏移量几乎可以肯定会以糟糕的结局而告终。这是最好留给机器的任务。
|
||||
|
||||
其次,从结构上进行思考:如果有一种方法可以让字段的类型携带掩码和偏移信息呢?如果可以在编译时就发现硬件寄存器的访问和交互的实现代码中存在错误,而不是在运行时才发现,该怎么办?也许你可以依靠一种在编译时解决问题的常用策略,例如类型。
|
||||
|
||||
你可以使用 [typenum][6] 来修改前面的示例,该库在类型级别提供数字和算术。在这里,你将使用掩码和偏移量对 `Field` 类型进行参数化,使其可用于任何 `Field` 实例,而无需将其包括在调用处:
|
||||
|
||||
```
|
||||
#[macro_use]
|
||||
extern crate typenum;
|
||||
|
||||
use core::marker::PhantomData;
|
||||
|
||||
use typenum::*;
|
||||
|
||||
// Now we'll add Mask and Offset to Field's type
|
||||
struct Field<Mask: Unsigned, Offset: Unsigned> {
|
||||
value: u8,
|
||||
_mask: PhantomData<Mask>,
|
||||
_offset: PhantomData<Offset>,
|
||||
}
|
||||
|
||||
// We can use type aliases to give meaningful names to
|
||||
// our fields (and not have to remember their offsets and masks).
|
||||
type RegEnabled = Field<U1, U0>;
|
||||
type RegInterrupt = Field<U2, U1>;
|
||||
type RegKind = Field<op!(U7 << U2), U2>;
|
||||
```
|
||||
|
||||
现在,当重新访问 `Field` 的构造函数时,你可以忽略掩码和偏移量参数,因为类型中包含该信息:
|
||||
|
||||
```
|
||||
impl<Mask: Unsigned, Offset: Unsigned> Field<Mask, Offset> {
|
||||
fn new(val: u8) -> Self {
|
||||
Field {
|
||||
value: (val << Offset::U8) & Mask::U8,
|
||||
_mask: PhantomData,
|
||||
_offset: PhantomData,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// And to enable our register...
|
||||
fn enable_register(reg: &mut Register) {
|
||||
reg.update(RegEnabled::new(1));
|
||||
}
|
||||
```
|
||||
|
||||
看起来不错,但是……如果你在给定的值是否*适合*该字段方面犯了错误,会发生什么?考虑一个简单的输入错误,你在其中放置了 `10` 而不是 `1`:
|
||||
|
||||
```
|
||||
fn enable_register(reg: &mut Register) {
|
||||
reg.update(RegEnabled::new(10));
|
||||
}
|
||||
```
|
||||
|
||||
在上面的代码中,预期结果是什么?好吧,代码会将启用位设置为 0,因为 `10 & 1 = 0`。那真不幸;最好在尝试写入之前就知道你要写入的值是否适合该字段。事实上,我认为截掉错误字段值的高位是一种*未定义的行为*(哈)。
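可以用一小段 Rust 代码直观地看到这种静默截断(沿用上文第一个 Rust 版本的 `Field` 定义,属示意性示例):

```rust
struct Field {
    value: u8,
}

impl Field {
    fn new(mask: u8, offset: u8, val: u8) -> Self {
        // 与上文第一个版本相同:直接移位并按掩码截断
        Field {
            value: (val << offset) & mask,
        }
    }
}

fn main() {
    // Enabled 字段:掩码 1、偏移 0。传入 10 时高位被静默截掉
    let f = Field::new(1, 0, 10);
    assert_eq!(f.value, 0); // 10 & 1 == 0:启用位反而被清零
}
```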
|
||||
|
||||
### 出于安全考虑使用 Rust
|
||||
|
||||
如何以一般方式检查字段的值是否适合其规定的位置?需要更多类型级别的数字!
|
||||
|
||||
你可以在 `Field` 中添加 `Width` 参数,并使用它来验证给定的值是否适合该字段:
|
||||
|
||||
```
|
||||
struct Field<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> {
|
||||
value: u8,
|
||||
_mask: PhantomData<Mask>,
|
||||
_offset: PhantomData<Offset>,
|
||||
_width: PhantomData<Width>,
|
||||
}
|
||||
|
||||
type RegEnabled = Field<U1, U1, U0>;
|
||||
type RegInterrupt = Field<U1, U2, U1>;
|
||||
type RegKind = Field<U3, op!(U7 << U2), U2>;
|
||||
|
||||
impl<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> Field<Width, Mask, Offset> {
|
||||
fn new(val: u8) -> Option<Self> {
|
||||
if val <= (1 << Width::U8) - 1 {
|
||||
Some(Field {
|
||||
value: (val << Offset::U8) & Mask::U8,
|
||||
_mask: PhantomData,
|
||||
_offset: PhantomData,
|
||||
_width: PhantomData,
|
||||
})
|
||||
} else {
|
||||
None
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
现在,只有给定值适合时,你才能构造一个 `Field` !否则,你将得到 `None` 信号,该信号指示发生了错误,而不是截掉该值的高位并静默写入意外的值。
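下面把这种运行时检查浓缩成一个普通函数来演示(去掉了类型参数;宽度、掩码、偏移量均为演示用参数,属示意性写法):

```rust
// 示意:带宽度检查的字段构造,返回 Option 而不是静默截断
fn new_checked_runtime(width: u8, mask: u8, offset: u8, val: u8) -> Option<u8> {
    // 字段能容纳的最大值是 2^width - 1
    if (val as u16) <= (1u16 << width) - 1 {
        Some((val << offset) & mask)
    } else {
        None
    }
}

fn main() {
    // Enabled 字段:宽度 1、掩码 1、偏移 0
    assert_eq!(new_checked_runtime(1, 1, 0, 1), Some(1));
    // 10 放不进 1 位宽的字段,得到 None 而不是错误的 0
    assert_eq!(new_checked_runtime(1, 1, 0, 10), None);
}
```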
|
||||
|
||||
但是请注意,这将在运行时环境中引发错误。但是,我们事先知道我们想写入的值,还记得吗?鉴于此,我们可以教编译器完全拒绝具有无效字段值的程序 —— 我们不必等到运行它!
|
||||
|
||||
这次,你将向 `new` 的新实现 `new_checked` 中添加一个特征绑定(`where` 子句),该函数要求输入值小于或等于给定字段用 `Width` 所能容纳的最大可能值:
|
||||
|
||||
```
|
||||
struct Field<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> {
|
||||
value: u8,
|
||||
_mask: PhantomData<Mask>,
|
||||
_offset: PhantomData<Offset>,
|
||||
_width: PhantomData<Width>,
|
||||
}
|
||||
|
||||
type RegEnabled = Field<U1, U1, U0>;
|
||||
type RegInterrupt = Field<U1, U2, U1>;
|
||||
type RegKind = Field<U3, op!(U7 << U2), U2>;
|
||||
|
||||
impl<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> Field<Width, Mask, Offset> {
|
||||
const fn new_checked<V: Unsigned>() -> Self
|
||||
where
|
||||
V: IsLessOrEqual<op!((U1 << Width) - U1), Output = True>,
|
||||
{
|
||||
Field {
|
||||
value: (V::U8 << Offset::U8) & Mask::U8,
|
||||
_mask: PhantomData,
|
||||
_offset: PhantomData,
|
||||
_width: PhantomData,
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
只有拥有此属性的数字才实现此特征,因此,如果使用不适合的数字,它将无法编译。让我们看一看!
|
||||
|
||||
```
|
||||
fn enable_register(reg: &mut Register) {
|
||||
reg.update(RegEnabled::new_checked::<U10>());
|
||||
}
|
||||
12 | reg.update(RegEnabled::new_checked::<U10>());
|
||||
| ^^^^^^^^^^^^^^^^ expected struct `typenum::B0`, found struct `typenum::B1`
|
||||
|
|
||||
= note: expected type `typenum::B0`
|
||||
found type `typenum::B1`
|
||||
```
|
||||
|
||||
`new_checked` 将无法生成一个程序,因为该字段的值有错误的高位。你的输入错误不会在运行时环境中才爆炸,因为你永远无法获得一个可以运行的工件。
|
||||
|
||||
就与内存映射硬件交互的安全性而言,你已经把 Rust 用到了极致。但是,你在第一个 C 示例中所写的内容,要比最终得到的这一锅“类型参数粥”简洁得多。当要处理的寄存器可能有数百甚至数千个时,这种写法还应付得过来吗?
|
||||
|
||||
### 让 Rust 恰到好处:既安全又方便使用
|
||||
|
||||
早些时候,我认为手工计算掩码有问题,但我又做了同样有问题的事情 —— 尽管是在类型级别。虽然使用这种方法很不错,但要达到编写任何代码的地步,则需要大量样板和手动转录(我在这里谈论的是类型的同义词)。
|
||||
|
||||
我们的团队想要像 [TockOS mmio 寄存器][7]之类的东西,而以最少的手动转录生成类型安全的实现。我们得出的结果是一个宏,该宏生成必要的样板以获得类似 Tock 的 API 以及基于类型的边界检查。要使用它,请写下一些有关寄存器的信息,其字段、宽度和偏移量以及可选的[枚举][8]类的值(你应该为字段可能具有的值赋予“含义”):
|
||||
|
||||
```
|
||||
register! {
|
||||
// The register's name
|
||||
Status,
|
||||
// The type which represents the whole register.
|
||||
u8,
|
||||
// The register's mode, ReadOnly, ReadWrite, or WriteOnly.
|
||||
RW,
|
||||
// And the fields in this register.
|
||||
Fields [
|
||||
On WIDTH(U1) OFFSET(U0),
|
||||
Dead WIDTH(U1) OFFSET(U1),
|
||||
Color WIDTH(U3) OFFSET(U2) [
|
||||
Red = U1,
|
||||
Blue = U2,
|
||||
Green = U3,
|
||||
Yellow = U4
|
||||
]
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
由此,你可以生成寄存器和字段类型,如上例所示,其中索引:`Width`、`Mask` 和 `Offset` 是从一个字段定义的 `WIDTH` 和 `OFFSET` 部分的输入值派生的。另外,请注意,所有这些数字都是 “类型数字”;它们将直接进入你的 `Field` 定义!
|
||||
|
||||
生成的代码通过为寄存器及字段指定名称来为寄存器及其相关字段提供名称空间。这很绕口,看起来是这样的:
|
||||
|
||||
```
|
||||
mod Status {
|
||||
struct Register(u8);
|
||||
mod On {
|
||||
struct Field; // There is of course more to this definition
|
||||
}
|
||||
mod Dead {
|
||||
struct Field;
|
||||
}
|
||||
mod Color {
|
||||
struct Field;
|
||||
pub const Red: Field = Field::<U1>::new();
|
||||
// &c.
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
生成的 API 包含名义上期望的读取和写入的原语,以获取原始寄存器的值,但它也有办法获取单个字段的值、执行集合操作以及确定是否设置了任何(或全部)位集合的方法。你可以阅读[完整生成的 API][9]上的文档。
|
||||
|
||||
### 粗略检查
|
||||
|
||||
将这些定义用于实际设备会是什么样?代码中是否会充斥着类型参数,从而掩盖了视图中的实际逻辑?
|
||||
|
||||
不会!通过使用类型同义词和类型推断,你实际上根本不必考虑程序的类型层面部分。你可以直接与硬件交互,并自动获得与边界相关的保证。
|
||||
|
||||
这是一个 [UART][10] 寄存器块的示例。我会跳过寄存器本身的声明,因为包括在这里就太多了。而是从寄存器“块”开始,然后帮助编译器知道如何从指向该块开头的指针中查找寄存器。我们通过实现 `Deref` 和 `DerefMut` 来做到这一点:
|
||||
|
||||
```
|
||||
#[repr(C)]
|
||||
pub struct UartBlock {
|
||||
rx: UartRX::Register,
|
||||
_padding1: [u32; 15],
|
||||
tx: UartTX::Register,
|
||||
_padding2: [u32; 15],
|
||||
control1: UartControl1::Register,
|
||||
}
|
||||
|
||||
pub struct Regs {
|
||||
addr: usize,
|
||||
}
|
||||
|
||||
impl Deref for Regs {
|
||||
type Target = UartBlock;
|
||||
|
||||
fn deref(&self) -> &UartBlock {
|
||||
unsafe { &*(self.addr as *const UartBlock) }
|
||||
}
|
||||
}
|
||||
|
||||
impl DerefMut for Regs {
|
||||
fn deref_mut(&mut self) -> &mut UartBlock {
|
||||
unsafe { &mut *(self.addr as *mut UartBlock) }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
一旦到位,使用这些寄存器就像 `read()` 和 `modify()` 一样简单:
|
||||
|
||||
```
|
||||
fn main() {
|
||||
// A pretend register block.
|
||||
let mut x = [0_u32; 33];
|
||||
|
||||
let mut regs = Regs {
|
||||
// Some shenanigans to get at `x` as though it were a
|
||||
// pointer. Normally you'd be given some address like
|
||||
// `0xDEADBEEF` over which you'd instantiate a `Regs`.
|
||||
addr: &mut x as *mut [u32; 33] as usize,
|
||||
};
|
||||
|
||||
assert_eq!(regs.rx.read(), 0);
|
||||
|
||||
regs.control1
|
||||
.modify(UartControl1::Enable::Set + UartControl1::RecvReadyInterrupt::Set);
|
||||
|
||||
// The first bit and the 10th bit should be set.
|
||||
assert_eq!(regs.control1.read(), 0b_10_0000_0001);
|
||||
}
|
||||
```
|
||||
|
||||
当我们处理运行时的值时,就要用到前面提到的 `Option`。这里我使用的是 `unwrap`,但是在一个输入未知的真实程序中,你可能想先检查 `new` 调用返回的 `Some` 值:[^1] [^2]
|
||||
|
||||
```
|
||||
fn main() {
|
||||
// A pretend register block.
|
||||
let mut x = [0_u32; 33];
|
||||
|
||||
let mut regs = Regs {
|
||||
// Some shenanigans to get at `x` as though it were a
|
||||
// pointer. Normally you'd be given some address like
|
||||
// `0xDEADBEEF` over which you'd instantiate a `Regs`.
|
||||
addr: &mut x as *mut [u32; 33] as usize,
|
||||
};
|
||||
|
||||
let input = regs.rx.get_field(UartRX::Data::Field::Read).unwrap();
|
||||
regs.tx.modify(UartTX::Data::Field::new(input).unwrap());
|
||||
}
|
||||
```
|
||||
|
||||
### 解码失败条件
|
||||
|
||||
根据你的个人痛苦忍耐程度,你可能已经注意到这些错误几乎是无法理解的。看一下我所说的不那么微妙的提醒:
|
||||
|
||||
```
|
||||
error[E0271]: type mismatch resolving `<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UTerm, typenum::B1>, typenum::B0>, typenum::B1>, typenum::B0>, typenum::B0> as typenum::IsLessOrEqual<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UTerm, typenum::B1>, typenum::B0>, typenum::B1>, typenum::B0>>>::Output == typenum::B1`
|
||||
--> src/main.rs:12:5
|
||||
|
|
||||
12 | less_than_ten::<U20>();
|
||||
| ^^^^^^^^^^^^^^^^^^^^ expected struct `typenum::B0`, found struct `typenum::B1`
|
||||
|
|
||||
= note: expected type `typenum::B0`
|
||||
found type `typenum::B1`
|
||||
```
|
||||
|
||||
`expected struct typenum::B0, found struct typenum::B1` 部分是有意义的,但是 ` typenum::UInt<typenum::UInt, typenum::UInt...` 到底是什么呢?好吧,`typenum` 将数字表示为二进制 [cons][13] 单元!像这样的错误使操作变得很困难,尤其是当你将多个这些类型级别的数字限制在狭窄的范围内时,你很难知道它在说哪个数字。当然,除非你一眼就能将巴洛克式二进制表示形式转换为十进制表示形式。
|
||||
|
||||
在第 U100 次试图从这个混乱中破译出某些含义之后,我们的一个队友简直“<ruby>气疯了,再也无法忍受<rt>Mad As Hell And Wasn't Going To Take It Anymore</rt></ruby>”,于是做了一个小工具 `tnfilt`,把我们从这种带命名空间的二进制 cons 单元的痛苦中解脱出来。`tnfilt` 将 cons 单元格式的表示法替换为可让人看懂的十进制数字。我们认为其他人也会遇到类似的困难,所以我们分享了 [tnfilt][14]。你可以像这样使用它:
|
||||
|
||||
```
|
||||
$ cargo build 2>&1 | tnfilt
|
||||
```
|
||||
|
||||
它将上面的输出转换为如下所示:
|
||||
|
||||
```
|
||||
error[E0271]: type mismatch resolving `<U20 as typenum::IsLessOrEqual<U10>>::Output == typenum::B1`
|
||||
```
|
||||
|
||||
现在*这*才有意义!
|
||||
|
||||
### 结论
|
||||
|
||||
当在软件与硬件进行交互时,普遍使用内存映射寄存器,并且有无数种方法来描述这些交互,每种方法在易用性和安全性上都有不同的权衡。我们发现使用类型级编程来取得内存映射寄存器交互的编译时检查可以为我们提供制作更安全软件的必要信息。该代码可在 [bounded-registers][15] crate(Rust 包)中找到。
|
||||
|
||||
我们的团队从安全性较高的一面开始,然后尝试找出如何将易用性滑块移近易用端。从这些雄心壮志中,“边界寄存器”就诞生了,我们在 Auxon 公司的冒险中遇到内存映射设备的任何时候都可以使用它。
|
||||
|
||||
* * *
|
||||
|
||||
[^1]: 从技术上讲,从定义上看,从寄存器字段读取的值只能在规定的范围内,但是我们当中没有一个人生活在一个纯净的世界中,而且你永远都不知道外部系统发挥作用时会发生什么。你是在这里接受硬件之神的命令,因此与其强迫你进入“可能的恐慌”状态,还不如给你提供处理“这将永远不会发生”的机会。
|
||||
[^2]: `get_field` 看起来有点奇怪。我正在专门查看 `Field::Read` 部分。`Field` 是一种类型,你需要该类型的实例才能传递给 `get_field`。更干净的 API 可能类似于:`regs.rx.get_field::<UartRx::Data::Field>();` 但是请记住,`Field` 是一种具有固定的宽度、偏移量等索引的类型的同义词。要像这样对 `get_field` 进行参数化,你需要使用更高级的类型。
|
||||
|
||||
* * *
|
||||
|
||||
此内容最初发布在 [Auxon Engineering 博客][16]上,并经许可进行编辑和重新发布。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/c-vs-rust-abstractions
|
||||
|
||||
作者:[Dan Pittman][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dan-pittman
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl (Tools illustration)
|
||||
[2]: https://cs-fundamentals.com/c-programming/history-of-c-programming-language.php
|
||||
[3]: https://www.w3schools.in/c-tutorial/history-of-c/
|
||||
[4]: https://www.rust-lang.org/
|
||||
[5]: https://en.wikipedia.org/wiki/Rust_(programming_language)
|
||||
[6]: https://docs.rs/crate/typenum
|
||||
[7]: https://docs.rs/tock-registers/0.3.0/tock_registers/
|
||||
[8]: https://en.wikipedia.org/wiki/Enumerated_type
|
||||
[9]: https://github.com/auxoncorp/bounded-registers#the-register-api
|
||||
[10]: https://en.wikipedia.org/wiki/Universal_asynchronous_receiver-transmitter
|
||||
|
||||
[13]: https://en.wikipedia.org/wiki/Cons
|
||||
[14]: https://github.com/auxoncorp/tnfilt
|
||||
[15]: https://crates.io/crates/bounded-registers
|
||||
[16]: https://blog.auxon.io/2019/10/25/type-level-registers/
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11835-1.html)
|
||||
[#]: subject: (Get started with this open source to-do list manager)
|
||||
[#]: via: (https://opensource.com/article/20/1/open-source-to-do-list)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
|
||||
|
||||
开始使用开源待办事项清单管理器
|
||||
======
|
||||
|
||||
> 待办事项清单是跟踪任务列表的强大方法。在我们的 20 个使用开源提升生产力的系列的第七篇文章中了解如何使用它。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/31/111103kmv55ploshuso4ot.jpg)
|
||||
|
||||
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
|
||||
|
||||
### 使用 todo 跟踪任务
|
||||
|
||||
任务管理和待办事项清单是我非常喜欢的东西。我是一位生产效率的狂热粉丝(以至于我为此做了一个[播客][2]),我尝试过各种不同的应用。我甚至为此[做过演讲][3]并[写过些文章][4]。因此,当我谈到提高工作效率时,肯定会提到任务管理和待办事项清单工具。
|
||||
|
||||
![Getting fancy with Todo.txt][5]
|
||||
|
||||
说实话,由于简单、跨平台且易于同步,用 [todo.txt][6] 肯定不会错。它是我不断反复提到的两个待办事项清单以及任务管理应用之一(另一个是 [Org 模式][7])。让我反复使用它的原因是它简单、可移植、易于理解,并且有许多很好的附加组件,并且当一台机器有附加组件,而另一台没有,也不会破坏它。由于它是一个 Bash shell 脚本,我还没发现一个无法支持它的系统。
|
||||
|
||||
#### 设置 todo.txt
|
||||
|
||||
首先,你需要安装基本 shell 脚本并将默认配置文件复制到 `~/.todo` 目录:
|
||||
|
||||
```
|
||||
git clone https://github.com/todotxt/todo.txt-cli.git
|
||||
cd todo.txt-cli
|
||||
make
|
||||
sudo make install
|
||||
mkdir ~/.todo
|
||||
cp todo.cfg ~/.todo/config
|
||||
```
|
||||
|
||||
接下来,设置配置文件。一般,我想取消对颜色设置的注释,但必须马上设置的是 `TODO_DIR` 变量:
|
||||
|
||||
```
|
||||
export TODO_DIR="$HOME/.todo"
|
||||
```
|
||||
|
||||
#### 添加待办事件
|
||||
|
||||
要添加第一个待办事件,只需输入 `todo.sh add <NewTodo>` 就能添加。这还将在 `$HOME/.todo/` 中创建三个文件:`todo.txt`、`done.txt` 和 `reports.txt`。
|
||||
|
||||
添加几个项目后,运行 `todo.sh ls` 查看你的待办事项。
|
||||
|
||||
![Basic todo.txt list][8]
|
||||
|
||||
#### 管理任务
|
||||
|
||||
你可以通过给项目设置优先级来稍微改善它。要向项目添加优先级,运行 `todo.sh pri # A`。数字是列表中任务的数量,而字母 `A` 是优先级。你可以将优先级设置为从 A 到 Z,因为这是它的排序方式。
|
||||
|
||||
要完成任务,运行 `todo.sh do #` 来标记项目已完成并将它移动到 `done.txt`。运行 `todo.sh report` 会向 `report.txt` 写入已完成和未完成项的数量。
|
||||
|
||||
所有这三个文件的格式都有详细的说明,因此你可以使用你的文本编辑器修改。`todo.txt` 的基本格式是:
|
||||
|
||||
```
|
||||
(Priority) YYYY-MM-DD Task
|
||||
```
|
||||
|
||||
该日期表示任务的到期日期(如果已设置)。手动编辑文件时,只需在任务前面加一个 `x` 来标记为已完成。运行 `todo.sh archive` 会将这些项目移动到 `done.txt`,你可以编辑该文本文件,并在有时间时将已完成的项目归档。
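结合上面的格式说明,下面这个小示例演示了手工编辑时的标记方式(文件路径用 `mktemp` 临时生成,仅作演示):

```shell
#!/bin/sh
# 按 todo.txt 的基本格式写入一个任务:(优先级) 日期 任务内容
todo_file=$(mktemp)
echo "(A) 2020-01-31 给服务器打补丁" > "$todo_file"

# 在行首加上 "x " 即把任务标记为已完成
sed -i 's/^/x /' "$todo_file"

# 统计已完成条目数
grep -c '^x ' "$todo_file"
```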
|
||||
|
||||
#### 设置重复任务
|
||||
|
||||
我有很多需要按每天/每周/每月的周期重复安排的任务。
|
||||
|
||||
![Recurring tasks with the ice_recur add-on][9]
|
||||
|
||||
这就是 `todo.txt` 的灵活性所在。通过在 `~/.todo.actions.d/` 中使用[附加组件][10],你可以添加命令并扩展基本 `todo.sh` 的功能。附加组件基本上是实现特定命令的脚本。对于重复执行的任务,插件 [ice_recur][11] 应该符合要求。按照其页面上的说明操作,你可以设置任务以非常灵活的方式重复执行。
|
||||
|
||||
![Todour on MacOS][12]
|
||||
|
||||
在该[附加组件目录][10]中有很多附加组件,包括同步到某些云服务,也有链接到桌面或移动端应用的组件,这样你可以随时看到待办列表。
|
||||
|
||||
我只是简单介绍了这个代办事项清单功能,请花点时间深入了解这个工具的强大!它确实可以帮助我每天完成任务。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/open-source-to-do-list
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
|
||||
[2]: https://productivityalchemy.com/
|
||||
[3]: https://www.slideshare.net/AllThingsOpen/getting-to-done-on-the-command-line
|
||||
[4]: https://opensource.com/article/18/2/getting-to-done-agile-linux-command-line
|
||||
[5]: https://opensource.com/sites/default/files/uploads/productivity_7-1.png
|
||||
[6]: http://todotxt.org/
|
||||
[7]: https://orgmode.org/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/productivity_7-2.png (Basic todo.txt list)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/productivity_7-3.png (Recurring tasks with the ice_recur add-on)
|
||||
[10]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory
|
||||
[11]: https://github.com/rlpowell/todo-text-stuff
|
||||
[12]: https://opensource.com/sites/default/files/uploads/productivity_7-4.png (Todour on MacOS)
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (FSSlc)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11813-1.html)
|
||||
[#]: subject: (Locking and unlocking accounts on Linux systems)
|
||||
[#]: via: (https://www.networkworld.com/article/3513982/locking-and-unlocking-accounts-on-linux-systems.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
在 Linux 系统中禁用与解禁用户的账号
|
||||
======
|
||||
|
||||
> 总有这样的时候:有时你需要禁用某位 Linux 用户的账号,有时你还需要反过来解禁用户的账号。
|
||||
本文将介绍一些管理用户访问的命令,并介绍它们背后的原理。
|
||||
|
||||
![](https://images.idgesg.net/images/article/2019/10/cso_cybersecurity_mysterious_padlock_complex_circuits_gold_by_sqback_gettyimages-1177918748_2400x1600-100813830-large.jpg)
|
||||
|
||||
假如你正管理着一台 [Linux][1] 系统,那么很有可能将遇到需要禁用一个账号的情况。可能是某人已经换了职位,他们是否还需要该账号仍是个问题;或许有理由相信再次使用该账号并没有大碍。不管上述哪种情况,知晓如何禁用账号并解禁账号都是你需要知道的知识。
|
||||
|
||||
需要你记住的一件重要的事是尽管有多种方法来禁用账号,但它们并不都达到相同的效果。假如用户使用公钥/私钥来使用该账号而不是使用密码来访问,那么你使用的某些命令来阻止用户获取该账号或许将不会生效。
|
||||
|
||||
### 使用 passwd 来禁用一个账号
|
||||
|
||||
最为简单的用来禁用一个账号的方法是使用 `passwd -l` 命令。例如:
|
||||
|
||||
```
|
||||
$ sudo passwd -l tadpole
|
||||
```
|
||||
|
||||
上面这个命令的效果是在加密后的密码文件 `/etc/shadow` 中,用户对应的那一行的最前面加上一个 `!` 符号。这样就足够阻止用户使用密码来访问账号了。
|
||||
|
||||
在没有使用上述命令前,加密后的密码行如下所示(请注意第一个字符):
|
||||
|
||||
```
|
||||
$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPCnXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7:::
|
||||
```
|
||||
|
||||
而禁用该账号后,这一行将变为:
|
||||
|
||||
```
|
||||
!$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPCnXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7:::
|
||||
```
|
||||
|
||||
在 tadpole 下一次尝试登录时,他可能会使用他原有的密码来尝试多次登录,但就是无法再登录成功了。另一方面,你则可以使用下面的命令来查看他这个账号的状态(`-S` = status):
|
||||
|
||||
```
|
||||
$ sudo passwd -S tadpole
|
||||
tadpole L 10/15/2019 0 99999 7 -1
|
||||
```
|
||||
|
||||
第二项的 `L` 告诉你这个账号已经被禁用了。在该账号被禁用前,这一项应该是 `P`。如果显示的是 `NP` 则意味着该账号还没有设置密码。
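在脚本中判断账号状态时,可以解析 `passwd -S` 输出的第二项。下面用一行示例输出演示(属示意性写法;真实脚本中可以把 `status_line` 换成 `sudo passwd -S "$user"` 的输出):

```shell
#!/bin/sh
# 模拟 passwd -S 的输出;第二项即账号状态
status_line="tadpole L 10/15/2019 0 99999 7 -1"
state=$(echo "$status_line" | awk '{print $2}')

case "$state" in
    L)  echo "账号已禁用" ;;
    P)  echo "账号已设置密码" ;;
    NP) echo "账号尚未设置密码" ;;
esac
```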
|
||||
|
||||
命令 `usermod -L` 也具有相同的效果(添加 `!` 来禁用账号的使用)。
|
||||
|
||||
使用这种方法来禁用某个账号的一个好处是当需要解禁某个账号时非常容易。只需要使用一个文本编辑器或者使用 `passwd -u` 命令来执行相反的操作,即将添加的 `!` 移除即可。
|
||||
|
||||
```
|
||||
$ sudo passwd -u tadpole
|
||||
passwd: password expiry information changed.
|
||||
```
|
||||
|
||||
但使用这种方式的问题是如果用户使用公钥/私钥对的方式来访问他/她的账号,这种方式将不能阻止他们使用该账号。
|
||||
|
||||
### 使用 chage 命令来禁用账号
|
||||
|
||||
另一种禁用用户账号的方法是使用 `chage` 命令,它可以帮助管理用户账号的过期日期。
|
||||
|
||||
```
|
||||
$ sudo chage -E0 tadpole
|
||||
$ sudo passwd -S tadpole
|
||||
tadpole P 10/15/2019 0 99999 7 -1
|
||||
```
|
||||
|
||||
`chage` 命令会稍微修改 `/etc/shadow` 文件。在这个以 `:` 分隔的文件(如下所示)中,该行的第 8 项将被设置为 `0`(先前为空),这就意味着这个账号已经过期了。`chage` 命令用来跟踪密码更改周期的天数,通过选项也可以提供账号过期信息。第 8 项为 `0` 意味着该账号在 1970 年 1 月 1 日(Unix 纪元的起点)就已经过期,因此上面显示的那个命令可以用来禁用账号。
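第 8 项存储的是自 1970 年 1 月 1 日起的天数,可以这样把它换算回日期(示意性写法,依赖 GNU `date`,字段值为演示用):

```shell
#!/bin/sh
# 把 /etc/shadow 第 8 项(过期天数)换算为具体日期
expire_days=0   # chage -E0 写入的值
date -u -d "1970-01-01 + ${expire_days} days" +%Y-%m-%d
```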
|
||||
|
||||
```
|
||||
$ sudo grep tadpole /etc/shadow | fold
|
||||
tadpole:$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPC
|
||||
nXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7::0:
|
||||
^
|
||||
|
|
||||
+--- days until expiration
|
||||
```
|
||||
|
||||
为了执行相反的操作,你可以简单地使用下面的命令将放置在 `/etc/shadow` 文件中的 `0` 移除掉:
|
||||
|
||||
```
|
||||
$ sudo chage -E-1 tadpole
|
||||
```
|
||||
|
||||
一旦一个账号使用这种方式被禁用,即便是无密码的 [SSH][4] 登录也不能再访问该账号了。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3513982/locking-and-unlocking-accounts-on-linux-systems.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
|
||||
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
|
||||
[4]: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11817-1.html)
|
||||
[#]: subject: (What's your favorite Linux terminal trick?)
|
||||
[#]: via: (https://opensource.com/article/20/1/linux-terminal-trick)
|
||||
[#]: author: (Opensource.com https://opensource.com/users/admin)
|
||||
|
||||
你有什么喜欢的 Linux 终端技巧?
|
||||
======
|
||||
|
||||
> 告诉我们你最喜欢的终端技巧,无论是提高生产率的快捷方式还是有趣的彩蛋。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/25/135858accxc70tfxuifxx1.jpg)
|
||||
|
||||
新年伊始始终是评估提高效率的新方法的好时机。许多人尝试使用新的生产力工具,或者想找出如何优化其最常用的流程。终端是一个需要评估的领域,尤其是在开源世界中,有无数种方法可以通过快捷键和命令使终端上的生活更加高效(又有趣!)。
|
||||
|
||||
我们向作者们询问了他们最喜欢的终端技巧。他们分享了一些节省时间的技巧,甚至还有一个有趣的终端彩蛋。你会采用这些键盘快捷键或命令行技巧吗?你有喜欢分享的最爱吗?请发表评论来告诉我们。
|
||||
|
||||
“我找不出哪个是我最喜欢的;每天我都会使用这三个:
|
||||
|
||||
* `Ctrl + L` 来清除屏幕(而不是键入 `clear`)。
|
||||
* `sudo !!` 以 `sudo` 特权运行先前的命令。
|
||||
* `grep -Ev '^#|^$' <file>` 将显示文件内容,不带注释或空行。” —Mars Toktonaliev
|
||||
|
||||
“对我来说,如果我正在使用终端文本编辑器,并且希望将其丢开,以便可以快速执行其他操作,则可以使用 `Ctrl + Z` 将其放到后台,接着执行我需要做的一切,然后用 `fg` 将其带回前台。有时我也会对 `top` 或 `htop` 做同样的事情。我可以将其丢到后台,并在我想检查当前性能时随时将其带回前台。我不会将通常很快能完成的任务在前后台之间切换,它确实可以增强终端上的多任务处理能力。” —Jay LaCroix
|
||||
|
||||
“我经常在某一天在终端中做很多相同的事情,有两件事是每天都不变的:
|
||||
|
||||
* `Ctrl + R` 反向搜索我的 Bash 历史记录以查找我已经运行并且希望再次执行的命令。
|
||||
* 插入号(`^`)替换是最好的,因为我经常做诸如 `sudo dnf search <package name>` 之类的事情,然后,如果我以这种方式找到合适的软件包,则执行 `^search^install` 来重新运行该命令,以 `install` 替换 `search`。
|
||||
|
||||
这些东西肯定是很基本的,但是对我来说却节省了时间。” —Steve Morris
|
||||
|
||||
“我的炫酷终端技巧不是我在终端上执行的操作,而是我使用的终端。有时候我只是想要使用 Apple II 或旧式琥珀色终端的感觉,那我就启动了 Cool-Retro-Term。它的截屏可以在这个[网站][2]上找到。” —Jim Hall
|
||||
|
||||
“可能是用 `ssh -X` 来在其他计算机上运行图形程序。(在某些终端仿真器上,例如 gnome-terminal)用 `C-S c` 和 `C-S v` 复制/粘贴。我不确定这是否有价值(因为它有趣的是以 ssh 启动的图形化)。最近,我需要登录另一台计算机,但是我的孩子们可以在笔记本电脑的大屏幕上看到它。这个[链接][3]向我展示了一些我从未见过的内容:通过局域网从我的笔记本电脑上镜像来自另一台计算机屏幕上的活动会话(`x11vnc -desktop`),并能够同时从两台计算机上进行控制。” —Kyle R. Conway
|
||||
|
||||
“你可以安装 `sl`(`$ sudo apt install sl` 或 `$ sudo dnf install sl`),并且当在 Bash 中输入命令 `sl` 时,一个基于文本的蒸汽机车就会在显示屏上移动。” —Don Watkins
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/linux-terminal-trick
|
||||
|
||||
作者:[Opensource.com][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/admin
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
|
||||
[2]: https://github.com/Swordfish90/cool-retro-term
|
||||
[3]: https://elinux.org/Screen_Casting_on_a_Raspberry_Pi
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (laingke)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11830-1.html)
|
||||
[#]: subject: (Setting up passwordless Linux logins using public/private keys)
|
||||
[#]: via: (https://www.networkworld.com/article/3514607/setting-up-passwordless-linux-logins-using-publicprivate-keys.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
使用公钥/私钥对设定免密的 Linux 登录方式
|
||||
======
|
||||
|
||||
> 使用一组公钥/私钥对让你不需要密码登录到远程 Linux 系统或使用 ssh 运行命令,这会非常方便,但是设置过程有点复杂。下面是帮助你的方法和脚本。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202001/29/141343ldps4muy4kp64k4l.jpg)
|
||||
|
||||
在 [Linux][1] 系统上设置一个允许你无需密码即可远程登录或运行命令的帐户并不难,但是要使它正常工作,你还需要掌握一些繁琐的细节。在本文,我们将完成整个过程,然后给出一个可以帮助处理琐碎细节的脚本。
|
||||
|
||||
设置好之后,如果希望在脚本中运行 `ssh` 命令,尤其是希望配置自动运行的命令,那么免密访问特别有用。
|
||||
|
||||
需要注意的是,你不需要在两个系统上使用相同的用户帐户。实际上,你可以把公用密钥用于系统上的多个帐户或多个系统上的不同帐户。
|
||||
|
||||
设置方法如下。
|
||||
|
||||
### 在哪个系统上启动?
|
||||
|
||||
首先,你需要从要发出命令的系统上着手。那就是你用来创建 `ssh` 密钥的系统。你还需要可以访问远程系统上的帐户并在其上运行这些命令。
|
||||
|
||||
为了使角色清晰明了,我们将场景中的第一个系统称为 “boss”,因为它将发出要在另一个系统上运行的命令。
|
||||
|
||||
因此,命令提示符如下:
|
||||
|
||||
```
|
||||
boss$
|
||||
```
|
||||
|
||||
如果你还没有在 boss 系统上为你的帐户设置公钥/私钥对,请使用如下所示的命令创建一个密钥对。注意,你可以在各种加密算法之间进行选择。(一般使用 RSA 或 DSA。)注意,要在不输入密码的情况下访问系统,你需要在下面的对话框中的两个提示符出不输入密码。
|
||||
|
||||
如果你已经有一个与此帐户关联的公钥/私钥对,请跳过此步骤。
|
||||
|
||||
```
|
||||
boss$ ssh-keygen -t rsa
|
||||
Generating public/private rsa key pair.
|
||||
Enter file in which to save the key (/home/myself/.ssh/id_rsa):
|
||||
Enter passphrase (empty for no passphrase): <== 按下回车键即可
|
||||
Enter same passphrase again: <== 按下回车键即可
|
||||
Your identification has been saved in /home/myself/.ssh/id_rsa.
|
||||
Your public key has been saved in /home/myself/.ssh/id_rsa.pub.
|
||||
The key fingerprint is:
|
||||
SHA256:1zz6pZcMjA1av8iyojqo6NVYgTl1+cc+N43kIwGKOUI myself@boss
|
||||
The key's randomart image is:
|
||||
+---[RSA 3072]----+
|
||||
| . .. |
|
||||
| E+ .. . |
|
||||
| .+ .o + o |
|
||||
| ..+.. .o* . |
|
||||
| ... So+*B o |
|
||||
| + ...==B . |
|
||||
| . o . ....++. |
|
||||
|o o . . o..o+ |
|
||||
|=..o.. ..o o. |
|
||||
+----[SHA256]-----+
|
||||
```
|
||||
|
||||
上面显示的命令将创建公钥和私钥。其中公钥用于加密,私钥用于解密。因此,这些密钥之间的关系是关键的,私有密钥**绝不**应该被共享。相反,它应该保存在 boss 系统的 `.ssh` 文件夹中。
|
||||
|
||||
注意,在创建时,你的公钥和私钥将会保存在 `.ssh` 文件夹中。
|
||||
|
||||
下一步是将**公钥**复制到你希望从 boss 系统免密访问的系统。你可以使用 `scp` 命令来完成此操作,但此时你仍然需要输入密码。在本例中,该系统称为 “target”。
|
||||
|
||||
```
|
||||
boss$ scp .ssh/id_rsa.pub myacct@target:/home/myaccount
|
||||
myacct@target's password:
|
||||
```
|
||||
|
||||
你需要安装公钥在 target 系统(将运行命令的系统)上。如果你没有 `.ssh` 目录(例如,你从未在该系统上使用过 `ssh`),运行这样的命令将为你设置一个目录:
|
||||
|
||||
```
|
||||
target$ ssh localhost date
|
||||
target$ ls -la .ssh
|
||||
total 12
|
||||
drwx------ 2 myacct myacct 4096 Jan 19 11:48 .
|
||||
drwxr-xr-x 6 myacct myacct 4096 Jan 19 11:49 ..
|
||||
-rw-r--r-- 1 myacct myacct 222 Jan 19 11:48 known_hosts
|
||||
```
|
||||
|
||||
仍然在目标系统上,你需要将从“boss”系统传输的公钥添加到 `.ssh/authorized_keys` 文件中。如果该文件已经存在,使用下面的命令将把它添加到文件的末尾;如果文件不存在,则创建该文件并添加密钥。
|
||||
|
||||
```
|
||||
target$ cat id_rsa.pub >> .ssh/authorized_keys
|
||||
```
|
||||
|
||||
下一步,你需要确保你的 `authorized_keys` 文件权限为 600。如果还不是,执行命令 `chmod 600 .ssh/authorized_keys`。
|
||||
|
||||
```
|
||||
target$ ls -l authorized_keys
|
||||
-rw------- 1 myself myself 569 Jan 19 12:10 authorized_keys
|
||||
```
|
||||
|
||||
还要检查目标系统上 `.ssh` 目录的权限是否设置为 700。如果需要,执行 `chmod 700 .ssh` 命令修改权限。
|
||||
|
||||
```
|
||||
target$ ls -ld .ssh
|
||||
drwx------ 2 myacct myacct 4096 Jan 14 15:54 .ssh
|
||||
```
|
||||
|
||||
此时,你应该能够从 boss 系统远程免密运行命令到目标系统。除非目标系统上的目标用户帐户拥有与你试图连接的用户和主机相同的旧公钥,否则这应该可以工作。如果是这样,你应该删除早期的(并冲突的)条目。
|
||||
|
||||
### 使用脚本
|
||||
|
||||
使用脚本可以使某些工作变得更加容易。但是,在下面的示例脚本中,你会遇到的一个烦人的问题是,在配置免密访问权限之前,你必须多次输入目标用户的密码。一种选择是将脚本分为两部分——需要在 boss 系统上运行的命令和需要在 target 系统上运行的命令。
|
||||
|
||||
这是“一步到位”版本的脚本:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
# NOTE: This script requires that you have the password for the remote acct
|
||||
# in order to set up password-free access using your public key
|
||||
|
||||
LOC=`hostname` # the local system from which you want to run commands from
|
||||
# wo a password
|
||||
|
||||
# get target system and account
|
||||
echo -n "target system> "
|
||||
read REM
|
||||
echo -n "target user> "
|
||||
read user
|
||||
|
||||
# create a key pair if no public key exists
|
||||
if [ ! -f ~/.ssh/id_rsa.pub ]; then
|
||||
ssh-keygen -t rsa
|
||||
fi
|
||||
|
||||
# ensure a .ssh directory exists in the remote account
|
||||
echo checking for .ssh directory on remote system
|
||||
ssh $user@$REM "if [ ! -d /home/$user/.ssh ]; then mkdir /home/$user/.ssh; fi"
|
||||
|
||||
# share the public key (using local hostname)
|
||||
echo copying the public key
|
||||
scp ~/.ssh/id_rsa.pub $user@$REM:/home/$user/$user-$LOC.pub
|
||||
|
||||
# put the public key into the proper location
|
||||
echo adding key to authorized_keys
|
||||
ssh $user@$REM "cat /home/$user/$user-$LOC.pub >> /home/$user/.ssh/authorized_keys"
|
||||
|
||||
# set permissions on authorized_keys and .ssh (might be OK already)
|
||||
echo setting permissions
|
||||
ssh $user@$REM "chmod 600 ~/.ssh/authorized_keys"
|
||||
ssh $user@$REM "chmod 700 ~/.ssh"
|
||||
|
||||
# try it out -- should NOT ask for a password
|
||||
echo testing -- if no password is requested, you are all set
|
||||
ssh $user@$REM /bin/hostname
|
||||
```
|
||||
|
||||
脚本已经配置为在你每次必须输入密码时告诉你它正在做什么。交互看起来是这样的:
|
||||
|
||||
```
|
||||
$ ./rem_login_setup
|
||||
target system> fruitfly
|
||||
target user> lola
|
||||
checking for .ssh directory on remote system
|
||||
lola@fruitfly's password:
|
||||
copying the public key
|
||||
lola@fruitfly's password:
|
||||
id_rsa.pub 100% 567 219.1KB/s 00:00
|
||||
adding key to authorized_keys
|
||||
lola@fruitfly's password:
|
||||
setting permissions
|
||||
lola@fruitfly's password:
|
||||
testing -- if no password is requested, you are all set
|
||||
fruitfly
|
||||
```
|
||||
|
||||
在上面的场景之后,你就可以像这样登录到 lola 的帐户:
|
||||
|
||||
```
|
||||
$ ssh lola@fruitfly
|
||||
[lola@fruitfly ~]$
|
||||
```
|
||||
|
||||
一旦设置了免密登录,你就可以不需要键入密码从 boss 系统登录到 target 系统,并且运行任意的 `ssh` 命令。以这种免密的方式运行并不意味着你的帐户不安全。然而,根据 target 系统的性质,保护你在 boss 系统上的密码可能变得更加重要。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3514607/setting-up-passwordless-linux-logins-using-publicprivate-keys.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[laingke](https://github.com/laingke)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
|
||||
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[3]: https://www.networkworld.com/article/3143050/linux/linux-hardening-a-15-step-checklist-for-a-secure-linux-server.html#tk.nww-fsb
|
||||
[4]: https://www.facebook.com/NetworkWorld/
|
||||
[5]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,165 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11827-1.html)
|
||||
[#]: subject: (Wine 5.0 is Released! Here’s How to Install it)
|
||||
[#]: via: (https://itsfoss.com/wine-5-release/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Wine 5.0 发布了!
|
||||
======
|
||||
|
||||
> Wine 的一个新的主要版本发布了。使用 Wine 5.0,在 Linux 上运行 Windows 应用程序和游戏的体验得到进一步改进。
|
||||
|
||||
通过一些努力,你可以使用 Wine [在 Linux 上运行 Windows 应用程序][1]。当你必须使用一个仅在 Windows 上可用的软件时,Wine 是一个可以尝试的工具。它支持许多这样的软件。
|
||||
|
||||
Wine 的一个新的主要版本发布了,即 Wine 5.0,距它的 4.0 版本发布几乎过去了一年。
|
||||
|
||||
Wine 5.0 版本引入了几个主要特性和很多显著的更改/改进。在这篇文章中,我将重点介绍其中的新特性,并附上安装说明。
|
||||
|
||||
### 在 Wine 5.0 中有什么新的特性?
|
||||
|
||||
![][2]
|
||||
|
||||
如他们的[官方声明][3]所述,这是 5.0 发布版本中的关键更改:
|
||||
|
||||
* PE 格式的内置模块。
|
||||
* 支持多显示器。
|
||||
* 重新实现了 XAudio2。
|
||||
* 支持 Vulkan 1.1。
|
||||
* 支持微软安装程序(MSI)补丁文件。
|
||||
* 性能提升。
|
||||
|
||||
因此,随着 Vulkan 1.1 和对多显示器的支持 —— Wine 5.0 发布版本是一件大事。
|
||||
|
||||
除了上面强调的这些关键内容以外,在新版本包含的成千上万的更改/改进中,你还可以期待更好的控制器支持。
|
||||
|
||||
值得注意的是,此版本特别纪念了 **Józef Kucia**(vkd3d 项目的首席开发人员)。
|
||||
|
||||
他们也已经在[发布说明][4]中提到这一点:
|
||||
|
||||
> 这个发布版本特别纪念了 Józef Kucia,他于 2019 年 8 月去世,年仅 30 岁。Józef 是 Wine 的 Direct3D 实现的一个主要贡献者,并且是 vkd3d 项目的首席开发人员。我们都非常怀念他的技能和友善。
|
||||
|
||||
### 如何在 Ubuntu 和 Linux Mint 上安装 Wine 5.0
|
||||
|
||||
> 注意:
|
||||
|
||||
> 如果你在以前安装过 Wine,你应该将其完全移除,以(如你希望的)避免一些冲突。此外,WineHQ 存储库的密钥最近已被更改,针对你的 Linux 发行版的更多的操作指南,你可以参考它的[下载页面][5]。
|
||||
|
||||
Wine 5.0 的源码可在它的[官方网站][3]上获得。为了使其工作,你可以阅读更多关于[构建 Wine][6] 的信息。基于 Arch 的用户应该很快就会得到它。
|
||||
|
||||
在这里,我将向你展示在 Ubuntu 和其它基于 Ubuntu 的发行版上安装 Wine 5.0 的步骤。请耐心,并按照步骤一步一步安装和使用 Wine。这里涉及几个步骤。
|
||||
|
||||
请记住,Wine 安装了太多软件包。你会看到大量的软件包列表,下载大小约为 1.3 GB。
|
||||
|
||||
### 在 Ubuntu 上安装 Wine 5.0(不适用于 Linux Mint)
|
||||
|
||||
首先,使用这个命令来移除现存的 Wine:
|
||||
|
||||
```
|
||||
sudo apt remove winehq-stable wine-stable wine1.6 wine-mono wine-geco winetricks
|
||||
```
|
||||
|
||||
然后确保添加 32 位体系结构支持:
|
||||
|
||||
```
|
||||
sudo dpkg --add-architecture i386
|
||||
```
|
||||
|
||||
下载并添加官方 Wine 存储库密钥:
|
||||
|
||||
```
|
||||
wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add -
|
||||
```
|
||||
|
||||
现在,接下来的步骤是添加存储库。为此,你首先需要[知道你的 Ubuntu 版本][7]。
|
||||
|
||||
对于 **Ubuntu 18.04 和 19.04**,用这个 PPA 添加 FAudio 依赖, **Ubuntu 19.10** 不需要它:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport
|
||||
```
|
||||
|
||||
现在使用此命令添加存储库:
|
||||
|
||||
```
|
||||
sudo apt-add-repository "deb https://dl.winehq.org/wine-builds/ubuntu $(lsb_release -cs) main"
|
||||
```
|
||||
|
||||
现在你已经添加了正确的存储库,可以使用以下命令安装 Wine 5.0:
|
||||
|
||||
```
|
||||
sudo apt update && sudo apt install --install-recommends winehq-stable
|
||||
```
|
||||
|
||||
请注意,尽管[在软件包列表中 Wine 5 被列为稳定版][8],但你仍可能会在 winehq-stable 中看到 Wine 4.0.3。也许它还没有分发到所有地理位置。截至今天早上,我已经可以看到 Wine 5.0 了。
|
||||
|
||||
### 在 Linux Mint 19.1、19.2 和 19.3 中安装 Wine 5.0
|
||||
|
||||
正如一些读者向我反馈的那样,[apt-add 存储库命令][9]不适用于 Linux Mint 19.x 系列。
|
||||
|
||||
这是添加自定义存储库的另一种方法。你必须执行与 Ubuntu 相同的步骤。如删除现存的 Wine 包:
|
||||
|
||||
```
|
||||
sudo apt remove winehq-stable wine-stable wine1.6 wine-mono wine-geco winetricks
|
||||
```
|
||||
|
||||
添加 32 位支持:
|
||||
|
||||
```
|
||||
sudo dpkg --add-architecture i386
|
||||
```
|
||||
|
||||
然后添加 GPG 密钥:
|
||||
|
||||
```
|
||||
wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add -
|
||||
```
|
||||
|
||||
添加 FAudio 依赖:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport
|
||||
```
|
||||
|
||||
现在为 Wine 存储库创建一个新条目:
|
||||
|
||||
```
|
||||
sudo sh -c "echo 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main' >> /etc/apt/sources.list.d/winehq.list"
|
||||
```
|
||||
|
||||
更新软件包列表并安装 Wine:
|
||||
|
||||
```
|
||||
sudo apt update && sudo apt install --install-recommends winehq-stable
|
||||
```
|
||||
|
||||
### 总结
|
||||
|
||||
你尝试过最新的 Wine 5.0 发布版本吗?如果是的话,在运行中你看到什么改进?
|
||||
|
||||
在下面的评论区域,让我知道你对新的发布版本的看法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/wine-5-release/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/use-windows-applications-linux/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/wine_5.png?ssl=1
|
||||
[3]: https://www.winehq.org/news/2020012101
|
||||
[4]: https://www.winehq.org/announce/5.0
|
||||
[5]: https://wiki.winehq.org/Download
|
||||
[6]: https://wiki.winehq.org/Building_Wine
|
||||
[7]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
|
||||
[8]: https://dl.winehq.org/wine-builds/ubuntu/dists/bionic/main/binary-amd64/
|
||||
[9]: https://itsfoss.com/add-apt-repository-command-not-found/
|
626
published/20200103 Add scorekeeping to your Python game.md
Normal file
@ -0,0 +1,626 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11839-1.html)
|
||||
[#]: subject: (Add scorekeeping to your Python game)
|
||||
[#]: via: (https://opensource.com/article/20/1/add-scorekeeping-your-python-game)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
添加计分到你的 Python 游戏
|
||||
======
|
||||
|
||||
> 在本系列的第十一篇有关使用 Python Pygame 模块进行编程的文章中,显示玩家获得战利品或受到伤害时的得分。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202002/01/154838led0y08y2aqetz1q.jpg)
|
||||
|
||||
这是仍在进行中的、关于在 [Python 3][2] 中使用 [Pygame][3] 模块创建电脑游戏的系列文章的第十一部分。先前的文章是:
|
||||
|
||||
* [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][4]
|
||||
* [使用 Python 和 Pygame 模块构建一个游戏框架][5]
|
||||
* [如何在你的 Python 游戏中添加一个玩家][6]
|
||||
* [用 Pygame 使你的游戏角色移动起来][7]
|
||||
* [如何向你的 Python 游戏中添加一个敌人][8]
|
||||
* [在 Pygame 游戏中放置平台][19]
|
||||
* [在你的 Python 游戏中模拟引力][9]
|
||||
* [为你的 Python 平台类游戏添加跳跃功能][10]
|
||||
* [使你的 Python 游戏玩家能够向前和向后跑][11]
|
||||
* [在你的 Python 平台类游戏中放一些奖励][12]
|
||||
|
||||
如果你已经跟随这一系列很久,那么你已经学习了使用 Python 创建一个视频游戏所需的所有基本语法和模式。然而,它仍然缺少一个至关重要的组成部分。这一组成部分不仅仅对用 Python 编写游戏重要;不管你探究计算机的哪个分支,你都必须精通它:作为一个程序员,通过阅读一种语言或库的文档来学习新的技巧。
|
||||
|
||||
幸运的是,你正在阅读本文的事实表明你熟悉文档。为了使你的平台类游戏更加美观,在这篇文章中,你将在游戏屏幕上添加得分和生命值显示。不过,教你如何找到一个库的功能以及如何使用这些新的功能的这节课程并没有多神秘。
|
||||
|
||||
### 在 Pygame 中显示得分
|
||||
|
||||
现在,既然你有了可以被玩家收集的奖励,那就有充分的理由来记录分数,以便你的玩家看到他们收集了多少奖励。你也可以跟踪玩家的生命值,以便当他们被敌人击中时会有相应结果。
|
||||
|
||||
你已经有了跟踪分数和生命值的变量,但是这一切都发生在后台。这篇文章教你在游戏期间在游戏屏幕上以你选择的一种字体来显示这些统计数字。
|
||||
|
||||
### 阅读文档
|
||||
|
||||
大多数 Python 模块都有文档,即使那些没有文档的模块,也能通过 Python 的帮助功能来进行最小的文档化。[Pygame 的主页面][13] 链接了它的文档。不过,Pygame 是一个带有很多文档的大模块,并且它的文档不像在 Opensource.com 上的文章一样,以同样易理解的(和友好的、易解释的、有用的)叙述风格来撰写的。它们是技术文档,并且列出在模块中可用的每个类和函数,各自要求的输入类型等等。如果你不适应参考代码组件描述,这可能会令人不知所措。
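举个例子,Python 内置的 `help()` 函数就可以打印出任何已安装模块或函数的文档字符串(这里以标准库的 `os.path.join` 为例,并非本文游戏代码的一部分,Pygame 的各个模块同样适用):

```
# 用内置的 help() 查看任意已安装模块或函数的文档
# (这里以标准库的 os.path.join 为例)
import os.path

help(os.path.join)  # 打印 os.path.join 的文档字符串
```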
|
||||
|
||||
在烦恼于库的文档前,第一件要做的事,就是来想想你正在尝试达到的目标。在这种情况下,你想在屏幕上显示玩家的得分和生命值。
|
||||
|
||||
在你确定你需要的结果后,想想它需要哪些组件。你可以从变量和函数的角度考虑这一点;或者,如果你还没有自然地想到这一点,你可以先进行一般性的思考。你可能意识到需要一些文本来显示分数,并希望 Pygame 在屏幕上绘制这些文本。如果你仔细思考,你可能会意识到这与在屏幕上渲染一个玩家、奖励或一个平台并没有多么大的不同。
|
||||
|
||||
从技术上讲,你*可以*使用数字的图片,并让 Pygame 显示这些图片。它不是达到你目标的最容易的方法,但是如果它是你唯一知道的方法,那么它也是一个有效的方法。不过,如果你参考 Pygame 的文档,你会看到列出的模块之一是 `font`,Pygame 借助它可以让在屏幕上打印文本变得像输入文字一样容易。
|
||||
|
||||
### 解密技术文档
|
||||
|
||||
`font` 文档页面以 `pygame.font.init()` 开始,它列出了用于初始化字体模块的函数。它由 `pygame.init()` 自动地调用,你已经在代码中调用了它。再强调一次,从技术上讲,你已经到达一个*足够好*的点。虽然你尚不知道*如何做*,你知道你*能够*使用 `pygame.font` 函数来在屏幕上打印文本。
|
||||
|
||||
然而,如果你阅读更多一些,你会找到这里还有一种更好的方法来打印字体。`pygame.freetype` 模块在文档中的描述方式如下:
|
||||
|
||||
> `pygame.freetype` 模块是 `pygame.font` 模块的一个替代品,用于加载和渲染字体。它有原模块的所有功能,外加很多新的功能。
|
||||
|
||||
在 `pygame.freetype` 文档页面的下方,有一些示例代码:
|
||||
|
||||
```
|
||||
import pygame
|
||||
import pygame.freetype
|
||||
```
|
||||
|
||||
你的代码应该已经导入了 Pygame,不过,请修改你的 `import` 语句以包含 Freetype 模块:
|
||||
|
||||
```
|
||||
import pygame
|
||||
import sys
|
||||
import os
|
||||
import pygame.freetype
|
||||
```
|
||||
|
||||
### 在 Pygame 中使用字体
|
||||
|
||||
从 `font` 模块的描述中可以看出,显然 Pygame 需要使用一种字体(不管是你提供的字体,还是 Pygame 内置的默认字体)来在屏幕上渲染文本。滚动浏览 `pygame.freetype` 文档来找到 `pygame.freetype.Font` 函数:
|
||||
|
||||
```
|
||||
pygame.freetype.Font
|
||||
从支持的字体文件中创建一个新的字体实例。
|
||||
|
||||
Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font
|
||||
|
||||
pygame.freetype.Font.name
|
||||
符合规则的字体名称。
|
||||
|
||||
pygame.freetype.Font.path
|
||||
字体文件路径。
|
||||
|
||||
pygame.freetype.Font.size
|
||||
在渲染中使用的默认点大小
|
||||
```
|
||||
|
||||
这描述了如何在 Pygame 中构建一个字体“对象”。把屏幕上的一个简单对象视为一些代码属性的组合对你来说可能不太自然,但是这与你构建英雄和敌人精灵的方式非常类似。你需要一个字体文件,而不是一个图像文件。在你有一个字体文件后,你可以在你的代码中使用 `pygame.freetype.Font` 函数来创建一个字体对象,然后使用该对象来在屏幕上渲染文本。
|
||||
|
||||
因为并不是世界上的每个人的电脑上都有完全一样的字体,因此将你选择的字体与你的游戏捆绑在一起是很重要的。要捆绑字体,首先在你的游戏文件夹中创建一个新的目录,放在你为图像而创建的文件目录旁边。称其为 `fonts` 。
|
||||
|
||||
即使你的计算机操作系统随附了几种字体,把这些字体分发给其他人也是非法的。这看起来很奇怪,但法律就是这样运作的。如果想与你的游戏一起随附一种字体,你必须找到一种开源或知识共享授权的字体,其许可允许你随游戏一起提供该字体。
|
||||
|
||||
专门提供自由和合法字体的网站包括:
|
||||
|
||||
* [Font Library][14]
|
||||
* [Font Squirrel][15]
|
||||
* [League of Moveable Type][16]
|
||||
|
||||
当你找到你喜欢的字体后,下载下来。解压缩 ZIP 或 [TAR][17] 文件,并移动 `.ttf` 或 `.otf` 文件到你的项目目录下的 `fonts` 文件夹中。
|
||||
|
||||
你没有安装字体到你的计算机上。你只是放置字体到你游戏的 `fonts` 文件夹中,以便 Pygame 可以使用它。如果你想,你*可以*在你的计算机上安装该字体,但是没有必要。重要的是将字体放在你的游戏目录中,这样 Pygame 可以“描绘”字体到屏幕上。
|
||||
|
||||
如果字体文件的名称复杂且带有空格或特殊字符,只需要重新命名它即可。文件名称是完全任意的,并且对你来说,文件名称越简单,越容易将其键入你的代码中。
|
||||
|
||||
现在告诉 Pygame 你的字体。从文档中你知道,当你至少提供了字体文件路径给 `pygame.freetype.Font` 时(文档明确指出所有其余属性都是可选的),你将在返回中获得一个字体对象:
|
||||
|
||||
```
|
||||
Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font
|
||||
```
|
||||
|
||||
创建一个称为 `myfont` 的新变量来充当你在游戏中字体,并放置 `Font` 函数的结果到这个变量中。这个示例中使用 `amazdoom.ttf` 字体,但是你可以使用任何你想使用的字体。在你的设置部分放置这些代码:
|
||||
|
||||
```
|
||||
font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),"fonts","amazdoom.ttf")
|
||||
font_size = tx
|
||||
myfont = pygame.freetype.Font(font_path, font_size)
|
||||
```
|
||||
|
||||
### 在 Pygame 中显示文本
|
||||
|
||||
现在你已经创建一个字体对象,你需要一个函数来绘制你想绘制到屏幕上的文本。这和你在你的游戏中绘制背景和平台是相同的原理。
|
||||
|
||||
首先,创建一个函数,并使用 `myfont` 对象来创建一些文本,设置颜色为某些 RGB 值。这必须是一个全局函数;它不属于任何具体的类:
|
||||
|
||||
```
|
||||
def stats(score,health):
|
||||
myfont.render_to(world, (4, 4), "Score:"+str(score), WHITE, None, size=64)
|
||||
myfont.render_to(world, (4, 72), "Health:"+str(health), WHITE, None, size=64)
|
||||
```
|
||||
|
||||
当然,你此刻已经知道,如果它不在主循环中,你的游戏将不会发生任何事,所以在文件的底部添加一个对你的 `stats` 函数的调用:
|
||||
|
||||
```
|
||||
for e in enemy_list:
|
||||
e.move()
|
||||
stats(player.score,player.health) # draw text
|
||||
pygame.display.flip()
|
||||
```
|
||||
|
||||
尝试你的游戏。
|
||||
|
||||
当玩家收集奖励品时,得分会上升。当玩家被敌人击中时,生命值下降。成功!
|
||||
|
||||
![Keeping score in Pygame][18]
|
||||
|
||||
不过,这里有一个问题。当一个玩家被敌人击中时,生命值会*一路*下降到底,这是不公平的。你刚刚发现了一个非致命的错误。非致命的错误是应用程序中的小问题,(通常)不会阻止应用程序启动,甚至不会导致其停止工作,但是它们要么没有意义,要么会惹恼用户。下面是解决这个问题的方法。
|
||||
|
||||
### 修复生命值计数
|
||||
|
||||
当前生命值系统的问题是,只要敌人接触着玩家,Pygame 时钟每滴答一次,生命值就会减少一次。这意味着一个缓慢移动的敌人可能在一次遭遇中就把玩家的生命值降到 -200,这不公平。当然,你可以给玩家一个 10000 的初始生命值而不去管它;这可以工作,而且可能没有人会注意到。但是这里有一个更好的方法。
|
||||
|
||||
当前,你的代码侦查出一个玩家和一个敌人发生碰撞的时候。生命值问题的修复是检测*两个*独立的事件:什么时候玩家和敌人碰撞,并且,在它们碰撞后,什么时候它们*停止*碰撞。
|
||||
|
||||
首先,在你的玩家类中,创建一个变量来表示玩家是否正与敌人发生碰撞:
|
||||
|
||||
```
|
||||
self.frame = 0
|
||||
self.health = 10
|
||||
self.damage = 0
|
||||
```
|
||||
|
||||
在你的 `Player` 类的 `update` 函数中,*移除*这块代码块:
|
||||
|
||||
```
|
||||
for enemy in enemy_hit_list:
|
||||
self.health -= 1
|
||||
#print(self.health)
|
||||
```
|
||||
|
||||
并且在它的位置,只要玩家当前没有被击中,检查碰撞:
|
||||
|
||||
```
|
||||
if self.damage == 0:
|
||||
for enemy in enemy_hit_list:
|
||||
if not self.rect.contains(enemy):
|
||||
self.damage = self.rect.colliderect(enemy)
|
||||
```
|
||||
|
||||
你可能会在你删除的语句块和你刚刚添加的语句块之间看到相似之处。它们都在做相同的工作,但是新的代码更复杂。最重要的是,只有当玩家*当前*没有被击中时,新的代码才运行。这意味着,当一个玩家和敌人碰撞时,这些代码运行一次,而不是像以前那样一直发生碰撞。
|
||||
|
||||
新的代码使用了两个新的 Pygame 函数。`self.rect.contains` 函数检查一个敌人当前是否在玩家的边界框内;当碰撞发生时,`self.rect.colliderect` 会把你新建的 `self.damage` 变量设置为 1,不管碰撞被检测到多少次。
|
||||
|
||||
现在,即使被一个敌人击中 3 秒,对 Pygame 来说仍然看作一次击中。
|
||||
|
||||
我通过通读 Pygame 的文档发现了这些函数。你没有必要一口气读完全部文档,也没有必要阅读每个函数的每个单词。不过,把时间花在你正在使用的新库或新模块的文档上是很重要的;否则,你极有可能是在重新发明轮子。不要花费一个下午的时间,东拼西凑地去解决一个你正在使用的框架早已解决了的问题。阅读文档,了解其中的函数,并从别人的工作中获益!
|
||||
|
||||
最后,添加另一个代码语句块,来侦测玩家和敌人何时不再接触;直到那时,才减去玩家的一点生命值。
|
||||
|
||||
```
|
||||
if self.damage == 1:
|
||||
idx = self.rect.collidelist(enemy_hit_list)
|
||||
if idx == -1:
|
||||
self.damage = 0 # set damage back to 0
|
||||
self.health -= 1 # subtract 1 hp
|
||||
```
|
||||
|
||||
注意,*只有*当玩家被击中时,这段新的代码才会被触发。这意味着,当你的玩家在游戏世界中探索或收集奖励时,这段代码不会运行。它仅在 `self.damage` 变量被激活时才运行。
|
||||
|
||||
当代码运行时,它使用 `self.rect.collidelist` 来查看玩家是否*仍然*接触在你敌人列表中的敌人(当其未侦查到碰撞时,`collidelist` 返回 -1)。在它没有接触敌人时,是该处理 `self.damage` 的时机:通过设置 `self.damage` 变量回到 0 来使其无效,并减少一点生命值。
|
||||
|
||||
现在尝试你的游戏。
|
||||
|
||||
### 得分反应
|
||||
|
||||
现在,你有一个来让你的玩家知道它们分数和生命值的方法,当你的玩家达到某些里程碑时,你可以确保某些事件发生。例如,也许这里有一个特殊的恢复一些生命值的奖励项目。也许一个到达 0 生命值的玩家不得不从一个关卡的起始位置重新开始。
|
||||
|
||||
你可以在你的代码中检查这些事件,并且相应地操纵你的游戏世界。你已经知道该怎么做,所以请浏览文档来寻找新的技巧,并且独立地尝试这些技巧。
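举个例子,下面是一个极简的示意(并非本文游戏代码的一部分,函数名 `check_respawn` 和各个默认值都是假设的),演示当生命值耗尽时,如何把玩家送回关卡起点并恢复生命值:

```
# 示意:生命值耗尽时重置玩家(函数名与默认值均为示例假设)
def check_respawn(health, x, y, start_x=0, start_y=0, full_health=10):
    """若生命值降到 0 或以下,返回重置后的 (health, x, y);否则原样返回。"""
    if health <= 0:
        return full_health, start_x, start_y
    return health, x, y

# 在主循环里,可以在 player.update() 之后做类似的检查:
# player.health, player.rect.x, player.rect.y = check_respawn(
#     player.health, player.rect.x, player.rect.y)
```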
|
||||
|
||||
这里是到目前为止所有的代码:
|
||||
|
||||
```
|
||||
#!/usr/bin/env python3
|
||||
# draw a world
|
||||
# add a player and player control
|
||||
# add player movement
|
||||
# add enemy and basic collision
|
||||
# add platform
|
||||
# add gravity
|
||||
# add jumping
|
||||
# add scrolling
|
||||
# add loot
|
||||
# add score
|
||||
|
||||
# GNU All-Permissive License
|
||||
# Copying and distribution of this file, with or without modification,
|
||||
# are permitted in any medium without royalty provided the copyright
|
||||
# notice and this notice are preserved. This file is offered as-is,
|
||||
# without any warranty.
|
||||
|
||||
import pygame
|
||||
import sys
|
||||
import os
|
||||
import pygame.freetype
|
||||
|
||||
'''
|
||||
Objects
|
||||
'''
|
||||
|
||||
class Platform(pygame.sprite.Sprite):
|
||||
# x location, y location, img width, img height, img file
|
||||
def __init__(self,xloc,yloc,imgw,imgh,img):
|
||||
pygame.sprite.Sprite.__init__(self)
|
||||
self.image = pygame.image.load(os.path.join('images',img)).convert()
|
||||
self.image.convert_alpha()
|
||||
self.rect = self.image.get_rect()
|
||||
self.rect.y = yloc
|
||||
self.rect.x = xloc
|
||||
|
||||
class Player(pygame.sprite.Sprite):
|
||||
'''
|
||||
Spawn a player
|
||||
'''
|
||||
def __init__(self):
|
||||
pygame.sprite.Sprite.__init__(self)
|
||||
self.movex = 0
|
||||
self.movey = 0
|
||||
self.frame = 0
|
||||
self.health = 10
|
||||
self.damage = 0
|
||||
self.collide_delta = 0
|
||||
self.jump_delta = 6
|
||||
self.score = 1
|
||||
self.images = []
|
||||
for i in range(1,9):
|
||||
img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
|
||||
img.convert_alpha()
|
||||
img.set_colorkey(ALPHA)
|
||||
self.images.append(img)
|
||||
self.image = self.images[0]
|
||||
self.rect = self.image.get_rect()
|
||||
|
||||
def jump(self,platform_list):
|
||||
self.jump_delta = 0
|
||||
|
||||
def gravity(self):
|
||||
self.movey += 3.2 # how fast player falls
|
||||
|
||||
if self.rect.y > worldy and self.movey >= 0:
|
||||
self.movey = 0
|
||||
self.rect.y = worldy-ty
|
||||
|
||||
def control(self,x,y):
|
||||
'''
|
||||
control player movement
|
||||
'''
|
||||
self.movex += x
|
||||
self.movey += y
|
||||
|
||||
def update(self):
|
||||
'''
|
||||
Update sprite position
|
||||
'''
|
||||
|
||||
self.rect.x = self.rect.x + self.movex
|
||||
self.rect.y = self.rect.y + self.movey
|
||||
|
||||
# moving left
|
||||
if self.movex < 0:
|
||||
self.frame += 1
|
||||
if self.frame > ani*3:
|
||||
self.frame = 0
|
||||
self.image = self.images[self.frame//ani]
|
||||
|
||||
# moving right
|
||||
if self.movex > 0:
|
||||
self.frame += 1
|
||||
if self.frame > ani*3:
|
||||
self.frame = 0
|
||||
self.image = self.images[(self.frame//ani)+4]
|
||||
|
||||
# collisions
|
||||
enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
|
||||
if self.damage == 0:
|
||||
for enemy in enemy_hit_list:
|
||||
if not self.rect.contains(enemy):
|
||||
self.damage = self.rect.colliderect(enemy)
|
||||
|
||||
if self.damage == 1:
|
||||
idx = self.rect.collidelist(enemy_hit_list)
|
||||
if idx == -1:
|
||||
self.damage = 0 # set damage back to 0
|
||||
self.health -= 1 # subtract 1 hp
|
||||
|
||||
loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
|
||||
for loot in loot_hit_list:
|
||||
loot_list.remove(loot)
|
||||
self.score += 1
|
||||
print(self.score)
|
||||
|
||||
plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
|
||||
for p in plat_hit_list:
|
||||
self.collide_delta = 0 # stop jumping
|
||||
self.movey = 0
|
||||
if self.rect.y > p.rect.y:
|
||||
self.rect.y = p.rect.y+ty
|
||||
else:
|
||||
self.rect.y = p.rect.y-ty
|
||||
|
||||
ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
|
||||
for g in ground_hit_list:
|
||||
self.movey = 0
|
||||
self.rect.y = worldy-ty-ty
|
||||
self.collide_delta = 0 # stop jumping
|
||||
if self.rect.y > g.rect.y:
|
||||
self.health -=1
|
||||
print(self.health)
|
||||
|
||||
if self.collide_delta < 6 and self.jump_delta < 6:
|
||||
self.jump_delta = 6*2
|
||||
self.movey -= 33 # how high to jump
|
||||
self.collide_delta += 6
|
||||
self.jump_delta += 6
|
||||
|
||||
class Enemy(pygame.sprite.Sprite):
|
||||
'''
|
||||
Spawn an enemy
|
||||
'''
|
||||
def __init__(self,x,y,img):
|
||||
pygame.sprite.Sprite.__init__(self)
|
||||
self.image = pygame.image.load(os.path.join('images',img))
|
||||
self.movey = 0
|
||||
#self.image.convert_alpha()
|
||||
#self.image.set_colorkey(ALPHA)
|
||||
self.rect = self.image.get_rect()
|
||||
self.rect.x = x
|
||||
self.rect.y = y
|
||||
self.counter = 0
|
||||
|
||||
|
||||
def move(self):
|
||||
'''
|
||||
enemy movement
|
||||
'''
|
||||
distance = 80
|
||||
speed = 8
|
||||
|
||||
self.movey += 3.2
|
||||
|
||||
if self.counter >= 0 and self.counter <= distance:
|
||||
self.rect.x += speed
|
||||
elif self.counter >= distance and self.counter <= distance*2:
|
||||
self.rect.x -= speed
|
||||
else:
|
||||
self.counter = 0
|
||||
|
||||
self.counter += 1
|
||||
|
||||
if not self.rect.y >= worldy-ty-ty:
|
||||
self.rect.y += self.movey
|
||||
|
||||
plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
|
||||
for p in plat_hit_list:
|
||||
self.movey = 0
|
||||
if self.rect.y > p.rect.y:
|
||||
self.rect.y = p.rect.y+ty
|
||||
else:
|
||||
self.rect.y = p.rect.y-ty
|
||||
|
||||
ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
|
||||
for g in ground_hit_list:
|
||||
self.rect.y = worldy-ty-ty
|
||||
|
||||
|
||||
class Level():
|
||||
def bad(lvl,eloc):
|
||||
if lvl == 1:
|
||||
enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
|
||||
enemy_list = pygame.sprite.Group() # create enemy group
|
||||
enemy_list.add(enemy) # add enemy to group
|
||||
|
||||
if lvl == 2:
|
||||
print("Level " + str(lvl) )
|
||||
|
||||
return enemy_list
|
||||
|
||||
def loot(lvl,tx,ty):
|
||||
if lvl == 1:
|
||||
loot_list = pygame.sprite.Group()
|
||||
loot = Platform(200,ty*7,tx,ty, 'loot_1.png')
|
||||
loot_list.add(loot)
|
||||
|
||||
if lvl == 2:
|
||||
print(lvl)
|
||||
|
||||
return loot_list
|
||||
|
||||
def ground(lvl,gloc,tx,ty):
|
||||
ground_list = pygame.sprite.Group()
|
||||
i=0
|
||||
if lvl == 1:
|
||||
while i < len(gloc):
|
||||
ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
|
||||
ground_list.add(ground)
|
||||
i=i+1
|
||||
|
||||
if lvl == 2:
|
||||
print("Level " + str(lvl) )
|
||||
|
||||
return ground_list
|
||||
|
||||
def platform(lvl,tx,ty):
|
||||
plat_list = pygame.sprite.Group()
|
||||
ploc = []
|
||||
i=0
|
||||
if lvl == 1:
|
||||
ploc.append((20,worldy-ty-128,3))
|
||||
ploc.append((300,worldy-ty-256,3))
|
||||
ploc.append((500,worldy-ty-128,4))
|
||||
|
||||
while i < len(ploc):
|
||||
j=0
|
||||
while j <= ploc[i][2]:
|
||||
plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
|
||||
plat_list.add(plat)
|
||||
j=j+1
|
||||
print('run' + str(i) + str(ploc[i]))
|
||||
i=i+1
|
||||
|
||||
if lvl == 2:
|
||||
print("Level " + str(lvl) )
|
||||
|
||||
return plat_list
|
||||
|
||||
def stats(score,health):
|
||||
myfont.render_to(world, (4, 4), "Score:"+str(score), SNOWGRAY, None, size=64)
|
||||
myfont.render_to(world, (4, 72), "Health:"+str(health), SNOWGRAY, None, size=64)
|
||||
|
||||
'''
|
||||
Setup
|
||||
'''
|
||||
worldx = 960
|
||||
worldy = 720
|
||||
|
||||
fps = 40 # frame rate
|
||||
ani = 4 # animation cycles
|
||||
clock = pygame.time.Clock()
|
||||
pygame.init()
|
||||
main = True
|
||||
|
||||
BLUE = (25,25,200)
|
||||
BLACK = (23,23,23 )
|
||||
WHITE = (254,254,254)
|
||||
SNOWGRAY = (137,164,166)
|
||||
ALPHA = (0,255,0)
|
||||
|
||||
world = pygame.display.set_mode([worldx,worldy])
|
||||
backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
|
||||
backdropbox = world.get_rect()
|
||||
player = Player() # spawn player
|
||||
player.rect.x = 0
|
||||
player.rect.y = 0
|
||||
player_list = pygame.sprite.Group()
|
||||
player_list.add(player)
|
||||
steps = 10
|
||||
forwardx = 600
|
||||
backwardx = 230
|
||||
|
||||
eloc = []
|
||||
eloc = [200,20]
|
||||
gloc = []
|
||||
tx = 64 #tile size
|
||||
ty = 64 #tile size
|
||||
|
||||
font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),"fonts","amazdoom.ttf")
|
||||
font_size = tx
|
||||
myfont = pygame.freetype.Font(font_path, font_size)
|
||||
|
||||
i=0
|
||||
while i <= (worldx/tx)+tx:
|
||||
gloc.append(i*tx)
|
||||
i=i+1
|
||||
|
||||
enemy_list = Level.bad( 1, eloc )
|
||||
ground_list = Level.ground( 1,gloc,tx,ty )
|
||||
plat_list = Level.platform( 1,tx,ty )
|
||||
loot_list = Level.loot(1,tx,ty)
|
||||
|
||||
'''
|
||||
Main loop
|
||||
'''
|
||||
while main == True:
|
||||
for event in pygame.event.get():
|
||||
if event.type == pygame.QUIT:
|
||||
pygame.quit(); sys.exit()
|
||||
main = False
|
||||
|
||||
if event.type == pygame.KEYDOWN:
|
||||
if event.key == pygame.K_LEFT or event.key == ord('a'):
|
||||
print("LEFT")
|
||||
player.control(-steps,0)
|
||||
if event.key == pygame.K_RIGHT or event.key == ord('d'):
|
||||
print("RIGHT")
|
||||
player.control(steps,0)
|
||||
if event.key == pygame.K_UP or event.key == ord('w'):
|
||||
print('jump')
|
||||
|
||||
if event.type == pygame.KEYUP:
|
||||
if event.key == pygame.K_LEFT or event.key == ord('a'):
|
||||
player.control(steps,0)
|
||||
if event.key == pygame.K_RIGHT or event.key == ord('d'):
|
||||
player.control(-steps,0)
|
||||
if event.key == pygame.K_UP or event.key == ord('w'):
|
||||
player.jump(plat_list)
|
||||
|
||||
if event.key == ord('q'):
|
||||
pygame.quit()
|
||||
sys.exit()
|
||||
main = False
|
||||
|
||||
# scroll the world forward
|
||||
if player.rect.x >= forwardx:
|
||||
scroll = player.rect.x - forwardx
|
||||
player.rect.x = forwardx
|
||||
for p in plat_list:
|
||||
p.rect.x -= scroll
|
||||
for e in enemy_list:
|
||||
e.rect.x -= scroll
|
||||
for l in loot_list:
|
||||
|
||||
l.rect.x -= scroll
|
||||
|
||||
# scroll the world backward
|
||||
if player.rect.x <= backwardx:
|
||||
scroll = backwardx - player.rect.x
|
||||
player.rect.x = backwardx
|
||||
for p in plat_list:
|
||||
p.rect.x += scroll
|
||||
for e in enemy_list:
|
||||
e.rect.x += scroll
|
||||
for l in loot_list:
|
||||
l.rect.x += scroll
|
||||
|
||||
world.blit(backdrop, backdropbox)
|
||||
player.gravity() # check gravity
|
||||
player.update()
|
||||
player_list.draw(world) #refresh player position
|
||||
enemy_list.draw(world) # refresh enemies
|
||||
ground_list.draw(world) # refresh enemies
|
||||
plat_list.draw(world) # refresh platforms
|
||||
loot_list.draw(world) # refresh loot
|
||||
for e in enemy_list:
|
||||
e.move()
|
||||
stats(player.score,player.health) # draw text
|
||||
pygame.display.flip()
|
||||
clock.tick(fps)
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/add-scorekeeping-your-python-game
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_maze.png?itok=mZ5LP4-X (connecting yellow dots in a maze)
|
||||
[2]: https://www.python.org/
|
||||
[3]: https://www.pygame.org/news
|
||||
[4]: https://linux.cn/article-9071-1.html
|
||||
[5]: https://linux.cn/article-10850-1.html
|
||||
[6]: https://linux.cn/article-10858-1.html
|
||||
[7]: https://linux.cn/article-10874-1.html
|
||||
[8]: https://linux.cn/article-10883-1.html
|
||||
[9]: https://linux.cn/article-11780-1.html
|
||||
[10]: https://linux.cn/article-11790-1.html
|
||||
[11]: https://linux.cn/article-11819-1.html
|
||||
[12]: https://linux.cn/article-11828-1.html
|
||||
[13]: http://pygame.org/news
|
||||
[14]: https://fontlibrary.org/
|
||||
[15]: https://www.fontsquirrel.com/
|
||||
[16]: https://www.theleagueofmoveabletype.com/
|
||||
[17]: https://opensource.com/article/17/7/how-unzip-targz-file
|
||||
[18]: https://opensource.com/sites/default/files/uploads/pygame-score.jpg (Keeping score in Pygame)
|
||||
[19]: https://linux.cn/article-10902-1.html
|
135
published/20200109 My favorite Bash hacks.md
Normal file
@ -0,0 +1,135 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11841-1.html)
|
||||
[#]: subject: (My favorite Bash hacks)
|
||||
[#]: via: (https://opensource.com/article/20/1/bash-scripts-aliases)
|
||||
[#]: author: (Katie McLaughlin https://opensource.com/users/glasnt)
|
||||
|
||||
我珍藏的 Bash 秘籍
|
||||
======
|
||||
|
||||
> 通过别名和其他捷径来提高你经常忘记的那些事情的效率。
|
||||
|
||||
![bash logo on green background][1]
|
||||
|
||||
如果你整天使用计算机,能找到那些需要重复执行的命令并把它们记下来以便以后轻松使用,那就太棒了。它们全都呆在那里,藏在 `~/.bashrc` 中(或 [zsh 用户][2]的 `~/.zshrc` 中),等待着改善你的生活!
|
||||
|
||||
在本文中,我分享了我最喜欢的这些助手命令,对于我经常遗忘的事情,它们很有用,也希望这可以帮助到你,以及为你解决一些经常头疼的问题。
|
||||
|
||||
### 完事吱一声
|
||||
|
||||
当我执行一个需要长时间运行的命令时,我经常采用多任务的方式,然后就必须回头去检查该操作是否已完成。然而通过有用的 `say` 命令,现在就不用再这样了(这是在 MacOS 上;请根据你的本地环境更改为等效的方式):
|
||||
|
||||
```
|
||||
function looooooooong {
|
||||
START=$(date +%s.%N)
|
||||
$*
|
||||
EXIT_CODE=$?
|
||||
END=$(date +%s.%N)
|
||||
DIFF=$(echo "$END - $START" | bc)
|
||||
RES=$(python -c "diff = $DIFF; min = int(diff / 60); print('%s min' % min)")
|
||||
result="$1 completed in $RES, exit code $EXIT_CODE."
|
||||
echo -e "\n⏰ $result"
|
||||
( say -r 250 $result 2>&1 > /dev/null & )
|
||||
}
|
||||
```
|
||||
|
||||
这个命令会记录命令的开始和结束时间,计算所需的分钟数,并“说”出调用的命令、花费的时间和退出码。当简单的控制台铃声无法使用时,我发现这个超级有用。
|
||||
|
||||
### 安装小助手
|
||||
|
||||
我在小时候就开始使用 Ubuntu,而我需要学习的第一件事就是如何安装软件包。我最早添加的别名之一就是安装软件包的助手(以当年的流行梗命名):
|
||||
|
||||
```
|
||||
alias canhas="sudo apt-get install -y"
|
||||
```
|
||||
|
||||
### GPG 签名
|
||||
|
||||
有时候,我必须在没有 GPG 扩展程序或应用程序的情况下给电子邮件签署 [GPG][3] 签名,我会跳到命令行并使用以下令人讨厌的别名:
|
||||
|
||||
```
|
||||
alias gibson="gpg --encrypt --sign --armor"
|
||||
alias ungibson="gpg --decrypt"
|
||||
```
|
||||
|
||||
### Docker
|
||||
|
||||
Docker 的子命令很多,Docker Compose 的则更多。我过去经常忘记加上 `--rm` 标志,但有了这些有用的别名之后,就再也不会忘了:
|
||||
|
||||
```
|
||||
alias dc="docker-compose"
|
||||
alias dcr="docker-compose run --rm"
|
||||
alias dcb="docker-compose run --rm --build"
|
||||
```
|
||||
|
||||
### Google Cloud 的 gcurl 助手
|
||||
|
||||
对于我来说,Google Cloud 是一个相对较新的东西,而它有[极多的文档][4]。`gcurl` 是一个别名,可确保在用带有身份验证标头的本地 `curl` 命令连接 Google Cloud API 时,可以获得所有正确的标头。
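文中没有给出这个别名的定义;按照 Google Cloud 文档中的写法,它大致如下(假设你已经安装并登录了 `gcloud` 命令行工具,可放入 `~/.bashrc`):

```
# gcurl:自动带上 OAuth 访问令牌和 JSON 内容类型头的 curl
alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json"'
```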
|
||||
|
||||
### Git 和 ~/.gitignore
|
||||
|
||||
我工作中用 Git 很多,因此我有一个专门的部分来介绍 Git 助手。
|
||||
|
||||
我最有用的助手之一是我用来克隆 GitHub 存储库的。你不必运行:
|
||||
|
||||
```
|
||||
git clone git@github.com:org/repo /Users/glasnt/git/org/repo
|
||||
```
|
||||
|
||||
我设置了一个克隆函数:
|
||||
|
||||
```
|
||||
clone(){
|
||||
echo Cloning $1 to ~/git/$1
|
||||
cd ~/git
|
||||
git clone git@github.com:$1 $1
|
||||
cd $1
|
||||
}
|
||||
```
|
||||
|
||||
我还有一个“刷新上游”命令,尽管我总是会忘记它的存在,而且每次进入 `~/.bashrc` 文件看到它时都会傻笑:
|
||||
|
||||
```
|
||||
alias yoink="git checkout master && git fetch upstream master && git merge upstream/master"
|
||||
```
|
||||
|
||||
给 Git 一族的另一个助手是全局忽略文件。在你的 `git config --global --list` 中,你应该看到一个 `core.excludesfile`。如果没有,请[创建一个][6],然后将你总是放到各个 `.gitignore` 文件中的内容填满它。作为 MacOS 上的 Python 开发人员,对我来说,这些内容是:
|
||||
|
||||
```
|
||||
.DS_Store # macOS clutter
|
||||
venv/ # I never want to commit my virtualenv
|
||||
*.egg-info/* # ... nor any locally compiled packages
|
||||
__pycache__ # ... or source
|
||||
*.swp # ... nor any files open in vim
|
||||
```
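如果还没有全局忽略文件,可以这样创建并注册它(文件名 `~/.gitignore_global` 只是常见约定,可自行选择):

```shell
# 创建全局忽略文件,并告诉 git 在所有仓库中使用它
touch ~/.gitignore_global
git config --global core.excludesfile ~/.gitignore_global
```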
|
||||
|
||||
你可以在 [Gitignore.io][7] 或 GitHub 上的 [Gitignore 存储库][8]上找到其他建议。
|
||||
|
||||
### 轮到你了
|
||||
|
||||
你最喜欢的助手命令是什么?请在评论中分享。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/bash-scripts-aliases
|
||||
|
||||
作者:[Katie McLaughlin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/glasnt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: https://opensource.com/article/19/9/getting-started-zsh
|
||||
[3]: https://gnupg.org/
|
||||
[4]: https://cloud.google.com/service-infrastructure/docs/service-control/getting-started
|
||||
[5]: mailto:git@github.com
|
||||
[6]: https://help.github.com/en/github/using-git/ignoring-files#create-a-global-gitignore
|
||||
[7]: https://www.gitignore.io/
|
||||
[8]: https://github.com/github/gitignore
|
@ -0,0 +1,76 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11846-1.html)
|
||||
[#]: subject: (Keep a journal of your activities with this Python program)
|
||||
[#]: via: (https://opensource.com/article/20/1/python-journal)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
|
||||
|
||||
使用这个 Python 程序记录你的活动
|
||||
======
|
||||
|
||||
> jrnl 可以创建可搜索、带时间戳、可导出、可加密(如果需要)的日常活动日志。在我们的 20 个使用开源提升生产力的系列的第八篇文章中了解更多。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202002/03/105455tx03zo2pu7woyusp.jpg)
|
||||
|
||||
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你可能已经在用或者还没用过的工具,构建一个让你在新的一年里更加高效的环境。
|
||||
|
||||
### 使用 jrnl 记录日志
|
||||
|
||||
在我的公司,许多人会在下班之前在 Slack 上发送一个“一天结束”的状态。在有着许多项目和全球化的团队里,这是一个分享你已完成、未完成以及你需要哪些帮助的一个很好的方式。但有时候我太忙了,以至于我忘了做了什么。这时候就需要记录日志了。
|
||||
|
||||
![jrnl][2]
|
||||
|
||||
打开一个文本编辑器并在你做一些事的时候添加一行很容易。但是在需要找出你在什么时候做的笔记,或者要快速提取相关的行时会有挑战。幸运的是,[jrnl][3] 可以提供帮助。
|
||||
|
||||
jrnl 能让你在命令行中快速输入条目、搜索过去的条目并导出为 HTML 和 Markdown 等富文本格式。你可以有多个日志,这意味着你可以将工作条目与私有条目分开。它将条目存储为纯文本,因此即使 jrnl 停止工作,数据也不会丢失。
|
||||
|
||||
由于 jrnl 是一个 Python 程序,最简单的安装方法是使用 `pip3 install jrnl`。这将确保你获得最新和最好的版本。第一次运行它会询问一些问题,接下来就能正常使用。
|
||||
|
||||
![jrnl's first run][4]
|
||||
|
||||
现在,每当你需要做笔记或记录日志时,只需输入 `jrnl <some text>`,它将带有时间戳的记录保存到默认文件中。你可以使用 `jrnl -on YYYY-MM-DD` 搜索特定日期条目,`jrnl -from YYYY-MM-DD` 搜索在那日期之后的条目,以及用 `jrnl -to YYYY-MM-DD` 搜索到那日期的条目。搜索词可以与 `-and` 参数结合使用,允许像 `jrnl -from 2019-01-01 -and -to 2019-12-31` 这类搜索。
|
||||
|
||||
你还可以使用 `--edit` 标志编辑日志中的条目。开始之前,通过编辑文件 `~/.config/jrnl/jrnl.yaml` 来设置默认编辑器。你还可以指定日志使用什么文件、用于标签的特殊字符以及一些其他选项。现在,重要的是设置编辑器。我使用 Vim,jrnl 的文档中有一些使用其他编辑器如 VSCode 和 Sublime Text 的[有用提示][5]。
|
||||
|
||||
![Example jrnl config file][6]
|
||||
|
||||
jrnl 还可以加密日志文件。通过设置全局 `encrypt` 变量,你将告诉 jrnl 加密你定义的所有日志。也可以在配置文件中针对单个日志设置 `encrypt: true` 来单独加密它。
|
||||
|
||||
```
|
||||
journals:
  default: ~/journals/journal.txt
  work: ~/journals/work.txt
  private:
    journal: ~/journals/private.txt
    encrypt: true
|
||||
```
|
||||
|
||||
对加密的日志进行任何操作之前,系统都会提示你输入密码。日志文件将以加密形式保存在磁盘上,以免受窥探。[jrnl 文档][7] 中包含其工作原理、使用哪种加密方式等的更多信息。
|
||||
|
||||
![Encrypted jrnl file][8]
|
||||
|
||||
日志记录帮助我记住什么时候做了什么事,并在我需要的时候能够找到它。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/python-journal
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/notebook-writing-pen.jpg?itok=uA3dCfu_ (Writing in a notebook)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/productivity_8-1.png (jrnl)
|
||||
[3]: https://jrnl.sh/
|
||||
[4]: https://opensource.com/sites/default/files/uploads/productivity_8-2.png (jrnl's first run)
|
||||
[5]: https://jrnl.sh/recipes/#external-editors
|
||||
[6]: https://opensource.com/sites/default/files/uploads/productivity_8-3.png (Example jrnl config file)
|
||||
[7]: https://jrnl.sh/encryption/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/productivity_8-4.png (Encrypted jrnl file)
|
@ -0,0 +1,128 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11838-1.html)
|
||||
[#]: subject: (How to Set or Change Timezone in Ubuntu Linux [Beginner’s Tip])
|
||||
[#]: via: (https://itsfoss.com/change-timezone-ubuntu/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
如何在 Ubuntu Linux 中设置或更改时区
|
||||
======
|
||||
|
||||
[你安装 Ubuntu 时][1],它会要求你设置时区。如果你选择一个错误的时区,或者你移动到世界的一些其它地方,你可以很容易地在以后更改它。
|
||||
|
||||
### 如何在 Ubuntu 和其它 Linux 发行版中更改时区
|
||||
|
||||
这里有两种方法来更改 Ubuntu 中的时区。你可以使用图形化设置或在终端中使用 `timedatectl` 命令。你也可以直接更改 `/etc/timezone` 文件,但是我不建议这样做。
|
||||
|
||||
在这篇初学者教程中,我将向你展示图形化和终端两种方法:
|
||||
|
||||
* [通过 GUI 更改 Ubuntu 中的时区][2] (适合桌面用户)
|
||||
* [通过命令行更改 Ubuntu 中的时区][3] (适用于桌面和服务器)
|
||||
|
||||
![][4]
|
||||
|
||||
#### 方法 1: 通过终端更改 Ubuntu 时区
|
||||
|
||||
在 [Ubuntu][5] 或其它一些使用 systemd 的发行版中,可以在 Linux 终端中使用 `timedatectl` 命令来设置时区。
|
||||
|
||||
你可以不带任何参数运行 `timedatectl` 命令,来检查当前的日期和时区设置:
|
||||
|
||||
```
|
||||
$ timedatectl
|
||||
Local time: Sat 2020-01-18 17:39:52 IST
|
||||
Universal time: Sat 2020-01-18 12:09:52 UTC
|
||||
RTC time: Sat 2020-01-18 12:09:52
|
||||
Time zone: Asia/Kolkata (IST, +0530)
|
||||
System clock synchronized: yes
|
||||
systemd-timesyncd.service active: yes
|
||||
RTC in local TZ: no
|
||||
```
|
||||
|
||||
正如你在上面的输出中所看到的,我的系统使用的是 Asia/Kolkata 时区。它也告诉我,我的本地时间比世界时早 5 小时 30 分钟。
|
||||
|
||||
要在 Linux 中设置时区,你需要知道准确的时区名称。你必须使用正确的时区格式(格式为“洲/城市”)。
|
||||
|
||||
为获取时区列表,使用 `timedatectl` 命令的 `list-timezones` 参数:
|
||||
|
||||
```
|
||||
timedatectl list-timezones
|
||||
```
|
||||
|
||||
它将向你显示大量可用的时区列表。
|
||||
|
||||
![Timezones List][6]
|
||||
|
||||
你可以使用向上箭头和向下箭头或 `PgUp` 和 `PgDown` 键来在页面之间移动。
|
||||
|
||||
你也可以 `grep` 输出,并搜索你的时区。例如,假如你正在寻找欧洲的时区,你可以使用:
|
||||
|
||||
```
|
||||
timedatectl list-timezones | grep -i europe
|
||||
```
|
||||
|
||||
比方说,你想把时区设置为巴黎时区。在这里,要使用的时区值为 Europe/Paris:
|
||||
|
||||
```
|
||||
timedatectl set-timezone Europe/Paris
|
||||
```
|
||||
|
||||
它虽然不显示任何成功信息,但是时区会立即更改。你不需要重新启动或注销。
|
||||
|
||||
记住,虽然你不需要成为 root 用户并对命令使用 `sudo`,但是你的账户仍然需要拥有管理员权限才能更改时区。
|
||||
|
||||
你可以使用 [date 命令][7] 来验证更改后的时间和时区:
|
||||
|
||||
```
|
||||
$ date
|
||||
Sat Jan 18 13:56:26 CET 2020
|
||||
```
|
||||
|
||||
#### 方法 2: 通过 GUI 更改 Ubuntu 时区
|
||||
|
||||
按下 `super` 键(Windows 键),并搜索“设置”:
|
||||
|
||||
![Applications Menu Settings][8]
|
||||
|
||||
在左侧边栏中向下滚动一点,找到“详细信息”:
|
||||
|
||||
![Go to Settings -> Details][9]
|
||||
|
||||
在详细信息中,你将在左侧边栏中找到“日期和时间”。在这里,你应该关闭“自动时区”选项(如果它已被启用),然后单击“时区”:
|
||||
|
||||
![In Details -> Date & Time, turn off the Automatic Time Zone][10]
|
||||
|
||||
当你单击“时区”时,它将打开一个交互式地图,你可以单击你选择的地理位置,然后关闭窗口。
|
||||
|
||||
![Select a timezone][11]
|
||||
|
||||
在选择新的时区后,除了关闭这个地图之外,你不必做任何其他事情。不需要注销或 [关闭 Ubuntu][12]。
|
||||
|
||||
我希望这篇快速教程能帮助你在 Ubuntu 和其它 Linux 发行版中更改时区。如果你有问题或建议,请告诉我。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/change-timezone-ubuntu/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/install-ubuntu/
|
||||
[2]: tmp.bHvVztzy6d#change-timezone-gui
|
||||
[3]: tmp.bHvVztzy6d#change-timezone-command-line
|
||||
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/Ubuntu_Change-_Time_Zone.png?ssl=1
|
||||
[5]: https://ubuntu.com/
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/timezones_in_ubuntu.jpg?ssl=1
|
||||
[7]: https://linuxhandbook.com/date-command/
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/applications_menu_settings.jpg?ssl=1
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/settings_detail_ubuntu.jpg?ssl=1
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/change_timezone_in_ubuntu.jpg?ssl=1
|
||||
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/change_timezone_in_ubuntu_2.jpg?ssl=1
|
||||
[12]: https://itsfoss.com/schedule-shutdown-ubuntu/
|
1
sources/README.md
Normal file
1
sources/README.md
Normal file
@ -0,0 +1 @@
|
||||
这里放待翻译的文件。
|
@ -1,93 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (New machine learning from Alibaba and Netflix, mimicking animal vision, and more open source news)
|
||||
[#]: via: (https://opensource.com/article/19/12/news-december-7)
|
||||
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
|
||||
|
||||
New machine learning from Alibaba and Netflix, mimicking animal vision, and more open source news
|
||||
======
|
||||
Catch up on the biggest open source headlines from the past two weeks.
|
||||
![Weekly news roundup with TV][1]
|
||||
|
||||
In this edition of our open source news roundup, we take a look at an open source election auditing tool, new open source from Alibaba and Netflix, mimicking animal vision, and more!
|
||||
|
||||
### Alibaba and Netflix share machine learning and data science software
|
||||
|
||||
Two companies at the forefront of machine learning and data science have just released some of their tools under open source licenses.
|
||||
|
||||
Chinese ecommerce giant Alibaba just [open sourced the algorithm libraries][2] for its Alink platform. The algorithms "are essential to support machine learning tasks such as online product recommendations and smart customer services." According to Jia Yangqing, president of Alibaba Cloud, Alink is a good fit for "developers seeking big data and machine-learning tools." You can find the source code for Alink (which is under an Apache 2.0 license) [on GitHub][3], with documentation in both Chinese and English.
|
||||
|
||||
Not to be outdone, streaming service Netflix just released its [Metaflow Python library][4] under an Apache 2.0 license. Metaflow enables data scientists to "see early on whether a prototyped model would fail in production, allowing them to fix whatever the issue was". It also works with a number of Python data science libraries, like SciKit Learn, Pytorch, and Tensorflow. You can grab Metaflow's code from [its GitHub repository][5] or learn more about it at the [Metaflow website][6].
|
||||
|
||||
### Open source software to mimic animal vision
|
||||
|
||||
Have you ever wondered how your dog or cat sees the world? Thanks to work by researchers at the University of Exeter in the UK and Australia's University of Queensland, you can find out. The team just released [software that allows humans to see the world as animals do][7].
|
||||
|
||||
Called micaToolbox, the software can interpret digital photos and process images of various environments by mimicking the limitations of animal vision. Anyone with a camera, a computer, or smartphone can use the software without knowing how to code. But micaToolbox isn't just a novelty. It's a serious scientific tool that can help "help biologists better understand a variety of animal behaviors, including mating systems, distance-dependent signalling and mimicry." And, according to researcher Jolyon Troscianko, the software can help identify "how an animal's camouflage works so that we can manage our land to protect certain species."
|
||||
|
||||
You can [download micaBox][8] or [browse its source code][9] on GitHub.
|
||||
|
||||
### New tool for post-election auditing
|
||||
|
||||
More and more aspects of our lives and institutions are being automated. With that comes an increased danger of systems breaking down or malicious someones tampering with those systems. Open source gives us an opportunity to look at exactly how the automation works.
|
||||
|
||||
Elections, in particular, are increasingly vulnerable. To combat election tampering, the US Cybersecurity and Infrastructure Security Agency (CISA) has joined forces with the non-profit organization VotingWorks to create a [web-based application for auditing ballots][10].
|
||||
|
||||
Called Arlo, the application is designed to ensure that "elections are secure, resilient, and transparent," said CISA's director Chris Krebs. Arlo works with a range of automated voting systems to help "officials compare audited votes to tabulated votes, and providing monitoring & reporting capabilities." Arlo was used to verify the results of recent state and local elections and is being further field-tested in the states of Georgia, Michigan, Missouri, Ohio, Pennsylvania, and Virginia.
|
||||
|
||||
Arlo's source code, released under an AGPL-3.0 license, is [available on GitHub][11].
|
||||
|
||||
### Royal Navy debuts open source application development kit
|
||||
|
||||
Consistency across user interfaces is key to a successful set of applications and services. The UK's Royal Navy understands the importance of this and has released the [open source NELSON standards toolkit][12] to help its developers and suppliers "save time and give users a consistent experience."
|
||||
|
||||
Named after the legendary British admiral, NELSON is intended to "maintain high visual consistency and user-experience quality across the different applications developed or subcontracted by the Royal Navy." The toolkit consists of a set of components including visual styles, typographic elements, forms, elements like buttons and checkboxes, and notifications.
|
||||
|
||||
NELSON has its own [GitHub repository][13], from which the Royal Navy encourages developers to make pull requests.
|
||||
|
||||
#### In other news
|
||||
|
||||
* [Council group plans for open source revenues and benefits platform][14]
|
||||
* [Introducing Nebula, the open source global overlay network from Slack][15]
|
||||
* [webOS Open Source Edition 2.0 keeps Palm's spirit alive in cars and IoT][16]
|
||||
* [Duke University Introduces an Open Source Tool as an Alternative to a Monolithic LMS][17]
|
||||
* [Open Source Technology Could Be a Boon to Farmers][18]
|
||||
|
||||
|
||||
|
||||
_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/12/news-december-7
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
|
||||
[2]: https://www.zdnet.com/article/alibaba-cloud-publishes-machine-learning-algorithm-on-github/
|
||||
[3]: https://github.com/alibaba/alink
|
||||
[4]: https://www.zdnet.com/article/netflix-our-metaflow-python-library-for-faster-data-science-is-now-open-source/
|
||||
[5]: https://github.com/Netflix/metaflow
|
||||
[6]: https://metaflow.org/
|
||||
[7]: https://www.upi.com/Science_News/2019/12/03/Novel-software-helps-scientists-see-what-animals-see/5961575389734/
|
||||
[8]: http://www.empiricalimaging.com/download/micatoolbox/
|
||||
[9]: https://github.com/troscianko/micaToolbox
|
||||
[10]: https://www.zdnet.com/article/cisa-and-votingworks-release-open-source-post-election-auditing-tool/
|
||||
[11]: https://github.com/votingworks/arlo
|
||||
[12]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/open-source-royal-navy
|
||||
[13]: https://github.com/Royal-Navy/standards-toolkit
|
||||
[14]: https://www.ukauthority.com/articles/council-group-plans-for-open-source-revenues-and-benefits/
|
||||
[15]: https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579
|
||||
[16]: https://www.slashgear.com/webos-open-source-edition-2-0-keeps-palms-spirit-alive-in-cars-and-iot-25601309/
|
||||
[17]: https://iblnews.org/duke-university-introduces-an-open-source-tool-as-an-alternative-to-a-monolithic-lms/
|
||||
[18]: https://civileats.com/2019/12/02/open-source-technology-could-be-a-boon-to-farmers/
|
@ -1,85 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (First Ever Release of Ubuntu Cinnamon Distribution is Finally Here!)
|
||||
[#]: via: (https://itsfoss.com/ubuntu-cinnamon/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
First Ever Release of Ubuntu Cinnamon Distribution is Finally Here!
|
||||
======
|
||||
|
||||
_**Brief: Ubuntu Cinnamon is a new distribution that utilizes Linux Mint’s Cinnamon desktop environment on top of Ubuntu code base. It’s first stable release is based on Ubuntu 19.10 Eoan Ermine.**_
|
||||
|
||||
[Cinnamon][1] is Linux Mint’s flagship desktop environment. Like [MATE desktop][2], Cinnamon is also a product of dissatisfaction with GNOME 3. With the GNOME Classic like user interface and relatively lower hardware requirements, Cinnamon soon gathered a dedicated userbase.
|
||||
|
||||
Like any other desktop environment out there, you can [install Cinnamon on Ubuntu][3] and other distributions.
|
||||
|
||||
Installing multiple [desktop environments][4] (DE) is not a difficult task but it often leads to conflicts (with other DE’s elements) and may not always provide the best experience. This is why major Linux distributions separate spins/flavors with various popular desktop environments.
|
||||
|
||||
[Ubuntu also has various official flavors][5] featuring [KDE][6] (Kubuntu), [LXQt][7] (Lubuntu), Xfce (Xubuntu), Budgie ([Ubuntu Budgie][8]) etc. Cinnamon was not in this list but Ubuntu Cinnamon Remix project is trying to change that.
|
||||
|
||||
### Ubuntu Cinnamon distribution
|
||||
|
||||
![Ubuntu Cinnamon Desktop Screenshot][9]
|
||||
|
||||
[Ubuntu Cinnamon][10] (website under construction) is a new Linux distribution that brings Cinnamon desktop to Ubuntu distribution. Joshua Peisach is the lead developer for the project and he is being helped by other volunteer contributors. The ex-developer of the now discontinued Ubuntu GNOME project and some members from Ubuntu team are also advising the team to help with the development.
|
||||
|
||||
![Ubuntu Cinnamon Remix Screeenshot 1][11]
|
||||
|
||||
Do note that Ubuntu Cinnamon is not an official flavor of Ubuntu. They are trying to get the flavorship but I think that will take a few more releases.
|
||||
|
||||
The first stable release of Ubuntu Cinnamon is based on [Ubuntu 19.10 Eoan Ermine][12]. It uses Calamares installer from Lubuntu and features Cinnamon desktop version 4.0.10. Naturally, it uses Nemo file manager and LightDM.
|
||||
|
||||
It supports EFI and UEFI and only comes with 64-bit support.
|
||||
|
||||
You’ll get your regular goodies like LibreOffice, Firefox and some GNOME software and games. You can of course install more applications as per your need.
|
||||
|
||||
### Download and install Ubuntu Cinnamon
|
||||
|
||||
Do note that this is the first ever release of Ubuntu Cinnamon and the developers are not that experienced at this moment.
|
||||
|
||||
If you don’t like troubleshooting, don’t use it on your main system. I expect this release to have a few bugs and issues which will be fixed eventually as more users test it out.
|
||||
|
||||
You can download Ubuntu Cinnamon ISO from Sourceforge website:
|
||||
|
||||
[Download Ubuntu Cinnamon][13]
|
||||
|
||||
### What next from here?
|
||||
|
||||
The dev team has a few improvements planned for the 20.04 release. The changes are mostly on the cosmetics though. There will be new GRUB and Plymouth theme, layout application and welcome screen.
|
||||
|
||||
I downloaded it and tried it in a live session. Here’s what this distribution looks like:
|
||||
|
||||
[Subscribe to our YouTube channel for more Linux videos][14]
|
||||
|
||||
Meanwhile, if you manage to try it on your own, why not share your experience in the comments? If you use Linux Mint, will you switch to Ubuntu Cinnamon in near future? What are your overall opinion about this new project? Do share it in the comment section.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/ubuntu-cinnamon/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Cinnamon_(desktop_environment)
|
||||
[2]: https://mate-desktop.org/
|
||||
[3]: https://itsfoss.com/install-cinnamon-on-ubuntu/
|
||||
[4]: https://itsfoss.com/best-linux-desktop-environments/
|
||||
[5]: https://itsfoss.com/which-ubuntu-install/
|
||||
[6]: https://kde.org/
|
||||
[7]: https://lxqt.org/
|
||||
[8]: https://itsfoss.com/ubuntu-budgie-18-review/
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/ubuntu_cinnamon_distribution_screenshot.jpg?ssl=1
|
||||
[10]: https://ubuntucinnamon.org/
|
||||
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/ubuntu_cinnamon_remix_screeenshot_1.jpg?ssl=1
|
||||
[12]: https://itsfoss.com/ubuntu-19-10-released/
|
||||
[13]: https://sourceforge.net/projects/ubuntu-cinnamon-remix/
|
||||
[14]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
|
@ -1,65 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (KubeCon gets bigger, the kernel gets better, and more industry trends)
|
||||
[#]: via: (https://opensource.com/article/19/12/kubecon-bigger-kernel-better-more-industry-trends)
|
||||
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
|
||||
|
||||
KubeCon gets bigger, the kernel gets better, and more industry trends
|
||||
======
|
||||
A weekly look at open source community, market, and industry trends.
|
||||
![Person standing in front of a giant computer screen with numbers, data][1]
|
||||
|
||||
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
|
||||
|
||||
## [KubeCon showed Kubernetes is big, but is it a Unicorn?][2]
|
||||
|
||||
> It’s hard to remember now but there was a time when Kubernetes was a distant No. 3 in terms of container orchestrators being used in the market. It’s also eye opening to now realize that [the firms][3] that hatched the two platforms that [towered over][4] Kubernetes have had to completely re-jigger their business models under the Kubernetes onslaught.
|
||||
>
|
||||
> And full credit to the CNCF for attempting to diffuse some of that attention from Kubernetes by spending the vast majority of the KubeCon opening keynote address touting some of the nearly two dozen graduated, incubating, and sandbox projects it also hosts. But, it was really the Big K that stole the show.
|
||||
|
||||
**The impact:** Open source is way more than the source code; governance is a big deal and can be the difference between longevity and irrelevance. Gathering, organizing, and maintaining humans is an entirely different skill set than doing the same for bits, but can have just as big an influence on the success of a project.
|
||||
|
||||
## [Report: Kubernetes use on the rise][5]
|
||||
|
||||
> At the same time, the Datadog report notes that container churn rates are approximately 10 times higher in orchestrated environments. Churn rates in container environments that lack an orchestration platform such as Kubernetes have increased in the last year as well. The average container lifespan at a typical company running infrastructure without orchestration is about two days, down from about six days in mid-2018. In 19% of those environments not running orchestration, the average container lifetime exceeded 30 days. That compares to only 3% of organizations running containers longer than 30 days in Kubernetes environments, according to the report’s findings.
|
||||
|
||||
**The impact**: If your containers aren't churning, you're probably not getting the full benefit of the technology you've adopted.
|
||||
|
||||
## [Upcoming Linux 5.5 kernel improves live patching, scheduling][6]
|
||||
|
||||
> A new WFX Wi-Fi driver for the Silicon Labs WF200 ASIC transceiver is coming to Linux kernel 5.5. This particular wireless transceiver is geared toward low-power IoT devices and uses a 2.4 GHz 802.11b/g/n radio optimized for low power RF performance in crowded RF environments. This new driver can interface via both Serial Peripheral Interface (SPI) and Secure Digital Input Output (SDIO).
|
||||
|
||||
**The impact**: The kernel's continued relevance is a direct result of the never-ending grind to keep being where people need it to be (i.e. basically everywhere).
|
||||
|
||||
## [DigitalOcean Currents: December 2019][7]
|
||||
|
||||
> In that spirit, this fall’s installment of our seasonal Currents report is dedicated to open source for the second year running. We surveyed more than 5800 developers around the world on the overall health and direction of the open source community. When we last checked in with the community in [2018][8], more than half of developers reported contributing to open source projects, and most felt the community was healthy and growing.
|
||||
|
||||
**The impact**: While the good news outweighs the bad, there are a couple of things to keep an eye on: namely, making open source more inclusive and mitigating potential negative impact of big money.
|
||||
|
||||
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/12/kubecon-bigger-kernel-better-more-industry-trends
|
||||
|
||||
作者:[Tim Hildred][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/thildred
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
|
||||
[2]: https://www.sdxcentral.com/articles/opinion-editorial/kubecon-showed-kubernetes-is-big-but-is-it-a-unicorn/2019/11/
|
||||
[3]: https://www.sdxcentral.com/articles/news/docker-unloads-enterprise-biz-to-mirantis/2019/11/
|
||||
[4]: https://www.sdxcentral.com/articles/news/mesosphere-is-now-d2iq-and-kubernetes-is-its-game/2019/08/
|
||||
[5]: https://containerjournal.com/topics/container-ecosystems/report-kubernetes-use-on-the-rise/
|
||||
[6]: https://thenewstack.io/upcoming-linux-5-5-kernel-improves-live-patching-scheduling/
|
||||
[7]: https://blog.digitalocean.com/digitalocean-currents-december-2019/
|
||||
[8]: https://www.digitalocean.com/currents/october-2018/
|
@ -1,75 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Annual release cycle for Python, new Python Software Foundation fellows from Africa, and more updates)
|
||||
[#]: via: (https://opensource.com/article/19/12/python-news-december)
|
||||
[#]: author: (Christian Heimes https://opensource.com/users/christian-heimes)
|
||||
|
||||
Annual release cycle for Python, new Python Software Foundation fellows from Africa, and more updates
|
||||
======
|
||||
Find out what's going on in the Python community in December.
|
||||
![Python in a coffee cup.][1]
|
||||
|
||||
The Python Software Foundation (PSF) is the nonprofit organization behind the Python programming language. I am fortunate to be a PSF Fellow (an honorary member for life), a Python core developer, and the liaison between my company, Red Hat, and the PSF. Part of that liaison work is providing updates on what’s happening in the Python community. Here’s a look at what we have going on in December.

### Upcoming events

A significant part of the Python community is its in-person events. These events are where users and contributors intermingle and learn together. Here are the big announcements of upcoming opportunities to connect.

#### PyCon US 2020

[PyCon US][2] is by far the largest annual Python event. The next PyCon takes place April 15-23, 2020, in Pittsburgh. The call for proposals is open to all until December 20, 2019. I’m planning to attend PyCon for the conference and its [famous post-con sprints][3].

#### EuroPython

EuroPython is the largest Python conference in Europe, with about 1,000 attendees in recent years. [EP20][4] will be held in Dublin, Ireland, July 20-26, 2020. As a liaison for Red Hat, I’m proud to say that Red Hat sponsored EP18 in Edinburgh and donated its sponsor tickets to Women Who Code Scotland.

#### PyData

[PyData][5] is a separate nonprofit related to the Python community through a focus on data science. It hosts many international events throughout the year, with upcoming events in [Austin, Texas][6], and [Warsaw, Poland][7] before the end of the year.

### New PSF fellows from Africa

The PSF promotes a few members to fellow every quarter. Yesterday, twelve new PSF fellows were [announced][8].

I’d like to highlight the four new fellows from Ghana, who are also the organizers of the first pan-African [PyCon Africa][9], which took place in August 2019 in Accra, Ghana. The Python community in Africa is growing at an amazing speed. PyCon Africa 2020 will be in Accra again, and I’m planning to spend my summer vacation there.

### Annual release cycle for Python

Python used to release a new major version about every 18 months. That timeline changes with the Python 3.9 release: under [PEP 602][10], a new major version of Python will be released annually in October. The new cadence means fewer changes between releases and more predictable release dates. October was chosen to align with Linux distribution releases such as Fedora. Miro Hrončok from the Python maintenance team joined the discussion and helped find a convenient release date for us; for more details, please see <https://discuss.python.org/t/pep-602-annual-release-cycle-for-python/2296/79?u=ambv>.

### Steering council election

The Python Steering Council governs the development of Python. It was established after [Guido van Rossum stepped down][11] as benevolent dictator for life. Python core developers elect a new steering council for every major Python release. For the upcoming term, nine candidates were nominated for five seats on the council (Guido was nominated but [withdrew][12]). See <https://www.python.org/dev/peps/pep-8101/> for all the details. Election results are expected to be announced in mid-December.

That covers what’s new in the Python community for December. Stay tuned for more updates, and mark your calendars for the conferences mentioned above.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/python-news-december

作者:[Christian Heimes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/christian-heimes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_python.jpg?itok=G04cSvp_ (Python in a coffee cup.)
[2]: https://us.pycon.org/2020/
[3]: https://opensource.com/article/19/5/pycon-developer-sprints
[4]: https://www.europython-society.org/post/188741002380/europython-2020-venue-and-location-selected
[5]: https://pydata.org/
[6]: https://pydata.org/austin2019/
[7]: https://pydata.org/warsaw2019/
[8]: https://pyfound.blogspot.com/2019/11/python-software-foundation-fellow.html
[9]: https://africa.pycon.org/
[10]: https://www.python.org/dev/peps/pep-0602/
[11]: https://opensource.com/article/19/6/command-line-heroes-python
[12]: https://discuss.python.org/t/steering-council-nomination-guido-van-rossum-2020-term/2657/11
@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (2020 technology must haves, a guide to Kubernetes etcd, and more industry trends)
[#]: via: (https://opensource.com/article/19/12/gartner-ectd-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

2020 technology must haves, a guide to Kubernetes etcd, and more industry trends
======

A weekly look at open source community, market, and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [Gartner's top 10 infrastructure and operations trends for 2020][2]

> “The vast majority of organisations that do not adopt a shared self-service platform approach will find that their DevOps initiatives simply do not scale,” said Winser. “Adopting a shared platform approach enables product teams to draw from an I&O digital toolbox of possibilities, while benefiting from high standards of governance and efficiency needed for scale.”

**The impact**: The breakneck pace of technology development and adoption will not slow down next year, as the things you've been reading about for the last two years become things you have to figure out how to deal with every day.

## [A guide to Kubernetes etcd: All you need to know to set up etcd clusters][3]

> Etcd is a distributed reliable key-value store which is simple, fast and secure. It acts like a backend service discovery and database, runs on different servers in Kubernetes clusters at the same time to monitor changes in clusters and to store state/configuration data that should be accessed by a Kubernetes master or clusters. Additionally, etcd allows Kubernetes master to support discovery service so that deployed application can declare their availability for inclusion in service.

**The impact**: This is actually way more than I needed to know about setting up etcd clusters, but now I have a mental model of what that could look like, and you can too.

## [How the open source model could fuel the future of digital marketing][4]

> In other words, the broad adoption of open source culture has the power to completely invert the traditional marketing funnel. In the future, prospective customers could be first introduced to “late funnel” materials and then buy into the broader narrative — a complete reversal of how traditional marketing approaches decision-makers today.

**The impact**: The SEO on this cuts two ways: It can introduce uninitiated marketing people to open source and uninitiated technical people to the ways that technology actually gets adopted. Neat!

## [Kubernetes integrates interoperability, storage, waits on sidecars][5]

> In a [recent interview][6], Lachlan Evenson, who was also a lead on the Kubernetes 1.16 release, said sidecar containers were one of the features that team was a “little disappointed” it could not include in its release.
>
> Guinevere Saenger, software engineer at GitHub and lead for the 1.17 release team, explained that sidecar containers gained increased focus “about a month ago,” and that its implementation “changes the pod spec, so this is a change that affects a lot of areas and needs to be handled with care.” She noted that it did move closer to completion and “will again be prioritized for 1.18.”

**The impact**: You can read between the lines to understand a lot more about the Kubernetes sausage-making process. It's got governance, tradeoffs, themes, and timeframes; all the stuff that is often invisible to consumers of a project.

_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/gartner-ectd-and-more-industry-trends

作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.information-age.com/gartner-top-10-infrastructure-and-operations-trends-2020-123486509/
[3]: https://superuser.openstack.org/articles/a-guide-to-kubernetes-etcd-all-you-need-to-know-to-set-up-etcd-clusters/
[4]: https://www.forbes.com/sites/forbescommunicationscouncil/2019/11/19/how-the-open-source-model-could-fuel-the-future-of-digital-marketing/#71b602fb20a5
[5]: https://www.sdxcentral.com/articles/news/kubernetes-integrates-interoperability-storage-waits-on-sidecars/2019/12/
[6]: https://kubernetes.io/blog/2019/12/06/when-youre-in-the-release-team-youre-family-the-kubernetes-1.16-release-interview/
@ -1,165 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Mint 19.3 “Tricia” Released: Here’s What’s New and How to Get it)
[#]: via: (https://itsfoss.com/linux-mint-19-3/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Linux Mint 19.3 “Tricia” Released: Here’s What’s New and How to Get it
======

_**Linux Mint 19.3 “Tricia” has been released. See what’s new in it and learn how to upgrade to Linux Mint 19.3.**_

The Linux Mint team has finally announced the release of Linux Mint 19.3, codenamed ‘Tricia’, with useful feature additions and a ton of improvements under the hood.

This is a point release based on the latest **Ubuntu 18.04.3** and it comes packed with the **Linux kernel 5.0**.

I downloaded and quickly tested the edition featuring the [Cinnamon 4.4][1] desktop environment. You may also try the Xfce or MATE edition of Linux Mint 19.3.

### Linux Mint 19.3: What’s New?

![Linux Mint 19 3 Desktop][2]

While this is an LTS release that will be supported until 2023, it still brings in a couple of useful features and improvements. Let me highlight some of them for you.

#### System Reports

![][3]

Right after installing Linux Mint 19.3 (or upgrading to it), you will notice a warning icon on the right side of the panel (taskbar).

When you click on it, you will see a list of potential issues that you can take care of to get the best out of your Linux Mint experience.

For starters, it will suggest, in the form of warnings, that you create a root password, install a language pack, or update software packages. This is particularly useful for making sure that you perform important actions even after following the first set of steps on the welcome screen.

#### Improved Language Settings

Along with the ability to install/set a language, you now also get the ability to change the time format.

So, the language settings are now more useful than ever before.

#### HiDPI Support

As a result of [HiDPI][4] support, the system tray icons will look crisp, and overall, you should get a pleasant user experience on a high-resolution display.

#### New Applications

![Linux Mint Drawing App][5]

With the new release, you will no longer find “**GIMP**” pre-installed.

Even though GIMP is a powerful utility, the team decided to add a simpler “**Drawing**” app that lets users easily crop/resize images and tweak them a little.

Also, **Gnote** replaces **Tomboy** as the default note-taking application on Linux Mint 19.3.

In addition to these two replacements, the Celluloid video player has been added in place of Xplayer. In case you did not know, Celluloid happens to be one of the [best open source video players][6] for Linux.

#### Cinnamon 4.4 Desktop

![Cinnamon 4 4 Desktop][7]

The new Cinnamon 4.4 desktop introduces a couple of new abilities, such as adjusting/tweaking the panel zones individually, as you can see in the screenshot above.

#### Other Improvements

There are several other improvements, including more customizability options in the file manager.

You can read more about the detailed changes in the [official release notes][8].

[Subscribe to our YouTube channel for more Linux videos][9]

### Linux Mint 19 vs 19.1 vs 19.2 vs 19.3: What’s the difference?

You probably already know that Linux Mint releases are based on Ubuntu long-term support releases. The Linux Mint 19 series is based on Ubuntu 18.04 LTS.

Ubuntu LTS releases get ‘point releases’ at intervals of a few months. A point release basically consists of the bug fixes and security updates that have been pushed since the initial release of the LTS version. This is similar to the Service Pack concept in Windows XP, if you remember it.

If you download Ubuntu 18.04 (which was released in April 2018) in 2019, you’ll get Ubuntu 18.04.2. The ISO image of 18.04.2 consists of 18.04 plus the bug fixes and security updates applied up to 18.04.2. Imagine if there were no point releases: right after [installing Ubuntu 18.04][10], you would have to install a few gigabytes of system updates. Not very convenient, right?

But Linux Mint does it slightly differently. Linux Mint has a major release based on an Ubuntu LTS release, followed by three minor releases based on the Ubuntu LTS point releases.

Mint 19 was based on Ubuntu 18.04, 19.1 on 18.04.1, and Mint 19.2 on Ubuntu 18.04.2. Similarly, Mint 19.3 is based on Ubuntu 18.04.3. It is worth noting that all Mint 19.x releases are long-term support releases and will get security updates till 2023.
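The mapping between Mint and Ubuntu releases can be captured in a tiny shell sketch (`mint_to_ubuntu` is a hypothetical helper for illustration; it covers only the Mint 19.x series described here):

```shell
# Look up which Ubuntu point release a given Mint 19.x release is based on.
mint_to_ubuntu() {
  case "$1" in
    19)   echo "Ubuntu 18.04"   ;;
    19.1) echo "Ubuntu 18.04.1" ;;
    19.2) echo "Ubuntu 18.04.2" ;;
    19.3) echo "Ubuntu 18.04.3" ;;
    *)    echo "unknown"        ;;
  esac
}

mint_to_ubuntu 19.3   # prints "Ubuntu 18.04.3"
```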
Now, if you are using Ubuntu 18.04 and keep your system updated, you’ll automatically get updated to 18.04.1, 18.04.2, etc. That’s not the case with Linux Mint.

Linux Mint minor releases also consist of _feature changes_ along with bug fixes and security updates, and this is the reason why updating Linux Mint 19 won’t automatically put you on 19.1.

Linux Mint gives you the option of whether you want the new features or not. For example, Mint 19.3 has Cinnamon 4.4 and several other visual changes. If you are happy with the existing features, you can stay on Mint 19.2. You’ll still get the necessary security and maintenance updates on Mint 19.2 till 2023.

Now that you understand the concept of minor releases and want the latest minor release, let’s see how to upgrade to Mint 19.3.

### Linux Mint 19.3: How to Upgrade?

Whether you have Linux Mint 19.1 or 19, you can follow these steps to [upgrade your Linux Mint version][11].

**Note**: _You should consider making a system snapshot (just in case) as a backup. In addition, the Linux Mint team advises you to disable the screensaver and upgrade Cinnamon spices (if installed) from the System Settings._

![][12]

1. Launch the Update Manager.
2. Refresh it to load the latest available updates (you can also change the mirror if you want).
3. Once done, click on the Edit menu to find the “**Upgrade to Linux Mint 19.3 Tricia**” option, similar to the image above.
4. Finally, just follow the on-screen instructions to complete the update.

Depending on your internet connection, it should take anything between a couple of minutes and 30 minutes.

### Don’t see the Mint 19.3 update yet? Here’s what you can do

If you don’t see the option to upgrade to Linux Mint 19.3 Tricia, don’t lose hope. Here are a couple of things you can do.

#### **Step 1: Make sure to use mint-upgrade-info version 1.1.3**

Make sure that mint-upgrade-info is updated to version 1.1.3. You can run the install command, which will update it to a newer version (if there is one):

```
sudo apt install mint-upgrade-info
```
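If you want to check the installed version from a terminal first, here is one way to do it (a sketch assuming a Debian-style system with `dpkg` and GNU coreutils; the `1.1.3` minimum is the version the article calls for):

```shell
required="1.1.3"
# Query the installed version; fall back to 0 if the package is missing.
installed="$(dpkg-query -W -f='${Version}' mint-upgrade-info 2>/dev/null || echo 0)"

# sort -V orders version strings numerically; if the required version
# sorts first, the installed one is at least as new.
if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
  echo "mint-upgrade-info $installed is new enough"
else
  echo "mint-upgrade-info is missing or too old; run: sudo apt install mint-upgrade-info"
fi
```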
#### **Step 2: Switch to default software sources**

Chances are that you are using a mirror close to you to get faster software downloads. But this could cause a problem, as mirrors might not have the new upgrade information yet.

Go to Software Sources and change the sources to default. Now run the Update Manager again and see if the Mint 19.3 upgrade is available.

### Download Linux Mint 19.3 ‘Tricia’

If you want to perform a fresh install, you can easily download the latest available version from the official download page (depending on what edition you want).

You will also find multiple mirrors available for downloading the ISOs; feel free to try the nearest mirror for a potentially faster download.

[Linux Mint 19.3][13]

**Wrapping Up**

Have you tried Linux Mint 19.3 yet? Let me know your thoughts in the comments down below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-mint-19-3/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://github.com/linuxmint/cinnamon/releases/tag/4.4.0
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/linux-mint-19-3-desktop.jpg?ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/linux-mint-system-report.jpg?ssl=1
[4]: https://wiki.archlinux.org/index.php/HiDPI
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/linux-mint-drawing-app.jpg?ssl=1
[6]: https://itsfoss.com/video-players-linux/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/cinnamon-4-4-desktop.jpg?ssl=1
[8]: https://linuxmint.com/rel_tricia_cinnamon.php
[9]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[10]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
[11]: https://itsfoss.com/upgrade-linux-mint-version/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/mintupgrade.png?ssl=1
[13]: https://linuxmint.com/download.php
@ -1,76 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Eliminating gender bias in open source software development, a database of microbes, and more open source news)
[#]: via: (https://opensource.com/article/19/12/news-december-21)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

Eliminating gender bias in open source software development, a database of microbes, and more open source news
======

Catch up on the biggest open source headlines from the past two weeks.

![Weekly news roundup with TV][1]

In this edition of our open source news roundup, we take a look at eliminating gender bias in open source software development, an open source database of microbes, an open source index for cooperatives, and more!

### Eliminating gender bias from open source development

It's a sad fact that certain groups, among them women, are woefully underrepresented in open source projects. It's like a bug in the open source development process. Fortunately, there are initiatives to make that underrepresentation a thing of the past. A study out of Oregon State University (OSU) intends to address the lack of women in open source development by "[finding these bugs and proposing redesigns around them][2], leading to more gender-inclusive tools used by software developers."

The study will look at tools commonly used in open source development — including Eclipse, GitHub, and Hudson — to determine if they "significantly discourage newcomers, especially women, from joining OSS projects." According to Igor Steinmacher, one of the principal investigators of the study, the study will examine "how people use tools because the 'bugs' may be embedded in how the tool was designed, which may place people with different cognitive styles at a disadvantage."

The developers of the tools being studied will walk through their software and answer questions based on specific personas. The researchers at OSU will suggest ways to redesign the software to eliminate gender bias and will "create a list of best practices for fixing gender-bias bugs in both products and processes."

### Canadian university compiles open source microbial database

What do you do when you have a vast amount of data but no way to effectively search and build upon it? You turn it into a database, of course. That's what researchers at Simon Fraser University in British Columbia, along with collaborators from around the globe, did with [information about chemical compounds created by bacteria and fungi][3]. Called the Natural Products Atlas, the database "holds information on nearly 25,000 natural compounds and serves as a knowledge base and repository for the global scientific community."

The Natural Products Atlas is licensed under a Creative Commons Attribution 4.0 International License. The [website for the Natural Products Atlas][4], which hosts the database, also includes a number of visualization tools and is fully searchable.

Roger Linington, an associate professor at SFU who spearheaded the creation of the database, said that having "all the available data in one place and in a standardized format means we can now index natural compounds for anyone to freely access and learn more about."

### Open source index for cooperatives

Europe has long been a hotbed of both open source development and open source adoption. While European governments strongly advocate open source, nonprofits have been following suit. One of those is Cooperatives Europe, which is developing "[open source software to allow users to index co-op information and resources in a standardised way][5]."

The idea behind the software, called Coop Starter, reinforces the [essential freedoms of free software][6]: it's intended to provide "education, training and information. The software may be used and repurposed by the public for their own needs and on their own infrastructure." Anyone can use it "to reference existing material on co-operative entrepreneurship" and can contribute "by sharing resources and information."

The [code for Coop Starter][7], along with a related WordPress plugin, is available from Cooperative Europe's GitLab repository.

#### In other news

* [Nancy recognised as France’s top digital free and collaborative public service][8]
* [Open Source and AI: Ready for primetime in government?][9]
* [Open Software Means Kinder Science][10]
* [New Open-Source CoE to be launched by Wipro and Oman’s Ministry of Tech & Communication][11]

_Thanks, as always, to Opensource.com staff members and [Correspondents][12] for their help this week._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/news-december-21

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
[2]: https://techxplore.com/news/2019-12-professors-gender-biased-bugs-open-source-software.html
[3]: https://www.sfu.ca/sfunews/stories/2019/12/sfu-global-collaboration-creates-world-s-first-open-source-datab.html
[4]: https://www.npatlas.org/joomla/
[5]: https://www.thenews.coop/144412/sector/regional-organisations/cooperatives-europe-builds-open-source-index-for-the-co-op-movement/
[6]: https://www.gnu.org/philosophy/free-sw.en.html
[7]: https://git.happy-dev.fr/startinblox/applications/coop-starter
[8]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/territoire-numerique-libre
[9]: https://federalnewsnetwork.com/commentary/2019/12/open-source-and-ai-ready-for-primetime-in-government/
[10]: https://blogs.scientificamerican.com/observations/open-software-means-kinder-science/
[11]: https://www.indianweb2.com/2019/12/11/new-open-source-coe-to-be-launched-by-wipro-and-omans-ministry-of-tech-communication/
[12]: https://opensource.com/correspondent-program
@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (fuzheng1998)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Shocking! EA is Permanently Banning Linux Gamers on Battlefield V)
[#]: via: (https://itsfoss.com/ea-banning-linux-gamers/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Shocking! EA is Permanently Banning Linux Gamers on Battlefield V
======

Just when I thought that [EA][1], as a game company, might be getting better after [its decision to make its games available on Steam][2], it turns out that isn’t the case.

In a [Reddit thread][3], a lot of Linux players are complaining about being banned by FairFight (the server-side anti-cheat engine used for BF V) just because they chose to play Battlefield V (BF V) on [Linux using Wine][4].

![][5]

### Is this a widespread issue?

Unfortunately, it seems to be the case for a number of Linux players using Wine to play Battlefield V on Linux.

You can also find users on the [Lutris Gaming forums][6] and the [Battlefield forums][7] talking about it.

Of course, the userbase playing Battlefield V on Linux isn’t huge – but it still matters, right?

### What’s exactly the issue here?

It looks like EA’s anti-cheat tech considers [DXVK][8] (a Vulkan-based implementation of DirectX that tries to solve compatibility issues) to be cheating.

So, basically, the compatibility layer that is being utilized to make it possible to run Battlefield V is being detected as a modified file through which you’re “**potentially**” cheating.

![Battlefield V on Lutris][9]

Even though this could be an innocent false positive for the anti-cheat engine, EA does not seem to acknowledge that at all.

Here’s how EA responded when one of the players wrote an email asking to have the ban lifted:

> After thoroughly investigating your account and concern, we found that your account was actioned correctly and will not remove this sanction from your account.

Also, with all this going on, [Lutris Gaming][10] seems to be quite furious about EA’s behavior and the permanent bans:

> It has come to our attention that several Battlefield 5 players have recently been banned for playing on Linux, and that EA has chosen not to revert these wrongful punishments. Due to this, we advise to refrain from playing any multiplayer games published by [@EA][11] in the future.
>
> — Lutris Gaming (@LutrisGaming) [January 2, 2020][12]

### Not just Battlefield V, it’s the same with Destiny 2

As pointed out by a Redditor in the same thread, Bungie also happens to consider Wine an emulator (which is against its policy) and banned Linux players a while back.

### EA needs to address the issue

_We have reached out to EA for a comment on the issue_. _And, we’re still waiting for a response._

I shall update the article if we get an official response from EA. However, EA should take Blizzard as an example, actually work on fixing the issue, and [reverse the bans on players using Linux][13].

I know that BF V does not offer native Linux support – but supporting the compatibility layer, and not treating it as cheating, would allow Linux users to experience a game they rightfully own (or are considering purchasing).

What are your thoughts on this? Let me know in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/ea-banning-linux-gamers/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.ea.com/
[2]: https://thenextweb.com/gaming/2019/10/29/ea-games-are-coming-back-to-steam-but-you-still-need-origin/
[3]: https://www.reddit.com/r/linux/comments/ej3q2p/ea_is_permanently_banning_linux_players_on/
[4]: https://itsfoss.com/install-latest-wine/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/reddit-thread-ea.jpg?ssl=1
[6]: https://forums.lutris.net/t/ea-banning-dxvk-on-battlefield-v/7810
[7]: https://forums.battlefield.com/en-us/discussion/197938/ea-banning-dxvk-on-battlefield-v-play-linux
[8]: https://github.com/doitsujin/dxvk
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/battlefield-v-lutris-gaming.png?ssl=1
[10]: https://lutris.net/
[11]: https://twitter.com/EA?ref_src=twsrc%5Etfw
[12]: https://twitter.com/LutrisGaming/status/1212827248430059520?ref_src=twsrc%5Etfw
[13]: https://www.altchar.com/game-news/blizzard-unbans-overwatch-players-who-used-linux-os-agF9y0G2gWjn
@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NSA cloud advice, Facebook open source year in review, and more industry trends)
[#]: via: (https://opensource.com/article/20/1/nsa-facebook-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

NSA cloud advice, Facebook open source year in review, and more industry trends
======

A weekly look at open source community and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [Facebook open source year in review][2]

> Last year was a busy one for our [open source][3] engineers. In 2019 we released 170 new open source projects, bringing our portfolio to a total of 579 [active repositories][3]. While it’s important for our internal engineers to contribute to these projects (and they certainly do — with more than 82,000 commits this year), we are also incredibly grateful for the massive support from external contributors. Approximately 2,500 external contributors committed more than 32,000 changes. In addition to these contributions, nearly 93,000 new people starred our projects this year, growing the most important component of any open source project — the community! Facebook Open Source would not be here without your contributions, so we want to thank you for your participation in 2019.

**The impact**: Facebook got nearly 40% more changes (32,000 external commits on top of 82,000 internal ones) than it would have had it developed these as closed projects. Organizations addressing similar challenges got an 82,000-commit boost in exchange. What a clear illustration of the business impact of open source development.
|
||||
|
||||
## [Cloud advice from the NSA][4]
|
||||
|
||||
> This document divides cloud vulnerabilities into four classes (misconfiguration, poor access control, shared tenancy vulnerabilities, and supply chain vulnerabilities) that encompass the vast majority of known vulnerabilities. Cloud customers have a critical role in mitigating misconfiguration and poor access control, but can also take actions to protect cloud resources from the exploitation of shared tenancy and supply chain vulnerabilities. Descriptions of each vulnerability class along with the most effective mitigations are provided to help organizations lock down their cloud resources. By taking a risk-based approach to cloud adoption, organizations can securely benefit from the cloud’s extensive capabilities.
|
||||
|
||||
**The impact**: The Fear, Uncertainty, and Doubt (FUD) that has been associated with cloud adoption is being debunked more all the time. None other than the US Department of Defense has done a lot of the thinking so you don't have to, and there is a good chance that their concerns are at least as dire as yours are.
|
||||
|
||||
## [With Kubernetes, China Minsheng Bank transformed its legacy applications][5]
|
||||
|
||||
> But all of CMBC’s legacy applications—for example, the core banking system, payment systems, and channel systems—were written in C and Java, using traditional architecture. “We wanted to do distributed applications because in the past we used VMs in our own data center, and that was quite expensive and with low resource utilization rate,” says Zhang. “Our biggest challenge is how to make our traditional legacy applications adaptable to the cloud native environment.” So far, around 20 applications are running in production on the Kubernetes platform, and 30 new applications are in active development to adopt the Kubernetes platform.
|
||||
|
||||
**The impact**: This illustrates nicely the challenges and opportunities facing businesses in a competitive environment, and suggests a common adoption pattern. Do new stuff the new way, and move the old stuff as it makes sense.
|
||||
|
||||
## [The '5 Rs' of the move to cloud native: Re-platform, re-host, re-factor, replace, retire][6]
|
||||
|
||||
> The bottom line is that telcos and service providers will go cloud native when it is cheaper for them to migrate to the cloud and pay cloud costs than it is to remain in the data centre. That time is now and by adhering to the "5 Rs" of the move to cloud native, Re-platform, Re-host, Re-factor, Replace and/or Retire, the path is open, clearly marked and the goal eminently achievable.
|
||||
|
||||
**The impact**: Cloud-native is basically used as a synonym for open source in this interview; there is no other type of technology that will deliver the same lift.
|
||||
|
||||
## [Fedora CoreOS out of preview][7]
|
||||
|
||||
> Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It’s the successor to both [Fedora Atomic Host][8] and [CoreOS Container Linux][9] and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host. For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][10].
|
||||
|
||||
**The impact**: Collapsing these two branches of the Linux family tree into one another moves the state of the art forward for everyone (once you get through the migration).
|
||||
|
||||
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/nsa-facebook-more-industry-trends
|
||||
|
||||
作者:[Tim Hildred][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/thildred
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
|
||||
[2]: https://opensource.com/article/20/1/hybrid-developer-future-industry-trends
|
||||
[3]: https://opensource.facebook.com/
|
||||
[4]: https://media.defense.gov/2020/Jan/22/2002237484/-1/-1/0/CSI-MITIGATING-CLOUD-VULNERABILITIES_20200121.PDF
|
||||
[5]: https://www.cncf.io/blog/2020/01/23/with-kubernetes-china-minsheng-bank-transformed-its-legacy-applications-and-moved-into-ai-blockchain-and-big-data/
|
||||
[6]: https://www.telecomtv.com/content/cloud-native/the-5-rs-of-the-move-to-cloud-native-re-platform-re-host-re-factor-replace-retire-37473/
|
||||
[7]: https://fedoramagazine.org/fedora-coreos-out-of-preview/
|
||||
[8]: https://www.projectatomic.io/
|
||||
[9]: https://coreos.com/os/docs/latest/
|
||||
[10]: https://fedoramagazine.org/introducing-fedora-coreos/
|
1
sources/news/README.md
Normal file
@ -0,0 +1 @@
|
||||
这里放新闻类文章,要求时效性
|
@ -1,221 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The Ultimate Guide to JavaScript Fatigue: Realities of our industry)
|
||||
[#]: via: (https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html)
|
||||
[#]: author: (Lucas Fernandes Da Costa https://lucasfcosta.com)
|
||||
|
||||
The Ultimate Guide to JavaScript Fatigue: Realities of our industry
|
||||
======
|
||||
|
||||
**Complaining about JS Fatigue is just like complaining about the fact that humanity has created too many tools to solve the problems we have** , from email to airplanes and spaceships.
|
||||
|
||||
Last week I gave a talk about this very same subject at the NebraskaJS 2017 Conference, and I got so much positive feedback that I thought this talk should also become a blog post in order to reach more people and help them deal with JS Fatigue and understand the realities of our industry. **My goal with this post is to change the way you think about software engineering in general and help you in any areas you might work on**.
|
||||
|
||||
One of the things that has inspired me to write this blog post and that totally changed my life is [this great post by Patrick McKenzie, called “Don’t Call Yourself a Programmer and other Career Advice”][1]. **I highly recommend you read that**. Most of this blog post is advice based on what Patrick has written in that post applied to the JavaScript ecosystem and with a few more thoughts I’ve developed during these last years working in the tech industry.
|
||||
|
||||
This first section is gonna be a bit philosophical, but I swear it will be worth reading.
|
||||
|
||||
### Realities of Our Industry 101
|
||||
|
||||
Just like Patrick has done in [his post][1], let’s start with the most basic and essential truth about our industry:
|
||||
|
||||
Software solves business problems
|
||||
|
||||
This is it. **Software does not exist to please us as programmers** and let us write beautiful code. Nor does it exist to create jobs for people in the tech industry. **Actually, it exists to kill as many jobs as possible, including ours**, and this is why basic income will become much more important in the next few years, but that’s a whole other subject.
|
||||
|
||||
I’m sorry to say that, but the reason things are that way is that there are only two things that matter in software engineering (and any other industry):
|
||||
|
||||
**Cost versus Revenue**
|
||||
|
||||
**The more you decrease cost and increase revenue, the more valuable you are**, and one of the most common ways of decreasing cost and increasing revenue is replacing human beings with machines, which are more effective and usually cost less in the long run.
|
||||
|
||||
You are not paid to write code
|
||||
|
||||
**Technology is not a goal.** Nobody cares about which programming language you are using, nobody cares about which frameworks your team has chosen, nobody cares about how elegant your data structures are and nobody cares about how good your code is. **The only thing anybody cares about is how much your software costs and how much revenue it generates**.
|
||||
|
||||
Writing beautiful code does not matter to your clients. We write beautiful code because it makes us more productive in the long run and this decreases cost and increases revenue.
|
||||
|
||||
The whole reason why we try not to write bugs is not that we value correctness, but that **our clients** value correctness. If you have ever seen a bug becoming a feature you know what I’m talking about. That bug exists but it should not be fixed. That happens because our goal is not to fix bugs, our goal is to generate revenue. If our bugs make clients happy then they increase revenue and therefore we are accomplishing our goals.
|
||||
|
||||
Reusable space rockets, self-driving cars, robots, artificial intelligence: these things do not exist just because someone thought it would be cool to create them. They exist because there are business interests behind them. And I’m not saying the people behind them just want money, I’m sure they think that stuff is also cool, but the truth is that if they were not economically viable or had any potential to become so, they would not exist.
|
||||
|
||||
Probably I should not even call this section “Realities of Our Industry 101”, maybe I should just call it “Realities of Capitalism 101”.
|
||||
|
||||
And given that our only goal is to increase revenue and decrease cost, I think we as programmers should be paying more attention to requirements and design and start thinking with our minds and participating more actively in business decisions, which is why it is extremely important to know the problem domain we are working on. How many times before have you found yourself trying to think about what should happen in certain edge cases that have not been thought before by your managers or business people?
|
||||
|
||||
In 1975, Boehm conducted research in which he found that about 64% of all errors in the software he was studying were caused by design, while only 36% were coding errors. Another study, [“Higher Order Software—A Methodology for Defining Software”][2], also states that **in the NASA Apollo project, about 73% of all errors were design errors**.
|
||||
|
||||
The whole reason why Design and Requirements exist is that they define what problems we’re going to solve and solving problems is what generates revenue.
|
||||
|
||||
> Without requirements or design, programming is the art of adding bugs to an empty text file.
|
||||
>
|
||||
> * Louis Srygley
|
||||
>
|
||||
|
||||
|
||||
This same principle also applies to the tools we’ve got available in the JavaScript ecosystem. Babel, webpack, react, Redux, Mocha, Chai, Typescript, all of them exist to solve a problem and we gotta understand which problem they are trying to solve, we need to think carefully about when most of them are needed, otherwise, we will end up having JS Fatigue because:
|
||||
|
||||
JS Fatigue happens when people use tools they don't need to solve problems they don't have.
|
||||
|
||||
As Donald Knuth once said: “Premature optimization is the root of all evil”. Remember that software only exists to solve business problems and most software out there is just boring, it does not have any high scalability or high-performance constraints. Focus on solving business problems, focus on decreasing cost and generating revenue because this is all that matters. Optimize when you need, otherwise you will probably be adding unnecessary complexity to your software, which increases cost, and not generating enough revenue to justify that.
|
||||
|
||||
This is why I think we should apply [Test Driven Development][3] principles to everything we do in our job. And by saying this I’m not just talking about testing. **I’m talking about waiting for problems to appear before solving them. This is what TDD is all about**. As Kent Beck himself says: “TDD reduces fear”, because it guides your steps and allows you to take small steps towards solving your problems. One problem at a time. If we do the same thing when deciding when to adopt new technologies, we will also reduce fear.
|
||||
|
||||
Solving one problem at a time also decreases [Analysis Paralysis][4], which is basically what happens when you open Netflix and spend three hours concerned about making the optimal choice instead of actually watching something. By solving one problem at a time we reduce the scope of our decisions and by reducing the scope of our decisions we have fewer choices to make and by having fewer choices to make we decrease Analysis Paralysis.
|
||||
|
||||
Have you ever thought about how much easier it was to decide what you were going to watch when there were only a few TV channels available? Or how much easier it was to decide which game you were going to play when you had only a few cartridges at home?
|
||||
|
||||
### But what about JavaScript?
|
||||
|
||||
By the time I’m writing this post NPM has 489,989 packages and tomorrow approximately 515 new ones are going to be published.
|
||||
|
||||
And the packages we use and complain about have a history behind them we must comprehend in order to understand why we need them. **They are all trying to solve problems.**
|
||||
|
||||
Babel, Dart, CoffeeScript and other transpilers come from our necessity of writing code other than JavaScript but making it runnable in our browsers. Babel even lets us write new generation JavaScript and make sure it will work even on older browsers, which has always been a great problem given the inconsistencies and different amount of compliance to the ECMA Specification between browsers. Even though the ECMA spec is becoming more and more solid these days, we still need Babel. And if you want to read more about Babel’s history I highly recommend that you read [this excellent post by Henry Zhu][5].
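To make the idea concrete, here is a hand-written sketch of the kind of rewrite Babel automates: modern syntax (shown in comments) turned into older equivalents that run on legacy browsers. The exact output Babel emits differs; this only illustrates the principle:

```javascript
// Modern: const double = (n) => n * 2;
// An arrow function becomes a plain function expression.
var double = function (n) {
  return n * 2;
};

// Modern: const [first, ...rest] = [1, 2, 3];
// Destructuring with a rest element becomes index access plus slice.
var arr = [1, 2, 3];
var first = arr[0];
var rest = arr.slice(1);

console.log(double(first), rest);
```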
|
||||
|
||||
Module bundlers such as Webpack and Browserify also have their reason to exist. If you remember well, not so long ago we used to suffer a lot with lots of `script` tags and making them work together. They used to pollute the global namespace and it was reasonably hard to make them work together when one depended on the other. In order to solve this [`Require.js`][6] was created, but it still had its problems, it was not that straightforward and its syntax also made it prone to other problems, as you can see [in this blog post][7]. Then Node.js came with `CommonJS` imports, which were synchronous, simple and clean, but we still needed a way to make that work on our browsers and this is why we needed Webpack and Browserify.
|
||||
|
||||
And Webpack itself actually solves more problems than that by allowing us to deal with CSS, images and many other resources as if they were JavaScript dependencies.
|
||||
|
||||
Front-end frameworks are a bit more complicated, but the reason why they exist is to reduce the cognitive load when we write code so that we don’t need to worry about manipulating the DOM ourselves or even dealing with messy browser APIs (another problem JQuery came to solve), which is not only error prone but also not productive.
|
||||
|
||||
This is what we have been doing this whole time in computer science. We use low-level abstractions and build even more abstractions on top of them. The more we worry about describing how our software should work instead of making it work, the more productive we are.
|
||||
|
||||
But all those tools have something in common: **they exist because the web platform moves too fast**. Nowadays we’re using web technology everywhere: in web browsers, in desktop applications, in phone applications or even in watch applications.
|
||||
|
||||
This evolution also creates problems we need to solve. PWAs, for example, do not exist only because they’re cool and we programmers have fun writing them. Remember the first section of this post: **PWAs exist because they create business value**.
|
||||
|
||||
And usually standards are not fast enough to be created and therefore we need to create our own solutions to these things, which is why it is great to have such a vibrant and creative community with us. We’re solving problems all the time and **we are allowing natural selection to do its job**.
|
||||
|
||||
The tools that suit us better thrive, get more contributors and develop themselves more quickly and sometimes other tools end up incorporating the good ideas from the ones that thrive and becoming even more popular than them. This is how we evolve.
|
||||
|
||||
By having more tools we also have more choices. If you remember the UNIX philosophy well, it states that we should aim at creating programs that do one thing and do it well.
|
||||
|
||||
We can clearly see this happening in the JS testing environment, for example, where we have Mocha for running tests and Chai for doing assertions, while in Java JUnit tries to do all these things. This means that if we have a problem with one of them or if we find another one that suits us better, we can simply replace that small part and still have the advantages of the other ones.
|
||||
|
||||
The UNIX philosophy also states that we should write programs that work together. And this is exactly what we are doing! Take a look at Babel, Webpack and React, for example. They work very well together but we still do not need one to use the other. In the testing environment, for example, if we’re using Mocha and Chai all of a sudden we can just install Karma and run those same tests in multiple environments.
|
||||
|
||||
### How to Deal With It
|
||||
|
||||
My first advice for anyone suffering from JS Fatigue would definitely be to stay aware that **you don’t need to know everything**. Trying to learn it all at once, even when we don’t have to do so, only increases the feeling of fatigue. Go deep in areas that you love and for which you feel an inner motivation to study and adopt a lazy approach when it comes to the other ones. I’m not saying that you should be lazy, I’m just saying that you can learn those only when needed. Whenever you face a problem that requires you to use a certain technology to solve it, go learn.
|
||||
|
||||
Another important thing to say is that **you should start from the beginning**. Make sure you have learned enough about JavaScript itself before using any JavaScript frameworks. This is the only way you will be able to understand them and bend them to your will, otherwise, whenever you face an error you have never seen before you won’t know which steps to take in order to solve it. Learning core web technologies such as CSS, HTML5, JavaScript and also computer science fundamentals or even how the HTTP protocol works will help you master any other technologies a lot more quickly.
|
||||
|
||||
But please, don’t get too attached to that. Sometimes you gotta risk yourself and start doing things on your own. As Sacha Greif has written in [this blog post][8], spending too much time learning the fundamentals is just like trying to learn how to swim by studying fluid dynamics. Sometimes you just gotta jump into the pool and try to swim by yourself.
|
||||
|
||||
And please, don’t get too attached to a single technology. All of the things we have available nowadays have already been invented in the past. Of course, they have different features and a brand new name, but, in their essence, they are all the same.
|
||||
|
||||
If you look at NPM, it is nothing new: we already had Maven Central and RubyGems quite a long time ago.
|
||||
|
||||
In order to transpile your code, Babel applies the very same principles and theory as some of the oldest and most well-known compilers, such as the GCC.
|
||||
|
||||
Even JSX is not a new idea: E4X (ECMAScript for XML) already existed more than 10 years ago.
|
||||
|
||||
Now you might ask: “what about Gulp, Grunt and NPM Scripts?” Well, I’m sorry, but we could already solve all those problems with GNU Make back in 1976. And actually, there are a reasonable number of JavaScript projects that still use it, such as Chai.js, for example. But we do not do that because we are hipsters that like vintage stuff. We use `make` because it solves our problems, and that is what you should aim at doing, as we’ve talked about before.
|
||||
|
||||
If you really want to understand a certain technology and be able to solve any problems you might face, please, dig deep. One of the most decisive factors to success is curiosity, so **dig deep into the technologies you like**. Try to understand them from bottom-up and whenever you think something is just “magic”, debunk that myth by exploring the codebase by yourself.
|
||||
|
||||
In my opinion, there is no better quote than this one by Richard Feynman when it comes to really learning something:
|
||||
|
||||
> What I cannot create, I do not understand
|
||||
|
||||
And just below this phrase, [in the same blackboard, Richard also wrote][9]:
|
||||
|
||||
> Know how to solve every problem that has been solved
|
||||
|
||||
Isn’t this just amazing?
|
||||
|
||||
When Richard said that, he was talking about being able to take any theoretical result and re-derive it, but I think the exact same principle can be applied to software engineering. The tools that solve our problems have already been invented, they already exist, so we should be able to get to them all by ourselves.
|
||||
|
||||
This is the very reason I love [some of the videos available in Egghead.io][10] in which Dan Abramov explains how to implement certain features that exist in Redux from scratch or [blog posts that teach you how to build your own JSX renderer][11].
|
||||
|
||||
So why not try to implement these things by yourself, or go to GitHub and read their codebases in order to understand how they work? I’m sure you will find a lot of useful knowledge out there. Comments and tutorials might lie and be incorrect sometimes; the code cannot.
|
||||
|
||||
Another thing that we have been talking a lot in this post is that **you should not get ahead of yourself**. Follow a TDD approach and solve one problem at a time. You are paid to increase revenue and decrease cost and you do this by solving problems, this is the reason why software exists.
|
||||
|
||||
And since we love comparing our role to the ones related to civil engineering, let’s do a quick comparison between software development and civil engineering, just as [Sam Newman does in his brilliant book called “Building Microservices”][12].
|
||||
|
||||
We love calling ourselves “engineers” or “architects”, but is that term really correct? We have been developing software for what we know as computers for less than a hundred years, while the Colosseum, for example, has existed for about two thousand years.
|
||||
|
||||
When was the last time you saw a bridge fall, and when was the last time your telephone or your browser crashed?
|
||||
|
||||
In order to explain this, I’ll use an example I love.
|
||||
|
||||
This is the beautiful and awesome city of Barcelona:
|
||||
|
||||
![The City of Barcelona][13]
|
||||
|
||||
When we look at it this way and from this distance, it just looks like any other city in the world, but when we look at it from above, this is how Barcelona looks:
|
||||
|
||||
![Barcelona from above][14]
|
||||
|
||||
As you can see, every block has the same size and all of them are very organized. If you’ve ever been to Barcelona you will also know how good it is to move through the city and how well it works.
|
||||
|
||||
But the people that planned Barcelona could not predict what it was going to look like in the next two or three hundred years. In cities, people come in and people move through it all the time so what they had to do was make it grow organically and adapt as the time goes by. They had to be prepared for changes.
|
||||
|
||||
This very same thing happens to our software. It evolves quickly, refactors are often needed and requirements change more frequently than we would like them to.
|
||||
|
||||
So, instead of acting like a Software Engineer, act as a Town Planner. Let your software grow organically and adapt as needed. Solve problems as they come by but make sure everything still has its place.
|
||||
|
||||
Doing this when it comes to software is even easier than doing this in cities due to the fact that **software is flexible, civil engineering is not**. **In the software world, our build time is compile time**. In Barcelona we cannot simply destroy buildings to give space to new ones, in Software we can do that a lot easier. We can break things all the time, we can make experiments because we can build as many times as we want and it usually takes seconds and we spend a lot more time thinking than building. Our job is purely intellectual.
|
||||
|
||||
So **act like a town planner, let your software grow and adapt as needed**.
|
||||
|
||||
By doing this you will also have better abstractions and know when it’s the right time to adopt them.
|
||||
|
||||
As Sam Koblenski says:
|
||||
|
||||
> Abstractions only work well in the right context, and the right context develops as the system develops.
|
||||
|
||||
Nowadays something I see very often is people looking for boilerplates when they’re trying to learn a new technology, but, in my opinion, **you should avoid boilerplates when you’re starting out**. Of course boilerplates and generators are useful if you are already experienced, but they take a lot of control out of your hands and therefore you won’t learn how to set up a project and you won’t understand exactly where each piece of the software you are using fits.
|
||||
|
||||
When you feel like you are struggling more than necessary to get something simple done, it might be the right time for you to look for an easier way to do this. In our role **you should strive to be lazy** , you should work to not work. By doing that you have more free time to do other things and this decreases cost and increases revenue, so that’s another way of accomplishing your goal. You should not only work harder, you should work smarter.
|
||||
|
||||
Probably someone has already had the same problem as you’re having right now, but if nobody did it might be your time to shine and build your own solution and help other people.
|
||||
|
||||
But sometimes you will not be able to realize you could be more effective in your tasks until you see someone doing them better. This is why it is so important to **talk to people**.
|
||||
|
||||
By talking to people we share experiences that help each other’s careers, discover new tools to improve our workflow and, even more importantly, learn how others solve their problems. This is why I like reading blog posts in which companies explain how they solve their problems.
|
||||
|
||||
Especially in our area we like to think that Google and StackOverflow can answer all our questions, but we still need to know which questions to ask. I’m sure you have already had a problem you could not find a solution for because you didn’t know exactly what was happening and therefore didn’t know what was the right question to ask.
|
||||
|
||||
But if I needed to sum this whole post in a single advice, it would be:
|
||||
|
||||
Solve problems.
|
||||
|
||||
Software is not a magic box, software is not poetry (unfortunately). It exists to solve problems and improves peoples’ lives. Software exists to push the world forward.
|
||||
|
||||
**Now it’s your time to go out there and solve problems**.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html
|
||||
|
||||
作者:[Lucas Fernandes Da Costa][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://lucasfcosta.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-programmer/
|
||||
[2]: http://ieeexplore.ieee.org/document/1702333/
|
||||
[3]: https://en.wikipedia.org/wiki/Test_Driven_Development
|
||||
[4]: https://en.wikipedia.org/wiki/Analysis_paralysis
|
||||
[5]: https://babeljs.io/blog/2016/12/07/the-state-of-babel
|
||||
[6]: http://requirejs.org
|
||||
[7]: https://benmccormick.org/2015/05/28/moving-past-requirejs/
|
||||
[8]: https://medium.freecodecamp.org/a-study-plan-to-cure-javascript-fatigue-8ad3a54f2eb1
|
||||
[9]: https://www.quora.com/What-did-Richard-Feynman-mean-when-he-said-What-I-cannot-create-I-do-not-understand
|
||||
[10]: https://egghead.io/lessons/javascript-redux-implementing-store-from-scratch
|
||||
[11]: https://jasonformat.com/wtf-is-jsx/
|
||||
[12]: https://www.barnesandnoble.com/p/building-microservices-sam-newman/1119741399/2677517060476?st=PLA&sid=BNB_DRS_Marketplace+Shopping+Books_00000000&2sid=Google_&sourceId=PLGoP4760&k_clickid=3x4760
|
||||
[13]: /assets/barcelona-city.jpeg
|
||||
[14]: /assets/barcelona-above.jpeg
|
||||
[15]: https://twitter.com/thewizardlucas
|
@ -1,513 +0,0 @@
|
||||
What every software engineer should know about search
|
||||
============================================================
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/2000/1*5AlsVRQrewLw74uHYTZ36w.jpeg)
|
||||
|
||||
|
||||
### Want to build or improve a search experience? Start here.
|
||||
|
||||
Ask a software engineer: “[How would you add search functionality to your product?][78]” or “[How do I build a search engine?][79]” You’ll probably immediately hear back something like: “Oh, we’d just launch an ElasticSearch cluster. Search is easy these days.”
|
||||
|
||||
But is it? Numerous current products [still][80] [have][81] [suboptimal][82] [search][83] [experiences][84]. Any true search expert will tell you that few engineers have a very deep understanding of how search engines work, knowledge that’s often needed to improve search quality.
|
||||
|
||||
Even though many open source software packages exist, and the research is vast, the knowledge around building solid search experiences is limited to a select few. Ironically, [searching online][85] for search-related expertise doesn’t yield any recent, thoughtful overviews.
|
||||
|
||||
#### Emoji Legend
|
||||
|
||||
```
❗ “Serious” gotcha: consequences of ignorance can be deadly
🔷 Especially notable idea or piece of technology
☁️ Cloud/SaaS
🍺 Open source / free software
🦏 JavaScript
🐍 Python
☕ Java
🇨 C/C++
```
|
||||
|
||||
### Why read this?
|
||||
|
||||
Think of this post as a collection of insights and resources that could help you to build search experiences. It can’t be a complete reference, of course, but hopefully we can improve it based on feedback (please comment or reach out!).
|
||||
|
||||
I’ll point at some of the most popular approaches, algorithms, techniques, and tools, based on my work on general purpose and niche search experiences of varying sizes at Google, Airbnb and several startups.
|
||||
|
||||
❗️Not appreciating or understanding the scope and complexity of search problems can lead to bad user experiences, wasted engineering effort, and product failure.
|
||||
|
||||
If you’re impatient or already know a lot of this, you might find it useful to jump ahead to the tools and services sections.
|
||||
|
||||
### Some philosophy
|
||||
|
||||
This is a long read. But most of what we cover has four underlying principles:
|
||||
|
||||
#### 🔷 Search is an inherently messy problem:
|
||||
|
||||
* Queries are highly variable. The search problems are highly variablebased on product needs.
|
||||
|
||||
* Think about how different Facebook search (searching a graph of people).
|
||||
|
||||
* YouTube search (searching individual videos).
|
||||
|
||||
* Or how different both of those are are from Kayak ([air travel planning is a really hairy problem][2]).
|
||||
|
||||
* Google Maps (making sense of geo-spacial data).
|
||||
|
||||
* Pinterest (pictures of a brunch you might cook one day).
|
||||
|
||||
#### Quality, metrics, and processes matter a lot:
|
||||
|
||||
* There is no magic bullet (like PageRank) nor a magic ranking formula that makes for a good approach. Processes are always evolving collection of techniques and processes that solve aspects of the problem and improve overall experience, usually gradually and continuously.
|
||||
|
||||
* ❗️In other words, search is not just just about building software that does ranking or retrieval (which we will discuss below) for a specific domain. Search systems are usually an evolving pipeline of components that are tuned and evolve over time and that build up to a cohesive experience.
|
||||
|
||||
* In particular, the key to success in search is building processes for evaluation and tuning into the product and development cycles. A search system architect should think about processes and metrics, not just technologies.
|
||||
|
||||
#### Use existing technologies first:
|
||||
|
||||
* As in most engineering problems, don’t reinvent the wheel yourself. When possible, use existing services or open source tools. If an existing SaaS (such as [Algolia][3] or managed Elasticsearch) fits your constraints and you can afford to pay for it, use it. This solution will likely will be the best choice for your product at first, even if down the road you need to customize, enhance, or replace it.
|
||||
|
||||
#### ❗️Even if you buy, know the details:
|
||||
|
||||
* Even if you are using an existing open source or commercial solution, you should have some sense of the complexity of the search problem and where there are likely to be pitfalls.
|
||||
|
||||
### Theory: the search problem
|
||||
|
||||
Search is different for every product, and choices depend on many technical details of the requirements. It helps to identify the key parameters of your search problem:
|
||||
|
||||
1. Size: How big is the corpus (a complete set of documents that need to be searched)? Is it thousands or billions of documents?
|
||||
|
||||
2. Media: Are you searching through text, images, graphical relationships, or geospatial data?
|
||||
|
||||
3. 🔷 Corpus control and quality: Are the sources for the documents under your control, or coming from a (potentially adversarial) third party? Are all the documents ready to be indexed or need to be cleaned up and selected?
|
||||
|
||||
4. Indexing speed: Do you need real-time indexing, or is building indices in batch is fine?
|
||||
|
||||
5. Query language: Are the queries structured, or you need to support unstructured ones?
|
||||
|
||||
6. Query structure: Are your queries textual, images, sounds? Street addresses, record ids, people’s faces?
|
||||
|
||||
7. Context-dependence: Do the results depend on who the user is, what is their history with the product, their geographical location, time of the day etc?
|
||||
|
||||
8. Suggest support: Do you need to support incomplete queries?
|
||||
|
||||
9. Latency: What are the serving latency requirements? 100 milliseconds or 100 seconds?
|
||||
|
||||
10. Access control: Is it entirely public or should users only see a restricted subset of the documents?
|
||||
|
||||
11. Compliance: Are there compliance or organizational limitations?
|
||||
|
||||
12. Internationalization: Do you need to support documents with multilingual character sets or Unicode? (Hint: Always use UTF-8 unless you really know what you’re doing.) Do you need to support a multilingual corpus? Multilingual queries?
|
||||
|
||||
Thinking through these points up front can help you make significant choices designing and building individual search system components.
|
||||
|
||||
** 此处有Canvas,请手动处理 **
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1600/1*qTK1iCtyJUr4zOyw4IFD7A.jpeg)
|
||||
A production indexing pipeline.
|
||||
|
||||
### Theory: the search pipeline
|
||||
|
||||
Now let’s go through a list of search sub-problems. These are usually solved by separate subsystems that form a pipeline. What that means is that a given subsystem consumes the output of previous subsystems, and produces input for the following subsystems.
|
||||
|
||||
This leads to an important property of the ecosystem: once you change how an upstream subsystem works, you need to evaluate the effect of the change and possibly change the behavior downstream.
|
||||
|
||||
Here are the most important problems you need to solve:
|
||||
|
||||
#### Index selection:
|
||||
|
||||
given a set of documents (e.g. the entirety of the Internet, all the Twitter posts, all the pictures on Instagram), select a potentially smaller subset of documents that may be worthy for consideration as search results and only include those in the index, discarding the rest. This is done to keep your indexes compact, and is almost orthogonal to selecting the documents to show to the user. Examples of particular classes of documents that don’t make the cut may include:
|
||||
|
||||
#### Spam:
|
||||
|
||||
oh, all the different shapes and sizes of search spam! A giant topic in itself, worthy of a separate guide. [A good web spam taxonomy overview][86].
|
||||
|
||||
#### Undesirable documents:
|
||||
|
||||
domain constraints might require filtering: [porn][87], illegal content, etc. The techniques are similar to spam filtering, probably with extra heuristics.
|
||||
|
||||
#### Duplicates:
|
||||
|
||||
Or near-duplicates and redundant documents. Can be done with [Locality-sensitive hashing][88], [similarity measures][89], clustering techniques or even [clickthrough data][90]. A [good overview][91] of techniques.
|
||||
|
||||
#### Low-utility documents:
|
||||
|
||||
The definition of utility depends highly on the problem domain, so it’s hard to recommend the approaches here. Some ideas are: it might be possible to build a utility function for your documents; heuristics might work, or example an image that contains only black pixels is not a useful document; utility might be learned from user behavior.
|
||||
|
||||
#### Index construction:
|
||||
|
||||
For most search systems, document retrieval is performed using an [inverted index][92] — often just called the index.
|
||||
|
||||
* The index is a mapping of search terms to documents. A search term could be a word, an image feature or any other document derivative useful for query-to-document matching. The list of the documents for a given term is called a [posting list][1]. It can be sorted by some metric, like document quality.
|
||||
|
||||
* Figure out whether you need to index the data in real time.❗️Many companies with large corpora of documents use a batch-oriented indexing approach, but then find this is unsuited to a product where users expect results to be current.
|
||||
|
||||
* With text documents, term extraction usually involves using NLP techniques, such as stop lists, [stemming][4] and [entity extraction][5]; for images or videos computer vision methods are used etc.
|
||||
|
||||
* In addition, documents are mined for statistical and meta information, such as references to other documents (used in the famous [PageRank][6]ranking signal), [topics][7], counts of term occurrences, document size, entities A mentioned etc. That information can be later used in ranking signal construction or document clustering. Some larger systems might contain several indexes, e.g. for documents of different types.
|
||||
|
||||
* Index formats. The actual structure and layout of the index is a complex topic, since it can be optimized in many ways. For instance there are [posting lists compression methods][8], one could target [mmap()able data representation][9] or use[ LSM-tree][10] for continuously updated index.
|
||||
|
||||
#### Query analysis and document retrieval:
|
||||
|
||||
Most popular search systems allow non-structured queries. That means the system has to extract structure out of the query itself. In the case of an inverted index, you need to extract search terms using [NLP][93] techniques.
|
||||
|
||||
The extracted terms can be used to retrieve relevant documents. Unfortunately, most queries are not very well formulated, so it pays to do additional query expansion and rewriting, like:
|
||||
|
||||
* [Term re-weighting][11].
|
||||
|
||||
* [Spell checking][12]. Historical query logs are very useful as a dictionary.
|
||||
|
||||
* [Synonym matching][13]. [Another survey][14].
|
||||
|
||||
* [Named entity recognition][15]. A good approach is to use [HMM-based language modeling][16].
|
||||
|
||||
* Query classification. Detect queries of particular type. For example, Google Search detects queries that contain a geographical entity, a porny query, or a query about something in the news. The retrieval algorithm can then make a decision about which corpora or indexes to look at.
|
||||
|
||||
* Expansion through [personalization][17] or [local context][18]. Useful for queries like “gas stations around me”.
|
||||
|
||||
#### Ranking:
|
||||
|
||||
Given a list of documents (retrieved in the previous step), their signals, and a processed query, create an optimal ordering (ranking) for those documents.
|
||||
|
||||
Originally, most ranking models in use were hand-tuned weighted combinations of all the document signals. Signal sets might include PageRank, clickthrough data, topicality information and [others][94].
|
||||
|
||||
To further complicate things, many of those signals, such as PageRank, or ones generated by [statistical language models][95] contain parameters that greatly affect the performance of a signal. Those have to be hand-tuned too.
|
||||
|
||||
Lately, 🔷 [learning to rank][96], signal-based discriminative supervised approaches are becoming more and more popular. Some popular examples of LtR are [McRank][97] and [LambdaRank][98] from Microsoft, and [MatrixNet][99] from Yandex.
|
||||
|
||||
A new, [vector space based approach][100] for semantic retrieval and ranking is gaining popularity lately. The idea is to learn individual low-dimensional vector document representations, then build a model which maps queries into the same vector space.
|
||||
|
||||
Then, retrieval is just finding several documents that are closest by some metric (e.g. Eucledian distance) to the query vector. Ranking is the distance itself. If the mapping of both the documents and queries is built well, the documents are chosen not by a fact of presence of some simple pattern (like a word), but how close the documents are to the query by _meaning_ .
|
||||
|
||||
### Indexing pipeline operation
|
||||
|
||||
Usually, each of the above pieces of the pipeline must be operated on a regular basis to keep the search index and search experience current.
|
||||
|
||||
❗️Operating a search pipeline can be complex and involve a lot of moving pieces. Not only is the data moving through the pipeline, but the code for each module and the formats and assumptions embedded in the data will change over time.
|
||||
|
||||
A pipeline can be run in “batch” or based on a regular or occasional basis (if indexing speed does not need to be real time) or in a streamed way (if real-time indexing is needed) or based on certain triggers.
|
||||
|
||||
Some complex search engines (like Google) have several layers of pipelines operating on different time scales — for example, a page that changes often (like [cnn.com][101]) is indexed with a higher frequency than a static page that hasn’t changed in years.
|
||||
|
||||
### Serving systems
|
||||
|
||||
Ultimately, the goal of a search system is to accept queries, and use the index to return appropriately ranked results. While this subject can be incredibly complex and technical, we mention a few of the key aspects to this part of the system.
|
||||
|
||||
* Performance: users notice when the system they interact with is laggy. ❗️Google has done [extensive research][19], and they have noticed that number of searches falls 0.6%, when serving is slowed by 300ms. They recommend to serve results under 200 ms for most of your queries. A good article [on the topic][20]. This is the hard part: the system needs to collect documents from, possibly, many computers, than merge them into possible a very long list and then sort that list in the ranking order. To complicate things further, ranking might be query-dependent, so, while sorting, the system is not just comparing 2 numbers, but performing computation.
|
||||
|
||||
* 🔷 Caching results: is often necessary to achieve decent performance. ❗️ But caches are just one large gotcha. The might show stale results when indices are updated or some results are blacklisted. Purging caches is a can of warm of itself: a search system might not have the capacity to serve the entire query stream with an empty (cold) cache, so the [cache needs to be pre-warmed][21] before the queries start arriving. Overall, caches complicate a system’s performance profile. Choosing a cache size and a replacement algorithm is also a [challenge][22].
|
||||
|
||||
* Availability: is often defined by an uptime/(uptime + downtime) metric. When index is distributed, in order to serve any search results, the system often needs to query all the shards for their share of results. ❗️That means, that if one shard is unavailable, the entire search system is compromised. The more machines are involved in serving the index — the higher the probability of one of them becoming defunct and bringing the whole system down.
|
||||
|
||||
* Managing multiple indices: Indices for large systems may separated into shards (pieces) or divided by media type or indexing cadence (fresh versus long-term indices). Results can then be merged.
|
||||
|
||||
* Merging results of different kinds: e.g. Google showing results from Maps, News etc.
|
||||
|
||||
** 此处有Canvas,请手动处理 **
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1600/1*M8WQu17E7SDziV0rVwUKbw.jpeg)
|
||||
A human rater. Yeah, you should still have those.
|
||||
|
||||
### Quality, evaluation, and improvement
|
||||
|
||||
So you’ve launched your indexing pipeline and search servers, and it’s all running nicely. Unfortunately the road to a solid search experience only begins with running infrastructure.
|
||||
|
||||
Next, you’ll need to build a set of processes around continuous search quality evaluation and improvement. In fact, this is actually most of the work and the hardest problem you’ll have to solve.
|
||||
|
||||
🔷 What is quality? First, you’ll need to determine (and get your boss or the product lead to agree), what quality means in your case:
|
||||
|
||||
* Self-reported user satisfaction (includes UX)
|
||||
|
||||
* Perceived relevance of the returned results (not including UX)
|
||||
|
||||
* Satisfaction relative to competitors
|
||||
|
||||
* Satisfaction relative performance of the previous version of the search engine (e.g. last week)
|
||||
|
||||
* [User engagement][23]
|
||||
|
||||
Metrics: Some of these concepts can be quite hard to quantify. On the other hand, it’s incredibly useful to be able to express how well a search engine is performing in a single number, a quality metric.
|
||||
|
||||
Continuously computing such a metric for your (and your competitors’) system you can both track your progress and explain how well you are doing to your boss. Here are some classical ways to quantify quality, that can help you construct your magic quality metric formula:
|
||||
|
||||
* [Precision][24] and [recall][25] measure how well the retrieved set of documents corresponds to the set you expected to see.
|
||||
|
||||
* [F score][26] (specifically F1 score) is a single number, that represents both precision and recall well.
|
||||
|
||||
* [Mean Average Precision][27] (MAP) allows to quantify the relevance of the top returned results.
|
||||
|
||||
* 🔷 [Normalized Discounted Cumulative Gain][28] (nDCG) is like MAP, but weights the relevance of the result by its position.
|
||||
|
||||
* [Long and short clicks][29] — Allow to quantify how useful the results are to the real users.
|
||||
|
||||
* [A good detailed overview][30].
|
||||
|
||||
🔷 Human evaluations: Quality metrics might seem like statistical calculations, but they can’t all be done by automated calculations. Ultimately, metrics need to represent subjective human evaluation, and this is where a “human in the loop” comes into play.
|
||||
|
||||
❗️Skipping human evaluation is probably the most spread reason of sub-par search experiences.
|
||||
|
||||
Usually, at early stages the developers themselves evaluate the results manually. At later point [human raters][102] (or assessors) may get involved. Raters typically use custom tools to look at returned search results and provide feedback on the quality of the results.
|
||||
|
||||
Subsequently, you can use the feedback signals to guide development, help make launch decisions or even feed them back into the index selection, retrieval or ranking systems.
|
||||
|
||||
Here is the list of some other types of human-driven evaluation, that can be done on a search system:
|
||||
|
||||
* Basic user evaluation: The user ranks their satisfaction with the whole experience
|
||||
|
||||
* Comparative evaluation: Compare with other search results (compare with search results from earlier versions of the system or competitors)
|
||||
|
||||
* Retrieval evaluation: The query analysis and retrieval quality is often evaluated using manually constructed query-document sets. A user is shown a query and the list of the retrieved documents. She can then mark all the documents that are relevant to the query, and the ones that are not. The resulting pairs of (query, [relevant docs]) are called a “golden set”. Golden sets are remarkably useful. For one, an engineer can set up automatic retrieval regression tests using those sets. The selection signal from golden sets can also be fed back as ground truth to term re-weighting and other query re-writing models.
|
||||
|
||||
* Ranking evaluation: Raters are presented with a query and two documents side-by-side. The rater must choose the document that fits the query better. This creates a partial ordering on the documents for a given query. That ordering can be later be compared to the output of the ranking system. The usual ranking quality measures used are MAP and nDCG.
|
||||
|
||||
#### Evaluation datasets:
|
||||
|
||||
One should start thinking about the datasets used for evaluation (like “golden sets” mentioned above) early in the search experience design process. How you collect and update them? How you push them to the production eval pipeline? Is there a built-in bias?
|
||||
|
||||
Live experiments:
|
||||
|
||||
After your search engine catches on and gains enough users, you might want to start conducting [live search experiments][103] on a portion of your traffic. The basic idea is to turn some optimization on for a group of people, and then compare the outcome with that of a “control” group — a similar sample of your users that did not have the experiment feature on for them. How you would measure the outcome is, once again, very product specific: it could be clicks on results, clicks on ads etc.
|
||||
|
||||
Evaluation cycle time: How fast you improve your search quality is directly related to how fast you can complete the above cycle of measurement and improvement. It is essential from the beginning to ask yourself, “how fast can we measure and improve our performance?”
|
||||
|
||||
Will it take days, hours, minutes or seconds to make changes and see if they improve quality? ❗️Running evaluation should also be as easy as possible for the engineers and should not take too much hands-on time.
|
||||
|
||||
### 🔷 So… How do I PRACTICALLY build it?
|
||||
|
||||
This blogpost is not meant as a tutorial, but here is a brief outline of how I’d approach building a search experience right now:
|
||||
|
||||
1. As was said above, if you can afford it — just buy the existing SaaS (some good ones are listed below). An existing service fits if:
|
||||
|
||||
* Your experience is a “connected” one (your service or app has internet connection).
|
||||
|
||||
* Does it support all the functionality you need out of box? This post gives a pretty good idea of what functions would you want. To name a few, I’d at least consider: support for the media you are searching; real-time indexing support; query flexibility, including context-dependent queries.
|
||||
|
||||
* Given the size of the corpus and the expected [QpS][31], can you afford to pay for it for the next 12 months?
|
||||
|
||||
* Can the service support your expected traffic within the required latency limits? In case when you are querying the service from an app, make sure that the given service is accessible quickly enough from where your users are.
|
||||
|
||||
2\. If a hosted solution does not fit your needs or resources, you probably want to use one of the open source libraries or tools. In case of connected apps or websites, I’d choose ElasticSearch right now. For embedded experiences, there are multiple tools below.
|
||||
|
||||
3\. You most likely want to do index selection and clean up your documents (say extract relevant text from HTML pages) before uploading them to the search index. This will decrease the index size and make getting to good results easier. If your corpus fits on a single machine, just write a script (or several) to do that. If not, I’d use [Spark][104].
|
||||
|
||||
** 此处有Canvas,请手动处理 **
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1600/1*lGw4kVVQyj8E5by2GWVoQg.jpeg)
|
||||
You can never have too many tools.
|
||||
|
||||
### ☁️ SaaS
|
||||
|
||||
☁️ 🔷[Algolia][105] — a proprietary SaaS that indexes a client’s website and provides an API to search the website’s pages. They also have an API to submit your own documents, support context dependent searches and serve results really fast. If I were building a web search experience right now and could afford it, I’d probably use Algolia first — and buy myself time to build a comparable search experience.
|
||||
|
||||
* Various ElasticSearch providers: AWS (☁️ [ElasticSearch Cloud)][32], ☁️[elastic.co][33] and from ☁️ [Qbox][34].
|
||||
|
||||
* ☁️[ Azure Search][35] — a SaaS solution from Microsoft. Accessible through a REST API, it can scale to billions of documents. Has a Lucene query interface to simplify migrations from Lucene-based solutions.
|
||||
|
||||
* ☁️[ Swiftype][36] — an enterprise SaaS that indexes your company’s internal services, like Salesforce, G Suite, Dropbox and the intranet site.
|
||||
|
||||
### Tools and libraries
|
||||
|
||||
🍺☕🔷[ Lucene][106] is the most popular IR library. Implements query analysis, index retrieval and ranking. Either of the components can be replaced by an alternative implementation. There is also a C port — 🍺[Lucy][107].
|
||||
|
||||
* 🍺☕🔷[ Solr][37] is a complete search server, based on Lucene. It’s a part of the [Hadoop][38] ecosystem of tools.
|
||||
|
||||
* 🍺☕🔷[ Hadoop][39] is the most widely used open source MapReduce system, originally designed as a indexing pipeline framework for Solr. It has been gradually loosing ground to 🍺[Spark][40] as the batch data processing framework used for indexing. ☁️[EMR][41] is a proprietary implementation of MapReduce on AWS.
|
||||
|
||||
* 🍺☕🔷 [ElasticSearch][42] is also based on Lucene ([feature comparison with Solr][43]). It has been getting more attention lately, so much that a lot of people think of ES when they hear “search”, and for good reasons: it’s well supported, has [extensive API][44], [integrates with Hadoop][45] and [scales well][46]. There are open source and [Enterprise][47] versions. ES is also available as a SaaS on Can scale to billions of documents, but scaling to that point can be very challenging, so typical scenario would involve orders of magnitude smaller corpus.
|
||||
|
||||
* 🍺🇨 [Xapian][48] — a C++-based IR library. Relatively compact, so good for embedding into desktop or mobile applications.
|
||||
|
||||
* 🍺🇨 [Sphinx][49] — an full-text search server. Has a SQL-like query language. Can also act as a [storage engine for MySQL][50] or used as a library.
|
||||
|
||||
* 🍺☕ [Nutch][51] — a web crawler. Can be used in conjunction with Solr. It’s also the tool behind [🍺Common Crawl][52].
|
||||
|
||||
* 🍺🦏 [Lunr][53] — a compact embedded search library for web apps on the client-side.
|
||||
|
||||
* 🍺🦏 [searchkit][54] — a library of web UI components to use with ElasticSearch.
|
||||
|
||||
* 🍺🦏 [Norch][55] — a [LevelDB][56]-based search engine library for Node.js.
|
||||
|
||||
* 🍺🐍 [Whoosh][57] — a fast, full-featured search library implemented in pure Python.
|
||||
|
||||
* OpenStreetMaps has it’s own 🍺[deck of search software][58].
|
||||
|
||||
### Datasets
|
||||
|
||||
A few fun or useful data sets to try building a search engine or evaluating search engine quality:
|
||||
|
||||
* 🍺🔷 [Commoncrawl][59] — a regularly-updated open web crawl data. There is a [mirror on AWS][60], accessible for free within the service.
|
||||
|
||||
* 🍺🔷 [Openstreetmap data dump][61] is a very rich source of data for someone building a geospacial search engine.
|
||||
|
||||
* 🍺 [Google Books N-grams][62] can be very useful for building language models.
|
||||
|
||||
* 🍺 [Wikipedia dumps][63] are a classic source to build, among other things, an entity graph out of. There is a [wide range of helper tools][64] available.
|
||||
|
||||
* [IMDb dumps][65] are a fun dataset to build a small toy search engine for.
|
||||
|
||||
### References
|
||||
|
||||
* [Modern Information Retrieval][66] by R. Baeza-Yates and B. Ribeiro-Neto is a good, deep academic treatment of the subject. This is a good overview for someone completely new to the topic.
|
||||
|
||||
* [Information Retrieval][67] by S. Büttcher, C. Clarke and G. Cormack is another academic textbook with a wide coverage and is more up-to-date. Covers learn-to-rank and does a pretty good job at discussing theory of search systems evaluation. Also is a good overview.
|
||||
|
||||
* [Learning to Rank][68] by T-Y Liu is a best theoretical treatment of LtR. Pretty thin on practical aspects though. Someone considering building an LtR system should probably check this out.
|
||||
|
||||
* [Managing Gigabytes][69] — published in 1999, is still a definitive reference for anyone embarking on building an efficient index of a significant size.
|
||||
|
||||
* [Text Retrieval and Search Engines][70] — a MOOC from Coursera. A decent overview of basics.
|
||||
|
||||
* [Indexing the World Wide Web: The Journey So Far][71] ([PDF][72]), an overview of web search from 2012, by Ankit Jain and Abhishek Das of Google.
|
||||
|
||||
* [Why Writing Your Own Search Engine is Hard][73] a classic article from 2004 from Anna Patterson.
|
||||
|
||||
* [https://github.com/harpribot/awesome-information-retrieval][74] — a curated list of search-related resources.
|
||||
|
||||
* A [great blog][75] on everything search by [Daniel Tunkelang][76].
|
||||
|
||||
* Some good slides on [search engine evaluation][77].
|
||||
|
||||
This concludes my humble attempt to make a somewhat-useful “map” for an aspiring search engine engineer. Did I miss something important? I’m pretty sure I did — you know, [the margin is too narrow][108] to contain this enormous topic. Let me know if you think that something should be here and is not — you can reach [me][109] at[ forwidur@gmail.com][110] or at [@forwidur][111].
|
||||
|
||||
> P.S. — This post is part of a open, collaborative effort to build an online reference, the Open Guide to Practical AI, which we’ll release in draft form soon. See [this popular guide][112] for an example of what’s coming. If you’d like to get updates on or help with with this effort, sign up [here][113].
|
||||
|
||||
> Special thanks to [Joshua Levy][114], [Leo Polovets][115] and [Abhishek Das][116] for reading drafts of this and their invaluable feedback!
|
||||
|
||||
> Header image courtesy of [Mickaël Forrett][117]. The beautiful toolbox is called [The Studley Tool Chest][118].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Max Grigorev
|
||||
distributed systems, data, AI
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: https://medium.com/startup-grind/what-every-software-engineer-should-know-about-search-27d1df99f80d
|
||||
|
||||
作者:[Max Grigorev][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://medium.com/@forwidur?source=post_header_lockup
|
||||
[1]:https://en.wikipedia.org/wiki/Inverted_index
|
||||
[2]:http://www.demarcken.org/carl/papers/ITA-software-travel-complexity/ITA-software-travel-complexity.pdf
|
||||
[3]:https://www.algolia.com/
|
||||
[4]:https://en.wikipedia.org/wiki/Stemming
|
||||
@ -1,69 +0,0 @@
Why I love technical debt
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory1.png?itok=nbSRovsj)

This is not necessarily the title you'd expect for an article, I guess,* but I'm a fan of [technical debt][1]. There are two reasons for this: a Bad Reason and a Good Reason. I'll be upfront about the Bad Reason first, then explain why even that isn't really a reason to love it. I'll then tackle the Good Reason, and you'll nod along in agreement.

### The Bad Reason I love technical debt

We'll get this out of the way, then, shall we? The Bad Reason is that, well, there's just lots of it, it's interesting, it keeps me in a job, and it always provides a reason, as a security architect, for me to get involved in** projects that might give me something new to look at. I suppose those aren't all bad things. It can also be a bit depressing, because there's always so much of it, it's not always interesting, and sometimes I need to get involved even when I might have better things to do.

And what's worse is that it almost always seems to be security-related, and it's always there. That's the bad part.

Security, we all know, is the piece that so often gets left out, or tacked on at the end, or done in half the time it deserves, or done by people who have half an idea, but don't quite fully grasp it. I should be clear at this point: I'm not saying that this last reason is those people's fault. That people know they need security is fantastic. If we (the security folks) or we (the organization) haven't done a good enough job in making sufficient security resources--whether people, training, or visibility--available to those people who need it, the fact that they're trying is great and something we can work on. Let's call that a positive. Or at least a reason for hope.***

### The Good Reason I love technical debt

Let's get on to the other reason: the legitimate reason. I love technical debt when it's named.

What does that mean?

We all get that technical debt is a bad thing. It's what happens when you make decisions for pragmatic reasons that are likely to come back and bite you later in a project's lifecycle. Here are a few classic examples that relate to security:

* Not getting around to applying authentication or authorization controls on APIs that might, at some point, be public.
* Lumping capabilities together so it's difficult to separate out appropriate roles later on.
* Hard-coding roles in ways that don't allow for customisation by people who may use your application in different ways from those you initially considered.
* Hard-coding cipher suites for cryptographic protocols, rather than putting them in a config file where they can be changed or selected later.

There are lots more, of course, but those are just a few that jump out at me and that I've seen over the years. Technical debt means making decisions that will mean more work later on to fix them. And that can't be good, can it?

There are two words in the preceding paragraphs that should make us happy: they are "decisions" and "pragmatic." Because, in order for something to be named technical debt, I'd argue, it has to have been subject to conscious decision-making, and trade-offs must have been made--hopefully for rational reasons. Those reasons may be many and various--lack of qualified resources; project deadlines; lack of sufficient requirement definition--but if they've been made consciously, then the technical debt can be named, and if technical debt can be named, it can be documented.

And if it's documented, we're halfway there. As a security guy, I know that I can't force everything that goes out of the door to meet all the requirements I'd like--but the same goes for the high availability gal, the UX team, the performance folks, etc.

What we need--what we all need--is for documentation to exist about why decisions were made, because when we return to the problem we'll know it was thought about. And, what's more, the recording of that information might even make it into product documentation. "This API is designed to be used in a protected environment and should not be exposed on the public Internet" is a great piece of documentation. It may not be what a customer is looking for, but at least they know how to deploy the product, and, crucially, it's an opportunity for them to come back to the product manager and say, "We'd really like to deploy that particular API in this way. Could you please add this as a feature request?" Product managers like that. Very much.****

The best thing, though, is not just that named technical debt is visible technical debt, but that if you encourage your developers to document the decisions in code,***** then there's a decent chance that they'll record some ideas about how this should be done in the future. If you're really lucky, they might even add some hooks in the code to make it easier (an "auth" parameter on the API, which is unused in the current version, but will make API compatibility so much simpler in new releases; or a cipher entry in the config file that currently only accepts one option, but is at least checked by the code).
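The hooks described in that last paragraph are easy to sketch in code. The following toy example (every name here is hypothetical, invented for illustration, not taken from any real product) shows a cipher entry read from config and validated even though only one value is accepted today, plus an `auth` parameter reserved on an API function before authentication is actually implemented:

```python
# Two "named technical debt" hooks, sketched. All names are hypothetical.
SUPPORTED_CIPHERS = {"AES256-GCM"}  # one option today; a longer list tomorrow


def load_cipher(config: dict) -> str:
    """Read the cipher suite from config rather than hard-coding it.

    Only one value is accepted for now, but the entry is already
    validated, so widening SUPPORTED_CIPHERS later is a one-line change.
    """
    cipher = config.get("cipher", "AES256-GCM")
    if cipher not in SUPPORTED_CIPHERS:
        raise ValueError(f"unsupported cipher: {cipher}")
    return cipher


def fetch_records(query: str, auth=None) -> list:
    """The unused 'auth' parameter reserves space in the API signature,
    so adding authentication later won't break existing callers."""
    if auth is not None:
        raise NotImplementedError("auth is planned but not implemented yet")
    return [f"record matching {query}"]
```

The point is not the implementation but that the debt is visible: the signatures and the config check document the decision in the code itself.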
I've been a bit disingenuous, I know, by defining technical debt as named technical debt. But honestly, if it's not named, then you can't know what it is, and until you know what it is, you can't fix it.******* My advice is this: when you're doing a release close-down (or in your weekly standup--EVERY weekly standup), have an agenda item to record technical debt. Name it, document it, be proud, sleep at night.

* Well, apart from the obvious clickbait reason--for which I'm (a little) sorry.

** I nearly wrote "poke my nose into."

*** Work with me here.

**** If you're a software engineer/coder/hacker, here's a piece of advice: Learn to talk to product managers like real people, and treat them nicely. They (the better ones, at least) are invaluable allies when you need to prioritize features or have tricky trade-offs to make.

***** Do this. Just do it. Documentation that isn't at least mirrored in code isn't real documentation.******

****** Don't believe me? Talk to developers. "Who reads product documentation?" "Oh, the spec? I skimmed it. A few releases back. I think." "I looked in the header file; couldn't see it there."

******* Or decide not to fix it, which may also be an entirely appropriate decision.

This article originally appeared on [Alice, Eve, and Bob - a security blog][2] and is republished with permission.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/10/why-i-love-technical-debt

作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Technical_debt
[2]:https://aliceevebob.wordpress.com/2017/08/29/why-i-love-technical-debt/
@ -1,86 +0,0 @@
How to Monetize an Open Source Project
======

![](http://www.itprotoday.com/sites/itprotoday.com/files/styles/article_featured_standard/public/ThinkstockPhotos-629994230_0.jpg?itok=5dZ68OTn)

The problem for any small group of developers putting the finishing touches on a commercial open source application is figuring out how to monetize the software in order to keep the bills paid and food on the table. Often these small pre-startups will start by deciding which of the recognized open source business models they're going to adopt, whether that be following Red Hat's lead and offering professional services, going the SaaS route, releasing as open core, or something else.

Steven Grandchamp, general manager for MariaDB's North America operations and CEO for Denver-based startup [Drud Tech][1], thinks that might be putting the cart before the horse. With an open source project, the best first move is to get people downloading and using your product for free.

**Related:** [Demand for Open Source Skills Continues to Grow][2]

"The number one tangent to monetization in any open source product is adoption, because the key to monetizing an open source product is you flip what I would call the sales funnel upside down," he told ITPro at the recent All Things Open conference in Raleigh, North Carolina.

In many ways, he said, selling open source solutions is the opposite of marketing traditional proprietary products, where adoption doesn't happen until after a contract is signed.

**Related:** [Is Raleigh the East Coast's Silicon Valley?][3]

"In a proprietary software company, you advertise, you market, you make claims about what the product can do, and then you have sales people talk to customers. Maybe you have a free trial or whatever. Maybe you have a small version. Maybe it's time bombed or something like that, but you don't really get to realize the benefit of the product until there's a contract and money changes hands."

Selling open source solutions is different because of the challenge of selling software that's freely available as a GitHub download.

"The whole idea is to put the product out there, let people use it, experiment with it, and jump on the chat channels," he said, pointing out that his company Drud has a public chat channel that's open to anybody using their product. "A subset of that group is going to raise their hand and go, 'Hey, we need more help. We'd like a tighter relationship with the company. We'd like to know where your road map's going. We'd like to know about customization. We'd like to know if maybe this thing might be on your road map.'"

Grandchamp knows more than a little about making software pay, from both the proprietary and open source sides of the fence. In the 1980s he served as VP of research and development at Formation Technologies, and became SVP of R&D at John H. Harland after it acquired Formation in the mid-90s. He joined MariaDB in 2016, after serving eight years as CEO at OpenLogic, which was providing commercial support for more than 600 open source projects at the time it was acquired by Rogue Wave Software. Along the way, there was a two-year stint at Microsoft's Redmond campus.

OpenLogic was where he discovered open source, and his experiences there are key to his approach for monetizing open source projects.

"When I got to OpenLogic, I was told that we had 300 customers that were each paying $99 a year for access to our tool," he explained. "But the problem was that nobody was renewing the tool. So I called every single customer that I could find and said 'did you like the tool?'"

It turned out that nearly everyone he talked to was extremely happy with the company's software, which ironically was the reason they weren't renewing. The company's tool solved their problem so well there was no need to renew.

"What could we have offered that would have made you renew the tool?" he asked. "They said, 'If you had supported all of the open source products that your tool assembled for me, then I would have that ongoing relationship with you.'"

Grandchamp immediately grasped the situation, and when the CTO said such support would be impossible, Grandchamp didn't mince words: "Then we don't have a company."

"We figured out a way to support it," he said. "We created something called the OpenLogic Expert Community. We developed relationships with committers and contributors to a couple of hundred open source packages, and we acted as sort of the hub of the SLA for our customers. We had some people on staff, too, who knew the big projects."

After that successful launch, Grandchamp and his team began hearing from customers that they were confused over exactly what open source code they were using in their projects. That led to the development of what he says was the first software-as-a-service open source compliance portal, which could scan an application's code and produce a list of all of the open source code included in the project. When customers then expressed confusion over compliance issues, the SaaS service was expanded to flag potential licensing conflicts.
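As a toy illustration of the kind of inventory such a scan produces (this is my own sketch, not OpenLogic's product), Python's standard library can already enumerate the packages installed in an environment together with their declared licenses:

```python
# Report each installed Python distribution and its declared license.
# A real compliance portal scans source trees; this only reads package
# metadata, so treat it as an illustration of the idea.
from importlib import metadata


def license_report() -> dict:
    report = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or "UNKNOWN"
        report[name] = dist.metadata.get("License") or "UNKNOWN"
    return report


if __name__ == "__main__":
    for pkg, lic in sorted(license_report().items()):
        print(f"{pkg}: {lic}")
```

Even this crude listing shows why customers wanted the feature: the output immediately surfaces dependencies nobody remembered pulling in.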
Although the product lines were completely different, the same approach was used to monetize MariaDB, then called SkySQL, after MySQL co-founders Michael "Monty" Widenius, David Axmark, and Allan Larsson created the project by forking MySQL, which Oracle had acquired as part of Sun Microsystems in 2010.

Again, users were approached and asked what things they would be willing to purchase.

"They wanted different functionality in the database, and you didn't really understand this if you didn't talk to your customers," Grandchamp explained. "Monty and his team, while they were being acquired at Sun and Oracle, were working on all kinds of new functionality, around cloud deployments, around different ways to do clustering, they were working on lots of different things. That work, Oracle and MySQL didn't really pick up."

Rolling in the new features customers wanted needed to be handled gingerly, because it was important to the folks at MariaDB not to break compatibility with MySQL. This necessitated a strategy around when the code bases would come together and when they would separate. "That road map, knowledge, influence and technical information was worth paying for."

As with OpenLogic, MariaDB customers expressed a willingness to spend money on a variety of fronts. For example, a big driver in the early days was a project called Remote DBA, which helped customers make up for a shortage of qualified database administrators. The project could help with design issues, as well as monitor existing systems to take the workload off of a customer's DBA team. The service also offered access to MariaDB's own DBAs, many of whom had a history with the database going back to the early days of MySQL.

"That was a subscription offering that people were definitely willing to pay for," he said.

The company also learned, again by asking and listening to customers, that there were various types of support subscriptions that customers were willing to purchase, including subscriptions around capability and functionality, and a managed service component of Remote DBA.

These days Grandchamp is putting much of his focus on his latest project, Drud, a startup that offers a suite of integrated, automated, open source development tools for developing and managing multiple websites, which can be running on any combination of content management systems and deployment platforms. It is monetized partially through modules that add features like a centralized dashboard and an "intelligence engine."

As you might imagine, he got it off the ground by talking to customers and giving them what they indicated they'd be willing to purchase.

"Our number one customer target is the agency market," he said. "The enterprise market is a big target, but I believe it's our second target, not our first. And the reason it's number two is they don't make decisions very fast. There are technology refresh cycles that have to come up, there are lots of politics involved and lots of different vendors. It's lucrative once you're in, but in a startup you've got to figure out how to pay your bills. I want to pay my bills today. I don't want to pay them in three years."

Drud's focus on the agency market illustrates another consideration: the importance of understanding something about your customers' business. When talking with agencies, many said they were tired of proprietary vendors that didn't understand their business offering generic software that really didn't match their needs. In Drud's case, that understanding is built into the company DNA. The software was developed by an agency to fill its own needs.

"We are a platform designed by an agency for an agency," Grandchamp said. "Right there is a relationship that they're willing to pay for. We know their business."

Grandchamp noted that startups also need to be able to distinguish users from customers. Most of the people downloading and using commercial open source software aren't the people who have authorization to make purchasing decisions. These users, however, can point to the people who control the purse strings.

"It's our job to build a way to communicate with those users, provide them value so that they'll give us value," he explained. "It has to be an equal exchange. I give you value of a tool that works, some advice, really good documentation, access to experts who can sort of guide you along. Along the way I'm asking you for pieces of information. Who do you work for? How are the technology decisions happening in your company? Are there other people in your company that we should refer the product to? We have to create the dialog."

In the end, Grandchamp said, in the open source world the people who go out to find business probably shouldn't see themselves as salespeople, but rather as problem solvers.

"I believe that you're not really going to need salespeople in this model. I think you're going to need customer success people. I think you're going to need people who can enable your customers to be successful in a business relationship that's more highly transactional."

"People don't like to be sold," he added, "especially in open source. The last person they want to see is the sales person, but they like to ply and try and consume and give you input and give you feedback. They love that."

--------------------------------------------------------------------------------

via: http://www.itprotoday.com/software-development/how-monetize-open-source-project

作者:[Christine Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.itprotoday.com/author/christine-hall
[1]:https://www.drud.com/
[2]:http://www.itprotoday.com/open-source/demand-open-source-skills-continues-grow
[3]:http://www.itprotoday.com/software-development/raleigh-east-coasts-silicon-valley
@ -1,87 +0,0 @@
Why pair writing helps improve documentation
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)

Professional writers, at least in the Red Hat documentation team, nearly always work on docs alone. But have you tried writing as part of a pair? In this article, I'll explain a few benefits of pair writing.

### What is pair writing?

Pair writing is when two writers work in real time, on the same piece of text, in the same room. This approach improves document quality, speeds up writing, and allows writers to learn from each other. The idea of pair writing is borrowed from [pair programming][1].

When pair writing, you and your colleague work on the text together, making suggestions and asking questions as needed. Meanwhile, you're observing each other's work. For example, while one is writing, the other writer observes details such as structure or context. Often discussion around the document turns into sharing experiences and opinions, and brainstorming about writing in general.

At all times, the writing is done by only one person. Thus, you need only one computer, unless you want one writer to do online research while the other person does the writing. The text workflow is the same as if you are working alone: a text editor, the documentation source files, git, and so on.
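One small git habit fits this workflow well: since only one person types, the commit can credit both writers with a `Co-authored-by` trailer, which hosting sites such as GitHub recognize. A sketch of the idea (the throwaway repository, file name, and email addresses are all invented; Aneta's name is borrowed from later in this article):

```python
# Make a pair-written commit in a throwaway repo; the Co-authored-by
# trailer credits the partner who didn't type. Requires git on PATH.
import pathlib
import subprocess
import tempfile


def run(*args, cwd):
    """Run a git command in the given directory and return its stdout."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout


repo = pathlib.Path(tempfile.mkdtemp())
run("git", "init", "-q", cwd=repo)
run("git", "config", "user.name", "Writer One", cwd=repo)
run("git", "config", "user.email", "writer@example.com", cwd=repo)

(repo / "guide.adoc").write_text("= Installation guide\n")
run("git", "add", "guide.adoc", cwd=repo)
run("git", "commit", "-q", "-m",
    "Draft the installation guide intro\n\n"
    "Co-authored-by: Aneta Steflova <aneta@example.com>", cwd=repo)

print(run("git", "log", "-1", "--format=%B", cwd=repo))
```

The trailer is plain text in the commit message, so it works with any editor and any review tool; nothing about the pair-writing session itself changes.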
### Pair writing in practice

My colleague Aneta Steflova and I have done more than 50 hours of pair writing working on the Red Hat Enterprise Linux System Administration docs and on the Red Hat Identity Management docs. I've found that, compared to writing alone, pair writing:

* is as productive or more productive;
* improves document quality;
* helps writers share technical expertise; and
* is more fun.

### Speed

Two writers writing one text? Sounds half as productive, right? Wrong. (Usually.)

Pair writing can help you work faster because two people have solutions to a bigger set of problems, which means getting blocked less often during the process. For example, one time we wrote urgent API docs for identity management. I know at least the basics of web APIs, the REST protocol, and so on, which helped us speed through those parts of the documentation. Working alone, Aneta would have needed to interrupt the writing process frequently to study these topics.

### Quality

Poor wording or sentence structure, inconsistencies in material, and so on have a harder time surviving under the scrutiny of four eyes. For example, one of our pair writing documents was reviewed by an extremely critical developer, who was known for catching technical inaccuracies and bad structure. After this particular review, he said, "Perfect. Thanks a lot."

### Sharing expertise

Each of us lives in our own writing bubble, and we normally don't know how others approach writing. Pair writing can help you improve your own writing process. For example, Aneta showed me how to better handle assignments in which the developer has provided starting text (as opposed to the writer writing from scratch using their own knowledge of the subject), which I didn't have experience with. Also, she structures the docs thoroughly, which I began doing as well.

As another example, I'm good enough at Vim that XML editing (e.g., tag manipulation) is enjoyable instead of torturous. Aneta saw how I was using Vim, asked about it, suffered through the learning curve, and now takes advantage of the Vim features that help me.

Pair writing is especially good for helping and mentoring new writers, and it's a great way to get to know professionally (and have fun with) colleagues.

### When pair writing shines

In addition to the benefits I've already listed, pair writing is especially good for:

* **Working with [Bugzilla][2]**: Bugzillas can be cumbersome and cause problems, especially for administration-clumsy people (like me).
* **Reviewing existing documents**: When documentation needs to be expanded or fixed, it is necessary to first examine the existing document.
* **Learning new technology**: A fellow writer can be a better teacher than an engineer.
* **Writing emails/requests for information to developers with well-chosen questions**: The difficulty of this task rises in proportion to the difficulty of the technology you are documenting.

Also, with pair writing, feedback is in real time, as needed, and two-way.

On the downside, pair writing moves at a faster pace, giving a writer less time to mull over a topic or wording. On the other hand, generally peer review is not necessary after pair writing.

### Words of caution

To get the most out of pair writing:

* Go into the project well prepared, otherwise you can waste your colleague's time.
* Talkative types need to stay focused on the task, otherwise they end up talking rather than writing.
* Be prepared for direct feedback. Pair writing is not for feedback-allergic writers.
* Beware of session hijackers. Dominant personalities can turn pair writing into writing solo with a spectator. (However, it _can_ be good if one person takes over at times, as long as the less-experienced partner learns from the hijacker, or the more-experienced writer is providing feedback to the hijacker.)

### Conclusion

Pair writing is a meeting, but one in which you actually get work done. It's an activity that lets writers focus on the one indispensable thing in our vocation--writing.

_This post was written with the help of pair writing with Aneta Steflova._

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/11/try-pair-writing

作者:[Maxim Svistunov][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/maxim-svistunov
[1]:https://developer.atlassian.com/blog/2015/05/try-pair-programming/
[2]:https://www.bugzilla.org/
@ -1,120 +0,0 @@
|
||||
Why and How to Set an Open Source Strategy
|
||||
============================================================
|
||||
|
||||
![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/open-source-strategy-1024x576.jpg)
|
||||
|
||||
This article explains how to walk through, measure, and define strategies collaboratively in an open source community.
|
||||
|
||||
_“If you don’t know where you are going, you’ll end up someplace else.” _ _—_ Yogi Berra
|
||||
|
||||
Open source projects are generally started as a way to scratch one’s itch — and frankly that’s one of its greatest attributes. Getting code down provides a tangible method to express an idea, showcase a need, and solve a problem. It avoids over thinking and getting a project stuck in analysis-paralysis, letting the project pragmatically solve the problem at hand.
Next, a project starts to scale up and gets many varied users and contributions, with plenty of opinions along the way. That leads to the next big challenge — how does a project start to build a strategic vision? In this article, I’ll describe how to walk through, measure, and define strategies collaboratively, in a community.

Strategy may seem like a buzzword of the corporate world rather than something that an open source community would embrace, so I suggest stripping away the negative actions that are sometimes associated with this word (e.g., staff reductions, discontinuations, office closures). Strategy done right isn’t a tool to justify unfortunate actions but to help show focus and where each community member can contribute.

A good application of strategy answers the following:

* Why the project exists

* What the project looks to achieve

* What the ideal end state for the project is

The key to success is answering these questions as simply as possible, with consensus from your community. Let’s look at some ways to do this.
### Setting a mission and vision

_“Efforts and courage are not enough without purpose and direction.”_ — John F. Kennedy

All strategic planning starts off with setting a course for where the project wants to go. The two tools used here are _Mission_ and _Vision_. They are complementary terms, describing both the reason a project exists (mission) and the ideal end state for a project (vision).

A great way to start this exercise with the intent of driving consensus is by asking each key community member the following questions:
* What drove you to join and/or contribute to the project?

* How do you define success for your participation?

In a company, you’d usually ask your customers these questions. But in open source projects, the customers are the project participants — and their time investment is what makes the project a success.

Driving consensus means capturing the answers to these questions and looking for themes across them. At R Consortium, for example, I created a shared doc for the board to review each member’s answers to the above questions, and followed up with a meeting to review for specific themes that came from those insights.

Building a mission flows really well from this exercise. The key thing is to keep the wording of your mission short and concise. Open Mainframe Project has done this really well. Here’s their mission:
_Build community and adoption of Open Source on the mainframe by:_

* _Eliminating barriers to Open Source adoption on the mainframe_

* _Demonstrating value of the mainframe on technical and business levels_

* _Strengthening collaboration points and resources for the community to thrive_

At 40 words, it passes the key eye tests of a good mission statement; it’s clear, concise, and demonstrates the useful value the project aims for.

The next stage is to reflect on the mission statement and ask yourself this question: What is the ideal outcome if the project accomplishes its mission? That can be a tough one to tackle. Open Mainframe Project put together its vision really well:

_Linux on the Mainframe as the standard for enterprise class systems and applications._

You could read that as a [BHAG][1], but it’s really more of a vision, because it describes a future state that is what would be created by the mission being fully accomplished. It also hits the key pieces to an effective vision — it’s only 13 words, inspirational, clear, memorable, and concise.

Mission and vision add clarity on the who, what, why, and how for your project. But, how do you set a course for getting there?
### Goals, Objectives, Actions, and Results

_“I don’t focus on what I’m up against. I focus on my goals and I try to ignore the rest.”_ — Venus Williams

Looking at a mission and vision can get overwhelming, so breaking them down into smaller chunks can help the project determine how to get started. This also helps prioritize actions, either by importance or by opportunity. Most importantly, this step gives you guidance on what things to focus on for a period of time, and which to put off.

There are lots of methods of time-bound planning, but the method I think works best for projects is what I’ve dubbed the GOAR method. It’s an acronym that stands for:
* Goals define what the project is striving for and likely would align with and support the mission. Examples might be “Grow a diverse contributor base” or “Become the leading project for X.” Goals are aspirational and set direction.

* Objectives show how you measure a goal’s completion, and should be clear and measurable. You might also have multiple objectives to measure the completion of a goal. For example, the goal “Grow a diverse contributor base” might have objectives such as “Have X total contributors monthly” and “Have contributors representing Y different organizations.”

* Actions are what the project plans to do to complete an objective. This is where you get tactical on exactly what needs to be done. For example, the objective “Have contributors representing Y different organizations” would likely have actions of reaching out to interested organizations using the project, having existing contributors mentor new contributors, and providing incentives for first-time contributors.

* Results come along the way, showing progress both positive and negative from the actions.
You can put these into a table like this:

| Goals | Objectives | Actions | Results |
|:--|:--|:--|:--|
| Grow a diverse contributor base | Have X total contributors monthly | Existing contributors mentor new contributors; provide incentives for first-time contributors | |
| | Have contributors representing Y different organizations | Reach out to interested organizations using the project | |
In large organizations, monthly or quarterly goals and objectives often make sense; however, on open source projects, these time frames are unrealistic. Six- or even 12-month tracking allows the project leadership to focus on driving efforts at a high level by nurturing the community along.

The end result is a rubric that provides clear vision on where the project is going. It also lets community members more easily find ways to contribute. For example, your project may include someone who knows a few organizations using the project — this person could help introduce those developers to the codebase and guide them through their first commit.

### What happens if the project doesn’t hit the goals?

_“I have not failed. I’ve just found 10,000 ways that won’t work.”_ — Thomas A. Edison

Figuring out what is within the capability of an organization — whether Fortune 500 or a small open source project — is hard. And, sometimes the expectations or market conditions change along the way. Does that make the strategy planning process a failure? Absolutely not!

Instead, you can use this experience as a way to better understand your project’s velocity, its impact, and its community, and perhaps as a way to prioritize what is important and what’s not.
--------------------------------------------------------------------------------

via: https://www.linuxfoundation.org/blog/set-open-source-strategy/

作者:[John Mertic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linuxfoundation.org/author/jmertic/
[1]:https://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal
@ -1,94 +0,0 @@

Why is collaboration so difficult?
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_block_collaboration.png?itok=pKbXpr1e)

Many contemporary definitions of "collaboration" define it simply as "working together"--and, in part, it is working together. But too often, we tend to use the term "collaboration" interchangeably with cognate terms like "cooperation" and "coordination." These terms also refer to some manner of "working together," yet there are subtle but important differences between them all.

How does collaboration differ from coordination or cooperation? What is so important about collaboration specifically? Does it have or do something that coordination and cooperation don't? The short answer is a resounding "yes!"

[This unit explores collaboration][1], a problematic term because it has become a simple buzzword for "working together." By the time you've studied the cases and practiced the exercises contained in this section, you will understand that it's so much more than that.

### Not like the others
"Coordination" can be defined as the ordering of a variety of people acting in an effective, unified manner toward an end goal or state.

In traditional organizations and businesses, people contributed according to their role definitions, such as in manufacturing, where each employee was responsible for adding specific components to the widget on an assembly line until the widget was complete. In contexts like these, employees weren't expected to contribute beyond their pre-defined roles (they were probably discouraged from doing so), and they didn't necessarily have a voice in the work or in what was being created. Often, a manager oversaw the unification of effort (hence the role "project coordinator"). Coordination is meant to connote a sense of harmony and unity, as if elements are meant to go together, resulting in efficiency among the ordering of the elements.

One common assumption is that coordinated efforts are aimed at the same, single goal. So some end result is "successful" when people and parts work together seamlessly; when one of the parts breaks down and fails, then the whole goal fails. Many traditional businesses (for instance, those with command-and-control hierarchies) manage work through coordination.

Cooperation is another term whose surface meaning is "working together." Rather than the sense of compliance that is part of "coordination," it carries a sense of agreement and helpfulness on the path toward completing a shared activity or goal.

People tend to use the term "cooperation" when joining two semi-related entities where one or more entity could decide not to cooperate. The people and pieces that are part of a cooperative effort make the shared activity easier to perform or the shared goal easier to reach. "Cooperation" implies a shared goal or activity we agree to pursue jointly. One example is how police and witnesses cooperate to solve crimes.

"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating.
Sometimes collaboration involves two or more groups that do not normally work together; they are disparate groups or not usually connected. For instance, a traitor collaborates with the enemy, or rival businesses collaborate with each other. The subtlety of collaboration is that the two groups may have oppositional initial goals but work together to create a shared goal. Collaboration can be more contentious than coordination or cooperation, but like cooperation, any one of the entities could choose not to collaborate. Despite the contention and conflict, however, there is discourse--whether in the form of multi-way discussion or one-way feedback--because without discourse, there is no way for people to express a point of dissent that is ripe for negotiation.

The success of any collaboration rests on how well the collaborators negotiate their needs to create the shared objective, and then how well they cooperate and coordinate their resources to execute a plan to reach their goals.

### For example

One way to think about these things is through a real-life example--like the writing of [this book][1].

The editor, [Bryan][2], coordinates the authors' work through the call for proposals, setting dates and deadlines, collecting the writing, and meeting editing dates and deadlines for feedback about our work. He coordinates the authors, the writing, the communications. In this example, I'm not coordinating anything except myself (still a challenge most days!).

I cooperate with Bryan's dates and deadlines, and with the ways he has decided to coordinate the work. I propose the introduction on GitHub; I wait for approval. I comply with instructions, write some stuff, and send it to him by the deadlines. He cooperates by accepting a variety of document formats. I get his edits, incorporate them, send it back to him, and so forth. If I don't cooperate (or something comes up and I can't cooperate), then maybe someone else writes this introduction instead.

Bryan and I collaborate when either one of us challenges something, including pieces of the work or process that aren't clear, things that we thought we agreed to, or things on which we have differing opinions. These intersections are ripe for negotiation and therefore indicative of collaboration. They are the opening for us to negotiate some creative work.

Once the collaboration is negotiated and settled, writing and editing the book returns to cooperation/coordination; that is why collaboration relies on the other two terms of joint work.

One of the most interesting parts of this example (and of work and shared activity in general) is the moment-by-moment pivot from any of these terms to the other. The writing of this book is not completely collaborative, coordinated, or cooperative. It's a messy mix of all three.
### Why is collaboration important?

Collaboration is an important facet of contemporary organizations--specifically those oriented toward knowledge work--because it allows for productive disagreement between actors. That kind of disagreement then helps increase the level of engagement and provide meaning to the group's work.

In his book _The Age of Discontinuity: Guidelines to Our Changing Society_, [Peter Drucker discusses][3] the "knowledge worker" and the pivot from work based on experience (e.g., apprenticeships) to work based on knowledge and the application of knowledge. This change in work and workers, he writes:

> ...will make the management of knowledge workers increasingly crucial to the performance and achievement of the knowledge society. We will have to learn to manage the knowledge worker both for productivity and for satisfaction, both for achievement and for status. We will have to learn to give the knowledge worker a job big enough to challenge him, and to permit performance as a "professional."

In other words, knowledge workers aren't satisfied with being subordinate--told what to do by managers, as if there is one right way to do a task. And, unlike past workers, they expect more from their work lives, including some level of emotional fulfillment or meaning-making from their work. The knowledge worker, according to Drucker, is educated toward continual learning, "paid for applying his knowledge, exercising his judgment, and taking responsible leadership." So it then follows that knowledge workers expect from work the chance to apply and share their knowledge, develop themselves professionally, and continuously augment their knowledge.

Interesting to note is the fact that Peter Drucker wrote about those concepts in 1969, nearly 50 years ago--virtually predicting the societal and organizational changes that would reveal themselves, in part, through the development of knowledge sharing tools such as forums, bulletin boards, online communities, and cloud knowledge sharing like Dropbox and Google Drive, as well as the creation of social media tools such as MySpace, Facebook, Twitter, YouTube, and countless others. All of these have some basis in the idea that knowledge is something to liberate and share.
In this light, one might view the open organization as one successful manifestation of a system of management for knowledge workers. In other words, open organizations are a way to manage knowledge workers by meeting the needs of the organization and knowledge workers (whether employees, customers, or the public) simultaneously. The foundational values this book explores are the scaffolding for the management of knowledge, and they apply to ways we can:

* make sure there's a lot of varied knowledge around (inclusivity)
* help people come together and participate (community)
* circulate information, knowledge, and decision making (transparency)
* innovate and not become entrenched in old ways of thinking and being (adaptability)
* develop a shared goal and work together to use knowledge (collaboration)

Collaboration is an important process because of the participatory effect it has on knowledge work and how it aids negotiations between people and groups. As we've discovered, collaboration is more than working together with some degree of compliance; in fact, it describes a type of working together that overcomes compliance because people can disagree, question, and express their needs in a negotiation and in collaboration. And, collaboration is more than "working toward a shared goal"; collaboration is a process which defines the shared goals via negotiation and, when successful, leads to cooperation and coordination to focus activity on the negotiated outcome.

Collaboration works best when the other four open organization values are present. For instance, when people are transparent, there is no guessing about what is needed, why, by whom, or when. Also, because collaboration involves negotiation, it also needs diversity (a product of inclusivity); after all, if we aren't negotiating among differing views, needs, or goals, then what are we negotiating? During a negotiation, the parties are often asked to give something up so that all may gain, so we have to be adaptable and flexible to the different outcomes that negotiation can provide. Lastly, collaboration is often an ongoing process rather than one which is quickly done and over, so it's best to enter collaboration as if you are part of the same community, desiring everyone to benefit from the negotiation. In this way, acts of authentic and purposeful collaboration directly necessitate the emergence of the other four values--transparency, inclusivity, adaptability, and community--as they spontaneously assemble part of the organization's collective purpose.
### Collaboration in open organizations

Traditional organizations advance an agreed-upon set of goals that people are welcome to support or not. In these organizations, there is some amount of discourse and negotiation, but often a higher-ranking or more powerful member of the organization intervenes to make a decision, which the membership must accept (and sometimes ignores). In open organizations, however, the focus is for members to perform their activity and to work out their differences; only if necessary would someone get involved (and even then would try to do it in the most minimal way that supports the shared values of community, transparency, adaptability, collaboration, and inclusivity). This makes the collaborative processes in open organizations "messier" (or "chaotic," to use Jim Whitehurst's term) but more participatory and, hopefully, innovative.

This article is part of the [Open Organization Workbook project][1].
--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/17/11/what-is-collaboration

作者:[Heidi Hess Von Ludewig][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/heidi-hess-von-ludewig
[1]:https://opensource.com/open-organization/17/8/workbook-project-announcement
[2]:http://opensource.com/users/bbehrens
[3]:https://www.elsevier.com/books/the-age-of-discontinuity/drucker/978-0-434-90395-5
@ -1,95 +0,0 @@

Changing how we use Slack solved our transparency and silo problems
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_abstract_pieces.jpg?itok=tGR1d2MU)

Collaboration and information silos are a reality in most organizations today. People tend to regard them as huge barriers to innovation and organizational efficiency. They're also a favorite target for solutions from software tool vendors of all types.

Tools by themselves, however, are seldom (if ever) the answer to a problem like organizational silos. The reason for this is simple: Silos are made of people, and human dynamics are key drivers for the existence of silos in the first place.

So what is the answer?

Successful communities are the key to breaking down silos. Tools play an important role in the process, but if you don't build successful communities around those tools, then you'll face an uphill battle with limited chances for success. Tools enable communities; they do not build them. This takes a thoughtful approach--one that looks at culture first, process second, and tools last.
However, this is not the way the process works in most businesses. Too many companies begin their journey to fix silos by thinking about tools first and considering metrics that don't evaluate the right factors for success. Too often, people choose tools for purely cost-based, compliance-based, or effort-based reasons--instead of factoring in the needs and desires of the user base. But subjective measures like "customer/user delight" are a real factor for these internal tools, and can make or break the success of both the tool adoption and the goal of increased collaboration.

It's critical to understand that the best technical tool (or what the business may consider the most cost-effective) is not always the solution that drives community, transparency, and collaboration forward. There is a reason that "Shadow IT"--users choosing their own tool solution, building community and critical mass around it--exists and is so effective: People who choose their own tools are more likely to stay engaged and bring others with them, breaking down silos organically.

This is a story of how Autodesk ended up adopting Slack at enterprise scale to help solve our transparency and silo problems. Interestingly, Slack wasn't (and isn't) an IT-supported application at Autodesk. It's an enterprise solution that was adopted, built, and is still run by a group of passionate volunteers who are committed to a "default to open" paradigm.

Utilizing Slack makes transparency happen for us.

### Chat-tastrophe
First, some perspective: My job at Autodesk is running our [Open@ADSK][1] initiative. I was originally hired to drive our open source strategy, but we quickly expanded my role to include driving open source best practices for internal development (inner source), and transforming how we collaborate internally as an organization. This last piece is where we pick up our story of Slack adoption in the company.

But before we even begin to talk about our journey with Slack, let's address why lack of transparency and openness was a challenge for us. What is it that makes transparency such a desirable quality in organizations, and what was I facing when I started at Autodesk?

Every company says they want "better collaboration." In our case, we are a 35-year-old software company that has been immensely successful at selling desktop "shrink-wrapped" software to several industries, including architecture, engineering, construction, manufacturing, and entertainment. But no successful company rests on its laurels, and Autodesk leadership recognized that a move to Cloud-based solutions for our products was key to the future growth of the company, including opening up new markets through product combinations that required Cloud computing and deep product integrations.

The challenge in making this move was far more than just technical or architectural--it was rooted in the DNA of the company, in everything from how we were organized to how we integrated our products. The basic format of integration in our desktop products was file import/export. While this is undoubtedly important, it led to a culture of highly-specialized teams working in an environment that's more siloed than we'd like and not sharing information (or code). Prior to the move to a cloud-based approach, this wasn't as much of a problem--but, in an environment that requires organizations to behave more like open source projects do, transparency, openness, and collaboration go from "nice-to-have" to "business critical."

Like many companies our size, Autodesk has had many different collaboration solutions through the years, some of them commercial, and many of them home-grown. However, none of them effectively solved the many-to-many real-time collaboration challenge. Some reasons for this were technical, but many of them were cultural.

When someone first tasked me with trying to find a solution for this, I relied on a philosophy I'd formed through challenging experiences in my career: "Culture first, tools last." This is still a challenge for engineering folks like myself. We want to jump immediately to tools as the solution to any problem. However, it's critical to evaluate a company's ethos (culture), as well as existing processes, to determine what kinds of tools might be a good fit. Unfortunately, I've seen too many cases where leaders have dictated a tool choice from above, based on the factors discussed earlier. I needed a different approach that relied more on fitting a tool into the culture we wanted to become, not the other way around.

What I found at Autodesk were several small camps of people using tools like HipChat, IRC, Microsoft Lync, and others, to try to meet their needs. However, the most interesting thing I found was 85 separate instances of Slack in the company!
Eureka! I'd stumbled onto a viral success (one enabled by Slack's ability to easily spin up "free" instances). I'd also landed squarely in what I like to call "silo-land."

All of those instances were not talking to each other--so, effectively, we'd created isolated islands of information that, while useful to those in them, couldn't transform the way we operated as an enterprise. Essentially, our existing organizational culture was recreated in digital format in these separate Slack systems. Our organization housed a mix of these small, free instances, as well as multiple paid instances, which also meant we were not taking advantage of a common billing arrangement.

My first (open source) thought was: "Hey, why aren't we using IRC, or some other open source tool, for this?" I quickly realized that didn't matter, as our open source engineers weren't the only people using Slack. People from all areas of the company--even senior leadership--were adopting Slack in droves, and, in some cases, convincing their management to pay for it!

My second (engineering) thought was: "Oh, this is simple. We just collapse all 85 of those instances into a single cohesive Slack instance." What soon became obvious was that this was the easy part of the solution. Much harder was the work of cajoling, convincing, and moving people to a single, transparent instance. Building in the "guard rails" to enable a closed source tool to provide this transparency was key. These guard rails came in the form of processes, guidelines, and community norms that were the hardest part of this transformation.

### The real work begins

As I began to slowly help users migrate to the common instance (paying for it was also a challenge, but a topic for another day), I discovered a dedicated group of power users who were helping each other in the #adsk-slack-help channel on our new common instance of Slack. These power users were, in effect, building the roots of our transparency and community through their efforts.

The open source community manager in me quickly realized these users were the path to successfully scaling Slack at Autodesk. I enlisted five of them to help me, and, together, we set about fabricating the community structure for the tool's rollout.

Here I should note the distinction between a community structure/governance model and traditional IT policies: With the exception of security and data privacy/legal policies, volunteer admins and user community members completely define and govern our Slack instance. One of the keys to our success with Slack (currently approximately 9,100 users and roughly 4,300 public channels) was how we engaged and involved our users in building these governance structures. Things like channel naming conventions and our growing list of frequently asked questions were organic and have continued in that same vein. Our community members feel like their voices are heard (even if some disagree), and that they have been a part of the success of our deployment of Slack.

We did, however, learn an important lesson about transparency and company culture along the way.

### It's not the tool
|
||||
|
||||
When we first launched our main Slack instance, we left the ability for anyone to make a channel private turned on. After about three months of usage, we saw a clear trend: More people were creating private channels (and messages) than they were public channels (the ratio was about two to one, private versus public). Since our effort to merge 85 Slack instances was intended to increase participation and transparency, we quickly adjusted our policy and turned off this feature for regular users. We instead implemented a policy of review by the admin team, with clear criteria (finance, legal, personnel discussions among the reasons) defined for private channels.
This was probably the only time in this entire process that I regretted something.
We took an amazing amount of flak for this decision because we were dealing with a corporate culture that was used to working in independent units that had minimal interaction with each other. Our defining moment of clarity (and the tipping point where things started to get better) occurred in an all-hands meeting when one of our senior executives asked me to address a question about Slack. I stood up to answer the question, and said (paraphrased from memory): "It's not about the tool. I could give you all the best, gold-plated collaboration platform in existence, but we aren't going to be successful if we don't change our approach to collaboration and learn to default to open."
I didn't think anything more about that statement--until that senior executive started using the phrase "default to open" in his slide decks, in his staff meetings, and with everyone he met. That one moment has defined what we have been trying to do with Slack: The tool isn't the sole reason we've been successful; it's the approach we've taken to building a self-sustaining community that not only wants to use this tool, but craves the ability it gives them to work easily across the enterprise.
### What we learned
I say all the time that this could have happened with other, similar tools (Hipchat, IRC, etc), but it works in this case specifically because we chose an approach of supporting a solution that the user community adopted for their needs, not strictly what the company may have chosen if the decision was coming from the top of the organizational chart. We put a lot of work into making it an acceptable solution (from the perspectives of security, legal, finance, etc.) for the company, but, ultimately, our success has come from the fact that we built this rollout (and continue to run the tool) as a community, not as a traditional corporate IT system.
The most important lesson I learned through all of this is that transparency and community are evolutionary, not revolutionary. You have to understand where your culture is, where you want it to go, and use the leverage points the community itself is adopting to make sustained and significant progress. There is a fine balance between anarchy and a thriving community, and we've tried to model our approach on the successful practices of today's thriving open source communities.
Communities are personal. Tools come and go, but keeping your community at the forefront of your push to transparency is the key to success.
This article is part of the [Open Organization Workbook project][2].
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/17/12/chat-platform-default-to-open
作者:[Guy Martin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/guyma
[1]:mailto:Open@ADSK
[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement
How Mycroft used WordPress and GitHub to improve its documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)
Image credits : Photo by Unsplash; modified by Rikki Endsley. CC BY-SA 4.0
Imagine you've just joined a new technology company, and one of the first tasks you're assigned is to improve and centralize the organization's developer-facing documentation. There's just one catch: That documentation exists in many different places, across several platforms, and differs markedly in accuracy, currency, and style.
So how did we tackle this challenge?
### Understanding the scope
As with any project, we first needed to understand the scope and bounds of the problem we were trying to solve. What documentation was good? What was working? What wasn't? How much documentation was there? What format was it in? We needed to do a **documentation audit**. Luckily, [Aneta Šteflova][1] had recently [published an article on OpenSource.com][2] about this, and it provided excellent guidance.
![mycroft doc audit][4]
Mycroft documentation audit, showing source, topic, medium, currency, quality and audience
Next, every piece of publicly facing documentation was assessed for the topic it covered, the medium it used, currency, and quality. A pattern quickly emerged: different platforms had major deficiencies, allowing us to take a data-driven approach to decommissioning our existing Jekyll-based sites. The audit also highlighted just how fragmented our documentation sources were--we had developer-facing documentation across no fewer than seven sites. Although search engines were finding this content just fine, the fragmentation made it difficult for developers and users of Mycroft--our primary audiences--to navigate to the information they needed. Again, this data helped us make the decision to centralize our documentation on one platform.
### Choosing a central platform
As an organization, we wanted to constrain the number of standalone platforms in use. Over time, maintenance and upkeep of multiple platforms and integration touchpoints becomes cumbersome for any organization, but this is exacerbated for a small startup.
One of the other business drivers in platform choice was that we had two primary but very different audiences. On one hand, we had highly technical developers who we were expecting would push documentation to its limits--and who would want to contribute to technical documentation using their tools of choice--[Git][5], [GitHub][6], and [Markdown][7]. Our second audience--end users--would primarily consume technical documentation and would want to do so in an inviting, welcoming platform that was visually appealing and provided additional features such as the ability to identify reading time and to provide feedback. The ability to capture feedback was also a key requirement from our side as without feedback on the quality of the documentation, we would not have a solid basis to undertake continuous quality improvement.
Would we be able to identify one platform that met all of these competing needs?
We realised that two platforms covered all of our needs:
* [WordPress][8]: Our existing website is built on WordPress, and we have some reasonably robust WordPress skills in-house. The flexibility of WordPress also fulfilled our requirements for functionality like reading time and the ability to capture user feedback.
* [GitHub][9]: Almost [all of Mycroft.AI's source code is available on GitHub][10], and our development team uses this platform daily.
But how could we marry the two?
![](https://opensource.com/sites/default/files/images/life-uploads/wordpress-github-sync.png)
### Integrating WordPress and GitHub with WordPress GitHub Sync
Luckily, our COO, [Nate Tomasi][11], spotted a WordPress plugin that promised to integrate the two.
This was put through its paces on our test website, and it passed with flying colors. It was easy to install, had a straightforward configuration, which just required an OAuth token and webhook with GitHub, and provided two-way integration between WordPress and GitHub.
It did, however, have a dependency--on Markdown--which proved a little harder to implement. We trialed several Markdown plugins, but each had several quirks that interfered with the rendering of non-Markdown-based content. After several days of frustration, and even an attempt to custom-write a plugin for our needs, we stumbled across [Parsedown Party][12]. There was much partying! With WordPress GitHub Sync and Parsedown Party, we had integrated our two key platforms.
Now it was time to make our content visually appealing and usable for our user audience.
### Reading time and feedback
To implement the reading time and feedback functionality, we built a new [page template for WordPress][13], and leveraged plugins within the page template.
Knowing the estimated reading time of an article in advance has been [proven to increase engagement with content][14] and provides developers and users with the ability to decide whether to read the content now or bookmark it for later. We tested several WordPress plugins for reading time, but settled on [Reading Time WP][15] because it was highly configurable and could be easily embedded into WordPress page templates. Our decision to place Reading Time at the top of the content was designed to give the user the choice of whether to read now or save for later. With Reading Time in place, we then turned our attention to gathering user feedback and ratings for our documentation.
![](https://opensource.com/sites/default/files/images/life-uploads/screenshot-from-2017-12-08-00-55-31.png)
There are several rating and feedback plugins available for WordPress. We needed one that could be easily customized for several use cases, and that could aggregate or summarize ratings. After some experimentation, we settled on [Multi Rating Pro][16] because of its wide feature set, especially the ability to create a Review Ratings page in WordPress--i.e., a central page where staff can review ratings without having to be logged in to the WordPress backend. The only gap we ran into here was the ability to set the display order of rating options--but it will likely be added in a future release.
The WordPress GitHub Integration plugin also gave us the ability to link back to the GitHub repository where the original Markdown content was held, inviting technical developers to contribute to improving our documentation.
### Updating the existing documentation
Now that the "container" for our new documentation had been developed, it was time to update the existing content. Because much of our documentation had grown organically over time, there were no style guidelines to shape how keywords and code were styled. This was tackled first, so that it could be applied to all content. [You can see our content style guidelines on GitHub.][17]
As part of the update, we also ran several checks to ensure that the content was technically accurate, augmenting the existing documentation with several images for better readability.
There were also a couple of additional tools that made creating internal links for documentation pieces easier. First, we installed the [WP Anchor Header][18] plugin. This plugin provides a small but important function: adding `id` attributes to each `<h1>`, `<h2>` (and so on) element. This meant that internal anchors could be automatically generated on the command line from the Markdown content in GitHub using the [markdown-toc][19] library, then simply copied into the WordPress content, where they would automatically link to the `id` attributes generated by WP Anchor Header.
Next, we imported the updated documentation into WordPress from GitHub, and made sure we had meaningful, easy-to-search slugs, descriptions, and keywords--because what good is excellent documentation if no one can find it?! A final activity was implementing redirects so that people hitting the old documentation would be taken to the new version.
### What next?
[Please do take a moment and have a read through our new documentation][20]. We know it isn't perfect--far from it--but we're confident that the mechanisms we've baked into our new documentation infrastructure will make it easier to identify gaps--and resolve them quickly. If you'd like to know more, or have suggestions for our documentation, please reach out to Kathy Reid on [Chat][21] (@kathy-mycroft) or via [email][22].
_Reprinted with permission from[Mycroft.ai][23]._
### About the author
Kathy Reid - Director of Developer Relations @MycroftAI, President of @linuxaustralia. Kathy Reid has expertise in open source technology management, web development, video conferencing, digital signage, technical communities and documentation. She has worked in a number of technical and leadership roles over the last 20 years, and holds Arts and Science undergraduate degrees...
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/rocking-docs-mycroft
作者:[Kathy Reid][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/kathyreid
[1]:https://opensource.com/users/aneta
[2]:https://opensource.com/article/17/10/doc-audits
[3]:/file/382466
[4]:https://opensource.com/sites/default/files/images/life-uploads/mycroft-documentation-audit.png (mycroft documentation audit)
[5]:https://git-scm.com/
[6]:https://github.com/MycroftAI
[7]:https://en.wikipedia.org/wiki/Markdown
[8]:https://www.wordpress.org/
[9]:https://github.com/
[10]:https://github.com/mycroftai
[11]:http://mycroft.ai/team/
[12]:https://wordpress.org/plugins/parsedown-party/
[13]:https://developer.wordpress.org/themes/template-files-section/page-template-files/
[14]:https://marketingland.com/estimated-reading-times-increase-engagement-79830
[15]:https://jasonyingling.me/reading-time-wp/
[16]:https://multiratingpro.com/
[17]:https://github.com/MycroftAI/docs-rewrite/blob/master/README.md
[18]:https://wordpress.org/plugins/wp-anchor-header/
[19]:https://github.com/jonschlinkert/markdown-toc
[20]:https://mycroft.ai/documentation
[21]:https://chat.mycroft.ai/
[22]:mailto:kathy.reid@mycroft.ai
[23]:https://mycroft.ai/blog/improving-mycrofts-documentation/
The open organization and inner sourcing movements can share knowledge
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gov_collaborative_risk.png?itok=we8DKHuL)
Image by : opensource.com
Red Hat is a company with roughly 11,000 employees. The IT department consists of roughly 500 members. Though it makes up just a fraction of the entire organization, the IT department is still sufficiently staffed to have many application service, infrastructure, and operational teams within it. Our purpose is "to enable Red Hatters in all functions to be effective, productive, innovative, and collaborative, so that they feel they can make a difference,"--and, more specifically, to do that by providing technologies and related services in a fashion that is as open as possible.
Being open like this takes time, attention, and effort. While we always strive to be as open as possible, it can be difficult. For a variety of reasons, we don't always succeed.
In this story, I'll explain a time when, in the rush to innovate, the Red Hat IT organization lost sight of its open ideals. But I'll also explore how returning to those ideals--and using the collaborative tactics of "inner source"--helped us to recover and greatly improve the way we deliver services.
### About inner source
Before I explain how inner source helped our team, let me offer some background on the concept.
Inner source is the adoption of open source development practices between teams within an organization to promote better and faster delivery without requiring project resources be exposed to the world or openly licensed. It allows an organization to receive many of the benefits of open source development methods within its own walls.
In this way, inner source aligns well with open organization strategies and principles; it provides a path for open, collaborative development. While the open organization defines its principles of openness broadly as transparency, inclusivity, adaptability, collaboration, and community--and covers how to use these open principles for communication, decision making, and many other topics--inner source is about the adoption of specific and tactical practices, processes, and patterns from open source communities to improve delivery.
For instance, [the Open Organization Maturity Model][1] suggests that in order to be transparent, teams should, at minimum, share all project resources with the project team (though it suggests that it's generally better to share these resources with the entire organization). The common pattern in both inner source and open source development is to host all resources in a publicly available version control system, for source control management, which achieves the open organization goal of high transparency.
Inner source aligns well with open organization strategies and principles.
Another example of value alignment appears in the way open source communities accept contributions. In open source communities, source code is transparently available. Community contributions in the form of patches or merge requests are commonly accepted practices (even expected ones). This provides one example of how to meet the open organization's goal of promoting inclusivity and collaboration.
### The challenge
Early in 2014, Red Hat IT began its first steps toward making Amazon Web Services (AWS) a standard hosting offering for business critical systems. While teams within Red Hat IT had built several systems and services in AWS by this time, these were bespoke creations, and we desired to make deploying services to IT standards in AWS both simple and standardized.
In order to make AWS cloud hosting meet our operational standards (while being scalable), the Cloud Enablement team within Red Hat IT decided that all infrastructure in AWS would be configured through code, rather than manually, and that everyone would use a standard set of tools. The Cloud Enablement team designed and built these standard tools; a separate group, the Platform Operations team, was responsible for provisioning and hosting systems and services in AWS using the tools.
The Cloud Enablement team built a toolset, obtusely named "Template Util," based on AWS Cloud Formations configurations wrapped in a management layer to enforce certain configuration requirements and make stamping out multiple copies of services across environments easier. While the Template Util toolset technically met all our initial requirements, and we eventually provisioned the infrastructure for more than a dozen services with it, engineers in every team working with the tool found using it to be painful. Michael Johnson, one engineer using the tool, said "It made doing something relatively straightforward really complicated."
Among the issues Template Util exhibited were:
* Underlying cloud formations technologies implied constraints on application stack management at odds with how we managed our application systems.
* The tooling was needlessly complex and brittle in places, using multiple layered templating technologies and languages making syntax issues hard to debug.
* The code for the tool--and some of the data users needed to manipulate the tool--were kept in a repository that was difficult for most users to access.
* There was no standard process for contributing or accepting changes.
* The documentation was poor.
As more engineers attempted to use the Template Util toolset, they found even more issues and limitations with the tools. Unhappiness continued to grow. To make matters worse, the Cloud Enablement team then shifted priorities to other deliverables without relinquishing ownership of the tool, so bug fixes and improvements to the tools were further delayed.
The real, core issues here were our inability to build an inclusive community to collaboratively build shared tooling that met everyone's needs. Fear of losing "ownership," fear of changing requirements, and fear of seeing hard work abandoned all contributed to chronic conflict, which in turn led to poorer outcomes.
### Crisis point
By September 2015, more than a year after launching our first major service in AWS with the Template Util tool, we hit a crisis point.
Many engineers refused to use the tools. That forced all of the related service provisioning work on a small set of engineers, further fracturing the community and disrupting service delivery roadmaps as these engineers struggled to deal with unexpected work. We called an emergency meeting and invited all the teams involved to find a solution.
During the emergency meeting, we found that people generally thought we needed immediate change and should start the tooling effort over, but even the decision to start over wasn't unanimous. Many solutions emerged--sometimes multiple solutions from within a single team--all of which would require significant work to implement. While we couldn't reach a consensus on which solution to use during this meeting, we did reach an agreement to give proponents of different technologies two weeks to work together, across teams, to build their case with a prototype, which the community could then review.
While we didn't reach a final and definitive decision, this agreement was the first point where we started to return to the open source ideals that guide our mission. By inviting all involved parties, we were able to be transparent and inclusive, and we could begin rebuilding our internal community. By making clear that we wanted to improve things and were open to new options, we showed our commitment to adaptability and meritocracy. Most importantly, the plan for building prototypes gave people a clear, return path to collaboration.
When the community reviewed the prototypes, it determined that the clear leader was an Ansible-based toolset that would eventually become known, internally, as Ansicloud. (At the time, no one involved with this work had any idea that Red Hat would acquire Ansible the following month. It should also be noted that other teams within Red Hat have found tools based on Cloud Formation extremely useful, even when our specific Template Util tool did not find success.)
This prototyping and testing phase didn't fix things overnight, though. While we had consensus on the general direction we needed to head, we still needed to improve the new prototype to the point at which engineers could use it reliably for production services.
So over the next several months, a handful of engineers worked to further build and extend the Ansicloud toolset. We built three new production services. While we were sharing code, that sharing activity occurred at a low level of maturity. Some engineers had trouble getting access due to older processes. Other engineers headed in slightly different directions, with each engineer having to rediscover some of the core design issues themselves.
### Returning to openness
This led to a turning point: Building on top of the previous agreement, we focused on developing a unified vision and providing easier access. To do this, we:
1. created a list of specific goals for the project (both "must-haves" and "nice-to-haves"),
2. created an open issue log for the project to avoid solving the same problem repeatedly,
3. opened our code base so anyone in Red Hat could read or clone it, and
4. made it easy for engineers to get trusted committer access
Our agreement to collaborate, our finally unified vision, and our improved tool development methods spurred the growth of our community. Ansicloud adoption spread throughout the involved organizations, but this led to a new problem: The tool started changing more quickly than users could adapt to it, and improvements that different groups submitted were beginning to affect other groups in unanticipated ways.
These issues resulted in our recent turn to inner source practices. While every open source project operates differently, we focused on adopting some best practices that seemed common to many of them. In particular:
* We identified the business owner of the project and the core-contributor group of developers who would govern the development of the tools and decide what contributions to accept. While we want to keep things open, we can't have people working against each other or breaking each other's functionality.
* We developed a project README clarifying the purpose of the tool and specifying how to use it. We also created a CONTRIBUTING document explaining how to contribute, what sort of contributions would be useful, and what sort of tests a contribution would need to pass to be accepted.
* We began building continuous integration and testing services for the Ansicloud tool itself. This helped us ensure we could quickly and efficiently validate contributions technically, before the project accepted and merged them.
With these basic agreements, documents, and tools available, we were back onto the path of open collaboration and successful inner sourcing.
### Why it matters
Why does inner source matter?
From a developer community point of view, shifting from a traditional siloed development model to the inner source model has produced significant, quantifiable improvements:
* Contributions to our tooling have grown 72% per week (by number of commits).
* The percentage of contributions from non-core committers has grown from 27% to 78%; the users of the toolset are driving its development.
* The contributor list has grown by 15%, primarily from new users of the tool set, rather than core committers, increasing our internal community.
And the tools we've delivered through this project have allowed us to see dramatic improvements in our business outcomes. Using the Ansicloud tools, 54 new multi-environment application service deployments were created in 385 days (compared to 20 services in 1,013 days with the Template Util tools). We've gone from one new service deployment in a 50-day period to one every week--a seven-fold increase in the velocity of our delivery.
What really matters here is that the improvements we saw were not aberrations. Inner source provides common, easily understood patterns that organizations can adopt to effectively promote collaboration (not to mention other open organization principles). By mirroring open source production practices, inner source can also mirror the benefits of open source code, which have been seen time and time again: higher quality code, faster development, and more engaged communities.
This article is part of the [Open Organization Workbook project][2].
### About the author
Tom Benninger - Tom Benninger is a Solutions Architect, Systems Engineer, and continual tinkerer at Red Hat, Inc. Having worked with startups, small businesses, and larger enterprises, he has experience within a broad set of IT disciplines. His current area of focus is improving Application Lifecycle Management in the enterprise. He has a particular interest in how open source, inner source, and collaboration can help support modern application development practices and the adoption of DevOps, CI/CD, Agile,...
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/1/open-orgs-and-inner-source-it
作者:[Tom Benninger][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tomben
[1]:https://opensource.com/open-organization/resources/open-org-maturity-model
[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement
in which the cost of structured data is reduced
======
Last year I got the wonderful opportunity to attend [RacketCon][1] as it was hosted only 30 minutes away from my home. The two-day conference had a number of great talks on the first day, but what really impressed me was the fact that the entire second day was spent focusing on contribution. The day started out with a few 15- to 20-minute talks about how to contribute to a specific codebase (including that of Racket itself), and after that people just split off into groups focused around specific codebases. Each table had maintainers helping guide other folks towards how to work with the codebase and construct effective patch submissions.
![lensmen chronicles][2]
I came away from the conference with a great sense of appreciation for how friendly and welcoming the Racket community is, and how great Racket is as a swiss-army-knife type tool for quick tasks. (Not that it's unsuitable for large projects, but I don't have the opportunity to start any new large projects very frequently.)
The other day I wanted to generate colored maps of the world by categorizing countries interactively, and Racket seemed like it would fit the bill nicely. The job is simple: show an image of the world with one country selected; when a key is pressed, categorize that country, then show the map again with all categorized countries colored, and continue with the next country selected.
### GUIs and XML
I have yet to see a language/framework more accessible and straightforward out of the box for drawing¹. Here's the entry point which sets up state and then constructs a canvas that handles key input and display:
```
(define (main path)
  (let ([frame (new frame% [label "World color"])]
        [categorizations (box '())]
        [doc (call-with-input-file path read-xml/document)])
    (new (class canvas%
           (define/override (on-char event)
             (handle-key this categorizations (send event get-key-code)))
           (super-new))
         [parent frame]
         [paint-callback (draw doc categorizations)])
    (send frame show #t)))
```
While the class system is not one of my favorite things about Racket (most newer code seems to avoid it in favor of [generic interfaces][3] in the rare case that polymorphism is truly called for), the fact that classes can be constructed in a light-weight, anonymous way makes it much less onerous than it could be. This code sets up all mutable state in a [`box`][4] which you use in the way you'd use a `ref` in ML or Clojure: a mutable wrapper around an immutable data structure.
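To make the `box`-as-`ref` analogy concrete, here is a minimal sketch. Note that `swap!` is used later in the post but is not part of Racket's standard library; the definition below is an assumption about what such a helper looks like:

```
;; A box is a mutable cell holding an immutable value.
(define categorizations (box '()))

;; swap! applies a function to the box's current contents,
;; like Clojure's swap! on an atom. (Assumed helper; not a
;; standard-library function.)
(define (swap! b f)
  (set-box! b (f (unbox b))))

(swap! categorizations (curry cons #\1))
(unbox categorizations) ; => '(#\1)
```

All mutation goes through `set-box!`, so the data inside stays immutable and easy to reason about.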

The world map I'm using is [an SVG of the Robinson projection][5] from Wikipedia. If you look closely, there's a call that binds `doc` using [`call-with-input-file`][6] with [`read-xml/document`][7], which loads up the whole map file's SVG; about as easily as you could ask for.

The data you get back from `read-xml/document` is in fact a [document][8] struct, which contains an `element` struct containing `attribute` structs and lists of more `element` structs. All very sensible, but maybe not what you would expect in other dynamic languages like Clojure or Lua, where free-form maps reign supreme. Racket really wants structure to be known up front when possible, which is one of the things that helps it produce helpful error messages when things go wrong.

Here's how we handle keyboard input; we're displaying a map with one country highlighted, and `key` here tells us what the user pressed to categorize the highlighted country. If that key is in the `categories` hash, then we put it into `categorizations`. (`swap!` is a small helper, defined elsewhere in the program, that replaces a box's contents with the result of applying a function to them.)

```
(define categories #hash((select . "eeeeff")
                         (#\1 . "993322")
                         (#\2 . "229911")
                         (#\3 . "ABCD31")
                         (#\4 . "91FF55")
                         (#\5 . "2439DF")))

(define (handle-key canvas categorizations key)
  (cond [(equal? #\backspace key) (swap! categorizations cdr)]
        [(member key (dict-keys categories)) (swap! categorizations (curry cons key))]
        [(equal? #\space key) (display (unbox categorizations))])
  (send canvas refresh))
```

### Nested updates: the bad parts

Finally, once we have a list of categorizations, we need to apply it to the map document and display it. We apply a [`fold`][9] reduction over the XML document struct and the list of country categorizations (plus `'select` for the country that's selected to be categorized next) to get back a "modified" document struct in which the proper elements have the style attributes applied for the given categorization; then we turn it into an image and hand it to [`draw-pict`][10]:

```
(define (update original-doc categorizations)
  (for/fold ([doc original-doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (set-style doc n (style-for category))))

(define ((draw doc categorizations) _ context)
  (let* ([newdoc (update doc categorizations)]
         [xml (call-with-output-string (curry write-xml newdoc))])
    (draw-pict (call-with-input-string xml svg-port->pict) context 0 0)))
```

The problem is in that pesky `set-style` function. All it has to do is reach deep down into the `document` struct to find the `n`th `path` element (the one associated with a given country) and change its `'style` attribute. It ought to be a simple task. Unfortunately, this function ends up being anything but simple:

```
(define (set-style doc n new-style)
  (let* ([root (document-element doc)]
         [g (list-ref (element-content root) 8)]
         [paths (element-content g)]
         [path (first (drop (filter element? paths) n))]
         [path-num (list-index (curry eq? path) paths)]
         [style-index (list-index (lambda (x) (eq? 'style (attribute-name x)))
                                  (element-attributes path))]
         [attr (list-ref (element-attributes path) style-index)]
         [new-attr (make-attribute (source-start attr)
                                   (source-stop attr)
                                   (attribute-name attr)
                                   new-style)]
         [new-path (make-element (source-start path)
                                 (source-stop path)
                                 (element-name path)
                                 (list-set (element-attributes path)
                                           style-index new-attr)
                                 (element-content path))]
         [new-g (make-element (source-start g)
                              (source-stop g)
                              (element-name g)
                              (element-attributes g)
                              (list-set paths path-num new-path))]
         [root-contents (list-set (element-content root) 8 new-g)])
    (make-document (document-prolog doc)
                   (make-element (source-start root)
                                 (source-stop root)
                                 (element-name root)
                                 (element-attributes root)
                                 root-contents)
                   (document-misc doc))))
```

The reason for this is that while structs are immutable, they don't support functional updates. Whenever you're working with immutable data structures, you want to be able to say "give me a new version of this data, but with field `x` replaced by the value of `(f (lookup x))`". Racket can [do this with dictionaries][11] but not with structs2. If you want a modified version, you have to create a fresh one3.
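
The same boilerplate appears in any language whose record types lack functional update. Here is a sketch of the problem in Python using `NamedTuple`s; the type names loosely mirror the `xml` structs, but the code is purely illustrative, not the article's:

```python
from typing import NamedTuple, Tuple


class Attribute(NamedTuple):
    name: str
    value: str


class Element(NamedTuple):
    name: str
    attributes: Tuple[Attribute, ...]


class Document(NamedTuple):
    root: Element


def set_attr(doc: Document, i: int, new_value: str) -> Document:
    # To "change" one attribute we must rebuild every enclosing layer
    # by hand: attribute, then element, then document.
    root = doc.root
    attrs = root.attributes
    new_attr = attrs[i]._replace(value=new_value)
    new_attrs = attrs[:i] + (new_attr,) + attrs[i + 1:]
    new_root = root._replace(attributes=new_attrs)
    return doc._replace(root=new_root)


doc = Document(Element("path", (Attribute("style", "fill:#000"),)))
doc2 = set_attr(doc, 0, "fill:#993322")
assert doc2.root.attributes[0].value == "fill:#993322"
assert doc.root.attributes[0].value == "fill:#000"  # original untouched
```

One level of nesting is tolerable; three or four levels gives you exactly the kind of `set-style` wall of code shown above.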

### Lenses to the rescue?

![first lensman][12]

When I brought this up in the `#racket` channel on Freenode, I was helpfully pointed to the third-party [Lens][13] library. Lenses are a general-purpose way of composing arbitrarily nested lookups and updates. Unfortunately, at this time there's [a flaw][14] preventing them from working with `xml` structs, so it seemed I was out of luck.

But then I was pointed to [X-expressions][15] as an alternative to structs. The [`xml->xexpr`][16] function turns the structs into a deeply nested list tree with symbols and strings in it. The tag is the first item in the list, followed by an associative list of attributes, then the element's children. While this gives you fewer up-front guarantees about the structure of the data, it does work around the lens issue.

For this to work, we need to compose a new lens based on the "path" we want to use to drill down into the `n`th country and its `style` attribute. The [`lens-compose`][17] function lets us do that. Note that the order here might be backwards from what you'd expect: it works deepest-first (the way [`compose`][18] works for functions). Also note that defining one lens gives us the ability both to get nested values (with [`lens-view`][19]) and to update them.

```
(define (style-lens n)
  (lens-compose (dict-ref-lens 'style)
                second-lens
                (list-ref-lens (add1 (* n 2)))
                (list-ref-lens 10)))
```

Our `<path>` XML elements are under the 10th item of the root xexpr (hence the [`list-ref-lens`][20] with 10), and they are interspersed with whitespace, so we have to double `n` to find the `<path>` we want. The [`second-lens`][21] call gets us to that element's attribute alist, and [`dict-ref-lens`][22] lets us zoom in on the `'style` key of that alist.
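
The lens idea itself is small enough to build from scratch: a lens is just a getter paired with an immutable setter, and composition nests them so updates rebuild each enclosing layer for you. Here is my own toy Python sketch of the concept; the real [Lens][13] library's API differs, and note that unlike Racket's `lens-compose`, this version takes the outermost lens first:

```python
def lens(get, put):
    """A lens is a (getter, immutable setter) pair."""
    return (get, put)

def lens_view(l, target):
    return l[0](target)

def lens_set(l, target, value):
    return l[1](target, value)

def lens_compose(outer, inner):
    # outer focuses inside the whole; inner focuses inside outer's focus.
    def get(t):
        return lens_view(inner, lens_view(outer, t))
    def put(t, v):
        return lens_set(outer, t, lens_set(inner, lens_view(outer, t), v))
    return lens(get, put)

def list_ref_lens(i):
    # Focus on element i of a tuple, rebuilding the tuple on update.
    return lens(lambda xs: xs[i],
                lambda xs, v: xs[:i] + (v,) + xs[i + 1:])


data = (("a", "b"), ("c", "d"))
second_of_first = lens_compose(list_ref_lens(0), list_ref_lens(1))
assert lens_view(second_of_first, data) == "b"
assert lens_set(second_of_first, data, "B") == (("a", "B"), ("c", "d"))
assert data == (("a", "b"), ("c", "d"))  # original untouched
```

All the `make-element`/`make-attribute` plumbing from `set-style` lives once, inside the setters, instead of being repeated at every call site.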

Once we have our lens, it's just a matter of replacing `set-style` with a call to [`lens-set`][23] in the `update` function we had above, and then we're off:

```
(define (update doc categorizations)
  (for/fold ([d doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (lens-set (style-lens n) d (list (style-for category)))))
```

![second stage lensman][24]

Often the trade-off between freeform maps/hashes and structured data feels like one of convenience vs. long-term maintainability. While it's unfortunate that lenses can't be used with the `xml` structs4, they provide a way to get the best of both worlds, at least in some situations.

The final version of the code clocks in at 51 lines and is available [on GitLab][25].

๛

--------------------------------------------------------------------------------

via: https://technomancy.us/185

作者:[Phil Hagelberg][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://technomancy.us/
[1]:https://con.racket-lang.org/
[2]:https://technomancy.us/i/chronicles-of-lensmen.jpg
[3]:https://docs.racket-lang.org/reference/struct-generics.html
[4]:https://docs.racket-lang.org/reference/boxes.html?q=box#%28def._%28%28quote._~23~25kernel%29._box%29%29
[5]:https://commons.wikimedia.org/wiki/File:BlankMap-World_gray.svg
[6]:https://docs.racket-lang.org/reference/port-lib.html#(def._((lib._racket%2Fport..rkt)._call-with-input-string))
[7]:https://docs.racket-lang.org/xml/index.html?q=read-xml#%28def._%28%28lib._xml%2Fmain..rkt%29._read-xml%2Fdocument%29%29
[8]:https://docs.racket-lang.org/xml/#%28def._%28%28lib._xml%2Fmain..rkt%29._document%29%29
[9]:https://docs.racket-lang.org/reference/for.html?q=for%2Ffold#%28form._%28%28lib._racket%2Fprivate%2Fbase..rkt%29._for%2Ffold%29%29
[10]:https://docs.racket-lang.org/pict/Rendering.html?q=draw-pict#%28def._%28%28lib._pict%2Fmain..rkt%29._draw-pict%29%29
[11]:https://docs.racket-lang.org/reference/dicts.html?q=dict-update#%28def._%28%28lib._racket%2Fdict..rkt%29._dict-update%29%29
[12]:https://technomancy.us/i/first-lensman.jpg
[13]:https://docs.racket-lang.org/lens/lens-guide.html
[14]:https://github.com/jackfirth/lens/issues/290
[15]:https://docs.racket-lang.org/pollen/second-tutorial.html?q=xexpr#%28part._.X-expressions%29
[16]:https://docs.racket-lang.org/xml/index.html?q=xexpr#%28def._%28%28lib._xml%2Fmain..rkt%29._xml-~3exexpr%29%29
[17]:https://docs.racket-lang.org/lens/lens-reference.html#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-compose%29%29
[18]:https://docs.racket-lang.org/reference/procedures.html#%28def._%28%28lib._racket%2Fprivate%2Flist..rkt%29._compose%29%29
[19]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-view%29%29
[20]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._list-ref-lens%29%29
[21]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._second-lens%29%29
[22]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Fdict..rkt%29._dict-ref-lens%29%29
[23]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-set%29%29
[24]:https://technomancy.us/i/second-stage-lensman.jpg
[25]:https://gitlab.com/technomancy/world-color/blob/master/world-color.rkt

Security Chaos Engineering: A new paradigm for cybersecurity
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_bank_vault_secure_safe.png?itok=YoW93h7C)

Security is always changing, and failure always exists.

This toxic scenario requires a fresh perspective on how we think about operational security. We must understand that we are often the primary cause of our own security flaws. The industry typically looks at cybersecurity and failure in isolation or as separate matters. We believe that our lack of insight and operational intelligence into our own security control failures is one of the most common causes of security incidents and, subsequently, data breaches.

> "Fall seven times, stand up eight." --Japanese proverb

The simple fact is that "to err is human," and humans derive their success as a direct result of the failures they encounter. Their rate of failure, how they fail, and their ability to understand that they failed in the first place are important building blocks to success. Our ability to learn through failure is inherent in the systems we build, the way we operate them, and the security we use to protect them. Yet there has been a lack of focus when it comes to how we approach preventative security measures, and the spotlight has trended toward the evolving attack landscape and the need to buy or build new solutions.

### Security spending is continually rising, and so are security incidents

We spend billions on new information security technologies; however, we rarely take a proactive look at whether those security investments perform as expected. This has resulted in a continual increase in security spending on new solutions to keep up with the evolving attacks.

Despite spending more on security, data breaches are continuously getting bigger and more frequent across all industries. We have marched so fast down this path of the "get-ahead-of-the-attacker" strategy that we haven't considered that we may be a primary cause of our own demise. How is it that we are building more and more security measures, yet the problem seems to be getting worse? Furthermore, many of the notable data breaches over the past year were not the result of advanced nation-state or spy-vs.-spy malicious advanced persistent threats (APTs); rather, the principal causes of those events were incomplete implementation, misconfiguration, design flaws, and lack of oversight.

The 2017 Ponemon Cost of a Data Breach Study breaks down the [root causes of data breaches][1] into three areas: malicious or criminal attacks, human factors or errors, and system glitches, including both IT and business-process failure. Of the three categories, malicious or criminal attacks comprise the largest share (47%), followed by human error (28%) and system glitches (25%). Cybersecurity vendors have historically focused on malicious root causes of data breaches, as that is the largest single cause, but together human error and system glitches total 53%, a larger share of the overall problem.

What is not often understood, whether due to lack of insight, reporting, or analysis, is that malicious or criminal attacks are often successful because of human error and system glitches. Both human error and system glitches are, at their root, primary markers of the existence of failure. Whether it's IT system failures, failures in process, or failures resulting from humans, this raises the question: "Should we be focusing on finding a method to identify, understand, and address our failures?" After all, it can be an arduous task to predict the next malicious attack, which often requires investing time to sift threat intelligence, dig through forensic data, or churn through threat feeds full of unknown factors and undetermined motives. Failure instrumentation, identification, and remediation, by contrast, mostly involve things that we know, have the ability to test, and can measure.

Failures we can analyze consist not only of IT, business, and general human factors but also of the way we design, build, implement, configure, operate, observe, and manage security controls. People are the ones designing, building, monitoring, and managing the security controls we put in place to defend against malicious attackers. How often do we proactively instrument what we designed, built, and are operationally managing to determine whether the controls are failing? Most organizations do not discover that their security controls were failing until a security incident results from that failure. The worst time to find out your security investment failed is during a security incident at 3 a.m.

> Security incidents are not detective measures, and hope is not a strategy when it comes to operating effective security controls.

We hypothesize that a large portion of data breaches are caused not by sophisticated nation-state actors or hacktivists, but rather by simple things rooted in human error and system glitches. Failure in security controls can arise from poor control placement, technical misconfiguration, gaps in coverage, inadequate testing practices, human error, and numerous other things.

### The journey into Security Chaos Testing

Our venture into this new territory of Security Chaos Testing has shifted our thinking about the root cause of many of our notable security incidents and data breaches.

We were brought together by [Bruce Wong][2], who now works at Stitch Fix with Charles, one of the authors of this article. Prior to Stitch Fix, Bruce was a founder of the Chaos Engineering and Site Reliability Engineering (SRE) practices at Netflix, the company commonly credited with establishing the field. Bruce learned about this article's other author, Aaron, through the open source [ChaoSlingr][3] Security Chaos Testing tool project, on which Aaron was a contributor. Aaron was interested in Bruce's perspective on applying Chaos Engineering to cybersecurity, which led Bruce to connect us so we could share what we had been working on. As security practitioners, we were both intrigued by the idea of Chaos Engineering and had each begun thinking about how this new method of instrumentation might have a role in cybersecurity.

Within a short timeframe, we began finishing each other's thoughts around testing and validating security capabilities, which we collectively call "Security Chaos Engineering." We directly challenged many of the concepts we had come to depend on in our careers, such as compensating security controls, defense in depth, and how to design preventative security. Quickly we realized that we needed to challenge the status-quo "set-it-and-forget-it" model and instead execute continuous instrumentation and validation of security capabilities.

Businesses often don't fully understand whether their security capabilities and controls are operating as expected until they are not. We had both struggled throughout our careers to provide measurements on security controls that go beyond simple uptime metrics. Our journey has shown us there is a need for a more pragmatic approach that emphasizes proactive instrumentation and experimentation over blind faith.

### Defining new terms

In the security industry, we have a habit of not explaining terms and assuming we are speaking the same language. To correct that, here are a few key terms in this new approach:

* **(Security) Chaos Experiments** are foundationally rooted in the scientific method, in that they seek not to validate what is already known to be true or false; rather, they are focused on deriving new insights about the current state.
* **Security Chaos Engineering** is the discipline of instrumentation, identification, and remediation of failure within security controls through proactive experimentation, to build confidence in the system's ability to defend against malicious conditions in production.

### Security and distributed systems

Consider the evolving nature of modern application design, where systems are becoming more and more distributed, ephemeral, and immutable in how they operate. In this shifting paradigm, it is becoming difficult to comprehend the operational state and health of our systems' security. Moreover, how are we ensuring that it remains effective and vigilant as the surrounding environment changes its parameters, components, and methodologies?

What does it mean for a security control to be effective? After all, a single security capability could easily be implemented in a wide variety of scenarios in which failure may arise from many possible sources. For example, a standard firewall technology may be implemented, placed, managed, and configured differently depending on complexities in the business, web, and data logic.

It is imperative that we not operate our business products and services on the assumption that something works. We must constantly, consistently, and proactively instrument our security controls to ensure they cut the mustard when it matters. This is why Security Chaos Testing is so important: Security Chaos Engineering provides a methodology for experimenting on the security of distributed systems in order to build confidence in their ability to withstand malicious conditions.

In Security Chaos Engineering:

* Security capabilities must be instrumented end to end.
* Security must be continuously instrumented to build confidence in the system's ability to withstand malicious conditions.
* The readiness of a system's security defenses must be proactively assessed to ensure they are battle-ready and operating as intended.
* The security capability toolchain must be instrumented from end to end to drive new insights, not only into the effectiveness of the functionality within the toolchain but also into where added value and improvement can be injected.
* Practiced instrumentation seeks to identify, detect, and remediate failures in security controls.
* The focus is on vulnerability and failure identification, not failure management.
* The operational effectiveness of incident management is sharpened.
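
To make "proactive experimentation" concrete: a security chaos experiment can be as small as deliberately injecting a known-bad condition and asserting that the detection control fires. The following Python sketch is purely hypothetical; none of these function names come from ChaoSlingr or any real tool:

```python
import random

def open_security_group(port):
    """Simulated fault injection: 'misconfigure' a firewall rule
    so the port is open to the whole internet."""
    return {"port": port, "source": "0.0.0.0/0"}

def detect_misconfiguration(rule):
    """Simulated detection control: flag world-open rules."""
    return rule["source"] == "0.0.0.0/0"

def run_experiment():
    # Hypothesis: if a port is opened to the world, detection fires.
    rule = open_security_group(port=random.choice([22, 3389, 5432]))
    return {"injected": rule, "detected": detect_misconfiguration(rule)}

result = run_experiment()
assert result["detected"], "detection control failed silently; investigate"
```

In a real experiment the injection would hit an actual (non-production-critical) resource and the assertion would query your real alerting pipeline; the point is that the control is exercised on your schedule, not an attacker's.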

As Henry Ford said, "Failure is only the opportunity to begin again, this time more intelligently." Security Chaos Engineering and Security Chaos Testing give us that opportunity.

Would you like to learn more? Join the discussion by following [@aaronrinehart][4] and [@charles_nwatu][5] on Twitter.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/new-paradigm-cybersecurity

作者:[Aaron Rinehart][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/aaronrinehart
[1]:https://www.ibm.com/security/data-breach
[2]:https://twitter.com/bruce_m_wong?lang=en
[3]:https://github.com/Optum/ChaoSlingr
[4]:https://twitter.com/aaronrinehart
[5]:https://twitter.com/charles_nwatu

How to write a really great resume that actually gets you hired
============================================================

![](https://cdn-images-1.medium.com/max/2000/1*k7HRLZAsuINP9vIs2BIh1g.png)

This is a data-driven guide to writing a resume that actually gets you hired. I've spent the past four years analyzing which resume advice works regardless of experience, role, or industry. The tactics laid out below are the result of what I've learned. They helped me land offers at Google, Microsoft, and Twitter, and have helped my students systematically land jobs at Amazon, Apple, Google, Microsoft, Facebook, and more.

### Writing Resumes Sucks.

It's a vicious cycle.

We start by sifting through dozens of articles by career "gurus," forced to compare conflicting advice and make our own decisions on what to follow.

The first article says "one page MAX" while the second says "take two or three and include all of your experience."

The next says "write a quick summary highlighting your personality and experience" while another says "summaries are a waste of space."

You scrape together your best effort and hit "Submit," sending your resume into the ether. When you don't hear back, you wonder what went wrong:

_"Was it the single page or the lack of a summary? Honestly, who gives a s**t at this point. I'm sick of sending out 10 resumes every day and hearing nothing but crickets."_

![](https://cdn-images-1.medium.com/max/1000/1*_zQqAjBhB1R4fz55InrrIw.jpeg)

How it feels to try and get your resume read in today's world.

Writing resumes sucks, but it's not your fault.

The real reason it's so tough to write a resume is that most of the advice out there hasn't been proven against the actual end goal of getting a job. If you don't know what consistently works, you can't lay out a system to get there.

It's easy to say "one page works best" when you've seen it happen a few times. But how does it hold up when we look at 100 resumes across different industries, experience levels, and job titles?

That's what this article aims to answer.

Over the past four years, I've personally applied to hundreds of companies and coached hundreds of people through the job search process. This has given me a huge opportunity to measure, analyze, and test the effectiveness of different resume strategies at scale.

This article is going to walk through everything I've learned about resumes over those four years, including:

* Mistakes that more than 95% of people make, causing their resumes to get tossed immediately
* Three things that consistently appear in the resumes of highly effective job searchers (who go on to land jobs at the world's best companies)
* A quick hack that will help you stand out from the competition and instantly build relationships with whoever is reading your resume (increasing your chances of hearing back and getting hired)
* The exact resume template that got me interviews and offers at Google, Microsoft, Twitter, Uber, and more

Before we get to the unconventional strategies that will help set you apart, we need to make sure our foundational bases are covered. That starts with understanding the mistakes most job seekers make so we can make our resume bulletproof.

### Resume Mistakes That 95% Of People Make

Most resumes that come through an online portal or across a recruiter's desk are tossed out because they violate a simple rule.

When recruiters scan a resume, the first thing they look for is mistakes. Your resume could be fantastic, but if you violate a rule like using an unprofessional email address or improper grammar, it's going to get tossed out.

Our goal is to fully understand the triggers that cause recruiters and applicant tracking systems (ATS) to make snap decisions on who stays and who goes.

In order to get inside the heads of these decision makers, I collected data from dozens of recruiters and hiring managers across industries. These people have several hundred years of combined hiring experience, and they've reviewed 100,000+ resumes across industries.

They broke down the five most common mistakes that cause them to cut resumes from the pile:

![](https://cdn-images-1.medium.com/max/1000/1*5Zbr3HFeKSjvPGZdq_LCKA.png)

### The Five Most Common Resume Mistakes (According To Recruiters & Hiring Managers)

Issue #1: Sloppiness (typos, spelling errors, & grammatical mistakes). Close to 60% of resumes have some sort of typo or grammatical issue.

Solution: Have your resume reviewed by three separate sources — spell-checking software, a friend, and a professional. Spell check should be covered if you're using Microsoft Word or Google Docs to create your resume.

A friend or family member can cover the second base, but make sure you trust them to review the whole thing. You can always include an obvious mistake to see if they catch it.

Finally, you can hire a professional editor on [Upwork][1]. It shouldn't take them more than 15–20 minutes to review, so it's worth paying a bit more for someone with high ratings and lots of hours logged.

Issue #2: Summaries that are too long and formal. Many resumes include summaries that consist of paragraphs explaining why the candidate is a "driven, results-oriented team player." When hiring managers see a block of text at the top of the resume, you can bet they aren't going to read the whole thing. If they do give it a shot and read something similar to the sentence above, they're going to give up on the spot.

Solution: Summaries are highly effective, but they should be in bullet form and showcase your most relevant experience for the role. For example, if I'm applying for a new business sales role, my first bullet might read "Responsible for driving $11M of new business in 2018, achieved 168% attainment (#1 on my team)."

Issue #3: Too many buzzwords. Remember our driven team player from the last paragraph? Phrasing like that makes hiring managers cringe because your attempt to stand out actually makes you sound like everyone else.

Solution: Instead of using buzzwords, write naturally, use bullets, and include quantitative results whenever possible. Would you rather hire a salesperson who "is responsible for driving new business across the healthcare vertical to help companies achieve their goals" or one who "drove $15M of new business last quarter, including the largest deal in company history"? Skip the buzzwords and focus on results.

Issue #4: A resume that is more than one page. The average employer spends six seconds reviewing your resume — if it's more than one page, it probably isn't going to be read. When asked, recruiters from Google and Barclays both said multi-page resumes "are the bane of their existence."

Solution: Increase your margins, decrease your font size, and cut down your experience to highlight the most relevant pieces for the role. It may seem impossible, but it's worth the effort. When you're dealing with recruiters who see hundreds of resumes every day, you want to make their lives as easy as possible.

### More Common Mistakes & Facts (Backed By Industry Research)

In addition to personal feedback, I combed through dozens of recruitment survey results to fill any gaps my contacts might have missed. Here are a few more items you may want to consider when writing your resume:

* The average interviewer spends 6 seconds scanning your resume
* The majority of interviewers have not looked at your resume until you walk into the room
* 76% of resumes are discarded for an unprofessional email address
* Resumes with a photo have an 88% rejection rate
* 58% of resumes have typos
* Applicant tracking software typically eliminates 75% of resumes due to a lack of keywords and phrases

Now that you know every mistake you need to avoid, the first item on your to-do list is to comb through your current resume and make sure it doesn't violate anything mentioned above.

Once you have a clean resume, you can start to focus on more advanced tactics that will really make you stand out. There are a few unique elements you can use to push your application over the edge and finally get your dream company to notice you.

![](https://cdn-images-1.medium.com/max/1000/1*KthhefFO33-8tm0kBEPbig.jpeg)

### The 3 Elements Of A Resume That Will Get You Hired

My analysis showed that highly effective resumes typically include three specific elements: quantitative results, a simple design, and a quirky interests section. This section breaks down all three elements and shows you how to maximize their impact.

### Quantitative Results

Most resumes lack them.

Which is a shame, because my data shows that they make the biggest difference between resumes that land interviews and resumes that end up in the trash.

Here's an example from a recent resume that was emailed to me:

> Experience

> + Identified gaps in policies and processes and made recommendations for solutions at the department and institution level

> + Streamlined processes to increase efficiency and enhance quality

> + Directly supervised three managers and indirectly managed up to 15 staff on multiple projects

> + Oversaw execution of in-house advertising strategy

> + Implemented comprehensive social media plan

As an employer, that tells me absolutely nothing about what to expect if I hire this person.

They executed an in-house marketing strategy. Did it work? How did they measure it? What was the ROI?

They also identified gaps in processes and recommended solutions. What was the result? Did they save time and operating expenses? Did it streamline a process resulting in more output?

Finally, they managed a team of three supervisors and 15 staffers. How did that team do? Was it better than the other teams at the company? What results did they get, and how did those improve under this person’s management?

See what I’m getting at here?

These types of bullets talk about daily activities, but companies don’t care about what you do every day. They care about results. By including measurable metrics and achievements in your resume, you’re showcasing the value that the employer can expect to get if they hire you.

Let’s take a look at revised versions of those same bullets:

> Experience
>
> + Managed a team of 20 that consistently outperformed other departments in lead generation, deal size, and overall satisfaction (based on our culture survey)
>
> + Executed in-house marketing strategy that resulted in a 15% increase in monthly leads along with a 5% drop in the cost per lead
>
> + Implemented targeted social media campaign across Instagram & Pinterest, which drove an additional 50,000 monthly website visits and generated 750 qualified leads in 3 months

If you were in the hiring manager’s shoes, which resume would you choose?

That’s the power of including quantitative results.

### Simple, Aesthetic Design That Hooks The Reader

These days, it’s easy to get carried away with our mission to “stand out.” I’ve seen resume overhauls from graphic designers, video resumes, and even resumes [hidden in a box of donuts.][2]

While those can work in very specific situations, we want to aim for a strategy that consistently gets results. The format I saw the most success with was a black and white Word template with sections in this order:

* Summary

* Interests

* Experience

* Education

* Volunteer Work (if you have it)

This template is effective because it’s familiar and easy for the reader to digest.

As I mentioned earlier, hiring managers scan resumes for an average of 6 seconds. If your resume is in an unfamiliar format, those 6 seconds won’t be very comfortable for the hiring manager. Our brains prefer things we can easily recognize. You want to make sure that a hiring manager can actually catch a glimpse of who you are during their quick scan of your resume.

If we’re not relying on design, this hook needs to come from the _Summary_ section at the top of your resume.

This section should be done in bullets (not paragraph form) and it should contain 3–4 highlights of the most relevant experience you have for the role. For example, if I was applying for a New Business Sales position, my summary could look like this:

> Summary
>
> Drove quarterly average of $11M in new business with a quota attainment of 128% (#1 on my team)
>
> Received award for largest sales deal of the year
>
> Developed and trained sales team on new lead generation process that increased total leads by 17% in 3 months, resulting in 4 new deals worth $7M

Those bullets speak directly to the value I can add to the company if I was hired for the role.

### An “Interests” Section That’s Quirky, Unique, & Relatable

This is a little “hack” you can use to instantly build personal connections and positive associations with whoever is reading your resume.

Most resumes have a skills/interests section, but it’s usually parked at the bottom and offers little to no value. It’s time to change things up.

[Research shows][3] that people rely on emotions, not information, to make decisions. Big brands use this principle all the time — emotional responses to advertisements are more influential on a person’s intent to buy than the content of an ad.

You probably remember Apple’s famous “Get A Mac” campaign:

When it came to specs and performance, Macs didn’t blow every single PC out of the water. But these ads solidified who was “cool” and who wasn’t, which was worth a few extra bucks to a few million people.

By tugging at our need to feel “cool,” Apple’s campaign led to a [42% increase in market share][4] and a record sales year for MacBooks.

Now we’re going to take that same tactic and apply it to your resume.

If you can invoke an emotional response from your recruiter, you can influence the mental association they assign to you. This gives you a major competitive advantage.

Let’s start with a question — what could you talk about for hours?

It could be cryptocurrency, cooking, World War 2, World of Warcraft, or how Google’s bet on segmenting their company under the Alphabet umbrella is going to impact the technology sector over the next 5 years.

Did a topic (or two) pop into your head? Great.

Now think about what it would be like to have a conversation with someone who was just as passionate and knew just as much as you did on the topic. It’d be pretty awesome, right? _Finally_, someone who gets it!

That’s exactly the kind of emotional response we’re aiming to get from a hiring manager.

There are five “neutral” topics out there that people enjoy talking about:

1. Food/Drink

2. Sports

3. College

4. Hobbies

5. Geography (travel, where people are from, etc.)

These topics are present in plenty of interest sections but we want to take them one step further.

Let’s say you had the best night of your life at the Full Moon Party in Thailand. Which of the following two options would you be more excited to read:

* Traveling

* Ko Pha Ngan beaches (where the Full Moon Party is held)

Or, let’s say that you went to Duke (an ACC school) and still follow their basketball team. Which would you be more pumped about:

* College Sports

* ACC Basketball (Go Blue Devils!)

In both cases, the second answer would probably invoke a larger emotional response because it is tied directly to your experience.

I want you to think about your interests that fit into the five categories I mentioned above.

Now I want you to write a specific favorite associated with each category in parentheses next to your original list. For example, if you wrote travel you can add (ask me about the time I was chased by an elephant in India) or (specifically meditation in a Tibetan monastery).

Here is the [exact set of interests][5] I used on my resume when I interviewed at Google, Microsoft, and Twitter:

_ABC Kitchen’s Atmosphere, Stumptown Coffee (primarily cold brew), Michael Lewis (Liar’s Poker), Fishing (especially fly), Foods That Are Vehicles For Hot Sauce, ACC Sports (Go Deacs!) & The New York Giants_

![](https://cdn-images-1.medium.com/max/1000/1*ONxtGr_xUYmz4_Xe66aeng.jpeg)

If you want to cheat here, my experience shows that anything about hot sauce is an instant conversation starter.

### The Proven Plug & Play Resume Template

Now that we have our strategies down, it’s time to apply these tactics to a real resume. Our goal is to write something that increases your chances of hearing back from companies, enhances your relationships with hiring managers, and ultimately helps you score the job offer.

The example below is the exact resume that I used to land interviews and offers at Microsoft, Google, and Twitter. I was targeting roles in Account Management and Sales, so this sample is tailored towards those positions. We’ll break down each section below:

![](https://cdn-images-1.medium.com/max/1000/1*B2RQ89ue2dGymRdwMY2lBA.png)

First, I want you to notice how clean this is. Each section is clearly labeled and separated and flows nicely from top to bottom.

My summary speaks directly to the value I’ve created in the past around company culture and its bottom line:

* I consistently exceeded expectations

* I started my own business in the space (and saw real results)

* I’m a team player who prioritizes culture

I purposefully include my Interests section right below my Summary. If my hiring manager’s six-second scan focused on the summary, I know they’ll be interested. Those bullets cover all the subconscious criteria for qualification in sales. They’re going to be curious to read more in my Experience section.

By sandwiching my Interests in the middle, I’m upping their visibility and increasing the chance of creating that personal connection.

You never know — the person reading my resume may also be a hot sauce connoisseur, and I don’t want that to be overlooked because my interests were sitting at the bottom.

Next, my Experience section aims to flesh out the points made in my Summary. I mentioned exceeding my quota up top, so I included two specific initiatives that led to that attainment, including measurable results:

* A partnership leveraging display advertising to drive users to a gamified experience. The campaign resulted in over 3,000 acquisitions and laid the groundwork for the 2nd largest deal in company history.

* A partnership with a top tier agency aimed at increasing conversions for a client by improving user experience and upgrading tracking during a company-wide website overhaul (the client has ~20 brand sites). Our efforts over 6 months resulted in a contract extension worth 316% more than their original deal.

Finally, I included my education at the very bottom, starting with the most relevant coursework.

### Download My Resume Templates For Free

You can download a copy of the resume sample above as well as a plug and play template here:

Austin’s Resume: [Click To Download][6]

Plug & Play Resume Template: [Click To Download][7]

### Bonus Tip: An Unconventional Resume “Hack” To Help You Beat Applicant Tracking Software

If you’re not already familiar, Applicant Tracking Systems are pieces of software that companies use to help “automate” the hiring process.

After you hit submit on your online application, the ATS software scans your resume looking for specific keywords and phrases (if you want more details, [this article][8] does a good job of explaining ATS).

If the language in your resume matches up, the software sees it as a good fit for the role and will pass it on to the recruiter. However, even if you’re highly qualified for the role but you don’t use the right wording, your resume can end up sitting in a black hole.

I’m going to teach you a little hack to help improve your chances of beating the system and getting your resume in the hands of a human:

Step 1: Highlight and select the entire job description page and copy it to your clipboard.

Step 2: Head over to [WordClouds.com][9] and click on the “Word List” button at the top. Towards the top of the pop-up box, you should see a link for Paste/Type Text. Go ahead and click that.

Step 3: Now paste the entire job description into the box, then hit “Apply.”

WordClouds is going to spit out an image that showcases every word in the job description. The larger words are the ones that appear most frequently (and the ones you want to make sure to include when writing your resume). Here’s an example for a data science role:

![](https://cdn-images-1.medium.com/max/1000/1*O7VO1C9nhC9LZct7vexTbA.png)

You can also get a quantitative view by clicking “Word List” again after creating your cloud. That will show you the number of times each word appeared in the job description:

9 data
6 models
4 experience
4 learning
3 Experience
3 develop
3 team
2 Qualifications
2 statistics
2 techniques
2 libraries
2 preferred
2 research
2 business

When writing your resume, your goal is to include those words in the same proportions as the job description.
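If you'd rather not use the website, the same word list can be generated with a few lines of code. This is only an illustrative sketch: the stop-word list and the sample job-description string below are placeholders I chose, not part of the original article.

```python
from collections import Counter
import re

def keyword_counts(job_description: str, min_count: int = 2):
    """Count how often each word appears in a job description,
    mirroring the WordClouds 'Word List' view."""
    words = re.findall(r"[A-Za-z]+", job_description.lower())
    # Tiny illustrative stop-word list; expand as needed.
    stopwords = {"a", "an", "and", "the", "of", "to", "in", "with", "for", "on"}
    counts = Counter(w for w in words if w not in stopwords)
    # Return (count, word) pairs, most frequent first.
    return [(n, w) for w, n in counts.most_common() if n >= min_count]

description = "Develop data models. Experience with data pipelines and data tooling."
for n, w in keyword_counts(description):
    print(n, w)  # prints "3 data"
```

Paste a full job description in place of `description`, then weave the top words into your resume in roughly the same proportions.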

It’s not a guaranteed way to beat the online application process, but it will definitely help improve your chances of getting your foot in the door!

* * *

### Want The Inside Info On Landing A Dream Job Without Connections, Without “Experience,” & Without Applying Online?

[Click here to get the 5 free strategies that my students have used to land jobs at Google, Microsoft, Amazon, and more without applying online.][10]

_Originally published at [cultivatedculture.com][11]._

--------------------------------------------------------------------------------

About the author:

I help people land jobs they love and salaries they deserve at CultivatedCulture.com

----------

via: https://medium.freecodecamp.org/how-to-write-a-really-great-resume-that-actually-gets-you-hired-e18533cd8d17

Author: [Austin Belcak][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://medium.freecodecamp.org/@austin.belcak
[1]:http://www.upwork.com/
[2]:https://www.thrillist.com/news/nation/this-guy-hides-his-resume-in-boxes-of-donuts-to-score-job-interviews
[3]:https://www.psychologytoday.com/blog/inside-the-consumer-mind/201302/how-emotions-influence-what-we-buy
[4]:https://www.businesswire.com/news/home/20070608005253/en/Apple-Mac-Named-Successful-Marketing-Campaign-2007
[5]:http://cultivatedculture.com/resume-skills-section/
[6]:https://drive.google.com/file/d/182gN6Kt1kBCo1LgMjtsGHOQW2lzATpZr/view?usp=sharing
[7]:https://drive.google.com/open?id=0B3WIcEDrxeYYdXFPVlcyQlJIbWc
[8]:https://www.jobscan.co/blog/8-things-you-need-to-know-about-applicant-tracking-systems/
[9]:https://www.wordclouds.com/
[10]:https://cultivatedculture.com/dreamjob/
[11]:https://cultivatedculture.com/write-a-resume/

UQDS: A software-development process that puts quality first
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag)

The Ultimate Quality Development System (UQDS) is a software development process that provides clear guidelines for how to use branches, tickets, and code reviews. It was invented more than a decade ago by Divmod and adopted by [Twisted][1], an event-driven framework for Python that underlies popular commercial platforms like HipChat as well as open source projects like Scrapy (a web scraper).

Divmod, sadly, is no longer around—it has gone the way of many startups. Luckily, since many of its products were open source, its legacy lives on.

When Twisted was a young project, there was no clear process for deciding when code was "good enough" to go in. As a result, while some parts were highly polished and reliable, others were alpha-quality software—with no way to tell which was which. UQDS was designed as a process to help an existing project with definite quality challenges ramp up its quality while continuing to add features and become more useful.

UQDS has helped the Twisted project evolve from having frequent regressions and needing multiple release candidates to get a working version, to achieving its current reputation for stability and reliability.

### UQDS's building blocks

UQDS was invented by Divmod back in 2006. At that time, Continuous Integration (CI) was in its infancy and modern version control systems, which allow easy branch merging, were barely proofs of concept. Although Divmod did not have today's modern tooling, it combined CI, some ad hoc tooling to make [Subversion branches][2] work, and a lot of careful thought about a working process. Thus the UQDS methodology was born.

UQDS is based upon fundamental building blocks, each with its own carefully considered best practices:

1. Tickets
2. Branches
3. Tests
4. Reviews
5. No exceptions

Let's go into each of those in a little more detail.

#### Tickets

In a project using the UQDS methodology, no change is allowed to happen if it's not accompanied by a ticket. This creates a written record of what change is needed and—more importantly—why.

* Tickets should define clear, measurable goals.
* Work on a ticket does not begin until the ticket contains goals that are clearly defined.

#### Branches

Branches in UQDS are tightly coupled with tickets. Each branch must solve one complete ticket, no more and no less. If a branch addresses either more or less than a single ticket, it means there was a problem with the ticket definition—or with the branch. Tickets might be split or merged, or a branch split and merged, until congruence is achieved.

Enforcing that each branch addresses no more and no less than a single ticket—which corresponds to one logical, measurable change—allows a project using UQDS to have fine-grained control over its commits: a single change can be reverted, or changes may even be applied in a different order than they were committed. This helps the project maintain a stable and clean codebase.

#### Tests

UQDS relies upon automated testing of all sorts, including unit, integration, regression, and static tests. In order for this to work, all relevant tests must pass at all times. Tests that don't pass must either be fixed or, if no longer relevant, be removed entirely.

Tests are also coupled with tickets. All new work must include tests that demonstrate that the ticket goals are fully met. Without this, the work won't be merged no matter how good it may seem to be.
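In practice, coupling a test to a ticket can be as simple as a regression test whose docstring names the ticket it closes. The sketch below is purely illustrative: the ticket number, the `parse_port` function, and the bug it fixes are hypothetical, not taken from Twisted.

```python
import unittest

def parse_port(value: str) -> int:
    """Hypothetical function under test: parse a TCP port number."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortRegressionTest(unittest.TestCase):
    """Regression tests for hypothetical ticket #1234:
    'parse_port silently accepts out-of-range port numbers'."""

    def test_rejects_port_above_range(self):
        # Before the fix, 70000 was accepted without complaint.
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_accepts_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)
```

A reviewer can then check the ticket's goals directly against the test names before approving the merge.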

A side effect of the focus on tests is that the only platforms a UQDS-using project can claim to support are those on which the tests run under a CI framework—and where passing the tests on that platform is a condition for merging a branch. Without this restriction on supported platforms, the quality of the project is not Ultimate.

#### Reviews

While automated tests are important to the quality ensured by UQDS, the methodology never loses sight of the human factor. Every branch commit requires code review, and each review must follow very strict rules:

1. Each commit must be reviewed by a different person than the author.
2. Start with a comment thanking the contributor for their work.
3. Make a note of something that the contributor did especially well (e.g., "that's the perfect name for that variable!").
4. Make a note of something that could be done better (e.g., "this line could use a comment explaining the choices.").
5. Finish with directions for an explicit next step, typically either merge as-is, fix and merge, or fix and submit for re-review.

These rules respect the time and effort of the contributor while also increasing the sharing of knowledge and ideas. The explicit next step gives the contributor a clear idea of how to make progress.
#### No exceptions

In any process, it's easy to come up with reasons why you might need to flex the rules just a little bit to let this thing or that thing slide through the system. The most important fundamental building block of UQDS is that there are no exceptions. The entire community works together to make sure that the rules do not flex, for any reason whatsoever.

Knowing that all code has been approved by someone other than the author, that the code has complete test coverage, that each branch corresponds to a single ticket, and that this ticket is well considered and complete brings a peace of mind that is too valuable to risk losing, even for a single small exception. The goal is quality, and quality does not come from compromise.

### A downside to UQDS

While UQDS has helped Twisted become a highly stable and reliable project, this reliability hasn't come without cost. We quickly found that the review requirements caused a slowdown and a backlog of commits to review, leading to slower development. The answer to this wasn't to compromise on quality by getting rid of UQDS; it was to refocus the community's priorities so that reviewing commits became one of the most important ways to contribute to the project.

To help with this, the community developed a bot in the [Twisted IRC channel][3] that replies to the command `review tickets` with a list of tickets that still need review. The [Twisted review queue][4] website returns a prioritized list of tickets for review. Finally, the entire community keeps close tabs on the number of tickets that need review. It has become an important metric the community uses to gauge the health of the project.

### Learn more

The best way to learn about UQDS is to [join the Twisted community][5] and see it in action. If you'd like more information about the methodology and how it might help your project reach a high level of reliability and stability, have a look at the [UQDS documentation][6] in the Twisted wiki.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/uqds

Author: [Moshe Zadka][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/moshez
[1]:https://twistedmatrix.com/trac/
[2]:http://structure.usc.edu/svn/svn.branchmerge.html
[3]:http://webchat.freenode.net/?channels=%23twisted
[4]:https://twisted.reviews
[5]:https://twistedmatrix.com/trac/wiki/TwistedCommunity
[6]:https://twistedmatrix.com/trac/wiki/UltimateQualityDevelopmentSystem

Why Mainframes Aren't Going Away Any Time Soon
======

![](http://www.datacenterknowledge.com/sites/datacenterknowledge.com/files/styles/article_featured_standard/public/ibm%20z13%20mainframe%202015%20getty.jpg?itok=uB8agshi)

IBM's last earnings report showed the [first uptick in revenue in more than five years.][1] Some of that growth was from an expected source, cloud revenue, which was up 24 percent year over year and now accounts for 21 percent of Big Blue's take. Another major boost, however, came from a spike in mainframe revenue. Z series mainframe sales were up 70 percent, the company said.

This may sound somewhat akin to a return to vacuum tube technology in a world where transistors are yesterday's news. In actuality, this is only a sign of the changing face of IT.

**Related:** [One Click and Voilà, Your Entire Data Center is Encrypted][2]

Modern mainframes definitely aren't your father's punch card-driven machines that filled entire rooms. These days, they most often run Linux and have found a renewed place in the data center, where they're being called upon to do a lot of heavy lifting. Want to know where the largest instance of Oracle's database runs? It's on a Linux mainframe. How about the largest implementation of SAP on the planet? Again, Linux on a mainframe.

"Before the advent of Linux on the mainframe, the people who bought mainframes primarily were people who already had them," Leonard Santalucia explained to Data Center Knowledge several months back at the All Things Open conference. "They would just wait for the new version to come out and upgrade to it, because it would run cheaper and faster.

**Related:** [IBM Designs a “Performance Beast” for AI][3]

"When Linux came out, it opened up the door to other customers that never would have paid attention to the mainframe. In fact, probably a good three to four hundred new clients that never had mainframes before got them. They don't have any old mainframes hanging around or ones that were upgraded. These are net new mainframes."

Although Santalucia is CTO at Vicom Infinity, primarily an IBM reseller, at the conference he was wearing his hat as chairperson of the Linux Foundation's Open Mainframe Project. He was joined in the conversation by John Mertic, the project's director of program management.

Santalucia knows IBM's mainframes from top to bottom, having spent 27 years at Big Blue, the last eight as CTO for the company's systems and technology group.

"Because of Linux getting started with it back in 1999, it opened up a lot of doors that were closed to the mainframe," he said. "Beforehand it was just z/OS, z/VM, z/VSE, z/TPF, the traditional operating systems. When Linux came along, it got the mainframe into other areas that it never was in, or was even thought to be in, because of how open it is, and because Linux on the mainframe is no different than Linux on any other platform."

The focus on Linux isn't the only motivator behind the upsurge in mainframe use in data centers. Increasingly, enterprises with heavy IT needs are finding many advantages to incorporating modern mainframes into their plans. For example, mainframes can greatly reduce power, cooling, and floor space costs. In markets like New York City, where real estate is at a premium, electricity rates are high, and electricity use is highly taxed to reduce demand, these are significant advantages.

"There was one customer where we were able to do a consolidation of 25 x86 cores to one core on a mainframe," Santalucia said. "They have several thousand machines that are ten and twenty cores each. So, as far as the eye could see in this data center, [x86 server workloads] could be picked up and moved onto this box that is about the size of a sub-zero refrigerator in your kitchen."

In addition to saving on physical data center resources, this customer by design would likely see better performance.

"When you look at the workload as it's running on an x86 system, the math, the application code, the I/O to manage the disk, and whatever else is attached to that system, is all run through the same chip," he explained. "On a Z, there are multiple chip architectures built into the system. There's one specifically just for the application code. If it senses the application needs an I/O or some mathematics, it sends it off to a separate processor to do math or I/O, all dynamically handled by the underlying firmware. Your Linux environment doesn't have to understand that. When it's running on a mainframe, it knows it's running on a mainframe and it will exploit that architecture."

The operating system knows it's running on a mainframe because, when IBM was readying its mainframe for Linux, it open sourced something like 75,000 lines of code for Linux distributions to use to make sure their OSes were ready for IBM Z.

"A lot of times people will hear there's 170 processors on the Z14," Santalucia said. "Well, there's actually another 400 processors that nobody counts in that count of application chips, because it is taken for granted."

Mainframes are also resilient when it comes to disaster recovery. Santalucia told the story of an insurance company located in lower Manhattan, within sight of the East River. The company operated a large data center in a basement that, among other things, housed a mainframe backed up to another mainframe located in upstate New York. When Hurricane Sandy hit in 2012, the data center flooded, electrocuting two employees and destroying all of the servers, including the mainframe. But the mainframe's workload was restored within 24 hours from the remote backup.

The x86 machines were all destroyed, and the data was never recovered. But why weren't they also backed up?

"The reason they didn't do this disaster recovery the same way they did with the mainframe was because it was too expensive to have a mirror of all those distributed servers someplace else," he explained. "With the mainframe, you can have another mainframe as an insurance policy that's lower in price, called Capacity BackUp, and it just sits there idling until something like this happens."

Mainframes are also evidently tough as nails. Santalucia told another story in which a data center in Japan was struck by an earthquake strong enough to destroy all of its x86 machines. The center's one mainframe fell on its side but continued to work.

The mainframe also comes with built-in redundancy to guard against situations that would be disastrous with x86 machines.

"What if a hard disk fails on a node in x86?" the Open Mainframe Project's Mertic asked. "You're taking down a chunk of that cluster potentially. With a mainframe you're not. A mainframe just keeps on kicking like nothing's ever happened."

Mertic added that a motherboard can be pulled from a running mainframe, and again, "the thing keeps on running like nothing's ever happened."

So how do you figure out if a mainframe is right for your organization? Simple, says Santalucia. Do the math.

"The approach should be to look at it from a business, technical, and financial perspective -- not just a financial, total-cost-of-acquisition perspective," he said, pointing out that often, costs associated with software, migration, networking, and people are not considered. The break-even point, he said, comes when at least 20 to 30 servers are being migrated to a mainframe. After that point the mainframe has a financial advantage.
|
||||
|
||||
"You can get a few people running the mainframe and managing hundreds or thousands of virtual servers," he added. "If you tried to do the same thing on other platforms, you'd find that you need significantly more resources to maintain an environment like that. Seven people at ADP handle the 8,000 virtual servers they have, and they need seven only in case somebody gets sick.
|
||||
|
||||
"If you had eight thousand servers on x86, even if they're virtualized, do you think you could get away with seven?"
--------------------------------------------------------------------------------

via: http://www.datacenterknowledge.com/hardware/why-mainframes-arent-going-away-any-time-soon

作者:[Christine Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.datacenterknowledge.com/archives/author/christine-hall
[1]:http://www.datacenterknowledge.com/ibm/mainframe-sales-fuel-growth-ibm
[2]:http://www.datacenterknowledge.com/design/one-click-and-voil-your-entire-data-center-encrypted
[3]:http://www.datacenterknowledge.com/design/ibm-designs-performance-beast-ai

@ -1,127 +0,0 @@

Arch Anywhere Is Dead, Long Live Anarchy Linux
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_main.jpg?itok=fyBpTjQW)

Arch Anywhere was a distribution aimed at bringing Arch Linux to the masses. Due to a trademark infringement, Arch Anywhere has been completely rebranded to [Anarchy Linux][1]. And I’m here to say, if you’re looking for a distribution that will enable you to enjoy Arch Linux, a little Anarchy will go a very long way. This distribution is seriously impressive in what it sets out to do and what it achieves. In fact, anyone who previously feared Arch Linux can set those fears aside… because Anarchy Linux makes Arch Linux easy.
Let’s face it; Arch Linux isn’t for the faint of heart. The installation alone will turn off many a new user (and even some seasoned users). That’s where distributions like Anarchy make for an easy bridge to Arch. With a live ISO that can be tested and then installed, Arch becomes as user-friendly as any other distribution.
Anarchy Linux goes a little bit further than that, however. Let’s fire it up and see what it does.

### The installation

The installation of Anarchy Linux isn’t terribly challenging, but it’s also not quite as simple as for, say, [Ubuntu][2], [Linux Mint][3], or [Elementary OS][4]. Although you can run the installer from within the default graphical desktop environment (Xfce4), it’s still much in the same vein as Arch Linux. In other words, you’re going to have to do a bit of work—all within a text-based installer.
To start, the very first step of the installer (Figure 1) requires you to update the mirror list, which will likely trip up new users.

![Updating the mirror][6]

Figure 1: Updating the mirror list is a necessity for the Anarchy Linux installation.

[Used with permission][7]

From the options, select Download & Rank New Mirrors. Tab down to OK and hit Enter on your keyboard. You can then select the nearest mirror (to your location) and be done with it. The next few installation screens are simple (keyboard layout, language, timezone, etc.). The next screen should surprise many an Arch fan. Anarchy Linux includes an auto partition tool. Select Auto Partition Drive (Figure 2), tab down to Ok, and hit Enter on your keyboard.

![partitioning][9]

Figure 2: Anarchy makes partitioning easy.

[Used with permission][7]

You will then have to select the drive to be used (if you only have one drive this is only a matter of hitting Enter). Once you’ve selected the drive, choose the filesystem type to be used (ext2/3/4, btrfs, jfs, reiserfs, xfs), tab down to OK, and hit Enter. Next you must choose whether you want to create SWAP space. If you select Yes, you’ll then have to define how much SWAP to use. The next window will stop many new users in their tracks. It asks if you want to use GPT (GUID Partition Table). This is different than the traditional MBR (Master Boot Record) partitioning. GPT is a newer standard and works better with UEFI. If you’ll be working with UEFI, go with GPT, otherwise, stick with the old standby, MBR. Finally select to write the changes to the disk, and your installation can continue.
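If you are unsure which firmware your machine boots with, a common heuristic is to check whether the kernel exposes the EFI directory under sysfs. Sketched here in Python for illustration (the same check works from any live-session shell):

```python
# Heuristic: the Linux kernel exposes /sys/firmware/efi only when the
# machine was booted through UEFI firmware.
import os

def boot_mode() -> str:
    """Return 'UEFI' or 'BIOS' depending on whether the EFI sysfs directory exists."""
    return "UEFI" if os.path.isdir("/sys/firmware/efi") else "BIOS"

mode = boot_mode()
print(mode + " boot detected: " +
      ("choose GPT" if mode == "UEFI" else "MBR is the safe choice"))
```

Run this from the live environment before the partitioning step, and the GPT-versus-MBR question answers itself.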
The next screen that could give new users pause requires the selection of the desired installation. There are five options:

* Anarchy-Desktop
* Anarchy-Desktop-LTS
* Anarchy-Server
* Anarchy-Server-LTS
* Anarchy-Advanced

If you want long-term support, select Anarchy-Desktop-LTS; otherwise, select Anarchy-Desktop (the default), tab down to Ok, and hit Enter on your keyboard. After you select the type of installation, you will get to choose your desktop from five options: Budgie, Cinnamon, GNOME, Openbox, and Xfce4.
Once you’ve selected your desktop, give the machine a hostname, set the root password, create a user, and enable sudo for the new user (if applicable). The next section that will raise the eyebrows of new users is the software selection window (Figure 3). You must go through the various sections and select which software packages to install. Don’t worry: if you miss something, you can always install it later.

![software][11]

Figure 3: Selecting the software you want on your system.

[Used with permission][7]

Once you’ve made your software selections, tab to Install (Figure 4), and hit Enter on your keyboard.

![ready to install][13]

Figure 4: Everything is ready to install.

[Used with permission][7]

Once the installation completes, reboot and enjoy Anarchy.

### Post install

I installed two versions of Anarchy—one with Budgie and one with GNOME. Both performed quite well; however, you might be surprised to see that the version of GNOME installed is decked out with a dock. In fact, comparing the desktops side by side, they do a good job of resembling one another (Figure 5).

![GNOME and Budgie][15]

Figure 5: GNOME is on the right, Budgie is on the left.

[Used with permission][7]

My guess is that you’ll find all desktop options for Anarchy configured in such a way to offer a similar look and feel. Of course, the second you click on the bottom left “buttons”, you’ll see those similarities immediately disappear (Figure 6).

![GNOME and Budgie][17]

Figure 6: The GNOME Dash and the Budgie menu are nothing alike.

[Used with permission][7]

Regardless of which desktop you select, you’ll find everything you need to install new applications. Open up your desktop menu of choice and select Packages to search for and install whatever is necessary for you to get your work done.

### Why use Arch Linux without the “Arch”?

This is a valid question. The answer is simple, but revealing. Some users may opt for a distribution like [Arch Linux][18] because they want the feeling of “elitism” that comes with using, say, [Gentoo][19], without having to go through that much hassle. With regard to complexity, Arch rests below Gentoo, which means it’s accessible to more users. However, along with that complexity comes a level of dependability that may not be found in other platforms. So if you’re looking for a Linux distribution with high stability that’s not quite as challenging as Gentoo or Arch to install, Anarchy might be exactly what you want. In the end, you’ll wind up with an outstanding desktop platform that’s easy to work with (and maintain), based on a very highly regarded distribution of Linux.
That’s why you might opt for Arch Linux without the Arch.
Anarchy Linux is one of the finest “user-friendly” takes on Arch Linux I’ve ever had the privilege of using. Without a doubt, if you’re looking for a friendlier version of a rather challenging desktop operating system, you cannot go wrong with Anarchy.
Learn more about Linux through the free ["Introduction to Linux"][20] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/2/arch-anywhere-dead-long-live-anarchy-linux

作者:[Jack Wallen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/jlwallen
[1]:https://anarchy-linux.org/
[2]:https://www.ubuntu.com/
[3]:https://linuxmint.com/
[4]:https://elementary.io/
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_1.jpg?itok=WgHRqFTf (Updating the mirror)
[7]:https://www.linux.com/licenses/category/used-permission
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_2.jpg?itok=D7HkR97t (partitioning)
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_3.jpg?itok=5-9E2u0S (software)
[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_4.jpg?itok=fuSZqtZS (ready to install)
[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_5.jpg?itok=4y9kiC8I (GNOME and Budgie)
[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_6.jpg?itok=fJ7Lmdci (GNOME and Budgie)
[18]:https://www.archlinux.org/
[19]:https://www.gentoo.org/
[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

@ -1,149 +0,0 @@

How writing can change your career for the better, even if you don't identify as a writer
======

Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed?
Early in the book, Kondo talks about keeping possessions that "spark joy." In this article, I'll examine ways writing about what we and other people are doing in the open source world can "spark joy," or at least how writing can improve your career in unexpected ways.
Because I'm a community manager and editor on Opensource.com, you might be thinking, "She just wants us to [write for Opensource.com][2]." And that is true. But everything I will tell you about why you should write is true, even if you never send a story in to Opensource.com. Writing can change your career for the better, even if you don't identify as a writer. Let me explain.

### How I started writing

Early in the first decade of my career, I transitioned from a customer service-related role at a tech publishing company into an editing role on Sys Admin Magazine. I was plugging along, happily lying low in my career, and then that all changed when I started writing about open source technologies and communities, and the people in them. But I did _not_ start writing voluntarily. The tl;dr of it is that my colleagues at Linux New Media eventually talked me into launching our first blog on the [Linux Pro Magazine][3] site. And as it turns out, it was one of the best career decisions I've ever made. I would not be working on Opensource.com today had I not started writing about what other people in open source were doing all those years ago.
When I first started writing, my goal was to raise awareness of the company I worked for and our publications, while also helping raise the visibility of women in tech. But soon after I started writing, I began seeing unexpected results.

#### My network started growing

When I wrote about a person, an organization, or a project, I got their attention. Suddenly the people I wrote about knew who I was. And because I was sharing knowledge—that is to say, I wasn't being a critic—I'd generally become an ally, and in many cases, a friend. I had a platform and an audience, and I was sharing them with other people in open source.

#### I was learning

In addition to promoting our website and magazine and growing my network, the research and fact-checking I did when writing articles helped me become more knowledgeable in my field and improve my tech chops.

#### I started meeting more people IRL

When I went to conferences, I found that my blog posts helped me meet people. I introduced myself to people I'd written about or learned about during my research, and I met new people to interview. People started knowing who I was because they'd read my articles. Sometimes people were even excited to meet me because I'd highlighted them, their projects, or someone or something they were interested in. I had no idea writing could be so exciting and interesting away from the keyboard.

#### My conference talks improved

I started speaking at events about a year after launching my blog. A few years later, I started writing articles based on my talks prior to speaking at events. The process of writing the articles helped me organize my talks and slides, and it was a great way to provide "notes" for conference attendees, while sharing the topic with a larger international audience that wasn't at the event in person.

### What should you write about?

Maybe you're interested in writing, but you struggle with what to write about. You should write about two things: what you know, and what you don't know.

#### Write about what you know

Writing about what you know can be relatively easy. For example, a script you wrote to help automate part of your daily tasks might be something you don't give any thought to, but it could make for a really exciting article for someone who hates doing that same task every day. That could be a relatively quick, short, and easy article for you to write, and you might not even think about writing it. But it could be a great contribution to the open source community.
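As an illustration, the kind of small daily-task script that could seed such an article might be only a dozen lines. This is a hypothetical example, not one from the article; it files documents into per-month folders:

```python
# Hypothetical daily-task automation: sort the files in a folder into
# YYYY-MM subdirectories based on each file's last-modified time.
import shutil
from datetime import datetime
from pathlib import Path

def organize_by_month(folder: str) -> int:
    """Move every file in `folder` into a YYYY-MM subdirectory; return count moved."""
    moved = 0
    root = Path(folder)
    for path in list(root.iterdir()):
        if path.is_file():
            stamp = datetime.fromtimestamp(path.stat().st_mtime)
            dest = root / stamp.strftime("%Y-%m")
            dest.mkdir(exist_ok=True)
            shutil.move(str(path), str(dest / path.name))
            moved += 1
    return moved
```

Something this mundane to its author can still be exactly the time-saver a reader was missing, which is the point of writing about what you know.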

#### Write about what you don't know

Writing about what you don't know can be much harder and more time consuming, but it is also much more fulfilling and better for your career. I've found that writing about what I don't know helps me learn, because I have to research it and understand it well enough to explain it.
> "When I write about a technical topic, I usually learn a lot more about it. I want to make sure my article is as good as it can be. So even if I'm writing about something I know well, I'll research the topic a bit more so I can make sure to get everything right." ~Jim Hall, FreeDOS project leader
For example, I wanted to learn about machine learning, and I thought narrowing down the topic would help me get started. My teammate Jason Baker suggested that I write an article on the [Top 3 machine learning libraries for Python][4], which gave me a focus for research.
The process of researching that article inspired another article, [3 cool machine learning projects using TensorFlow and the Raspberry Pi][5]. That article was also one of our most popular last year. I'm not an _expert_ on machine learning now, but researching the topic with writing an article in mind allowed me to give myself a crash course in the topic.

### Why people in tech write

Now let's look at a few benefits of writing that other people in tech have found. I emailed the Opensource.com writers' list and asked, and here's what writers told me.

#### Grow your network or your project community

Xavier Ho wrote for us for the first time last year ("[A programmer's cleaning guide for messy sensor data][6]"). He says: "I've been getting Twitter mentions from all over the world, including Spain, US, Australia, Indonesia, the UK, and other European countries. It shows the article is making some impact... This is the kind of reach I normally don't have. Hope it's really helping someone doing similar work!"

#### Help people

Writing about what other people are working on is a great way to help your fellow community members. Antoine Thomas, who wrote "[Linux helped me grow as a musician][7]", says, "I began to use open source years ago, by reading tutorials and documentation. That's why now I share my tips and tricks, experience or knowledge. It helped me to get started, so I feel that it's my turn to help others to get started too."

#### Give back to the community

[Jim Hall][8], who started the [FreeDOS project][9], says, "I like to write ... because I like to support the open source community by sharing something neat. I don't have time to be a program maintainer anymore, but I still like to do interesting stuff. So when something cool comes along, I like to write about it and share it."

#### Highlight your community

Emilio Velis wrote an article, "[Open hardware groups spread across the globe][10]", about projects in Central and South America. He explains, "I like writing about specific aspects of the open culture that are usually enclosed in my region (Latin America). I feel as if smaller communities and their ideas are hidden from the mainstream, so I think that creating this sense of broadness in participation is what makes some other cultures as valuable."

#### Gain confidence

[Don Watkins][11] is one of our regular writers and a [community moderator][12]. He says, "When I first started writing I thought I was an impostor, later I realized that many people feel that way. Writing and contributing to Opensource.com has been therapeutic, too, as it contributed to my self esteem and helped me to overcome feelings of inadequacy. … Writing has given me a renewed sense of purpose and empowered me to help others to write and/or see the valuable contributions that they too can make if they're willing to look at themselves in a different light. Writing has kept me younger and more open to new ideas."

#### Get feedback

One of our writers described writing as a feedback loop. He said that he started writing as a way to give back to the community, but what he found was that community responses give back to him.
Another writer, [Stuart Keroff][13] says, "Writing for Opensource.com about the program I run at school gave me valuable feedback, encouragement, and support that I would not have had otherwise. Thousands upon thousands of people heard about the Asian Penguins because of the articles I wrote for the website."

#### Exhibit expertise

Writing can help you show that you've got expertise in a subject, and having writing samples on well-known websites can help you move toward better pay at your current job, get a new role at a different organization, or start bringing in writing income.
[Jeff Macharyas][14] explains, "There are several ways I've benefitted from writing for Opensource.com. One, is the credibility I can add to my social media sites, resumes, bios, etc., just by saying 'I am a contributing writer to Opensource.com.' … I am hoping that I will be able to line up some freelance writing assignments, using my Opensource.com articles as examples, in the future."

### Where should you publish your articles?

That depends. Why are you writing?
You can always post on your personal blog, but if you don't already have a lot of readers, your article might get lost in the noise online.
Your project or company blog is a good option—again, you'll have to think about who will find it. How big is your company's reach? Or will you only get the attention of people who already give you their attention?
Are you trying to reach a new audience? A bigger audience? That's where sites like Opensource.com can help. We attract more than a million page views a month, and more than 700,000 unique visitors. Plus you'll work with editors who will polish and help promote your article.
We aren't the only site interested in your story. What are your favorite sites to read? They might want to help you share your story, and it's ok to pitch to multiple publications. Just be transparent about whether your article has been shared on other sites when working with editors. Occasionally, editors can even help you modify articles so that you can publish variations on multiple sites.

#### Do you want to get rich by writing? (Don't count on it.)

If your goal is to make money by writing, pitch your article to publications that have author budgets. There aren't many of them, the budgets don't tend to be huge, and you will be competing with experienced professional tech journalists who write seven days a week, 365 days a year, with large social media followings and networks. I'm not saying it can't be done—I've done it—but I am saying don't expect it to be easy or lucrative. It's not. (And frankly, I've found that nothing kills my desire to write much like having to write if I want to eat...)
A couple of people have asked me whether Opensource.com pays for content, or whether I'm asking someone to write "for exposure." Opensource.com does not have an author budget, but I won't tell you to write "for exposure," either. You should write because it meets a need.
If you already have a platform that meets your needs, and you don't need editing or social media and syndication help: Congratulations! You are privileged.

### Spark joy!

Most people don't know they have a story to tell, so I'm here to tell you that you probably do, and my team can help, if you just submit a proposal.
Most people—myself included—could use help from other people. Sites like Opensource.com offer one way to get editing and social media services at no cost to the writer, which can be hugely valuable to someone starting out in their career, someone who isn't a native English speaker, someone who wants help with their project or organization, and so on.
If you don't already write, I hope this article helps encourage you to get started. Or, maybe you already write. In that case, I hope this article makes you think about friends, colleagues, or people in your network who have great stories and experiences to share. I'd love to help you help them get started.
I'll conclude with feedback I got from a recent writer, [Mario Corchero][15], a Senior Software Developer at Bloomberg. He says, "I wrote for Opensource because you told me to :)" (For the record, I "invited" him to write for our [PyCon speaker series][16] last year.) He added, "And I am extremely happy about it—not only did it help me at my workplace by gaining visibility, but I absolutely loved it! The article appeared in multiple email chains about Python and was really well received, so I am now looking to publish the second :)" Then he [wrote for us][17] again.
I hope you find writing to be as fulfilling as we do.
You can connect with Opensource.com editors, community moderators, and writers in our Freenode [IRC][18] channel #opensource.com, and you can reach me and the Opensource.com team by email at [open@opensource.com][19].
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/career-changing-magic-writing

作者:[Rikki Endsley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rikki-endsley
[1]:http://tidyingup.com/books/the-life-changing-magic-of-tidying-up-hc
[2]:https://opensource.com/how-submit-article
[3]:http://linuxpromagazine.com/
[4]:https://opensource.com/article/17/2/3-top-machine-learning-libraries-python
[5]:https://opensource.com/article/17/2/machine-learning-projects-tensorflow-raspberry-pi
[6]:https://opensource.com/article/17/9/messy-sensor-data
[7]:https://opensource.com/life/16/9/my-linux-story-musician
[8]:https://opensource.com/users/jim-hall
[9]:http://www.freedos.org/
[10]:https://opensource.com/article/17/6/open-hardware-latin-america
[11]:https://opensource.com/users/don-watkins
[12]:https://opensource.com/community-moderator-program
[13]:https://opensource.com/education/15/3/asian-penguins-Linux-middle-school-club
[14]:https://opensource.com/users/jeffmacharyas
[15]:https://opensource.com/article/17/5/understanding-datetime-python-primer
[16]:https://opensource.com/tags/pycon
[17]:https://opensource.com/article/17/9/python-logging
[18]:https://opensource.com/article/16/6/getting-started-irc
[19]:mailto:open@opensource.com

@ -1,47 +0,0 @@

Why an involved user community makes for better software
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_cubestalk.png?itok=Ozw4NhGW)
Imagine releasing a major new infrastructure service based on open source software only to discover that the product you deployed had evolved so quickly that the documentation for the version you released is no longer available. At Bloomberg, we experienced this problem firsthand in our deployment of OpenStack. In late 2016, we spent six months testing and rolling out [Liberty][1] on our OpenStack environment. By that time, Liberty was about a year old, or two versions behind the latest build.
As our users started taking advantage of its new functionality, we found ourselves unable to solve a few tricky problems and to answer some detailed questions about its API. When we went looking for Liberty's documentation, it was nowhere to be found on the OpenStack website. Liberty, it turned out, had been labeled "end of life" and was no longer supported by the OpenStack developer community.
The disappearance wasn't intentional, rather the result of a development community that had not anticipated the real-world needs of users. The documentation was stored in the source branch along with the source code, and, as Liberty was superseded by newer versions, it had been deleted. Worse, in the intervening months, the documentation for the newer versions had been completely restructured, and there was no way to easily rebuild it in a useful form. And believe me, we tried.
After consulting other users and our vendor, we found that OpenStack's development cadence of two releases per year had created some unintended, yet deeply frustrating, consequences. Older releases that were typically still widely in use were being superseded and effectively killed for the purposes of support.
Eventually, conversations took place between OpenStack users and developers that resulted in changes. Documentation was moved out of the source branch, and users can now build documentation for whatever version they're using—more or less indefinitely. The problem was solved. (I'm especially indebted to my colleague [Chris Morgan][2], who was knee-deep in this effort and first wrote about it in detail for the [OpenStack Superuser blog][3].)
Many other enterprise users were in the same boat as Bloomberg—running older versions of OpenStack that are three or four versions behind the latest build. There's a good reason for that: On average it takes a reasonably large enterprise about six months to qualify, test, and deploy a new version of OpenStack. And, from my experience, this is generally true of most open source infrastructure projects.
For most of the past decade, companies like Bloomberg that adopted open source software relied on distribution vendors to incorporate, test, verify, and support much of it. These vendors provide long-term support (LTS) releases, which enable enterprise users to plan for upgrades on a two- or three-year cycle, knowing they'll still have support for a year or two, even if their deployment schedule slips a bit (as they often do). In the past few years, though, infrastructure software has advanced so rapidly that even the distribution vendors struggle to keep up. And customers of those vendors are yet another step removed, so many are choosing to deploy this type of software without vendor support.
Losing vendor support also usually means there are no LTS releases; OpenStack, Kubernetes, Prometheus, and many more do not yet provide LTS releases of their own. As a result, I'd argue that healthy interaction between the development and user communities should be high on the list of considerations for adoption of any open source infrastructure. Do the developers building the software pay attention to the needs—and frustrations—of the people who deploy it and make it useful for their enterprise?
There is a solid model for how this should happen. We recently joined the [Cloud Native Computing Foundation][4], part of The Linux Foundation. It has a formal [end-user community][5], whose members include organizations just like us: enterprises that are trying to make open source software useful to their internal customers. Corporate members also get a chance to have their voices heard as they vote to select a representative to serve on the CNCF [Technical Oversight Committee][6]. Similarly, in the OpenStack community, Bloomberg is involved in the semi-annual Operators Meetups, where companies who deploy and support OpenStack for their own users get together to discuss their challenges and provide guidance to the OpenStack developer community.
The past few years have been great for open source infrastructure. If you're working for a large enterprise, the opportunity to deploy open source projects like the ones mentioned above has made your company more productive and more agile.
As large companies like ours begin to consume more open source software to meet their infrastructure needs, they're going to be looking at a long list of considerations before deciding what to use: license compatibility, out-of-pocket costs, and the health of the development community are just a few examples. As a result of our experiences, we'll add the presence of a vibrant and engaged end-user community to the list.
Increased reliance on open source infrastructure projects has also highlighted a key problem: People in the development community have little experience deploying the software they work on into production environments or supporting the people who use it to get things done on a daily basis. The fast pace of updates to these projects has created some unexpected problems for the people who deploy and use them. There are numerous examples I can cite where open source projects are updated so frequently that new versions will, usually unintentionally, break backwards compatibility.
As open source increasingly becomes foundational to the operation of so many enterprises, this cannot be allowed to happen, and members of the user community should assert themselves accordingly and press for the creation of formal representation. In the end, the software can only be better.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/important-conversation
作者:[Kevin P. Fleming][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/kpfleming
[1]:https://releases.openstack.org/liberty/
[2]:https://www.linkedin.com/in/mihalis68/
[3]:http://superuser.openstack.org/articles/openstack-at-bloomberg/
[4]:https://www.cncf.io/
[5]:https://www.cncf.io/people/end-user-community/
[6]:https://www.cncf.io/people/technical-oversight-committee/
@ -1,79 +0,0 @@
Can anonymity and accountability coexist?
=========================================
Anonymity might be a boon to more open, meritocratic organizational cultures. But does it conflict with another important value: accountability?
![Can anonymity and accountability coexist?](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_Transparency_B.png?itok=SkP1mUt5 "Can anonymity and accountability coexist?")
Image by: opensource.com
Whistleblowing protections, crowdsourcing, anonymous voting processes, and even Glassdoor reviews—anonymous speech may take many forms in organizations.
As well-established and valued as these anonymous feedback mechanisms may be, anonymous speech becomes a paradoxical idea when one considers how to construct a more open organization. While an inability to discern speaker identity seems non-transparent, an opportunity for anonymity may actually help achieve a _more inclusive and meritocratic_ environment.
Before allowing outlets for anonymous speech to propagate, however, leaders of an organization should carefully reflect on whether the organization's "closed" practices make anonymity the unavoidable alternative to free, non-anonymous expression. Though some assurance of anonymity is necessary in a few sensitive and exceptional scenarios, dependence on anonymous feedback channels within an organization may stunt the normalization of a culture that encourages diversity and community.
### The benefits of anonymity
In the case of [_Talley v. California (1960)_](https://supreme.justia.com/cases/federal/us/362/60/case.html), the Supreme Court voided a city ordinance prohibiting the anonymous distribution of handbills, asserting that "there can be no doubt that such an identification requirement would tend to restrict freedom to distribute information and thereby freedom of expression." Our judicial system has legitimized the notion that the protection of anonymity facilitates the expression of otherwise unspoken ideas. A quick scroll through any [subreddit](https://www.reddit.com/reddits/) exemplifies what the Court has codified: anonymity can foster [risk-taking creativity](https://www.reddit.com/r/sixwordstories/) and the [inclusion and support of marginalized voices](https://www.reddit.com/r/MyLittleSupportGroup/). Anonymity empowers individuals by granting them the safety to speak without [detriment to their reputations or, more importantly, their physical selves.](https://www.psychologytoday.com/blog/the-compassion-chronicles/201711/why-dont-victims-sexual-harassment-come-forward-sooner)
For example, an anonymous suggestion program to garner ideas from members or employees in an organization may strengthen inclusivity and enhance the diversity of suggestions the organization receives. It would also make for a more meritocratic decision-making process, as anonymity would ensure that the quality of the articulated idea, rather than the rank and reputation of the articulator, is what's under evaluation. Allowing members to anonymously vote for anonymously-submitted ideas would help curb the influence of office politics in decisions affecting the organization's growth.
### The harmful consequences of anonymity
Yet anonymity and the open value of _accountability_ may come into conflict with one another. For instance, when establishing anonymous programs to drive greater diversity and more meritocratic evaluation of ideas, organizations may need to sacrifice the ability to hold speakers accountable for the opinions they express.
Reliance on anonymous speech for serious organizational decision-making may also contribute to complacency in an organizational culture that falls short of openness. Outlets for anonymous speech may resemble true openness only as much as crowdsourcing does, which is to say not much. [Like efforts to crowdsource creative ideas](https://opensource.com/business/10/4/why-open-source-way-trumps-crowdsourcing-way), anonymous suggestion programs may create an organizational environment in which diverse perspectives are only valued when an organization's leaders find it convenient to take advantage of members' ideas.
A similar concern holds for anonymous whistle-blowing or concern submission. Though anonymity is important for sexual harassment and assault reporting, regularly redirecting member concerns and frustrations to a "complaints box" makes it more difficult for members to hold their organization's leaders accountable for acting on concerns. It may also hinder intra-organizational support networks and advocacy groups from forming around shared concerns, as members would have difficulty identifying others with similar experiences. For example, many working mothers might anonymously submit requests for a lactation room in their workplace, then falsely attribute a lack of action from leaders to a lack of similar concerns from others.
### An anonymity checklist
Organizations in which anonymous speech is the primary mode of communication, like subreddits, have generated innovative works and thought-provoking discourse. These anonymous networks call attention to the potential for anonymity to help organizations pursue open values of diversity and meritocracy. Organizations in which anonymous speech is _not_ the main form of communication should acknowledge the strengths of anonymous speech, but carefully consider whether anonymity is the wisest means to the goal of sustainable openness.
Leaders may find reflecting on the following questions useful prior to establishing outlets for anonymous feedback within their organizations:
1\. _Availability of additional communication mechanisms_: Rather than investing time and resources into establishing a new, anonymous channel for communication, can the culture or structure of existing avenues of communication be reconfigured to achieve the same goal? This question echoes the open source affinity toward realigning, rather than reinventing, the wheel.
2\. _Failure of other communication avenues:_ How and why is the organization ill-equipped to handle the sensitive issue/situation at hand through conventional (i.e. non-anonymous) means of communication?
3\. _Consequences of anonymity:_ If implemented, could the anonymous mechanism stifle the normalization of face-to-face discourse about issues important to the organization's growth? If so, how can leaders ensure that members consider the anonymous communication channel a "last resort," without undermining the legitimacy of the anonymous system?
4\. _Designing the anonymous communication channel:_ How can accountability be promoted in anonymous communication without the ability to determine the identity of speakers?
5\. _Long-term considerations_: Is the anonymous feedback mechanism sustainable, or a temporary solution to a larger organizational issue? If the latter, is [launching a campaign](https://opensource.com/open-organization/16/6/8-steps-more-open-communications) to address overarching problems with the organization's communication culture feasible?
These five points build on one another to help leaders recognize the tradeoffs involved in legitimizing anonymity within their organization. Careful deliberation on these questions may help prevent outlets for anonymous speech from leading to a dangerous sense of complacency with a non-inclusive organizational structure.
About the author
----------------
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/osdc_default_avatar_1.png?itok=mmbfqFXm)](https://opensource.com/users/susiechoi)
Susie Choi - Susie is an undergraduate student studying computer science at Duke University. She is interested in the implications of technological innovation and open source principles for issues relating to education and socioeconomic inequality.
* * *
via: [https://opensource.com/open-organization/18/1/balancing-accountability-and-anonymity](https://opensource.com/open-organization/18/1/balancing-accountability-and-anonymity)
作者: [Susie Choi](https://opensource.com/users/susiechoi) 选题者: [@lujun9972](https://github.com/lujun9972) 译者: [译者ID](https://github.com/译者ID) 校对: [校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出